* [PATCH] version: 22.07-rc0
@ 2022-03-18 14:35 10% David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2022-03-18 14:35 UTC (permalink / raw)
To: dev; +Cc: thomas, Aaron Conole, Michael Santana
Start a new release cycle with empty release notes.
Bump version and ABI minor.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
.github/workflows/build.yml | 2 +-
.travis.yml | 2 +-
ABI_VERSION | 2 +-
VERSION | 2 +-
doc/guides/rel_notes/index.rst | 1 +
doc/guides/rel_notes/release_22_07.rst | 138 +++++++++++++++++++++++++
6 files changed, 143 insertions(+), 4 deletions(-)
create mode 100644 doc/guides/rel_notes/release_22_07.rst
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index d30cfd08d7..02819aa5de 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -23,7 +23,7 @@ jobs:
LIBABIGAIL_VERSION: libabigail-1.8
MINI: ${{ matrix.config.mini != '' }}
PPC64LE: ${{ matrix.config.cross == 'ppc64le' }}
- REF_GIT_TAG: v21.11
+ REF_GIT_TAG: v22.03
RUN_TESTS: ${{ contains(matrix.config.checks, 'tests') }}
strategy:
diff --git a/.travis.yml b/.travis.yml
index 0838f80d3c..5f46dccb54 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -42,7 +42,7 @@ script: ./.ci/${TRAVIS_OS_NAME}-build.sh
env:
global:
- LIBABIGAIL_VERSION=libabigail-1.8
- - REF_GIT_TAG=v21.11
+ - REF_GIT_TAG=v22.03
jobs:
include:
diff --git a/ABI_VERSION b/ABI_VERSION
index 70a91e23ec..95af471221 100644
--- a/ABI_VERSION
+++ b/ABI_VERSION
@@ -1 +1 @@
-22.1
+22.2
diff --git a/VERSION b/VERSION
index f9be6be181..3da485c34d 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-22.03.0
+22.07.0-rc0
diff --git a/doc/guides/rel_notes/index.rst b/doc/guides/rel_notes/index.rst
index 876ffd28f6..93a3f7e5da 100644
--- a/doc/guides/rel_notes/index.rst
+++ b/doc/guides/rel_notes/index.rst
@@ -8,6 +8,7 @@ Release Notes
:maxdepth: 1
:numbered:
+ release_22_07
release_22_03
release_21_11
release_21_08
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
new file mode 100644
index 0000000000..42a5f2d990
--- /dev/null
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -0,0 +1,138 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2022 The DPDK contributors
+
+.. include:: <isonum.txt>
+
+DPDK Release 22.07
+==================
+
+.. **Read this first.**
+
+ The text in the sections below explains how to update the release notes.
+
+ Use proper spelling, capitalization and punctuation in all sections.
+
+ Variable and config names should be quoted as fixed width text:
+ ``LIKE_THIS``.
+
+ Build the docs and view the output file to ensure the changes are correct::
+
+ ninja -C build doc
+ xdg-open build/doc/guides/html/rel_notes/release_22_07.html
+
+
+New Features
+------------
+
+.. This section should contain new features added in this release.
+ Sample format:
+
+ * **Add a title in the past tense with a full stop.**
+
+ Add a short 1-2 sentence description in the past tense.
+ The description should be enough to allow someone scanning
+ the release notes to understand the new feature.
+
+ If the feature adds a lot of sub-features you can use a bullet list
+ like this:
+
+ * Added feature foo to do something.
+ * Enhanced feature bar to do something else.
+
+ Refer to the previous release notes for examples.
+
+ Suggested order in release notes items:
+ * Core libs (EAL, mempool, ring, mbuf, buses)
+ * Device abstraction libs and PMDs (ordered alphabetically by vendor name)
+ - ethdev (lib, PMDs)
+ - cryptodev (lib, PMDs)
+ - eventdev (lib, PMDs)
+ - etc
+ * Other libs
+ * Apps, Examples, Tools (if significant)
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Removed Items
+-------------
+
+.. This section should contain removed items in this release. Sample format:
+
+ * Add a short 1-2 sentence description of the removed item
+ in the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+API Changes
+-----------
+
+.. This section should contain API changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the API change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+ABI Changes
+-----------
+
+.. This section should contain ABI changes. Sample format:
+
+ * sample: Add a short 1-2 sentence description of the ABI change
+ which was announced in the previous releases and made in this release.
+ Start with a scope label like "ethdev:".
+ Use fixed width quotes for ``function_names`` or ``struct_names``.
+ Use the past tense.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+* No ABI change that would break compatibility with 21.11.
+
+
+Known Issues
+------------
+
+.. This section should contain new known issues in this release. Sample format:
+
+ * **Add title in present tense with full stop.**
+
+ Add a short 1-2 sentence description of the known issue
+ in the present tense. Add information on any known workarounds.
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
+
+
+Tested Platforms
+----------------
+
+.. This section should contain a list of platforms that were tested
+ with this release.
+
+ The format is:
+
+ * <vendor> platform with <vendor> <type of devices> combinations
+
+ * List of CPU
+ * List of OS
+ * List of devices
+ * Other relevant details...
+
+ This section is a comment. Do not overwrite or remove it.
+ Also, make sure to start the actual text at the margin.
+ =======================================================
--
2.23.0
^ permalink raw reply [relevance 10%]
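For applications that track these bumps, the new number is visible both at compile time and at run time through rte_version.h. A minimal sketch, not part of the patch above (the 22.07 threshold is only an example tied to this bump):

#include <stdio.h>
#include <rte_version.h>

int main(void)
{
	/* Compile-time guard against building on a tree older than this bump. */
#if RTE_VERSION < RTE_VERSION_NUM(22, 7, 0, 0)
#error "this example assumes a DPDK 22.07 (or newer) tree"
#endif
	/* Report the version string of the DPDK actually linked in. */
	printf("built with %s\n", rte_version());
	return 0;
}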
* RE: [PATCH v1] bbdev: add new operation for FFT processing
2022-03-11 1:12 3% ` Stephen Hemminger
@ 2022-03-17 18:42 3% ` Chautru, Nicolas
0 siblings, 0 replies; 200+ results
From: Chautru, Nicolas @ 2022-03-17 18:42 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, gakhil, trix, thomas, hemant.agrawal, Zhang, Mingshan,
david.marchand
Hi Stephen,
Yes, I am deferring this patch to 22.11 due to ABI breakage that cannot be resolved using versioning in a few places.
Still, that patch can be used in anticipation of 22.11 to get early comments on the API extension. I have marked it as deferred in patchwork.
For 22.07 I have pushed the notice below to highlight the changes coming in 22.11, so that some of this can be cleaned up to be more future-proof and the API extended. I.e. there is no actual API change in 22.07.
=> https://patches.dpdk.org/project/dpdk/patch/1647542252-35727-2-git-send-email-nicolas.chautru@intel.com/
Thanks
Nic
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Thursday, March 10, 2022 5:13 PM
> To: Chautru, Nicolas <nicolas.chautru@intel.com>
> Cc: dev@dpdk.org; gakhil@marvell.com; trix@redhat.com;
> thomas@monjalon.net; hemant.agrawal@nxp.com; Zhang, Mingshan
> <mingshan.zhang@intel.com>; david.marchand@redhat.com
> Subject: Re: [PATCH v1] bbdev: add new operation for FFT processing
>
> On Thu, 10 Mar 2022 15:49:17 -0800
> Nicolas Chautru <nicolas.chautru@intel.com> wrote:
>
> > diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c index
> > aaee7b7..a72ecba 100644
> > --- a/lib/bbdev/rte_bbdev.c
> > +++ b/lib/bbdev/rte_bbdev.c
> > @@ -850,6 +850,9 @@ struct rte_bbdev *
> > case RTE_BBDEV_OP_LDPC_ENC:
> > result = sizeof(struct rte_bbdev_enc_op);
> > break;
> > + case RTE_BBDEV_OP_FFT:
> > + result = sizeof(struct rte_bbdev_fft_op);
> > + break;
> > default:
> > break;
> > }
> > @@ -873,6 +876,10 @@ struct rte_bbdev *
> > struct rte_bbdev_enc_op *op = element;
> > memset(op, 0, mempool->elt_size);
> > op->mempool = mempool;
> > + } else if (type == RTE_BBDEV_OP_FFT) {
> > + struct rte_bbdev_fft_op *op = element;
> > + memset(op, 0, mempool->elt_size);
> > + op->mempool = mempool;
> > }
> > }
> >
> > @@ -1123,6 +1130,7 @@ struct rte_mempool *
> > "RTE_BBDEV_OP_TURBO_ENC",
> > "RTE_BBDEV_OP_LDPC_DEC",
> > "RTE_BBDEV_OP_LDPC_ENC",
> > + "RTE_BBDEV_OP_FFT",
> > };
> >
> > if (op_type < RTE_BBDEV_OP_TYPE_COUNT) diff --git
> > a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h index b88c881..e9ca673
> > 100644
> > --- a/lib/bbdev/rte_bbdev.h
> > +++ b/lib/bbdev/rte_bbdev.h
> > @@ -380,6 +380,12 @@ typedef uint16_t
> (*rte_bbdev_enqueue_dec_ops_t)(
> > struct rte_bbdev_dec_op **ops,
> > uint16_t num);
> >
> > +/** @internal Enqueue fft operations for processing on queue of a
> > +device. */ typedef uint16_t (*rte_bbdev_enqueue_fft_ops_t)(
> > + struct rte_bbdev_queue_data *q_data,
> > + struct rte_bbdev_fft_op **ops,
> > + uint16_t num);
> > +
> > /** @internal Dequeue encode operations from a queue of a device. */
> > typedef uint16_t (*rte_bbdev_dequeue_enc_ops_t)(
> > struct rte_bbdev_queue_data *q_data, @@ -390,6 +396,11
> @@ typedef
> > uint16_t (*rte_bbdev_dequeue_dec_ops_t)(
> > struct rte_bbdev_queue_data *q_data,
> > struct rte_bbdev_dec_op **ops, uint16_t num);
> >
> > +/** @internal Dequeue fft operations from a queue of a device. */
> > +typedef uint16_t (*rte_bbdev_dequeue_fft_ops_t)(
> > + struct rte_bbdev_queue_data *q_data,
> > + struct rte_bbdev_fft_op **ops, uint16_t num);
> > +
> > #define RTE_BBDEV_NAME_MAX_LEN 64 /**< Max length of device name
> */
> >
> > /**
> > @@ -438,6 +449,10 @@ struct __rte_cache_aligned rte_bbdev {
> > rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
> > /** Dequeue decode function */
> > rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
> > + /** Enqueue FFT function */
> > + rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
> > + /** Dequeue FFT function */
> > + rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
> > const struct rte_bbdev_ops *dev_ops; /**< Functions exported by
> PMD */
> > struct rte_bbdev_data *data; /**< Pointer to device data */
> > enum rte_bbdev_state state; /**< If device is currently used or not
> > */
>
>
> Since rte_bbdev is exposed in rte_bbdev.h, it cannot be changed without
> breaking the ABI. It would have been better if the data structure had been
> better hidden (hint).
> But you can't change it now until 22.11.
^ permalink raw reply [relevance 3%]
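For background on the versioning limitation mentioned above: rte_function_versioning.h can keep an old and a new implementation of a symbol side by side, but it works per function and cannot hide a layout change in a structure that applications allocate or that inlined code dereferences. A minimal sketch of the mechanism with a hypothetical rte_foo (the DPDK_21/DPDK_22 nodes must also exist in the version.map; __vsym markers and build wiring are omitted):

#include <rte_function_versioning.h>

uint16_t rte_foo_v21(void); /* old behaviour, kept for old binaries */
uint16_t rte_foo_v22(void); /* new behaviour */

/* Binaries linked against DPDK_21 keep resolving rte_foo to the old code... */
VERSION_SYMBOL(rte_foo, _v21, 21);
/* ...while newly built applications bind to the new default. */
BIND_DEFAULT_SYMBOL(rte_foo, _v22, 22);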
* [PATCH v2] doc: announce changes in bbdev related to enum extension
2022-03-17 18:37 3% [PATCH v2] doc: announce changes in bbdev related to enum extension Nicolas Chautru
@ 2022-03-17 18:37 10% ` Nicolas Chautru
0 siblings, 0 replies; 200+ results
From: Nicolas Chautru @ 2022-03-17 18:37 UTC (permalink / raw)
To: dev, gakhil, thomas
Cc: trix, ray.kinsella, bruce.richardson, hemant.agrawal,
mingshan.zhang, david.marchand, stephen, Nicolas Chautru
Announce the intent to resolve in DPDK 22.11 the historical usage which
prevents graceful extension of enums and the API without troublesome
ABI breakage, as well as to extend the API with RTE_BBDEV_OP_FFT
as a new operation type in bbdev.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4e5b23c..ff161c5 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -112,6 +112,14 @@ Deprecation Notices
session and the private data of session. An opaque pointer can be exposed
directly to application which can be attached to the ``rte_crypto_op``.
+* bbdev: Will fix extending some enum breaking the ABI. Notably
+ deprecating ``RTE_BBDEV_OP_TYPE_COUNT`` terminating the ``rte_bbdev_op_type``
+ and use fixed array size when required to allow for future enum extension.
+ Will also remove some of the inlining when causing ABI future-proof concerns.
+ Will extend API to support new operation type ``RTE_BBDEV_OP_FFT`` as per this
+ RFC https://patchwork.dpdk.org/project/dpdk/list/?series=22111
+ This should be updated in DPDK 22.11.
+
* security: Hide structure ``rte_security_session`` and expose an opaque
pointer for the private data to the application which can be attached
to the packet while enqueuing.
--
1.8.3.1
^ permalink raw reply [relevance 10%]
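The "fixed array size" direction in the notice can be sketched as follows; the names and the cap of 16 are illustrative, not the committed design. Sizing public arrays with a constant chosen larger than the enum, instead of with a COUNT terminator, lets operation types be appended without changing struct layout:

/* Illustrative cap, picked once, larger than the enum is expected to grow. */
#define BBDEV_OP_TYPE_SIZE_MAX 16

enum bbdev_op_type {
	BBDEV_OP_NONE,
	BBDEV_OP_TURBO_DEC,
	BBDEV_OP_TURBO_ENC,
	BBDEV_OP_FFT, /* appending entries no longer shifts a COUNT terminator */
};

struct bbdev_driver_info {
	/* Layout stays stable as long as the enum stays under the cap. */
	unsigned int num_queues[BBDEV_OP_TYPE_SIZE_MAX];
};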
* [PATCH v2] doc: announce changes in bbdev related to enum extension
@ 2022-03-17 18:37 3% Nicolas Chautru
2022-03-17 18:37 10% ` Nicolas Chautru
0 siblings, 1 reply; 200+ results
From: Nicolas Chautru @ 2022-03-17 18:37 UTC (permalink / raw)
To: dev, gakhil, thomas
Cc: trix, ray.kinsella, bruce.richardson, hemant.agrawal,
mingshan.zhang, david.marchand, stephen, Nicolas Chautru
v2: indentation fix
While submitting the new bbdev operation in this patch
(https://patchwork.dpdk.org/project/dpdk/list/?series=22111), I realized that it is not workable in practice to extend this API in 22.07 without fundamental ABI breakage, even by using the existing versioning framework.
Some existing learnings are applied here to prevent the extension from being blocked, hence announcing changes in bbdev intended for 22.11 to make this more future-proof, including dropping the max value from the enum, as well as deferring the extension of the API for the FFT operation to DPDK 22.11.
Let me know if there are any comments, or if this should be captured differently.
Thanks
Nic
Nicolas Chautru (1):
doc: announce changes in bbdev related to enum extension
doc/guides/rel_notes/deprecation.rst | 8 ++++++++
1 file changed, 8 insertions(+)
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* DPDK 22.03 released
@ 2022-03-17 9:54 3% Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-03-17 9:54 UTC (permalink / raw)
To: announce
A new release is available:
https://fast.dpdk.org/rel/dpdk-22.03.tar.xz
Winter release numbers are quite small as usual:
956 commits from 181 authors
2289 files changed, 83849 insertions(+), 97801 deletions(-)
It is not planned to start a maintenance branch for 22.03.
This version is ABI-compatible with 21.11.
Below are some new features:
- fast restart by reusing hugepages
- UDP/TCP checksum on multi-segments
- IP reassembly offload
- queue-based priority flow control
- flow API for templates and async operations
- private ethdev driver info dump
- private user data in asymmetric crypto session
More details in the release notes:
https://doc.dpdk.org/guides/rel_notes/release_22_03.html
There are 51 new contributors (including authors, reviewers and testers).
Welcome to Abhimanyu Saini, Adham Masarwah, Asaf Ravid, Bin Zheng,
Brian Dooley, Brick Yang, Bruce Merry, Christophe Fontaine,
Chuanshe Zhang, Dawid Gorecki, Daxue Gao, Geoffrey Le Gourriérec,
Gerry Gribbon, Harold Huang, Harshad Narayane, Igor Chauskin,
Jakub Poczatek, Jeff Daly, Jie Hai, Josh Soref, Kamalakannan R,
Karl Bonde Torp, Kevin Liu, Kumara Parameshwaran, Madhuker Mythri,
Markus Theil, Martijn Bakker, Maxime Gouin, Megha Ajmera, Michael Barker,
Michal Wilczynski, Nan Zhou, Nobuhiro Miki, Padraig Connolly, Peng Yu,
Peng Zhang, Qiao Liu, Rahul Bhansali, Stephen Douthit, Tianli Lai,
Tudor Brindus, Usama Arif, Wang Yunjian, Weiguo Li, Wenxiang Qian,
Wenxuan Wu, Yajun Wu, Yiding Zhou, Yingya Han, Yu Wenjun and Yuan Wang.
Below is the number of commits per employer (with authors count):
242 Intel (53)
209 Marvell (26)
184 NVIDIA (27)
61 Huawei (7)
29 Microsoft (2)
27 Broadcom (5)
26 Red Hat (4)
24 NXP (8)
23 Semihalf (4)
21 Weiguo Li (1)
16 OKTET Labs (1)
12 Trustnet (1)
10 Arm (5)
A big thank you to all the courageous people who took on the unrewarding
task of reviewing others' work.
Based on Reviewed-by and Acked-by tags, the top non-PMD reviewers are:
41 Akhil Goyal <gakhil@marvell.com>
29 Bruce Richardson <bruce.richardson@intel.com>
26 Ferruh Yigit <ferruh.yigit@intel.com>
20 Ori Kam <orika@nvidia.com>
19 David Marchand <david.marchand@redhat.com>
16 Tyler Retzlaff <roretzla@linux.microsoft.com>
15 Viacheslav Ovsiienko <viacheslavo@nvidia.com>
15 Morten Brørup <mb@smartsharesystems.com>
15 Chenbo Xia <chenbo.xia@intel.com>
14 Stephen Hemminger <stephen@networkplumber.org>
14 Jerin Jacob <jerinj@marvell.com>
12 Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
11 Ruifeng Wang <ruifeng.wang@arm.com>
11 Maxime Coquelin <maxime.coquelin@redhat.com>
Next version will be 22.07 in July.
The new features for 22.07 can be submitted during the next 3 weeks:
http://core.dpdk.org/roadmap#dates
Please share your roadmap.
Thanks everyone
^ permalink raw reply [relevance 3%]
* [PATCH v1] doc: announce changes in bbdev related to enum extension
2022-03-17 0:11 3% [PATCH v1] doc: announce changes in bbdev related to enum extension Nicolas Chautru
@ 2022-03-17 0:11 10% ` Nicolas Chautru
0 siblings, 0 replies; 200+ results
From: Nicolas Chautru @ 2022-03-17 0:11 UTC (permalink / raw)
To: dev, gakhil, thomas
Cc: trix, ray.kinsella, bruce.richardson, hemant.agrawal,
mingshan.zhang, david.marchand, Nicolas Chautru
Announce the intent to resolve in DPDK 22.11 the historical usage which
prevents graceful extension of enums and the API without troublesome
ABI breakage, as well as to extend the API with RTE_BBDEV_OP_FFT
as a new operation type in bbdev.
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 4e5b23c..238920e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -112,6 +112,14 @@ Deprecation Notices
session and the private data of session. An opaque pointer can be exposed
directly to application which can be attached to the ``rte_crypto_op``.
+* bbdev: Will fix extending some enum breaking the ABI. Notably
+deprecating ``RTE_BBDEV_OP_TYPE_COUNT`` terminating the ``rte_bbdev_op_type``
+and use fixed array size when required to allow for future enum extension.
+Will also remove some of the inlining when causing ABI future-proof concerns.
+Will extend API to support new operation type ``RTE_BBDEV_OP_FFT`` as per this
+RFC https://patchwork.dpdk.org/project/dpdk/list/?series=22111
+This should be updated in DPDK 22.11.
+
* security: Hide structure ``rte_security_session`` and expose an opaque
pointer for the private data to the application which can be attached
to the packet while enqueuing.
--
1.8.3.1
^ permalink raw reply [relevance 10%]
* [PATCH v1] doc: announce changes in bbdev related to enum extension
@ 2022-03-17 0:11 3% Nicolas Chautru
2022-03-17 0:11 10% ` Nicolas Chautru
0 siblings, 1 reply; 200+ results
From: Nicolas Chautru @ 2022-03-17 0:11 UTC (permalink / raw)
To: dev, gakhil, thomas
Cc: trix, ray.kinsella, bruce.richardson, hemant.agrawal,
mingshan.zhang, david.marchand, Nicolas Chautru
While submitting the new bbdev operation in this patch
(https://patchwork.dpdk.org/project/dpdk/list/?series=22111), I realized that
it is not workable in practice to extend this API in 22.07 without fundamental
ABI breakage, even by using the existing versioning framework.
Some existing learnings are applied here to prevent the extension from being
blocked, hence announcing changes in bbdev intended for 22.11 to make this
more future-proof, including dropping the max value from the enum, as well as
deferring the extension of the API for the FFT operation to DPDK 22.11.
Let me know if there are any comments, or if this should be captured differently.
Thanks
Nic
Nicolas Chautru (1):
doc: announce changes in bbdev related to enum extension
doc/guides/rel_notes/deprecation.rst | 8 ++++++++
1 file changed, 8 insertions(+)
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v1] bbdev: add new operation for FFT processing
@ 2022-03-11 1:12 3% ` Stephen Hemminger
2022-03-17 18:42 3% ` Chautru, Nicolas
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2022-03-11 1:12 UTC (permalink / raw)
To: Nicolas Chautru
Cc: dev, gakhil, trix, thomas, hemant.agrawal, mingshan.zhang,
david.marchand
On Thu, 10 Mar 2022 15:49:17 -0800
Nicolas Chautru <nicolas.chautru@intel.com> wrote:
> diff --git a/lib/bbdev/rte_bbdev.c b/lib/bbdev/rte_bbdev.c
> index aaee7b7..a72ecba 100644
> --- a/lib/bbdev/rte_bbdev.c
> +++ b/lib/bbdev/rte_bbdev.c
> @@ -850,6 +850,9 @@ struct rte_bbdev *
> case RTE_BBDEV_OP_LDPC_ENC:
> result = sizeof(struct rte_bbdev_enc_op);
> break;
> + case RTE_BBDEV_OP_FFT:
> + result = sizeof(struct rte_bbdev_fft_op);
> + break;
> default:
> break;
> }
> @@ -873,6 +876,10 @@ struct rte_bbdev *
> struct rte_bbdev_enc_op *op = element;
> memset(op, 0, mempool->elt_size);
> op->mempool = mempool;
> + } else if (type == RTE_BBDEV_OP_FFT) {
> + struct rte_bbdev_fft_op *op = element;
> + memset(op, 0, mempool->elt_size);
> + op->mempool = mempool;
> }
> }
>
> @@ -1123,6 +1130,7 @@ struct rte_mempool *
> "RTE_BBDEV_OP_TURBO_ENC",
> "RTE_BBDEV_OP_LDPC_DEC",
> "RTE_BBDEV_OP_LDPC_ENC",
> + "RTE_BBDEV_OP_FFT",
> };
>
> if (op_type < RTE_BBDEV_OP_TYPE_COUNT)
> diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
> index b88c881..e9ca673 100644
> --- a/lib/bbdev/rte_bbdev.h
> +++ b/lib/bbdev/rte_bbdev.h
> @@ -380,6 +380,12 @@ typedef uint16_t (*rte_bbdev_enqueue_dec_ops_t)(
> struct rte_bbdev_dec_op **ops,
> uint16_t num);
>
> +/** @internal Enqueue fft operations for processing on queue of a device. */
> +typedef uint16_t (*rte_bbdev_enqueue_fft_ops_t)(
> + struct rte_bbdev_queue_data *q_data,
> + struct rte_bbdev_fft_op **ops,
> + uint16_t num);
> +
> /** @internal Dequeue encode operations from a queue of a device. */
> typedef uint16_t (*rte_bbdev_dequeue_enc_ops_t)(
> struct rte_bbdev_queue_data *q_data,
> @@ -390,6 +396,11 @@ typedef uint16_t (*rte_bbdev_dequeue_dec_ops_t)(
> struct rte_bbdev_queue_data *q_data,
> struct rte_bbdev_dec_op **ops, uint16_t num);
>
> +/** @internal Dequeue fft operations from a queue of a device. */
> +typedef uint16_t (*rte_bbdev_dequeue_fft_ops_t)(
> + struct rte_bbdev_queue_data *q_data,
> + struct rte_bbdev_fft_op **ops, uint16_t num);
> +
> #define RTE_BBDEV_NAME_MAX_LEN 64 /**< Max length of device name */
>
> /**
> @@ -438,6 +449,10 @@ struct __rte_cache_aligned rte_bbdev {
> rte_bbdev_dequeue_enc_ops_t dequeue_ldpc_enc_ops;
> /** Dequeue decode function */
> rte_bbdev_dequeue_dec_ops_t dequeue_ldpc_dec_ops;
> + /** Enqueue FFT function */
> + rte_bbdev_enqueue_fft_ops_t enqueue_fft_ops;
> + /** Dequeue FFT function */
> + rte_bbdev_dequeue_fft_ops_t dequeue_fft_ops;
> const struct rte_bbdev_ops *dev_ops; /**< Functions exported by PMD */
> struct rte_bbdev_data *data; /**< Pointer to device data */
> enum rte_bbdev_state state; /**< If device is currently used or not */
Since rte_bbdev is exposed in rte_bbdev.h, it cannot be changed without
breaking the ABI. It would have been better if the data structure had been better hidden (hint).
But you can't change it now until 22.11.
^ permalink raw reply [relevance 3%]
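The "better hidden" hint is the opaque-handle pattern; a minimal sketch with hypothetical foo names, not the actual bbdev code. Only a forward declaration is published, so the layout can change between releases without breaking applications:

#include <stdint.h>

/* Public header: the type exists, but its layout is not exposed. */
struct foo_dev; /* opaque to applications */
struct foo_dev *foo_dev_get(uint16_t dev_id);
int foo_dev_enqueue(struct foo_dev *dev, void *ops[], uint16_t num);

/* Private header, seen only by the library and its drivers. */
struct foo_dev {
	int (*enqueue)(void *queue, void *ops[], uint16_t num);
	void *data;
	/* New fields can be appended here without breaking the ABI. */
};

The trade-off is an extra pointer dereference on the fast path, which is the usual reason such structs end up exposed in the first place.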
* Re: [PATCH v2 2/2] build: hide local symbols in shared libraries
2022-03-09 10:58 0% ` Kevin Traynor
@ 2022-03-09 18:54 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-03-09 18:54 UTC (permalink / raw)
To: Kevin Traynor
Cc: dev, david.marchand, stable, Ray Kinsella, Parav Pandit,
Xueming Li, Elena Agostini, Ori Kam, Andrew Rybchenko,
Michael Baum
09/03/2022 11:58, Kevin Traynor:
> Hi Thomas,
>
> On 08/03/2022 14:24, Thomas Monjalon wrote:
> > The symbols which are not listed in the version script
> > are exported by default.
> > Adding a local section with a wildcard makes non-listed functions
> > and variables hidden, as it should be in all version.map files.
> >
> > These are the changes done in the shared libraries:
> > - DF .text Base auxiliary_add_device
> > - DF .text Base auxiliary_dev_exists
> > - DF .text Base auxiliary_dev_iterate
> > - DF .text Base auxiliary_insert_device
> > - DF .text Base auxiliary_is_ignored_device
> > - DF .text Base auxiliary_match
> > - DF .text Base auxiliary_on_scan
> > - DF .text Base auxiliary_scan
> > - DO .bss Base auxiliary_bus_logtype
> > - DO .data Base auxiliary_bus
> > - DO .bss Base gpu_logtype
> >
> > There is no impact on the regexdev library.
> >
> > Because these local symbols were exported as non-internal
> > in DPDK 21.11, any change in these functions would break the ABI.
> > Exception rules are added for these experimental libraries,
> > so the ABI check will skip them until the next ABI version.
> >
> > A check is added to avoid such a miss in the future.
> >
> > Fixes: 1afce3086cf4 ("bus/auxiliary: introduce auxiliary bus")
> > Fixes: 8b8036a66e3d ("gpudev: introduce GPU device class library")
> > Cc: stable@dpdk.org
> >
>
> If I take this 2/2 for 21.11.1, then I also need to backport [0] so I
> won't have errors for common_mlx5.
>
> Any problem with taking both?
>
> [0]
> commit c2e3059a10f2389b791d5d485fe71e666984c193
> Author: Michael Baum <michaelba@nvidia.com>
> Date: Fri Feb 25 01:25:06 2022 +0200
>
> common/mlx5: consider local functions as internal
I think that's fine to backport this as well.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 2/2] build: hide local symbols in shared libraries
2022-03-08 14:24 4% ` [PATCH v2 2/2] build: hide local symbols in shared libraries Thomas Monjalon
@ 2022-03-09 10:58 0% ` Kevin Traynor
2022-03-09 18:54 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Kevin Traynor @ 2022-03-09 10:58 UTC (permalink / raw)
To: Thomas Monjalon, dev
Cc: david.marchand, stable, Ray Kinsella, Parav Pandit, Xueming Li,
Elena Agostini, Ori Kam, Andrew Rybchenko, Michael Baum
Hi Thomas,
On 08/03/2022 14:24, Thomas Monjalon wrote:
> The symbols which are not listed in the version script
> are exported by default.
> Adding a local section with a wildcard makes non-listed functions
> and variables hidden, as it should be in all version.map files.
>
> These are the changes done in the shared libraries:
> - DF .text Base auxiliary_add_device
> - DF .text Base auxiliary_dev_exists
> - DF .text Base auxiliary_dev_iterate
> - DF .text Base auxiliary_insert_device
> - DF .text Base auxiliary_is_ignored_device
> - DF .text Base auxiliary_match
> - DF .text Base auxiliary_on_scan
> - DF .text Base auxiliary_scan
> - DO .bss Base auxiliary_bus_logtype
> - DO .data Base auxiliary_bus
> - DO .bss Base gpu_logtype
>
> There is no impact on the regexdev library.
>
> Because these local symbols were exported as non-internal
> in DPDK 21.11, any change in these functions would break the ABI.
> Exception rules are added for these experimental libraries,
> so the ABI check will skip them until the next ABI version.
>
> A check is added to avoid such a miss in the future.
>
> Fixes: 1afce3086cf4 ("bus/auxiliary: introduce auxiliary bus")
> Fixes: 8b8036a66e3d ("gpudev: introduce GPU device class library")
> Cc: stable@dpdk.org
>
If I take this 2/2 for 21.11.1, then I also need to backport [0] so I
won't have errors for common_mlx5.
Any problem with taking both?
[0]
commit c2e3059a10f2389b791d5d485fe71e666984c193
Author: Michael Baum <michaelba@nvidia.com>
Date: Fri Feb 25 01:25:06 2022 +0200
common/mlx5: consider local functions as internal
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
> devtools/check-symbol-maps.sh | 7 +++++++
> devtools/libabigail.abignore | 8 ++++++++
> drivers/bus/auxiliary/version.map | 2 ++
> lib/gpudev/version.map | 2 ++
> lib/regexdev/version.map | 2 ++
> 5 files changed, 21 insertions(+)
>
> diff --git a/devtools/check-symbol-maps.sh b/devtools/check-symbol-maps.sh
> index 5bd290ac97..8266fdf9ea 100755
> --- a/devtools/check-symbol-maps.sh
> +++ b/devtools/check-symbol-maps.sh
> @@ -53,4 +53,11 @@ if [ -n "$duplicate_symbols" ] ; then
> ret=1
> fi
>
> +local_miss_maps=$(grep -L 'local: \*;' $@)
> +if [ -n "$local_miss_maps" ] ; then
> + echo "Found maps without local catch-all:"
> + echo "$local_miss_maps"
> + ret=1
> +fi
> +
> exit $ret
> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> index 18c11c80c6..c618f20032 100644
> --- a/devtools/libabigail.abignore
> +++ b/devtools/libabigail.abignore
> @@ -32,3 +32,11 @@
> ; Ignore changes in common mlx5 driver, should be all internal
> [suppress_file]
> soname_regexp = ^librte_common_mlx5\.
> +
> +; Ignore visibility fix of local functions in experimental auxiliary driver
> +[suppress_file]
> + soname_regexp = ^librte_bus_auxiliary\.
> +
> +; Ignore visibility fix of local functions in experimental gpudev library
> +[suppress_file]
> + soname_regexp = ^librte_gpudev\.
> diff --git a/drivers/bus/auxiliary/version.map b/drivers/bus/auxiliary/version.map
> index a52260657c..dc993e84ff 100644
> --- a/drivers/bus/auxiliary/version.map
> +++ b/drivers/bus/auxiliary/version.map
> @@ -4,4 +4,6 @@ EXPERIMENTAL {
> # added in 21.08
> rte_auxiliary_register;
> rte_auxiliary_unregister;
> +
> + local: *;
> };
> diff --git a/lib/gpudev/version.map b/lib/gpudev/version.map
> index b23e3fd6eb..a2c8ce5759 100644
> --- a/lib/gpudev/version.map
> +++ b/lib/gpudev/version.map
> @@ -39,4 +39,6 @@ INTERNAL {
> rte_gpu_get_by_name;
> rte_gpu_notify;
> rte_gpu_release;
> +
> + local: *;
> };
> diff --git a/lib/regexdev/version.map b/lib/regexdev/version.map
> index 988b909638..3c6e9fffa1 100644
> --- a/lib/regexdev/version.map
> +++ b/lib/regexdev/version.map
> @@ -26,6 +26,8 @@ EXPERIMENTAL {
> rte_regexdev_xstats_get;
> rte_regexdev_xstats_names_get;
> rte_regexdev_xstats_reset;
> +
> + local: *;
> };
>
> INTERNAL {
^ permalink raw reply [relevance 0%]
* Re: [PATCH v1 1/2] bbdev: add device info on queue topology
@ 2022-03-09 1:28 3% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2022-03-09 1:28 UTC (permalink / raw)
To: Nicolas Chautru; +Cc: dev, gakhil, trix, hemant.agrawal, mingshan.zhang
On Tue, 8 Mar 2022 16:22:34 -0800
Nicolas Chautru <nicolas.chautru@intel.com> wrote:
> diff --git a/lib/bbdev/rte_bbdev.h b/lib/bbdev/rte_bbdev.h
> index b88c881..10c06b6 100644
> --- a/lib/bbdev/rte_bbdev.h
> +++ b/lib/bbdev/rte_bbdev.h
> @@ -274,6 +274,10 @@ struct rte_bbdev_driver_info {
>
> /** Maximum number of queues supported by the device */
> unsigned int max_num_queues;
> + /** Maximum number of queues supported per operation type */
> + unsigned int num_queues[RTE_BBDEV_OP_TYPE_COUNT];
> + /** Priority level supported per operation type */
> + unsigned int queue_priority[RTE_BBDEV_OP_TYPE_COUNT];
> /** Queue size limit (queue size must also be power of 2) */
> uint32_t queue_size_lim;
> /** Set if device off-loads operation to hardware */
This breaks the ABI of rte_bbdev_info_get.
^ permalink raw reply [relevance 3%]
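The breakage is mechanical: the caller allocates the struct and the library fills it, so growing the struct makes a new library write past the end of an object sized by an old application. A simplified, hypothetical sketch of the failure mode (not the real bbdev types):

#include <stdint.h>

/* Layout the application was compiled against. */
struct drv_info_old {
	unsigned int max_num_queues;
	uint32_t queue_size_lim;
};

/* Layout the patched library now fills. */
struct drv_info_new {
	unsigned int max_num_queues;
	unsigned int num_queues[8]; /* the added arrays grow the struct */
	unsigned int queue_priority[8];
	uint32_t queue_size_lim;
};

void lib_info_get(struct drv_info_new *out); /* library side */

void app(void)
{
	struct drv_info_old info; /* old binary allocates the old, smaller size */
	/* A new library writes sizeof(struct drv_info_new) bytes here,
	 * corrupting whatever follows info: that is the ABI break. */
	lib_info_get((struct drv_info_new *)&info);
}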
* RE: [PATCH v2 1/3] common/mlx5: add Netlink event helpers
2022-03-08 13:48 3% ` Kevin Traynor
@ 2022-03-08 15:18 0% ` Dmitry Kozlyuk
0 siblings, 0 replies; 200+ results
From: Dmitry Kozlyuk @ 2022-03-08 15:18 UTC (permalink / raw)
To: Kevin Traynor, Ferruh Yigit, dev, Luca Boccassi
Cc: stable, Slava Ovsiienko, Matan Azrad, Ray Kinsella
Hi Kevin,
> -----Original Message-----
> From: Kevin Traynor <ktraynor@redhat.com>
[...]
> The updated API is internal so that should be ok. I'm ok to take this
> on
> 21.11 as long as you can confirm it's not breaking any user
> compatibility with external sw versions/ABI/API etc from 21.11.0 ?
>
> Assuming that's ok, please send a rebased series for 21.11. I'm not
> comfortable rebasing the series with the amount of changes on the dpdk
> main branch to the same functions in mlx5_os.c.
Changes are not breaking any external SW.
Backport sent:
http://inbox.dpdk.org/stable/20220308151044.1012413-1-dkozlyuk@nvidia.com
^ permalink raw reply [relevance 0%]
* [PATCH v2 2/2] build: hide local symbols in shared libraries
2022-03-08 14:24 4% ` [PATCH v2 1/2] regexdev: fix section attribute of symbols Thomas Monjalon
@ 2022-03-08 14:24 4% ` Thomas Monjalon
2022-03-09 10:58 0% ` Kevin Traynor
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-03-08 14:24 UTC (permalink / raw)
To: dev
Cc: david.marchand, stable, Ray Kinsella, Parav Pandit, Xueming Li,
Elena Agostini, Ori Kam, Andrew Rybchenko
The symbols which are not listed in the version script
are exported by default.
Adding a local section with a wildcard makes non-listed functions
and variables hidden, as it should be in all version.map files.
These are the changes done in the shared libraries:
- DF .text Base auxiliary_add_device
- DF .text Base auxiliary_dev_exists
- DF .text Base auxiliary_dev_iterate
- DF .text Base auxiliary_insert_device
- DF .text Base auxiliary_is_ignored_device
- DF .text Base auxiliary_match
- DF .text Base auxiliary_on_scan
- DF .text Base auxiliary_scan
- DO .bss Base auxiliary_bus_logtype
- DO .data Base auxiliary_bus
- DO .bss Base gpu_logtype
There is no impact on the regexdev library.
Because these local symbols were exported as non-internal
in DPDK 21.11, any change in these functions would break the ABI.
Exception rules are added for these experimental libraries,
so the ABI check will skip them until the next ABI version.
A check is added to avoid such a miss in the future.
Fixes: 1afce3086cf4 ("bus/auxiliary: introduce auxiliary bus")
Fixes: 8b8036a66e3d ("gpudev: introduce GPU device class library")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
devtools/check-symbol-maps.sh | 7 +++++++
devtools/libabigail.abignore | 8 ++++++++
drivers/bus/auxiliary/version.map | 2 ++
lib/gpudev/version.map | 2 ++
lib/regexdev/version.map | 2 ++
5 files changed, 21 insertions(+)
diff --git a/devtools/check-symbol-maps.sh b/devtools/check-symbol-maps.sh
index 5bd290ac97..8266fdf9ea 100755
--- a/devtools/check-symbol-maps.sh
+++ b/devtools/check-symbol-maps.sh
@@ -53,4 +53,11 @@ if [ -n "$duplicate_symbols" ] ; then
ret=1
fi
+local_miss_maps=$(grep -L 'local: \*;' $@)
+if [ -n "$local_miss_maps" ] ; then
+ echo "Found maps without local catch-all:"
+ echo "$local_miss_maps"
+ ret=1
+fi
+
exit $ret
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 18c11c80c6..c618f20032 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -32,3 +32,11 @@
; Ignore changes in common mlx5 driver, should be all internal
[suppress_file]
soname_regexp = ^librte_common_mlx5\.
+
+; Ignore visibility fix of local functions in experimental auxiliary driver
+[suppress_file]
+ soname_regexp = ^librte_bus_auxiliary\.
+
+; Ignore visibility fix of local functions in experimental gpudev library
+[suppress_file]
+ soname_regexp = ^librte_gpudev\.
diff --git a/drivers/bus/auxiliary/version.map b/drivers/bus/auxiliary/version.map
index a52260657c..dc993e84ff 100644
--- a/drivers/bus/auxiliary/version.map
+++ b/drivers/bus/auxiliary/version.map
@@ -4,4 +4,6 @@ EXPERIMENTAL {
# added in 21.08
rte_auxiliary_register;
rte_auxiliary_unregister;
+
+ local: *;
};
diff --git a/lib/gpudev/version.map b/lib/gpudev/version.map
index b23e3fd6eb..a2c8ce5759 100644
--- a/lib/gpudev/version.map
+++ b/lib/gpudev/version.map
@@ -39,4 +39,6 @@ INTERNAL {
rte_gpu_get_by_name;
rte_gpu_notify;
rte_gpu_release;
+
+ local: *;
};
diff --git a/lib/regexdev/version.map b/lib/regexdev/version.map
index 988b909638..3c6e9fffa1 100644
--- a/lib/regexdev/version.map
+++ b/lib/regexdev/version.map
@@ -26,6 +26,8 @@ EXPERIMENTAL {
rte_regexdev_xstats_get;
rte_regexdev_xstats_names_get;
rte_regexdev_xstats_reset;
+
+ local: *;
};
INTERNAL {
--
2.34.1
^ permalink raw reply [relevance 4%]
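For readers less familiar with linker version scripts: any extern symbol not named in the map is exported by default, and the local catch-all is what flips that default to hidden. A minimal sketch of the intended shape (the symbol name is hypothetical):

EXPERIMENTAL {
	global:

	rte_thing_do;  # explicitly exported

	local: *;      # every other symbol becomes hidden
};

The "DF .text Base" lines quoted in the commit message are objdump -T output; after this fix those driver-internal symbols simply no longer appear in the dynamic symbol table.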
* [PATCH v2 1/2] regexdev: fix section attribute of symbols
@ 2022-03-08 14:24 4% ` Thomas Monjalon
2022-03-08 14:24 4% ` [PATCH v2 2/2] build: hide local symbols in shared libraries Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-03-08 14:24 UTC (permalink / raw)
To: dev
Cc: david.marchand, stable, Ori Kam, Ray Kinsella, Pavan Nikhilesh,
Jerin Jacob
The functions used by the drivers must be internal,
while the function and variables used in inline functions
must be experimental.
These are the changes done in the shared library:
- DF .text Base rte_regexdev_get_device_by_name
+ DF .text INTERNAL rte_regexdev_get_device_by_name
- DF .text Base rte_regexdev_register
+ DF .text INTERNAL rte_regexdev_register
- DF .text Base rte_regexdev_unregister
+ DF .text INTERNAL rte_regexdev_unregister
- DF .text Base rte_regexdev_is_valid_dev
+ DF .text EXPERIMENTAL rte_regexdev_is_valid_dev
- DO .bss Base rte_regex_devices
+ DO .bss EXPERIMENTAL rte_regex_devices
- DO .bss Base rte_regexdev_logtype
+ DO .bss EXPERIMENTAL rte_regexdev_logtype
Because these symbols were exported in the default section in DPDK 21.11,
any change in these functions would be seen as incompatible
by the ABI compatibility check.
An exception rule is added for this experimental library,
so the ABI check will skip it until the next ABI version.
Fixes: bab9497ef78b ("regexdev: introduce API")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ori Kam <orika@nvidia.com>
---
devtools/libabigail.abignore | 4 ++++
lib/regexdev/rte_regexdev.h | 4 ++++
lib/regexdev/rte_regexdev_driver.h | 3 +++
lib/regexdev/version.map | 9 +++++++++
4 files changed, 20 insertions(+)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 9c921c47d4..18c11c80c6 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -25,6 +25,10 @@
[suppress_type]
name = rte_crypto_asym_op
+; Ignore section attribute fixes in experimental regexdev library
+[suppress_file]
+ soname_regexp = ^librte_regexdev\.
+
; Ignore changes in common mlx5 driver, should be all internal
[suppress_file]
soname_regexp = ^librte_common_mlx5\.
diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h
index 4ba67b0c25..3bce8090f6 100644
--- a/lib/regexdev/rte_regexdev.h
+++ b/lib/regexdev/rte_regexdev.h
@@ -225,6 +225,9 @@ extern int rte_regexdev_logtype;
} while (0)
/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
* Check if dev_id is ready.
*
* @param dev_id
@@ -234,6 +237,7 @@ extern int rte_regexdev_logtype;
* - 0 if device state is not in ready state.
* - 1 if device state is ready state.
*/
+__rte_experimental
int rte_regexdev_is_valid_dev(uint16_t dev_id);
/**
diff --git a/lib/regexdev/rte_regexdev_driver.h b/lib/regexdev/rte_regexdev_driver.h
index 64742016c0..6246b144a6 100644
--- a/lib/regexdev/rte_regexdev_driver.h
+++ b/lib/regexdev/rte_regexdev_driver.h
@@ -32,6 +32,7 @@ extern "C" {
* A pointer to the RegEx device slot case of success,
* NULL otherwise.
*/
+__rte_internal
struct rte_regexdev *rte_regexdev_register(const char *name);
/**
@@ -41,6 +42,7 @@ struct rte_regexdev *rte_regexdev_register(const char *name);
* @param dev
* Device to be released.
*/
+__rte_internal
void rte_regexdev_unregister(struct rte_regexdev *dev);
/**
@@ -50,6 +52,7 @@ void rte_regexdev_unregister(struct rte_regexdev *dev);
* @param name
* The device name.
*/
+__rte_internal
struct rte_regexdev *rte_regexdev_get_device_by_name(const char *name);
#ifdef __cplusplus
diff --git a/lib/regexdev/version.map b/lib/regexdev/version.map
index 8db9b17018..988b909638 100644
--- a/lib/regexdev/version.map
+++ b/lib/regexdev/version.map
@@ -1,6 +1,7 @@
EXPERIMENTAL {
global:
+ rte_regex_devices;
rte_regexdev_attr_get;
rte_regexdev_attr_set;
rte_regexdev_close;
@@ -11,6 +12,8 @@ EXPERIMENTAL {
rte_regexdev_enqueue_burst;
rte_regexdev_get_dev_id;
rte_regexdev_info_get;
+ rte_regexdev_is_valid_dev;
+ rte_regexdev_logtype;
rte_regexdev_queue_pair_setup;
rte_regexdev_rule_db_compile_activate;
rte_regexdev_rule_db_export;
@@ -24,3 +27,9 @@ EXPERIMENTAL {
rte_regexdev_xstats_names_get;
rte_regexdev_xstats_reset;
};
+
+INTERNAL {
+ rte_regexdev_get_device_by_name;
+ rte_regexdev_register;
+ rte_regexdev_unregister;
+};
--
2.34.1
^ permalink raw reply [relevance 4%]
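On the consumer side, the effect of these section attributes follows the existing DPDK conventions: experimental symbols warn at compile time unless the application opts in, and internal symbols are reserved for drivers built within the DPDK tree. A short sketch using the function from the patch above:

/* Application code: opt in before including the header. */
#define ALLOW_EXPERIMENTAL_API 1
#include <rte_regexdev.h>

int probe(uint16_t dev_id)
{
	/* Without ALLOW_EXPERIMENTAL_API, this call now triggers a
	 * compile-time warning, which is the point of the section fix. */
	return rte_regexdev_is_valid_dev(dev_id);
}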
* Re: [PATCH 2/2] devtools: use libabigail rule for mlx glue drivers
2022-03-02 10:16 3% ` Ray Kinsella
@ 2022-03-08 14:04 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-03-08 14:04 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Ray Kinsella
02/03/2022 11:16, Ray Kinsella:
>
> David Marchand <david.marchand@redhat.com> writes:
>
> > Convert the existing exception in the ABI script into a libabigail
> > suppression rule.
> >
> > Note: file_name_regexp could be used to achieve the same with versions of
> > libabigail < 1.7 but soname_regexp has been preferred here since it is
> > already used with a recent change on common/mlx5.
> >
> > While at it, fix indent from a recent change.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > ---
> > devtools/check-abi.sh | 7 -------
> > devtools/libabigail.abignore | 8 ++++++--
> > 2 files changed, 6 insertions(+), 9 deletions(-)
> >
>
> Minor niggle that changes to the check-abi.sh script should have been in
> the first patch?
No, first patch is about DLB, second is mlx.
> Acked-by: Ray Kinsella <mdr@ashroe.eu>
Series applied, thanks.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 1/3] common/mlx5: add Netlink event helpers
@ 2022-03-08 13:48 3% ` Kevin Traynor
2022-03-08 15:18 0% ` Dmitry Kozlyuk
0 siblings, 1 reply; 200+ results
From: Kevin Traynor @ 2022-03-08 13:48 UTC (permalink / raw)
To: Dmitry Kozlyuk, Ferruh Yigit, dev, Luca Boccassi
Cc: stable, Slava Ovsiienko, Matan Azrad, Ray Kinsella
On 02/03/2022 15:56, Dmitry Kozlyuk wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
> [...]
>> Hi Dmitry,
>>
>> For clarification, this patch is not fix, but it is requested
>> to be backported to be able to backport fixes in this patchset,
>> right?
>
> Yes.
The updated API is internal so that should be ok. I'm ok to take this on
21.11 as long as you can confirm it's not breaking any user
compatibility with external sw versions/ABI/API etc from 21.11.0 ?
Assuming that's ok, please send a rebased series for 21.11. I'm not
comfortable rebasing the series with the amount of changes on the dpdk main
branch to the same functions in mlx5_os.c.
P.S. Better to rebase on patch queue [0] to avoid conflicts with other
backports not pushed to dpdk.org yet.
thanks,
Kevin.
[0] https://github.com/kevintraynor/dpdk-stable.git
^ permalink raw reply [relevance 3%]
* RE: [PATCH] examples/distributor: one Tx queue is enough
@ 2022-03-08 3:26 3% ` Honnappa Nagarahalli
0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2022-03-08 3:26 UTC (permalink / raw)
To: Honnappa Nagarahalli, dev, lijuan.tu, juraj.linkes, ohilyard,
david.marchand, thomas, david.hunt
Cc: Ruifeng Wang, nd, bruce.richardson, reshma.pattan, stable, nd
The ABI test failure is not related to this patch.
Thanks,
Honnappa
> -----Original Message-----
> From: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Sent: Monday, March 7, 2022 4:40 PM
> To: dev@dpdk.org; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; lijuan.tu@intel.com;
> juraj.linkes@pantheon.tech; ohilyard@iol.unh.edu;
> david.marchand@redhat.com; thomas@monjalon.net; david.hunt@intel.com
> Cc: Ruifeng Wang <Ruifeng.Wang@arm.com>; nd <nd@arm.com>;
> bruce.richardson@intel.com; reshma.pattan@intel.com; stable@dpdk.org
> Subject: [PATCH] examples/distributor: one Tx queue is enough
>
> The distributor application creates one Tx queue per core. However, the
> transmit is done only from a single core, hence creating one Tx queue is enough.
>
> Fixes: 07db4a975094 ("examples/distributor: new sample app")
> Cc: bruce.richardson@intel.com
> Cc: reshma.pattan@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
> DTS test cases make this change to DPDK. However, I find that one queue is
> enough. Hence we could make this change in DPDK.
>
> examples/distributor/main.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/examples/distributor/main.c b/examples/distributor/main.c index
> c681e237ea..02bf91f555 100644
> --- a/examples/distributor/main.c
> +++ b/examples/distributor/main.c
> @@ -108,7 +108,7 @@ static inline int
> port_init(uint16_t port, struct rte_mempool *mbuf_pool) {
> struct rte_eth_conf port_conf = port_conf_default;
> - const uint16_t rxRings = 1, txRings = rte_lcore_count() - 1;
> + const uint16_t rxRings = 1, txRings = 1;
> int retval;
> uint16_t q;
> uint16_t nb_rxd = RX_RING_SIZE;
> --
> 2.25.1
^ permalink raw reply [relevance 3%]
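The underlying rule is that the queue counts passed to rte_eth_dev_configure() should match the queues actually serviced, since a given Tx queue must not be used from multiple lcores without synchronization. A minimal sketch of the resulting call inside port_init() as quoted above (error handling elided):

	/* One core receives and one core transmits, so one of each suffices. */
	retval = rte_eth_dev_configure(port, 1 /* rxRings */, 1 /* txRings */,
			&port_conf);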
* RE: [PATCH 1/2] regexdev: fix section attribute of symbols
2022-03-06 9:20 4% ` [PATCH 1/2] regexdev: fix section attribute of symbols Thomas Monjalon
@ 2022-03-07 10:15 0% ` Ori Kam
0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2022-03-07 10:15 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon (EXTERNAL), dev
Cc: stable, Ray Kinsella, Jerin Jacob, Pavan Nikhilesh
Hi Thomas,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Sunday, March 6, 2022 11:20 AM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Ray Kinsella <mdr@ashroe.eu>; Ori Kam <orika@nvidia.com>; Jerin Jacob
> <jerinj@marvell.com>; Pavan Nikhilesh <pbhagavatula@marvell.com>
> Subject: [PATCH 1/2] regexdev: fix section attribute of symbols
>
> The functions used by the drivers must be internal,
> while the function and variables used in inline functions
> must be experimental.
>
> These are the changes done in the shared libraries:
> - DF .text Base rte_regexdev_get_device_by_name
> + DF .text INTERNAL rte_regexdev_get_device_by_name
> - DF .text Base rte_regexdev_register
> + DF .text INTERNAL rte_regexdev_register
> - DF .text Base rte_regexdev_unregister
> + DF .text INTERNAL rte_regexdev_unregister
> - DF .text Base rte_regexdev_is_valid_dev
> + DF .text EXPERIMENTAL rte_regexdev_is_valid_dev
> - DO .bss Base rte_regex_devices
> + DO .bss EXPERIMENTAL rte_regex_devices
> - DO .bss Base rte_regexdev_logtype
> + DO .bss EXPERIMENTAL rte_regexdev_logtype
>
> Because these symbols were exported in the default section in DPDK 21.11,
> any change in these functions would be seen as incompatible
> by the ABI compatibility check.
> An exception rule is added for this experimental library,
> so the ABI check will skip it until the next ABI version.
>
> Fixes: bab9497ef78b ("regexdev: introduce API")
> Cc: stable@dpdk.org
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
Acked-by: Ori Kam <orika@nvidia.com>
Best,
Ori
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2] ci: remove outdated default versions for ABI check
2022-03-01 10:07 4% ` David Marchand
@ 2022-03-06 9:27 4% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-03-06 9:27 UTC (permalink / raw)
To: dev; +Cc: Aaron Conole, Michael Santana, David Marchand
01/03/2022 11:07, David Marchand:
> On Tue, Mar 1, 2022 at 10:57 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > The variables REF_GIT_TAG and LIBABIGAIL_VERSION are set
> > in the CI configuration like .travis.yml or .github/workflows/build.yml.
> > The default values are outdated and probably unused.
> >
> > The default values are removed completely
> > to avoid forgetting an update in future.
> >
> > The use of the variables is quoted to make sure
> > a missing value will trigger an appropriate failure.
> >
> > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > Acked-by: Aaron Conole <aconole@redhat.com>
> Acked-by: David Marchand <david.marchand@redhat.com>
Applied
^ permalink raw reply [relevance 4%]
* [PATCH 2/2] build: hide local symbols in shared libraries
2022-03-06 9:20 4% ` [PATCH 1/2] regexdev: fix section attribute of symbols Thomas Monjalon
@ 2022-03-06 9:20 4% ` Thomas Monjalon
2 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-03-06 9:20 UTC (permalink / raw)
To: dev
Cc: stable, Ray Kinsella, Parav Pandit, Xueming Li, Elena Agostini,
Ori Kam, Andrew Rybchenko
The symbols which are not listed in the version script
are exported by default.
Adding a local section with a wildcard makes non-listed functions
and variables hidden, as it should be in all version.map files.
These are the changes done in the shared libraries:
- DF .text Base auxiliary_add_device
- DF .text Base auxiliary_dev_exists
- DF .text Base auxiliary_dev_iterate
- DF .text Base auxiliary_insert_device
- DF .text Base auxiliary_is_ignored_device
- DF .text Base auxiliary_match
- DF .text Base auxiliary_on_scan
- DF .text Base auxiliary_scan
- DO .bss Base auxiliary_bus_logtype
- DO .data Base auxiliary_bus
- DO .bss Base gpu_logtype
There is no impact on the regexdev library.
Because these local symbols were exported as non-internal
in DPDK 21.11, any change in these functions would break the ABI.
Exception rules are added for these experimental libraries,
so the ABI check will skip them until the next ABI version.
Fixes: 1afce3086cf4 ("bus/auxiliary: introduce auxiliary bus")
Fixes: 8b8036a66e3d ("gpudev: introduce GPU device class library")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
devtools/libabigail.abignore | 8 ++++++++
drivers/bus/auxiliary/version.map | 2 ++
lib/gpudev/version.map | 2 ++
lib/regexdev/version.map | 2 ++
4 files changed, 14 insertions(+)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index cff7a293ae..d698d8199d 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -28,3 +28,11 @@
; Ignore changes in common mlx5 driver, should be all internal
[suppress_file]
soname_regexp = ^librte_common_mlx5\.
+
+; Ignore visibility fix of local functions in experimental auxiliary driver
+[suppress_file]
+ soname_regexp = ^librte_bus_auxiliary\.
+
+; Ignore visibility fix of local functions in experimental gpudev library
+[suppress_file]
+ soname_regexp = ^librte_gpudev\.
diff --git a/drivers/bus/auxiliary/version.map b/drivers/bus/auxiliary/version.map
index a52260657c..dc993e84ff 100644
--- a/drivers/bus/auxiliary/version.map
+++ b/drivers/bus/auxiliary/version.map
@@ -4,4 +4,6 @@ EXPERIMENTAL {
# added in 21.08
rte_auxiliary_register;
rte_auxiliary_unregister;
+
+ local: *;
};
diff --git a/lib/gpudev/version.map b/lib/gpudev/version.map
index b23e3fd6eb..a2c8ce5759 100644
--- a/lib/gpudev/version.map
+++ b/lib/gpudev/version.map
@@ -39,4 +39,6 @@ INTERNAL {
rte_gpu_get_by_name;
rte_gpu_notify;
rte_gpu_release;
+
+ local: *;
};
diff --git a/lib/regexdev/version.map b/lib/regexdev/version.map
index 988b909638..3c6e9fffa1 100644
--- a/lib/regexdev/version.map
+++ b/lib/regexdev/version.map
@@ -26,6 +26,8 @@ EXPERIMENTAL {
rte_regexdev_xstats_get;
rte_regexdev_xstats_names_get;
rte_regexdev_xstats_reset;
+
+ local: *;
};
INTERNAL {
--
2.34.1
^ permalink raw reply [relevance 4%]
* [PATCH 1/2] regexdev: fix section attribute of symbols
@ 2022-03-06 9:20 4% ` Thomas Monjalon
2022-03-07 10:15 0% ` Ori Kam
2022-03-06 9:20 4% ` [PATCH 2/2] build: hide local symbols in shared libraries Thomas Monjalon
2 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-03-06 9:20 UTC (permalink / raw)
To: dev; +Cc: stable, Ray Kinsella, Ori Kam, Jerin Jacob, Pavan Nikhilesh
The functions used by the drivers must be internal,
while the function and variables used in inline functions
must be experimental.
These are the changes done in the shared libraries:
- DF .text Base rte_regexdev_get_device_by_name
+ DF .text INTERNAL rte_regexdev_get_device_by_name
- DF .text Base rte_regexdev_register
+ DF .text INTERNAL rte_regexdev_register
- DF .text Base rte_regexdev_unregister
+ DF .text INTERNAL rte_regexdev_unregister
- DF .text Base rte_regexdev_is_valid_dev
+ DF .text EXPERIMENTAL rte_regexdev_is_valid_dev
- DO .bss Base rte_regex_devices
+ DO .bss EXPERIMENTAL rte_regex_devices
- DO .bss Base rte_regexdev_logtype
+ DO .bss EXPERIMENTAL rte_regexdev_logtype
Because these symbols were exported in the default section in DPDK 21.11,
any change in these functions would be seen as incompatible
by the ABI compatibility check.
An exception rule is added for this experimental library,
so the ABI check will skip it until the next ABI version.
Fixes: bab9497ef78b ("regexdev: introduce API")
Cc: stable@dpdk.org
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
devtools/libabigail.abignore | 4 ++++
lib/regexdev/rte_regexdev.h | 4 ++++
lib/regexdev/rte_regexdev_driver.h | 3 +++
lib/regexdev/version.map | 9 +++++++++
4 files changed, 20 insertions(+)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 301b3dacb8..cff7a293ae 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -21,6 +21,10 @@
[suppress_type]
name = rte_crypto_asym_op
+; Ignore section attribute fixes in experimental regexdev library
+[suppress_file]
+ soname_regexp = ^librte_regexdev\.
+
; Ignore changes in common mlx5 driver, should be all internal
[suppress_file]
soname_regexp = ^librte_common_mlx5\.
diff --git a/lib/regexdev/rte_regexdev.h b/lib/regexdev/rte_regexdev.h
index 4ba67b0c25..3bce8090f6 100644
--- a/lib/regexdev/rte_regexdev.h
+++ b/lib/regexdev/rte_regexdev.h
@@ -225,6 +225,9 @@ extern int rte_regexdev_logtype;
} while (0)
/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
* Check if dev_id is ready.
*
* @param dev_id
@@ -234,6 +237,7 @@ extern int rte_regexdev_logtype;
* - 0 if device state is not in ready state.
* - 1 if device state is ready state.
*/
+__rte_experimental
int rte_regexdev_is_valid_dev(uint16_t dev_id);
/**
diff --git a/lib/regexdev/rte_regexdev_driver.h b/lib/regexdev/rte_regexdev_driver.h
index 64742016c0..6246b144a6 100644
--- a/lib/regexdev/rte_regexdev_driver.h
+++ b/lib/regexdev/rte_regexdev_driver.h
@@ -32,6 +32,7 @@ extern "C" {
* A pointer to the RegEx device slot case of success,
* NULL otherwise.
*/
+__rte_internal
struct rte_regexdev *rte_regexdev_register(const char *name);
/**
@@ -41,6 +42,7 @@ struct rte_regexdev *rte_regexdev_register(const char *name);
* @param dev
* Device to be released.
*/
+__rte_internal
void rte_regexdev_unregister(struct rte_regexdev *dev);
/**
@@ -50,6 +52,7 @@ void rte_regexdev_unregister(struct rte_regexdev *dev);
* @param name
* The device name.
*/
+__rte_internal
struct rte_regexdev *rte_regexdev_get_device_by_name(const char *name);
#ifdef __cplusplus
diff --git a/lib/regexdev/version.map b/lib/regexdev/version.map
index 8db9b17018..988b909638 100644
--- a/lib/regexdev/version.map
+++ b/lib/regexdev/version.map
@@ -1,6 +1,7 @@
EXPERIMENTAL {
global:
+ rte_regex_devices;
rte_regexdev_attr_get;
rte_regexdev_attr_set;
rte_regexdev_close;
@@ -11,6 +12,8 @@ EXPERIMENTAL {
rte_regexdev_enqueue_burst;
rte_regexdev_get_dev_id;
rte_regexdev_info_get;
+ rte_regexdev_is_valid_dev;
+ rte_regexdev_logtype;
rte_regexdev_queue_pair_setup;
rte_regexdev_rule_db_compile_activate;
rte_regexdev_rule_db_export;
@@ -24,3 +27,9 @@ EXPERIMENTAL {
rte_regexdev_xstats_names_get;
rte_regexdev_xstats_reset;
};
+
+INTERNAL {
+ rte_regexdev_get_device_by_name;
+ rte_regexdev_register;
+ rte_regexdev_unregister;
+};
--
2.34.1
^ permalink raw reply [relevance 4%]
* Re: [RFC] ethdev: introduce protocol type based header split
2022-03-04 9:58 3% ` Zhang, Qi Z
2022-03-04 11:54 0% ` Morten Brørup
@ 2022-03-04 17:32 3% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2022-03-04 17:32 UTC (permalink / raw)
To: Zhang, Qi Z
Cc: Ding, Xuan, thomas, Yigit, Ferruh, andrew.rybchenko, dev,
viacheslavo, Yu, Ping, Wang, YuanX
On Fri, 4 Mar 2022 09:58:11 +0000
"Zhang, Qi Z" <qi.z.zhang@intel.com> wrote:
> > -----Original Message-----
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Friday, March 4, 2022 12:16 AM
> > To: Ding, Xuan <xuan.ding@intel.com>
> > Cc: thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> > andrew.rybchenko@oktetlabs.ru; dev@dpdk.org; viacheslavo@nvidia.com;
> > Zhang, Qi Z <qi.z.zhang@intel.com>; Yu, Ping <ping.yu@intel.com>; Wang,
> > YuanX <yuanx.wang@intel.com>
> > Subject: Re: [RFC] ethdev: introduce protocol type based header split
> >
> > On Thu, 3 Mar 2022 06:01:36 +0000
> > xuan.ding@intel.com wrote:
> >
> > > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > > c2d1f9a972..6743648c22 100644
> > > --- a/lib/ethdev/rte_ethdev.h
> > > +++ b/lib/ethdev/rte_ethdev.h
> > > @@ -1202,7 +1202,8 @@ struct rte_eth_rxseg_split {
> > > struct rte_mempool *mp; /**< Memory pool to allocate segment from.
> > */
> > > uint16_t length; /**< Segment data length, configures split point. */
> > > uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
> > > - uint32_t reserved; /**< Reserved field. */
> > > + uint16_t proto;
> > > + uint16_t reserved; /**< Reserved field. */
> > > };
> >
> > This feature suffers from a common bad design pattern.
> > You can't just start using reserved fields unless the previous versions enforced
> > that the field was a particular value (usually zero).
>
> Yes, agree, that was a mistake: the reserved field was not documented in the previous release, and usually it should be zero.
> And I think one of the typical purposes of a reserved field is to make life easy for adding new features without breaking the ABI.
> So, should we just take the risk, as I guess it might not be a big deal in real cases?
There is a cost/benefit tradeoff here. Although HW vendors would like to enable
more features, it really is not that much of an impact for users to wait until
the next LTS.
Yes, the API/ABI rules are restrictive, but IMHO it is about learning how to
handle SW upgrades in a more user-friendly manner. It was hard for the Linux
kernel to learn how to do this, but after 10 years they mostly have it right.
If this were a bug (especially a security bug), then the rules could be lifted.
^ permalink raw reply [relevance 3%]
* RE: [RFC] ethdev: introduce protocol type based header split
2022-03-04 9:58 3% ` Zhang, Qi Z
@ 2022-03-04 11:54 0% ` Morten Brørup
2022-03-04 17:32 3% ` Stephen Hemminger
1 sibling, 0 replies; 200+ results
From: Morten Brørup @ 2022-03-04 11:54 UTC (permalink / raw)
To: Zhang, Qi Z, Stephen Hemminger, Ding, Xuan
Cc: thomas, Yigit, Ferruh, andrew.rybchenko, dev, viacheslavo, Yu,
Ping, Wang, YuanX
> From: Zhang, Qi Z [mailto:qi.z.zhang@intel.com]
> Sent: Friday, 4 March 2022 10.58
>
> > From: Stephen Hemminger <stephen@networkplumber.org>
> > Sent: Friday, March 4, 2022 12:16 AM
> >
> > On Thu, 3 Mar 2022 06:01:36 +0000
> > xuan.ding@intel.com wrote:
> >
> > > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index
> > > c2d1f9a972..6743648c22 100644
> > > --- a/lib/ethdev/rte_ethdev.h
> > > +++ b/lib/ethdev/rte_ethdev.h
> > > @@ -1202,7 +1202,8 @@ struct rte_eth_rxseg_split {
> > > struct rte_mempool *mp; /**< Memory pool to allocate segment
> from.
> > */
> > > uint16_t length; /**< Segment data length, configures split
> point. */
> > > uint16_t offset; /**< Data offset from beginning of mbuf data
> buffer. */
> > > - uint32_t reserved; /**< Reserved field. */
> > > + uint16_t proto;
> > > + uint16_t reserved; /**< Reserved field. */
> > > };
> >
> > This feature suffers from a common bad design pattern.
> > You can't just start using reserved fields unless the previous
> versions enforced
> > that the field was a particular value (usually zero).
>
> Yes, agree, that was a mistake: the reserved field was not documented
> in the previous release, and usually it should be zero.
> And I think one of the typical purposes of a reserved field is to
> make life easy for adding new features without breaking the ABI.
> So, should we just take the risk, as I guess it might not be a big deal
> in real cases?
>
In this specific case, I think it can be done with very low risk in real cases.
Assuming that splitting based on fixed length and splitting based on protocol
header parsing are mutually exclusive, the PMDs can simply ignore the "proto"
field (and log a warning about it) if the length field is non-zero. This will
provide backwards compatibility with applications not zeroing out the 32-bit
"reserved" field.
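A minimal sketch of that policy in a PMD's configuration path; the helper name is hypothetical, "proto" is the field proposed in this RFC, and the logtype choice is illustrative only:

    #include <rte_ethdev.h>
    #include <rte_log.h>

    /* Hypothetical PMD-side policy: a fixed-length split wins, and a
     * non-zero proto in that case is treated as a stale reserved value
     * left by an application built against the old layout. */
    static uint16_t
    rxseg_effective_proto(const struct rte_eth_rxseg_split *seg)
    {
            if (seg->length != 0) {
                    if (seg->proto != 0)
                            RTE_LOG(WARNING, EAL,
                                    "ignoring proto %u: fixed-length split configured\n",
                                    seg->proto);
                    return 0; /* keep the pre-existing length-based behavior */
            }
            return seg->proto;
    }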
> Thanks
> Qi
>
>
>
> >
> > There is no guarantee that application will initialize these reserved
> fields and
> > now using them risks breaking the API/ABI. It looks like
> >
> > rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split
> *rx_seg,
> >
> > Would have had to check in previous release.
> >
> > This probably has to wait until 22.11 next API release.
^ permalink raw reply [relevance 0%]
* RE: [RFC] ethdev: introduce protocol type based header split
2022-03-03 16:15 3% ` Stephen Hemminger
@ 2022-03-04 9:58 3% ` Zhang, Qi Z
2022-03-04 11:54 0% ` Morten Brørup
2022-03-04 17:32 3% ` Stephen Hemminger
0 siblings, 2 replies; 200+ results
From: Zhang, Qi Z @ 2022-03-04 9:58 UTC (permalink / raw)
To: Stephen Hemminger, Ding, Xuan
Cc: thomas, Yigit, Ferruh, andrew.rybchenko, dev, viacheslavo, Yu,
Ping, Wang, YuanX
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Friday, March 4, 2022 12:16 AM
> To: Ding, Xuan <xuan.ding@intel.com>
> Cc: thomas@monjalon.net; Yigit, Ferruh <ferruh.yigit@intel.com>;
> andrew.rybchenko@oktetlabs.ru; dev@dpdk.org; viacheslavo@nvidia.com;
> Zhang, Qi Z <qi.z.zhang@intel.com>; Yu, Ping <ping.yu@intel.com>; Wang,
> YuanX <yuanx.wang@intel.com>
> Subject: Re: [RFC] ethdev: introduce protocol type based header split
>
> On Thu, 3 Mar 2022 06:01:36 +0000
> xuan.ding@intel.com wrote:
>
> > diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index
> > c2d1f9a972..6743648c22 100644
> > --- a/lib/ethdev/rte_ethdev.h
> > +++ b/lib/ethdev/rte_ethdev.h
> > @@ -1202,7 +1202,8 @@ struct rte_eth_rxseg_split {
> > struct rte_mempool *mp; /**< Memory pool to allocate segment from.
> */
> > uint16_t length; /**< Segment data length, configures split point. */
> > uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
> > - uint32_t reserved; /**< Reserved field. */
> > + uint16_t proto;
> > + uint16_t reserved; /**< Reserved field. */
> > };
>
> This feature suffers from a common bad design pattern.
> You can't just start using reserved fields unless the previous versions enforced
> that the field was a particular value (usually zero).
Yes, agree, that was a mistake: the reserved field was not documented in the previous release, and usually it should be zero.
And I think one of the typical purposes of a reserved field is to make life easy for adding new features without breaking the ABI.
So, should we just take the risk, as I guess it might not be a big deal in real cases?
Thanks
Qi
>
> There is no guarantee that application will initialize these reserved fields and
> now using them risks breaking the API/ABI. It looks like
>
> rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
>
> Would have had to check in previous release.
>
> This probably has to wait until 22.11 next API release.
^ permalink raw reply [relevance 3%]
* Re: [RFC] ethdev: introduce protocol type based header split
@ 2022-03-03 16:15 3% ` Stephen Hemminger
2022-03-04 9:58 3% ` Zhang, Qi Z
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2022-03-03 16:15 UTC (permalink / raw)
To: xuan.ding
Cc: thomas, ferruh.yigit, andrew.rybchenko, dev, viacheslavo,
qi.z.zhang, ping.yu, Yuan Wang
On Thu, 3 Mar 2022 06:01:36 +0000
xuan.ding@intel.com wrote:
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index c2d1f9a972..6743648c22 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -1202,7 +1202,8 @@ struct rte_eth_rxseg_split {
> struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
> uint16_t length; /**< Segment data length, configures split point. */
> uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
> - uint32_t reserved; /**< Reserved field. */
> + uint16_t proto;
> + uint16_t reserved; /**< Reserved field. */
> };
This feature suffers from a common bad design pattern.
You can't just start using reserved fields unless the previous versions
enforced that the field was a particular value (usually zero).
There is no guarantee that applications will initialize these reserved
fields, and using them now risks breaking the API/ABI. It looks like
rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg,
would have had to check this in a previous release.
This probably has to wait until 22.11, the next API release.
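For illustration, the kind of check that would have been needed in an earlier release could look like the sketch below; the helper name and its placement are hypothetical, not existing code:

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Hypothetical guard for an earlier release: refuse configurations
     * that leave garbage in the reserved field, so the field could
     * later be repurposed (e.g. as "proto") without silently changing
     * behavior for existing applications. */
    static int
    check_rxseg_reserved(const struct rte_eth_rxseg_split *rx_seg,
                         uint16_t n_seg)
    {
            uint16_t i;

            for (i = 0; i < n_seg; i++) {
                    if (rx_seg[i].reserved != 0)
                            return -EINVAL; /* reserved must be zero */
            }
            return 0;
    }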
^ permalink raw reply [relevance 3%]
* Re: [PATCH 2/2] devtools: use libabigail rule for mlx glue drivers
2022-03-01 16:54 10% ` [PATCH 2/2] devtools: use libabigail rule for mlx glue drivers David Marchand
@ 2022-03-02 10:16 3% ` Ray Kinsella
2022-03-08 14:04 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2022-03-02 10:16 UTC (permalink / raw)
To: David Marchand; +Cc: dev, thomas
David Marchand <david.marchand@redhat.com> writes:
> Convert the existing exception in the ABI script into a libabigail
> suppression rule.
>
> Note: file_name_regexp could be used to achieve the same with versions of
> libabigail < 1.7 but soname_regexp has been preferred here since it is
> already used with a recent change on common/mlx5.
>
> While at it, fix indent from a recent change.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> devtools/check-abi.sh | 7 -------
> devtools/libabigail.abignore | 8 ++++++--
> 2 files changed, 6 insertions(+), 9 deletions(-)
>
Minor niggle that changes to the check-abi.sh script should have been in
the first patch?
Acked-by: Ray Kinsella <mdr@ashroe.eu>
--
Regards, Ray K
^ permalink raw reply [relevance 3%]
* Re: [PATCH 1/2] devtools: remove event/dlb exception in ABI check
2022-03-01 16:54 15% [PATCH 1/2] devtools: remove event/dlb exception in ABI check David Marchand
2022-03-01 16:54 10% ` [PATCH 2/2] devtools: use libabigail rule for mlx glue drivers David Marchand
@ 2022-03-02 10:13 4% ` Ray Kinsella
1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2022-03-02 10:13 UTC (permalink / raw)
To: David Marchand; +Cc: dev, thomas, stable, Ferruh Yigit
David Marchand <david.marchand@redhat.com> writes:
> The event/dlb driver exception can be removed, as this rule made sense
> for changes in DPDK_21 ABI and is obsolete for DPDK_22.
>
> Fixes: fdab8f2e1749 ("version: 21.11-rc0")
> Cc: stable@dpdk.org
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
Acked-by: Ray Kinsella <mdr@ashroe.eu>
--
Regards, Ray K
^ permalink raw reply [relevance 4%]
* [PATCH 2/2] devtools: use libabigail rule for mlx glue drivers
2022-03-01 16:54 15% [PATCH 1/2] devtools: remove event/dlb exception in ABI check David Marchand
@ 2022-03-01 16:54 10% ` David Marchand
2022-03-02 10:16 3% ` Ray Kinsella
2022-03-02 10:13 4% ` [PATCH 1/2] devtools: remove event/dlb exception in ABI check Ray Kinsella
1 sibling, 1 reply; 200+ results
From: David Marchand @ 2022-03-01 16:54 UTC (permalink / raw)
To: dev; +Cc: thomas, Ray Kinsella
Convert the existing exception in the ABI script into a libabigail
suppression rule.
Note: file_name_regexp could be used to achieve the same with versions of
libabigail < 1.7 but soname_regexp has been preferred here since it is
already used with a recent change on common/mlx5.
While at it, fix indent from a recent change.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
devtools/check-abi.sh | 7 -------
devtools/libabigail.abignore | 8 ++++++--
2 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index 033f6252d0..64e148070d 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -37,13 +37,6 @@ fi
error=
for dump in $(find $refdir -name "*.dump"); do
name=$(basename $dump)
- # skip glue drivers, example librte_pmd_mlx5_glue.dump
- # We can't rely on a suppression rule for now:
- # https://sourceware.org/bugzilla/show_bug.cgi?id=25480
- if grep -qE "\<soname='[^']*_glue\.so\.[^']*'" $dump; then
- echo "Skipped glue library $name."
- continue
- fi
if grep -qE "\<librte_*.*_octeontx2" $dump; then
echo "Skipped removed driver $name."
continue
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index 301b3dacb8..9c921c47d4 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -12,10 +12,14 @@
[suppress_variable]
name_regexp = _pmd_info$
+; Ignore changes on soname for mlx glue internal drivers
+[suppress_file]
+ soname_regexp = ^librte_.*mlx.*glue\.
+
; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
[suppress_type]
- name = rte_security_ipsec_sa_options
- has_data_member_inserted_between = {offset_of(reserved_opts), end}
+ name = rte_security_ipsec_sa_options
+ has_data_member_inserted_between = {offset_of(reserved_opts), end}
; Ignore changes to rte_crypto_asym_op, asymmetric crypto API is experimental
[suppress_type]
--
2.23.0
^ permalink raw reply [relevance 10%]
* [PATCH 1/2] devtools: remove event/dlb exception in ABI check
@ 2022-03-01 16:54 15% David Marchand
2022-03-01 16:54 10% ` [PATCH 2/2] devtools: use libabigail rule for mlx glue drivers David Marchand
2022-03-02 10:13 4% ` [PATCH 1/2] devtools: remove event/dlb exception in ABI check Ray Kinsella
0 siblings, 2 replies; 200+ results
From: David Marchand @ 2022-03-01 16:54 UTC (permalink / raw)
To: dev; +Cc: thomas, stable, Ray Kinsella, Ferruh Yigit
The event/dlb driver exception can be removed, as this rule made sense
for changes in DPDK_21 ABI and is obsolete for DPDK_22.
Fixes: fdab8f2e1749 ("version: 21.11-rc0")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
devtools/check-abi.sh | 4 ----
1 file changed, 4 deletions(-)
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index 675f10142e..033f6252d0 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -44,10 +44,6 @@ for dump in $(find $refdir -name "*.dump"); do
echo "Skipped glue library $name."
continue
fi
- if grep -qE "\<soname='librte_event_dlb\.so" $dump; then
- echo "Skipped removed driver $name."
- continue
- fi
if grep -qE "\<librte_*.*_octeontx2" $dump; then
echo "Skipped removed driver $name."
continue
--
2.23.0
^ permalink raw reply [relevance 15%]
* Re: [PATCH v2] ci: remove outdated default versions for ABI check
2022-03-01 9:56 9% ` [PATCH v2] ci: remove outdated default versions for ABI check Thomas Monjalon
@ 2022-03-01 10:07 4% ` David Marchand
2022-03-06 9:27 4% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2022-03-01 10:07 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Aaron Conole, Michael Santana
On Tue, Mar 1, 2022 at 10:57 AM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> The variables REF_GIT_TAG and LIBABIGAIL_VERSION are set
> in the CI configuration like .travis.yml or .github/workflows/build.yml.
> The default values are outdated and probably unused.
>
> The default values are removed completely
> to avoid forgetting an update in future.
>
> The use of the variables is quoted to make sure
> a missing value will trigger an appropriate failure.
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> Acked-by: Aaron Conole <aconole@redhat.com>
Acked-by: David Marchand <david.marchand@redhat.com>
--
David Marchand
^ permalink raw reply [relevance 4%]
* [PATCH v2] ci: remove outdated default versions for ABI check
2022-02-08 13:47 4% [PATCH] ci: remove outdated default reference tag for ABI Thomas Monjalon
2022-02-08 15:08 7% ` Aaron Conole
@ 2022-03-01 9:56 9% ` Thomas Monjalon
2022-03-01 10:07 4% ` David Marchand
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-03-01 9:56 UTC (permalink / raw)
To: dev; +Cc: david.marchand, Aaron Conole, Michael Santana
The variables REF_GIT_TAG and LIBABIGAIL_VERSION are set
in the CI configuration like .travis.yml or .github/workflows/build.yml.
The default values are outdated and probably unused.
The default values are removed completely
to avoid forgetting an update in future.
The use of the variables is quoted to make sure
a missing value will trigger an appropriate failure.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Aaron Conole <aconole@redhat.com>
---
.ci/linux-build.sh | 7 ++-----
1 file changed, 2 insertions(+), 5 deletions(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index 67d68535e0..05aa21ec69 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -104,8 +104,6 @@ if [ "$AARCH64" != "true" ] && [ "$PPC64LE" != "true" ]; then
fi
if [ "$ABI_CHECKS" = "true" ]; then
- LIBABIGAIL_VERSION=${LIBABIGAIL_VERSION:-libabigail-1.6}
-
if [ "$(cat libabigail/VERSION 2>/dev/null)" != "$LIBABIGAIL_VERSION" ]; then
rm -rf libabigail
# if we change libabigail, invalidate existing abi cache
@@ -113,14 +111,13 @@ if [ "$ABI_CHECKS" = "true" ]; then
fi
if [ ! -d libabigail ]; then
- install_libabigail $LIBABIGAIL_VERSION $(pwd)/libabigail
+ install_libabigail "$LIBABIGAIL_VERSION" $(pwd)/libabigail
echo $LIBABIGAIL_VERSION > libabigail/VERSION
fi
export PATH=$(pwd)/libabigail/bin:$PATH
REF_GIT_REPO=${REF_GIT_REPO:-https://dpdk.org/git/dpdk}
- REF_GIT_TAG=${REF_GIT_TAG:-v19.11}
if [ "$(cat reference/VERSION 2>/dev/null)" != "$REF_GIT_TAG" ]; then
rm -rf reference
@@ -128,7 +125,7 @@ if [ "$ABI_CHECKS" = "true" ]; then
if [ ! -d reference ]; then
refsrcdir=$(readlink -f $(pwd)/../dpdk-$REF_GIT_TAG)
- git clone --single-branch -b $REF_GIT_TAG $REF_GIT_REPO $refsrcdir
+ git clone --single-branch -b "$REF_GIT_TAG" $REF_GIT_REPO $refsrcdir
meson $OPTS -Dexamples= $refsrcdir $refsrcdir/build
ninja -C $refsrcdir/build
DESTDIR=$(pwd)/reference ninja -C $refsrcdir/build install
--
2.34.1
^ permalink raw reply [relevance 9%]
* Re: [EXT] [PATCH] crypto: fix misspelled key in qt format
2022-02-25 17:56 4% ` Thomas Monjalon
@ 2022-02-25 19:35 0% ` Ray Kinsella
0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2022-02-25 19:35 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Akhil Goyal, Kusztal, ArkadiuszX, Zhang, Roy Fan, ray.kinsella,
david.marchand, dev
Thomas Monjalon <thomas@monjalon.net> writes:
> 18/02/2022 07:11, Kusztal, ArkadiuszX:
>> From: Akhil Goyal <gakhil@marvell.com>
>> > Fix ABI warning.
>> > Add libabigail.abiignore rule.
>>
>> I think what is worth noticing is the fact that after the "random 'k' patch" addition of
>> [suppress_type]
>> name = rte_crypto_asym_op
>> this problem does not show up.
>>
>> But I think it is safer to send addition of
>> [suppress_type]
>> name = rte_crypto_rsa_priv_key_type
>> anyway.
>> Will send v2.
>
> I don't understand why this rule was added,
> and the comment says nothing about it:
> "Ignore name change of rsa qt key type"
>
> The ABI check is fine without above because of this existing exception:
>
> ; Ignore changes to rte_crypto_asym_op, asymmetric crypto API is experimental
> [suppress_type]
> name = rte_crypto_asym_op
>
> So I will just drop the unjustified additional exception while pulling.
>
> Next time, please make sure such ABI exception is approved by more maintainers.
To be fair to those involved, I had been CC'ed on the v2 of this.
I didn't respond before the patch was merged however.
--
Regards, Ray K
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3 1/6] common/mlx5: consider local functions as internal
2022-02-25 18:38 3% ` Thomas Monjalon
@ 2022-02-25 19:13 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-02-25 19:13 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Michael Baum, dev, Matan Azrad, Raslan Darawsheh,
Viacheslav Ovsiienko, David Marchand, Ray Kinsella
On 2/25/2022 6:38 PM, Thomas Monjalon wrote:
> 25/02/2022 19:01, Ferruh Yigit:
>> On 2/24/2022 11:25 PM, Michael Baum wrote:
>>> The functions which are not explicitly marked as internal
>>> were exported because the local catch-all rule was missing in the
>>> version script.
>>> After adding the missing rule, all local functions are hidden.
>>> The function mlx5_get_device_guid is used in another library,
>>> so it needs to be exported (as internal).
>>>
>>> Because the local functions were exported as non-internal
>>> in DPDK 21.11, any change in these functions would break the ABI.
>>> An ABI exception is added for this library, considering that all
>>> functions are either local or internal.
>>>
>>
>> When a function is not listed explicitly in .map file, it shouldn't
>> be exported at all.
>
> It seems we need local:* to achieve this behaviour.
> Few other libs are missing it. I plan to send a patch for them.
>
+1 for this patch, thanks.
>> So I am not sure if this exception is required; did you get
>> a warning from the tool, or is this theoretical?
>
> It is not theoretical; you can check with objdump:
> objdump -T build/lib/librte_common_mlx5.so | sed -rn 's,^[[:xdigit:]]* g *(D[^0]*)[^ ]* *,\1,p'
>
> I did not check the ABI tool without the exception.
>
Yes, the tool complains with change [1]; I will proceed with the original patch.
[1]
29 Removed functions:
[D] 'function int mlx5_auxiliary_get_pci_str(const rte_auxiliary_device*, char*, size_t)' {mlx5_auxiliary_get_pci_str}
[D] 'function void mlx5_common_auxiliary_init()' {mlx5_common_auxiliary_init}
[D] 'function int mlx5_common_dev_dma_map(rte_device*, void*, uint64_t, size_t)' {mlx5_common_dev_dma_map}
[D] 'function int mlx5_common_dev_dma_unmap(rte_device*, void*, uint64_t, size_t)' {mlx5_common_dev_dma_unmap}
[D] 'function int mlx5_common_dev_probe(rte_device*)' {mlx5_common_dev_probe}
[D] 'function int mlx5_common_dev_remove(rte_device*)' {mlx5_common_dev_remove}
[D] 'function void mlx5_common_driver_on_register_pci(mlx5_class_driver*)' {mlx5_common_driver_on_register_pci}
[D] 'function void mlx5_common_pci_init()' {mlx5_common_pci_init}
[D] 'function mlx5_mr* mlx5_create_mr_ext(void*, uintptr_t, size_t, int, mlx5_reg_mr_t)' {mlx5_create_mr_ext}
[D] 'function bool mlx5_dev_pci_match(const mlx5_class_driver*, const rte_device*)' {mlx5_dev_pci_match}
[D] 'function int mlx5_dev_to_pci_str(const rte_device*, char*, size_t)' {mlx5_dev_to_pci_str}
[D] 'function void mlx5_free_mr_by_addr(mlx5_mr_share_cache*, const char*, void*, size_t)' {mlx5_free_mr_by_addr}
[D] 'function ibv_device* mlx5_get_aux_ibv_device(const rte_auxiliary_device*)' {mlx5_get_aux_ibv_device}
[D] 'function void mlx5_glue_constructor()' {mlx5_glue_constructor}
[D] 'function void mlx5_malloc_mem_select(uint32_t)' {mlx5_malloc_mem_select}
[D] 'function void mlx5_mr_btree_dump(mlx5_mr_btree*)' {mlx5_mr_btree_dump}
[D] 'function int mlx5_mr_create_cache(mlx5_mr_share_cache*, int)' {mlx5_mr_create_cache}
[D] 'function void mlx5_mr_free(mlx5_mr*, mlx5_dereg_mr_t)' {mlx5_mr_free}
[D] 'function int mlx5_mr_insert_cache(mlx5_mr_share_cache*, mlx5_mr*)' {mlx5_mr_insert_cache}
[D] 'function mlx5_mr* mlx5_mr_lookup_list(mlx5_mr_share_cache*, mr_cache_entry*, uintptr_t)' {mlx5_mr_lookup_list}
[D] 'function void mlx5_mr_rebuild_cache(mlx5_mr_share_cache*)' {mlx5_mr_rebuild_cache}
[D] 'function void mlx5_mr_release_cache(mlx5_mr_share_cache*)' {mlx5_mr_release_cache}
[D] 'function int mlx5_nl_devlink_family_id_get(int)' {mlx5_nl_devlink_family_id_get}
[D] 'function int mlx5_nl_enable_roce_get(int, int, const char*, int*)' {mlx5_nl_enable_roce_get}
[D] 'function int mlx5_nl_enable_roce_set(int, int, const char*, int)' {mlx5_nl_enable_roce_set}
[D] 'function int mlx5_os_open_device(mlx5_common_device*, uint32_t)' {mlx5_os_open_device}
[D] 'function int mlx5_os_pd_create(mlx5_common_device*)' {mlx5_os_pd_create}
[D] 'function void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t*, mlx5_dereg_mr_t*)' {mlx5_os_set_reg_mr_cb}
[D] 'function void mlx5_set_context_attr(rte_device*, ibv_context*)' {mlx5_set_context_attr}
2 Removed variables:
[D] 'uint32_t atomic_sn' {atomic_sn}
[D] 'int mlx5_common_logtype' {mlx5_common_logtype}
1 Removed function symbol not referenced by debug info:
[D] mlx5_mr_dump_cache
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3 1/6] common/mlx5: consider local functions as internal
2022-02-25 18:01 0% ` Ferruh Yigit
@ 2022-02-25 18:38 3% ` Thomas Monjalon
2022-02-25 19:13 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-02-25 18:38 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Michael Baum, dev, Matan Azrad, Raslan Darawsheh,
Viacheslav Ovsiienko, David Marchand, Ray Kinsella
25/02/2022 19:01, Ferruh Yigit:
> On 2/24/2022 11:25 PM, Michael Baum wrote:
> > The functions which are not explicitly marked as internal
> > were exported because the local catch-all rule was missing in the
> > version script.
> > After adding the missing rule, all local functions are hidden.
> > The function mlx5_get_device_guid is used in another library,
> > so it needs to be exported (as internal).
> >
> > Because the local functions were exported as non-internal
> > in DPDK 21.11, any change in these functions would break the ABI.
> > An ABI exception is added for this library, considering that all
> > functions are either local or internal.
> >
>
> When a function is not listed explicitly in .map file, it shouldn't
> be exported at all.
It seems we need local:* to achieve this behaviour.
Few other libs are missing it. I plan to send a patch for them.
> So I am not sure if this exception is required; did you get
> a warning from the tool, or is this theoretical?
It is not theoretical; you can check with objdump:
objdump -T build/lib/librte_common_mlx5.so | sed -rn 's,^[[:xdigit:]]* g *(D[^0]*)[^ ]* *,\1,p'
I did not check the ABI tool without the exception.
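For reference, the local: catch-all mentioned above looks like this in a version script; the exported symbol name is a placeholder:

    EXPERIMENTAL {
            global:

            rte_foo_bar; # placeholder for the symbols a library really exports

            local: *; # hide every symbol not explicitly listed above
    };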
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3 1/6] common/mlx5: consider local functions as internal
2022-02-24 23:25 4% ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
@ 2022-02-25 18:01 0% ` Ferruh Yigit
2022-02-25 18:38 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2022-02-25 18:01 UTC (permalink / raw)
To: Michael Baum, dev, Thomas Monjalon
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko,
David Marchand, Ray Kinsella
On 2/24/2022 11:25 PM, Michael Baum wrote:
> The functions which are not explicitly marked as internal
> were exported because the local catch-all rule was missing in the
> version script.
> After adding the missing rule, all local functions are hidden.
> The function mlx5_get_device_guid is used in another library,
> so it needs to be exported (as internal).
>
> Because the local functions were exported as non-internal
> in DPDK 21.11, any change in these functions would break the ABI.
> An ABI exception is added for this library, considering that all
> functions are either local or internal.
>
When a function is not listed explicitly in .map file, it shouldn't
be exported at all.
So I am not sure if this exception is required; did you get
a warning from the tool, or is this theoretical?
cc'ed David and Ray for comment.
> Signed-off-by: Michael Baum <michaelba@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
<...>
^ permalink raw reply [relevance 0%]
* Re: [EXT] [PATCH] crypto: fix misspelled key in qt format
2022-02-18 6:11 0% ` Kusztal, ArkadiuszX
@ 2022-02-25 17:56 4% ` Thomas Monjalon
2022-02-25 19:35 0% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-02-25 17:56 UTC (permalink / raw)
To: Akhil Goyal, Kusztal, ArkadiuszX
Cc: dev, Zhang, Roy Fan, ray.kinsella, david.marchand
18/02/2022 07:11, Kusztal, ArkadiuszX:
> From: Akhil Goyal <gakhil@marvell.com>
> > Fix ABI warning.
> > Add libabigail.abiignore rule.
>
> I think what is worth noticing is the fact that after the "random 'k' patch" addition of
> [suppress_type]
> name = rte_crypto_asym_op
> this problem does not show up.
>
> But I think it is safer to send addition of
> [suppress_type]
> name = rte_crypto_rsa_priv_key_type
> anyway.
> Will send v2.
I don't understand why this rule was added,
and the comment says nothing about it:
"Ignore name change of rsa qt key type"
The ABI check is fine without above because of this existing exception:
; Ignore changes to rte_crypto_asym_op, asymmetric crypto API is experimental
[suppress_type]
name = rte_crypto_asym_op
So I will just drop the unjustified additional exception while pulling.
Next time, please make sure such ABI exception is approved by more maintainers.
^ permalink raw reply [relevance 4%]
* [PATCH v3 1/6] common/mlx5: consider local functions as internal
2022-02-24 23:25 3% ` [PATCH v3 " Michael Baum
@ 2022-02-24 23:25 4% ` Michael Baum
2022-02-25 18:01 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
The functions which are not explicitly marked as internal
were exported because the local catch-all rule was missing in the
version script.
After adding the missing rule, all local functions are hidden.
The function mlx5_get_device_guid is used in another library,
so it needs to be exported (as internal).
Because the local functions were exported as non-internal
in DPDK 21.11, any change in these functions would break the ABI.
An ABI exception is added for this library, considering that all
functions are either local or internal.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
devtools/libabigail.abignore | 4 ++++
drivers/common/mlx5/linux/mlx5_common_os.h | 1 +
drivers/common/mlx5/version.map | 3 +++
3 files changed, 8 insertions(+)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index ef0602975a..78d57497e6 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -20,3 +20,7 @@
; Ignore changes to rte_crypto_asym_op, asymmetric crypto API is experimental
[suppress_type]
name = rte_crypto_asym_op
+
+; Ignore changes in common mlx5 driver, should be all internal
+[suppress_file]
+ soname_regexp = ^librte_common_mlx5\.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index 83066e752d..edf356a30a 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -300,6 +300,7 @@ mlx5_set_context_attr(struct rte_device *dev, struct ibv_context *ctx);
* 0 if OFED doesn't support.
* >0 if success.
*/
+__rte_internal
int
mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len);
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 1c6153c576..cb20a7d893 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -80,6 +80,7 @@ INTERNAL {
mlx5_free;
+ mlx5_get_device_guid; # WINDOWS_NO_EXPORT
mlx5_get_ifname_sysfs; # WINDOWS_NO_EXPORT
mlx5_get_pci_addr; # WINDOWS_NO_EXPORT
@@ -149,4 +150,6 @@ INTERNAL {
mlx5_mp_req_mempool_reg;
mlx5_mr_mempool2mr_bh;
mlx5_mr_mempool_populate_cache;
+
+ local: *;
};
--
2.25.1
^ permalink raw reply [relevance 4%]
* [PATCH v3 0/6] mlx5: external RxQ support
2022-02-23 18:48 3% ` [PATCH v2 " Michael Baum
2022-02-23 18:48 4% ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
2022-02-24 8:38 0% ` [PATCH v2 0/6] mlx5: external RxQ support Matan Azrad
@ 2022-02-24 23:25 3% ` Michael Baum
2022-02-24 23:25 4% ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
2 siblings, 1 reply; 200+ results
From: Michael Baum @ 2022-02-24 23:25 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
These patches add support for external Rx queues.
An external queue is a queue that is managed by a process external to the PMD,
but uses the PMD process to generate its flow rules.
For the hardware to allow the DPDK process to set rules for it, the
process needs to use the same PD as the external process. In addition,
the indexes of the queues in hardware are represented by 32 bits compared
to the rte_flow indexes represented by 16 bits, so the processes need to
share some mapping between the indexes.
These patches allow the external process to provide devargs which enable
importing its context and PD, instead of preparing new ones. In addition,
an API is provided for mapping the indexes of the queues.
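For illustration, using the mapping API could look like the sketch below; the entry-point names and signatures are assumptions here, the authoritative ones being in rte_pmd_mlx5.h of this series:

    #include <rte_pmd_mlx5.h>

    static int
    map_external_rxq(uint16_t port_id)
    {
            uint16_t dpdk_idx = 1000; /* index usable in rte_flow queue/RSS actions */
            uint32_t hw_idx = 0x1234; /* 32-bit queue index as known to the hardware */
            int ret;

            ret = rte_pmd_mlx5_external_rx_queue_id_map(port_id, dpdk_idx, hw_idx);
            if (ret < 0)
                    return ret;
            /* ... create flow rules referencing dpdk_idx ... */
            /* drop the mapping once no flow rule references dpdk_idx */
            return rte_pmd_mlx5_external_rx_queue_id_unmap(port_id, dpdk_idx);
    }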
v1:
- initial commits.
v2:
- Rebase.
- Add ABI exception for common/mlx5 library.
- Correct DevX flag updating.
- Improve explanations in doc and comments.
- Remove testpmd part.
v3:
- Rebase.
- Fix compilation error.
- Avoid TOCTOU issue in external RxQ map/unmap functions.
- Add check if the queue is still referenced in the unmapping function.
- Improve guide explanations for the new devargs.
Michael Baum (6):
common/mlx5: consider local functions as internal
common/mlx5: glue device and PD importation
common/mlx5: add remote PD and CTX support
net/mlx5: optimize RxQ/TxQ control structure
net/mlx5: add external RxQ mapping API
net/mlx5: support queue/RSS action for external RxQ
devtools/libabigail.abignore | 4 +
doc/guides/nics/mlx5.rst | 1 +
doc/guides/platform/mlx5.rst | 37 ++-
doc/guides/rel_notes/release_22_03.rst | 1 +
drivers/common/mlx5/linux/meson.build | 2 +
drivers/common/mlx5/linux/mlx5_common_os.c | 196 ++++++++++++--
drivers/common/mlx5/linux/mlx5_common_os.h | 7 +-
drivers/common/mlx5/linux/mlx5_glue.c | 41 +++
drivers/common/mlx5/linux/mlx5_glue.h | 4 +
drivers/common/mlx5/mlx5_common.c | 84 ++++--
drivers/common/mlx5/mlx5_common.h | 23 +-
drivers/common/mlx5/version.map | 3 +
drivers/common/mlx5/windows/mlx5_common_os.c | 37 ++-
drivers/common/mlx5/windows/mlx5_common_os.h | 1 -
drivers/net/mlx5/linux/mlx5_os.c | 17 ++
drivers/net/mlx5/mlx5.c | 5 +
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_defs.h | 3 +
drivers/net/mlx5/mlx5_devx.c | 52 ++--
drivers/net/mlx5/mlx5_ethdev.c | 18 +-
drivers/net/mlx5/mlx5_flow.c | 43 +--
drivers/net/mlx5/mlx5_flow_dv.c | 14 +-
drivers/net/mlx5/mlx5_rx.h | 49 +++-
drivers/net/mlx5/mlx5_rxq.c | 266 +++++++++++++++++--
drivers/net/mlx5/mlx5_trigger.c | 36 +--
drivers/net/mlx5/mlx5_tx.h | 7 +-
drivers/net/mlx5/mlx5_txq.c | 14 +-
drivers/net/mlx5/rte_pmd_mlx5.h | 50 +++-
drivers/net/mlx5/version.map | 3 +
29 files changed, 838 insertions(+), 181 deletions(-)
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v18 8/8] eal: implement functions for mutex management
2022-02-24 17:29 0% ` Ananyev, Konstantin
@ 2022-02-24 17:44 0% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2022-02-24 17:44 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: Dmitry Kozlyuk, Narcisa Ana Maria Vasile, Richardson, Bruce,
david.marchand, dev, dmitrym, khot, navasile, ocardona, Kadam,
Pallavi, roretzla, talshn, thomas
On Thu, 24 Feb 2022 17:29:03 +0000
"Ananyev, Konstantin" <konstantin.ananyev@intel.com> wrote:
> Hi Dmitry,
>
> > 2022-02-21 00:56 (UTC+0300), Dmitry Kozlyuk:
> > > 2022-02-09 13:57 (UTC+0000), Ananyev, Konstantin:
> > > > > > Actually, please scrap that comment.
> > > > > > Obviously it wouldn't work for static variables,
> > > > > > and doesn't make much sense.
> > > > > > Though few thoughts remain:
> > > > > > for posix we probably don't need an indirection and
> > > > > > rte_thread_mutex can be just typedef of pthread_mutex_t.
> > > > > > also for posix we don't need RTE_INIT constructor for each
> > > > > > static mutex initialization.
> > > > > > Something like:
> > > > > > #define RTE_STATIC_INITIALIZED_MUTEX(mx) \
> > > > > > rte_thread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER
> > > > > > should work, I think.
> > > > > > Konstantin
> > > > >
> > > > > Thank you for reviewing, Konstantin!
> > > > > Some context for the current representation of mutex
> > > > > can be found in v9, patch 7/10 of this patchset.
> > > > >
> > > > > Originally we've typedef'ed the pthread_mutex_t on POSIX, just
> > > > > like you are suggesting here.
> > > > > However, on Windows there's no static initializer similar to the pthread
> > > > > one. Still, we want ABI compatibility and same thread behavior between
> > > > > platforms. The most elegant solution we found was the current representation,
> > > > > as suggested by Dmitry K.
> > > >
> > > > Yes, I agree it is a problem with Windows for the static initializer.
> > > > But why can't we have different struct typedefs for the mutex
> > > > on POSIX and Windows platforms?
> > >
> > > Yes, I agree that having different mutex types on *nix and Windows
> > > is a great idea. It will avoid ABI change for *nix
> > > and will guarantee no performance impact.
> > >
> > > Maybe wrap pthread_mutex_t into a struct to have a distinct type
> > > and to force using only DPDK API with it?
> > >
> > > [...]
> > > > Yes, on Windows rte_thread_mutex still wouldn't work for MP,
> > > > but that's the same as with current design.
> > >
> > > MP support is not planned for Windows and it is unknown if it ever will be,
> > > so it's not an issue.
> > > Data location is.
> > > The reason rte_thread_mutex_t is not a typedef of CRITICAL_SECTION
> > > (akin to pthread_mutex_t) is to avoid including Windows headers
> > > into DPDK public headers, because Windows headers can break user code
> > > by some macros they define.
> > > Maybe instead of a pointer it could be an opaque array:
> > >
> > > #define RTE_PTHREAD_MUTEX_SIZE 40
> > >
> > > struct rte_pthread_mutex_t {
> > > uint8_t opaque[RTE_PTHREAD_MUTEX_SIZE];
> > > };
> > >
> > > where RTE_PTHREAD_MUTEX_SIZE is actually sizeof(CRITICAL_SECTION).
> > > Win32 ABI is remarkably stable, I don't think this constant will ever change,
> > > it would break all the Windows user space.
> > > Naty, DmitryM, Tyler, what do you think?
> >
> > Conclusion from offline call: yes, this is OK to do so.
> >
> > However, DmitryM suggested using Slim Reader-Writer lock (SRW):
> > https://docs.microsoft.com/en-us/windows/win32/sync/slim-reader-writer--srw--locks
> > instead of CRITICAL_SECTION.
> > It seems to be a much better option:
> >
> > * sizeof(SRWLOCK) == 8 (technically "size of a pointer"),
> > same as sizeof(pthread_mutex_t) on a typical Linux.
> > Layout of data structures containing rte_thread_mutex_t
> > can be the same on Windows and Unix,
> > which simplifies design and promises smaller performance differences.
> >
> > * Can be taken by multiple readers and one writer,
> > which is semantically similar to pthread_mutex_t
>
> Not sure I understand you here:
> pthread_mutex provides only mutually-exclusive lock semantics.
> For RW lock there exists: pthread_rwlock_t.
> Of course you can use an rwlock for exclusive locking too -
> if all callers use only writer locks - but usually there is no point in doing that:
> mutexes are simpler and faster.
> That's for POSIX-like systems; I don't know much about the Windows environment :)
>
> > (CRITICAL_SECTION can only be taken by a single thread).
> >
> > Technically it can be a "typedef uintptr_t" or a structure wrapping it.
>
> Again, I can't say much about Windows, but pthread_mutex_t
> can be (and is) bigger than 8 bytes.
>
>
There seems to be some confusion here:
pthread_mutex puts the thread to sleep if contended, and on Linux it is built on the futex system call.
pthread_rwlock is the reader/writer version of it.
DPDK has primitives for multiple types of locks: spinlock, rwlock, ticketlock, pflock, etc.
These are built using atomic primitives (no syscall), are platform independent, and spin if contended.
Not sure about Windows, but it looks like slim RW locks came from Windows NT and are an implementation
of the same kind of spinning lock DPDK already has.
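For reference, a minimal sketch with one of those primitives, using only the long-standing rte_spinlock API; the lock busy-waits on atomics and never enters the kernel:

    #include <stdint.h>
    #include <rte_spinlock.h>

    static rte_spinlock_t lock = RTE_SPINLOCK_INITIALIZER;
    static uint64_t counter;

    static void
    update_counter(void)
    {
            rte_spinlock_lock(&lock); /* spins if contended, no syscall */
            counter++;
            rte_spinlock_unlock(&lock);
    }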
^ permalink raw reply [relevance 0%]
* RE: [PATCH v18 8/8] eal: implement functions for mutex management
2022-02-23 17:08 0% ` Dmitry Kozlyuk
@ 2022-02-24 17:29 0% ` Ananyev, Konstantin
2022-02-24 17:44 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2022-02-24 17:29 UTC (permalink / raw)
To: Dmitry Kozlyuk
Cc: Narcisa Ana Maria Vasile, Richardson, Bruce, david.marchand, dev,
dmitrym, khot, navasile, ocardona, Kadam, Pallavi, roretzla,
talshn, thomas
Hi Dmitry,
> 2022-02-21 00:56 (UTC+0300), Dmitry Kozlyuk:
> > 2022-02-09 13:57 (UTC+0000), Ananyev, Konstantin:
> > > > > Actually, please scrap that comment.
> > > > > Obviously it wouldn't work for static variables,
> > > > > and doesn't make much sense.
> > > > > Though few thoughts remain:
> > > > > for posix we probably don't need an indirection and
> > > > > rte_thread_mutex can be just typedef of pthread_mutex_t.
> > > > > also for posix we don't need RTE_INIT constructor for each
> > > > > static mutex initialization.
> > > > > Something like:
> > > > > #define RTE_STATIC_INITIALIZED_MUTEX(mx) \
> > > > > rte_thread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER
> > > > > should work, I think.
> > > > > Konstantin
> > > >
> > > > Thank you for reviewing, Konstantin!
> > > > Some context for the current representation of mutex
> > > > can be found in v9, patch 7/10 of this patchset.
> > > >
> > > > Originally we've typedef'ed the pthread_mutex_t on POSIX, just
> > > > like you are suggesting here.
> > > > However, on Windows there's no static initializer similar to the pthread
> > > > one. Still, we want ABI compatibility and same thread behavior between
> > > > platforms. The most elegant solution we found was the current representation,
> > > > as suggested by Dmitry K.
> > >
> > > Yes, I agree it is a problem with Windows for the static initializer.
> > > But why can't we have different struct typedefs for the mutex
> > > on POSIX and Windows platforms?
> >
> > Yes, I agree that having different mutex types on *nix and Windows
> > is a great idea. It will avoid ABI change for *nix
> > and will guarantee no performance impact.
> >
> > Maybe wrap pthread_mutex_t into a struct to have a distinct type
> > and to force using only DPDK API with it?
> >
> > [...]
> > > Yes, on Windows rte_thread_mutex still wouldn't work for MP,
> > > but that's the same as with current design.
> >
> > MP support is not planned for Windows and it is unknown if it ever will be,
> > so it's not an issue.
> > Data location is.
> > The reason rte_thread_mutex_t is not a typedef of CRITICAL_SECTION
> > (akin to pthread_mutex_t) is to avoid including Windows headers
> > into DPDK public headers, because Windows headers can break user code
> > by some macros they define.
> > Maybe instead of a pointer it could be an opaque array:
> >
> > #define RTE_PTHREAD_MUTEX_SIZE 40
> >
> > struct rte_pthread_mutex_t {
> > uint8_t opaque[RTE_PTHREAD_MUTEX_SIZE];
> > };
> >
> > where RTE_PTHREAD_MUTEX_SIZE is actually sizeof(CRITICAL_SECTION).
> > Win32 ABI is remarkably stable, I don't think this constant will ever change,
> > it would break all the Windows user space.
> > Naty, DmitryM, Tyler, what do you think?
>
> Conclusion from offline call: yes, this is OK to do so.
>
> However, DmitryM suggested using Slim Reader-Writer lock (SRW):
> https://docs.microsoft.com/en-us/windows/win32/sync/slim-reader-writer--srw--locks
> instead of CRITICAL_SECTION.
> It seems to be a much better option:
>
> * sizeof(SRWLOCK) == 8 (technically "size of a pointer"),
> same as sizeof(pthread_mutex_t) on a typical Linux.
> Layout of data structures containing rte_thread_mutex_t
> can be the same on Windows and Unix,
> which simplifies design and promises smaller performance differences.
>
> * Can be taken by multiple readers and one writer,
> which is semantically similar to pthread_mutex_t
Not sure I understand you here:
pthread_mutex provides only mutually-exclusive lock semantics.
For RW lock there exists: pthread_rwlock_t.
Of course you can use an rwlock for exclusive locking too -
if all callers use only writer locks - but usually there is no point in doing that:
mutexes are simpler and faster.
That's for POSIX-like systems; I don't know much about the Windows environment :)
> (CRITICAL_SECTION can only be taken by a single thread).
>
> Technically it can be a "typedef uintptr_t" or a structure wrapping it.
Again, I can't say much about Windows, but pthread_mutex_t
can be (and is) bigger than 8 bytes.
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 0/6] mlx5: external RxQ support
2022-02-23 18:48 3% ` [PATCH v2 " Michael Baum
2022-02-23 18:48 4% ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
@ 2022-02-24 8:38 0% ` Matan Azrad
2022-02-24 23:25 3% ` [PATCH v3 " Michael Baum
2 siblings, 0 replies; 200+ results
From: Matan Azrad @ 2022-02-24 8:38 UTC (permalink / raw)
To: Michael Baum, dev; +Cc: Raslan Darawsheh, Slava Ovsiienko
From: Michael Baum
> These patches add support for external Rx queues.
> An external queue is a queue that is managed by a process external to the PMD, but
> uses the PMD process to generate its flow rules.
>
> For the hardware to allow the DPDK process to set rules for it, the process needs
> to use the same PD as the external process. In addition, the indexes of the
> queues in hardware are represented by 32 bits compared to the rte_flow indexes
> represented by 16 bits, so the processes need to share some mapping between
> the indexes.
>
> These patches allow the external process to provide devargs which enable
> importing its context and PD, instead of preparing new ones. In addition, an API is
> provided for mapping the indexes of the queues.
>
> v2:
> - Rebase.
> - Add ABI exception for common/mlx5 library.
> - Correct DevX flag updating.
> - Improve explanations in doc and comments.
> - Remove testpmd part.
>
Series-acked-by: Matan Azrad <matan@nvidia.com>
> Michael Baum (6):
> common/mlx5: consider local functions as internal
> common/mlx5: glue device and PD importation
> common/mlx5: add remote PD and CTX support
> net/mlx5: optimize RxQ/TxQ control structure
> net/mlx5: add external RxQ mapping API
> net/mlx5: support queue/RSS action for external RxQ
>
> devtools/libabigail.abignore | 4 +
> doc/guides/nics/mlx5.rst | 1 +
> doc/guides/platform/mlx5.rst | 37 ++-
> doc/guides/rel_notes/release_22_03.rst | 1 +
> drivers/common/mlx5/linux/meson.build | 2 +
> drivers/common/mlx5/linux/mlx5_common_os.c | 196 ++++++++++++--
> drivers/common/mlx5/linux/mlx5_common_os.h | 7 +-
> drivers/common/mlx5/linux/mlx5_glue.c | 41 +++
> drivers/common/mlx5/linux/mlx5_glue.h | 4 +
> drivers/common/mlx5/mlx5_common.c | 64 ++++-
> drivers/common/mlx5/mlx5_common.h | 23 +-
> drivers/common/mlx5/version.map | 3 +
> drivers/common/mlx5/windows/mlx5_common_os.c | 37 ++-
> drivers/common/mlx5/windows/mlx5_common_os.h | 1 -
> drivers/net/mlx5/linux/mlx5_os.c | 18 ++
> drivers/net/mlx5/mlx5.c | 6 +
> drivers/net/mlx5/mlx5.h | 1 +
> drivers/net/mlx5/mlx5_defs.h | 3 +
> drivers/net/mlx5/mlx5_devx.c | 52 ++--
> drivers/net/mlx5/mlx5_ethdev.c | 18 +-
> drivers/net/mlx5/mlx5_flow.c | 43 ++--
> drivers/net/mlx5/mlx5_flow_dv.c | 14 +-
> drivers/net/mlx5/mlx5_rx.h | 49 +++-
> drivers/net/mlx5/mlx5_rxq.c | 258 +++++++++++++++++--
> drivers/net/mlx5/mlx5_trigger.c | 36 +--
> drivers/net/mlx5/mlx5_tx.h | 7 +-
> drivers/net/mlx5/mlx5_txq.c | 14 +-
> drivers/net/mlx5/rte_pmd_mlx5.h | 50 +++-
> drivers/net/mlx5/version.map | 3 +
> 29 files changed, 821 insertions(+), 172 deletions(-)
>
> --
> 2.25.1
^ permalink raw reply [relevance 0%]
* [PATCH v2 1/6] common/mlx5: consider local functions as internal
2022-02-23 18:48 3% ` [PATCH v2 " Michael Baum
@ 2022-02-23 18:48 4% ` Michael Baum
2022-02-24 8:38 0% ` [PATCH v2 0/6] mlx5: external RxQ support Matan Azrad
2022-02-24 23:25 3% ` [PATCH v3 " Michael Baum
2 siblings, 0 replies; 200+ results
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
The functions which are not explicitly marked as internal
were exported because the local catch-all rule was missing in the
version script.
After adding the missing rule, all local functions are hidden.
The function mlx5_get_device_guid is used in another library,
so it needs to be exported (as internal).
Because the local functions were exported as non-internal
in DPDK 21.11, any change in these functions would break the ABI.
An ABI exception is added for this library, considering that all
functions are either local or internal.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
---
devtools/libabigail.abignore | 4 ++++
drivers/common/mlx5/linux/mlx5_common_os.h | 1 +
drivers/common/mlx5/version.map | 3 +++
3 files changed, 8 insertions(+)
diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
index ef0602975a..78d57497e6 100644
--- a/devtools/libabigail.abignore
+++ b/devtools/libabigail.abignore
@@ -20,3 +20,7 @@
; Ignore changes to rte_crypto_asym_op, asymmetric crypto API is experimental
[suppress_type]
name = rte_crypto_asym_op
+
+; Ignore changes in common mlx5 driver, should be all internal
+[suppress_file]
+ soname_regexp = ^librte_common_mlx5\.
\ No newline at end of file
diff --git a/drivers/common/mlx5/linux/mlx5_common_os.h b/drivers/common/mlx5/linux/mlx5_common_os.h
index 83066e752d..edf356a30a 100644
--- a/drivers/common/mlx5/linux/mlx5_common_os.h
+++ b/drivers/common/mlx5/linux/mlx5_common_os.h
@@ -300,6 +300,7 @@ mlx5_set_context_attr(struct rte_device *dev, struct ibv_context *ctx);
* 0 if OFED doesn't support.
* >0 if success.
*/
+__rte_internal
int
mlx5_get_device_guid(const struct rte_pci_addr *dev, uint8_t *guid, size_t len);
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index 1c6153c576..cb20a7d893 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -80,6 +80,7 @@ INTERNAL {
mlx5_free;
+ mlx5_get_device_guid; # WINDOWS_NO_EXPORT
mlx5_get_ifname_sysfs; # WINDOWS_NO_EXPORT
mlx5_get_pci_addr; # WINDOWS_NO_EXPORT
@@ -149,4 +150,6 @@ INTERNAL {
mlx5_mp_req_mempool_reg;
mlx5_mr_mempool2mr_bh;
mlx5_mr_mempool_populate_cache;
+
+ local: *;
};
--
2.25.1
^ permalink raw reply [relevance 4%]
* [PATCH v2 0/6] mlx5: external RxQ support
@ 2022-02-23 18:48 3% ` Michael Baum
2022-02-23 18:48 4% ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
` (2 more replies)
0 siblings, 3 replies; 200+ results
From: Michael Baum @ 2022-02-23 18:48 UTC (permalink / raw)
To: dev; +Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
These patches add support for external Rx queues.
An external queue is a queue that is managed by a process external to the PMD,
but uses the PMD process to generate its flow rules.
For the hardware to allow the DPDK process to set rules for it, the
process needs to use the same PD as the external process. In addition,
the indexes of the queues in hardware are represented by 32 bits compared
to the rte_flow indexes represented by 16 bits, so the processes need to
share some mapping between the indexes.
These patches allow the external process to provide devargs which enable
importing its context and PD, instead of preparing new ones. In addition,
an API is provided for mapping the indexes of the queues.
v2:
- Rebase.
- Add ABI exception for common/mlx5 library.
- Correct DevX flag updating.
- Improve explanations in doc and comments.
- Remove testpmd part.
Michael Baum (6):
common/mlx5: consider local functions as internal
common/mlx5: glue device and PD importation
common/mlx5: add remote PD and CTX support
net/mlx5: optimize RxQ/TxQ control structure
net/mlx5: add external RxQ mapping API
net/mlx5: support queue/RSS action for external RxQ
devtools/libabigail.abignore | 4 +
doc/guides/nics/mlx5.rst | 1 +
doc/guides/platform/mlx5.rst | 37 ++-
doc/guides/rel_notes/release_22_03.rst | 1 +
drivers/common/mlx5/linux/meson.build | 2 +
drivers/common/mlx5/linux/mlx5_common_os.c | 196 ++++++++++++--
drivers/common/mlx5/linux/mlx5_common_os.h | 7 +-
drivers/common/mlx5/linux/mlx5_glue.c | 41 +++
drivers/common/mlx5/linux/mlx5_glue.h | 4 +
drivers/common/mlx5/mlx5_common.c | 64 ++++-
drivers/common/mlx5/mlx5_common.h | 23 +-
drivers/common/mlx5/version.map | 3 +
drivers/common/mlx5/windows/mlx5_common_os.c | 37 ++-
drivers/common/mlx5/windows/mlx5_common_os.h | 1 -
drivers/net/mlx5/linux/mlx5_os.c | 18 ++
drivers/net/mlx5/mlx5.c | 6 +
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_defs.h | 3 +
drivers/net/mlx5/mlx5_devx.c | 52 ++--
drivers/net/mlx5/mlx5_ethdev.c | 18 +-
drivers/net/mlx5/mlx5_flow.c | 43 ++--
drivers/net/mlx5/mlx5_flow_dv.c | 14 +-
drivers/net/mlx5/mlx5_rx.h | 49 +++-
drivers/net/mlx5/mlx5_rxq.c | 258 +++++++++++++++++--
drivers/net/mlx5/mlx5_trigger.c | 36 +--
drivers/net/mlx5/mlx5_tx.h | 7 +-
drivers/net/mlx5/mlx5_txq.c | 14 +-
drivers/net/mlx5/rte_pmd_mlx5.h | 50 +++-
drivers/net/mlx5/version.map | 3 +
29 files changed, 821 insertions(+), 172 deletions(-)
--
2.25.1
^ permalink raw reply [relevance 3%]
* Re: [PATCH v18 8/8] eal: implement functions for mutex management
2022-02-20 21:56 4% ` Dmitry Kozlyuk
@ 2022-02-23 17:08 0% ` Dmitry Kozlyuk
2022-02-24 17:29 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2022-02-23 17:08 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: Narcisa Ana Maria Vasile, Richardson, Bruce, david.marchand, dev,
dmitrym, khot, navasile, ocardona, Kadam, Pallavi, roretzla,
talshn, thomas
2022-02-21 00:56 (UTC+0300), Dmitry Kozlyuk:
> 2022-02-09 13:57 (UTC+0000), Ananyev, Konstantin:
> > > > Actually, please scrap that comment.
> > > > Obviously it wouldn't work for static variables,
> > > > and doesn't make much sense.
> > > > Though few thoughts remain:
> > > > for posix we probably don't need an indirection and
> > > > rte_thread_mutex can be just typedef of pthread_mutex_t.
> > > > also for posix we don't need RTE_INIT constructor for each
> > > > static mutex initialization.
> > > > Something like:
> > > > #define RTE_STATIC_INITIALIZED_MUTEX(mx) \
> > > > rte_thread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER
> > > > should work, I think.
> > > > Konstantin
> > >
> > > Thank you for reviewing, Konstantin!
> > > Some context for the current representation of mutex
> > > can be found in v9, patch 7/10 of this patchset.
> > >
> > > Originally we've typedef'ed the pthread_mutex_t on POSIX, just
> > > like you are suggesting here.
> > > However, on Windows there's no static initializer similar to the pthread
> > > one. Still, we want ABI compatibility and same thread behavior between
> > > platforms. The most elegant solution we found was the current representation,
> > > as suggested by Dmitry K.
> >
> > Yes, I agree it is a problem with Windows for the static initializer.
> > But why can't we have different struct typedefs for the mutex
> > on POSIX and Windows platforms?
>
> Yes, I agree that having different mutex types on *nix and Windows
> is a great idea. It will avoid ABI change for *nix
> and will guarantee no performance impact.
>
> Maybe wrap pthread_mutex_t into a struct to have a distinct type
> and to force using only DPDK API with it?
>
> [...]
> > Yes, on Windows rte_thread_mutex still wouldn't work for MP,
> > but that's the same as with current design.
>
> MP support is not planned for Windows and it is unknown if it ever will be,
> so it's not an issue.
> Data location is.
> The reason rte_thread_mutex_t is not a typedef of CRITICAL_SECTION
> (akin to pthread_mutex_t) is to avoid including Windows headers
> into DPDK public headers, because Windows headers can break user code
> by some macros they define.
> Maybe instead of a pointer it could be an opaque array:
>
> #define RTE_PTHREAD_MUTEX_SIZE 40
>
> struct rte_pthread_mutex_t {
> uint8_t opaque[RTE_PTHREAD_MUTEX_SIZE];
> };
>
> where RTE_PTHREAD_MUTEX_SIZE is actually sizeof(CRITICAL_SECTION).
> Win32 ABI is remarkably stable, I don't think this constant will ever change,
> it would break all the Windows user space.
> Naty, DmitryM, Tyler, what do you think?
Conclusion from offline call: yes, this is OK to do so.
However, DmitryM suggested using Slim Reader-Writer lock (SRW):
https://docs.microsoft.com/en-us/windows/win32/sync/slim-reader-writer--srw--locks
instead of CRITICAL_SECTION.
It seems to be a much better option:
* sizeof(SRWLOCK) == 8 (technically "size of a pointer"),
same as sizeof(pthread_mutex_t) on a typical Linux.
Layout of data structures containing rte_thread_mutex_t
can be the same on Windows and Unix,
which simplifies design and promises smaller performance differences.
* Can be taken by multiple readers and one writer,
which is semantically similar to pthread_mutex_t
(CRITICAL_SECTION can only be taken by a single thread).
Technically it can be a "typedef uintptr_t" or a structure wrapping it.
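A rough sketch of how the platform split could look; all names here are illustrative, not the final API. On Windows, SRWLOCK is pointer-sized and SRWLOCK_INIT is a zeroed value, so the public header can carry a zero-initialized uintptr_t and avoid including <windows.h>:

    #include <stdint.h>

    #ifdef RTE_EXEC_ENV_WINDOWS
    /* The implementation would static_assert that
     * sizeof(uintptr_t) == sizeof(SRWLOCK). */
    typedef struct rte_thread_mutex {
            uintptr_t opaque; /* holds an SRWLOCK, valid when zeroed */
    } rte_thread_mutex_t;
    #define RTE_STATIC_INITIALIZED_MUTEX(mx) rte_thread_mutex_t mx = {0}
    #else
    #include <pthread.h>
    /* Wrapped in a struct to force going through the DPDK API. */
    typedef struct rte_thread_mutex {
            pthread_mutex_t mutex;
    } rte_thread_mutex_t;
    #define RTE_STATIC_INITIALIZED_MUTEX(mx) \
            rte_thread_mutex_t mx = {PTHREAD_MUTEX_INITIALIZER}
    #endif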
^ permalink raw reply [relevance 0%]
* Re: DPDK LTS release
@ 2022-02-23 16:57 3% ` Kevin Traynor
0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2022-02-23 16:57 UTC (permalink / raw)
To: Kamaraj P; +Cc: dev, hpai, Kamaraj P (kamp)
On 23/02/2022 16:16, Kamaraj P wrote:
> Thanks Kevin.
>
> Apart from the release notes to identify the changes from DPDK 19.11 to
> 21.11, are there any major design changes in DPDK? For example, memory
> management, mempool allocation behavior, etc.
> Please share any pointers if there are.
>
Notably, the make-based build system was removed in 20.11, but you'll
need to check the release notes [0] to see what else impacts you. In
particular, the 20.11 and 21.11 releases were ABI-breaking.
[0] http://doc.dpdk.org/guides/rel_notes/index.html
> Thanks,
> Kamaraj
>
> On Mon, Feb 21, 2022 at 3:53 PM Kevin Traynor <ktraynor@redhat.com> wrote:
>
>> On 21/02/2022 06:47, Kamaraj P wrote:
>>> Hi Team,
>>>
>>
>> Hi,
>>
>>> We are planning to upgrade the DPDK stable LTS version from DPDK19.11.
>>> Could you please suggest what would be the stable LTS version of DPDK ?
>>>
>>
>> If you are looking for an LTS version that will be maintained for the
>> longest time in the future, then you can jump to the latest DPDK LTS
>> series 21.11.
>>
>> It will be maintained with backported bugfixes until at least November
>> 2023, but we are trialing longer support on some LTS releases so there
>> is a possibility it might be extended.
>>
>> If you take a look at http://core.dpdk.org/roadmap/#stable you can see
>> the current roadmap of how long each LTS version is maintained for.
>>
>> thanks,
>> Kevin.
>>
>>> Thanks,
>>> Kamaraj
>>>
>>
>>
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH] raw/cnxk_gpio: fix DPDK version in a map file
@ 2022-02-23 16:28 3% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-02-23 16:28 UTC (permalink / raw)
To: Tomasz Duszynski; +Cc: dev, jerinj, Ferruh Yigit
23/02/2022 14:50, Ferruh Yigit:
> On 2/23/2022 1:32 PM, Tomasz Duszynski wrote:
> > The PMD got merged during the 22.03 merge window and the version number
> > in the map file should reflect that.
> >
> > Fixes: d0b8a4e19131 ("raw/cnxk_gpio: add GPIO driver skeleton")
> >
> > Reported-by: Ferruh Yigit <ferruh.yigit@intel.com>
> > Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
>
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Applied with title "fix ABI version"
^ permalink raw reply [relevance 3%]
* RE: [PATCH v8 02/11] ethdev: add flow item/action templates
@ 2022-02-21 15:14 3% ` Alexander Kozyrev
0 siblings, 0 replies; 200+ results
From: Alexander Kozyrev @ 2022-02-21 15:14 UTC (permalink / raw)
To: Ori Kam, Andrew Rybchenko, dev
Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
ivan.malov, ferruh.yigit, mohammad.abdul.awal, qi.z.zhang,
jerinj, ajit.khaparde, bruce.richardson
On Monday, February 21, 2022 8:12 Ori Kam <orika@nvidia.com> wrote:
> > See notes about order of checks in previous patch review notes.
I'll fix the order of checks in all patches; thank you for the suggestion.
> > Would it be useful to mention that at least one direction
> > bit must be set? Otherwise the request does not make sense.
> >
> Agreed, one direction must be set.
Will add a comment about the mandatory setting of the direction.
> > > + */
> > > +__extension__
> > > +struct rte_flow_pattern_template_attr {
> > > + /**
> > > + * Relaxed matching policy.
> > > + * - PMD may match only on items with mask member set and skip
> > > + * matching on protocol layers specified without any masks.
> > > + * - If not set, PMD will match on protocol layers
> > > + * specified without any masks as well.
> > > + * - Packet data must be stacked in the same order as the
> > > + * protocol layers to match inside packets, starting from the lowest.
> > > + */
> > > + uint32_t relaxed_matching:1;
> >
> > I should have noticed this earlier, but it looks like a new feature
> > which sounds unrelated to templates. If so, it makes an asymmetry
> > between sync and async flow rule capabilities.
> > Am I missing something?
> >
> > Anyway, the feature looks hidden in the patch.
> >
> No, this is not a hidden feature.
> In the current API the application must specify all the preceding items.
> For example, say the application wants to match on the UDP source port.
> The rte_flow pattern will look something like eth / ipv4 / udp sport = xxx.
> When the PMD gets this pattern it must enforce that after the eth
> there will be IPv4 and then UDP, and then add the match for the
> sport.
> This means that the PMD adds extra matching.
> If the application has already validated that there is UDP in the packet
> in group 0 and then jumps to group 1, it can save the HW that extra matching
> by enabling this bit, which means that the HW should only match on explicitly
> masked fields.
This is a new capability that only exists for templates.
We can think about adding it to the old rte_flow_create() API when
we are allowed to break ABI again.
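For illustration, a minimal sketch of how an application might set the bit
with the API proposed in this series (port setup omitted; the pattern and
port_id here are hypothetical):

    struct rte_flow_pattern_template_attr attr = {
            .relaxed_matching = 1, /* match only explicitly masked fields */
            .ingress = 1,          /* at least one direction bit must be set */
    };

    /* eth / ipv4 / udp sport: with relaxed matching the PMD does not add
     * extra matches for the unmasked eth and ipv4 layers. */
    struct rte_flow_item_udp udp_mask = {
            .hdr.src_port = RTE_BE16(0xffff),
    };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_UDP, .mask = &udp_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };

    struct rte_flow_error error;
    struct rte_flow_pattern_template *tmpl =
            rte_flow_pattern_template_create(port_id, &attr, pattern, &error);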
^ permalink raw reply [relevance 3%]
* Re: [PATCH v18 8/8] eal: implement functions for mutex management
2022-02-09 13:57 0% ` Ananyev, Konstantin
@ 2022-02-20 21:56 4% ` Dmitry Kozlyuk
2022-02-23 17:08 0% ` Dmitry Kozlyuk
0 siblings, 1 reply; 200+ results
From: Dmitry Kozlyuk @ 2022-02-20 21:56 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: Narcisa Ana Maria Vasile, Richardson, Bruce, david.marchand, dev,
dmitrym, khot, navasile, ocardona, Kadam, Pallavi, roretzla,
talshn, thomas
2022-02-09 13:57 (UTC+0000), Ananyev, Konstantin:
> > > Actually, please scrap that comment.
> > > Obviously it wouldn't work for static variables,
> > > and doesn't make much sense.
> > > Though a few thoughts remain:
> > > for posix we probably don't need an indirection and
> > > rte_thread_mutex can just be a typedef of pthread_mutex_t.
> > > Also, for posix we don't need an RTE_INIT constructor for each
> > > static mutex initialization.
> > > Something like:
> > > #define RTE_STATIC_INITIALIZED_MUTEX(mx) \
> > >     rte_thread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER
> > > should work, I think.
> > > Konstantin
> >
> > Thank you for reviewing, Konstantin!
> > Some context for the current representation of mutex
> > can be found in v9, patch 7/10 of this patchset.
> >
> > Originally we typedef'ed pthread_mutex_t on POSIX, just
> > like you are suggesting here.
> > However, on Windows there's no static initializer similar to the pthread
> > one. Still, we want ABI compatibility and the same threading behavior across
> > platforms. The most elegant solution we found was the current representation,
> > as suggested by Dmitry K.
>
> Yes, I agree it is a problem with Windows for the static initializer.
> But why can't we have different struct typedefs for the mutex
> on posix and windows platforms?
Yes, I agree that having different mutex types on *nix and Windows
is a great idea. It will avoid an ABI change for *nix
and will guarantee no performance impact.
Maybe wrap pthread_mutex_t into a struct to have a distinct type
and to force using only DPDK API with it?
[...]
> Yes, on Windows rte_thread_mutex still wouldn't work for MP,
> but that's the same as with current design.
MP support is not planned for Windows and it is unknown if it ever will be,
so it's not an issue.
Data location is.
The reason rte_thread_mutex_t is not a typedef of CRITICAL_SECTION
(akin to pthread_mutex_t) is to avoid including Windows headers
into DPDK public headers, because Windows headers can break user code
by some macros they define.
Maybe instead of a pointer it could be an opaque array:
#define RTE_PTHREAD_MUTEX_SIZE 40
struct rte_pthread_mutex_t {
    uint8_t opaque[RTE_PTHREAD_MUTEX_SIZE];
};
where RTE_PTHREAD_MUTEX_SIZE is actually sizeof(CRITICAL_SECTION).
The Win32 ABI is remarkably stable; I don't think this constant will ever
change, as that would break all Windows user space.
Naty, DmitryM, Tyler, what do you think?
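For the record, the size assumption could at least be verified at build time
in the Windows-only implementation file, where windows.h is allowed
(a sketch, not a final patch):

    #include <windows.h>

    /* Fail the build if the Win32 ABI ever changes this size. */
    _Static_assert(sizeof(CRITICAL_SECTION) <= RTE_PTHREAD_MUTEX_SIZE,
            "opaque storage too small for CRITICAL_SECTION");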
^ permalink raw reply [relevance 4%]
* [PATCH v3 0/8] yet more unnecessary NULL checks
@ 2022-02-20 18:21 3% ` Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2022-02-20 18:21 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
Thomas suggested there are some other functions that could
use the nullfree cleanup; this covers the rest of the story.
Note: this does not change the existing API/ABI; there are still
some outliers that don't use the convention, but fixing those
will have to wait until the next LTS.
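The convention in question is that DPDK free functions, like free(3),
accept NULL and do nothing, so callers need no guard. A minimal
illustration with hypothetical caller code:

    /* Before: redundant check. */
    if (ctx != NULL)
            rte_free(ctx);

    /* After: rte_free() is documented to be a no-op on NULL. */
    rte_free(ctx);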
v3 - fix another typo and add more functions
v2 - fix spelling typo and add functions
Stephen Hemminger (8):
cocci/nullfree: add more functions
acl: remove unnecessary null checks
lpm: remove unnecessary NULL checks
lib: document existing free functions
test: remove unnecessary NULL checks before free
fips_validation: remove unnecessary NULL check
event/sw: remove unnecessary NULL check
pipeline: remove unnecessary checks for NULL pointer before free
app/test/test_acl.c | 12 +-
app/test/test_cmdline_lib.c | 3 +-
app/test/test_cryptodev.c | 9 +-
app/test/test_cryptodev_asym.c | 30 ++---
app/test/test_cryptodev_blockcipher.c | 3 +-
app/test/test_func_reentrancy.c | 6 +-
app/test/test_hash.c | 3 +-
devtools/cocci/nullfree.cocci | 108 +++++++++++++++++-
drivers/event/sw/sw_evdev.c | 6 +-
examples/fips_validation/fips_dev_self_test.c | 3 +-
lib/acl/rte_acl.h | 1 +
lib/bitratestats/rte_bitrate.h | 1 +
lib/compressdev/rte_comp.h | 1 +
lib/cryptodev/rte_crypto.h | 1 +
lib/eal/include/rte_interrupts.h | 4 +-
lib/efd/rte_efd.h | 1 +
lib/eventdev/rte_event_ring.h | 1 +
lib/fib/rte_fib.h | 1 +
lib/fib/rte_fib6.h | 1 +
lib/lpm/rte_lpm.h | 1 +
lib/lpm/rte_lpm6.h | 1 +
lib/member/rte_member.h | 1 +
lib/pipeline/rte_port_in_action.h | 6 +-
lib/pipeline/rte_swx_ctl.c | 3 +-
lib/pipeline/rte_swx_ctl.h | 1 +
lib/pipeline/rte_swx_pipeline.c | 6 +-
lib/pipeline/rte_swx_pipeline.h | 1 +
lib/reorder/rte_reorder.h | 1 +
lib/rib/rte_rib.h | 1 +
lib/rib/rte_rib6.h | 1 +
lib/sched/rte_sched.h | 1 +
lib/stack/rte_stack.h | 1 +
lib/table/rte_swx_table_wm.c | 3 +-
lib/table/rte_table_acl.c | 15 +--
lib/telemetry/rte_telemetry.h | 2 +-
35 files changed, 162 insertions(+), 78 deletions(-)
--
2.34.1
^ permalink raw reply [relevance 3%]
* RE: [EXT] [PATCH] crypto: fix misspelled key in qt format
2022-02-12 11:34 3% ` [EXT] " Akhil Goyal
@ 2022-02-18 6:11 0% ` Kusztal, ArkadiuszX
2022-02-25 17:56 4% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Kusztal, ArkadiuszX @ 2022-02-18 6:11 UTC (permalink / raw)
To: Akhil Goyal, dev; +Cc: Zhang, Roy Fan
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Saturday, February 12, 2022 12:34 PM
> To: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; dev@dpdk.org
> Cc: Zhang, Roy Fan <roy.fan.zhang@intel.com>
> Subject: RE: [EXT] [PATCH] crypto: fix misspelled key in qt format
>
> > This patch fixes the misspelled RTE_RSA_KEY_TYPE_QT; this will prevent
> > checkpatch from complaining whenever a change to RSA is made.
> >
> > Fixes: 26008aaed14c ("cryptodev: add asymmetric xform and op
> > definitions")
> >
> > Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> > ---
> Fix ABI warning.
> Add libabigail.abiignore rule.
I think what is worth noting is that after the "random 'k'" patch's addition of

    [suppress_type]
        name = rte_crypto_asym_op

this problem does not show up.
But I think it is safer to send the addition of

    [suppress_type]
        name = rte_crypto_rsa_priv_key_type

anyway.
Will send v2.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-15 15:12 0% ` Thomas Monjalon
@ 2022-02-15 16:12 0% ` Ray Kinsella
0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2022-02-15 16:12 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
Thomas Monjalon <thomas@monjalon.net> writes:
> 15/02/2022 14:55, Ray Kinsella:
>> Ray Kinsella <mdr@ashroe.eu> writes:
>> > Thomas Monjalon <thomas@monjalon.net> writes:
>> >> 14/02/2022 17:06, Ray Kinsella:
>> >>> Thomas Monjalon <thomas@monjalon.net> writes:
>> >>> > 14/02/2022 11:16, Ray Kinsella:
>> >>> >> Ray Kinsella <mdr@ashroe.eu> writes:
>> >>> >> > Thomas Monjalon <thomas@monjalon.net> writes:
>> >>> >> >> We never know how this enum will be used by the application.
>> >>> >> >> The max value may be used for the size of an event array.
>> >>> >> >> It looks like a real ABI issue, unfortunately.
>> >>> >> >
>> >>> >> > Right - but we only really care about it when an array size based on MAX
>> >>> >> > is likely to be passed to DPDK, which doesn't apply in this case.
>> >>> >
>> >>> > I don't completely agree.
>> >>> > A developer may assume an event will never exceed MAX value.
>> >>> > However, after an upgrade of DPDK without app rebuild,
>> >>> > a higher event value may be received in the app,
>> >>> > breaking the assumption.
>> >>> > Should we consider this case as an ABI breakage?
>> >>>
>> >>> Nope - I think we should explicitly exclude MAX values from any
>> >>> ABI guarantee, as being able to change them is key to our being able to
>> >>> evolve DPDK while maintaining ABI stability.
>> >>
>> >> Or we can simply remove the MAX values so there is no confusion.
>> >>
>> >>> Consider what it means applying the ABI policy to a MAX value, you are
>> >>> in effect saying that no value can be added to this enumeration
>> >>> until the next ABI version; for me this is very restrictive without a
>> >>> solid reason.
>> >>
>> >> I agree it is too restrictive; that's why I am advocating
>> >> for their removal.
>> >
>> > I think that would be simplest yes - may require some rework of the
>> > sample code though. I will take a look at it.
>>
>> Thinking about this some more - we can't remove the MAX values between
>> now and the next stable ABI. So we may need a short-term plan and a
>> long-term plan.
>>
>> Long term, I agree we should look at every _MAX enumeration value and ask
>> whether we need it.
>>
>> Short term (until the next ABI), we still need to answer the question: do
>> we allow people to change _MAX values?
>
> There's a problem of incentive.
> We already said in the past that we should remove the _MAX values,
> but it doesn't happen because our time on this planet is limited.
> I think it would work if the developers have a special interest in such work.
> And guess what? Here there is a special interest to remove this one.
> If we are at the negotiation table, we could imagine a short-term exception
> if the long-term patch is available and reviewed.
:-) ... I like that plan.
--
Regards, Ray K
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-15 13:55 4% ` Ray Kinsella
@ 2022-02-15 15:12 0% ` Thomas Monjalon
2022-02-15 16:12 0% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-02-15 15:12 UTC (permalink / raw)
To: Ray Kinsella
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
15/02/2022 14:55, Ray Kinsella:
> Ray Kinsella <mdr@ashroe.eu> writes:
> > Thomas Monjalon <thomas@monjalon.net> writes:
> >> 14/02/2022 17:06, Ray Kinsella:
> >>> Thomas Monjalon <thomas@monjalon.net> writes:
> >>> > 14/02/2022 11:16, Ray Kinsella:
> >>> >> Ray Kinsella <mdr@ashroe.eu> writes:
> >>> >> > Thomas Monjalon <thomas@monjalon.net> writes:
> >>> >> >> We never know how this enum will be used by the application.
> >>> >> >> The max value may be used for the size of an event array.
> >>> >> >> It looks like a real ABI issue, unfortunately.
> >>> >> >
> >>> >> > Right - but we only really care about it when an array size based on MAX
> >>> >> > is likely to be passed to DPDK, which doesn't apply in this case.
> >>> >
> >>> > I don't completely agree.
> >>> > A developer may assume an event will never exceed MAX value.
> >>> > However, after an upgrade of DPDK without app rebuild,
> >>> > a higher event value may be received in the app,
> >>> > breaking the assumption.
> >>> > Should we consider this case as an ABI breakage?
> >>>
> >>> Nope - I think we should explicitly exclude MAX values from any
> >>> ABI guarantee, as being able to change them is key to our being able to
> >>> evolve DPDK while maintaining ABI stability.
> >>
> >> Or we can simply remove the MAX values so there is no confusion.
> >>
> >>> Consider what it means applying the ABI policy to a MAX value, you are
> >>> in effect saying that no value can be added to this enumeration
> >>> until the next ABI version; for me this is very restrictive without a
> >>> solid reason.
> >>
>> I agree it is too restrictive; that's why I am advocating
> >> for their removal.
> >
> > I think that would be simplest yes - may require some rework of the
> > sample code though. I will take a look at it.
>
> Thinking about this some more - we can't remove the MAX values between
> now and the next stable ABI. So we may need a short-term plan and a
> long-term plan.
>
> Long term, I agree we should look at every _MAX enumeration value and ask
> whether we need it.
>
> Short term (until the next ABI), we still need to answer the question: do
> we allow people to change _MAX values?
There's a problem of incentive.
We already said in the past that we should remove the _MAX values,
but it doesn't happen because our time on this planet is limited.
I think it would work if the developers have a special interest in such work.
And guess what? Here there is a special interest to remove this one.
If we are at the negotiation table, we could imagine a short-term exception
if the long-term patch is available and reviewed.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-14 18:27 0% ` Ray Kinsella
@ 2022-02-15 13:55 4% ` Ray Kinsella
2022-02-15 15:12 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2022-02-15 13:55 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
Ray Kinsella <mdr@ashroe.eu> writes:
> Thomas Monjalon <thomas@monjalon.net> writes:
>
>> 14/02/2022 17:06, Ray Kinsella:
>>> Thomas Monjalon <thomas@monjalon.net> writes:
>>> > 14/02/2022 11:16, Ray Kinsella:
>>> >> Ray Kinsella <mdr@ashroe.eu> writes:
>>> >> > Thomas Monjalon <thomas@monjalon.net> writes:
>>> >> >> 02/02/2022 12:44, Ray Kinsella:
>>> >> >>> Ferruh Yigit <ferruh.yigit@intel.com> writes:
>>> >> >>> > On 1/28/2022 12:48 PM, Kalesh A P wrote:
>>> >> >>> >> --- a/lib/ethdev/rte_ethdev.h
>>> >> >>> >> +++ b/lib/ethdev/rte_ethdev.h
>>> >> >>> >> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
>>> >> >>> >> RTE_ETH_EVENT_DESTROY, /**< port is released */
>>> >> >>> >> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
>>> >> >>> >> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
>>> >> >>> >> + RTE_ETH_EVENT_ERR_RECOVERING,
>>> >> >>> >> + /**< port recovering from an error
>>> >> >>> >> + *
>>> >> >>> >> + * PMD detected a FW reset or error condition.
>>> >> >>> >> + * PMD will try to recover from the error.
>>> >> >>> >> + * Data path may be quiesced and Control path operations
>>> >> >>> >> + * may fail at this time.
>>> >> >>> >> + */
>>> >> >>> >> + RTE_ETH_EVENT_RECOVERED,
>>> >> >>> >> + /**< port recovered from an error
>>> >> >>> >> + *
>>> >> >>> >> + * PMD has recovered from the error condition.
>>> >> >>> >> + * Control path and Data path are up now.
>>> >> >>> >> + * PMD re-configures the port to the state prior to the error.
>>> >> >>> >> + * Since the device has undergone a reset, flow rules
>>> >> >>> >> + * offloaded prior to reset may be lost and
>>> >> >>> >> + * the application should recreate the rules again.
>>> >> >>> >> + */
>>> >> >>> >> RTE_ETH_EVENT_MAX /**< max value of this enum */
>>> >> >>> >
>>> >> >>> >
>>> >> >>> > Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
>>> >> >>> > to evaluate if it is a false positive:
>>> >> >>> >
>>> >> >>> >
>>> >> >>> > 1 function with some indirect sub-type change:
>>> >> >>> > [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
>>> >> >>> > parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
>>> >> >>> > underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
>>> >> >>> > in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
>>> >> >>> > parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
>>> >> >>> > type size hasn't changed
>>> >> >>> > 2 enumerator insertions:
>>> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
>>> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
>>> >> >>> > 1 enumerator change:
>>> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
>>> >> >>>
>>> >> >>> I don't immediately see the problem that this would cause.
>>> >> >>> There are no array sizes etc dependent on the value of MAX for instance.
>>> >> >>>
>>> >> >>> Looks safe?
>>> >> >>
>>> >> >> We never know how this enum will be used by the application.
>>> >> >> The max value may be used for the size of an event array.
>>> >> >> It looks like a real ABI issue, unfortunately.
>>> >> >
>>> >> > Right - but we only really care about it when an array size based on MAX
>>> >> > is likely to be passed to DPDK, which doesn't apply in this case.
>>> >
>>> > I don't completely agree.
>>> > A developer may assume an event will never exceed MAX value.
>>> > However, after an upgrade of DPDK without app rebuild,
>>> > a higher event value may be received in the app,
>>> > breaking the assumption.
>>> > Should we consider this case as an ABI breakage?
>>>
>>> Nope - I think we should explicitly exclude MAX values from any
>>> ABI guarantee, as being able to change them is key to our being able to
>>> evolve DPDK while maintaining ABI stability.
>>
>> Or we can simply remove the MAX values so there is no confusion.
>>
>>> Consider what it means applying the ABI policy to a MAX value, you are
>>> in effect saying that no value can be added to this enumeration
>>> until the next ABI version; for me this is very restrictive without a
>>> solid reason.
>>
>> I agree it is too restrictive; that's why I am advocating
>> for their removal.
>
> I think that would be simplest yes - may require some rework of the
> sample code though. I will take a look at it.
Thinking about this some more - we can't remove the MAX values between
now and the next stable ABI. So we may need a short-term plan and a
long-term plan.
Long term, I agree we should look at every _MAX enumeration value and ask
whether we need it.
Short term (until the next ABI), we still need to answer the question: do
we allow people to change _MAX values?
>>
>>> >> > I noted that some Linux folks explicitly mark similar MAX values as not
>>> >> > part of the ABI.
>>> >> >
>>> >> > /usr/include/linux/perf_event.h
>>> >> > 37: PERF_TYPE_MAX, /* non-ABI */
>>> >> > 60: PERF_COUNT_HW_MAX, /* non-ABI */
>>> >> > 79: PERF_COUNT_HW_CACHE_MAX, /* non-ABI */
>>> >> > 87: PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */
>>> >> > 94: PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */
>>> >> > 116: PERF_COUNT_SW_MAX, /* non-ABI */
>>> >> > 149: PERF_SAMPLE_MAX = 1U << 24, /* non-ABI */
>>> >> > 151: __PERF_SAMPLE_CALLCHAIN_EARLY = 1ULL << 63, /*
>>> >> > non-ABI; internal use */
>>> >> > 189: PERF_SAMPLE_BRANCH_MAX_SHIFT /* non-ABI */
>>> >> > 267: PERF_TXN_MAX = (1 << 8), /* non-ABI */
>>> >> > 301: PERF_FORMAT_MAX = 1U << 4, /* non-ABI */
>>> >> > 1067: PERF_RECORD_MAX, /* non-ABI */
>>> >> > 1078: PERF_RECORD_KSYMBOL_TYPE_MAX /* non-ABI */
>>> >> > 1087: PERF_BPF_EVENT_MAX, /* non-ABI */
>>> >>
>>> >> Any thoughts on similarly annotating all our _MAX enums in the same way?
>> >> We could also add a section in the ABI Policy to make it explicit that _MAX
>>> >> enum values are not part of the ABI - what do folks think?
>>> >
>>> > Interesting. I am not sure it is always ABI-safe though.
--
Regards, Ray K
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-14 16:25 0% ` Thomas Monjalon
@ 2022-02-14 18:27 0% ` Ray Kinsella
2022-02-15 13:55 4% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2022-02-14 18:27 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
Thomas Monjalon <thomas@monjalon.net> writes:
> 14/02/2022 17:06, Ray Kinsella:
>> Thomas Monjalon <thomas@monjalon.net> writes:
>> > 14/02/2022 11:16, Ray Kinsella:
>> >> Ray Kinsella <mdr@ashroe.eu> writes:
>> >> > Thomas Monjalon <thomas@monjalon.net> writes:
>> >> >> 02/02/2022 12:44, Ray Kinsella:
>> >> >>> Ferruh Yigit <ferruh.yigit@intel.com> writes:
>> >> >>> > On 1/28/2022 12:48 PM, Kalesh A P wrote:
>> >> >>> >> --- a/lib/ethdev/rte_ethdev.h
>> >> >>> >> +++ b/lib/ethdev/rte_ethdev.h
>> >> >>> >> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
>> >> >>> >> RTE_ETH_EVENT_DESTROY, /**< port is released */
>> >> >>> >> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
>> >> >>> >> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
>> >> >>> >> + RTE_ETH_EVENT_ERR_RECOVERING,
>> >> >>> >> + /**< port recovering from an error
>> >> >>> >> + *
>> >> >>> >> + * PMD detected a FW reset or error condition.
>> >> >>> >> + * PMD will try to recover from the error.
>> >> >>> >> + * Data path may be quiesced and Control path operations
>> >> >>> >> + * may fail at this time.
>> >> >>> >> + */
>> >> >>> >> + RTE_ETH_EVENT_RECOVERED,
>> >> >>> >> + /**< port recovered from an error
>> >> >>> >> + *
>> >> >>> >> + * PMD has recovered from the error condition.
>> >> >>> >> + * Control path and Data path are up now.
>> >> >>> >> + * PMD re-configures the port to the state prior to the error.
>> >> >>> >> + * Since the device has undergone a reset, flow rules
>> >> >>> >> + * offloaded prior to reset may be lost and
>> >> >>> >> + * the application should recreate the rules again.
>> >> >>> >> + */
>> >> >>> >> RTE_ETH_EVENT_MAX /**< max value of this enum */
>> >> >>> >
>> >> >>> >
>> >> >>> > Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
>> >> >>> > to evaluate if it is a false positive:
>> >> >>> >
>> >> >>> >
>> >> >>> > 1 function with some indirect sub-type change:
>> >> >>> > [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
>> >> >>> > parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
>> >> >>> > underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
>> >> >>> > in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
>> >> >>> > parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
>> >> >>> > type size hasn't changed
>> >> >>> > 2 enumerator insertions:
>> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
>> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
>> >> >>> > 1 enumerator change:
>> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
>> >> >>>
>> >> >>> I don't immediately see the problem that this would cause.
>> >> >>> There are no array sizes etc dependent on the value of MAX for instance.
>> >> >>>
>> >> >>> Looks safe?
>> >> >>
>> >> >> We never know how this enum will be used by the application.
>> >> >> The max value may be used for the size of an event array.
>> >> >> It looks like a real ABI issue, unfortunately.
>> >> >
>> >> > Right - but we only really care about it when an array size based on MAX
>> >> > is likely to be passed to DPDK, which doesn't apply in this case.
>> >
>> > I don't completely agree.
>> > A developer may assume an event will never exceed MAX value.
>> > However, after an upgrade of DPDK without app rebuild,
>> > a higher event value may be received in the app,
>> > breaking the assumption.
>> > Should we consider this case as an ABI breakage?
>>
>> Nope - I think we should explicitly exclude MAX values from any
>> ABI guarantee, as being able to change them is key to our being able to
>> evolve DPDK while maintaining ABI stability.
>
> Or we can simply remove the MAX values so there is no confusion.
>
>> Consider what it means applying the ABI policy to a MAX value, you are
>> in effect saying that no value can be added to this enumeration
>> until the next ABI version; for me this is very restrictive without a
>> solid reason.
>
> I agree it is too restrictive; that's why I am advocating
> for their removal.
I think that would be simplest yes - may require some rework of the
sample code though. I will take a look at it.
>
>> >> > I noted that some Linux folks explicitly mark similar MAX values as not
>> >> > part of the ABI.
>> >> >
>> >> > /usr/include/linux/perf_event.h
>> >> > 37: PERF_TYPE_MAX, /* non-ABI */
>> >> > 60: PERF_COUNT_HW_MAX, /* non-ABI */
>> >> > 79: PERF_COUNT_HW_CACHE_MAX, /* non-ABI */
>> >> > 87: PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */
>> >> > 94: PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */
>> >> > 116: PERF_COUNT_SW_MAX, /* non-ABI */
>> >> > 149: PERF_SAMPLE_MAX = 1U << 24, /* non-ABI */
>> >> > 151: __PERF_SAMPLE_CALLCHAIN_EARLY = 1ULL << 63, /*
>> >> > non-ABI; internal use */
>> >> > 189: PERF_SAMPLE_BRANCH_MAX_SHIFT /* non-ABI */
>> >> > 267: PERF_TXN_MAX = (1 << 8), /* non-ABI */
>> >> > 301: PERF_FORMAT_MAX = 1U << 4, /* non-ABI */
>> >> > 1067: PERF_RECORD_MAX, /* non-ABI */
>> >> > 1078: PERF_RECORD_KSYMBOL_TYPE_MAX /* non-ABI */
>> >> > 1087: PERF_BPF_EVENT_MAX, /* non-ABI */
>> >>
>> >> Any thoughts on similarly annotating all our _MAX enums in the same way?
>> We could also add a section in the ABI Policy to make it explicit that _MAX
>> >> enum values are not part of the ABI - what do folks think?
>> >
>> > Interesting. I am not sure it is always ABI-safe though.
--
Regards, Ray K
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-14 16:06 5% ` Ray Kinsella
@ 2022-02-14 16:25 0% ` Thomas Monjalon
2022-02-14 18:27 0% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-02-14 16:25 UTC (permalink / raw)
To: Ray Kinsella
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
14/02/2022 17:06, Ray Kinsella:
> Thomas Monjalon <thomas@monjalon.net> writes:
> > 14/02/2022 11:16, Ray Kinsella:
> >> Ray Kinsella <mdr@ashroe.eu> writes:
> >> > Thomas Monjalon <thomas@monjalon.net> writes:
> >> >> 02/02/2022 12:44, Ray Kinsella:
> >> >>> Ferruh Yigit <ferruh.yigit@intel.com> writes:
> >> >>> > On 1/28/2022 12:48 PM, Kalesh A P wrote:
> >> >>> >> --- a/lib/ethdev/rte_ethdev.h
> >> >>> >> +++ b/lib/ethdev/rte_ethdev.h
> >> >>> >> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
> >> >>> >> RTE_ETH_EVENT_DESTROY, /**< port is released */
> >> >>> >> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
> >> >>> >> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
> >> >>> >> + RTE_ETH_EVENT_ERR_RECOVERING,
> >> >>> >> + /**< port recovering from an error
> >> >>> >> + *
> >> >>> >> + * PMD detected a FW reset or error condition.
> >> >>> >> + * PMD will try to recover from the error.
> >> >>> >> + * Data path may be quiesced and Control path operations
> >> >>> >> + * may fail at this time.
> >> >>> >> + */
> >> >>> >> + RTE_ETH_EVENT_RECOVERED,
> >> >>> >> + /**< port recovered from an error
> >> >>> >> + *
> >> >>> >> + * PMD has recovered from the error condition.
> >> >>> >> + * Control path and Data path are up now.
> >> >>> >> + * PMD re-configures the port to the state prior to the error.
> >> >>> >> + * Since the device has undergone a reset, flow rules
> >> >>> >> + * offloaded prior to reset may be lost and
> >> >>> >> + * the application should recreate the rules again.
> >> >>> >> + */
> >> >>> >> RTE_ETH_EVENT_MAX /**< max value of this enum */
> >> >>> >
> >> >>> >
> >> >>> > Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
> >> >>> > to evaluate if it is a false positive:
> >> >>> >
> >> >>> >
> >> >>> > 1 function with some indirect sub-type change:
> >> >>> > [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
> >> >>> > parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
> >> >>> > underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
> >> >>> > in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
> >> >>> > parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
> >> >>> > type size hasn't changed
> >> >>> > 2 enumerator insertions:
> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
> >> >>> > 1 enumerator change:
> >> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
> >> >>>
> >> >>> I don't immediately see the problem that this would cause.
> >> >>> There are no array sizes etc dependent on the value of MAX for instance.
> >> >>>
> >> >>> Looks safe?
> >> >>
> >> >> We never know how this enum will be used by the application.
> >> >> The max value may be used for the size of an event array.
> >> >> It looks like a real ABI issue, unfortunately.
> >> >
> >> > Right - but we only really care about it when an array size based on MAX
> >> > is likely to be passed to DPDK, which doesn't apply in this case.
> >
> > I don't completely agree.
> > A developer may assume an event will never exceed MAX value.
> > However, after an upgrade of DPDK without app rebuild,
> > a higher event value may be received in the app,
> > breaking the assumption.
> > Should we consider this case as an ABI breakage?
>
> Nope - I think we should explicitly exclude MAX values from any
> ABI guarantee, as being able to change them is key to our being able to
> evolve DPDK while maintaining ABI stability.
Or we can simply remove the MAX values so there is no confusion.
> Consider what it means applying the ABI policy to a MAX value, you are
> in effect saying that no value can be added to this enumeration
> until the next ABI version; for me this is very restrictive without a
> solid reason.
I agree it is too restrictive; that's why I am advocating
for their removal.
> >> > I noted that some Linux folks explicitly mark similar MAX values as not
> >> > part of the ABI.
> >> >
> >> > /usr/include/linux/perf_event.h
> >> > 37: PERF_TYPE_MAX, /* non-ABI */
> >> > 60: PERF_COUNT_HW_MAX, /* non-ABI */
> >> > 79: PERF_COUNT_HW_CACHE_MAX, /* non-ABI */
> >> > 87: PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */
> >> > 94: PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */
> >> > 116: PERF_COUNT_SW_MAX, /* non-ABI */
> >> > 149: PERF_SAMPLE_MAX = 1U << 24, /* non-ABI */
> >> > 151: __PERF_SAMPLE_CALLCHAIN_EARLY = 1ULL << 63, /*
> >> > non-ABI; internal use */
> >> > 189: PERF_SAMPLE_BRANCH_MAX_SHIFT /* non-ABI */
> >> > 267: PERF_TXN_MAX = (1 << 8), /* non-ABI */
> >> > 301: PERF_FORMAT_MAX = 1U << 4, /* non-ABI */
> >> > 1067: PERF_RECORD_MAX, /* non-ABI */
> >> > 1078: PERF_RECORD_KSYMBOL_TYPE_MAX /* non-ABI */
> >> > 1087: PERF_BPF_EVENT_MAX, /* non-ABI */
> >>
> >> Any thoughts on similarly annotating all our _MAX enums in the same way?
> >> We could also add a section in the ABI Policy to make it explicit that _MAX
> >> enum values are not part of the ABI - what do folks think?
> >
> > Interesting. I am not sure it is always ABI-safe though.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-14 11:15 4% ` Thomas Monjalon
@ 2022-02-14 16:06 5% ` Ray Kinsella
2022-02-14 16:25 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2022-02-14 16:06 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
Thomas Monjalon <thomas@monjalon.net> writes:
> 14/02/2022 11:16, Ray Kinsella:
>> Ray Kinsella <mdr@ashroe.eu> writes:
>> > Thomas Monjalon <thomas@monjalon.net> writes:
>> >> 02/02/2022 12:44, Ray Kinsella:
>> >>> Ferruh Yigit <ferruh.yigit@intel.com> writes:
>> >>> > On 1/28/2022 12:48 PM, Kalesh A P wrote:
>> >>> >> --- a/lib/ethdev/rte_ethdev.h
>> >>> >> +++ b/lib/ethdev/rte_ethdev.h
>> >>> >> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
>> >>> >> RTE_ETH_EVENT_DESTROY, /**< port is released */
>> >>> >> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
>> >>> >> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
>> >>> >> + RTE_ETH_EVENT_ERR_RECOVERING,
>> >>> >> + /**< port recovering from an error
>> >>> >> + *
>> >>> >> + * PMD detected a FW reset or error condition.
>> >>> >> + * PMD will try to recover from the error.
>> >>> >> + * Data path may be quiesced and Control path operations
>> >>> >> + * may fail at this time.
>> >>> >> + */
>> >>> >> + RTE_ETH_EVENT_RECOVERED,
>> >>> >> + /**< port recovered from an error
>> >>> >> + *
>> >>> >> + * PMD has recovered from the error condition.
>> >>> >> + * Control path and Data path are up now.
>> >>> >> + * PMD re-configures the port to the state prior to the error.
>> >>> >> + * Since the device has undergone a reset, flow rules
>> >>> >> + * offloaded prior to reset may be lost and
>> >>> >> + * the application should recreate the rules again.
>> >>> >> + */
>> >>> >> RTE_ETH_EVENT_MAX /**< max value of this enum */
>> >>> >
>> >>> >
>> >>> > Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
>> >>> > to evaluate if it is a false positive:
>> >>> >
>> >>> >
>> >>> > 1 function with some indirect sub-type change:
>> >>> > [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
>> >>> > parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
>> >>> > underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
>> >>> > in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
>> >>> > parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
>> >>> > type size hasn't changed
>> >>> > 2 enumerator insertions:
>> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
>> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
>> >>> > 1 enumerator change:
>> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
>> >>>
>> >>> I don't immediately see the problem that this would cause.
>> >>> There are no array sizes etc dependent on the value of MAX for instance.
>> >>>
>> >>> Looks safe?
>> >>
>> >> We never know how this enum will be used by the application.
>> >> The max value may be used for the size of an event array.
>> >> It looks like a real ABI issue, unfortunately.
>> >
>> > Right - but we only really care about it when an array size based on MAX
>> > is likely to be passed to DPDK, which doesn't apply in this case.
>
> I don't completely agree.
> A developer may assume an event will never exceed MAX value.
> However, after an upgrade of DPDK without app rebuild,
> a higher event value may be received in the app,
> breaking the assumption.
> Should we consider this case as an ABI breakage?
Nope - I think we should explicitly exclude MAX values from any
ABI guarantee, as being able to change them is key to our being able to
evolve DPDK while maintaining ABI stability.
Consider what it means applying the ABI policy to a MAX value, you are
in effect saying that no value can be added to this enumeration
until the next ABI version; for me this is very restrictive without a
solid reason.
>
>> > I noted that some Linux folks explicitly mark similar MAX values as not
>> > part of the ABI.
>> >
>> > /usr/include/linux/perf_event.h
>> > 37: PERF_TYPE_MAX, /* non-ABI */
>> > 60: PERF_COUNT_HW_MAX, /* non-ABI */
>> > 79: PERF_COUNT_HW_CACHE_MAX, /* non-ABI */
>> > 87: PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */
>> > 94: PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */
>> > 116: PERF_COUNT_SW_MAX, /* non-ABI */
>> > 149: PERF_SAMPLE_MAX = 1U << 24, /* non-ABI */
>> > 151: __PERF_SAMPLE_CALLCHAIN_EARLY = 1ULL << 63, /*
>> > non-ABI; internal use */
>> > 189: PERF_SAMPLE_BRANCH_MAX_SHIFT /* non-ABI */
>> > 267: PERF_TXN_MAX = (1 << 8), /* non-ABI */
>> > 301: PERF_FORMAT_MAX = 1U << 4, /* non-ABI */
>> > 1067: PERF_RECORD_MAX, /* non-ABI */
>> > 1078: PERF_RECORD_KSYMBOL_TYPE_MAX /* non-ABI */
>> > 1087: PERF_BPF_EVENT_MAX, /* non-ABI */
>>
>> Any thoughts on similarly annotating all our _MAX enums in the same way?
We could also add a section in the ABI Policy to make it explicit that _MAX
>> enum values are not part of the ABI - what do folks think?
>
> Interesting. I am not sure it is always ABI-safe though.
--
Regards, Ray K
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-14 10:16 4% ` Ray Kinsella
@ 2022-02-14 11:15 4% ` Thomas Monjalon
2022-02-14 16:06 5% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-02-14 11:15 UTC (permalink / raw)
To: Ray Kinsella
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
14/02/2022 11:16, Ray Kinsella:
> Ray Kinsella <mdr@ashroe.eu> writes:
> > Thomas Monjalon <thomas@monjalon.net> writes:
> >> 02/02/2022 12:44, Ray Kinsella:
> >>> Ferruh Yigit <ferruh.yigit@intel.com> writes:
> >>> > On 1/28/2022 12:48 PM, Kalesh A P wrote:
> >>> >> --- a/lib/ethdev/rte_ethdev.h
> >>> >> +++ b/lib/ethdev/rte_ethdev.h
> >>> >> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
> >>> >> RTE_ETH_EVENT_DESTROY, /**< port is released */
> >>> >> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
> >>> >> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
> >>> >> + RTE_ETH_EVENT_ERR_RECOVERING,
> >>> >> + /**< port recovering from an error
> >>> >> + *
> >>> >> + * PMD detected a FW reset or error condition.
> >>> >> + * PMD will try to recover from the error.
> >>> >> + * Data path may be quiesced and Control path operations
> >>> >> + * may fail at this time.
> >>> >> + */
> >>> >> + RTE_ETH_EVENT_RECOVERED,
> >>> >> + /**< port recovered from an error
> >>> >> + *
> >>> >> + * PMD has recovered from the error condition.
> >>> >> + * Control path and Data path are up now.
> >>> >> + * PMD re-configures the port to the state prior to the error.
> >>> >> + * Since the device has undergone a reset, flow rules
> >>> >> + * offloaded prior to reset may be lost and
> >>> >> + * the application should recreate the rules again.
> >>> >> + */
> >>> >> RTE_ETH_EVENT_MAX /**< max value of this enum */
> >>> >
> >>> >
> >>> > Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
> >>> > to evaluate if it is a false positive:
> >>> >
> >>> >
> >>> > 1 function with some indirect sub-type change:
> >>> > [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
> >>> > parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
> >>> > underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
> >>> > in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
> >>> > parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
> >>> > type size hasn't changed
> >>> > 2 enumerator insertions:
> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
> >>> > 1 enumerator change:
> >>> > 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
> >>>
> >>> I don't immediately see the problem that this would cause.
> >>> There are no array sizes etc dependent on the value of MAX for instance.
> >>>
> >>> Looks safe?
> >>
> >> We never know how this enum will be used by the application.
> >> The max value may be used for the size of an event array.
> >> It looks like a real ABI issue, unfortunately.
> >
> > Right - but we only really care about it when an array size based on MAX
> > is likely to be passed to DPDK, which doesn't apply in this case.
I don't completely agree.
A developer may assume an event will never exceed MAX value.
However, after an upgrade of DPDK without app rebuild,
a higher event value may be received in the app,
breaking the assumption.
Should we consider this case as an ABI breakage?
> > I noted that some Linux folks explicitly mark similar MAX values as not
> > part of the ABI.
> >
> > /usr/include/linux/perf_event.h
> > 37: PERF_TYPE_MAX, /* non-ABI */
> > 60: PERF_COUNT_HW_MAX, /* non-ABI */
> > 79: PERF_COUNT_HW_CACHE_MAX, /* non-ABI */
> > 87: PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */
> > 94: PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */
> > 116: PERF_COUNT_SW_MAX, /* non-ABI */
> > 149: PERF_SAMPLE_MAX = 1U << 24, /* non-ABI */
> > 151: __PERF_SAMPLE_CALLCHAIN_EARLY = 1ULL << 63, /*
> > non-ABI; internal use */
> > 189: PERF_SAMPLE_BRANCH_MAX_SHIFT /* non-ABI */
> > 267: PERF_TXN_MAX = (1 << 8), /* non-ABI */
> > 301: PERF_FORMAT_MAX = 1U << 4, /* non-ABI */
> > 1067: PERF_RECORD_MAX, /* non-ABI */
> > 1078: PERF_RECORD_KSYMBOL_TYPE_MAX /* non-ABI */
> > 1087: PERF_BPF_EVENT_MAX, /* non-ABI */
>
> Any thoughts on similarly annotating all our _MAX enums in the same way?
> We could also add a section in the ABI Policy to make it explicit that _MAX
> enum values are not part of the ABI - what do folks think?
Interesting. I am not sure it is always ABI-safe though.
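To make the concern concrete, a hypothetical application callback that
bakes the MAX value into its binary at build time:

    /* Sized when the app is built; not resized when DPDK is upgraded
     * without an app rebuild. */
    static uint64_t event_seen[RTE_ETH_EVENT_MAX];

    static int
    event_cb(uint16_t port_id, enum rte_eth_event_type type,
             void *cb_arg, void *ret_param)
    {
            RTE_SET_USED(port_id);
            RTE_SET_USED(cb_arg);
            RTE_SET_USED(ret_param);
            /* A newer DPDK may deliver type >= the old RTE_ETH_EVENT_MAX,
             * overflowing the array. */
            event_seen[type]++;
            return 0;
    }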
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-11 10:09 5% ` Ray Kinsella
@ 2022-02-14 10:16 4% ` Ray Kinsella
2022-02-14 11:15 4% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2022-02-14 10:16 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
Ray Kinsella <mdr@ashroe.eu> writes:
> Thomas Monjalon <thomas@monjalon.net> writes:
>
>> 02/02/2022 12:44, Ray Kinsella:
>>> Ferruh Yigit <ferruh.yigit@intel.com> writes:
>>> > On 1/28/2022 12:48 PM, Kalesh A P wrote:
>>> >> --- a/lib/ethdev/rte_ethdev.h
>>> >> +++ b/lib/ethdev/rte_ethdev.h
>>> >> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
>>> >> RTE_ETH_EVENT_DESTROY, /**< port is released */
>>> >> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
>>> >> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
>>> >> + RTE_ETH_EVENT_ERR_RECOVERING,
>>> >> + /**< port recovering from an error
>>> >> + *
>>> >> + * PMD detected a FW reset or error condition.
>>> >> + * PMD will try to recover from the error.
>>> >> + * Data path may be quiesced and Control path operations
>>> >> + * may fail at this time.
>>> >> + */
>>> >> + RTE_ETH_EVENT_RECOVERED,
>>> >> + /**< port recovered from an error
>>> >> + *
>>> >> + * PMD has recovered from the error condition.
>>> >> + * Control path and Data path are up now.
>>> >> + * PMD re-configures the port to the state prior to the error.
>>> >> + * Since the device has undergone a reset, flow rules
>>> >> + * offloaded prior to reset may be lost and
>>> >> + * the application should recreate the rules again.
>>> >> + */
>>> >> RTE_ETH_EVENT_MAX /**< max value of this enum */
>>> >
>>> >
>>> > Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
>>> > to evaluate if it is a false positive:
>>> >
>>> >
>>> > 1 function with some indirect sub-type change:
>>> > [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
>>> > parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
>>> > underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
>>> > in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
>>> > parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
>>> > type size hasn't changed
>>> > 2 enumerator insertions:
>>> > 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
>>> > 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
>>> > 1 enumerator change:
>>> > 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
>>>
>>> I don't immediately see the problem that this would cause.
>>> There are no array sizes etc dependent on the value of MAX for instance.
>>>
>>> Looks safe?
>>
>> We never know how this enum will be used by the application.
>> The max value may be used for the size of an event array.
>> It looks like a real ABI issue, unfortunately.
>
> Right - but we only really care about it when an array size based on MAX
> is likely to be passed to DPDK, which doesn't apply in this case.
>
> I noted that some Linux folks explicitly mark similar MAX values as not
> part of the ABI.
>
> /usr/include/linux/perf_event.h
> 37: PERF_TYPE_MAX, /* non-ABI */
> 60: PERF_COUNT_HW_MAX, /* non-ABI */
> 79: PERF_COUNT_HW_CACHE_MAX, /* non-ABI */
> 87: PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */
> 94: PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */
> 116: PERF_COUNT_SW_MAX, /* non-ABI */
> 149: PERF_SAMPLE_MAX = 1U << 24, /* non-ABI */
> 151: __PERF_SAMPLE_CALLCHAIN_EARLY = 1ULL << 63, /*
> non-ABI; internal use */
> 189: PERF_SAMPLE_BRANCH_MAX_SHIFT /* non-ABI */
> 267: PERF_TXN_MAX = (1 << 8), /* non-ABI */
> 301: PERF_FORMAT_MAX = 1U << 4, /* non-ABI */
> 1067: PERF_RECORD_MAX, /* non-ABI */
> 1078: PERF_RECORD_KSYMBOL_TYPE_MAX /* non-ABI */
> 1087: PERF_BPF_EVENT_MAX, /* non-ABI */
>
>>
>> PS: I am not Cc'ed in this patchset,
>> so copying what I said on v6 (more than a year ago):
>> Please use the option --cc-cmd devtools/get-maintainer.sh
Any thoughts on similarly annotating all our _MAX enums in the same way?
We could also add a section in the ABI Policy to make it explicit that _MAX
enum values are not part of the ABI - what do folks think?
--
Regards, Ray K
^ permalink raw reply [relevance 4%]
* Re: [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
2022-01-21 10:22 0% ` Ferruh Yigit
2022-01-24 5:12 0% ` Hemant Agrawal
@ 2022-02-12 14:01 0% ` Yanling Song
1 sibling, 0 replies; 200+ results
From: Yanling Song @ 2022-02-12 14:01 UTC (permalink / raw)
To: Ferruh Yigit
Cc: dev, yanling.song, yanggan, xuyun, stephen, lihuisong,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Hemant Agrawal
On Fri, 21 Jan 2022 10:22:10 +0000
Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> On 1/21/2022 9:27 AM, Yanling Song wrote:
> > On Wed, 19 Jan 2022 16:56:52 +0000
> > Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >
> >> On 12/30/2021 6:08 AM, Yanling Song wrote:
> >>> The patchsets introduce the SPNIC driver for Ramaxel's SPNxxx series
> >>> of NIC cards into DPDK 22.03. Ramaxel Memory Technology is a company
> >>> which supplies many electronic products: storage, communication,
> >>> PCB... SPNxxx is a series of PCIe NIC cards:
> >>> SPN110: 2 PORTs *25G
> >>> SPN120: 4 PORTs *25G
> >>> SPN130: 2 PORTs *100G
> >>>
> >>
> >> Hi Yanling,
> >>
> >> As far as I can see the hinic (from Huawei) and spnic drivers are
> >> alike; what is the relation between the two?
> >>
> > It is hard to create a brand new driver from scratch, so we
> > referenced the hinic driver when developing spnic.
> >
>
> That is OK, but given the similarity of the code you may consider
> keeping the original code's Copyright; I didn't investigate at
> that level but cc'ed the hinic maintainers for info.
> Also cc'ed Hemant for guidance.
>
>
Sorry for the late response; it was Spring Festival and I was on
vacation, just back to work.
Hemant gave the guidance already, but we do not want to keep another
company's copyright in our code. How should we modify the code so that
it meets DPDK's requirements and can be accepted with our copyright
only?
> But my question was more related to the HW, is there any relation
> between the hinic HW and the spnic HW? Like one being derived from the other,
> etc...
I'm not clear on the relation between the hinic and spnic HW, since we
do not know what the hinic HW is.
>
> >>> The following is main features of our SPNIC:
> >>> - TSO
> >>> - LRO
> >>> - Flow control
> >>> - SR-IOV(Partially supported)
> >>> - VLAN offload
> >>> - VLAN filter
> >>> - CRC offload
> >>> - Promiscuous mode
> >>> - RSS
> >>>
> >>> v6->v5, No real changes:
> >>> 1. Move the fix of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS from patch 26
> >>> to patch 2; 2. Change the description of patch 26.
> >>>
> >>> v5->v4:
> >>> 1. Add prefix "spinc_" for external functions;
> >>> 2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
> >>> 3. Do not use void* for keeping the type information
> >>>
> >>> v3->v4:
> >>> 1. Fix ABI test failure;
> >>> 2. Remove some descriptions in spnic.rst.
> >>>
> >>> v2->v3:
> >>> 1. Fix clang compiling failure.
> >>>
> >>> v1->v2:
> >>> 1. Fix coding style issues and compiling failures;
> >>> 2. Only support linux in meson.build;
> >>> 3. Use CLOCK_MONOTONIC_COARSE instead of
> >>> CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW; 4. Fix time_before();
> >>> 5. Remove redundant checks in spnic_dev_configure();
> >>>
> >>> Yanling Song (26):
> >>> drivers/net: introduce a new PMD driver
> >>> net/spnic: initialize the HW interface
> >>> net/spnic: add mbox message channel
> >>> net/spnic: introduce event queue
> >>> net/spnic: add mgmt module
> >>> net/spnic: add cmdq and work queue
> >>> net/spnic: add interface handling cmdq message
> >>> net/spnic: add hardware info initialization
> >>> net/spnic: support MAC and link event handling
> >>> net/spnic: add function info initialization
> >>> net/spnic: add queue pairs context initialization
> >>> net/spnic: support mbuf handling of Tx/Rx
> >>> net/spnic: support Rx congfiguration
> >>> net/spnic: add port/vport enable
> >>> net/spnic: support IO packets handling
> >>> net/spnic: add device configure/version/info
> >>> net/spnic: support RSS configuration update and get
> >>> net/spnic: support VLAN filtering and offloading
> >>> net/spnic: support promiscuous and allmulticast Rx modes
> >>> net/spnic: support flow control
> >>> net/spnic: support getting Tx/Rx queues info
> >>> net/spnic: support xstats statistics
> >>> net/spnic: support VFIO interrupt
> >>> net/spnic: support Tx/Rx queue start/stop
> >>> net/spnic: add doc infrastructure
> >>> net/spnic: fixes unsafe C style code
> >>
> >> <...>
> >
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v5] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-31 0:05 8% ` [PATCH v5] " Michael Barker
@ 2022-02-12 14:00 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-02-12 14:00 UTC (permalink / raw)
To: Michael Barker; +Cc: dev, Ray Kinsella
31/01/2022 01:05, Michael Barker:
> When compiling with clang using -Wpedantic (or -Wgcc-compat) the use of
> diagnose_if kicks up a warning:
>
> .../include/rte_interrupts.h:623:1: error: 'diagnose_if' is a clang
> extension [-Werror,-Wgcc-compat]
> __rte_internal
> ^
> .../include/rte_compat.h:36:16: note: expanded from macro '__rte_internal'
> __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
>
> This change ignores the '-Wgcc-compat' warning in the specific location
> where the warning occurs. It is safe to do in this circumstance as the
> specific macro is only defined when using the clang compiler.
>
> Signed-off-by: Michael Barker <mikeb01@gmail.com>
Applied with the following title, thanks:
eal: ignore gcc-compat warning in clang-only macro
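For reference, the technique boils down to something like this
(hypothetical declaration, not the exact rte_compat.h hunk):

    #if defined(__clang__)
    #pragma clang diagnostic push
    #pragma clang diagnostic ignored "-Wgcc-compat"
    void internal_fn(void)
            __attribute__((diagnose_if(1, "Symbol is not public ABI", "error")));
    #pragma clang diagnostic pop
    #endif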
^ permalink raw reply [relevance 0%]
* RE: [EXT] [PATCH] crypto: fix misspelled key in qt format
@ 2022-02-12 11:34 3% ` Akhil Goyal
2022-02-18 6:11 0% ` Kusztal, ArkadiuszX
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-12 11:34 UTC (permalink / raw)
To: Arek Kusztal, dev; +Cc: roy.fan.zhang
> This patch fixes the misspelled RTE_RSA_KEY_TYPE_QT;
> this will prevent checkpatch from complaining whenever
> a change to RSA is made.
>
> Fixes: 26008aaed14c ("cryptodev: add asymmetric xform and op definitions")
>
> Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> ---
Fix ABI warning.
Add libabigail.abiignore rule.
^ permalink raw reply [relevance 3%]
* [PATCH v5 2/2] buildtools/chkincs: test headers for C++ compatibility
@ 2022-02-11 11:36 11% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2022-02-11 11:36 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Aaron Conole, Michael Santana
Add support for checking each of our headers for issues when included in
a C++ file.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
.ci/linux-build.sh | 1 +
.github/workflows/build.yml | 2 +-
buildtools/chkincs/main.cpp | 4 ++++
buildtools/chkincs/meson.build | 18 ++++++++++++++++++
devtools/test-meson-builds.sh | 3 ++-
5 files changed, 26 insertions(+), 2 deletions(-)
create mode 100644 buildtools/chkincs/main.cpp
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index c10c1a8ab5..67d68535e0 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -74,6 +74,7 @@ fi
if [ "$BUILD_32BIT" = "true" ]; then
OPTS="$OPTS -Dc_args=-m32 -Dc_link_args=-m32"
+ OPTS="$OPTS -Dcpp_args=-m32 -Dcpp_link_args=-m32"
export PKG_CONFIG_LIBDIR="/usr/lib32/pkgconfig"
fi
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 6cf997d6ee..d30cfd08d7 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -116,7 +116,7 @@ jobs:
libdw-dev
- name: Install i386 cross compiling packages
if: env.BUILD_32BIT == 'true'
- run: sudo apt install -y gcc-multilib
+ run: sudo apt install -y gcc-multilib g++-multilib
- name: Install aarch64 cross compiling packages
if: env.AARCH64 == 'true'
run: sudo apt install -y gcc-aarch64-linux-gnu libc6-dev-arm64-cross
diff --git a/buildtools/chkincs/main.cpp b/buildtools/chkincs/main.cpp
new file mode 100644
index 0000000000..d25bb8852a
--- /dev/null
+++ b/buildtools/chkincs/main.cpp
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+int main(void) { return 0; }
diff --git a/buildtools/chkincs/meson.build b/buildtools/chkincs/meson.build
index 7ea136ff95..790f700619 100644
--- a/buildtools/chkincs/meson.build
+++ b/buildtools/chkincs/meson.build
@@ -27,3 +27,21 @@ executable('chkincs', sources,
include_directories: includes,
dependencies: deps,
install: false)
+
+# run tests for c++ builds also
+if not add_languages('cpp', required: false)
+ subdir_done()
+endif
+
+gen_cpp_files = generator(gen_c_file_for_header,
+ output: '@BASENAME@.cpp',
+ arguments: ['@INPUT@', '@OUTPUT@'])
+
+cpp_sources = files('main.cpp')
+cpp_sources += gen_cpp_files.process(dpdk_chkinc_headers)
+
+executable('chkincs-cpp', cpp_sources,
+ cpp_args: ['-include', 'rte_config.h', cflags],
+ include_directories: includes,
+ dependencies: deps,
+ install: false)
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 4ed61328b9..c07fd16fdc 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -246,7 +246,8 @@ if check_cc_flags '-m32' ; then
export PKG_CONFIG_LIBDIR='/usr/lib/pkgconfig'
fi
target_override='i386-pc-linux-gnu'
- build build-32b cc ABI -Dc_args='-m32' -Dc_link_args='-m32'
+ build build-32b cc ABI -Dc_args='-m32' -Dc_link_args='-m32' \
+ -Dcpp_args='-m32' -Dcpp_link_args='-m32'
target_override=
unset PKG_CONFIG_LIBDIR
fi
--
2.32.0
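For illustration, the per-header sources produced by gen_c_file_for_header are essentially one-liners; a hypothetical generated file for rte_mbuf.h would be:

/* compile-only check: including the header from C++ surfaces missing
 * 'extern "C"' guards and C++ keyword clashes
 */
#include <rte_mbuf.h>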
^ permalink raw reply [relevance 11%]
* Re: [EXT] [PATCH v2 4/4] crypto: reorganize endianness comments, add crypto uint
2022-02-10 21:08 4% ` Akhil Goyal
@ 2022-02-11 10:54 0% ` Ray Kinsella
0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2022-02-11 10:54 UTC (permalink / raw)
To: Akhil Goyal
Cc: Zhang, Roy Fan, Kusztal, ArkadiuszX, David Marchand,
ray.kinsella, Ramkumar Balu, dev
Hi Akhil,
Akhil Goyal <gakhil@marvell.com> writes:
> Hi Fan,
>> Hi Akhil,
>>
>> I assume everything in asym crypto is under experimental tag at the moment
>> right?
>> The goal is to have them updated and fixed before DPDK 22.11 so the
>> experimental tag can be removed.
>>
> Asymmetric crypto APIs are marked as experimental, but the structures are not
> explicitly marked experimental.
> rte_crypto_asym_op is part of a union in rte_crypto_op, which is definitely not experimental.
> So a change in asym_op will result in ABI issues in rte_crypto_op.
>
> David/Ray: Can you review the patch 1/4 of this series from ABI compatibility point of view.
> http://patches.dpdk.org/project/dpdk/patch/20220207113555.8431-2-arkadiuszx.kusztal@intel.com/
> IMO, as per current experimental tags, we cannot change parameters inside rte_crypto_asym_op
> and subsequently in struct rte_crypto_dsa_op_param. What do you suggest?
> However, I remember some exception was added to ignore ABI issues related to asymmetric
> crypto. Could you please check why that exception is not working in this case?
>
> Regards,
> Akhil
rte_crypto_asym_op is at the end of the rte_crypto_op struct, so any
changes there are safe.
http://mails.dpdk.org/archives/test-report/2022-February/257617.html
The warning above is complaining about changes to rte_crypto_asym_op.
IMHO it is safe to suppress these warnings in libabigail.ignore.
The libabigail.ignore exceptions were reset at the 21.11 release; I
took a look and don't see anything related to asymmetric crypto prior
to that.
--
Regards, Ray K
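The layout argument above can be pictured with a reduced sketch (field names abbreviated, not the exact rte_cryptodev definitions):

/* Because the union is the last member, growing the asym side only
 * extends the tail; earlier field offsets are unchanged.
 */
struct crypto_op_sketch {
	uint8_t type;
	uint8_t status;
	/* ... other fixed-offset fields ... */
	union {
		struct { int placeholder; } sym;
		struct { int placeholder; } asym; /* safe to grow */
	} op;
};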
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-10 22:16 3% ` Thomas Monjalon
@ 2022-02-11 10:09 5% ` Ray Kinsella
2022-02-14 10:16 4% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2022-02-11 10:09 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Ferruh Yigit, Kalesh A P, dev, ajit.khaparde, asafp,
David Marchand, Andrew Rybchenko
Thomas Monjalon <thomas@monjalon.net> writes:
> 02/02/2022 12:44, Ray Kinsella:
>> Ferruh Yigit <ferruh.yigit@intel.com> writes:
>> > On 1/28/2022 12:48 PM, Kalesh A P wrote:
>> >> --- a/lib/ethdev/rte_ethdev.h
>> >> +++ b/lib/ethdev/rte_ethdev.h
>> >> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
>> >> RTE_ETH_EVENT_DESTROY, /**< port is released */
>> >> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
>> >> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
>> >> + RTE_ETH_EVENT_ERR_RECOVERING,
>> >> + /**< port recovering from an error
>> >> + *
>> >> + * PMD detected a FW reset or error condition.
>> >> + * PMD will try to recover from the error.
>> >> + * Data path may be quiesced and Control path operations
>> >> + * may fail at this time.
>> >> + */
>> >> + RTE_ETH_EVENT_RECOVERED,
>> >> + /**< port recovered from an error
>> >> + *
>> >> + * PMD has recovered from the error condition.
>> >> + * Control path and Data path are up now.
>> >> + * PMD re-configures the port to the state prior to the error.
>> >> + * Since the device has undergone a reset, flow rules
>> >> + * offloaded prior to reset may be lost and
>> >> + * the application should recreate the rules again.
>> >> + */
>> >> RTE_ETH_EVENT_MAX /**< max value of this enum */
>> >
>> >
>> > Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
>> > to evaluate if it is a false positive:
>> >
>> >
>> > 1 function with some indirect sub-type change:
>> > [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
>> > parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
>> > underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
>> > in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
>> > parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
>> > type size hasn't changed
>> > 2 enumerator insertions:
>> > 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
>> > 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
>> > 1 enumerator change:
>> > 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
>>
>> I don't immediately see the problem that this would cause.
>> There are no array sizes etc dependent on the value of MAX for instance.
>>
>> Looks safe?
>
> We never know how this enum will be used by the application.
> The max value may be used for the size of an event array.
> It looks a real ABI issue unfortunately.
Right - but we only really care about it when an array size based on MAX
is likely to be passed to DPDK, which doesn't apply in this case.
I noted that some Linux folks explicitly mark similar MAX values as not
part of the ABI.
/usr/include/linux/perf_event.h
37: PERF_TYPE_MAX, /* non-ABI */
60: PERF_COUNT_HW_MAX, /* non-ABI */
79: PERF_COUNT_HW_CACHE_MAX, /* non-ABI */
87: PERF_COUNT_HW_CACHE_OP_MAX, /* non-ABI */
94: PERF_COUNT_HW_CACHE_RESULT_MAX, /* non-ABI */
116: PERF_COUNT_SW_MAX, /* non-ABI */
149: PERF_SAMPLE_MAX = 1U << 24, /* non-ABI */
151: __PERF_SAMPLE_CALLCHAIN_EARLY = 1ULL << 63, /*
non-ABI; internal use */
189: PERF_SAMPLE_BRANCH_MAX_SHIFT /* non-ABI */
267: PERF_TXN_MAX = (1 << 8), /* non-ABI */
301: PERF_FORMAT_MAX = 1U << 4, /* non-ABI */
1067: PERF_RECORD_MAX, /* non-ABI */
1078: PERF_RECORD_KSYMBOL_TYPE_MAX /* non-ABI */
1087: PERF_BPF_EVENT_MAX, /* non-ABI */
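The concern Thomas raises is the following application-side pattern (illustrative code, not from DPDK): an array sized with the old RTE_ETH_EVENT_MAX is indexed out of bounds once a newer library delivers the new event values.

static uint64_t event_count[RTE_ETH_EVENT_MAX]; /* sized at app build time */

static int
event_cb(uint16_t port_id, enum rte_eth_event_type ev, void *cb_arg, void *ret)
{
	(void)port_id; (void)cb_arg; (void)ret;
	event_count[ev]++; /* out of bounds if ev >= the old MAX */
	return 0;
}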
>
> PS: I am not Cc'ed in this patchset,
> so copying what I said on v6 (more than a year ago):
> Please use the option --cc-cmd devtools/get-maintainer.sh
--
Regards, Ray K
^ permalink raw reply [relevance 5%]
* [PATCH v7 5/5] crypto: modify return value for asym session create
2022-02-11 9:29 1% ` [PATCH v7 2/5] crypto: use single buffer for asymmetric session Ciara Power
@ 2022-02-11 9:29 2% ` Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2022-02-11 9:29 UTC (permalink / raw)
To: dev; +Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty
Rather than returning a session pointer on success and NULL on error,
the asym session create function is modified to return an int:
0 on success, or -EINVAL/-ENOTSUP/-ENOMEM on failure.
The session to be used is passed in by the caller.
This adds clarity on the failure of the create function, which enables
treating the -ENOTSUP return as TEST_SKIPPED in test apps.
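In caller terms, the change amounts to the following minimal sketch (mirroring the test-app updates below):

void *sess = NULL;
int ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp, &sess);
if (ret == -ENOTSUP)
	return TEST_SKIPPED;	/* device cannot configure this xform */
else if (ret < 0)
	return TEST_FAILED;	/* -EINVAL or -ENOMEM */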
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
v5: Added session parameter to create session trace.
v4:
- Reordered function parameters.
- Removed docs code additions, these are included due to patch 1
changing sample doc to use literal includes.
v3:
- Fixed variable declarations, putting initialised variable last.
- Made function comment for return value more generic.
- Fixed log to include line break.
- Added documentation.
---
app/test-crypto-perf/cperf_ops.c | 12 ++-
app/test/test_cryptodev_asym.c | 109 +++++++++++++------------
doc/guides/rel_notes/release_22_03.rst | 3 +-
lib/cryptodev/rte_cryptodev.c | 28 ++++---
lib/cryptodev/rte_cryptodev.h | 13 ++-
lib/cryptodev/rte_cryptodev_trace.h | 4 +-
6 files changed, 92 insertions(+), 77 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index b8f590b397..479c40eead 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -734,7 +734,9 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform auth_xform;
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
+ void *asym_sess = NULL;
struct rte_crypto_asym_xform xform = {0};
+ int ret;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -744,11 +746,13 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp);
- if (sess == NULL)
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform,
+ sess_mp, &asym_sess);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1, "Asym session create failed\n");
return NULL;
-
- return sess;
+ }
+ return asym_sess;
}
#ifdef RTE_LIB_SECURITY
/*
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index f0cb839a49..c2e1b4dafd 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -317,7 +317,7 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
uint8_t input[TEST_DATA_SIZE] = {0};
uint8_t *result = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
xform_tc.next = NULL;
xform_tc.xform_type = data_tc->modex.xform_type;
@@ -452,14 +452,14 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
- ts_params->session_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
+ ts_params->session_mpool, &sess);
+ if (ret < 0) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -646,7 +646,7 @@ test_rsa_sign_verify(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -659,12 +659,12 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -686,7 +686,7 @@ test_rsa_enc_dec(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -699,11 +699,11 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -726,7 +726,7 @@ test_rsa_sign_verify_crt(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check im PMD feature flag for RSA quintuple key type support.
@@ -738,12 +738,12 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -766,7 +766,7 @@ test_rsa_enc_dec_crt(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
@@ -778,12 +778,12 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1047,7 +1047,7 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
@@ -1074,12 +1074,12 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1130,7 +1130,7 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1152,12 +1152,12 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1211,7 +1211,7 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1241,12 +1241,12 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1300,7 +1300,7 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t out_pub_key[TEST_DH_MOD_LEN];
uint8_t out_prv_key[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform pub_key_xform;
@@ -1330,12 +1330,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1419,12 +1419,12 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1543,13 +1543,13 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1653,13 +1653,14 @@ test_dsa_sign(void)
uint8_t r[TEST_DH_MOD_LEN];
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
+ int ret;
- sess = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
/* set up crypto op data structure */
@@ -1788,7 +1789,7 @@ test_ecdsa_sign_verify(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -1833,12 +1834,12 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
@@ -1990,7 +1991,7 @@ test_ecpm(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -2035,12 +2036,12 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index a930cbbad6..640691c3ef 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -123,7 +123,8 @@ API Changes
The session structure was moved to ``cryptodev_pmd.h``,
hiding it from applications.
The API ``rte_cryptodev_asym_session_init`` was removed as the initialization
- is now moved to ``rte_cryptodev_asym_session_create``.
+ is now moved to ``rte_cryptodev_asym_session_create``, which was updated to
+ return an integer value to indicate initialisation errors.
ABI Changes
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 91d48d5886..727d271fb9 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -1912,9 +1912,10 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
return sess;
}
-void *
+int
rte_cryptodev_asym_session_create(uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp)
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
+ void **session)
{
struct rte_cryptodev_asym_session *sess;
uint32_t session_priv_data_sz;
@@ -1926,17 +1927,17 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return NULL;
+ return -EINVAL;
}
dev = rte_cryptodev_pmd_get_dev(dev_id);
if (dev == NULL)
- return NULL;
+ return -EINVAL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
- return NULL;
+ return -EINVAL;
}
session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
@@ -1946,22 +1947,23 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
CDEV_LOG_DEBUG(
"The private session data size used when creating the mempool is smaller than this device's private session data.");
- return NULL;
+ return -EINVAL;
}
/* Verify if provided mempool can hold elements big enough. */
if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
- return NULL;
+ return -EINVAL;
}
/* Allocate a session structure from the session pool */
- if (rte_mempool_get(mp, (void **)&sess)) {
+ if (rte_mempool_get(mp, session)) {
CDEV_LOG_ERR("couldn't get object from session mempool");
- return NULL;
+ return -ENOMEM;
}
+ sess = *session;
sess->driver_id = dev->driver_id;
sess->user_data_sz = pool_priv->user_data_sz;
sess->max_priv_data_sz = pool_priv->max_priv_session_sz;
@@ -1969,7 +1971,7 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
/* Clear device session pointer.*/
memset(sess->sess_private_data, 0, session_priv_data_sz + sess->user_data_sz);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, -ENOTSUP);
if (sess->sess_private_data[0] == 0) {
ret = dev->dev_ops->asym_session_configure(dev, xforms, sess);
@@ -1977,12 +1979,12 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
CDEV_LOG_ERR(
"dev_id %d failed to configure session details",
dev_id);
- return NULL;
+ return ret;
}
}
- rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp);
- return sess;
+ rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp, sess);
+ return 0;
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 1d7bd07680..19e2e70287 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -996,14 +996,19 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
* processed with this session
* @param mp mempool to allocate asymmetric session
* objects from
+ * @param session void ** for session to be used
+ *
* @return
- * - On success return pointer to asym-session
- * - On failure returns NULL
+ * - 0 on success.
+ * - -EINVAL on invalid arguments.
+ * - -ENOMEM on memory error for session allocation.
+ * - -ENOTSUP if device doesn't support session configuration.
*/
__rte_experimental
-void *
+int
rte_cryptodev_asym_session_create(uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp);
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
+ void **session);
/**
* Frees symmetric crypto session header, after checking that all
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index 005a4fe38b..a3f6048e7d 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -96,10 +96,12 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_create,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool,
+ void *sess),
rte_trace_point_emit_u8(dev_id);
rte_trace_point_emit_ptr(xforms);
rte_trace_point_emit_ptr(mempool);
+ rte_trace_point_emit_ptr(sess);
)
RTE_TRACE_POINT(
--
2.25.1
^ permalink raw reply [relevance 2%]
* [PATCH v7 2/5] crypto: use single buffer for asymmetric session
@ 2022-02-11 9:29 1% ` Ciara Power
2022-02-11 9:29 2% ` [PATCH v7 5/5] crypto: modify return value for asym session create Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2022-02-11 9:29 UTC (permalink / raw)
To: dev
Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty,
Ankur Dwivedi, Tejasree Kondoj, John Griffin, Fiona Trahe,
Deepak Kumar Jain
Rather than using a session buffer that contains pointers to private
session data elsewhere, use a single session buffer.
This session is created for a driver ID, and the mempool element
contains space for the largest private session data needed by any driver.
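In outline, the resulting single-buffer element looks like this (an approximation of the struct in cryptodev_pmd.h, not its exact definition):

struct asym_session_sketch {
	uint8_t driver_id;
	uint16_t max_priv_data_sz;	/* largest private size across devices */
	uint16_t user_data_sz;
	uint8_t sess_private_data[];	/* private data in the same element */
};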
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
v7: Fixed compilation warning for unused parameter.
v6:
- Reordered variable declarations to follow cnxk file format.
- Added fix for the crypto perf app asymmetric modex operation: there
  is no longer a need for a private mempool, and the
  rte_cryptodev_asym_session_pool_create API should be used
  (see the sketch after this list).
v5:
- Removed get API for session private data, can be accessed directly.
- Modified test application to create a session mempool for
TEST_NUM_SESSIONS rather than TEST_NUM_SESSIONS * 2.
- Reworded create session function description.
- Removed sess parameter from create session trace,
to be added in a later patch.
v4:
- Merged asym crypto session clear and free functions.
- Reordered some function parameters.
- Updated trace function for asym crypto session create.
- Fixed cnxk clear; the PMD no longer needs to put private data
  back into a mempool.
- Renamed struct field for max private session size.
- Replaced __extension__ with RTE_STD_C11.
- Moved some parameter validity checks to before functional code.
- Reworded release note.
- Removed mempool parameter from session configure function.
- Removed docs code additions, these are included due to patch 1
changing sample doc to use literal includes.
v3:
- Corrected formatting of struct comments.
- Increased size of max_priv_session_sz to uint16_t.
- Removed trace for asym session init function that was
previously removed.
- Added documentation.
v2:
- Renamed function typedef from "free" to "clear" as session private
data isn't being freed in that function.
- Moved user data API to separate patch.
- Minor fixes to comments, formatting, return values.
---
app/test-crypto-perf/cperf_ops.c | 14 +-
app/test-crypto-perf/cperf_test_throughput.c | 8 +-
app/test-crypto-perf/main.c | 31 +--
app/test/test_cryptodev_asym.c | 272 +++++--------------
doc/guides/prog_guide/cryptodev_lib.rst | 21 +-
doc/guides/rel_notes/release_22_03.rst | 7 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 8 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 8 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 22 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 3 +-
drivers/crypto/octeontx/otx_cryptodev_ops.c | 32 +--
drivers/crypto/openssl/rte_openssl_pmd.c | 4 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 24 +-
drivers/crypto/qat/qat_asym.c | 54 +---
drivers/crypto/qat/qat_asym.h | 5 +-
lib/cryptodev/cryptodev_pmd.h | 23 +-
lib/cryptodev/cryptodev_trace_points.c | 9 +-
lib/cryptodev/rte_cryptodev.c | 213 ++++++++-------
lib/cryptodev/rte_cryptodev.h | 97 ++++---
lib/cryptodev/rte_cryptodev_trace.h | 38 ++-
lib/cryptodev/version.map | 7 +-
21 files changed, 319 insertions(+), 581 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index d975ae1ab8..b125c699de 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -735,7 +735,6 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
struct rte_crypto_asym_xform xform = {0};
- int rc;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -745,19 +744,10 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(sess_mp);
+ sess = (void *)rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp);
if (sess == NULL)
return NULL;
- rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
- &xform, priv_mp);
- if (rc < 0) {
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id,
- (void *)sess);
- rte_cryptodev_asym_session_free((void *)sess);
- }
- return NULL;
- }
+
return sess;
}
#ifdef RTE_LIB_SECURITY
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 51512af2ad..ee21ff27f7 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -35,11 +35,9 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
if (!ctx)
return;
if (ctx->sess) {
- if (ctx->options->op_type == CPERF_ASYM_MODEX) {
- rte_cryptodev_asym_session_clear(ctx->dev_id,
- (void *)ctx->sess);
- rte_cryptodev_asym_session_free((void *)ctx->sess);
- }
+ if (ctx->options->op_type == CPERF_ASYM_MODEX)
+ rte_cryptodev_asym_session_free(ctx->dev_id,
+ (void *)ctx->sess);
#ifdef RTE_LIB_SECURITY
else if (ctx->options->op_type == CPERF_PDCP ||
ctx->options->op_type == CPERF_DOCSIS ||
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 6fdb92fb7c..2531de43a2 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -69,39 +69,16 @@ const struct cperf_test cperf_testmap[] = {
};
static int
-create_asym_op_pool_socket(uint8_t dev_id, int32_t socket_id,
- uint32_t nb_sessions)
+create_asym_op_pool_socket(int32_t socket_id, uint32_t nb_sessions)
{
char mp_name[RTE_MEMPOOL_NAMESIZE];
struct rte_mempool *mpool = NULL;
- unsigned int session_size =
- RTE_MAX(rte_cryptodev_asym_get_private_session_size(dev_id),
- rte_cryptodev_asym_get_header_session_size());
-
- if (session_pool_socket[socket_id].priv_mp == NULL) {
- snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "perf_asym_priv_pool%u",
- socket_id);
-
- mpool = rte_mempool_create(mp_name, nb_sessions, session_size,
- 0, 0, NULL, NULL, NULL, NULL,
- socket_id, 0);
- if (mpool == NULL) {
- printf("Cannot create pool \"%s\" on socket %d\n",
- mp_name, socket_id);
- return -ENOMEM;
- }
- printf("Allocated pool \"%s\" on socket %d\n", mp_name,
- socket_id);
- session_pool_socket[socket_id].priv_mp = mpool;
- }
if (session_pool_socket[socket_id].sess_mp == NULL) {
-
snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "perf_asym_sess_pool%u",
socket_id);
- mpool = rte_mempool_create(mp_name, nb_sessions,
- session_size, 0, 0, NULL, NULL, NULL,
- NULL, socket_id, 0);
+ mpool = rte_cryptodev_asym_session_pool_create(mp_name,
+ nb_sessions, 0, socket_id);
if (mpool == NULL) {
printf("Cannot create pool \"%s\" on socket %d\n",
mp_name, socket_id);
@@ -336,7 +313,7 @@ cperf_initialize_cryptodev(struct cperf_options *opts, uint8_t *enabled_cdevs)
}
if (opts->op_type == CPERF_ASYM_MODEX)
- ret = create_asym_op_pool_socket(cdev_id, socket_id,
+ ret = create_asym_op_pool_socket(socket_id,
sessions_needed);
else
ret = fill_session_pool_socket(socket_id, max_sess_size,
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 8d7290f9ed..88433faf1c 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -452,7 +452,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(ts_params->session_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
+ ts_params->session_mpool);
if (!sess) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
@@ -462,15 +463,6 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform_tc,
- ts_params->session_mpool) < 0) {
- snprintf(test_msg, ASYM_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
rte_crypto_op_attach_asym_session(op, sess);
} else {
asym_op->xform = &xform_tc;
@@ -512,10 +504,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
snprintf(test_msg, ASYM_TEST_MSG_LEN, "SESSIONLESS PASS");
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -669,18 +659,11 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -688,9 +671,7 @@ test_rsa_sign_verify(void)
status = queue_ops_rsa_sign_verify(sess);
error_exit:
-
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -718,17 +699,10 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -737,8 +711,7 @@ test_rsa_enc_dec(void)
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -765,28 +738,20 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
status = TEST_FAILED;
- return status;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify_crt\n");
- status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_sign_verify(sess);
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -813,27 +778,20 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec_crt\n");
status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_enc_dec(sess);
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -927,7 +885,6 @@ testsuite_setup(void)
/* configure qp */
ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
ts_params->qp_conf.mp_session = ts_params->session_mpool;
- ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
dev_id, qp_id, &ts_params->qp_conf,
@@ -936,21 +893,9 @@ testsuite_setup(void)
qp_id, dev_id);
}
- /* setup asym session pool */
- unsigned int session_size = RTE_MAX(
- rte_cryptodev_asym_get_private_session_size(dev_id),
- rte_cryptodev_asym_get_header_session_size());
- /*
- * Create mempool with TEST_NUM_SESSIONS * 2,
- * to include the session headers
- */
- ts_params->session_mpool = rte_mempool_create(
- "test_asym_sess_mp",
- TEST_NUM_SESSIONS * 2,
- session_size,
- 0, 0, NULL, NULL, NULL,
- NULL, SOCKET_ID_ANY,
- 0);
+ ts_params->session_mpool = rte_cryptodev_asym_session_pool_create(
+ "test_asym_sess_mp", TEST_NUM_SESSIONS, 0,
+ SOCKET_ID_ANY);
TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
"session mempool allocation failed");
@@ -1107,14 +1052,6 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1137,11 +1074,11 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1176,10 +1113,8 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -1199,14 +1134,6 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1225,11 +1152,11 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1265,10 +1192,8 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1290,14 +1215,6 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1324,11 +1241,11 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1365,10 +1282,8 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data, asym_op->dh.priv_key.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1391,15 +1306,6 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform pub_key_xform;
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1423,11 +1329,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.pub_key.length = sizeof(out_pub_key);
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1462,10 +1369,8 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
out_pub_key, asym_op->dh.pub_key.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1514,7 +1419,7 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
@@ -1523,15 +1428,6 @@ test_mod_inv(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modinv_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* generate crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1583,10 +1479,8 @@ test_mod_inv(void)
}
error_exit:
- if (sess) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op)
rte_crypto_op_free(op);
@@ -1649,7 +1543,7 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1,
"line %u "
@@ -1659,15 +1553,6 @@ test_mod_exp(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modex_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
asym_op = op->asym;
memcpy(input, base, sizeof(base));
asym_op->modex.base.data = input;
@@ -1706,10 +1591,8 @@ test_mod_exp(void)
}
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1771,7 +1654,7 @@ test_dsa_sign(void)
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool);
if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
@@ -1800,15 +1683,6 @@ test_dsa_sign(void)
debug_hexdump(stdout, "priv_key: ", dsa_xform.dsa.x.data,
dsa_xform.dsa.x.length);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &dsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* attach asymmetric crypto session to crypto operations */
rte_crypto_op_attach_asym_session(op, sess);
asym_op->dsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
@@ -1882,10 +1756,8 @@ test_dsa_sign(void)
status = TEST_FAILED;
}
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -1944,15 +1816,6 @@ test_ecdsa_sign_verify(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -1970,11 +1833,11 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2082,10 +1945,8 @@ test_ecdsa_sign_verify(enum curve curve_id)
}
exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -2157,15 +2018,6 @@ test_ecpm(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -2183,11 +2035,11 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2255,10 +2107,8 @@ test_ecpm(enum curve curve_id)
}
exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 9f33f7a177..b4dbd384bf 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1038,20 +1038,17 @@ It is the application's responsibility to create and manage the session mempools
Application using both symmetric and asymmetric sessions should allocate and maintain
different sessions pools for each type.
-An application can use ``rte_cryptodev_get_asym_session_private_size()`` to
-get the private size of asymmetric session on a given crypto device. This
-function would allow an application to calculate the max device asymmetric
-session size of all crypto devices to create a single session mempool.
-If instead an application creates multiple asymmetric session mempools,
-the Crypto device framework also provides ``rte_cryptodev_asym_get_header_session_size()`` to get
-the size of an uninitialized session.
+An application can use ``rte_cryptodev_asym_session_pool_create()`` to create a mempool
+with a specified number of elements. The element size will allow for the session header,
+and the max private session size.
+The max private session size is chosen based on available crypto devices,
+the biggest private session size is used. This means any of those devices can be used,
+and the mempool element will have available space for its private session data.
Once the session mempools have been created, ``rte_cryptodev_asym_session_create()``
-is used to allocate an uninitialized asymmetric session from the given mempool.
-The session then must be initialized using ``rte_cryptodev_asym_session_init()``
-for each of the required crypto devices. An asymmetric transform chain
-is used to specify the operation and its parameters. See the section below for
-details on transforms.
+is used to allocate and initialize an asymmetric session from the given mempool.
+An asymmetric transform chain is used to specify the operation and its parameters.
+See the section below for details on transforms.
When a session is no longer used, user must call ``rte_cryptodev_asym_session_clear()``
for each of the crypto devices that are using the session, to free all driver
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index a820cc5596..ea4c5309a0 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -112,6 +112,13 @@ API Changes
* ethdev: Old public macros and enumeration constants without ``RTE_ETH_`` prefix,
which are kept for backward compatibility, are marked as deprecated.
+* cryptodev: The asymmetric session handling was modified to use a single
+ mempool object. An API ``rte_cryptodev_asym_session_pool_create`` was added
+ to create a mempool with element size big enough to hold the generic asymmetric
+ session header and max size for a device private session data.
+ The API ``rte_cryptodev_asym_session_init`` was removed as the initialization
+ is now moved to ``rte_cryptodev_asym_session_create``.
+
ABI Changes
-----------
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index d217bbf383..c4d5d039ec 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -157,8 +157,8 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- ae_sess = get_asym_session_private_data(
- asym_op->session, cn10k_cryptodev_driver_id);
+ ae_sess = (struct cnxk_ae_sess *)
+ asym_op->session->sess_private_data;
ret = cnxk_ae_enqueue(qp, op, infl_req, &inst[0],
ae_sess);
if (unlikely(ret))
@@ -431,8 +431,8 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn10k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ op->session->sess_private_data;
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index ac1953b66d..b8ad4bf211 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -138,8 +138,8 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- sess = get_asym_session_private_data(
- asym_op->session, cn9k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ asym_op->session->sess_private_data;
ret = cnxk_ae_enqueue(qp, op, infl_req, inst, sess);
inst->w7.u64 = sess->cpt_inst_w7;
} else {
@@ -453,8 +453,8 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn9k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ op->session->sess_private_data;
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index a5fb68da02..7237dacb48 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -658,10 +658,9 @@ void
cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- struct rte_mempool *sess_mp;
struct cnxk_ae_sess *priv;
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = (struct cnxk_ae_sess *) sess->sess_private_data;
if (priv == NULL)
return;
@@ -670,40 +669,29 @@ cnxk_ae_session_clear(struct rte_cryptodev *dev,
/* Reset and free object back to pool */
memset(priv, 0, cnxk_ae_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
int
cnxk_ae_session_cfg(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_cryptodev_asym_session *sess)
{
+ struct cnxk_ae_sess *priv =
+ (struct cnxk_ae_sess *) sess->sess_private_data;
struct cnxk_cpt_vf *vf = dev->data->dev_private;
struct roc_cpt *roc_cpt = &vf->cpt;
- struct cnxk_ae_sess *priv;
union cpt_inst_w7 w7;
int ret;
- if (rte_mempool_get(pool, (void **)&priv))
- return -ENOMEM;
-
- memset(priv, 0, sizeof(struct cnxk_ae_sess));
-
ret = cnxk_ae_fill_session_parameters(priv, xform);
- if (ret) {
- rte_mempool_put(pool, priv);
+ if (ret)
return ret;
- }
w7.u64 = 0;
w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_AE];
priv->cpt_inst_w7 = w7.u64;
priv->cnxk_fpm_iova = vf->cnxk_fpm_iova;
priv->ec_grp = vf->ec_grp;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 0656ba9675..ab0f00ee7c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -122,8 +122,7 @@ void cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
int cnxk_ae_session_cfg(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool);
+ struct rte_cryptodev_asym_session *sess);
void cnxk_cpt_dump_on_err(struct cnxk_cpt_qp *qp);
static inline union rte_event_crypto_metadata *
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index f7ca8a8a8e..0cb6dbb38c 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -375,35 +375,24 @@ otx_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
}
static int
-otx_cpt_asym_session_cfg(struct rte_cryptodev *dev,
+otx_cpt_asym_session_cfg(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform __rte_unused,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_cryptodev_asym_session *sess)
{
- struct cpt_asym_sess_misc *priv;
+ struct cpt_asym_sess_misc *priv = (struct cpt_asym_sess_misc *)
+ sess->sess_private_data;
int ret;
CPT_PMD_INIT_FUNC_TRACE();
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
ret = cpt_fill_asym_session_parameters(priv, xform);
if (ret) {
CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
return ret;
}
priv->cpt_inst_w7 = 0;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
@@ -412,11 +401,10 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
CPT_PMD_INIT_FUNC_TRACE();
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = (struct cpt_asym_sess_misc *) sess->sess_private_data;
if (priv == NULL)
return;
@@ -424,9 +412,6 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
/* Free resources allocated during session configure */
cpt_free_asym_session_parameters(priv);
memset(priv, 0, otx_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
static __rte_always_inline void * __rte_hot
@@ -471,8 +456,8 @@ otx_cpt_enq_single_asym(struct cpt_instance *instance,
return NULL;
}
- sess = get_asym_session_private_data(asym_op->session,
- otx_cryptodev_driver_id);
+ sess = (struct cpt_asym_sess_misc *)
+ asym_op->session->sess_private_data;
/* Store phys_addr of the mdata to meta_buf */
params.meta_buf = rte_mempool_virt2iova(mdata);
@@ -852,8 +837,7 @@ otx_cpt_asym_post_process(struct rte_crypto_op *cop,
struct rte_crypto_asym_op *op = cop->asym;
struct cpt_asym_sess_misc *sess;
- sess = get_asym_session_private_data(op->session,
- otx_cryptodev_driver_id);
+ sess = (struct cpt_asym_sess_misc *) op->session->sess_private_data;
switch (sess->xfrm_type) {
case RTE_CRYPTO_ASYM_XFORM_RSA:
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5794ed8159..d80e1052e2 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -748,9 +748,7 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
} else {
if (likely(op->asym->session != NULL))
asym_sess = (struct openssl_asym_session *)
- get_asym_session_private_data(
- op->asym->session,
- cryptodev_driver_id);
+ op->asym->session->sess_private_data;
if (asym_sess == NULL)
op->status =
RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 52715f86f8..556fd226ed 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -1119,8 +1119,7 @@ static int openssl_set_asym_session_parameters(
static int
openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_cryptodev_asym_session *sess)
{
void *asym_sess_private_data;
int ret;
@@ -1130,25 +1129,14 @@ openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
return -EINVAL;
}
- if (rte_mempool_get(mempool, &asym_sess_private_data)) {
- CDEV_LOG_ERR(
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
+ asym_sess_private_data = sess->sess_private_data;
ret = openssl_set_asym_session_parameters(asym_sess_private_data,
xform);
if (ret != 0) {
OPENSSL_LOG(ERR, "failed configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(mempool, asym_sess_private_data);
return ret;
}
- set_asym_session_private_data(sess, dev->driver_id,
- asym_sess_private_data);
-
return 0;
}
@@ -1206,19 +1194,15 @@ static void openssl_reset_asym_session(struct openssl_asym_session *sess)
* so it doesn't leave key material behind
*/
static void
-openssl_pmd_asym_session_clear(struct rte_cryptodev *dev,
+openssl_pmd_asym_session_clear(struct rte_cryptodev *dev __rte_unused,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = sess->sess_private_data;
/* Zero out the whole structure */
if (sess_priv) {
openssl_reset_asym_session(sess_priv);
memset(sess_priv, 0, sizeof(struct openssl_asym_session));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
}
}
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 09d8761c5f..f46eefd4b3 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -492,8 +492,7 @@ qat_asym_build_request(void *in_op,
op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
ctx = (struct qat_asym_session *)
- get_asym_session_private_data(
- op->asym->session, qat_asym_driver_id);
+ op->asym->session->sess_private_data;
if (unlikely(ctx == NULL)) {
QAT_LOG(ERR, "Session has not been created for this device");
goto error;
@@ -711,8 +710,8 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
if (rx_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- ctx = (struct qat_asym_session *)get_asym_session_private_data(
- rx_op->asym->session, qat_asym_driver_id);
+ ctx = (struct qat_asym_session *)
+ rx_op->asym->session->sess_private_data;
qat_asym_collect_response(rx_op, cookie, ctx->xform);
} else if (rx_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
qat_asym_collect_response(rx_op, cookie, rx_op->asym->xform);
@@ -726,61 +725,42 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_cryptodev_asym_session *sess)
{
- int err = 0;
- void *sess_private_data;
struct qat_asym_session *session;
- if (rte_mempool_get(mempool, &sess_private_data)) {
- QAT_LOG(ERR,
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
- session = sess_private_data;
+ session = (struct qat_asym_session *) sess->sess_private_data;
if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) {
if (xform->modex.exponent.length == 0 ||
xform->modex.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod exp input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) {
if (xform->modinv.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod inv input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
if (xform->rsa.n.length == 0) {
QAT_LOG(ERR, "Invalid rsa input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
|| xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
QAT_LOG(ERR, "Invalid asymmetric crypto xform");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
} else {
QAT_LOG(ERR, "Asymmetric crypto xform not implemented");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
session->xform = xform;
- qat_asym_build_req_tmpl(sess_private_data);
- set_asym_session_private_data(sess, dev->driver_id,
- sess_private_data);
+ qat_asym_build_req_tmpl(session);
return 0;
-error:
- rte_mempool_put(mempool, sess_private_data);
- return err;
}
unsigned int qat_asym_session_get_private_size(
@@ -793,15 +773,9 @@ void
qat_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = sess->sess_private_data;
struct qat_asym_session *s = (struct qat_asym_session *)sess_priv;
- if (sess_priv) {
+ if (sess_priv)
memset(s, 0, qat_asym_session_get_private_size(dev));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
- }
}
diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h
index 308b6b2e0b..c9242a12ca 100644
--- a/drivers/crypto/qat/qat_asym.h
+++ b/drivers/crypto/qat/qat_asym.h
@@ -46,10 +46,9 @@ struct qat_asym_session {
};
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool);
+ struct rte_cryptodev_asym_session *sess);
unsigned int
qat_asym_session_get_private_size(struct rte_cryptodev *dev);
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index b9146f652c..142bfb7c66 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -319,7 +319,6 @@ typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
* @param dev Crypto device pointer
* @param xform Single or chain of crypto xforms
* @param session Pointer to cryptodev's private session structure
- * @param mp Mempool where the private session is allocated
*
* @return
* - Returns 0 if private session structure have been created successfully.
@@ -329,8 +328,7 @@ typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
*/
typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *session,
- struct rte_mempool *mp);
+ struct rte_cryptodev_asym_session *session);
/**
* Free driver private session data.
*
@@ -340,12 +338,12 @@ typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_sym_session *sess);
/**
- * Free asymmetric session private data.
+ * Clear asymmetric session private data.
*
* @param dev Crypto device pointer
* @param sess Cryptodev session structure
*/
-typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
+typedef void (*cryptodev_asym_clear_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
/**
* Perform actual crypto processing (encrypt/digest or auth/decrypt)
@@ -429,7 +427,7 @@ struct rte_cryptodev_ops {
/**< Configure asymmetric Crypto session. */
cryptodev_sym_free_session_t sym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_asym_free_session_t asym_session_clear;
+ cryptodev_asym_clear_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
union {
cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
@@ -627,17 +625,4 @@ set_sym_session_private_data(struct rte_cryptodev_sym_session *sess,
sess->sess_data[driver_id].data = private_data;
}
-static inline void *
-get_asym_session_private_data(const struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id) {
- return sess->sess_private_data[driver_id];
-}
-
-static inline void
-set_asym_session_private_data(struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id, void *private_data)
-{
- sess->sess_private_data[driver_id] = private_data;
-}
-
#endif /* _CRYPTODEV_PMD_H_ */
diff --git a/lib/cryptodev/cryptodev_trace_points.c b/lib/cryptodev/cryptodev_trace_points.c
index 5d58951fd5..c5bfe08b79 100644
--- a/lib/cryptodev/cryptodev_trace_points.c
+++ b/lib/cryptodev/cryptodev_trace_points.c
@@ -24,6 +24,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_queue_pair_setup,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_pool_create,
lib.cryptodev.sym.pool.create)
+RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_pool_create,
+ lib.cryptodev.asym.pool.create)
+
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_create,
lib.cryptodev.sym.create)
@@ -39,15 +42,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_free,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_init,
lib.cryptodev.sym.init)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_init,
- lib.cryptodev.asym.init)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_clear,
lib.cryptodev.sym.clear)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_clear,
- lib.cryptodev.asym.clear)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_enqueue_burst,
lib.cryptodev.enq.burst)
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index a40536c5ea..b056d88ac2 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -195,7 +195,7 @@ const char *rte_crypto_asym_op_strings[] = {
};
/**
- * The private data structure stored in the session mempool private data.
+ * The private data structure stored in the sym session mempool private data.
*/
struct rte_cryptodev_sym_session_pool_private_data {
uint16_t nb_drivers;
@@ -204,6 +204,14 @@ struct rte_cryptodev_sym_session_pool_private_data {
/**< session user data will be placed after sess_data */
};
+/**
+ * The private data structure stored in the asym session mempool private data.
+ */
+struct rte_cryptodev_asym_session_pool_private_data {
+ uint16_t max_priv_session_sz;
+ /**< Size of private session data used when creating mempool */
+};
+
int
rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
const char *algo_string)
@@ -1751,47 +1759,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mp)
-{
- struct rte_cryptodev *dev;
- uint8_t index;
- int ret;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (sess == NULL || xforms == NULL || dev == NULL)
- return -EINVAL;
-
- index = dev->driver_id;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure,
- -ENOTSUP);
-
- if (sess->sess_private_data[index] == NULL) {
- ret = dev->dev_ops->asym_session_configure(dev,
- xforms,
- sess, mp);
- if (ret < 0) {
- CDEV_LOG_ERR(
- "dev_id %d failed to configure session details",
- dev_id);
- return ret;
- }
- }
-
- rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
- return 0;
-}
-
struct rte_mempool *
rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t user_data_size,
@@ -1834,6 +1801,53 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
return mp;
}
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id)
+{
+ struct rte_mempool *mp;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ uint32_t obj_sz, obj_sz_aligned;
+ uint8_t dev_id, priv_sz, max_priv_sz = 0;
+
+ for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++)
+ if (rte_cryptodev_is_valid_dev(dev_id)) {
+ priv_sz = rte_cryptodev_asym_get_private_session_size(dev_id);
+ if (priv_sz > max_priv_sz)
+ max_priv_sz = priv_sz;
+ }
+ if (max_priv_sz == 0) {
+ CDEV_LOG_INFO("Could not set max private session size\n");
+ return NULL;
+ }
+
+ obj_sz = rte_cryptodev_asym_get_header_session_size() + max_priv_sz;
+ obj_sz_aligned = RTE_ALIGN_CEIL(obj_sz, RTE_CACHE_LINE_SIZE);
+
+ mp = rte_mempool_create(name, nb_elts, obj_sz_aligned, cache_size,
+ (uint32_t)(sizeof(*pool_priv)),
+ NULL, NULL, NULL, NULL,
+ socket_id, 0);
+ if (mp == NULL) {
+ CDEV_LOG_ERR("%s(name=%s) failed, rte_errno=%d\n",
+ __func__, name, rte_errno);
+ return NULL;
+ }
+
+ pool_priv = rte_mempool_get_priv(mp);
+ if (!pool_priv) {
+ CDEV_LOG_ERR("%s(name=%s) failed to get private data\n",
+ __func__, name);
+ rte_mempool_free(mp);
+ return NULL;
+ }
+ pool_priv->max_priv_session_sz = max_priv_sz;
+
+ rte_cryptodev_trace_asym_session_pool_create(name, nb_elts,
+ cache_size, mp);
+ return mp;
+}
+
static unsigned int
rte_cryptodev_sym_session_data_size(struct rte_cryptodev_sym_session *sess)
{
@@ -1895,19 +1909,44 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
}
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mp)
+rte_cryptodev_asym_session_create(uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp)
{
struct rte_cryptodev_asym_session *sess;
- unsigned int session_size =
+ uint32_t session_priv_data_sz;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ unsigned int session_header_size =
rte_cryptodev_asym_get_header_session_size();
+ struct rte_cryptodev *dev;
+ int ret;
+
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+ return NULL;
+ }
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL)
+ return NULL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
return NULL;
}
+ session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
+ dev_id);
+ pool_priv = rte_mempool_get_priv(mp);
+
+ if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
+ CDEV_LOG_DEBUG(
+ "The private session data size used when creating the mempool is smaller than this device's private session data.");
+ return NULL;
+ }
+
/* Verify if provided mempool can hold elements big enough. */
- if (mp->elt_size < session_size) {
+ if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
return NULL;
@@ -1919,12 +1958,25 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp)
return NULL;
}
- /* Clear device session pointer.
- * Include the flag indicating presence of private data
- */
- memset(sess, 0, session_size);
+ sess->driver_id = dev->driver_id;
+ sess->max_priv_data_sz = pool_priv->max_priv_session_sz;
+
+ /* Clear device session pointer.*/
+ memset(sess->sess_private_data, 0, session_priv_data_sz);
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
- rte_cryptodev_trace_asym_session_create(mp, sess);
+ if (sess->sess_private_data[0] == 0) {
+ ret = dev->dev_ops->asym_session_configure(dev, xforms, sess);
+ if (ret < 0) {
+ CDEV_LOG_ERR(
+ "dev_id %d failed to configure session details",
+ dev_id);
+ return NULL;
+ }
+ }
+
+ rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp);
return sess;
}
@@ -1959,30 +2011,6 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_clear(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess)
-{
- struct rte_cryptodev *dev;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (dev == NULL || sess == NULL)
- return -EINVAL;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_clear, -ENOTSUP);
-
- dev->dev_ops->asym_session_clear(dev, sess);
-
- rte_cryptodev_trace_sym_session_clear(dev_id, sess);
- return 0;
-}
-
int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
{
@@ -2007,27 +2035,31 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
}
int
-rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess)
+rte_cryptodev_asym_session_free(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess)
{
- uint8_t i;
- void *sess_priv;
struct rte_mempool *sess_mp;
+ struct rte_cryptodev *dev;
- if (sess == NULL)
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
-
- /* Check that all device private data has been freed */
- for (i = 0; i < nb_drivers; i++) {
- sess_priv = get_asym_session_private_data(sess, i);
- if (sess_priv != NULL)
- return -EBUSY;
}
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL || sess == NULL)
+ return -EINVAL;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_clear, -ENOTSUP);
+
+ dev->dev_ops->asym_session_clear(dev, sess);
+
/* Return session to mempool */
sess_mp = rte_mempool_from_obj(sess);
rte_mempool_put(sess_mp, sess);
- rte_cryptodev_trace_asym_session_free(sess);
+ rte_cryptodev_trace_asym_session_free(dev_id, sess);
return 0;
}
@@ -2061,12 +2093,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
unsigned int
rte_cryptodev_asym_get_header_session_size(void)
{
- /*
- * Header contains pointers to the private data
- * of all registered drivers, and a flag which
- * indicates presence of private data
- */
- return ((sizeof(void *) * nb_drivers) + sizeof(uint8_t));
+ return sizeof(struct rte_cryptodev_asym_session);
}
unsigned int
@@ -2092,7 +2119,6 @@ unsigned int
rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- unsigned int header_size = sizeof(void *) * nb_drivers;
unsigned int priv_sess_size;
if (!rte_cryptodev_is_valid_dev(dev_id))
@@ -2104,11 +2130,8 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
return 0;
priv_sess_size = (*dev->dev_ops->asym_session_get_size)(dev);
- if (priv_sess_size < header_size)
- return header_size;
return priv_sess_size;
-
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 59ea5a54df..90e764017d 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -919,9 +919,13 @@ struct rte_cryptodev_sym_session {
};
/** Cryptodev asymmetric crypto session */
-struct rte_cryptodev_asym_session {
- __extension__ void *sess_private_data[0];
- /**< Private asymmetric session material */
+RTE_STD_C11 struct rte_cryptodev_asym_session {
+ uint8_t driver_id;
+ /**< Session driver ID. */
+ uint16_t max_priv_data_sz;
+ /**< Size of private data used when creating mempool */
+ uint8_t padding[5];
+ uint8_t sess_private_data[0];
};
/**
@@ -956,6 +960,29 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t priv_size,
int socket_id);
+/**
+ * Create an asymmetric session mempool.
+ *
+ * @param name
+ * The unique mempool name.
+ * @param nb_elts
+ * The number of elements in the mempool.
+ * @param cache_size
+ * The number of per-lcore cache elements
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in the case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ *
+ * @return
+ * - On success return mempool
+ * - On failure returns NULL
+ */
+__rte_experimental
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id);
+
/**
* Create symmetric crypto session header (generic with no private data)
*
@@ -969,17 +996,22 @@ struct rte_cryptodev_sym_session *
rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
/**
- * Create asymmetric crypto session header (generic with no private data)
+ * Create and initialise an asymmetric crypto session structure.
+ * Calls the PMD to configure the private session data.
*
- * @param mempool mempool to allocate asymmetric session
- * objects from
+ * @param dev_id ID of device that we want the session to be used on
+ * @param xforms Asymmetric crypto transform operations to apply on flow
+ * processed with this session
+ * @param mp mempool to allocate asymmetric session
+ * objects from
* @return
* - On success return pointer to asym-session
* - On failure returns NULL
*/
__rte_experimental
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mempool);
+rte_cryptodev_asym_session_create(uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp);
/**
* Frees symmetric crypto session header, after checking that all
@@ -997,20 +1029,20 @@ int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess);
/**
- * Frees asymmetric crypto session header, after checking that all
- * the device private data has been freed, returning it
- * to its original mempool.
+ * Clears and frees asymmetric crypto session header and private data,
+ * returning it to its original mempool.
*
+ * @param dev_id ID of device that uses the asymmetric session.
* @param sess Session header to be freed.
*
* @return
* - 0 if successful.
- * - -EINVAL if session is NULL.
- * - -EBUSY if not all device private data has been freed.
+ * - -EINVAL if device is invalid or session is NULL.
*/
__rte_experimental
int
-rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess);
+rte_cryptodev_asym_session_free(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess);
/**
* Fill out private data for the device id, based on its device type.
@@ -1034,28 +1066,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
struct rte_crypto_sym_xform *xforms,
struct rte_mempool *mempool);
-/**
- * Initialize asymmetric session on a device with specific asymmetric xform
- *
- * @param dev_id ID of device that we want the session to be used on
- * @param sess Session to be set up on a device
- * @param xforms Asymmetric crypto transform operations to apply on flow
- * processed with this session
- * @param mempool Mempool to be used for internal allocation.
- *
- * @return
- * - On success, zero.
- * - -EINVAL if input parameters are invalid.
- * - -ENOTSUP if crypto device does not support the crypto transform.
- * - -ENOMEM if the private session could not be allocated.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mempool);
-
/**
* Frees private data for the device id, based on its device type,
* returning it to its mempool. It is the application's responsibility
@@ -1074,21 +1084,6 @@ int
rte_cryptodev_sym_session_clear(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess);
-/**
- * Frees resources held by asymmetric session during rte_cryptodev_session_init
- *
- * @param dev_id ID of device that uses the asymmetric session.
- * @param sess Asymmetric session setup on device using
- * rte_cryptodev_session_init
- * @return
- * - 0 if successful.
- * - -EINVAL if device is invalid or session is NULL.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_clear(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess);
-
/**
* Get the size of the header session, for all registered drivers excluding
* the user data size.
@@ -1116,7 +1111,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
struct rte_cryptodev_sym_session *sess);
/**
- * Get the size of the asymmetric session header, for all registered drivers.
+ * Get the size of the asymmetric session header.
*
* @return
* Size of the asymmetric header session.
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index d1f4f069a3..a4fa9e8c7e 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -83,12 +83,22 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(sess->user_data_sz);
)
+RTE_TRACE_POINT(
+ rte_cryptodev_trace_asym_session_pool_create,
+ RTE_TRACE_POINT_ARGS(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, void *mempool),
+ rte_trace_point_emit_string(name);
+ rte_trace_point_emit_u32(nb_elts);
+ rte_trace_point_emit_u32(cache_size);
+ rte_trace_point_emit_ptr(mempool);
+)
+
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_create,
- RTE_TRACE_POINT_ARGS(void *mempool,
- struct rte_cryptodev_asym_session *sess),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_ptr(xforms);
rte_trace_point_emit_ptr(mempool);
- rte_trace_point_emit_ptr(sess);
)
RTE_TRACE_POINT(
@@ -99,7 +109,9 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_free,
- RTE_TRACE_POINT_ARGS(struct rte_cryptodev_asym_session *sess),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess),
+ rte_trace_point_emit_u8(dev_id);
rte_trace_point_emit_ptr(sess);
)
@@ -117,17 +129,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(mempool);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_init,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess, void *xforms,
- void *mempool),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
- rte_trace_point_emit_ptr(xforms);
- rte_trace_point_emit_ptr(mempool);
-)
-
RTE_TRACE_POINT(
rte_cryptodev_trace_sym_session_clear,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
@@ -135,13 +136,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(sess);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_clear,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
-)
-
#ifdef __cplusplus
}
#endif
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index c50745fa8c..44d1aff0e2 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -55,10 +55,8 @@ EXPERIMENTAL {
rte_cryptodev_asym_get_header_session_size;
rte_cryptodev_asym_get_private_session_size;
rte_cryptodev_asym_get_xform_enum;
- rte_cryptodev_asym_session_clear;
rte_cryptodev_asym_session_create;
rte_cryptodev_asym_session_free;
- rte_cryptodev_asym_session_init;
rte_cryptodev_asym_xform_capability_check_modlen;
rte_cryptodev_asym_xform_capability_check_optype;
rte_cryptodev_sym_cpu_crypto_process;
@@ -81,9 +79,7 @@ EXPERIMENTAL {
__rte_cryptodev_trace_sym_session_free;
__rte_cryptodev_trace_asym_session_free;
__rte_cryptodev_trace_sym_session_init;
- __rte_cryptodev_trace_asym_session_init;
__rte_cryptodev_trace_sym_session_clear;
- __rte_cryptodev_trace_asym_session_clear;
__rte_cryptodev_trace_dequeue_burst;
__rte_cryptodev_trace_enqueue_burst;
@@ -104,6 +100,9 @@ EXPERIMENTAL {
rte_cryptodev_remove_deq_callback;
rte_cryptodev_remove_enq_callback;
+ # added 22.03
+ rte_cryptodev_asym_session_pool_create;
+ __rte_cryptodev_trace_asym_session_pool_create;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 1%]
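For reference, a minimal usage sketch of the reworked asymmetric session
lifecycle, using only the signatures introduced by this patch. Device probe,
xform contents and pool sizing are assumed to be handled elsewhere by the
application, and a later patch in the same rework (v6 5/5, further down this
digest) changes the create function to return an int:

#include <rte_cryptodev.h>
#include <rte_mempool.h>

/* Sketch only: dev_id is assumed to be a probed asym-capable device
 * and xform a fully populated transform chain.
 */
static struct rte_cryptodev_asym_session *
setup_asym_session(uint8_t dev_id, struct rte_crypto_asym_xform *xform)
{
	struct rte_mempool *mp;

	/* No elt_size argument: the pool is sized internally from the
	 * largest private session size reported by any probed device.
	 */
	mp = rte_cryptodev_asym_session_pool_create("asym_sess_pool",
			128 /* nb_elts */, 0 /* cache_size */, SOCKET_ID_ANY);
	if (mp == NULL)
		return NULL;

	/* Create and configure in one call; the separate
	 * rte_cryptodev_asym_session_init() step no longer exists.
	 */
	return rte_cryptodev_asym_session_create(dev_id, xform, mp);
}

Teardown is likewise a single call: rte_cryptodev_asym_session_free(dev_id,
sess) now clears the driver private data and returns the object to its
mempool, replacing the old _clear() + _free() pair.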
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-02 11:44 0% ` Ray Kinsella
@ 2022-02-10 22:16 3% ` Thomas Monjalon
2022-02-11 10:09 5% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-02-10 22:16 UTC (permalink / raw)
To: Ferruh Yigit, Kalesh A P, Ray Kinsella
Cc: dev, dev, ajit.khaparde, asafp, David Marchand, Andrew Rybchenko
02/02/2022 12:44, Ray Kinsella:
> Ferruh Yigit <ferruh.yigit@intel.com> writes:
> > On 1/28/2022 12:48 PM, Kalesh A P wrote:
> >> --- a/lib/ethdev/rte_ethdev.h
> >> +++ b/lib/ethdev/rte_ethdev.h
> >> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
> >> RTE_ETH_EVENT_DESTROY, /**< port is released */
> >> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
> >> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
> >> + RTE_ETH_EVENT_ERR_RECOVERING,
> >> + /**< port recovering from an error
> >> + *
> >> + * PMD detected a FW reset or error condition.
> >> + * PMD will try to recover from the error.
> >> + * Data path may be quiesced and Control path operations
> >> + * may fail at this time.
> >> + */
> >> + RTE_ETH_EVENT_RECOVERED,
> >> + /**< port recovered from an error
> >> + *
> >> + * PMD has recovered from the error condition.
> >> + * Control path and Data path are up now.
> >> + * PMD re-configures the port to the state prior to the error.
> >> + * Since the device has undergone a reset, flow rules
> >> + * offloaded prior to reset may be lost and
> >> + * the application should recreate the rules again.
> >> + */
> >> RTE_ETH_EVENT_MAX /**< max value of this enum */
> >
> >
> > Also the ABI check complains about the 'RTE_ETH_EVENT_MAX' value change; cc'ed more people
> > to evaluate if it is a false positive:
> >
> >
> > 1 function with some indirect sub-type change:
> > [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
> > parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
> > underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
> > in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
> > parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
> > type size hasn't changed
> > 2 enumerator insertions:
> > 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
> > 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
> > 1 enumerator change:
> > 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
>
> I don't immediately see the problem that this would cause.
> There are no array sizes etc. dependent on the value of MAX, for instance.
>
> Looks safe?
We never know how this enum will be used by the application.
The max value may be used for the size of an event array.
It looks like a real ABI issue, unfortunately.
PS: I am not Cc'ed in this patchset,
so copying what I said on v6 (more than a year ago):
Please use the option --cc-cmd devtools/get-maintainer.sh
^ permalink raw reply [relevance 3%]
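To make the concern concrete, here is a hypothetical application snippet
(not from this thread) compiled against the pre-change headers, where
RTE_ETH_EVENT_MAX is 11:

#include <rte_ethdev.h>

/* The old RTE_ETH_EVENT_MAX (11) is baked into the binary here. */
static uint64_t event_counters[RTE_ETH_EVENT_MAX];

static int
event_cb(uint16_t port_id, enum rte_eth_event_type type, void *cb_arg,
		void *ret_param)
{
	RTE_SET_USED(port_id);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);

	/* A newer shared library may deliver type 11 or 12
	 * (ERR_RECOVERING / RECOVERED), writing past the array.
	 */
	event_counters[type]++;
	return 0;
}

Registered via rte_eth_dev_callback_register(), this callback would corrupt
memory as soon as an updated PMD emits the new events, which is why
appending enumerators before the _MAX value counts as an ABI break.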
* RE: [EXT] [PATCH v2 4/4] crypto: reorganize endianness comments, add crypto uint
2022-02-10 16:38 0% ` Zhang, Roy Fan
@ 2022-02-10 21:08 4% ` Akhil Goyal
2022-02-11 10:54 0% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-10 21:08 UTC (permalink / raw)
To: Zhang, Roy Fan, Kusztal, ArkadiuszX, dev, David Marchand, ray.kinsella
Cc: Ramkumar Balu
Hi Fan,
> Hi Akhil,
>
> I assume everything in asym crypto is under the experimental tag at the moment,
> right?
> The goal is to have them updated and fixed before DPDK 22.11 so the
> experimental tag can be removed.
>
Asymmetric crypto APIs are marked as experimental, but the structures are not
explicitly marked experimental.
rte_crypto_asym_op is part of a union in rte_crypto_op, which is definitely not experimental.
So a change in asym_op will result in ABI issues in rte_crypto_op.
David/Ray: can you review patch 1/4 of this series from an ABI compatibility point of view?
http://patches.dpdk.org/project/dpdk/patch/20220207113555.8431-2-arkadiuszx.kusztal@intel.com/
IMO, as per current experimental tags, we cannot change parameters inside rte_crypto_asym_op
and subsequently in struct rte_crypto_dsa_op_param. What do you suggest?
However, I remember that an exception was added to ignore ABI issues related to asymmetric
crypto. Could you please check why that exception is not working in this case?
Regards,
Akhil
^ permalink raw reply [relevance 4%]
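For background, such exceptions are libabigail suppression rules kept in
devtools/libabigail.abignore. A rule covering the structures discussed here
could look like the following; this is an illustrative sketch only, and the
exact rule in the tree may differ:

; Ignore changes to experimental asymmetric crypto structures
[suppress_type]
        name = rte_crypto_asym_op
[suppress_type]
        name = rte_crypto_dsa_op_param

If the existing rule only suppresses symbols tagged EXPERIMENTAL, it would
not cover these structures, since they are reachable through the
non-experimental rte_crypto_op, which may be why the exception is not
firing here.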
* RE: [EXT] [PATCH v2 4/4] crypto: reorganize endianness comments, add crypto uint
2022-02-10 10:17 3% ` [EXT] " Akhil Goyal
@ 2022-02-10 16:38 0% ` Zhang, Roy Fan
2022-02-10 21:08 4% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Zhang, Roy Fan @ 2022-02-10 16:38 UTC (permalink / raw)
To: Akhil Goyal, Kusztal, ArkadiuszX, dev; +Cc: Ramkumar Balu
Hi Akhil,
I assume everything in asym crypto is under the experimental tag at the moment, right?
The goal is to have them updated and fixed before DPDK 22.11 so the experimental tag can be removed.
Regards,
Fan
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Thursday, February 10, 2022 10:17 AM
> To: Kusztal, ArkadiuszX <arkadiuszx.kusztal@intel.com>; dev@dpdk.org
> Cc: Zhang, Roy Fan <roy.fan.zhang@intel.com>; Ramkumar Balu
> <rbalu@marvell.com>
> Subject: RE: [EXT] [PATCH v2 4/4] crypto: reorganize endianness comments,
> add crypto uint
>
> > This patch adds a crypto uint typedef so that comments
> > about byte order become unnecessary.
> >
> > It makes API comments more tidy, and more consistent
> > with other asymmetric crypto APIs.
> >
> > Additionally it reorganizes the code so that enums, externs
> > and forward declarations are moved to the top of the
> > header file, making the code more readable.
> >
> > It also removes comments such as the co-prime constraint
> > from mod inv, as that is a natural mathematical constraint,
> > not a PMD constraint.
> >
> > Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> > ---
> CI is reporting ABI issues in this set. Can you check?
> http://mails.dpdk.org/archives/test-report/2022-February/257403.html
^ permalink raw reply [relevance 0%]
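For readers without patch 4/4 in front of them, the typedef under discussion
is along the following lines; this is a sketch based on the existing
rte_crypto_param type, not the verbatim patch content:

/* rte_crypto_param already carries (data, iova, length); the new name
 * fixes the convention that the integer bytes are big-endian, making
 * per-field byte-order comments redundant.
 */
typedef rte_crypto_param rte_crypto_uint;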
* [PATCH v6 5/5] crypto: modify return value for asym session create
2022-02-10 15:54 1% ` [PATCH v6 2/5] crypto: use single buffer for asymmetric session Ciara Power
@ 2022-02-10 15:54 2% ` Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2022-02-10 15:54 UTC (permalink / raw)
To: dev; +Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty
Rather than returning a session on success and a NULL value on error,
the asym session create function is modified to return an int value -
0 on success or -EINVAL/-ENOTSUP/-ENOMEM on failure.
The session to be used is passed as input.
This adds clarity on the failure of the create function, which enables
treating the -ENOTSUP return as TEST_SKIPPED in test apps.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
v5: Added session parameter to create session trace.
v4:
- Reordered function parameters.
- Removed docs code additions, these are included due to patch 1
changing sample doc to use literal includes.
v3:
- Fixed variable declarations, putting initialised variable last.
- Made function comment for return value more generic.
- Fixed log to include line break.
- Added documentation.
---
app/test-crypto-perf/cperf_ops.c | 12 ++-
app/test/test_cryptodev_asym.c | 109 +++++++++++++------------
doc/guides/rel_notes/release_22_03.rst | 3 +-
lib/cryptodev/rte_cryptodev.c | 28 ++++---
lib/cryptodev/rte_cryptodev.h | 13 ++-
lib/cryptodev/rte_cryptodev_trace.h | 4 +-
6 files changed, 92 insertions(+), 77 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index b8f590b397..479c40eead 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -734,7 +734,9 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform auth_xform;
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
+ void *asym_sess = NULL;
struct rte_crypto_asym_xform xform = {0};
+ int ret;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -744,11 +746,13 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp);
- if (sess == NULL)
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform,
+ sess_mp, &asym_sess);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1, "Asym session create failed\n");
return NULL;
-
- return sess;
+ }
+ return asym_sess;
}
#ifdef RTE_LIB_SECURITY
/*
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index f0cb839a49..c2e1b4dafd 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -317,7 +317,7 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
uint8_t input[TEST_DATA_SIZE] = {0};
uint8_t *result = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
xform_tc.next = NULL;
xform_tc.xform_type = data_tc->modex.xform_type;
@@ -452,14 +452,14 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
- ts_params->session_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
+ ts_params->session_mpool, &sess);
+ if (ret < 0) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -646,7 +646,7 @@ test_rsa_sign_verify(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -659,12 +659,12 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -686,7 +686,7 @@ test_rsa_enc_dec(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -699,11 +699,11 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -726,7 +726,7 @@ test_rsa_sign_verify_crt(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check im PMD feature flag for RSA quintuple key type support.
@@ -738,12 +738,12 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -766,7 +766,7 @@ test_rsa_enc_dec_crt(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
@@ -778,12 +778,12 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1047,7 +1047,7 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
@@ -1074,12 +1074,12 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1130,7 +1130,7 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1152,12 +1152,12 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1211,7 +1211,7 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1241,12 +1241,12 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1300,7 +1300,7 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t out_pub_key[TEST_DH_MOD_LEN];
uint8_t out_prv_key[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform pub_key_xform;
@@ -1330,12 +1330,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1419,12 +1419,12 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1543,13 +1543,13 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1653,13 +1653,14 @@ test_dsa_sign(void)
uint8_t r[TEST_DH_MOD_LEN];
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
+ int ret;
- sess = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
/* set up crypto op data structure */
@@ -1788,7 +1789,7 @@ test_ecdsa_sign_verify(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -1833,12 +1834,12 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
@@ -1990,7 +1991,7 @@ test_ecpm(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -2035,12 +2036,12 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index a930cbbad6..640691c3ef 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -123,7 +123,8 @@ API Changes
The session structure was moved to ``cryptodev_pmd.h``,
hiding it from applications.
The API ``rte_cryptodev_asym_session_init`` was removed as the initialization
- is now moved to ``rte_cryptodev_asym_session_create``.
+ is now moved to ``rte_cryptodev_asym_session_create``, which was updated to
+ return an integer value to indicate initialisation errors.
ABI Changes
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 91d48d5886..727d271fb9 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -1912,9 +1912,10 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
return sess;
}
-void *
+int
rte_cryptodev_asym_session_create(uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp)
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
+ void **session)
{
struct rte_cryptodev_asym_session *sess;
uint32_t session_priv_data_sz;
@@ -1926,17 +1927,17 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return NULL;
+ return -EINVAL;
}
dev = rte_cryptodev_pmd_get_dev(dev_id);
if (dev == NULL)
- return NULL;
+ return -EINVAL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
- return NULL;
+ return -EINVAL;
}
session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
@@ -1946,22 +1947,23 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
CDEV_LOG_DEBUG(
"The private session data size used when creating the mempool is smaller than this device's private session data.");
- return NULL;
+ return -EINVAL;
}
/* Verify if provided mempool can hold elements big enough. */
if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
- return NULL;
+ return -EINVAL;
}
/* Allocate a session structure from the session pool */
- if (rte_mempool_get(mp, (void **)&sess)) {
+ if (rte_mempool_get(mp, session)) {
CDEV_LOG_ERR("couldn't get object from session mempool");
- return NULL;
+ return -ENOMEM;
}
+ sess = *session;
sess->driver_id = dev->driver_id;
sess->user_data_sz = pool_priv->user_data_sz;
sess->max_priv_data_sz = pool_priv->max_priv_session_sz;
@@ -1969,7 +1971,7 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
/* Clear device session pointer.*/
memset(sess->sess_private_data, 0, session_priv_data_sz + sess->user_data_sz);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, -ENOTSUP);
if (sess->sess_private_data[0] == 0) {
ret = dev->dev_ops->asym_session_configure(dev, xforms, sess);
@@ -1977,12 +1979,12 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
CDEV_LOG_ERR(
"dev_id %d failed to configure session details",
dev_id);
- return NULL;
+ return ret;
}
}
- rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp);
- return sess;
+ rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp, sess);
+ return 0;
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 1d7bd07680..19e2e70287 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -996,14 +996,19 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
* processed with this session
* @param mp mempool to allocate asymmetric session
* objects from
+ * @param session void ** for session to be used
+ *
* @return
- * - On success return pointer to asym-session
- * - On failure returns NULL
+ * - 0 on success.
+ * - -EINVAL on invalid arguments.
+ * - -ENOMEM on memory error for session allocation.
+ * - -ENOTSUP if device doesn't support session configuration.
*/
__rte_experimental
-void *
+int
rte_cryptodev_asym_session_create(uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp);
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
+ void **session);
/**
* Frees symmetric crypto session header, after checking that all
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index 005a4fe38b..a3f6048e7d 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -96,10 +96,12 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_create,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool,
+ void *sess),
rte_trace_point_emit_u8(dev_id);
rte_trace_point_emit_ptr(xforms);
rte_trace_point_emit_ptr(mempool);
+ rte_trace_point_emit_ptr(sess);
)
RTE_TRACE_POINT(
--
2.25.1
^ permalink raw reply [relevance 2%]
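A minimal caller sketch for the new return convention, mirroring how the
test code above maps errors. TEST_SUCCESS/TEST_FAILED/TEST_SKIPPED are
assumed from the app/test framework's test.h, and the wrapper name is
hypothetical:

#include <errno.h>
#include <rte_cryptodev.h>
#include "test.h"	/* TEST_SUCCESS / TEST_FAILED / TEST_SKIPPED */

static int
create_asym_session_status(uint8_t dev_id,
		struct rte_crypto_asym_xform *xform,
		struct rte_mempool *mp, void **sess)
{
	int ret = rte_cryptodev_asym_session_create(dev_id, xform, mp, sess);

	if (ret == -ENOTSUP)
		return TEST_SKIPPED;	/* device cannot configure this xform */
	if (ret < 0)
		return TEST_FAILED;	/* -EINVAL or -ENOMEM */
	return TEST_SUCCESS;
}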
* [PATCH v6 2/5] crypto: use single buffer for asymmetric session
@ 2022-02-10 15:54 1% ` Ciara Power
2022-02-10 15:54 2% ` [PATCH v6 5/5] crypto: modify return value for asym session create Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2022-02-10 15:54 UTC (permalink / raw)
To: dev
Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty,
Ankur Dwivedi, Tejasree Kondoj, John Griffin, Fiona Trahe,
Deepak Kumar Jain
Rather than using a session buffer that contains pointers to private
session data elsewhere, use a single contiguous session buffer.
The session is created for a driver ID, and the mempool element
contains space for the maximum private session data needed by any driver.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
v6:
- Reordered variable declarations to follow cnxk file format.
- Added fix for the crypto perf app asymmetric modex operation: there
is no longer a need for a private mempool, and the
rte_cryptodev_asym_session_pool_create API should be used.
v5:
- Removed the get API for session private data; it can now be accessed directly.
- Modified test application to create a session mempool for
TEST_NUM_SESSIONS rather than TEST_NUM_SESSIONS * 2.
- Reworded create session function description.
- Removed sess parameter from create session trace,
to be added in a later patch.
v4:
- Merged asym crypto session clear and free functions.
- Reordered some function parameters.
- Updated trace function for asym crypto session create.
- Fixed cnxk clear; the PMD no longer needs to put private data
back into a mempool.
- Renamed struct field for max private session size.
- Replaced __extension__ with RTE_STD_C11.
- Moved some parameter validity checks to before functional code.
- Reworded release note.
- Removed mempool parameter from session configure function.
- Removed docs code additions, these are included due to patch 1
changing sample doc to use literal includes.
v3:
- Corrected formatting of struct comments.
- Increased size of max_priv_session_sz to uint16_t.
- Removed trace for asym session init function that was
previously removed.
- Added documentation.
v2:
- Renamed function typedef from "free" to "clear" as session private
data isn't being freed in that function.
- Moved user data API to separate patch.
- Minor fixes to comments, formatting, return values.
---
app/test-crypto-perf/cperf_ops.c | 14 +-
app/test-crypto-perf/cperf_test_throughput.c | 8 +-
app/test-crypto-perf/main.c | 26 +-
app/test/test_cryptodev_asym.c | 272 +++++--------------
doc/guides/prog_guide/cryptodev_lib.rst | 21 +-
doc/guides/rel_notes/release_22_03.rst | 7 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 8 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 8 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 22 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 3 +-
drivers/crypto/octeontx/otx_cryptodev_ops.c | 32 +--
drivers/crypto/openssl/rte_openssl_pmd.c | 4 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 24 +-
drivers/crypto/qat/qat_asym.c | 54 +---
drivers/crypto/qat/qat_asym.h | 5 +-
lib/cryptodev/cryptodev_pmd.h | 23 +-
lib/cryptodev/cryptodev_trace_points.c | 9 +-
lib/cryptodev/rte_cryptodev.c | 213 ++++++++-------
lib/cryptodev/rte_cryptodev.h | 97 ++++---
lib/cryptodev/rte_cryptodev_trace.h | 38 ++-
lib/cryptodev/version.map | 7 +-
21 files changed, 317 insertions(+), 578 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index d975ae1ab8..b125c699de 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -735,7 +735,6 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
struct rte_crypto_asym_xform xform = {0};
- int rc;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -745,19 +744,10 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(sess_mp);
+ sess = (void *)rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp);
if (sess == NULL)
return NULL;
- rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
- &xform, priv_mp);
- if (rc < 0) {
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id,
- (void *)sess);
- rte_cryptodev_asym_session_free((void *)sess);
- }
- return NULL;
- }
+
return sess;
}
#ifdef RTE_LIB_SECURITY
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 51512af2ad..ee21ff27f7 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -35,11 +35,9 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
if (!ctx)
return;
if (ctx->sess) {
- if (ctx->options->op_type == CPERF_ASYM_MODEX) {
- rte_cryptodev_asym_session_clear(ctx->dev_id,
- (void *)ctx->sess);
- rte_cryptodev_asym_session_free((void *)ctx->sess);
- }
+ if (ctx->options->op_type == CPERF_ASYM_MODEX)
+ rte_cryptodev_asym_session_free(ctx->dev_id,
+ (void *)ctx->sess);
#ifdef RTE_LIB_SECURITY
else if (ctx->options->op_type == CPERF_PDCP ||
ctx->options->op_type == CPERF_DOCSIS ||
diff --git a/app/test-crypto-perf/main.c b/app/test-crypto-perf/main.c
index 6fdb92fb7c..115bf923f8 100644
--- a/app/test-crypto-perf/main.c
+++ b/app/test-crypto-perf/main.c
@@ -74,34 +74,12 @@ create_asym_op_pool_socket(uint8_t dev_id, int32_t socket_id,
{
char mp_name[RTE_MEMPOOL_NAMESIZE];
struct rte_mempool *mpool = NULL;
- unsigned int session_size =
- RTE_MAX(rte_cryptodev_asym_get_private_session_size(dev_id),
- rte_cryptodev_asym_get_header_session_size());
-
- if (session_pool_socket[socket_id].priv_mp == NULL) {
- snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "perf_asym_priv_pool%u",
- socket_id);
-
- mpool = rte_mempool_create(mp_name, nb_sessions, session_size,
- 0, 0, NULL, NULL, NULL, NULL,
- socket_id, 0);
- if (mpool == NULL) {
- printf("Cannot create pool \"%s\" on socket %d\n",
- mp_name, socket_id);
- return -ENOMEM;
- }
- printf("Allocated pool \"%s\" on socket %d\n", mp_name,
- socket_id);
- session_pool_socket[socket_id].priv_mp = mpool;
- }
if (session_pool_socket[socket_id].sess_mp == NULL) {
-
snprintf(mp_name, RTE_MEMPOOL_NAMESIZE, "perf_asym_sess_pool%u",
socket_id);
- mpool = rte_mempool_create(mp_name, nb_sessions,
- session_size, 0, 0, NULL, NULL, NULL,
- NULL, socket_id, 0);
+ mpool = rte_cryptodev_asym_session_pool_create(mp_name,
+ nb_sessions, 0, socket_id);
if (mpool == NULL) {
printf("Cannot create pool \"%s\" on socket %d\n",
mp_name, socket_id);
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 8d7290f9ed..88433faf1c 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -452,7 +452,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(ts_params->session_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
+ ts_params->session_mpool);
if (!sess) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
@@ -462,15 +463,6 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform_tc,
- ts_params->session_mpool) < 0) {
- snprintf(test_msg, ASYM_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
rte_crypto_op_attach_asym_session(op, sess);
} else {
asym_op->xform = &xform_tc;
@@ -512,10 +504,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
snprintf(test_msg, ASYM_TEST_MSG_LEN, "SESSIONLESS PASS");
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -669,18 +659,11 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -688,9 +671,7 @@ test_rsa_sign_verify(void)
status = queue_ops_rsa_sign_verify(sess);
error_exit:
-
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -718,17 +699,10 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -737,8 +711,7 @@ test_rsa_enc_dec(void)
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -765,28 +738,20 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
status = TEST_FAILED;
- return status;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify_crt\n");
- status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_sign_verify(sess);
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -813,27 +778,20 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec_crt\n");
status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_enc_dec(sess);
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -927,7 +885,6 @@ testsuite_setup(void)
/* configure qp */
ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
ts_params->qp_conf.mp_session = ts_params->session_mpool;
- ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
dev_id, qp_id, &ts_params->qp_conf,
@@ -936,21 +893,9 @@ testsuite_setup(void)
qp_id, dev_id);
}
- /* setup asym session pool */
- unsigned int session_size = RTE_MAX(
- rte_cryptodev_asym_get_private_session_size(dev_id),
- rte_cryptodev_asym_get_header_session_size());
- /*
- * Create mempool with TEST_NUM_SESSIONS * 2,
- * to include the session headers
- */
- ts_params->session_mpool = rte_mempool_create(
- "test_asym_sess_mp",
- TEST_NUM_SESSIONS * 2,
- session_size,
- 0, 0, NULL, NULL, NULL,
- NULL, SOCKET_ID_ANY,
- 0);
+ ts_params->session_mpool = rte_cryptodev_asym_session_pool_create(
+ "test_asym_sess_mp", TEST_NUM_SESSIONS, 0,
+ SOCKET_ID_ANY);
TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
"session mempool allocation failed");
@@ -1107,14 +1052,6 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1137,11 +1074,11 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1176,10 +1113,8 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -1199,14 +1134,6 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1225,11 +1152,11 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1265,10 +1192,8 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1290,14 +1215,6 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1324,11 +1241,11 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1365,10 +1282,8 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data, asym_op->dh.priv_key.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1391,15 +1306,6 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform pub_key_xform;
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1423,11 +1329,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.pub_key.length = sizeof(out_pub_key);
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1462,10 +1369,8 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
out_pub_key, asym_op->dh.pub_key.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1514,7 +1419,7 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
@@ -1523,15 +1428,6 @@ test_mod_inv(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modinv_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* generate crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1583,10 +1479,8 @@ test_mod_inv(void)
}
error_exit:
- if (sess) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op)
rte_crypto_op_free(op);
@@ -1649,7 +1543,7 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1,
"line %u "
@@ -1659,15 +1553,6 @@ test_mod_exp(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modex_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
asym_op = op->asym;
memcpy(input, base, sizeof(base));
asym_op->modex.base.data = input;
@@ -1706,10 +1591,8 @@ test_mod_exp(void)
}
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1771,7 +1654,7 @@ test_dsa_sign(void)
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool);
if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
@@ -1800,15 +1683,6 @@ test_dsa_sign(void)
debug_hexdump(stdout, "priv_key: ", dsa_xform.dsa.x.data,
dsa_xform.dsa.x.length);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &dsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* attach asymmetric crypto session to crypto operations */
rte_crypto_op_attach_asym_session(op, sess);
asym_op->dsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
@@ -1882,10 +1756,8 @@ test_dsa_sign(void)
status = TEST_FAILED;
}
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -1944,15 +1816,6 @@ test_ecdsa_sign_verify(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -1970,11 +1833,11 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2082,10 +1945,8 @@ test_ecdsa_sign_verify(enum curve curve_id)
}
exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -2157,15 +2018,6 @@ test_ecpm(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -2183,11 +2035,11 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2255,10 +2107,8 @@ test_ecpm(enum curve curve_id)
}
exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 9f33f7a177..b4dbd384bf 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1038,20 +1038,17 @@ It is the application's responsibility to create and manage the session mempools
Application using both symmetric and asymmetric sessions should allocate and maintain
different sessions pools for each type.
-An application can use ``rte_cryptodev_get_asym_session_private_size()`` to
-get the private size of asymmetric session on a given crypto device. This
-function would allow an application to calculate the max device asymmetric
-session size of all crypto devices to create a single session mempool.
-If instead an application creates multiple asymmetric session mempools,
-the Crypto device framework also provides ``rte_cryptodev_asym_get_header_session_size()`` to get
-the size of an uninitialized session.
+An application can use ``rte_cryptodev_asym_session_pool_create()`` to create a mempool
+with a specified number of elements. The element size accounts for the session header
+and the max private session size.
+The max private session size is chosen based on the available crypto devices;
+the biggest private session size is used. This means any of those devices can be used,
+and the mempool element will have space for its private session data.
Once the session mempools have been created, ``rte_cryptodev_asym_session_create()``
-is used to allocate an uninitialized asymmetric session from the given mempool.
-The session then must be initialized using ``rte_cryptodev_asym_session_init()``
-for each of the required crypto devices. An asymmetric transform chain
-is used to specify the operation and its parameters. See the section below for
-details on transforms.
+is used to allocate and initialize an asymmetric session from the given mempool.
+An asymmetric transform chain is used to specify the operation and its parameters.
+See the section below for details on transforms.
When a session is no longer used, user must call ``rte_cryptodev_asym_session_clear()``
for each of the crypto devices that are using the session, to free all driver
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index a820cc5596..ea4c5309a0 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -112,6 +112,13 @@ API Changes
* ethdev: Old public macros and enumeration constants without ``RTE_ETH_`` prefix,
which are kept for backward compatibility, are marked as deprecated.
+* cryptodev: The asymmetric session handling was modified to use a single
+ mempool object. An API ``rte_cryptodev_asym_session_pool_create`` was added
+ to create a mempool with an element size large enough to hold the generic
+ asymmetric session header and the maximum device private session data size.
+ The API ``rte_cryptodev_asym_session_init`` was removed, as initialization
+ is now done in ``rte_cryptodev_asym_session_create``.
+
ABI Changes
-----------
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index d217bbf383..c4d5d039ec 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -157,8 +157,8 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- ae_sess = get_asym_session_private_data(
- asym_op->session, cn10k_cryptodev_driver_id);
+ ae_sess = (struct cnxk_ae_sess *)
+ asym_op->session->sess_private_data;
ret = cnxk_ae_enqueue(qp, op, infl_req, &inst[0],
ae_sess);
if (unlikely(ret))
@@ -431,8 +431,8 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn10k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ op->session->sess_private_data;
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index ac1953b66d..b8ad4bf211 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -138,8 +138,8 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- sess = get_asym_session_private_data(
- asym_op->session, cn9k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ asym_op->session->sess_private_data;
ret = cnxk_ae_enqueue(qp, op, infl_req, inst, sess);
inst->w7.u64 = sess->cpt_inst_w7;
} else {
@@ -453,8 +453,8 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn9k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ op->session->sess_private_data;
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index a5fb68da02..7237dacb48 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -658,10 +658,9 @@ void
cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- struct rte_mempool *sess_mp;
struct cnxk_ae_sess *priv;
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = (struct cnxk_ae_sess *) sess->sess_private_data;
if (priv == NULL)
return;
@@ -670,40 +669,29 @@ cnxk_ae_session_clear(struct rte_cryptodev *dev,
/* Reset and free object back to pool */
memset(priv, 0, cnxk_ae_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
int
cnxk_ae_session_cfg(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_cryptodev_asym_session *sess)
{
+ struct cnxk_ae_sess *priv =
+ (struct cnxk_ae_sess *) sess->sess_private_data;
struct cnxk_cpt_vf *vf = dev->data->dev_private;
struct roc_cpt *roc_cpt = &vf->cpt;
- struct cnxk_ae_sess *priv;
union cpt_inst_w7 w7;
int ret;
- if (rte_mempool_get(pool, (void **)&priv))
- return -ENOMEM;
-
- memset(priv, 0, sizeof(struct cnxk_ae_sess));
-
ret = cnxk_ae_fill_session_parameters(priv, xform);
- if (ret) {
- rte_mempool_put(pool, priv);
+ if (ret)
return ret;
- }
w7.u64 = 0;
w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_AE];
priv->cpt_inst_w7 = w7.u64;
priv->cnxk_fpm_iova = vf->cnxk_fpm_iova;
priv->ec_grp = vf->ec_grp;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 0656ba9675..ab0f00ee7c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -122,8 +122,7 @@ void cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
int cnxk_ae_session_cfg(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool);
+ struct rte_cryptodev_asym_session *sess);
void cnxk_cpt_dump_on_err(struct cnxk_cpt_qp *qp);
static inline union rte_event_crypto_metadata *
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index f7ca8a8a8e..0cb6dbb38c 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -375,35 +375,24 @@ otx_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
}
static int
-otx_cpt_asym_session_cfg(struct rte_cryptodev *dev,
+otx_cpt_asym_session_cfg(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform __rte_unused,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_cryptodev_asym_session *sess)
{
- struct cpt_asym_sess_misc *priv;
+ struct cpt_asym_sess_misc *priv = (struct cpt_asym_sess_misc *)
+ sess->sess_private_data;
int ret;
CPT_PMD_INIT_FUNC_TRACE();
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
ret = cpt_fill_asym_session_parameters(priv, xform);
if (ret) {
CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
return ret;
}
priv->cpt_inst_w7 = 0;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
@@ -412,11 +401,10 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
CPT_PMD_INIT_FUNC_TRACE();
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = (struct cpt_asym_sess_misc *) sess->sess_private_data;
if (priv == NULL)
return;
@@ -424,9 +412,6 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
/* Free resources allocated during session configure */
cpt_free_asym_session_parameters(priv);
memset(priv, 0, otx_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
static __rte_always_inline void * __rte_hot
@@ -471,8 +456,8 @@ otx_cpt_enq_single_asym(struct cpt_instance *instance,
return NULL;
}
- sess = get_asym_session_private_data(asym_op->session,
- otx_cryptodev_driver_id);
+ sess = (struct cpt_asym_sess_misc *)
+ asym_op->session->sess_private_data;
/* Store phys_addr of the mdata to meta_buf */
params.meta_buf = rte_mempool_virt2iova(mdata);
@@ -852,8 +837,7 @@ otx_cpt_asym_post_process(struct rte_crypto_op *cop,
struct rte_crypto_asym_op *op = cop->asym;
struct cpt_asym_sess_misc *sess;
- sess = get_asym_session_private_data(op->session,
- otx_cryptodev_driver_id);
+ sess = (struct cpt_asym_sess_misc *) op->session->sess_private_data;
switch (sess->xfrm_type) {
case RTE_CRYPTO_ASYM_XFORM_RSA:
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5794ed8159..d80e1052e2 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -748,9 +748,7 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
} else {
if (likely(op->asym->session != NULL))
asym_sess = (struct openssl_asym_session *)
- get_asym_session_private_data(
- op->asym->session,
- cryptodev_driver_id);
+ op->asym->session->sess_private_data;
if (asym_sess == NULL)
op->status =
RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 52715f86f8..556fd226ed 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -1119,8 +1119,7 @@ static int openssl_set_asym_session_parameters(
static int
openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_cryptodev_asym_session *sess)
{
void *asym_sess_private_data;
int ret;
@@ -1130,25 +1129,14 @@ openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
return -EINVAL;
}
- if (rte_mempool_get(mempool, &asym_sess_private_data)) {
- CDEV_LOG_ERR(
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
+ asym_sess_private_data = sess->sess_private_data;
ret = openssl_set_asym_session_parameters(asym_sess_private_data,
xform);
if (ret != 0) {
OPENSSL_LOG(ERR, "failed configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(mempool, asym_sess_private_data);
return ret;
}
- set_asym_session_private_data(sess, dev->driver_id,
- asym_sess_private_data);
-
return 0;
}
@@ -1206,19 +1194,15 @@ static void openssl_reset_asym_session(struct openssl_asym_session *sess)
* so it doesn't leave key material behind
*/
static void
-openssl_pmd_asym_session_clear(struct rte_cryptodev *dev,
+openssl_pmd_asym_session_clear(struct rte_cryptodev *dev __rte_unused,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = sess->sess_private_data;
/* Zero out the whole structure */
if (sess_priv) {
openssl_reset_asym_session(sess_priv);
memset(sess_priv, 0, sizeof(struct openssl_asym_session));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
}
}
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 09d8761c5f..f46eefd4b3 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -492,8 +492,7 @@ qat_asym_build_request(void *in_op,
op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
ctx = (struct qat_asym_session *)
- get_asym_session_private_data(
- op->asym->session, qat_asym_driver_id);
+ op->asym->session->sess_private_data;
if (unlikely(ctx == NULL)) {
QAT_LOG(ERR, "Session has not been created for this device");
goto error;
@@ -711,8 +710,8 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
if (rx_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- ctx = (struct qat_asym_session *)get_asym_session_private_data(
- rx_op->asym->session, qat_asym_driver_id);
+ ctx = (struct qat_asym_session *)
+ rx_op->asym->session->sess_private_data;
qat_asym_collect_response(rx_op, cookie, ctx->xform);
} else if (rx_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
qat_asym_collect_response(rx_op, cookie, rx_op->asym->xform);
@@ -726,61 +725,42 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_cryptodev_asym_session *sess)
{
- int err = 0;
- void *sess_private_data;
struct qat_asym_session *session;
- if (rte_mempool_get(mempool, &sess_private_data)) {
- QAT_LOG(ERR,
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
- session = sess_private_data;
+ session = (struct qat_asym_session *) sess->sess_private_data;
if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) {
if (xform->modex.exponent.length == 0 ||
xform->modex.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod exp input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) {
if (xform->modinv.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod inv input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
if (xform->rsa.n.length == 0) {
QAT_LOG(ERR, "Invalid rsa input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
|| xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
QAT_LOG(ERR, "Invalid asymmetric crypto xform");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
} else {
QAT_LOG(ERR, "Asymmetric crypto xform not implemented");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
session->xform = xform;
- qat_asym_build_req_tmpl(sess_private_data);
- set_asym_session_private_data(sess, dev->driver_id,
- sess_private_data);
+ qat_asym_build_req_tmpl(session);
return 0;
-error:
- rte_mempool_put(mempool, sess_private_data);
- return err;
}
unsigned int qat_asym_session_get_private_size(
@@ -793,15 +773,9 @@ void
qat_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = sess->sess_private_data;
struct qat_asym_session *s = (struct qat_asym_session *)sess_priv;
- if (sess_priv) {
+ if (sess_priv)
memset(s, 0, qat_asym_session_get_private_size(dev));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
- }
}
diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h
index 308b6b2e0b..c9242a12ca 100644
--- a/drivers/crypto/qat/qat_asym.h
+++ b/drivers/crypto/qat/qat_asym.h
@@ -46,10 +46,9 @@ struct qat_asym_session {
};
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool);
+ struct rte_cryptodev_asym_session *sess);
unsigned int
qat_asym_session_get_private_size(struct rte_cryptodev *dev);
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index b9146f652c..142bfb7c66 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -319,7 +319,6 @@ typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
* @param dev Crypto device pointer
* @param xform Single or chain of crypto xforms
* @param session Pointer to cryptodev's private session structure
- * @param mp Mempool where the private session is allocated
*
* @return
* - Returns 0 if private session structure have been created successfully.
@@ -329,8 +328,7 @@ typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
*/
typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *session,
- struct rte_mempool *mp);
+ struct rte_cryptodev_asym_session *session);
/**
* Free driver private session data.
*
@@ -340,12 +338,12 @@ typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_sym_session *sess);
/**
- * Free asymmetric session private data.
+ * Clear asymmetric session private data.
*
* @param dev Crypto device pointer
* @param sess Cryptodev session structure
*/
-typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
+typedef void (*cryptodev_asym_clear_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
/**
* Perform actual crypto processing (encrypt/digest or auth/decrypt)
@@ -429,7 +427,7 @@ struct rte_cryptodev_ops {
/**< Configure asymmetric Crypto session. */
cryptodev_sym_free_session_t sym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_asym_free_session_t asym_session_clear;
+ cryptodev_asym_clear_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
union {
cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
@@ -627,17 +625,4 @@ set_sym_session_private_data(struct rte_cryptodev_sym_session *sess,
sess->sess_data[driver_id].data = private_data;
}
-static inline void *
-get_asym_session_private_data(const struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id) {
- return sess->sess_private_data[driver_id];
-}
-
-static inline void
-set_asym_session_private_data(struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id, void *private_data)
-{
- sess->sess_private_data[driver_id] = private_data;
-}
-
#endif /* _CRYPTODEV_PMD_H_ */
diff --git a/lib/cryptodev/cryptodev_trace_points.c b/lib/cryptodev/cryptodev_trace_points.c
index 5d58951fd5..c5bfe08b79 100644
--- a/lib/cryptodev/cryptodev_trace_points.c
+++ b/lib/cryptodev/cryptodev_trace_points.c
@@ -24,6 +24,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_queue_pair_setup,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_pool_create,
lib.cryptodev.sym.pool.create)
+RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_pool_create,
+ lib.cryptodev.asym.pool.create)
+
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_create,
lib.cryptodev.sym.create)
@@ -39,15 +42,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_free,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_init,
lib.cryptodev.sym.init)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_init,
- lib.cryptodev.asym.init)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_clear,
lib.cryptodev.sym.clear)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_clear,
- lib.cryptodev.asym.clear)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_enqueue_burst,
lib.cryptodev.enq.burst)
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index a40536c5ea..b056d88ac2 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -195,7 +195,7 @@ const char *rte_crypto_asym_op_strings[] = {
};
/**
- * The private data structure stored in the session mempool private data.
+ * The private data structure stored in the sym session mempool private data.
*/
struct rte_cryptodev_sym_session_pool_private_data {
uint16_t nb_drivers;
@@ -204,6 +204,14 @@ struct rte_cryptodev_sym_session_pool_private_data {
/**< session user data will be placed after sess_data */
};
+/**
+ * The private data structure stored in the asym session mempool private data.
+ */
+struct rte_cryptodev_asym_session_pool_private_data {
+ uint16_t max_priv_session_sz;
+ /**< Size of private session data used when creating mempool */
+};
+
int
rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
const char *algo_string)
@@ -1751,47 +1759,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mp)
-{
- struct rte_cryptodev *dev;
- uint8_t index;
- int ret;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (sess == NULL || xforms == NULL || dev == NULL)
- return -EINVAL;
-
- index = dev->driver_id;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure,
- -ENOTSUP);
-
- if (sess->sess_private_data[index] == NULL) {
- ret = dev->dev_ops->asym_session_configure(dev,
- xforms,
- sess, mp);
- if (ret < 0) {
- CDEV_LOG_ERR(
- "dev_id %d failed to configure session details",
- dev_id);
- return ret;
- }
- }
-
- rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
- return 0;
-}
-
struct rte_mempool *
rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t user_data_size,
@@ -1834,6 +1801,53 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
return mp;
}
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id)
+{
+ struct rte_mempool *mp;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ uint32_t obj_sz, obj_sz_aligned;
+ uint8_t dev_id, priv_sz, max_priv_sz = 0;
+
+ for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++)
+ if (rte_cryptodev_is_valid_dev(dev_id)) {
+ priv_sz = rte_cryptodev_asym_get_private_session_size(dev_id);
+ if (priv_sz > max_priv_sz)
+ max_priv_sz = priv_sz;
+ }
+ if (max_priv_sz == 0) {
+ CDEV_LOG_INFO("Could not set max private session size\n");
+ return NULL;
+ }
+
+ obj_sz = rte_cryptodev_asym_get_header_session_size() + max_priv_sz;
+ obj_sz_aligned = RTE_ALIGN_CEIL(obj_sz, RTE_CACHE_LINE_SIZE);
+
+ mp = rte_mempool_create(name, nb_elts, obj_sz_aligned, cache_size,
+ (uint32_t)(sizeof(*pool_priv)),
+ NULL, NULL, NULL, NULL,
+ socket_id, 0);
+ if (mp == NULL) {
+ CDEV_LOG_ERR("%s(name=%s) failed, rte_errno=%d\n",
+ __func__, name, rte_errno);
+ return NULL;
+ }
+
+ pool_priv = rte_mempool_get_priv(mp);
+ if (!pool_priv) {
+ CDEV_LOG_ERR("%s(name=%s) failed to get private data\n",
+ __func__, name);
+ rte_mempool_free(mp);
+ return NULL;
+ }
+ pool_priv->max_priv_session_sz = max_priv_sz;
+
+ rte_cryptodev_trace_asym_session_pool_create(name, nb_elts,
+ cache_size, mp);
+ return mp;
+}
+
static unsigned int
rte_cryptodev_sym_session_data_size(struct rte_cryptodev_sym_session *sess)
{
@@ -1895,19 +1909,44 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
}
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mp)
+rte_cryptodev_asym_session_create(uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp)
{
struct rte_cryptodev_asym_session *sess;
- unsigned int session_size =
+ uint32_t session_priv_data_sz;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ unsigned int session_header_size =
rte_cryptodev_asym_get_header_session_size();
+ struct rte_cryptodev *dev;
+ int ret;
+
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+ return NULL;
+ }
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL)
+ return NULL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
return NULL;
}
+ session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
+ dev_id);
+ pool_priv = rte_mempool_get_priv(mp);
+
+ if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
+ CDEV_LOG_DEBUG(
+ "The private session data size used when creating the mempool is smaller than this device's private session data.");
+ return NULL;
+ }
+
/* Verify if provided mempool can hold elements big enough. */
- if (mp->elt_size < session_size) {
+ if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
return NULL;
@@ -1919,12 +1958,25 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp)
return NULL;
}
- /* Clear device session pointer.
- * Include the flag indicating presence of private data
- */
- memset(sess, 0, session_size);
+ sess->driver_id = dev->driver_id;
+ sess->max_priv_data_sz = pool_priv->max_priv_session_sz;
+
+ /* Clear device session pointer.*/
+ memset(sess->sess_private_data, 0, session_priv_data_sz);
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
- rte_cryptodev_trace_asym_session_create(mp, sess);
+ if (sess->sess_private_data[0] == 0) {
+ ret = dev->dev_ops->asym_session_configure(dev, xforms, sess);
+ if (ret < 0) {
+ CDEV_LOG_ERR(
+ "dev_id %d failed to configure session details",
+ dev_id);
+ return NULL;
+ }
+ }
+
+ rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp);
return sess;
}
@@ -1959,30 +2011,6 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_clear(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess)
-{
- struct rte_cryptodev *dev;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (dev == NULL || sess == NULL)
- return -EINVAL;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_clear, -ENOTSUP);
-
- dev->dev_ops->asym_session_clear(dev, sess);
-
- rte_cryptodev_trace_sym_session_clear(dev_id, sess);
- return 0;
-}
-
int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
{
@@ -2007,27 +2035,31 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
}
int
-rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess)
+rte_cryptodev_asym_session_free(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess)
{
- uint8_t i;
- void *sess_priv;
struct rte_mempool *sess_mp;
+ struct rte_cryptodev *dev;
- if (sess == NULL)
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
-
- /* Check that all device private data has been freed */
- for (i = 0; i < nb_drivers; i++) {
- sess_priv = get_asym_session_private_data(sess, i);
- if (sess_priv != NULL)
- return -EBUSY;
}
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL || sess == NULL)
+ return -EINVAL;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_clear, -ENOTSUP);
+
+ dev->dev_ops->asym_session_clear(dev, sess);
+
/* Return session to mempool */
sess_mp = rte_mempool_from_obj(sess);
rte_mempool_put(sess_mp, sess);
- rte_cryptodev_trace_asym_session_free(sess);
+ rte_cryptodev_trace_asym_session_free(dev_id, sess);
return 0;
}
@@ -2061,12 +2093,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
unsigned int
rte_cryptodev_asym_get_header_session_size(void)
{
- /*
- * Header contains pointers to the private data
- * of all registered drivers, and a flag which
- * indicates presence of private data
- */
- return ((sizeof(void *) * nb_drivers) + sizeof(uint8_t));
+ return sizeof(struct rte_cryptodev_asym_session);
}
unsigned int
@@ -2092,7 +2119,6 @@ unsigned int
rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- unsigned int header_size = sizeof(void *) * nb_drivers;
unsigned int priv_sess_size;
if (!rte_cryptodev_is_valid_dev(dev_id))
@@ -2104,11 +2130,8 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
return 0;
priv_sess_size = (*dev->dev_ops->asym_session_get_size)(dev);
- if (priv_sess_size < header_size)
- return header_size;
return priv_sess_size;
-
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 59ea5a54df..90e764017d 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -919,9 +919,13 @@ struct rte_cryptodev_sym_session {
};
/** Cryptodev asymmetric crypto session */
-struct rte_cryptodev_asym_session {
- __extension__ void *sess_private_data[0];
- /**< Private asymmetric session material */
+RTE_STD_C11 struct rte_cryptodev_asym_session {
+ uint8_t driver_id;
+ /**< Session driver ID. */
+ uint16_t max_priv_data_sz;
+ /**< Size of private data used when creating mempool */
+ uint8_t padding[5];
+ uint8_t sess_private_data[0];
};
/**
@@ -956,6 +960,29 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t priv_size,
int socket_id);
+/**
+ * Create an asymmetric session mempool.
+ *
+ * @param name
+ * The unique mempool name.
+ * @param nb_elts
+ * The number of elements in the mempool.
+ * @param cache_size
+ * The number of per-lcore cache elements.
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in the case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ *
+ * @return
+ * - On success return mempool
+ * - On failure returns NULL
+ */
+__rte_experimental
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id);
+
/**
* Create symmetric crypto session header (generic with no private data)
*
@@ -969,17 +996,22 @@ struct rte_cryptodev_sym_session *
rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
/**
- * Create asymmetric crypto session header (generic with no private data)
+ * Create and initialise an asymmetric crypto session structure.
+ * Calls the PMD to configure the private session data.
*
- * @param mempool mempool to allocate asymmetric session
- * objects from
+ * @param dev_id ID of the device on which the session will be used
+ * @param xforms Asymmetric crypto transform operations to apply on flow
+ * processed with this session
+ * @param mp mempool to allocate asymmetric session
+ * objects from
* @return
* - On success return pointer to asym-session
* - On failure returns NULL
*/
__rte_experimental
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mempool);
+rte_cryptodev_asym_session_create(uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp);
/**
* Frees symmetric crypto session header, after checking that all
@@ -997,20 +1029,20 @@ int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess);
/**
- * Frees asymmetric crypto session header, after checking that all
- * the device private data has been freed, returning it
- * to its original mempool.
+ * Clears and frees asymmetric crypto session header and private data,
+ * returning it to its original mempool.
*
+ * @param dev_id ID of device that uses the asymmetric session.
* @param sess Session header to be freed.
*
* @return
* - 0 if successful.
- * - -EINVAL if session is NULL.
- * - -EBUSY if not all device private data has been freed.
+ * - -EINVAL if device is invalid or session is NULL.
*/
__rte_experimental
int
-rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess);
+rte_cryptodev_asym_session_free(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess);
/**
* Fill out private data for the device id, based on its device type.
@@ -1034,28 +1066,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
struct rte_crypto_sym_xform *xforms,
struct rte_mempool *mempool);
-/**
- * Initialize asymmetric session on a device with specific asymmetric xform
- *
- * @param dev_id ID of device that we want the session to be used on
- * @param sess Session to be set up on a device
- * @param xforms Asymmetric crypto transform operations to apply on flow
- * processed with this session
- * @param mempool Mempool to be used for internal allocation.
- *
- * @return
- * - On success, zero.
- * - -EINVAL if input parameters are invalid.
- * - -ENOTSUP if crypto device does not support the crypto transform.
- * - -ENOMEM if the private session could not be allocated.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mempool);
-
/**
* Frees private data for the device id, based on its device type,
* returning it to its mempool. It is the application's responsibility
@@ -1074,21 +1084,6 @@ int
rte_cryptodev_sym_session_clear(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess);
-/**
- * Frees resources held by asymmetric session during rte_cryptodev_session_init
- *
- * @param dev_id ID of device that uses the asymmetric session.
- * @param sess Asymmetric session setup on device using
- * rte_cryptodev_session_init
- * @return
- * - 0 if successful.
- * - -EINVAL if device is invalid or session is NULL.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_clear(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess);
-
/**
* Get the size of the header session, for all registered drivers excluding
* the user data size.
@@ -1116,7 +1111,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
struct rte_cryptodev_sym_session *sess);
/**
- * Get the size of the asymmetric session header, for all registered drivers.
+ * Get the size of the asymmetric session header.
*
* @return
* Size of the asymmetric header session.
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index d1f4f069a3..a4fa9e8c7e 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -83,12 +83,22 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(sess->user_data_sz);
)
+RTE_TRACE_POINT(
+ rte_cryptodev_trace_asym_session_pool_create,
+ RTE_TRACE_POINT_ARGS(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, void *mempool),
+ rte_trace_point_emit_string(name);
+ rte_trace_point_emit_u32(nb_elts);
+ rte_trace_point_emit_u32(cache_size);
+ rte_trace_point_emit_ptr(mempool);
+)
+
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_create,
- RTE_TRACE_POINT_ARGS(void *mempool,
- struct rte_cryptodev_asym_session *sess),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_ptr(xforms);
rte_trace_point_emit_ptr(mempool);
- rte_trace_point_emit_ptr(sess);
)
RTE_TRACE_POINT(
@@ -99,7 +109,9 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_free,
- RTE_TRACE_POINT_ARGS(struct rte_cryptodev_asym_session *sess),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess),
+ rte_trace_point_emit_u8(dev_id);
rte_trace_point_emit_ptr(sess);
)
@@ -117,17 +129,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(mempool);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_init,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess, void *xforms,
- void *mempool),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
- rte_trace_point_emit_ptr(xforms);
- rte_trace_point_emit_ptr(mempool);
-)
-
RTE_TRACE_POINT(
rte_cryptodev_trace_sym_session_clear,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
@@ -135,13 +136,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(sess);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_clear,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
-)
-
#ifdef __cplusplus
}
#endif
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index c50745fa8c..44d1aff0e2 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -55,10 +55,8 @@ EXPERIMENTAL {
rte_cryptodev_asym_get_header_session_size;
rte_cryptodev_asym_get_private_session_size;
rte_cryptodev_asym_get_xform_enum;
- rte_cryptodev_asym_session_clear;
rte_cryptodev_asym_session_create;
rte_cryptodev_asym_session_free;
- rte_cryptodev_asym_session_init;
rte_cryptodev_asym_xform_capability_check_modlen;
rte_cryptodev_asym_xform_capability_check_optype;
rte_cryptodev_sym_cpu_crypto_process;
@@ -81,9 +79,7 @@ EXPERIMENTAL {
__rte_cryptodev_trace_sym_session_free;
__rte_cryptodev_trace_asym_session_free;
__rte_cryptodev_trace_sym_session_init;
- __rte_cryptodev_trace_asym_session_init;
__rte_cryptodev_trace_sym_session_clear;
- __rte_cryptodev_trace_asym_session_clear;
__rte_cryptodev_trace_dequeue_burst;
__rte_cryptodev_trace_enqueue_burst;
@@ -104,6 +100,9 @@ EXPERIMENTAL {
rte_cryptodev_remove_deq_callback;
rte_cryptodev_remove_enq_callback;
+ # added 22.03
+ rte_cryptodev_asym_session_pool_create;
+ __rte_cryptodev_trace_asym_session_pool_create;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 1%]
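A minimal usage sketch of the reworked asymmetric session API above,
based only on the prototypes in the hunks (the xform setup and the
error handling here are illustrative, not code from the patch):

    struct rte_crypto_asym_xform xform = {
        .xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,
        /* ... modex parameters ... */
    };
    struct rte_cryptodev_asym_session *sess;

    /* Create and initialise in one call; the PMD configures its
     * private session data from the xforms. */
    sess = rte_cryptodev_asym_session_create(dev_id, &xform, mp);
    if (sess == NULL)
        return -1;

    /* ... attach the session to asym ops and enqueue ... */

    /* Clear and free in one call; dev_id is now required. */
    rte_cryptodev_asym_session_free(dev_id, sess);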
* [PATCH v4 7/7] buildtools/chkincs: test headers for C++ compatibility
@ 2022-02-10 15:42 11% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2022-02-10 15:42 UTC (permalink / raw)
To: dev; +Cc: Bruce Richardson, Aaron Conole, Michael Santana
Add support for checking each of our headers for issues when included in
a C++ file.
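As a rough illustration (the real file is produced by the
gen_c_file_for_header generator referenced in the meson build below;
this is a hypothetical example, not its literal output), each header
gets a generated C++ source that simply includes it, so a missing
extern "C" guard or any C++-invalid construct fails the build:

    /* hypothetical generated rte_mbuf.cpp */
    #include <rte_mbuf.h>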
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
.ci/linux-build.sh | 1 +
.github/workflows/build.yml | 2 +-
buildtools/chkincs/main.cpp | 4 ++++
buildtools/chkincs/meson.build | 20 ++++++++++++++++++++
devtools/test-meson-builds.sh | 3 ++-
5 files changed, 28 insertions(+), 2 deletions(-)
create mode 100644 buildtools/chkincs/main.cpp
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index c10c1a8ab5..67d68535e0 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -74,6 +74,7 @@ fi
if [ "$BUILD_32BIT" = "true" ]; then
OPTS="$OPTS -Dc_args=-m32 -Dc_link_args=-m32"
+ OPTS="$OPTS -Dcpp_args=-m32 -Dcpp_link_args=-m32"
export PKG_CONFIG_LIBDIR="/usr/lib32/pkgconfig"
fi
diff --git a/.github/workflows/build.yml b/.github/workflows/build.yml
index 6cf997d6ee..d30cfd08d7 100644
--- a/.github/workflows/build.yml
+++ b/.github/workflows/build.yml
@@ -116,7 +116,7 @@ jobs:
libdw-dev
- name: Install i386 cross compiling packages
if: env.BUILD_32BIT == 'true'
- run: sudo apt install -y gcc-multilib
+ run: sudo apt install -y gcc-multilib g++-multilib
- name: Install aarch64 cross compiling packages
if: env.AARCH64 == 'true'
run: sudo apt install -y gcc-aarch64-linux-gnu libc6-dev-arm64-cross
diff --git a/buildtools/chkincs/main.cpp b/buildtools/chkincs/main.cpp
new file mode 100644
index 0000000000..d25bb8852a
--- /dev/null
+++ b/buildtools/chkincs/main.cpp
@@ -0,0 +1,4 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2021 Intel Corporation
+ */
+int main(void) { return 0; }
diff --git a/buildtools/chkincs/meson.build b/buildtools/chkincs/meson.build
index 5ffca89761..beabcd55d8 100644
--- a/buildtools/chkincs/meson.build
+++ b/buildtools/chkincs/meson.build
@@ -28,3 +28,23 @@ executable('chkincs', sources,
dependencies: deps,
link_whole: dpdk_static_libraries + dpdk_drivers,
install: false)
+
+# run tests for c++ builds also
+if not add_languages('cpp', required: false)
+ subdir_done()
+endif
+
+gen_cpp_files = generator(gen_c_file_for_header,
+ output: '@BASENAME@.cpp',
+ arguments: ['@INPUT@', '@OUTPUT@'])
+
+cpp_sources = files('main.cpp')
+cpp_sources += gen_cpp_files.process(dpdk_chkinc_headers)
+
+executable('chkincs-cpp', cpp_sources,
+ cpp_args: ['-include', 'rte_config.h', cflags],
+ link_args: dpdk_extra_ldflags,
+ include_directories: includes,
+ dependencies: deps,
+ link_whole: dpdk_static_libraries + dpdk_drivers,
+ install: false)
diff --git a/devtools/test-meson-builds.sh b/devtools/test-meson-builds.sh
index 4ed61328b9..c07fd16fdc 100755
--- a/devtools/test-meson-builds.sh
+++ b/devtools/test-meson-builds.sh
@@ -246,7 +246,8 @@ if check_cc_flags '-m32' ; then
export PKG_CONFIG_LIBDIR='/usr/lib/pkgconfig'
fi
target_override='i386-pc-linux-gnu'
- build build-32b cc ABI -Dc_args='-m32' -Dc_link_args='-m32'
+ build build-32b cc ABI -Dc_args='-m32' -Dc_link_args='-m32' \
+ -Dcpp_args='-m32' -Dcpp_link_args='-m32'
target_override=
unset PKG_CONFIG_LIBDIR
fi
--
2.32.0
^ permalink raw reply [relevance 11%]
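For reference, reproducing the 32-bit check outside the script now
needs the C++ flags as well, mirroring the change to
devtools/test-meson-builds.sh above:

    meson setup build-32b -Dc_args='-m32' -Dc_link_args='-m32' \
        -Dcpp_args='-m32' -Dcpp_link_args='-m32'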
* [PATCH v5 5/5] crypto: modify return value for asym session create
2022-02-10 14:01 1% ` [PATCH v5 2/5] crypto: use single buffer for asymmetric session Ciara Power
@ 2022-02-10 14:01 2% ` Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2022-02-10 14:01 UTC (permalink / raw)
To: dev; +Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty
Rather than the asym session create function returning a session
pointer on success and NULL on error, it now returns an int:
0 on success, or -EINVAL/-ENOTSUP/-ENOMEM on failure.
The session to be used is returned through an output parameter.
This makes the cause of a create failure explicit, and enables test
apps to treat an -ENOTSUP return as TEST_SKIPPED.
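A minimal sketch of the resulting calling convention (the variable
names are illustrative, not taken from the patch):

    void *sess = NULL;
    int ret;

    ret = rte_cryptodev_asym_session_create(dev_id, &xform,
            sess_mp, &sess);
    if (ret == -ENOTSUP)
        return TEST_SKIPPED;  /* device does not support this xform */
    if (ret < 0)
        return TEST_FAILED;   /* -EINVAL or -ENOMEM */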
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
v5: Added session parameter to create session trace.
v4:
- Reordered function parameters.
- Removed docs code additions, these are included due to patch 1
changing sample doc to use literal includes.
v3:
- Fixed variable declarations, putting initialised variable last.
- Made function comment for return value more generic.
- Fixed log to include line break.
- Added documentation.
---
app/test-crypto-perf/cperf_ops.c | 12 ++-
app/test/test_cryptodev_asym.c | 109 +++++++++++++------------
doc/guides/rel_notes/release_22_03.rst | 3 +-
lib/cryptodev/rte_cryptodev.c | 28 ++++---
lib/cryptodev/rte_cryptodev.h | 13 ++-
lib/cryptodev/rte_cryptodev_trace.h | 4 +-
6 files changed, 92 insertions(+), 77 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index b8f590b397..479c40eead 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -734,7 +734,9 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform auth_xform;
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
+ void *asym_sess = NULL;
struct rte_crypto_asym_xform xform = {0};
+ int ret;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -744,11 +746,13 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp);
- if (sess == NULL)
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform,
+ sess_mp, &asym_sess);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1, "Asym session create failed\n");
return NULL;
-
- return sess;
+ }
+ return asym_sess;
}
#ifdef RTE_LIB_SECURITY
/*
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index f0cb839a49..c2e1b4dafd 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -317,7 +317,7 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
uint8_t input[TEST_DATA_SIZE] = {0};
uint8_t *result = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
xform_tc.next = NULL;
xform_tc.xform_type = data_tc->modex.xform_type;
@@ -452,14 +452,14 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
- ts_params->session_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
+ ts_params->session_mpool, &sess);
+ if (ret < 0) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -646,7 +646,7 @@ test_rsa_sign_verify(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -659,12 +659,12 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -686,7 +686,7 @@ test_rsa_enc_dec(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -699,11 +699,11 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -726,7 +726,7 @@ test_rsa_sign_verify_crt(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
@@ -738,12 +738,12 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -766,7 +766,7 @@ test_rsa_enc_dec_crt(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
@@ -778,12 +778,12 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1047,7 +1047,7 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
@@ -1074,12 +1074,12 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1130,7 +1130,7 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1152,12 +1152,12 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1211,7 +1211,7 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1241,12 +1241,12 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1300,7 +1300,7 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t out_pub_key[TEST_DH_MOD_LEN];
uint8_t out_prv_key[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform pub_key_xform;
@@ -1330,12 +1330,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1419,12 +1419,12 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1543,13 +1543,13 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1653,13 +1653,14 @@ test_dsa_sign(void)
uint8_t r[TEST_DH_MOD_LEN];
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
+ int ret;
- sess = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
/* set up crypto op data structure */
@@ -1788,7 +1789,7 @@ test_ecdsa_sign_verify(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -1833,12 +1834,12 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
@@ -1990,7 +1991,7 @@ test_ecpm(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -2035,12 +2036,12 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index a930cbbad6..640691c3ef 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -123,7 +123,8 @@ API Changes
The session structure was moved to ``cryptodev_pmd.h``,
hiding it from applications.
The API ``rte_cryptodev_asym_session_init`` was removed as the initialization
- is now moved to ``rte_cryptodev_asym_session_create``.
+ is now moved to ``rte_cryptodev_asym_session_create``, which was updated to
+ return an integer value to indicate initialisation errors.
ABI Changes
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 91d48d5886..727d271fb9 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -1912,9 +1912,10 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
return sess;
}
-void *
+int
rte_cryptodev_asym_session_create(uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp)
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
+ void **session)
{
struct rte_cryptodev_asym_session *sess;
uint32_t session_priv_data_sz;
@@ -1926,17 +1927,17 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return NULL;
+ return -EINVAL;
}
dev = rte_cryptodev_pmd_get_dev(dev_id);
if (dev == NULL)
- return NULL;
+ return -EINVAL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
- return NULL;
+ return -EINVAL;
}
session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
@@ -1946,22 +1947,23 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
CDEV_LOG_DEBUG(
"The private session data size used when creating the mempool is smaller than this device's private session data.");
- return NULL;
+ return -EINVAL;
}
/* Verify if provided mempool can hold elements big enough. */
if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
- return NULL;
+ return -EINVAL;
}
/* Allocate a session structure from the session pool */
- if (rte_mempool_get(mp, (void **)&sess)) {
+ if (rte_mempool_get(mp, session)) {
CDEV_LOG_ERR("couldn't get object from session mempool");
- return NULL;
+ return -ENOMEM;
}
+ sess = *session;
sess->driver_id = dev->driver_id;
sess->user_data_sz = pool_priv->user_data_sz;
sess->max_priv_data_sz = pool_priv->max_priv_session_sz;
@@ -1969,7 +1971,7 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
/* Clear device session pointer.*/
memset(sess->sess_private_data, 0, session_priv_data_sz + sess->user_data_sz);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, -ENOTSUP);
if (sess->sess_private_data[0] == 0) {
ret = dev->dev_ops->asym_session_configure(dev, xforms, sess);
@@ -1977,12 +1979,12 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
CDEV_LOG_ERR(
"dev_id %d failed to configure session details",
dev_id);
- return NULL;
+ return ret;
}
}
- rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp);
- return sess;
+ rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp, sess);
+ return 0;
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 1d7bd07680..19e2e70287 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -996,14 +996,19 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
* processed with this session
* @param mp mempool to allocate asymmetric session
* objects from
+ * @param session void ** for session to be used
+ *
* @return
- * - On success return pointer to asym-session
- * - On failure returns NULL
+ * - 0 on success.
+ * - -EINVAL on invalid arguments.
+ * - -ENOMEM on memory error for session allocation.
+ * - -ENOTSUP if device doesn't support session configuration.
*/
__rte_experimental
-void *
+int
rte_cryptodev_asym_session_create(uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp);
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
+ void **session);
/**
* Frees symmetric crypto session header, after checking that all
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index 005a4fe38b..a3f6048e7d 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -96,10 +96,12 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_create,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool,
+ void *sess),
rte_trace_point_emit_u8(dev_id);
rte_trace_point_emit_ptr(xforms);
rte_trace_point_emit_ptr(mempool);
+ rte_trace_point_emit_ptr(sess);
)
RTE_TRACE_POINT(
--
2.25.1
^ permalink raw reply [relevance 2%]
* [PATCH v5 2/5] crypto: use single buffer for asymmetric session
@ 2022-02-10 14:01 1% ` Ciara Power
2022-02-10 14:01 2% ` [PATCH v5 5/5] crypto: modify return value for asym session create Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2022-02-10 14:01 UTC (permalink / raw)
To: dev
Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty,
Ankur Dwivedi, Tejasree Kondoj, John Griffin, Fiona Trahe,
Deepak Kumar Jain
Rather than using a session buffer that contains pointers to private
session data elsewhere, use a single contiguous session buffer.
The session is created for a specific driver ID, and the mempool
element provides space for the largest private session data needed by
any driver.
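A sketch of the single-buffer session lifecycle this enables,
mirroring the calls used in the test changes below (the names and
counts are illustrative):

    struct rte_mempool *mp;
    void *sess;

    /* One pool; each element holds the session header plus the
     * largest per-driver private session data. */
    mp = rte_cryptodev_asym_session_pool_create("asym_sess_mp",
            NB_SESSIONS, 0 /* cache size */, SOCKET_ID_ANY);

    /* Create and initialise in one step; no separate _init call. */
    sess = rte_cryptodev_asym_session_create(dev_id, &xform, mp);

    /* Clear and free in one step. */
    rte_cryptodev_asym_session_free(dev_id, sess);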
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
v5:
- Removed get API for session private data, can be accessed directly.
- Modified test application to create a session mempool for
TEST_NUM_SESSIONS rather than TEST_NUM_SESSIONS * 2.
- Reworded create session function description.
- Removed sess parameter from create session trace,
to be added in a later patch.
v4:
- Merged asym crypto session clear and free functions.
- Reordered some function parameters.
- Updated trace function for asym crypto session create.
- Fixed cnxk clear, the PMD no longer needs to put private data
back into a mempool.
- Renamed struct field for max private session size.
- Replaced __extension__ with RTE_STD_C11.
- Moved some parameter validity checks to before functional code.
- Reworded release note.
- Removed mempool parameter from session configure function.
- Removed docs code additions, these are included due to patch 1
changing sample doc to use literal includes.
v3:
- Corrected formatting of struct comments.
- Increased size of max_priv_session_sz to uint16_t.
- Removed trace for asym session init function that was
previously removed.
- Added documentation.
v2:
- Renamed function typedef from "free" to "clear" as session private
data isn't being freed in that function.
- Moved user data API to separate patch.
- Minor fixes to comments, formatting, return values.
---
app/test-crypto-perf/cperf_ops.c | 14 +-
app/test-crypto-perf/cperf_test_throughput.c | 8 +-
app/test/test_cryptodev_asym.c | 272 +++++--------------
doc/guides/prog_guide/cryptodev_lib.rst | 21 +-
doc/guides/rel_notes/release_22_03.rst | 7 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 8 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 8 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 22 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 3 +-
drivers/crypto/octeontx/otx_cryptodev_ops.c | 32 +--
drivers/crypto/openssl/rte_openssl_pmd.c | 4 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 24 +-
drivers/crypto/qat/qat_asym.c | 54 +---
drivers/crypto/qat/qat_asym.h | 5 +-
lib/cryptodev/cryptodev_pmd.h | 23 +-
lib/cryptodev/cryptodev_trace_points.c | 9 +-
lib/cryptodev/rte_cryptodev.c | 213 ++++++++-------
lib/cryptodev/rte_cryptodev.h | 97 ++++---
lib/cryptodev/rte_cryptodev_trace.h | 38 ++-
lib/cryptodev/version.map | 7 +-
20 files changed, 315 insertions(+), 554 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index d975ae1ab8..b125c699de 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -735,7 +735,6 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
struct rte_crypto_asym_xform xform = {0};
- int rc;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -745,19 +744,10 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(sess_mp);
+ sess = (void *)rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp);
if (sess == NULL)
return NULL;
- rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
- &xform, priv_mp);
- if (rc < 0) {
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id,
- (void *)sess);
- rte_cryptodev_asym_session_free((void *)sess);
- }
- return NULL;
- }
+
return sess;
}
#ifdef RTE_LIB_SECURITY
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 51512af2ad..ee21ff27f7 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -35,11 +35,9 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
if (!ctx)
return;
if (ctx->sess) {
- if (ctx->options->op_type == CPERF_ASYM_MODEX) {
- rte_cryptodev_asym_session_clear(ctx->dev_id,
- (void *)ctx->sess);
- rte_cryptodev_asym_session_free((void *)ctx->sess);
- }
+ if (ctx->options->op_type == CPERF_ASYM_MODEX)
+ rte_cryptodev_asym_session_free(ctx->dev_id,
+ (void *)ctx->sess);
#ifdef RTE_LIB_SECURITY
else if (ctx->options->op_type == CPERF_PDCP ||
ctx->options->op_type == CPERF_DOCSIS ||
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 8d7290f9ed..88433faf1c 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -452,7 +452,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(ts_params->session_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
+ ts_params->session_mpool);
if (!sess) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
@@ -462,15 +463,6 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform_tc,
- ts_params->session_mpool) < 0) {
- snprintf(test_msg, ASYM_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
rte_crypto_op_attach_asym_session(op, sess);
} else {
asym_op->xform = &xform_tc;
@@ -512,10 +504,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
snprintf(test_msg, ASYM_TEST_MSG_LEN, "SESSIONLESS PASS");
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -669,18 +659,11 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -688,9 +671,7 @@ test_rsa_sign_verify(void)
status = queue_ops_rsa_sign_verify(sess);
error_exit:
-
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -718,17 +699,10 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -737,8 +711,7 @@ test_rsa_enc_dec(void)
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -765,28 +738,20 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
status = TEST_FAILED;
- return status;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify_crt\n");
- status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_sign_verify(sess);
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -813,27 +778,20 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec_crt\n");
status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_enc_dec(sess);
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -927,7 +885,6 @@ testsuite_setup(void)
/* configure qp */
ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
ts_params->qp_conf.mp_session = ts_params->session_mpool;
- ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
dev_id, qp_id, &ts_params->qp_conf,
@@ -936,21 +893,9 @@ testsuite_setup(void)
qp_id, dev_id);
}
- /* setup asym session pool */
- unsigned int session_size = RTE_MAX(
- rte_cryptodev_asym_get_private_session_size(dev_id),
- rte_cryptodev_asym_get_header_session_size());
- /*
- * Create mempool with TEST_NUM_SESSIONS * 2,
- * to include the session headers
- */
- ts_params->session_mpool = rte_mempool_create(
- "test_asym_sess_mp",
- TEST_NUM_SESSIONS * 2,
- session_size,
- 0, 0, NULL, NULL, NULL,
- NULL, SOCKET_ID_ANY,
- 0);
+ ts_params->session_mpool = rte_cryptodev_asym_session_pool_create(
+ "test_asym_sess_mp", TEST_NUM_SESSIONS, 0,
+ SOCKET_ID_ANY);
TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
"session mempool allocation failed");
@@ -1107,14 +1052,6 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1137,11 +1074,11 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1176,10 +1113,8 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -1199,14 +1134,6 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1225,11 +1152,11 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1265,10 +1192,8 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1290,14 +1215,6 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1324,11 +1241,11 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1365,10 +1282,8 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data, asym_op->dh.priv_key.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1391,15 +1306,6 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform pub_key_xform;
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1423,11 +1329,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.pub_key.length = sizeof(out_pub_key);
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1462,10 +1369,8 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
out_pub_key, asym_op->dh.pub_key.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1514,7 +1419,7 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
@@ -1523,15 +1428,6 @@ test_mod_inv(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modinv_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* generate crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1583,10 +1479,8 @@ test_mod_inv(void)
}
error_exit:
- if (sess) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op)
rte_crypto_op_free(op);
@@ -1649,7 +1543,7 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1,
"line %u "
@@ -1659,15 +1553,6 @@ test_mod_exp(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modex_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
asym_op = op->asym;
memcpy(input, base, sizeof(base));
asym_op->modex.base.data = input;
@@ -1706,10 +1591,8 @@ test_mod_exp(void)
}
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1771,7 +1654,7 @@ test_dsa_sign(void)
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool);
if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
@@ -1800,15 +1683,6 @@ test_dsa_sign(void)
debug_hexdump(stdout, "priv_key: ", dsa_xform.dsa.x.data,
dsa_xform.dsa.x.length);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &dsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* attach asymmetric crypto session to crypto operations */
rte_crypto_op_attach_asym_session(op, sess);
asym_op->dsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
@@ -1882,10 +1756,8 @@ test_dsa_sign(void)
status = TEST_FAILED;
}
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -1944,15 +1816,6 @@ test_ecdsa_sign_verify(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -1970,11 +1833,11 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2082,10 +1945,8 @@ test_ecdsa_sign_verify(enum curve curve_id)
}
exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -2157,15 +2018,6 @@ test_ecpm(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -2183,11 +2035,11 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2255,10 +2107,8 @@ test_ecpm(enum curve curve_id)
}
exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 9f33f7a177..b4dbd384bf 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1038,20 +1038,17 @@ It is the application's responsibility to create and manage the session mempools
Application using both symmetric and asymmetric sessions should allocate and maintain
different sessions pools for each type.
-An application can use ``rte_cryptodev_get_asym_session_private_size()`` to
-get the private size of asymmetric session on a given crypto device. This
-function would allow an application to calculate the max device asymmetric
-session size of all crypto devices to create a single session mempool.
-If instead an application creates multiple asymmetric session mempools,
-the Crypto device framework also provides ``rte_cryptodev_asym_get_header_session_size()`` to get
-the size of an uninitialized session.
+An application can use ``rte_cryptodev_asym_session_pool_create()`` to create a mempool
+with a specified number of elements. The element size will allow for the session header,
+and the max private session size.
+The max private session size is chosen based on available crypto devices,
+the biggest private session size is used. This means any of those devices can be used,
+and the mempool element will have available space for its private session data.
Once the session mempools have been created, ``rte_cryptodev_asym_session_create()``
-is used to allocate an uninitialized asymmetric session from the given mempool.
-The session then must be initialized using ``rte_cryptodev_asym_session_init()``
-for each of the required crypto devices. An asymmetric transform chain
-is used to specify the operation and its parameters. See the section below for
-details on transforms.
+is used to allocate and initialize an asymmetric session from the given mempool.
+An asymmetric transform chain is used to specify the operation and its parameters.
+See the section below for details on transforms.
When a session is no longer used, user must call ``rte_cryptodev_asym_session_clear()``
for each of the crypto devices that are using the session, to free all driver
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index a820cc5596..ea4c5309a0 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -112,6 +112,13 @@ API Changes
* ethdev: Old public macros and enumeration constants without ``RTE_ETH_`` prefix,
which are kept for backward compatibility, are marked as deprecated.
+* cryptodev: The asymmetric session handling was modified to use a single
+ mempool object. An API ``rte_cryptodev_asym_session_pool_create`` was added
+ to create a mempool with element size big enough to hold the generic asymmetric
+ session header and max size for a device private session data.
+ The API ``rte_cryptodev_asym_session_init`` was removed as the initialization
+ is now moved to ``rte_cryptodev_asym_session_create``.
+
ABI Changes
-----------
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index d217bbf383..c4d5d039ec 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -157,8 +157,8 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- ae_sess = get_asym_session_private_data(
- asym_op->session, cn10k_cryptodev_driver_id);
+ ae_sess = (struct cnxk_ae_sess *)
+ asym_op->session->sess_private_data;
ret = cnxk_ae_enqueue(qp, op, infl_req, &inst[0],
ae_sess);
if (unlikely(ret))
@@ -431,8 +431,8 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn10k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ op->session->sess_private_data;
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index ac1953b66d..b8ad4bf211 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -138,8 +138,8 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- sess = get_asym_session_private_data(
- asym_op->session, cn9k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ asym_op->session->sess_private_data;
ret = cnxk_ae_enqueue(qp, op, infl_req, inst, sess);
inst->w7.u64 = sess->cpt_inst_w7;
} else {
@@ -453,8 +453,8 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn9k_cryptodev_driver_id);
+ sess = (struct cnxk_ae_sess *)
+ op->session->sess_private_data;
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index a5fb68da02..72f5f1a6fe 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -658,10 +658,9 @@ void
cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- struct rte_mempool *sess_mp;
struct cnxk_ae_sess *priv;
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = (struct cnxk_ae_sess *) sess->sess_private_data;
if (priv == NULL)
return;
@@ -670,40 +669,29 @@ cnxk_ae_session_clear(struct rte_cryptodev *dev,
/* Reset and free object back to pool */
memset(priv, 0, cnxk_ae_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
int
cnxk_ae_session_cfg(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_cryptodev_asym_session *sess)
{
struct cnxk_cpt_vf *vf = dev->data->dev_private;
struct roc_cpt *roc_cpt = &vf->cpt;
- struct cnxk_ae_sess *priv;
+ struct cnxk_ae_sess *priv =
+ (struct cnxk_ae_sess *) sess->sess_private_data;
union cpt_inst_w7 w7;
int ret;
- if (rte_mempool_get(pool, (void **)&priv))
- return -ENOMEM;
-
- memset(priv, 0, sizeof(struct cnxk_ae_sess));
-
ret = cnxk_ae_fill_session_parameters(priv, xform);
- if (ret) {
- rte_mempool_put(pool, priv);
+ if (ret)
return ret;
- }
w7.u64 = 0;
w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_AE];
priv->cpt_inst_w7 = w7.u64;
priv->cnxk_fpm_iova = vf->cnxk_fpm_iova;
priv->ec_grp = vf->ec_grp;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 0656ba9675..ab0f00ee7c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -122,8 +122,7 @@ void cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
int cnxk_ae_session_cfg(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool);
+ struct rte_cryptodev_asym_session *sess);
void cnxk_cpt_dump_on_err(struct cnxk_cpt_qp *qp);
static inline union rte_event_crypto_metadata *
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index f7ca8a8a8e..0cb6dbb38c 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -375,35 +375,24 @@ otx_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
}
static int
-otx_cpt_asym_session_cfg(struct rte_cryptodev *dev,
+otx_cpt_asym_session_cfg(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform __rte_unused,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_cryptodev_asym_session *sess)
{
- struct cpt_asym_sess_misc *priv;
+ struct cpt_asym_sess_misc *priv = (struct cpt_asym_sess_misc *)
+ sess->sess_private_data;
int ret;
CPT_PMD_INIT_FUNC_TRACE();
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
ret = cpt_fill_asym_session_parameters(priv, xform);
if (ret) {
CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
return ret;
}
priv->cpt_inst_w7 = 0;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
@@ -412,11 +401,10 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
CPT_PMD_INIT_FUNC_TRACE();
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = (struct cpt_asym_sess_misc *) sess->sess_private_data;
if (priv == NULL)
return;
@@ -424,9 +412,6 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
/* Free resources allocated during session configure */
cpt_free_asym_session_parameters(priv);
memset(priv, 0, otx_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
static __rte_always_inline void * __rte_hot
@@ -471,8 +456,8 @@ otx_cpt_enq_single_asym(struct cpt_instance *instance,
return NULL;
}
- sess = get_asym_session_private_data(asym_op->session,
- otx_cryptodev_driver_id);
+ sess = (struct cpt_asym_sess_misc *)
+ asym_op->session->sess_private_data;
/* Store phys_addr of the mdata to meta_buf */
params.meta_buf = rte_mempool_virt2iova(mdata);
@@ -852,8 +837,7 @@ otx_cpt_asym_post_process(struct rte_crypto_op *cop,
struct rte_crypto_asym_op *op = cop->asym;
struct cpt_asym_sess_misc *sess;
- sess = get_asym_session_private_data(op->session,
- otx_cryptodev_driver_id);
+ sess = (struct cpt_asym_sess_misc *) op->session->sess_private_data;
switch (sess->xfrm_type) {
case RTE_CRYPTO_ASYM_XFORM_RSA:
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5794ed8159..d80e1052e2 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -748,9 +748,7 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
} else {
if (likely(op->asym->session != NULL))
asym_sess = (struct openssl_asym_session *)
- get_asym_session_private_data(
- op->asym->session,
- cryptodev_driver_id);
+ op->asym->session->sess_private_data;
if (asym_sess == NULL)
op->status =
RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 52715f86f8..556fd226ed 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -1119,8 +1119,7 @@ static int openssl_set_asym_session_parameters(
static int
openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_cryptodev_asym_session *sess)
{
void *asym_sess_private_data;
int ret;
@@ -1130,25 +1129,14 @@ openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
return -EINVAL;
}
- if (rte_mempool_get(mempool, &asym_sess_private_data)) {
- CDEV_LOG_ERR(
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
+ asym_sess_private_data = sess->sess_private_data;
ret = openssl_set_asym_session_parameters(asym_sess_private_data,
xform);
if (ret != 0) {
OPENSSL_LOG(ERR, "failed configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(mempool, asym_sess_private_data);
return ret;
}
- set_asym_session_private_data(sess, dev->driver_id,
- asym_sess_private_data);
-
return 0;
}
@@ -1206,19 +1194,15 @@ static void openssl_reset_asym_session(struct openssl_asym_session *sess)
* so it doesn't leave key material behind
*/
static void
-openssl_pmd_asym_session_clear(struct rte_cryptodev *dev,
+openssl_pmd_asym_session_clear(struct rte_cryptodev *dev __rte_unused,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = sess->sess_private_data;
/* Zero out the whole structure */
if (sess_priv) {
openssl_reset_asym_session(sess_priv);
memset(sess_priv, 0, sizeof(struct openssl_asym_session));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
}
}
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 09d8761c5f..f46eefd4b3 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -492,8 +492,7 @@ qat_asym_build_request(void *in_op,
op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
ctx = (struct qat_asym_session *)
- get_asym_session_private_data(
- op->asym->session, qat_asym_driver_id);
+ op->asym->session->sess_private_data;
if (unlikely(ctx == NULL)) {
QAT_LOG(ERR, "Session has not been created for this device");
goto error;
@@ -711,8 +710,8 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
if (rx_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- ctx = (struct qat_asym_session *)get_asym_session_private_data(
- rx_op->asym->session, qat_asym_driver_id);
+ ctx = (struct qat_asym_session *)
+ rx_op->asym->session->sess_private_data;
qat_asym_collect_response(rx_op, cookie, ctx->xform);
} else if (rx_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
qat_asym_collect_response(rx_op, cookie, rx_op->asym->xform);
@@ -726,61 +725,42 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_cryptodev_asym_session *sess)
{
- int err = 0;
- void *sess_private_data;
struct qat_asym_session *session;
- if (rte_mempool_get(mempool, &sess_private_data)) {
- QAT_LOG(ERR,
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
- session = sess_private_data;
+ session = (struct qat_asym_session *) sess->sess_private_data;
if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) {
if (xform->modex.exponent.length == 0 ||
xform->modex.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod exp input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) {
if (xform->modinv.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod inv input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
if (xform->rsa.n.length == 0) {
QAT_LOG(ERR, "Invalid rsa input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
|| xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
QAT_LOG(ERR, "Invalid asymmetric crypto xform");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
} else {
QAT_LOG(ERR, "Asymmetric crypto xform not implemented");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
session->xform = xform;
- qat_asym_build_req_tmpl(sess_private_data);
- set_asym_session_private_data(sess, dev->driver_id,
- sess_private_data);
+ qat_asym_build_req_tmpl(session);
return 0;
-error:
- rte_mempool_put(mempool, sess_private_data);
- return err;
}
unsigned int qat_asym_session_get_private_size(
@@ -793,15 +773,9 @@ void
qat_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = sess->sess_private_data;
struct qat_asym_session *s = (struct qat_asym_session *)sess_priv;
- if (sess_priv) {
+ if (sess_priv)
memset(s, 0, qat_asym_session_get_private_size(dev));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
- }
}
diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h
index 308b6b2e0b..c9242a12ca 100644
--- a/drivers/crypto/qat/qat_asym.h
+++ b/drivers/crypto/qat/qat_asym.h
@@ -46,10 +46,9 @@ struct qat_asym_session {
};
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool);
+ struct rte_cryptodev_asym_session *sess);
unsigned int
qat_asym_session_get_private_size(struct rte_cryptodev *dev);
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index b9146f652c..142bfb7c66 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -319,7 +319,6 @@ typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
* @param dev Crypto device pointer
* @param xform Single or chain of crypto xforms
* @param session Pointer to cryptodev's private session structure
- * @param mp Mempool where the private session is allocated
*
* @return
* - Returns 0 if private session structure have been created successfully.
@@ -329,8 +328,7 @@ typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
*/
typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *session,
- struct rte_mempool *mp);
+ struct rte_cryptodev_asym_session *session);
/**
* Free driver private session data.
*
@@ -340,12 +338,12 @@ typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_sym_session *sess);
/**
- * Free asymmetric session private data.
+ * Clear asymmetric session private data.
*
* @param dev Crypto device pointer
* @param sess Cryptodev session structure
*/
-typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
+typedef void (*cryptodev_asym_clear_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
/**
* Perform actual crypto processing (encrypt/digest or auth/decrypt)
@@ -429,7 +427,7 @@ struct rte_cryptodev_ops {
/**< Configure asymmetric Crypto session. */
cryptodev_sym_free_session_t sym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_asym_free_session_t asym_session_clear;
+ cryptodev_asym_clear_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
union {
cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
@@ -627,17 +625,4 @@ set_sym_session_private_data(struct rte_cryptodev_sym_session *sess,
sess->sess_data[driver_id].data = private_data;
}
-static inline void *
-get_asym_session_private_data(const struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id) {
- return sess->sess_private_data[driver_id];
-}
-
-static inline void
-set_asym_session_private_data(struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id, void *private_data)
-{
- sess->sess_private_data[driver_id] = private_data;
-}
-
#endif /* _CRYPTODEV_PMD_H_ */
diff --git a/lib/cryptodev/cryptodev_trace_points.c b/lib/cryptodev/cryptodev_trace_points.c
index 5d58951fd5..c5bfe08b79 100644
--- a/lib/cryptodev/cryptodev_trace_points.c
+++ b/lib/cryptodev/cryptodev_trace_points.c
@@ -24,6 +24,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_queue_pair_setup,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_pool_create,
lib.cryptodev.sym.pool.create)
+RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_pool_create,
+ lib.cryptodev.asym.pool.create)
+
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_create,
lib.cryptodev.sym.create)
@@ -39,15 +42,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_free,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_init,
lib.cryptodev.sym.init)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_init,
- lib.cryptodev.asym.init)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_clear,
lib.cryptodev.sym.clear)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_clear,
- lib.cryptodev.asym.clear)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_enqueue_burst,
lib.cryptodev.enq.burst)
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index a40536c5ea..b056d88ac2 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -195,7 +195,7 @@ const char *rte_crypto_asym_op_strings[] = {
};
/**
- * The private data structure stored in the session mempool private data.
+ * The private data structure stored in the sym session mempool private data.
*/
struct rte_cryptodev_sym_session_pool_private_data {
uint16_t nb_drivers;
@@ -204,6 +204,14 @@ struct rte_cryptodev_sym_session_pool_private_data {
/**< session user data will be placed after sess_data */
};
+/**
+ * The private data structure stored in the asym session mempool private data.
+ */
+struct rte_cryptodev_asym_session_pool_private_data {
+ uint16_t max_priv_session_sz;
+ /**< Size of private session data used when creating mempool */
+};
+
int
rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
const char *algo_string)
@@ -1751,47 +1759,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mp)
-{
- struct rte_cryptodev *dev;
- uint8_t index;
- int ret;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (sess == NULL || xforms == NULL || dev == NULL)
- return -EINVAL;
-
- index = dev->driver_id;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure,
- -ENOTSUP);
-
- if (sess->sess_private_data[index] == NULL) {
- ret = dev->dev_ops->asym_session_configure(dev,
- xforms,
- sess, mp);
- if (ret < 0) {
- CDEV_LOG_ERR(
- "dev_id %d failed to configure session details",
- dev_id);
- return ret;
- }
- }
-
- rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
- return 0;
-}
-
struct rte_mempool *
rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t user_data_size,
@@ -1834,6 +1801,53 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
return mp;
}
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id)
+{
+ struct rte_mempool *mp;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ uint32_t obj_sz, obj_sz_aligned;
+ uint8_t dev_id, priv_sz, max_priv_sz = 0;
+
+ for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++)
+ if (rte_cryptodev_is_valid_dev(dev_id)) {
+ priv_sz = rte_cryptodev_asym_get_private_session_size(dev_id);
+ if (priv_sz > max_priv_sz)
+ max_priv_sz = priv_sz;
+ }
+ if (max_priv_sz == 0) {
+ CDEV_LOG_INFO("Could not set max private session size\n");
+ return NULL;
+ }
+
+ obj_sz = rte_cryptodev_asym_get_header_session_size() + max_priv_sz;
+ obj_sz_aligned = RTE_ALIGN_CEIL(obj_sz, RTE_CACHE_LINE_SIZE);
+
+ mp = rte_mempool_create(name, nb_elts, obj_sz_aligned, cache_size,
+ (uint32_t)(sizeof(*pool_priv)),
+ NULL, NULL, NULL, NULL,
+ socket_id, 0);
+ if (mp == NULL) {
+ CDEV_LOG_ERR("%s(name=%s) failed, rte_errno=%d\n",
+ __func__, name, rte_errno);
+ return NULL;
+ }
+
+ pool_priv = rte_mempool_get_priv(mp);
+ if (!pool_priv) {
+ CDEV_LOG_ERR("%s(name=%s) failed to get private data\n",
+ __func__, name);
+ rte_mempool_free(mp);
+ return NULL;
+ }
+ pool_priv->max_priv_session_sz = max_priv_sz;
+
+ rte_cryptodev_trace_asym_session_pool_create(name, nb_elts,
+ cache_size, mp);
+ return mp;
+}
+
static unsigned int
rte_cryptodev_sym_session_data_size(struct rte_cryptodev_sym_session *sess)
{
@@ -1895,19 +1909,44 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
}
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mp)
+rte_cryptodev_asym_session_create(uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp)
{
struct rte_cryptodev_asym_session *sess;
- unsigned int session_size =
+ uint32_t session_priv_data_sz;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ unsigned int session_header_size =
rte_cryptodev_asym_get_header_session_size();
+ struct rte_cryptodev *dev;
+ int ret;
+
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+ return NULL;
+ }
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL)
+ return NULL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
return NULL;
}
+ session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
+ dev_id);
+ pool_priv = rte_mempool_get_priv(mp);
+
+ if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
+ CDEV_LOG_DEBUG(
+ "The private session data size used when creating the mempool is smaller than this device's private session data.");
+ return NULL;
+ }
+
/* Verify if provided mempool can hold elements big enough. */
- if (mp->elt_size < session_size) {
+ if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
return NULL;
@@ -1919,12 +1958,25 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp)
return NULL;
}
- /* Clear device session pointer.
- * Include the flag indicating presence of private data
- */
- memset(sess, 0, session_size);
+ sess->driver_id = dev->driver_id;
+ sess->max_priv_data_sz = pool_priv->max_priv_session_sz;
+
+ /* Clear device session pointer.*/
+ memset(sess->sess_private_data, 0, session_priv_data_sz);
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
- rte_cryptodev_trace_asym_session_create(mp, sess);
+ if (sess->sess_private_data[0] == 0) {
+ ret = dev->dev_ops->asym_session_configure(dev, xforms, sess);
+ if (ret < 0) {
+ CDEV_LOG_ERR(
+ "dev_id %d failed to configure session details",
+ dev_id);
+ return NULL;
+ }
+ }
+
+ rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp);
return sess;
}
@@ -1959,30 +2011,6 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_clear(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess)
-{
- struct rte_cryptodev *dev;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (dev == NULL || sess == NULL)
- return -EINVAL;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_clear, -ENOTSUP);
-
- dev->dev_ops->asym_session_clear(dev, sess);
-
- rte_cryptodev_trace_sym_session_clear(dev_id, sess);
- return 0;
-}
-
int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
{
@@ -2007,27 +2035,31 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
}
int
-rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess)
+rte_cryptodev_asym_session_free(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess)
{
- uint8_t i;
- void *sess_priv;
struct rte_mempool *sess_mp;
+ struct rte_cryptodev *dev;
- if (sess == NULL)
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
-
- /* Check that all device private data has been freed */
- for (i = 0; i < nb_drivers; i++) {
- sess_priv = get_asym_session_private_data(sess, i);
- if (sess_priv != NULL)
- return -EBUSY;
}
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL || sess == NULL)
+ return -EINVAL;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_clear, -ENOTSUP);
+
+ dev->dev_ops->asym_session_clear(dev, sess);
+
/* Return session to mempool */
sess_mp = rte_mempool_from_obj(sess);
rte_mempool_put(sess_mp, sess);
- rte_cryptodev_trace_asym_session_free(sess);
+ rte_cryptodev_trace_asym_session_free(dev_id, sess);
return 0;
}
@@ -2061,12 +2093,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
unsigned int
rte_cryptodev_asym_get_header_session_size(void)
{
- /*
- * Header contains pointers to the private data
- * of all registered drivers, and a flag which
- * indicates presence of private data
- */
- return ((sizeof(void *) * nb_drivers) + sizeof(uint8_t));
+ return sizeof(struct rte_cryptodev_asym_session);
}
unsigned int
@@ -2092,7 +2119,6 @@ unsigned int
rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- unsigned int header_size = sizeof(void *) * nb_drivers;
unsigned int priv_sess_size;
if (!rte_cryptodev_is_valid_dev(dev_id))
@@ -2104,11 +2130,8 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
return 0;
priv_sess_size = (*dev->dev_ops->asym_session_get_size)(dev);
- if (priv_sess_size < header_size)
- return header_size;
return priv_sess_size;
-
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 59ea5a54df..90e764017d 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -919,9 +919,13 @@ struct rte_cryptodev_sym_session {
};
/** Cryptodev asymmetric crypto session */
-struct rte_cryptodev_asym_session {
- __extension__ void *sess_private_data[0];
- /**< Private asymmetric session material */
+RTE_STD_C11 struct rte_cryptodev_asym_session {
+ uint8_t driver_id;
+ /**< Session driver ID. */
+ uint16_t max_priv_data_sz;
+ /**< Size of private data used when creating mempool */
+ uint8_t padding[5];
+ uint8_t sess_private_data[0];
};
/**
@@ -956,6 +960,29 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t priv_size,
int socket_id);
+/**
+ * Create an asymmetric session mempool.
+ *
+ * @param name
+ * The unique mempool name.
+ * @param nb_elts
+ * The number of elements in the mempool.
+ * @param cache_size
+ * The number of per-lcore cache elements
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in the case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ *
+ * @return
+ * - On success return mempool
+ * - On failure returns NULL
+ */
+__rte_experimental
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id);
+
/**
* Create symmetric crypto session header (generic with no private data)
*
@@ -969,17 +996,22 @@ struct rte_cryptodev_sym_session *
rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
/**
- * Create asymmetric crypto session header (generic with no private data)
+ * Create and initialise an asymmetric crypto session structure.
+ * Calls the PMD to configure the private session data.
*
- * @param mempool mempool to allocate asymmetric session
- * objects from
+ * @param dev_id ID of device that we want the session to be used on
+ * @param xforms Asymmetric crypto transform operations to apply on flow
+ * processed with this session
+ * @param mp mempool to allocate asymmetric session
+ * objects from
* @return
* - On success return pointer to asym-session
* - On failure returns NULL
*/
__rte_experimental
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mempool);
+rte_cryptodev_asym_session_create(uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp);
/**
* Frees symmetric crypto session header, after checking that all
@@ -997,20 +1029,20 @@ int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess);
/**
- * Frees asymmetric crypto session header, after checking that all
- * the device private data has been freed, returning it
- * to its original mempool.
+ * Clears and frees asymmetric crypto session header and private data,
+ * returning it to its original mempool.
*
+ * @param dev_id ID of device that uses the asymmetric session.
* @param sess Session header to be freed.
*
* @return
* - 0 if successful.
- * - -EINVAL if session is NULL.
- * - -EBUSY if not all device private data has been freed.
+ * - -EINVAL if device is invalid or session is NULL.
*/
__rte_experimental
int
-rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess);
+rte_cryptodev_asym_session_free(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess);
/**
* Fill out private data for the device id, based on its device type.
@@ -1034,28 +1066,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
struct rte_crypto_sym_xform *xforms,
struct rte_mempool *mempool);
-/**
- * Initialize asymmetric session on a device with specific asymmetric xform
- *
- * @param dev_id ID of device that we want the session to be used on
- * @param sess Session to be set up on a device
- * @param xforms Asymmetric crypto transform operations to apply on flow
- * processed with this session
- * @param mempool Mempool to be used for internal allocation.
- *
- * @return
- * - On success, zero.
- * - -EINVAL if input parameters are invalid.
- * - -ENOTSUP if crypto device does not support the crypto transform.
- * - -ENOMEM if the private session could not be allocated.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mempool);
-
/**
* Frees private data for the device id, based on its device type,
* returning it to its mempool. It is the application's responsibility
@@ -1074,21 +1084,6 @@ int
rte_cryptodev_sym_session_clear(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess);
-/**
- * Frees resources held by asymmetric session during rte_cryptodev_session_init
- *
- * @param dev_id ID of device that uses the asymmetric session.
- * @param sess Asymmetric session setup on device using
- * rte_cryptodev_session_init
- * @return
- * - 0 if successful.
- * - -EINVAL if device is invalid or session is NULL.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_clear(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess);
-
/**
* Get the size of the header session, for all registered drivers excluding
* the user data size.
@@ -1116,7 +1111,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
struct rte_cryptodev_sym_session *sess);
/**
- * Get the size of the asymmetric session header, for all registered drivers.
+ * Get the size of the asymmetric session header.
*
* @return
* Size of the asymmetric header session.
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index d1f4f069a3..a4fa9e8c7e 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -83,12 +83,22 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(sess->user_data_sz);
)
+RTE_TRACE_POINT(
+ rte_cryptodev_trace_asym_session_pool_create,
+ RTE_TRACE_POINT_ARGS(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, void *mempool),
+ rte_trace_point_emit_string(name);
+ rte_trace_point_emit_u32(nb_elts);
+ rte_trace_point_emit_u32(cache_size);
+ rte_trace_point_emit_ptr(mempool);
+)
+
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_create,
- RTE_TRACE_POINT_ARGS(void *mempool,
- struct rte_cryptodev_asym_session *sess),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms, void *mempool),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_ptr(xforms);
rte_trace_point_emit_ptr(mempool);
- rte_trace_point_emit_ptr(sess);
)
RTE_TRACE_POINT(
@@ -99,7 +109,9 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_free,
- RTE_TRACE_POINT_ARGS(struct rte_cryptodev_asym_session *sess),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess),
+ rte_trace_point_emit_u8(dev_id);
rte_trace_point_emit_ptr(sess);
)
@@ -117,17 +129,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(mempool);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_init,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess, void *xforms,
- void *mempool),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
- rte_trace_point_emit_ptr(xforms);
- rte_trace_point_emit_ptr(mempool);
-)
-
RTE_TRACE_POINT(
rte_cryptodev_trace_sym_session_clear,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
@@ -135,13 +136,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(sess);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_clear,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
-)
-
#ifdef __cplusplus
}
#endif
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index c50745fa8c..44d1aff0e2 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -55,10 +55,8 @@ EXPERIMENTAL {
rte_cryptodev_asym_get_header_session_size;
rte_cryptodev_asym_get_private_session_size;
rte_cryptodev_asym_get_xform_enum;
- rte_cryptodev_asym_session_clear;
rte_cryptodev_asym_session_create;
rte_cryptodev_asym_session_free;
- rte_cryptodev_asym_session_init;
rte_cryptodev_asym_xform_capability_check_modlen;
rte_cryptodev_asym_xform_capability_check_optype;
rte_cryptodev_sym_cpu_crypto_process;
@@ -81,9 +79,7 @@ EXPERIMENTAL {
__rte_cryptodev_trace_sym_session_free;
__rte_cryptodev_trace_asym_session_free;
__rte_cryptodev_trace_sym_session_init;
- __rte_cryptodev_trace_asym_session_init;
__rte_cryptodev_trace_sym_session_clear;
- __rte_cryptodev_trace_asym_session_clear;
__rte_cryptodev_trace_dequeue_burst;
__rte_cryptodev_trace_enqueue_burst;
@@ -104,6 +100,9 @@ EXPERIMENTAL {
rte_cryptodev_remove_deq_callback;
rte_cryptodev_remove_enq_callback;
+ # added 22.03
+ rte_cryptodev_asym_session_pool_create;
+ __rte_cryptodev_trace_asym_session_pool_create;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 1%]
* RE: [EXT] [PATCH v2 4/4] crypto: reorganize endianness comments, add crypto uint
@ 2022-02-10 10:17 3% ` Akhil Goyal
2022-02-10 16:38 0% ` Zhang, Roy Fan
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-10 10:17 UTC (permalink / raw)
To: Arek Kusztal, dev; +Cc: roy.fan.zhang, Ramkumar Balu
> This patch adds a crypto uint typedef so that per-field
> comments about byte order become unnecessary.
>
> It makes the API comments tidier and more consistent
> with the other asymmetric crypto APIs.
>
> Additionally, it reorganizes the code so that enums, externs
> and forward declarations are moved to the top of the
> header file, making it more readable.
>
> It also removes comments such as the co-prime constraint
> on mod inv, as that is a natural mathematical constraint,
> not a PMD constraint.
>
> Signed-off-by: Arek Kusztal <arkadiuszx.kusztal@intel.com>
> ---
CI is reporting ABI issues in this set. Can you check?
http://mails.dpdk.org/archives/test-report/2022-February/257403.html
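For readers following the thread, the typedef under discussion amounts to roughly the following. This is a sketch assuming the rte_crypto_param layout from rte_crypto_asym.h, not an excerpt from the patch:

#include <stdint.h>
#include <stddef.h>
/* rte_iova_t comes from DPDK's <rte_memory.h>. */

typedef struct rte_crypto_param_t {
	uint8_t *data;   /* pointer to the parameter bytes */
	rte_iova_t iova; /* IO address of the data buffer */
	size_t length;   /* length of the data in bytes */
} rte_crypto_param;

/* An unsigned integer in big-endian byte order; the typedef states
 * the byte order once, so each API field no longer needs its own
 * "in big-endian" comment. */
typedef rte_crypto_param rte_crypto_uint;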
^ permalink raw reply [relevance 3%]
* Re: [PATCH v6 0/3] ethdev: introduce IP reassembly offload
2022-02-08 22:20 4% ` [PATCH v6 " Akhil Goyal
@ 2022-02-10 8:54 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-02-10 8:54 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, olivier.matz, david.marchand, radu.nicolau, jerinj,
stephen, mdr
On 2/8/2022 10:20 PM, Akhil Goyal wrote:
> As discussed in the RFC[1] sent in 21.11, a new offload is
> introduced in ethdev for IP reassembly.
>
> This patchset adds the IP reassembly RX offload.
> Currently, the offload is tested along with inline IPsec processing.
> It can also be updated as a standalone offload without IPsec, if there
> is hardware available to test it.
> The patchset is tested on the cnxk platform. The driver implementation
> and a test app are added as separate patchsets.[2][3]
>
> [1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
> [2]: APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
> [3]: PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
> Newer versions of app and PMD will be sent once library changes are
> acked.
>
> Changes in v6:
> - fix warnings.
>
> Changes in v5:
> - updated Doxygen comments.(Ferruh)
> - Added release notes.
> - updated libabigail suppress rules.(David)
>
> Changes in v4:
> - removed rte_eth_dev_info update for capability (Ferruh)
> - removed Rx offload flag (Ferruh)
> - added capability_get() (Ferruh)
> - moved dynfield and dynflag namedefines in rte_mbuf_dyn.h (Ferruh)
>
> changes in v3:
> - incorporated comments from Andrew and Stephen Hemminger
>
> changes in v2:
> - added abi ignore exceptions for modifications in reserved fields.
> Added a crude way to subside the rte_security and rte_ipsec ABI issue.
> Please suggest a better way.
> - incorporated Konstantin's comment for extra checks in new API
> introduced.
> - converted static mbuf ol_flag to mbuf dynflag (Konstantin)
> - added a get API for reassembly configuration (Konstantin)
> - Fixed checkpatch issues.
> - Dynfield is NOT split into 2 parts as it would cause an extra fetch in
> case of IP reassembly failure.
> - Application patches are split into a separate series.
>
>
> Akhil Goyal (3):
> ethdev: introduce IP reassembly offload
> ethdev: add mbuf dynfield for incomplete IP reassembly
> security: add IPsec option for IP reassembly
>
Series applied to dpdk-next-net/main, thanks.
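As a quick orientation for anyone skimming the archive, the control path added by this series looks roughly like the sketch below. The struct and function names are assumed from the API as merged in 22.03 and are not quoted from these patches:

#include <rte_ethdev.h>

/* Query the port's reassembly limits, then enable the offload
 * within those limits before starting the port. */
static int
enable_ip_reassembly(uint16_t port_id)
{
	struct rte_eth_ip_reassembly_params capa;
	int ret;

	ret = rte_eth_ip_reassembly_capability_get(port_id, &capa);
	if (ret < 0)
		return ret; /* offload not supported on this port */

	capa.timeout_ms = 100; /* drop incomplete datagrams after 100 ms */
	return rte_eth_ip_reassembly_conf_set(port_id, &capa);
}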
^ permalink raw reply [relevance 0%]
* [PATCH v4 5/5] crypto: modify return value for asym session create
2022-02-09 15:38 1% ` [PATCH v4 2/5] crypto: use single buffer for asymmetric session Ciara Power
@ 2022-02-09 15:38 2% ` Ciara Power
` (2 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Ciara Power @ 2022-02-09 15:38 UTC (permalink / raw)
To: dev; +Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty
The asym session create function previously returned a session pointer
on success and NULL on error. It is modified to return an int:
0 on success, or -EINVAL/-ENOTSUP/-ENOMEM on failure.
The session to be used is returned through an output parameter.
This makes the cause of a create failure explicit, which enables
treating an -ENOTSUP return as TEST_SKIPPED in test apps.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
v4:
- Reordered function parameters.
- Removed docs code additions; these are included because patch 1
changes the sample doc to use literal includes.
v3:
- Fixed variable declarations, putting initialised variable last.
- Made function comment for return value more generic.
- Fixed log to include line break.
- Added documentation.
---
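Concretely, callers move from checking a returned pointer against NULL
to checking an errno-style return value, roughly as below (adapted from
the test-app hunks that follow; dev_id, xform and sess_mp are assumed
to be set up as before):

void *asym_sess = NULL;
int ret;

ret = rte_cryptodev_asym_session_create(dev_id, &xform,
		sess_mp, &asym_sess);
if (ret == -ENOTSUP)
	return TEST_SKIPPED; /* device cannot configure this xform */
if (ret < 0)
	return TEST_FAILED;  /* -EINVAL or -ENOMEM */
/* asym_sess is now valid and is attached to ops as before. */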
app/test-crypto-perf/cperf_ops.c | 12 ++-
app/test/test_cryptodev_asym.c | 109 +++++++++++++------------
doc/guides/rel_notes/release_22_03.rst | 3 +-
lib/cryptodev/rte_cryptodev.c | 26 +++---
lib/cryptodev/rte_cryptodev.h | 13 ++-
5 files changed, 88 insertions(+), 75 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index b8f590b397..479c40eead 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -734,7 +734,9 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform auth_xform;
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
+ void *asym_sess = NULL;
struct rte_crypto_asym_xform xform = {0};
+ int ret;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -744,11 +746,13 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp);
- if (sess == NULL)
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform,
+ sess_mp, &asym_sess);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1, "Asym session create failed\n");
return NULL;
-
- return sess;
+ }
+ return asym_sess;
}
#ifdef RTE_LIB_SECURITY
/*
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index f9691fe281..1bddcb013e 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -317,7 +317,7 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
uint8_t input[TEST_DATA_SIZE] = {0};
uint8_t *result = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
xform_tc.next = NULL;
xform_tc.xform_type = data_tc->modex.xform_type;
@@ -452,14 +452,14 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
- ts_params->session_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
+ ts_params->session_mpool, &sess);
+ if (ret < 0) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -646,7 +646,7 @@ test_rsa_sign_verify(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -659,12 +659,12 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -686,7 +686,7 @@ test_rsa_enc_dec(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -699,11 +699,11 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -726,7 +726,7 @@ test_rsa_sign_verify_crt(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check im PMD feature flag for RSA quintuple key type support.
@@ -738,12 +738,12 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -766,7 +766,7 @@ test_rsa_enc_dec_crt(void)
uint8_t dev_id = ts_params->valid_devs[0];
void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
@@ -778,12 +778,12 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
+ ret = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool, &sess);
- if (!sess) {
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1047,7 +1047,7 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
@@ -1074,12 +1074,12 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1130,7 +1130,7 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1152,12 +1152,12 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1211,7 +1211,7 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1241,12 +1241,12 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1300,7 +1300,7 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t out_pub_key[TEST_DH_MOD_LEN];
uint8_t out_prv_key[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform pub_key_xform;
@@ -1330,12 +1330,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1419,12 +1419,12 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1543,13 +1543,13 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1653,13 +1653,14 @@ test_dsa_sign(void)
uint8_t r[TEST_DH_MOD_LEN];
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
+ int ret;
- sess = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
/* set up crypto op data structure */
@@ -1788,7 +1789,7 @@ test_ecdsa_sign_verify(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -1833,12 +1834,12 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
@@ -1990,7 +1991,7 @@ test_ecpm(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -2035,12 +2036,12 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool, &sess);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index a930cbbad6..640691c3ef 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -123,7 +123,8 @@ API Changes
The session structure was moved to ``cryptodev_pmd.h``,
hiding it from applications.
The API ``rte_cryptodev_asym_session_init`` was removed as the initialization
- is now moved to ``rte_cryptodev_asym_session_create``.
+ is now moved to ``rte_cryptodev_asym_session_create``, which was updated to
+ return an integer value to indicate initialisation errors.
ABI Changes
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 916dbb6709..727d271fb9 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -1912,9 +1912,10 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
return sess;
}
-void *
+int
rte_cryptodev_asym_session_create(uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp)
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
+ void **session)
{
struct rte_cryptodev_asym_session *sess;
uint32_t session_priv_data_sz;
@@ -1926,17 +1927,17 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return NULL;
+ return -EINVAL;
}
dev = rte_cryptodev_pmd_get_dev(dev_id);
if (dev == NULL)
- return NULL;
+ return -EINVAL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
- return NULL;
+ return -EINVAL;
}
session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
@@ -1946,22 +1947,23 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
CDEV_LOG_DEBUG(
"The private session data size used when creating the mempool is smaller than this device's private session data.");
- return NULL;
+ return -EINVAL;
}
/* Verify if provided mempool can hold elements big enough. */
if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
- return NULL;
+ return -EINVAL;
}
/* Allocate a session structure from the session pool */
- if (rte_mempool_get(mp, (void **)&sess)) {
+ if (rte_mempool_get(mp, session)) {
CDEV_LOG_ERR("couldn't get object from session mempool");
- return NULL;
+ return -ENOMEM;
}
+ sess = *session;
sess->driver_id = dev->driver_id;
sess->user_data_sz = pool_priv->user_data_sz;
sess->max_priv_data_sz = pool_priv->max_priv_session_sz;
@@ -1969,7 +1971,7 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
/* Clear device session pointer.*/
memset(sess->sess_private_data, 0, session_priv_data_sz + sess->user_data_sz);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, -ENOTSUP);
if (sess->sess_private_data[0] == 0) {
ret = dev->dev_ops->asym_session_configure(dev, xforms, sess);
@@ -1977,12 +1979,12 @@ rte_cryptodev_asym_session_create(uint8_t dev_id,
CDEV_LOG_ERR(
"dev_id %d failed to configure session details",
dev_id);
- return NULL;
+ return ret;
}
}
rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp, sess);
- return sess;
+ return 0;
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 91110b08da..dba982e919 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -995,14 +995,19 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
* processed with this session
* @param mp mempool to allocate asymmetric session
* objects from
+ * @param session void ** for session to be used
+ *
* @return
- * - On success return pointer to asym-session
- * - On failure returns NULL
+ * - 0 on success.
+ * - -EINVAL on invalid arguments.
+ * - -ENOMEM on memory error for session allocation.
+ * - -ENOTSUP if device doesn't support session configuration.
*/
__rte_experimental
-void *
+int
rte_cryptodev_asym_session_create(uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp);
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp,
+ void **session);
/**
* Frees symmetric crypto session header, after checking that all
--
2.25.1
^ permalink raw reply [relevance 2%]
* [PATCH v4 2/5] crypto: use single buffer for asymmetric session
@ 2022-02-09 15:38 1% ` Ciara Power
2022-02-09 15:38 2% ` [PATCH v4 5/5] crypto: modify return value for asym session create Ciara Power
` (3 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Ciara Power @ 2022-02-09 15:38 UTC (permalink / raw)
To: dev
Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty,
Ankur Dwivedi, Tejasree Kondoj, John Griffin, Fiona Trahe,
Deepak Kumar Jain
Rather than using a session buffer that contains pointers to private
session data elsewhere, use a single contiguous session buffer.
The session is created for a specific driver ID, and each mempool
element reserves space for the largest private session data needed
by any driver.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
v4:
- Merged asym crypto session clear and free functions.
- Reordered some function parameters.
- Updated trace function for asym crypto session create.
- Fixed cnxk clear; the PMD no longer needs to put private data
back into a mempool.
- Renamed struct field for max private session size.
- Replaced __extension__ with RTE_STD_C11.
- Moved some parameter validity checks to before functional code.
- Reworded release note.
- Removed mempool parameter from session configure function.
- Removed docs code additions; these are included because patch 1
changes the sample doc to use literal includes.
v3:
- Corrected formatting of struct comments.
- Increased size of max_priv_session_sz to uint16_t.
- Removed trace for asym session init function that was
previously removed.
- Added documentation.
v2:
- Renamed function typedef from "free" to "clear" as session private
data isn't being freed in that function.
- Moved user data API to separate patch.
- Minor fixes to comments, formatting, return values.
---
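For reference, the resulting usage pattern is roughly the following.
Signatures are taken from the hunks below; NB_SESSIONS is a placeholder
and error handling is omitted:

struct rte_mempool *mp;
void *sess;

/* Each mempool element now holds the generic session header plus
 * the largest private session size of any probed driver. */
mp = rte_cryptodev_asym_session_pool_create("asym_sess_mp",
		NB_SESSIONS, 0 /* cache_size */, SOCKET_ID_ANY);

/* Create and configure in one call; the separate
 * rte_cryptodev_asym_session_init() step is gone. */
sess = rte_cryptodev_asym_session_create(dev_id, &xform, mp);

/* Clear and free in one call as well. */
rte_cryptodev_asym_session_free(dev_id, sess);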
app/test-crypto-perf/cperf_ops.c | 14 +-
app/test-crypto-perf/cperf_test_throughput.c | 8 +-
app/test/test_cryptodev_asym.c | 272 +++++--------------
doc/guides/prog_guide/cryptodev_lib.rst | 21 +-
doc/guides/rel_notes/release_22_03.rst | 7 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 6 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 6 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 21 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.h | 3 +-
drivers/crypto/octeontx/otx_cryptodev_ops.c | 30 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 5 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 24 +-
drivers/crypto/qat/qat_asym.c | 54 +---
drivers/crypto/qat/qat_asym.h | 5 +-
lib/cryptodev/cryptodev_pmd.h | 21 +-
lib/cryptodev/cryptodev_trace_points.c | 9 +-
lib/cryptodev/rte_cryptodev.c | 213 ++++++++-------
lib/cryptodev/rte_cryptodev.h | 94 +++----
lib/cryptodev/rte_cryptodev_trace.h | 38 ++-
lib/cryptodev/version.map | 7 +-
20 files changed, 308 insertions(+), 550 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index d975ae1ab8..b125c699de 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -735,7 +735,6 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
struct rte_crypto_asym_xform xform = {0};
- int rc;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -745,19 +744,10 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(sess_mp);
+ sess = (void *)rte_cryptodev_asym_session_create(dev_id, &xform, sess_mp);
if (sess == NULL)
return NULL;
- rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
- &xform, priv_mp);
- if (rc < 0) {
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id,
- (void *)sess);
- rte_cryptodev_asym_session_free((void *)sess);
- }
- return NULL;
- }
+
return sess;
}
#ifdef RTE_LIB_SECURITY
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 51512af2ad..ee21ff27f7 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -35,11 +35,9 @@ cperf_throughput_test_free(struct cperf_throughput_ctx *ctx)
if (!ctx)
return;
if (ctx->sess) {
- if (ctx->options->op_type == CPERF_ASYM_MODEX) {
- rte_cryptodev_asym_session_clear(ctx->dev_id,
- (void *)ctx->sess);
- rte_cryptodev_asym_session_free((void *)ctx->sess);
- }
+ if (ctx->options->op_type == CPERF_ASYM_MODEX)
+ rte_cryptodev_asym_session_free(ctx->dev_id,
+ (void *)ctx->sess);
#ifdef RTE_LIB_SECURITY
else if (ctx->options->op_type == CPERF_PDCP ||
ctx->options->op_type == CPERF_DOCSIS ||
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 8d7290f9ed..3e27d93380 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -452,7 +452,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(ts_params->session_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform_tc,
+ ts_params->session_mpool);
if (!sess) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
@@ -462,15 +463,6 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform_tc,
- ts_params->session_mpool) < 0) {
- snprintf(test_msg, ASYM_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
rte_crypto_op_attach_asym_session(op, sess);
} else {
asym_op->xform = &xform_tc;
@@ -512,10 +504,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
snprintf(test_msg, ASYM_TEST_MSG_LEN, "SESSIONLESS PASS");
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -669,18 +659,11 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -688,9 +671,7 @@ test_rsa_sign_verify(void)
status = queue_ops_rsa_sign_verify(sess);
error_exit:
-
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -718,17 +699,10 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -737,8 +711,7 @@ test_rsa_enc_dec(void)
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -765,28 +738,20 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
status = TEST_FAILED;
- return status;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify_crt\n");
- status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_sign_verify(sess);
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -813,27 +778,20 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &rsa_xform_crt, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec_crt\n");
status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_enc_dec(sess);
error_exit:
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
+ rte_cryptodev_asym_session_free(dev_id, sess);
TEST_ASSERT_EQUAL(status, 0, "Test failed");
@@ -927,7 +885,6 @@ testsuite_setup(void)
/* configure qp */
ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
ts_params->qp_conf.mp_session = ts_params->session_mpool;
- ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
dev_id, qp_id, &ts_params->qp_conf,
@@ -936,21 +893,9 @@ testsuite_setup(void)
qp_id, dev_id);
}
- /* setup asym session pool */
- unsigned int session_size = RTE_MAX(
- rte_cryptodev_asym_get_private_session_size(dev_id),
- rte_cryptodev_asym_get_header_session_size());
- /*
- * Create mempool with TEST_NUM_SESSIONS * 2,
- * to include the session headers
- */
- ts_params->session_mpool = rte_mempool_create(
- "test_asym_sess_mp",
- TEST_NUM_SESSIONS * 2,
- session_size,
- 0, 0, NULL, NULL, NULL,
- NULL, SOCKET_ID_ANY,
- 0);
+ ts_params->session_mpool = rte_cryptodev_asym_session_pool_create(
+ "test_asym_sess_mp", TEST_NUM_SESSIONS * 2, 0,
+ SOCKET_ID_ANY);
TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
"session mempool allocation failed");
@@ -1107,14 +1052,6 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1137,11 +1074,11 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1176,10 +1113,8 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -1199,14 +1134,6 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1225,11 +1152,11 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1265,10 +1192,8 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1290,14 +1215,6 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1324,11 +1241,11 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1365,10 +1282,8 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data, asym_op->dh.priv_key.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1391,15 +1306,6 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform pub_key_xform;
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1423,11 +1329,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.pub_key.length = sizeof(out_pub_key);
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1462,10 +1369,8 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
out_pub_key, asym_op->dh.pub_key.length);
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1514,7 +1419,7 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &modinv_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
@@ -1523,15 +1428,6 @@ test_mod_inv(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modinv_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* generate crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1583,10 +1479,8 @@ test_mod_inv(void)
}
error_exit:
- if (sess) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op)
rte_crypto_op_free(op);
@@ -1649,7 +1543,7 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &modex_xform, sess_mpool);
if (!sess) {
RTE_LOG(ERR, USER1,
"line %u "
@@ -1659,15 +1553,6 @@ test_mod_exp(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modex_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
asym_op = op->asym;
memcpy(input, base, sizeof(base));
asym_op->modex.base.data = input;
@@ -1706,10 +1591,8 @@ test_mod_exp(void)
}
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
@@ -1771,7 +1654,7 @@ test_dsa_sign(void)
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(dev_id, &dsa_xform, sess_mpool);
if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
@@ -1800,15 +1683,6 @@ test_dsa_sign(void)
debug_hexdump(stdout, "priv_key: ", dsa_xform.dsa.x.data,
dsa_xform.dsa.x.length);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &dsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* attach asymmetric crypto session to crypto operations */
rte_crypto_op_attach_asym_session(op, sess);
asym_op->dsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
@@ -1882,10 +1756,8 @@ test_dsa_sign(void)
status = TEST_FAILED;
}
error_exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -1944,15 +1816,6 @@ test_ecdsa_sign_verify(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -1970,11 +1833,11 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2082,10 +1945,8 @@ test_ecdsa_sign_verify(enum curve curve_id)
}
exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
@@ -2157,15 +2018,6 @@ test_ecpm(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -2183,11 +2035,11 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(dev_id, &xform, sess_mpool);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2255,10 +2107,8 @@ test_ecpm(enum curve curve_id)
}
exit:
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id, sess);
- rte_cryptodev_asym_session_free(sess);
- }
+ if (sess != NULL)
+ rte_cryptodev_asym_session_free(dev_id, sess);
if (op != NULL)
rte_crypto_op_free(op);
return status;
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 9f33f7a177..b4dbd384bf 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1038,20 +1038,17 @@ It is the application's responsibility to create and manage the session mempools
Application using both symmetric and asymmetric sessions should allocate and maintain
different sessions pools for each type.
-An application can use ``rte_cryptodev_get_asym_session_private_size()`` to
-get the private size of asymmetric session on a given crypto device. This
-function would allow an application to calculate the max device asymmetric
-session size of all crypto devices to create a single session mempool.
-If instead an application creates multiple asymmetric session mempools,
-the Crypto device framework also provides ``rte_cryptodev_asym_get_header_session_size()`` to get
-the size of an uninitialized session.
+An application can use ``rte_cryptodev_asym_session_pool_create()`` to create a mempool
+with a specified number of elements. The element size is big enough to hold the
+session header and the maximum private session size.
+The maximum private session size is chosen by scanning the available crypto devices
+and taking the biggest private session size found. This means any of those devices
+can be used, and the mempool element will have space for its private session data.
Once the session mempools have been created, ``rte_cryptodev_asym_session_create()``
-is used to allocate an uninitialized asymmetric session from the given mempool.
-The session then must be initialized using ``rte_cryptodev_asym_session_init()``
-for each of the required crypto devices. An asymmetric transform chain
-is used to specify the operation and its parameters. See the section below for
-details on transforms.
+is used to allocate and initialize an asymmetric session from the given mempool.
+An asymmetric transform chain is used to specify the operation and its parameters.
+See the section below for details on transforms.
When a session is no longer used, user must call ``rte_cryptodev_asym_session_clear()``
for each of the crypto devices that are using the session, to free all driver
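To make the new flow concrete, here is a minimal sketch (names and sizes are
illustrative and not part of the patch; dev_id is assumed to identify a valid
device with modexp support):

	struct rte_mempool *pool;
	struct rte_cryptodev_asym_session *sess;
	struct rte_crypto_asym_xform xform = {
		.xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,
		/* modulus and exponent filled in by the application */
	};

	/* Element size covers the generic session header plus the biggest
	 * private session size among the available devices. */
	pool = rte_cryptodev_asym_session_pool_create("asym_sess_mp",
			128, 0, SOCKET_ID_ANY);

	/* Allocation and initialization now happen in a single call. */
	sess = rte_cryptodev_asym_session_create(dev_id, &xform, pool);

	/* ... attach the session to ops, enqueue and dequeue them ... */

	/* Clears driver private data and returns the object to the pool. */
	rte_cryptodev_asym_session_free(dev_id, sess);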
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index a820cc5596..ea4c5309a0 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -112,6 +112,13 @@ API Changes
* ethdev: Old public macros and enumeration constants without ``RTE_ETH_`` prefix,
which are kept for backward compatibility, are marked as deprecated.
+* cryptodev: The asymmetric session handling was modified to use a single
+ mempool object. An API ``rte_cryptodev_asym_session_pool_create`` was added
+ to create a mempool with an element size big enough to hold the generic asymmetric
+ session header and the maximum device private session data size.
+ The API ``rte_cryptodev_asym_session_init`` was removed, as the initialization
+ is now done in ``rte_cryptodev_asym_session_create``.
+
ABI Changes
-----------
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index d217bbf383..7390f976c6 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -157,8 +157,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- ae_sess = get_asym_session_private_data(
- asym_op->session, cn10k_cryptodev_driver_id);
+ ae_sess = get_asym_session_private_data(asym_op->session);
ret = cnxk_ae_enqueue(qp, op, infl_req, &inst[0],
ae_sess);
if (unlikely(ret))
@@ -431,8 +430,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn10k_cryptodev_driver_id);
+ sess = get_asym_session_private_data(op->session);
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index ac1953b66d..59a06af30e 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -138,8 +138,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- sess = get_asym_session_private_data(
- asym_op->session, cn9k_cryptodev_driver_id);
+ sess = get_asym_session_private_data(asym_op->session);
ret = cnxk_ae_enqueue(qp, op, infl_req, inst, sess);
inst->w7.u64 = sess->cpt_inst_w7;
} else {
@@ -453,8 +452,7 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn9k_cryptodev_driver_id);
+ sess = get_asym_session_private_data(op->session);
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index a5fb68da02..7bfac186f9 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -658,10 +658,9 @@ void
cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- struct rte_mempool *sess_mp;
struct cnxk_ae_sess *priv;
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = get_asym_session_private_data(sess);
if (priv == NULL)
return;
@@ -670,40 +669,28 @@ cnxk_ae_session_clear(struct rte_cryptodev *dev,
/* Reset and free object back to pool */
memset(priv, 0, cnxk_ae_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
int
cnxk_ae_session_cfg(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_cryptodev_asym_session *sess)
{
struct cnxk_cpt_vf *vf = dev->data->dev_private;
struct roc_cpt *roc_cpt = &vf->cpt;
- struct cnxk_ae_sess *priv;
+ struct cnxk_ae_sess *priv = get_asym_session_private_data(sess);
union cpt_inst_w7 w7;
int ret;
- if (rte_mempool_get(pool, (void **)&priv))
- return -ENOMEM;
-
- memset(priv, 0, sizeof(struct cnxk_ae_sess));
-
ret = cnxk_ae_fill_session_parameters(priv, xform);
- if (ret) {
- rte_mempool_put(pool, priv);
+ if (ret)
return ret;
- }
w7.u64 = 0;
w7.s.egrp = roc_cpt->eng_grp[CPT_ENG_TYPE_AE];
priv->cpt_inst_w7 = w7.u64;
priv->cnxk_fpm_iova = vf->cnxk_fpm_iova;
priv->ec_grp = vf->ec_grp;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
index 0656ba9675..ab0f00ee7c 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.h
@@ -122,8 +122,7 @@ void cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
int cnxk_ae_session_cfg(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool);
+ struct rte_cryptodev_asym_session *sess);
void cnxk_cpt_dump_on_err(struct cnxk_cpt_qp *qp);
static inline union rte_event_crypto_metadata *
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index f7ca8a8a8e..cf3947f1ab 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -375,35 +375,23 @@ otx_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
}
static int
-otx_cpt_asym_session_cfg(struct rte_cryptodev *dev,
+otx_cpt_asym_session_cfg(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform __rte_unused,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_cryptodev_asym_session *sess)
{
- struct cpt_asym_sess_misc *priv;
+ struct cpt_asym_sess_misc *priv = get_asym_session_private_data(sess);
int ret;
CPT_PMD_INIT_FUNC_TRACE();
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
ret = cpt_fill_asym_session_parameters(priv, xform);
if (ret) {
CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
return ret;
}
priv->cpt_inst_w7 = 0;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
@@ -412,11 +400,10 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
CPT_PMD_INIT_FUNC_TRACE();
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = get_asym_session_private_data(sess);
if (priv == NULL)
return;
@@ -424,9 +411,6 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
/* Free resources allocated during session configure */
cpt_free_asym_session_parameters(priv);
memset(priv, 0, otx_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
static __rte_always_inline void * __rte_hot
@@ -471,8 +455,7 @@ otx_cpt_enq_single_asym(struct cpt_instance *instance,
return NULL;
}
- sess = get_asym_session_private_data(asym_op->session,
- otx_cryptodev_driver_id);
+ sess = get_asym_session_private_data(asym_op->session);
/* Store phys_addr of the mdata to meta_buf */
params.meta_buf = rte_mempool_virt2iova(mdata);
@@ -852,8 +835,7 @@ otx_cpt_asym_post_process(struct rte_crypto_op *cop,
struct rte_crypto_asym_op *op = cop->asym;
struct cpt_asym_sess_misc *sess;
- sess = get_asym_session_private_data(op->session,
- otx_cryptodev_driver_id);
+ sess = get_asym_session_private_data(op->session);
switch (sess->xfrm_type) {
case RTE_CRYPTO_ASYM_XFORM_RSA:
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5794ed8159..1e7e5f6849 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -747,10 +747,7 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
cryptodev_driver_id);
} else {
if (likely(op->asym->session != NULL))
- asym_sess = (struct openssl_asym_session *)
- get_asym_session_private_data(
- op->asym->session,
- cryptodev_driver_id);
+ asym_sess = get_asym_session_private_data(op->asym->session);
if (asym_sess == NULL)
op->status =
RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 52715f86f8..92b9524bf3 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -1119,8 +1119,7 @@ static int openssl_set_asym_session_parameters(
static int
openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_cryptodev_asym_session *sess)
{
void *asym_sess_private_data;
int ret;
@@ -1130,25 +1129,14 @@ openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
return -EINVAL;
}
- if (rte_mempool_get(mempool, &asym_sess_private_data)) {
- CDEV_LOG_ERR(
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
+ asym_sess_private_data = get_asym_session_private_data(sess);
ret = openssl_set_asym_session_parameters(asym_sess_private_data,
xform);
if (ret != 0) {
OPENSSL_LOG(ERR, "failed configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(mempool, asym_sess_private_data);
return ret;
}
- set_asym_session_private_data(sess, dev->driver_id,
- asym_sess_private_data);
-
return 0;
}
@@ -1206,19 +1194,15 @@ static void openssl_reset_asym_session(struct openssl_asym_session *sess)
* so it doesn't leave key material behind
*/
static void
-openssl_pmd_asym_session_clear(struct rte_cryptodev *dev,
+openssl_pmd_asym_session_clear(struct rte_cryptodev *dev __rte_unused,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = get_asym_session_private_data(sess);
/* Zero out the whole structure */
if (sess_priv) {
openssl_reset_asym_session(sess_priv);
memset(sess_priv, 0, sizeof(struct openssl_asym_session));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
}
}
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 09d8761c5f..6576e8c87c 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -491,9 +491,7 @@ qat_asym_build_request(void *in_op,
op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- ctx = (struct qat_asym_session *)
- get_asym_session_private_data(
- op->asym->session, qat_asym_driver_id);
+ ctx = get_asym_session_private_data(op->asym->session);
if (unlikely(ctx == NULL)) {
QAT_LOG(ERR, "Session has not been created for this device");
goto error;
@@ -711,8 +709,7 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
if (rx_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- ctx = (struct qat_asym_session *)get_asym_session_private_data(
- rx_op->asym->session, qat_asym_driver_id);
+ ctx = get_asym_session_private_data(rx_op->asym->session);
qat_asym_collect_response(rx_op, cookie, ctx->xform);
} else if (rx_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
qat_asym_collect_response(rx_op, cookie, rx_op->asym->xform);
@@ -726,61 +723,42 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_cryptodev_asym_session *sess)
{
- int err = 0;
- void *sess_private_data;
struct qat_asym_session *session;
- if (rte_mempool_get(mempool, &sess_private_data)) {
- QAT_LOG(ERR,
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
- session = sess_private_data;
+ session = get_asym_session_private_data(sess);
if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) {
if (xform->modex.exponent.length == 0 ||
xform->modex.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod exp input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) {
if (xform->modinv.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod inv input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
if (xform->rsa.n.length == 0) {
QAT_LOG(ERR, "Invalid rsa input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
|| xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
QAT_LOG(ERR, "Invalid asymmetric crypto xform");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
} else {
QAT_LOG(ERR, "Asymmetric crypto xform not implemented");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
session->xform = xform;
- qat_asym_build_req_tmpl(sess_private_data);
- set_asym_session_private_data(sess, dev->driver_id,
- sess_private_data);
+ qat_asym_build_req_tmpl(session);
return 0;
-error:
- rte_mempool_put(mempool, sess_private_data);
- return err;
}
unsigned int qat_asym_session_get_private_size(
@@ -793,15 +771,9 @@ void
qat_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = get_asym_session_private_data(sess);
struct qat_asym_session *s = (struct qat_asym_session *)sess_priv;
- if (sess_priv) {
+ if (sess_priv)
memset(s, 0, qat_asym_session_get_private_size(dev));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
- }
}
diff --git a/drivers/crypto/qat/qat_asym.h b/drivers/crypto/qat/qat_asym.h
index 308b6b2e0b..c9242a12ca 100644
--- a/drivers/crypto/qat/qat_asym.h
+++ b/drivers/crypto/qat/qat_asym.h
@@ -46,10 +46,9 @@ struct qat_asym_session {
};
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool);
+ struct rte_cryptodev_asym_session *sess);
unsigned int
qat_asym_session_get_private_size(struct rte_cryptodev *dev);
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index b9146f652c..aeaccfa611 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -319,7 +319,6 @@ typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
* @param dev Crypto device pointer
* @param xform Single or chain of crypto xforms
* @param session Pointer to cryptodev's private session structure
- * @param mp Mempool where the private session is allocated
*
* @return
* - Returns 0 if private session structure have been created successfully.
@@ -329,8 +328,7 @@ typedef int (*cryptodev_sym_configure_session_t)(struct rte_cryptodev *dev,
*/
typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *session,
- struct rte_mempool *mp);
+ struct rte_cryptodev_asym_session *session);
/**
* Free driver private session data.
*
@@ -340,12 +338,12 @@ typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_sym_session *sess);
/**
- * Free asymmetric session private data.
+ * Clear asymmetric session private data.
*
* @param dev Crypto device pointer
* @param sess Cryptodev session structure
*/
-typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
+typedef void (*cryptodev_asym_clear_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
/**
* Perform actual crypto processing (encrypt/digest or auth/decrypt)
@@ -429,7 +427,7 @@ struct rte_cryptodev_ops {
/**< Configure asymmetric Crypto session. */
cryptodev_sym_free_session_t sym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_asym_free_session_t asym_session_clear;
+ cryptodev_asym_clear_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
union {
cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
@@ -628,16 +626,9 @@ set_sym_session_private_data(struct rte_cryptodev_sym_session *sess,
}
static inline void *
-get_asym_session_private_data(const struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id) {
- return sess->sess_private_data[driver_id];
-}
-
-static inline void
-set_asym_session_private_data(struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id, void *private_data)
+get_asym_session_private_data(struct rte_cryptodev_asym_session *sess)
{
- sess->sess_private_data[driver_id] = private_data;
+ return sess->sess_private_data;
}
#endif /* _CRYPTODEV_PMD_H_ */
diff --git a/lib/cryptodev/cryptodev_trace_points.c b/lib/cryptodev/cryptodev_trace_points.c
index 5d58951fd5..c5bfe08b79 100644
--- a/lib/cryptodev/cryptodev_trace_points.c
+++ b/lib/cryptodev/cryptodev_trace_points.c
@@ -24,6 +24,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_queue_pair_setup,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_pool_create,
lib.cryptodev.sym.pool.create)
+RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_pool_create,
+ lib.cryptodev.asym.pool.create)
+
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_create,
lib.cryptodev.sym.create)
@@ -39,15 +42,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_free,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_init,
lib.cryptodev.sym.init)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_init,
- lib.cryptodev.asym.init)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_clear,
lib.cryptodev.sym.clear)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_clear,
- lib.cryptodev.asym.clear)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_enqueue_burst,
lib.cryptodev.enq.burst)
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index a40536c5ea..d4cdc56912 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -195,7 +195,7 @@ const char *rte_crypto_asym_op_strings[] = {
};
/**
- * The private data structure stored in the session mempool private data.
+ * The private data structure stored in the sym session mempool private data.
*/
struct rte_cryptodev_sym_session_pool_private_data {
uint16_t nb_drivers;
@@ -204,6 +204,14 @@ struct rte_cryptodev_sym_session_pool_private_data {
/**< session user data will be placed after sess_data */
};
+/**
+ * The private data structure stored in the asym session mempool private data.
+ */
+struct rte_cryptodev_asym_session_pool_private_data {
+ uint16_t max_priv_session_sz;
+ /**< Size of private session data used when creating mempool */
+};
+
int
rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
const char *algo_string)
@@ -1751,47 +1759,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mp)
-{
- struct rte_cryptodev *dev;
- uint8_t index;
- int ret;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (sess == NULL || xforms == NULL || dev == NULL)
- return -EINVAL;
-
- index = dev->driver_id;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure,
- -ENOTSUP);
-
- if (sess->sess_private_data[index] == NULL) {
- ret = dev->dev_ops->asym_session_configure(dev,
- xforms,
- sess, mp);
- if (ret < 0) {
- CDEV_LOG_ERR(
- "dev_id %d failed to configure session details",
- dev_id);
- return ret;
- }
- }
-
- rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
- return 0;
-}
-
struct rte_mempool *
rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t user_data_size,
@@ -1834,6 +1801,53 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
return mp;
}
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id)
+{
+ struct rte_mempool *mp;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ uint32_t obj_sz, obj_sz_aligned;
+ uint8_t dev_id, priv_sz, max_priv_sz = 0;
+
+ for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++)
+ if (rte_cryptodev_is_valid_dev(dev_id)) {
+ priv_sz = rte_cryptodev_asym_get_private_session_size(dev_id);
+ if (priv_sz > max_priv_sz)
+ max_priv_sz = priv_sz;
+ }
+ if (max_priv_sz == 0) {
+ CDEV_LOG_INFO("Could not set max private session size\n");
+ return NULL;
+ }
+
+ obj_sz = rte_cryptodev_asym_get_header_session_size() + max_priv_sz;
+ obj_sz_aligned = RTE_ALIGN_CEIL(obj_sz, RTE_CACHE_LINE_SIZE);
+
+ mp = rte_mempool_create(name, nb_elts, obj_sz_aligned, cache_size,
+ (uint32_t)(sizeof(*pool_priv)),
+ NULL, NULL, NULL, NULL,
+ socket_id, 0);
+ if (mp == NULL) {
+ CDEV_LOG_ERR("%s(name=%s) failed, rte_errno=%d\n",
+ __func__, name, rte_errno);
+ return NULL;
+ }
+
+ pool_priv = rte_mempool_get_priv(mp);
+ if (!pool_priv) {
+ CDEV_LOG_ERR("%s(name=%s) failed to get private data\n",
+ __func__, name);
+ rte_mempool_free(mp);
+ return NULL;
+ }
+ pool_priv->max_priv_session_sz = max_priv_sz;
+
+ rte_cryptodev_trace_asym_session_pool_create(name, nb_elts,
+ cache_size, mp);
+ return mp;
+}
+
static unsigned int
rte_cryptodev_sym_session_data_size(struct rte_cryptodev_sym_session *sess)
{
@@ -1895,19 +1909,44 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
}
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mp)
+rte_cryptodev_asym_session_create(uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp)
{
struct rte_cryptodev_asym_session *sess;
- unsigned int session_size =
+ uint32_t session_priv_data_sz;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ unsigned int session_header_size =
rte_cryptodev_asym_get_header_session_size();
+ struct rte_cryptodev *dev;
+ int ret;
+
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+ return NULL;
+ }
+
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL)
+ return NULL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
return NULL;
}
+ session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
+ dev_id);
+ pool_priv = rte_mempool_get_priv(mp);
+
+ if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
+ CDEV_LOG_DEBUG(
+ "The private session data size used when creating the mempool is smaller than this device's private session data.");
+ return NULL;
+ }
+
/* Verify if provided mempool can hold elements big enough. */
- if (mp->elt_size < session_size) {
+ if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
return NULL;
@@ -1919,12 +1958,25 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp)
return NULL;
}
- /* Clear device session pointer.
- * Include the flag indicating presence of private data
- */
- memset(sess, 0, session_size);
+ sess->driver_id = dev->driver_id;
+ sess->max_priv_data_sz = pool_priv->max_priv_session_sz;
+
+ /* Clear device session pointer.*/
+ memset(sess->sess_private_data, 0, session_priv_data_sz);
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
- rte_cryptodev_trace_asym_session_create(mp, sess);
+ if (sess->sess_private_data[0] == 0) {
+ ret = dev->dev_ops->asym_session_configure(dev, xforms, sess);
+ if (ret < 0) {
+ CDEV_LOG_ERR(
+ "dev_id %d failed to configure session details",
+ dev_id);
+ return NULL;
+ }
+ }
+
+ rte_cryptodev_trace_asym_session_create(dev_id, xforms, mp, sess);
return sess;
}
@@ -1959,30 +2011,6 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_clear(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess)
-{
- struct rte_cryptodev *dev;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (dev == NULL || sess == NULL)
- return -EINVAL;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_clear, -ENOTSUP);
-
- dev->dev_ops->asym_session_clear(dev, sess);
-
- rte_cryptodev_trace_sym_session_clear(dev_id, sess);
- return 0;
-}
-
int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
{
@@ -2007,27 +2035,31 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
}
int
-rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess)
+rte_cryptodev_asym_session_free(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess)
{
- uint8_t i;
- void *sess_priv;
struct rte_mempool *sess_mp;
+ struct rte_cryptodev *dev;
- if (sess == NULL)
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
return -EINVAL;
-
- /* Check that all device private data has been freed */
- for (i = 0; i < nb_drivers; i++) {
- sess_priv = get_asym_session_private_data(sess, i);
- if (sess_priv != NULL)
- return -EBUSY;
}
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL || sess == NULL)
+ return -EINVAL;
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_clear, -ENOTSUP);
+
+ dev->dev_ops->asym_session_clear(dev, sess);
+
/* Return session to mempool */
sess_mp = rte_mempool_from_obj(sess);
rte_mempool_put(sess_mp, sess);
- rte_cryptodev_trace_asym_session_free(sess);
+ rte_cryptodev_trace_asym_session_free(dev_id, sess);
return 0;
}
@@ -2061,12 +2093,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
unsigned int
rte_cryptodev_asym_get_header_session_size(void)
{
- /*
- * Header contains pointers to the private data
- * of all registered drivers, and a flag which
- * indicates presence of private data
- */
- return ((sizeof(void *) * nb_drivers) + sizeof(uint8_t));
+ return sizeof(struct rte_cryptodev_asym_session);
}
unsigned int
@@ -2092,7 +2119,6 @@ unsigned int
rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- unsigned int header_size = sizeof(void *) * nb_drivers;
unsigned int priv_sess_size;
if (!rte_cryptodev_is_valid_dev(dev_id))
@@ -2104,11 +2130,8 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
return 0;
priv_sess_size = (*dev->dev_ops->asym_session_get_size)(dev);
- if (priv_sess_size < header_size)
- return header_size;
return priv_sess_size;
-
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 59ea5a54df..a0ac81eaa0 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -919,9 +919,13 @@ struct rte_cryptodev_sym_session {
};
/** Cryptodev asymmetric crypto session */
-struct rte_cryptodev_asym_session {
- __extension__ void *sess_private_data[0];
- /**< Private asymmetric session material */
+RTE_STD_C11 struct rte_cryptodev_asym_session {
+ uint8_t driver_id;
+ /**< Session driver ID. */
+ uint16_t max_priv_data_sz;
+ /**< Size of private data used when creating mempool */
+ uint8_t padding[5];
+ uint8_t sess_private_data[0];
};
/**
@@ -956,6 +960,29 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t priv_size,
int socket_id);
+/**
+ * Create an asymmetric session mempool.
+ *
+ * @param name
+ * The unique mempool name.
+ * @param nb_elts
+ * The number of elements in the mempool.
+ * @param cache_size
+ * The number of per-lcore cache elements
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in the case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ *
+ * @return
+ * - On success return mempool
+ * - On failure returns NULL
+ */
+__rte_experimental
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id);
+
/**
* Create symmetric crypto session header (generic with no private data)
*
@@ -971,15 +998,19 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
/**
* Create asymmetric crypto session header (generic with no private data)
*
- * @param mempool mempool to allocate asymmetric session
- * objects from
+ * @param dev_id ID of device that we want the session to be used on
+ * @param xforms Asymmetric crypto transform operations to apply on flow
+ * processed with this session
+ * @param mp mempool to allocate asymmetric session
+ * objects from
* @return
* - On success return pointer to asym-session
* - On failure returns NULL
*/
__rte_experimental
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mempool);
+rte_cryptodev_asym_session_create(uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms, struct rte_mempool *mp);
/**
* Frees symmetric crypto session header, after checking that all
@@ -997,20 +1028,20 @@ int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess);
/**
- * Frees asymmetric crypto session header, after checking that all
- * the device private data has been freed, returning it
- * to its original mempool.
+ * Clears and frees asymmetric crypto session header and private data,
+ * returning it to its original mempool.
*
+ * @param dev_id ID of device that uses the asymmetric session.
* @param sess Session header to be freed.
*
* @return
* - 0 if successful.
- * - -EINVAL if session is NULL.
- * - -EBUSY if not all device private data has been freed.
+ * - -EINVAL if device is invalid or session is NULL.
*/
__rte_experimental
int
-rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess);
+rte_cryptodev_asym_session_free(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess);
/**
* Fill out private data for the device id, based on its device type.
@@ -1034,28 +1065,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
struct rte_crypto_sym_xform *xforms,
struct rte_mempool *mempool);
-/**
- * Initialize asymmetric session on a device with specific asymmetric xform
- *
- * @param dev_id ID of device that we want the session to be used on
- * @param sess Session to be set up on a device
- * @param xforms Asymmetric crypto transform operations to apply on flow
- * processed with this session
- * @param mempool Mempool to be used for internal allocation.
- *
- * @return
- * - On success, zero.
- * - -EINVAL if input parameters are invalid.
- * - -ENOTSUP if crypto device does not support the crypto transform.
- * - -ENOMEM if the private session could not be allocated.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mempool);
-
/**
* Frees private data for the device id, based on its device type,
* returning it to its mempool. It is the application's responsibility
@@ -1074,21 +1083,6 @@ int
rte_cryptodev_sym_session_clear(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess);
-/**
- * Frees resources held by asymmetric session during rte_cryptodev_session_init
- *
- * @param dev_id ID of device that uses the asymmetric session.
- * @param sess Asymmetric session setup on device using
- * rte_cryptodev_session_init
- * @return
- * - 0 if successful.
- * - -EINVAL if device is invalid or session is NULL.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_clear(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess);
-
/**
* Get the size of the header session, for all registered drivers excluding
* the user data size.
@@ -1116,7 +1110,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
struct rte_cryptodev_sym_session *sess);
/**
- * Get the size of the asymmetric session header, for all registered drivers.
+ * Get the size of the asymmetric session header.
*
* @return
* Size of the asymmetric header session.
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index d1f4f069a3..f4e1c870df 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -83,10 +83,22 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(sess->user_data_sz);
)
+RTE_TRACE_POINT(
+ rte_cryptodev_trace_asym_session_pool_create,
+ RTE_TRACE_POINT_ARGS(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, void *mempool),
+ rte_trace_point_emit_string(name);
+ rte_trace_point_emit_u32(nb_elts);
+ rte_trace_point_emit_u32(cache_size);
+ rte_trace_point_emit_ptr(mempool);
+)
+
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_create,
- RTE_TRACE_POINT_ARGS(void *mempool,
- struct rte_cryptodev_asym_session *sess),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *xforms,
+ void *mempool, struct rte_cryptodev_asym_session *sess),
+ rte_trace_point_emit_u8(dev_id);
+ rte_trace_point_emit_ptr(xforms);
rte_trace_point_emit_ptr(mempool);
rte_trace_point_emit_ptr(sess);
)
@@ -99,7 +111,9 @@ RTE_TRACE_POINT(
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_free,
- RTE_TRACE_POINT_ARGS(struct rte_cryptodev_asym_session *sess),
+ RTE_TRACE_POINT_ARGS(uint8_t dev_id,
+ struct rte_cryptodev_asym_session *sess),
+ rte_trace_point_emit_u8(dev_id);
rte_trace_point_emit_ptr(sess);
)
@@ -117,17 +131,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(mempool);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_init,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess, void *xforms,
- void *mempool),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
- rte_trace_point_emit_ptr(xforms);
- rte_trace_point_emit_ptr(mempool);
-)
-
RTE_TRACE_POINT(
rte_cryptodev_trace_sym_session_clear,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
@@ -135,13 +138,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(sess);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_clear,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
-)
-
#ifdef __cplusplus
}
#endif
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index c50745fa8c..44d1aff0e2 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -55,10 +55,8 @@ EXPERIMENTAL {
rte_cryptodev_asym_get_header_session_size;
rte_cryptodev_asym_get_private_session_size;
rte_cryptodev_asym_get_xform_enum;
- rte_cryptodev_asym_session_clear;
rte_cryptodev_asym_session_create;
rte_cryptodev_asym_session_free;
- rte_cryptodev_asym_session_init;
rte_cryptodev_asym_xform_capability_check_modlen;
rte_cryptodev_asym_xform_capability_check_optype;
rte_cryptodev_sym_cpu_crypto_process;
@@ -81,9 +79,7 @@ EXPERIMENTAL {
__rte_cryptodev_trace_sym_session_free;
__rte_cryptodev_trace_asym_session_free;
__rte_cryptodev_trace_sym_session_init;
- __rte_cryptodev_trace_asym_session_init;
__rte_cryptodev_trace_sym_session_clear;
- __rte_cryptodev_trace_asym_session_clear;
__rte_cryptodev_trace_dequeue_burst;
__rte_cryptodev_trace_enqueue_burst;
@@ -104,6 +100,9 @@ EXPERIMENTAL {
rte_cryptodev_remove_deq_callback;
rte_cryptodev_remove_enq_callback;
+ # added 22.03
+ rte_cryptodev_asym_session_pool_create;
+ __rte_cryptodev_trace_asym_session_pool_create;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 1%]
* Re: [PATCH] ci: remove outdated default reference tag for ABI
2022-02-09 13:37 4% ` Thomas Monjalon
@ 2022-02-09 14:04 4% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2022-02-09 14:04 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, Michael Santana, Lincoln Lavoie, Owen Hilyard, Aaron Conole
On Wed, Feb 9, 2022 at 2:38 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 08/02/2022 16:08, Aaron Conole:
> > Thomas Monjalon <thomas@monjalon.net> writes:
> >
> > > The variable REF_GIT_TAG is set in the CI configuration
> > > like .travis.yml or .github/workflows/build.yml.
> > > The default value is outdated and probably unused.
> > > It is removed completely to avoid forgetting an update in future.
> > >
> > > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > > ---
> >
> > I think the default was there for labs that run the build script
> > manually. Maybe there are no such users, though. I believe the lab has
> > its own script for doing such ABI checks (but can't remember off the top
> > of my head).
> >
> > CC'd Lincoln and Owen just to confirm.
> >
> > Assuming the UNH/other lab doesn't use this feature of the linux build
> > script,
> >
> > Acked-by: Aaron Conole <aconole@redhat.com>
>
> I could also remove this variable:
> LIBABIGAIL_VERSION=${LIBABIGAIL_VERSION:-libabigail-1.6}
>
> It is confusing to see an old version here,
> while we use version 1.8.
>
> If no objection, I'll send a v2.
+1.
--
David Marchand
^ permalink raw reply [relevance 4%]
* RE: [PATCH v18 8/8] eal: implement functions for mutex management
2022-02-09 2:47 3% ` Narcisa Ana Maria Vasile
@ 2022-02-09 13:57 0% ` Ananyev, Konstantin
2022-02-20 21:56 4% ` Dmitry Kozlyuk
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2022-02-09 13:57 UTC (permalink / raw)
To: Narcisa Ana Maria Vasile
Cc: Richardson, Bruce, david.marchand, dev, dmitry.kozliuk, dmitrym,
khot, navasile, ocardona, Kadam, Pallavi, roretzla, talshn,
thomas
> > Actually, please scrap that comment.
> > Obviously it wouldn't work for static variables,
> > and doesn't make much sense.
> > Though a few thoughts remain:
> > for posix we probably don't need an indirection and
> > rte_thread_mutex can be just a typedef of pthread_mutex_t.
> > also for posix we don't need RTE_INIT constructor for each
> > static mutex initialization.
> > Something like:
> > #define RTE_STATIC_INITIALIZED_MUTEX(mx) \
> > rte_thread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER
> > should work, I think.
> > Konstantin
>
> Thank you for reviewing, Konstantin!
> Some context for the current representation of the mutex
> can be found in v9, patch 7/10 of this patchset.
>
> Originally we typedef'ed pthread_mutex_t on POSIX, just
> like you are suggesting here.
> However, on Windows there's no static initializer similar to the pthread
> one. Still, we want ABI compatibility and the same thread behavior between
> platforms. The most elegant solution we found was the current representation,
> as suggested by Dmitry K.
Yes, I agree the static initializer is a problem on Windows.
But why can't we have different struct typedefs for the mutex
on POSIX and Windows platforms?
On POSIX it would be:
typedef pthread_mutex_t rte_thread_mutex_t;
#define RTE_STATIC_INITIALIZED_MUTEX(mx) rte_thread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER
On Windows it could be what Dmitry suggested:
typedef struct rte_thread_mutex {
void *mutex_id; /**< mutex identifier */
} rte_thread_mutex_t;
#define RTE_STATIC_INITIALIZED_MUTEX(private_lock) \
rte_thread_mutex_t private_lock; \
RTE_INIT(__rte_ ## private_lock ## _init)\
{\
RTE_VERIFY(rte_thread_mutex_init(&private_lock) == 0);\
}
The API would remain the same, though it would be different underneath.
Yes, on Windows rte_thread_mutex still wouldn't work for multi-process (MP) use,
but that's the same as with the current design.
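Put together, the split could look like this sketch (the RTE_EXEC_ENV_WINDOWS
guard is an assumption of mine, not taken from the series):

#ifdef RTE_EXEC_ENV_WINDOWS
typedef struct rte_thread_mutex {
	void *mutex_id; /**< mutex identifier */
} rte_thread_mutex_t;
#define RTE_STATIC_INITIALIZED_MUTEX(private_lock) \
	rte_thread_mutex_t private_lock; \
	RTE_INIT(__rte_ ## private_lock ## _init) \
	{ \
		RTE_VERIFY(rte_thread_mutex_init(&private_lock) == 0); \
	}
#else
typedef pthread_mutex_t rte_thread_mutex_t;
#define RTE_STATIC_INITIALIZED_MUTEX(mx) \
	rte_thread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER
#endif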
> I will address your other comments on the other thread.
>
> Link to v9: http://patchwork.dpdk.org/project/dpdk/patch/1622850274-6946-8-git-send-email-navasile@linux.microsoft.com/
^ permalink raw reply [relevance 0%]
* Re: [PATCH] ci: remove outdated default reference tag for ABI
2022-02-08 15:08 7% ` Aaron Conole
2022-02-08 22:03 8% ` Brandon Lo
@ 2022-02-09 13:37 4% ` Thomas Monjalon
2022-02-09 14:04 4% ` David Marchand
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-02-09 13:37 UTC (permalink / raw)
To: dev
Cc: david.marchand, Michael Santana, Lincoln Lavoie, Owen Hilyard,
Aaron Conole
08/02/2022 16:08, Aaron Conole:
> Thomas Monjalon <thomas@monjalon.net> writes:
>
> > The variable REF_GIT_TAG is set in the CI configuration
> > like .travis.yml or .github/workflows/build.yml.
> > The default value is outdated and probably unused.
> > It is removed completely to avoid forgetting an update in future.
> >
> > Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
>
> I think the default was there for labs that run the build script
> manually. Maybe there are no such users, though. I believe the lab has
> its own script for doing such ABI checks (but can't remember off the top
> of my head).
>
> CC'd Lincoln and Owen just to confirm.
>
> Assuming the UNH/other lab doesn't use this feature of the linux build
> script,
>
> Acked-by: Aaron Conole <aconole@redhat.com>
I could also remove this variable:
LIBABIGAIL_VERSION=${LIBABIGAIL_VERSION:-libabigail-1.6}
It is confusing to see an old version here,
while we use version 1.8.
If no objection, I'll send a v2.
^ permalink raw reply [relevance 4%]
* Re: [PATCH v5 1/2] eal: add API for bus close
2022-02-09 11:04 3% ` David Marchand
@ 2022-02-09 13:20 3% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2022-02-09 13:20 UTC (permalink / raw)
To: Rohit Raj
Cc: dev, Bruce Richardson, Ray Kinsella, Dmitry Kozlyuk,
Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, dev,
Nipun Gupta, Sachin Saxena, Hemant Agrawal, David Marchand
09/02/2022 12:04, David Marchand:
> On Mon, Jan 10, 2022 at 6:26 AM <rohit.raj@nxp.com> wrote:
> >
> > From: Rohit Raj <rohit.raj@nxp.com>
> >
> > As per the current code we have an API for bus probe, but the
> > bus close API is missing. This breaks multi-process
> > scenarios, as objects are not cleaned up when terminating
> > secondary processes.
>
> After an application crash, how does this bus reset the associated resources?
>
> > This patch adds a new API rte_bus_close() for cleanup of
> > bus objects which were acquired during probe.
>
> The patch in its current form breaks the ABI on the rte_bus object.
> This can be seen in GHA, or calling DPDK_ABI_REF_VERSION=v21.11
> ./devtools/test-meson-builds.sh.
[...]
> > +/* Close all devices of all buses */
> > +int
> > +rte_bus_close(void)
> > +{
> > + int ret;
> > + struct rte_bus *bus, *vbus = NULL;
> > +
> > + TAILQ_FOREACH(bus, &rte_bus_list, next) {
> > + if (!strcmp(bus->name, "vdev")) {
Please do an explicit comparison "== 0".
> > + vbus = bus;
> > + continue;
> > + }
> > +
> > + if (bus->close) {
Please do an explicit comparison with "!= NULL".
We can also completely remove this check and implement the callback
in all buses. It should loop over all remaining devices and remove them.
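For example, a bus-specific close callback could be as small as this
sketch (the foo_* bus, device list and removal helper are hypothetical;
RTE_TAILQ_FOREACH_SAFE comes from rte_tailq.h):

static int
foo_bus_close(void)
{
	struct foo_device *dev, *tmp;
	int ret = 0;

	/* Remove every device still registered on this bus. */
	RTE_TAILQ_FOREACH_SAFE(dev, &foo_bus.device_list, next, tmp) {
		if (foo_bus_remove_device(dev) != 0)
			ret = -1;
	}
	return ret;
}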
> > + ret = bus->close();
> > + if (ret)
> > + RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
> > + bus->name);
> > + }
> > + }
> > +
> > + if (vbus && vbus->close) {
> > + ret = vbus->close();
> > + if (ret)
> > + RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
> > + vbus->name);
> > + }
>
> The vdev bus is special in that some drivers can reference objects
> from other buses (see f4ce209a8ce5 ("eal: postpone vdev
> initialization") and da76cc02342b ("eal: probe new virtual bus after
> other bus devices")).
> For this reason, I would expect that the vdev bus is closed before the
> other buses.
Yes, good catch.
We don't have to expose this function as an API.
This can be an internal function called only in rte_eal_cleanup().
Instead, it would be more useful to have a public function
to close a single bus by its name.
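A sketch of such a function, reusing the existing rte_bus_find_by_name()
and the close callback proposed in this patch:

int
rte_bus_close_name(const char *name)
{
	struct rte_bus *bus;

	bus = rte_bus_find_by_name(name);
	if (bus == NULL)
		return -ENOENT;
	if (bus->close == NULL)
		return 0; /* nothing to do for this bus */
	return bus->close();
}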
> > @@ -263,6 +280,7 @@ struct rte_bus {
> > const char *name; /**< Name of the bus */
> > rte_bus_scan_t scan; /**< Scan for devices attached to bus */
> > rte_bus_probe_t probe; /**< Probe devices on bus */
> > + rte_bus_close_t close; /**< Close devices on bus */
As David said, it is breaking the ABI.
[...]
> > @@ -1362,6 +1362,14 @@ rte_eal_cleanup(void)
> >
> > if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> > rte_memseg_walk(mark_freeable, NULL);
> > +
> > + /* Close all the buses and devices/drivers on them */
> > + if (rte_bus_close()) {
> > + rte_eal_init_alert("Cannot close devices");
>
> You can't call rte_eal_*init*_alert in rte_eal_cleanup.
>
> There is not much to do if the bus close fails, I'd rather let the
> cleanup continue.
+1, just log and save the error code for return at the end.
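Something like this rough sketch, with hypothetical placement in
rte_eal_cleanup():

	int ret = 0;

	/* Log the failure, remember it, and keep cleaning up. */
	if (rte_bus_close() != 0) {
		RTE_LOG(ERR, EAL, "Cannot close buses\n");
		ret = -1;
	}
	rte_service_finalize();
	rte_mp_channel_cleanup();
	/* ... rest of the cleanup ... */
	return ret;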
^ permalink raw reply [relevance 3%]
* Re: [PATCH v5 1/2] eal: add API for bus close
2022-01-10 5:26 3% ` [PATCH v5 1/2] " rohit.raj
@ 2022-02-09 11:04 3% ` David Marchand
2022-02-09 13:20 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2022-02-09 11:04 UTC (permalink / raw)
To: Rohit Raj
Cc: Bruce Richardson, Ray Kinsella, Dmitry Kozlyuk,
Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam, dev,
Nipun Gupta, Sachin Saxena, Hemant Agrawal
On Mon, Jan 10, 2022 at 6:26 AM <rohit.raj@nxp.com> wrote:
>
> From: Rohit Raj <rohit.raj@nxp.com>
>
> As per the current code we have an API for bus probe, but the
> bus close API is missing. This breaks multi-process
> scenarios, as objects are not cleaned up when terminating
> secondary processes.
After an application crash, how does this bus reset the associated resources?
>
> This patch adds a new API rte_bus_close() for cleanup of
> bus objects which were acquired during probe.
The patch in its current form breaks the ABI on the rte_bus object.
This can be seen in GHA, or calling DPDK_ABI_REF_VERSION=v21.11
./devtools/test-meson-builds.sh.
[snip]
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index 6d99d1eaa9..7417606a2a 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -55,6 +55,11 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> + * **Added support to close bus.**
> +
> + Added capability to allow a user to do cleanup of bus objects which
> + were acquired during bus probe.
> +
Wrongly indented.
>
> Removed Items
> -------------
> @@ -84,6 +89,9 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> + * eal: Added new API ``rte_bus_close`` to perform cleanup bus objects which
> + were acquired during bus probe.
> +
>
> ABI Changes
> -----------
> diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c
> index baa5b532af..2c3c0a90d2 100644
> --- a/lib/eal/common/eal_common_bus.c
> +++ b/lib/eal/common/eal_common_bus.c
> @@ -1,5 +1,5 @@
> /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright 2016 NXP
> + * Copyright 2016,2022 NXP
> */
>
> #include <stdio.h>
> @@ -85,6 +85,37 @@ rte_bus_probe(void)
> return 0;
> }
>
> +/* Close all devices of all buses */
> +int
> +rte_bus_close(void)
> +{
> + int ret;
> + struct rte_bus *bus, *vbus = NULL;
> +
> + TAILQ_FOREACH(bus, &rte_bus_list, next) {
> + if (!strcmp(bus->name, "vdev")) {
> + vbus = bus;
> + continue;
> + }
> +
> + if (bus->close) {
> + ret = bus->close();
> + if (ret)
> + RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
> + bus->name);
> + }
> + }
> +
> + if (vbus && vbus->close) {
> + ret = vbus->close();
> + if (ret)
> + RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
> + vbus->name);
> + }
The vdev bus is special in that some drivers can reference objects
from other buses (see f4ce209a8ce5 ("eal: postpone vdev
initialization") and da76cc02342b ("eal: probe new virtual bus after
other bus devices")).
For this reason, I would expect that the vdev bus is closed before the
other buses.
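For illustration, a sketch of the reversed ordering, building on the
patch's own code and the existing rte_bus_find_by_name():

	/* Close the vdev bus first: its drivers may reference
	 * devices owned by other buses.
	 */
	vbus = rte_bus_find_by_name("vdev");
	if (vbus != NULL && vbus->close != NULL && vbus->close() != 0)
		RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n", vbus->name);

	TAILQ_FOREACH(bus, &rte_bus_list, next) {
		if (bus == vbus)
			continue;
		if (bus->close != NULL && bus->close() != 0)
			RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
				bus->name);
	}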
> +
> + return 0;
> +}
> +
> /* Dump information of a single bus */
> static int
> bus_dump_one(FILE *f, struct rte_bus *bus)
> diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
> index a1cd2462db..87d70c6898 100644
> --- a/lib/eal/freebsd/eal.c
> +++ b/lib/eal/freebsd/eal.c
> @@ -984,6 +984,7 @@ rte_eal_cleanup(void)
> {
> struct internal_config *internal_conf =
> eal_get_internal_configuration();
> + rte_bus_close();
> rte_service_finalize();
> rte_mp_channel_cleanup();
> /* after this point, any DPDK pointers will become dangling */
> diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
> index bbbb6efd28..c6211bbd95 100644
> --- a/lib/eal/include/rte_bus.h
> +++ b/lib/eal/include/rte_bus.h
> @@ -1,5 +1,5 @@
> /* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright 2016 NXP
> + * Copyright 2016,2022 NXP
> */
>
> #ifndef _RTE_BUS_H_
> @@ -66,6 +66,23 @@ typedef int (*rte_bus_scan_t)(void);
> */
> typedef int (*rte_bus_probe_t)(void);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice
> + *
> + * Implementation specific close function which is responsible for resetting all
> + * detected devices on the bus to a default state, closing UIO nodes or VFIO
> + * groups and also freeing any memory allocated during rte_bus_probe like
> + * private resources for device list.
> + *
> + * This is called while iterating over each registered bus.
> + *
> + * @return
> + * 0 for successful close
> + * !0 for any error while closing
> + */
> +typedef int (*rte_bus_close_t)(void);
> +
> /**
> * Device iterator to find a device on a bus.
> *
> @@ -263,6 +280,7 @@ struct rte_bus {
> const char *name; /**< Name of the bus */
> rte_bus_scan_t scan; /**< Scan for devices attached to bus */
> rte_bus_probe_t probe; /**< Probe devices on bus */
> + rte_bus_close_t close; /**< Close devices on bus */
> rte_bus_find_device_t find_device; /**< Find a device on the bus */
> rte_bus_plug_t plug; /**< Probe single device for drivers */
> rte_bus_unplug_t unplug; /**< Remove single device from driver */
> @@ -317,6 +335,16 @@ int rte_bus_scan(void);
> */
> int rte_bus_probe(void);
>
> +/**
> + * For each device on the buses, call the device specific close.
> + *
> + * @return
> + * 0 for successful close
> + * !0 otherwise
> + */
> +__rte_experimental
> +int rte_bus_close(void);
> +
> /**
> * Dump information of all the buses registered with EAL.
> *
> diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
> index 60b4924838..5c60131e46 100644
> --- a/lib/eal/linux/eal.c
> +++ b/lib/eal/linux/eal.c
> @@ -1362,6 +1362,14 @@ rte_eal_cleanup(void)
>
> if (rte_eal_process_type() == RTE_PROC_PRIMARY)
> rte_memseg_walk(mark_freeable, NULL);
> +
> + /* Close all the buses and devices/drivers on them */
> + if (rte_bus_close()) {
> + rte_eal_init_alert("Cannot close devices");
You can't call rte_eal_*init*_alert in rte_eal_cleanup.
There is not much to do if the bus close fails, I'd rather let the
cleanup continue.
> + rte_errno = ENOTSUP;
> + return -1;
> + }
> +
> rte_service_finalize();
> rte_mp_channel_cleanup();
> /* after this point, any DPDK pointers will become dangling */
> diff --git a/lib/eal/version.map b/lib/eal/version.map
> index ab28c22791..39882dbbd5 100644
> --- a/lib/eal/version.map
> +++ b/lib/eal/version.map
> @@ -420,6 +420,9 @@ EXPERIMENTAL {
> rte_intr_instance_free;
> rte_intr_type_get;
> rte_intr_type_set;
> +
> + # added in 22.03
> + rte_bus_close;
> };
>
> INTERNAL {
> diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
> index 67db7f099a..5915ab6291 100644
> --- a/lib/eal/windows/eal.c
> +++ b/lib/eal/windows/eal.c
> @@ -260,6 +260,7 @@ rte_eal_cleanup(void)
> struct internal_config *internal_conf =
> eal_get_internal_configuration();
>
> + rte_bus_close();
> eal_intr_thread_cancel();
> eal_mem_virt2iova_cleanup();
> /* after this point, any DPDK pointers will become dangling */
> --
> 2.17.1
>
--
David Marchand
^ permalink raw reply [relevance 3%]
* Re: [PATCH v18 8/8] eal: implement functions for mutex management
@ 2022-02-09 2:47 3% ` Narcisa Ana Maria Vasile
2022-02-09 13:57 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Narcisa Ana Maria Vasile @ 2022-02-09 2:47 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: Richardson, Bruce, david.marchand, dev, dmitry.kozliuk, dmitrym,
khot, navasile, ocardona, Kadam, Pallavi, roretzla, talshn,
thomas
On Tue, Feb 08, 2022 at 02:21:49AM +0000, Ananyev, Konstantin wrote:
>
>
> > > +
> > > +/**
> > > + * Thread mutex representation.
> >
>
> Actually, please scrap that comment.
> Obviously it wouldn't work for static variables,
> and doesn't make much sense.
> Though a few thoughts remain:
> for POSIX we probably don't need an indirection, and
> rte_thread_mutex can be just a typedef of pthread_mutex_t.
> Also, for POSIX we don't need an RTE_INIT constructor for each
> static mutex initialization.
> Something like:
> #define RTE_STATIC_INITIALIZED_MUTEX(mx) \
> rte_thread_mutex_t mx = PTHREAD_MUTEX_INITIALIZER
> should work, I think.
> Konstantin
Thank you for reviewing, Konstantin!
Some context for the current representation of mutex
can be found in v9, patch 7/10 of this patchset.
Originally we typedef'ed pthread_mutex_t on POSIX, just
as you are suggesting here.
However, on Windows there's no static initializer similar to the pthread
one. Still, we want ABI compatibility and the same thread behavior between
platforms. The most elegant solution we found was the current representation,
as suggested by Dmitry K.
I will address your other comments on the other thread.
Link to v9: http://patchwork.dpdk.org/project/dpdk/patch/1622850274-6946-8-git-send-email-navasile@linux.microsoft.com/
>
>
^ permalink raw reply [relevance 3%]
* [PATCH v6 0/3] ethdev: introduce IP reassembly offload
2022-02-08 20:11 4% ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
@ 2022-02-08 22:20 4% ` Akhil Goyal
2022-02-10 8:54 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-08 22:20 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly RX offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated as a standalone offload without IPsec, if
suitable hardware is available to test it.
The patchset is tested on the cnxk platform. The driver implementation
and a test app are added as separate patchsets.[2][3]
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
[2]: APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
[3]: PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
Newer versions of the app and PMD will be sent once the library changes
are acked.
Changes in v6:
- fix warnings.
Changes in v5:
- updated Doxygen comments.(Ferruh)
- Added release notes.
- updated libabigail suppress rules.(David)
Changes in v4:
- removed rte_eth_dev_info update for capability (Ferruh)
- removed Rx offload flag (Ferruh)
- added capability_get() (Ferruh)
- moved dynfield and dynflag namedefines in rte_mbuf_dyn.h (Ferruh)
changes in v3:
- incorporated comments from Andrew and Stephen Hemminger
changes in v2:
- added ABI ignore exceptions for modifications in reserved fields.
Added a crude way to work around the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (3):
ethdev: introduce IP reassembly offload
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 5 +
doc/guides/nics/features.rst | 13 +++
doc/guides/nics/features/default.ini | 1 +
doc/guides/rel_notes/release_22_03.rst | 6 ++
lib/ethdev/ethdev_driver.h | 63 +++++++++++++
lib/ethdev/rte_ethdev.c | 124 ++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 126 +++++++++++++++++++++++++
lib/ethdev/version.map | 6 ++
lib/mbuf/rte_mbuf_dyn.h | 9 ++
lib/security/rte_security.h | 15 ++-
10 files changed, 367 insertions(+), 1 deletion(-)
--
2.25.1
^ permalink raw reply [relevance 4%]
* Re: [PATCH] ci: remove outdated default reference tag for ABI
2022-02-08 15:08 7% ` Aaron Conole
@ 2022-02-08 22:03 8% ` Brandon Lo
2022-02-09 13:37 4% ` Thomas Monjalon
1 sibling, 0 replies; 200+ results
From: Brandon Lo @ 2022-02-08 22:03 UTC (permalink / raw)
To: Aaron Conole
Cc: Thomas Monjalon, dev, David Marchand, Michael Santana,
Lincoln Lavoie, Owen Hilyard
On Tue, Feb 8, 2022 at 10:09 AM Aaron Conole <aconole@redhat.com> wrote:
> I think the default was there for labs that run the build script
> manually. Maybe there are no such users, though. I believe the lab has
> its own script for doing such ABI checks (but can't remember off the top
> of my head).
Just confirming: the UNH lab does use our own separate script (which
calls the devtools check-abi.sh and gen-abi.sh scripts).
--
Brandon Lo
UNH InterOperability Laboratory
21 Madbury Rd, Suite 100, Durham, NH 03824
blo@iol.unh.edu
www.iol.unh.edu
^ permalink raw reply [relevance 8%]
* [PATCH v5 0/3] ethdev: introduce IP reassembly offload
2022-02-04 22:13 4% ` [PATCH v4 0/3] " Akhil Goyal
@ 2022-02-08 20:11 4% ` Akhil Goyal
2022-02-08 22:20 4% ` [PATCH v6 " Akhil Goyal
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-08 20:11 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly RX offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated as a standalone offload without IPsec, if
suitable hardware is available to test it.
The patchset is tested on the cnxk platform. The driver implementation
and a test app are added as separate patchsets.[2][3]
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
[2]: APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
[3]: PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
Newer versions of the app and PMD will be sent once the library changes
are acked.
Changes in v5:
- updated Doxygen comments.(Ferruh)
- Added release notes.
- updated libabigail suppress rules.(David)
Changes in v4:
- removed rte_eth_dev_info update for capability (Ferruh)
- removed Rx offload flag (Ferruh)
- added capability_get() (Ferruh)
- moved dynfield and dynflag namedefines in rte_mbuf_dyn.h (Ferruh)
changes in v3:
- incorporated comments from Andrew and Stephen Hemminger
changes in v2:
- added ABI ignore exceptions for modifications in reserved fields.
Added a crude way to work around the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (3):
ethdev: introduce IP reassembly offload
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 5 +
doc/guides/nics/features.rst | 13 +++
doc/guides/nics/features/default.ini | 1 +
doc/guides/rel_notes/release_22_03.rst | 6 ++
lib/ethdev/ethdev_driver.h | 63 +++++++++++++
lib/ethdev/rte_ethdev.c | 124 ++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 126 +++++++++++++++++++++++++
lib/ethdev/version.map | 6 ++
lib/mbuf/rte_mbuf_dyn.h | 9 ++
lib/security/rte_security.h | 15 ++-
10 files changed, 367 insertions(+), 1 deletion(-)
--
2.25.1
^ permalink raw reply [relevance 4%]
* RE: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 19:55 0% ` David Marchand
@ 2022-02-08 20:01 0% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2022-02-08 20:01 UTC (permalink / raw)
To: David Marchand, Dodji Seketeli
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella
Hi David,
> On Tue, Feb 8, 2022 at 2:19 PM Akhil Goyal <gakhil@marvell.com> wrote:
> > > > > I tried this in the first place but abi check was complaining in other
> structures
> > > > which included
> > > > > rte_security_ipsec_sa_options. So I had to add suppression for those as
> well.
> > > > > Can you try at your end?
> > > >
> > > > I tried before suggesting, and it works with a single rule on this structure.
> > > >
> > > > I'm using libabigail current master, which version are you using so I
> > > > can try with the same?
> > > >
> > > I am currently using version 1.6. I will try with the latest version.
> > > $ abidiff --version
> > > abidiff: 1.6.0
> > >
> > It seems the latest version 2.0 is not compatible with Ubuntu 20.04.
> > It does not compile.
>
> I am using the HEAD of the libabigail master branch, so maybe something
> got fixed between 2.0 and the current master.
>
>
> > Can you check with 1.6.0 version?
>
> I tried 1.6 in GHA (Ubuntu 18.04), and I can reproduce the warnings
> you reported.
>
> But in the end, we use 1.8 in GHA:
> https://git.dpdk.org/dpdk/tree/.github/workflows/build.yml#n23
>
> The simplest rule (on rte_security_ipsec_sa_options only) passes fine
> with this version of libabigail:
> https://github.com/david-marchand/dpdk/runs/5109221298?check_suite_focus=true
Thanks for trying it out. I will remove the last two rules and send the next version.
^ permalink raw reply [relevance 0%]
* Re: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 13:19 0% ` Akhil Goyal
@ 2022-02-08 19:55 0% ` David Marchand
2022-02-08 20:01 0% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2022-02-08 19:55 UTC (permalink / raw)
To: Akhil Goyal, Dodji Seketeli
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella
On Tue, Feb 8, 2022 at 2:19 PM Akhil Goyal <gakhil@marvell.com> wrote:
> > > > I tried this in the first place, but the ABI check was complaining
> > > > about other structures which include rte_security_ipsec_sa_options.
> > > > So I had to add suppressions for those as well.
> > > > Can you try at your end?
> > >
> > > I tried before suggesting, and it works with a single rule on this structure.
> > >
> > > I'm using the current libabigail master. Which version are you using,
> > > so I can try with the same?
> > >
> > I am currently using version 1.6. I will try with the latest version.
> > $ abidiff --version
> > abidiff: 1.6.0
> >
> It seems the latest version 2.0 is not compatible with Ubuntu 20.04.
> It does not compile.
I am using the HEAD of the libabigail master branch, so maybe something
got fixed between 2.0 and the current master.
> Can you check with 1.6.0 version?
I tried 1.6 in GHA (Ubuntu 18.04), and I can reproduce the warnings
you reported.
But in the end, we use 1.8 in GHA:
https://git.dpdk.org/dpdk/tree/.github/workflows/build.yml#n23
The simplest rule (on rte_security_ipsec_sa_options only) passes fine
with this version of libabigail:
https://github.com/david-marchand/dpdk/runs/5109221298?check_suite_focus=true
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [PATCH] ci: remove outdated default reference tag for ABI
2022-02-08 13:47 4% [PATCH] ci: remove outdated default reference tag for ABI Thomas Monjalon
@ 2022-02-08 15:08 7% ` Aaron Conole
2022-02-08 22:03 8% ` Brandon Lo
2022-02-09 13:37 4% ` Thomas Monjalon
2022-03-01 9:56 9% ` [PATCH v2] ci: remove outdated default versions for ABI check Thomas Monjalon
1 sibling, 2 replies; 200+ results
From: Aaron Conole @ 2022-02-08 15:08 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, david.marchand, Michael Santana, Lincoln Lavoie, Owen Hilyard
Thomas Monjalon <thomas@monjalon.net> writes:
> The variable REF_GIT_TAG is set in the CI configuration
> like .travis.yml or .github/workflows/build.yml.
> The default value is outdated and probably unused.
> It is removed completely to avoid forgetting an update in future.
>
> Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
> ---
I think the default was there for labs that run the build script
manually. Maybe there are no such users, though. I believe the lab has
its own script for doing such ABI checks (but can't remember off the top
of my head).
CC'd Lincoln and Owen just to confirm.
Assuming the UNH/other lab doesn't use this feature of the linux build
script,
Acked-by: Aaron Conole <aconole@redhat.com>
^ permalink raw reply [relevance 7%]
* RE: [EXT] [PATCH v2 4/8] crypto/dpaa2_sec: support AES-GMAC
2022-01-21 11:29 3% ` [EXT] " Akhil Goyal
@ 2022-02-08 14:15 0% ` Gagandeep Singh
0 siblings, 0 replies; 200+ results
From: Gagandeep Singh @ 2022-02-08 14:15 UTC (permalink / raw)
To: Akhil Goyal, dev; +Cc: Akhil Goyal
> -----Original Message-----
> From: Akhil Goyal <gakhil@marvell.com>
> Sent: Friday, January 21, 2022 4:59 PM
> To: Gagandeep Singh <G.Singh@nxp.com>; dev@dpdk.org
> Cc: Akhil Goyal <akhil.goyal@nxp.com>
> Subject: RE: [EXT] [PATCH v2 4/8] crypto/dpaa2_sec: support AES-GMAC
>
> > From: Akhil Goyal <akhil.goyal@nxp.com>
> >
> > This patch supports AES_GMAC algorithm for DPAA2
> > driver.
> >
> > Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> > Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> > ---
> > doc/guides/cryptodevs/features/dpaa2_sec.ini | 1 +
> > drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 14 ++++++++-
> > drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 30 ++++++++++++++++++++
> > lib/cryptodev/rte_crypto_sym.h | 4 ++-
> > 4 files changed, 47 insertions(+), 2 deletions(-)
>
> This patch should be split in two - the cryptodev change should be a separate patch.
>
> > diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
> > index daa090b978..4644fa3e25 100644
> > --- a/lib/cryptodev/rte_crypto_sym.h
> > +++ b/lib/cryptodev/rte_crypto_sym.h
> > @@ -467,8 +467,10 @@ enum rte_crypto_aead_algorithm {
> > /**< AES algorithm in CCM mode. */
> > RTE_CRYPTO_AEAD_AES_GCM,
> > /**< AES algorithm in GCM mode. */
> > - RTE_CRYPTO_AEAD_CHACHA20_POLY1305
> > + RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
> > /**< Chacha20 cipher with poly1305 authenticator */
> > + RTE_CRYPTO_AEAD_AES_GMAC
> > + /**< AES algorithm in GMAC mode. */
> > };
> AES-GMAC is also defined as an AUTH algo. It may be removed but that would be
> an ABI break.
> Is it not possible to use AES-GMAC as an auth algo?
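For reference, AES-GMAC can already be configured through the existing
auth transform path. A sketch using the definitions from
rte_crypto_sym.h (key data, IV offset and lengths are placeholders):

	struct rte_crypto_sym_xform xform = {
		.type = RTE_CRYPTO_SYM_XFORM_AUTH,
		.auth = {
			.op = RTE_CRYPTO_AUTH_OP_GENERATE,
			.algo = RTE_CRYPTO_AUTH_AES_GMAC,
			.key = { .data = key_data, .length = 16 },
			.iv = { .offset = IV_OFFSET, .length = 12 },
			.digest_length = 16,
		},
	};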
There are some issues in this patch. I will send it later.
^ permalink raw reply [relevance 0%]
* [PATCH] ci: remove outdated default reference tag for ABI
@ 2022-02-08 13:47 4% Thomas Monjalon
2022-02-08 15:08 7% ` Aaron Conole
2022-03-01 9:56 9% ` [PATCH v2] ci: remove outdated default versions for ABI check Thomas Monjalon
0 siblings, 2 replies; 200+ results
From: Thomas Monjalon @ 2022-02-08 13:47 UTC (permalink / raw)
To: dev; +Cc: david.marchand, Aaron Conole, Michael Santana
The variable REF_GIT_TAG is set in the CI configuration
like .travis.yml or .github/workflows/build.yml.
The default value is outdated and probably unused.
It is removed completely to avoid forgetting an update in future.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
---
.ci/linux-build.sh | 1 -
1 file changed, 1 deletion(-)
diff --git a/.ci/linux-build.sh b/.ci/linux-build.sh
index c10c1a8ab5..25a7cae120 100755
--- a/.ci/linux-build.sh
+++ b/.ci/linux-build.sh
@@ -119,7 +119,6 @@ if [ "$ABI_CHECKS" = "true" ]; then
export PATH=$(pwd)/libabigail/bin:$PATH
REF_GIT_REPO=${REF_GIT_REPO:-https://dpdk.org/git/dpdk}
- REF_GIT_TAG=${REF_GIT_TAG:-v19.11}
if [ "$(cat reference/VERSION 2>/dev/null)" != "$REF_GIT_TAG" ]; then
rm -rf reference
--
2.34.1
^ permalink raw reply [relevance 4%]
* RE: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 10:45 0% ` Akhil Goyal
@ 2022-02-08 13:19 0% ` Akhil Goyal
2022-02-08 19:55 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-08 13:19 UTC (permalink / raw)
To: David Marchand
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
Hi David,
> > > > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > > > index 4b676f317d..3bd39042e8 100644
> > > > > --- a/devtools/libabigail.abignore
> > > > > +++ b/devtools/libabigail.abignore
> > > > > @@ -11,3 +11,17 @@
> > > > > ; Ignore generated PMD information strings
> > > > > [suppress_variable]
> > > > > name_regexp = _pmd_info$
> > > > > +
> > > > > +; Ignore fields inserted in place of reserved_opts of
> > > > rte_security_ipsec_sa_options
> > > > > +[suppress_type]
> > > > > + name = rte_ipsec_sa_prm
> > > > > + name = rte_security_ipsec_sa_options
> > > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > end}
> > > > > +
> > > > > +[suppress_type]
> > > > > + name = rte_security_capability
> > > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > > > (offset_of(reserved_opts) + 18)}
> > > > > +
> > > > > +[suppress_type]
> > > > > + name = rte_security_session_conf
> > > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > > > (offset_of(reserved_opts) + 18)}
> > > >
> > > > Now, about the suppression rule, I don't understand the intention of
> > > > those 3 rules.
> > > >
> > > > I would simply suppress modifications (after reserved_opts) to the
> > > > rte_security_ipsec_sa_options struct.
> > > > Like:
> > > >
> > > > ; Ignore fields inserted in place of reserved_opts of
> > > > rte_security_ipsec_sa_options
> > > > [suppress_type]
> > > > name = rte_security_ipsec_sa_options
> > > > has_data_member_inserted_between = {offset_of(reserved_opts),
> end}
> > > >
> > > I tried this in the first place, but the ABI check was complaining
> > > about other structures which include rte_security_ipsec_sa_options.
> > > So I had to add suppressions for those as well.
> > > Can you try at your end?
> >
> > I tried before suggesting, and it works with a single rule on this structure.
> >
> > I'm using the current libabigail master. Which version are you using,
> > so I can try with the same?
> >
> I am currently using version 1.6. I will try with the latest version.
> $ abidiff --version
> abidiff: 1.6.0
>
It seems the latest version 2.0 is not compatible with Ubuntu 20.04.
It does not compile.
Can you check with 1.6.0 version?
^ permalink raw reply [relevance 0%]
* RE: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 9:27 0% ` David Marchand
@ 2022-02-08 10:45 0% ` Akhil Goyal
2022-02-08 13:19 0% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-08 10:45 UTC (permalink / raw)
To: David Marchand
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
> > > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > > index 4b676f317d..3bd39042e8 100644
> > > > --- a/devtools/libabigail.abignore
> > > > +++ b/devtools/libabigail.abignore
> > > > @@ -11,3 +11,17 @@
> > > > ; Ignore generated PMD information strings
> > > > [suppress_variable]
> > > > name_regexp = _pmd_info$
> > > > +
> > > > +; Ignore fields inserted in place of reserved_opts of
> > > rte_security_ipsec_sa_options
> > > > +[suppress_type]
> > > > + name = rte_ipsec_sa_prm
> > > > + name = rte_security_ipsec_sa_options
> > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> end}
> > > > +
> > > > +[suppress_type]
> > > > + name = rte_security_capability
> > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > > (offset_of(reserved_opts) + 18)}
> > > > +
> > > > +[suppress_type]
> > > > + name = rte_security_session_conf
> > > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > > (offset_of(reserved_opts) + 18)}
> > >
> > > Now, about the suppression rule, I don't understand the intention of
> > > those 3 rules.
> > >
> > > I would simply suppress modifications (after reserved_opts) to the
> > > rte_security_ipsec_sa_options struct.
> > > Like:
> > >
> > > ; Ignore fields inserted in place of reserved_opts of
> > > rte_security_ipsec_sa_options
> > > [suppress_type]
> > > name = rte_security_ipsec_sa_options
> > > has_data_member_inserted_between = {offset_of(reserved_opts), end}
> > >
> > I tried this in the first place, but the ABI check was complaining
> > about other structures which include rte_security_ipsec_sa_options.
> > So I had to add suppressions for those as well.
> > Can you try at your end?
>
> I tried before suggesting, and it works with a single rule on this structure.
>
> I'm using the current libabigail master. Which version are you using,
> so I can try with the same?
>
I am currently using version 1.6. I will try with the latest version.
$ abidiff --version
abidiff: 1.6.0
and I get the following issue after removing the last two suppression rules.
Functions changes summary: 0 Removed, 1 Changed (8 filtered out), 0 Added functions
Variables changes summary: 0 Removed, 0 Changed, 0 Added variable
1 function with some indirect sub-type change:
  [C]'function const rte_security_capability* rte_security_capabilities_get(rte_security_ctx*)' at rte_security.c:158:1 has some indirect sub-type changes:
    return type changed:
      in pointed to type 'const rte_security_capability':
        in unqualified underlying type 'struct rte_security_capability' at rte_security.h:808:1:
          type size hasn't changed
          1 data member change:
    parameter 1 of type 'rte_security_ctx*' has sub-type changes:
      in pointed to type 'struct rte_security_ctx' at rte_security.h:72:1:
        type size hasn't changed
        1 data member change:
          type of 'const rte_security_ops* rte_security_ctx::ops' changed:
            in pointed to type 'const rte_security_ops':
              in unqualified underlying type 'struct rte_security_ops' at rte_security_driver.h:140:1:
                type size hasn't changed
                1 data member changes (2 filtered):
                  type of 'security_session_create_t rte_security_ops::session_create' changed:
                    underlying type 'int (void*, rte_security_session_conf*, rte_security_session*, rte_mempool*)*' changed:
                      in pointed to type 'function type int (void*, rte_security_session_conf*, rte_security_session*, rte_mempool*)':
                        parameter 2 of type 'rte_security_session_conf*' has sub-type changes:
                          in pointed to type 'struct rte_security_session_conf' at rte_security.h:502:1:
                            type size hasn't changed
                            1 data member change:
^ permalink raw reply [relevance 0%]
* Re: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
2022-02-08 9:18 3% ` [EXT] " Akhil Goyal
@ 2022-02-08 9:27 0% ` David Marchand
2022-02-08 10:45 0% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2022-02-08 9:27 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
On Tue, Feb 8, 2022 at 10:19 AM Akhil Goyal <gakhil@marvell.com> wrote:
>
> > Hello Akhil,
> >
> >
> > On Fri, Feb 4, 2022 at 11:14 PM Akhil Goyal <gakhil@marvell.com> wrote:
> > >
> > > A new option is added in IPsec to enable and attempt reassembly
> > > of inbound packets.
> >
> > First, about extending this structure.
> >
> > Copying the header:
> >
> > /** Reserved bit fields for future extension
> > *
> > * User should ensure reserved_opts is cleared as it may change in
> > * subsequent releases to support new options.
> > *
> > * Note: Reduce number of bits in reserved_opts for every new option.
> > */
> > uint32_t reserved_opts : 18;
> >
> > I did not follow the introduction of the reserved_opts field, but
> > writing this comment in the API only is weak.
> > Why can't the rte_security API enforce reserved_opts == 0 (like in
> > rte_security_session_create)?
> >
> This was discussed here.
> http://patches.dpdk.org/project/dpdk/patch/20211008204516.3497060-3-gakhil@marvell.com/
> rte_security_ipsec_sa_options is being used in multiple places, as listed below in libabigail.abignore.
> Checking a particular field in each of the APIs does not make sense to me.
It's strange to me that a user may pass this structure as input to
multiple functions.
But if that's how the security lib works, ok.
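For reference, the check itself would be small. A sketch, with
hypothetical placement in each entry point that takes the conf:

static int
check_sa_options(const struct rte_security_ipsec_sa_options *opts)
{
	/* Reject inputs with reserved bits set, so that reusing those
	 * bits later cannot silently change behavior.
	 */
	if (opts->reserved_opts != 0)
		return -EINVAL;
	return 0;
}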
>
> >
> >
> > >
> > > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > > Change-Id: I6f66f0b5a659550976a32629130594070cb16cb1
> > ^^^
> > Internal tag, please remove.
> >
> Yes, I missed that; I will remove it.
> >
> > > ---
> > > devtools/libabigail.abignore | 14 ++++++++++++++
> > > lib/security/rte_security.h | 12 +++++++++++-
> > > 2 files changed, 25 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > > index 4b676f317d..3bd39042e8 100644
> > > --- a/devtools/libabigail.abignore
> > > +++ b/devtools/libabigail.abignore
> > > @@ -11,3 +11,17 @@
> > > ; Ignore generated PMD information strings
> > > [suppress_variable]
> > > name_regexp = _pmd_info$
> > > +
> > > +; Ignore fields inserted in place of reserved_opts of
> > rte_security_ipsec_sa_options
> > > +[suppress_type]
> > > + name = rte_ipsec_sa_prm
> > > + name = rte_security_ipsec_sa_options
> > > + has_data_member_inserted_between = {offset_of(reserved_opts), end}
> > > +
> > > +[suppress_type]
> > > + name = rte_security_capability
> > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > (offset_of(reserved_opts) + 18)}
> > > +
> > > +[suppress_type]
> > > + name = rte_security_session_conf
> > > + has_data_member_inserted_between = {offset_of(reserved_opts),
> > (offset_of(reserved_opts) + 18)}
> >
> > Now, about the suppression rule, I don't understand the intention of
> > those 3 rules.
> >
> > I would simply suppress modifications (after reserved_opts) to the
> > rte_security_ipsec_sa_options struct.
> > Like:
> >
> > ; Ignore fields inserted in place of reserved_opts of
> > rte_security_ipsec_sa_options
> > [suppress_type]
> > name = rte_security_ipsec_sa_options
> > has_data_member_inserted_between = {offset_of(reserved_opts), end}
> >
> I tried this in the first place, but the ABI check was complaining
> about other structures which include rte_security_ipsec_sa_options.
> So I had to add suppressions for those as well.
> Can you try at your end?
I tried before suggesting, and it works with a single rule on this structure.
I'm using the current libabigail master. Which version are you using,
so I can try with the same?
--
David Marchand
^ permalink raw reply [relevance 0%]
* RE: [EXT] Re: [PATCH v4 3/3] security: add IPsec option for IP reassembly
@ 2022-02-08 9:18 3% ` Akhil Goyal
2022-02-08 9:27 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-08 9:18 UTC (permalink / raw)
To: David Marchand
Cc: dev, Anoob Joseph, Matan Azrad, Ananyev, Konstantin,
Thomas Monjalon, Yigit, Ferruh, Andrew Rybchenko, Rosen Xu,
Olivier Matz, Radu Nicolau, Jerin Jacob Kollanukkaran,
Stephen Hemminger, Ray Kinsella, Dodji Seketeli
> Hello Akhil,
>
>
> On Fri, Feb 4, 2022 at 11:14 PM Akhil Goyal <gakhil@marvell.com> wrote:
> >
> > A new option is added in IPsec to enable and attempt reassembly
> > of inbound packets.
>
> First, about extending this structure.
>
> Copying the header:
>
> /** Reserved bit fields for future extension
> *
> * User should ensure reserved_opts is cleared as it may change in
> * subsequent releases to support new options.
> *
> * Note: Reduce number of bits in reserved_opts for every new option.
> */
> uint32_t reserved_opts : 18;
>
> I did not follow the introduction of the reserved_opts field, but
> writing this comment in the API only is weak.
> Why can't the rte_security API enforce reserved_opts == 0 (like in
> rte_security_session_create)?
>
This was discussed here.
http://patches.dpdk.org/project/dpdk/patch/20211008204516.3497060-3-gakhil@marvell.com/
rte_security_ipsec_sa_options is being used in multiple places, as listed below in libabigail.abignore.
Checking a particular field in each of the APIs does not make sense to me.
>
>
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > Change-Id: I6f66f0b5a659550976a32629130594070cb16cb1
> ^^^
> Internal tag, please remove.
>
Yes, I missed that; I will remove it.
>
> > ---
> > devtools/libabigail.abignore | 14 ++++++++++++++
> > lib/security/rte_security.h | 12 +++++++++++-
> > 2 files changed, 25 insertions(+), 1 deletion(-)
> >
> > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > index 4b676f317d..3bd39042e8 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -11,3 +11,17 @@
> > ; Ignore generated PMD information strings
> > [suppress_variable]
> > name_regexp = _pmd_info$
> > +
> > +; Ignore fields inserted in place of reserved_opts of
> rte_security_ipsec_sa_options
> > +[suppress_type]
> > + name = rte_ipsec_sa_prm
> > + name = rte_security_ipsec_sa_options
> > + has_data_member_inserted_between = {offset_of(reserved_opts), end}
> > +
> > +[suppress_type]
> > + name = rte_security_capability
> > + has_data_member_inserted_between = {offset_of(reserved_opts),
> (offset_of(reserved_opts) + 18)}
> > +
> > +[suppress_type]
> > + name = rte_security_session_conf
> > + has_data_member_inserted_between = {offset_of(reserved_opts),
> (offset_of(reserved_opts) + 18)}
>
> Now, about the suppression rule, I don't understand the intention of
> those 3 rules.
>
> I would simply suppress modifications (after reserved_opts) to the
> rte_security_ipsec_sa_options struct.
> Like:
>
> ; Ignore fields inserted in place of reserved_opts of
> rte_security_ipsec_sa_options
> [suppress_type]
> name = rte_security_ipsec_sa_options
> has_data_member_inserted_between = {offset_of(reserved_opts), end}
>
I tried this in the first place, but the ABI check was complaining
about other structures which include rte_security_ipsec_sa_options.
So I had to add suppressions for those as well.
Can you try at your end?
^ permalink raw reply [relevance 3%]
* [PATCH v4 0/3] ethdev: introduce IP reassembly offload
2022-01-30 17:59 4% ` [PATCH v3 0/4] " Akhil Goyal
2022-02-01 14:10 0% ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Ferruh Yigit
@ 2022-02-04 22:13 4% ` Akhil Goyal
2022-02-08 20:11 4% ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
2 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2022-02-04 22:13 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, olivier.matz, david.marchand,
radu.nicolau, jerinj, stephen, mdr, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly RX offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated as a standalone offload without IPsec, if
suitable hardware is available to test it.
The patchset is tested on the cnxk platform. The driver implementation
and a test app are added as separate patchsets.[2][3]
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
[2]: APP: http://patches.dpdk.org/project/dpdk/list/?series=21284
[3]: PMD: http://patches.dpdk.org/project/dpdk/list/?series=21285
Newer versions of the app and PMD will be sent once the library changes
are acked.
Changes in v4:
- removed rte_eth_dev_info update for capability (Ferruh)
- removed Rx offload flag (Ferruh)
- added capability_get() (Ferruh)
- moved dynfield and dynflag namedefines in rte_mbuf_dyn.h (Ferruh)
changes in v3:
- incorporated comments from Andrew and Stephen Hemminger
changes in v2:
- added ABI ignore exceptions for modifications in reserved fields.
Added a crude way to work around the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (3):
ethdev: introduce IP reassembly offload
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 14 ++++
doc/guides/nics/features.rst | 13 ++++
lib/ethdev/ethdev_driver.h | 63 ++++++++++++++++++
lib/ethdev/rte_ethdev.c | 121 +++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 108 +++++++++++++++++++++++++++++++
lib/ethdev/version.map | 6 ++
lib/mbuf/rte_mbuf_dyn.h | 9 +++
lib/security/rte_security.h | 12 +++-
8 files changed, 345 insertions(+), 1 deletion(-)
--
2.25.1
^ permalink raw reply [relevance 4%]
* Re: [PATCH v2 0/8] net/bonding: fixes and LACP short timeout
@ 2022-02-04 15:09 3% ` Ferruh Yigit
2 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-02-04 15:09 UTC (permalink / raw)
To: Robert Sanford, dev; +Cc: chas3, humin29, bruce.richardson
On 12/21/2021 7:57 PM, Robert Sanford wrote:
> This patchset makes the following changes to net/bonding:
> - Clean up minor errors in spelling, whitespace, C++ wrappers, and
> comments.
> - Replace directly overwriting of slave port's rte_eth_conf by copying
> it, but only updating it via rte_eth_dev_configure().
> - Make minor changes to allocation of mbuf pool and rx/tx rings.
> - Add support for enabling LACP short timeout, i.e., link partner can
> use fast periodic time interval between transmits.
> - Include bond_8023ad and bond_alb in doxygen.
> - Remove self from Timers maintainers.
> - Add API stubs to net/ring PMD.
> - Add LACP short timeout to tests.
>
> V2 changes:
> - Additional typo and whitespace corrections.
> - Minor changes to LACP private rings creation.
> - Add net/ring API stubs patch.
> - Insert extra "bond_handshake" to LACP short timeout autotest.
>
> Robert Sanford (8):
> net/bonding: fix typos and whitespace
> net/bonding: fix bonded dev configuring slave dev
> net/bonding: change mbuf pool and ring creation
> net/bonding: support enabling LACP short timeout
> net/bonding: add bond_8023ad and bond_alb to doc
> Remove self from Timers maintainers.
> net/ring: add promiscuous and allmulticast API stubs
> net/bonding: add LACP short timeout to tests
>
Hi Robert,
There are some unrelated (and independent) patches in the set;
can you please send the new version as multiple sets to help manage them?
- 4/8 & 8/8 can be a separate set; since they have an ABI concern, they
can be managed separately
- 6/8 can be separate
- the rest can be a set of various bonding fixes
Thanks,
ferruh
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 8/8] net/bonding: add LACP short timeout tests
@ 2022-02-04 14:49 4% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-02-04 14:49 UTC (permalink / raw)
To: Robert Sanford, dev; +Cc: chas3, humin29, bruce.richardson
On 12/21/2021 7:57 PM, Robert Sanford wrote:
> - Add "set bonding lacp timeout_ctrl <port_id> on|off" to testpmd.
> - Add "test_mode4_lacp_timeout_control" to dpdk-test.
> - Remove call to rte_eth_dev_mac_addr_remove from add_slave,
> as it always fails and prints an error.
>
> Signed-off-by: Robert Sanford<rsanford@akamai.com>
> ---
> app/test-pmd/cmdline.c | 77 ++++++++++++++++++++++++++++++++++++++
> app/test/test_link_bonding_mode4.c | 70 +++++++++++++++++++++++++++++++++-
> 2 files changed, 145 insertions(+), 2 deletions(-)
This patch depends on patch 4/8, which causes an ABI error, so we can't
proceed with this patch before the ABI issue is resolved.
* Re: [PATCH v2 4/8] net/bonding: support enabling LACP short timeout
@ 2022-02-04 14:46 4% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-02-04 14:46 UTC (permalink / raw)
To: Robert Sanford, dev
Cc: chas3, humin29, bruce.richardson, David Marchand, Ray Kinsella
On 12/21/2021 7:57 PM, Robert Sanford wrote:
> - Add support for enabling LACP short timeout, i.e., link partner can
> use fast periodic time interval between transmits.
>
> Signed-off-by: Robert Sanford <rsanford@akamai.com>
> ---
> drivers/net/bonding/eth_bond_8023ad_private.h | 3 ++-
> drivers/net/bonding/rte_eth_bond_8023ad.c | 28 +++++++++++++++++++++++----
> drivers/net/bonding/rte_eth_bond_8023ad.h | 3 +++
> 3 files changed, 29 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/bonding/eth_bond_8023ad_private.h b/drivers/net/bonding/eth_bond_8023ad_private.h
> index 60db31e..bfde03c 100644
> --- a/drivers/net/bonding/eth_bond_8023ad_private.h
> +++ b/drivers/net/bonding/eth_bond_8023ad_private.h
> @@ -159,7 +159,6 @@ struct mode8023ad_private {
> uint64_t rx_marker_timeout;
> uint64_t update_timeout_us;
> rte_eth_bond_8023ad_ext_slowrx_fn slowrx_cb;
> - uint8_t external_sm;
> struct rte_ether_addr mac_addr;
>
> struct rte_eth_link slave_link;
> @@ -178,6 +177,8 @@ struct mode8023ad_private {
> uint16_t tx_qid;
> } dedicated_queues;
> enum rte_bond_8023ad_agg_selection agg_selection;
> + uint8_t short_timeout_enabled : 1;
> + uint8_t short_timeout_updated : 1;
> };
>
> /**
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index 9ed2a46..5c175e7 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -868,10 +868,10 @@ bond_mode_8023ad_periodic_cb(void *arg)
> struct rte_eth_link link_info;
> struct rte_ether_addr slave_addr;
> struct rte_mbuf *lacp_pkt = NULL;
> + uint8_t short_timeout_updated = internals->mode4.short_timeout_updated;
> uint16_t slave_id;
> uint16_t i;
>
> -
> /* Update link status on each port */
> for (i = 0; i < internals->active_slave_count; i++) {
> uint16_t key;
> @@ -916,6 +916,13 @@ bond_mode_8023ad_periodic_cb(void *arg)
> slave_id = internals->active_slaves[i];
> port = &bond_mode_8023ad_ports[slave_id];
>
> + if (short_timeout_updated) {
> + if (internals->mode4.short_timeout_enabled)
> + ACTOR_STATE_SET(port, LACP_SHORT_TIMEOUT);
> + else
> + ACTOR_STATE_CLR(port, LACP_SHORT_TIMEOUT);
> + }
> +
> if ((port->actor.key &
> rte_cpu_to_be_16(BOND_LINK_FULL_DUPLEX_KEY)) == 0) {
>
> @@ -960,6 +967,9 @@ bond_mode_8023ad_periodic_cb(void *arg)
> show_warnings(slave_id);
> }
>
> + if (short_timeout_updated)
> + internals->mode4.short_timeout_updated = 0;
> +
> rte_eal_alarm_set(internals->mode4.update_timeout_us,
> bond_mode_8023ad_periodic_cb, arg);
> }
> @@ -1054,7 +1064,6 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
> /* Given slave must not be in active list. */
> RTE_ASSERT(find_slave_by_id(internals->active_slaves,
> internals->active_slave_count, slave_id) == internals->active_slave_count);
> - RTE_SET_USED(internals); /* used only for assert when enabled */
>
> memcpy(&port->actor, &initial, sizeof(struct port_params));
> /* Standard requires that port ID must be greater than 0.
> @@ -1065,7 +1074,9 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
> memcpy(&port->partner_admin, &initial, sizeof(struct port_params));
>
> /* default states */
> - port->actor_state = STATE_AGGREGATION | STATE_LACP_ACTIVE | STATE_DEFAULTED;
> + port->actor_state = STATE_AGGREGATION | STATE_LACP_ACTIVE |
> + STATE_DEFAULTED | (internals->mode4.short_timeout_enabled ?
> + STATE_LACP_SHORT_TIMEOUT : 0);
> port->partner_state = STATE_LACP_ACTIVE | STATE_AGGREGATION;
> port->sm_flags = SM_FLAGS_BEGIN;
>
> @@ -1209,6 +1220,7 @@ bond_mode_8023ad_conf_get(struct rte_eth_dev *dev,
> struct mode8023ad_private *mode4 = &internals->mode4;
> uint64_t ms_ticks = rte_get_tsc_hz() / 1000;
>
> + memset(conf, 0, sizeof(*conf));
> conf->fast_periodic_ms = mode4->fast_periodic_timeout / ms_ticks;
> conf->slow_periodic_ms = mode4->slow_periodic_timeout / ms_ticks;
> conf->short_timeout_ms = mode4->short_timeout / ms_ticks;
> @@ -1219,6 +1231,7 @@ bond_mode_8023ad_conf_get(struct rte_eth_dev *dev,
> conf->rx_marker_period_ms = mode4->rx_marker_timeout / ms_ticks;
> conf->slowrx_cb = mode4->slowrx_cb;
> conf->agg_selection = mode4->agg_selection;
> + conf->lacp_timeout_control = mode4->short_timeout_enabled;
> }
>
> static void
> @@ -1234,6 +1247,7 @@ bond_mode_8023ad_conf_get_default(struct rte_eth_bond_8023ad_conf *conf)
> conf->update_timeout_ms = BOND_MODE_8023AX_UPDATE_TIMEOUT_MS;
> conf->slowrx_cb = NULL;
> conf->agg_selection = AGG_STABLE;
> + conf->lacp_timeout_control = 0;
> }
>
> static void
> @@ -1274,6 +1288,11 @@ bond_mode_8023ad_setup(struct rte_eth_dev *dev,
> mode4->slowrx_cb = conf->slowrx_cb;
> mode4->agg_selection = AGG_STABLE;
>
> + if (mode4->short_timeout_enabled != conf->lacp_timeout_control) {
> + mode4->short_timeout_enabled = conf->lacp_timeout_control;
> + mode4->short_timeout_updated = 1;
> + }
> +
> if (dev->data->dev_started)
> bond_mode_8023ad_start(dev);
> }
> @@ -1478,7 +1497,8 @@ bond_8023ad_setup_validate(uint16_t port_id,
> conf->aggregate_wait_timeout_ms == 0 ||
> conf->tx_period_ms == 0 ||
> conf->rx_marker_period_ms == 0 ||
> - conf->update_timeout_ms == 0) {
> + conf->update_timeout_ms == 0 ||
> + conf->lacp_timeout_control > 1) {
> RTE_BOND_LOG(ERR, "given mode 4 configuration is invalid");
> return -EINVAL;
> }
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.h b/drivers/net/bonding/rte_eth_bond_8023ad.h
> index 7e9a018..87f6b2f 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.h
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.h
> @@ -139,6 +139,9 @@ struct rte_eth_bond_8023ad_conf {
> uint32_t update_timeout_ms;
> rte_eth_bond_8023ad_ext_slowrx_fn slowrx_cb;
> enum rte_bond_8023ad_agg_selection agg_selection;
> + uint8_t lacp_timeout_control;
> + /**< LACPDU.Actor_State.LACP_Timeout flag: 0=Long 1=Short. */
> + uint8_t reserved_8s[3];
> };
>
> struct rte_eth_bond_8023ad_slave_info {
The changes give an ABI warning [1]: they increase the size of a struct
that is a parameter to the public API.
So old applications will pass a smaller struct, but the new DPDK library
will read beyond the end of that struct, most probably into unrelated
memory on the stack; this looks like an ABI break to me.
@Ray can you please check if I am missing anything?
[1]
  [C] 'function int rte_eth_bond_8023ad_conf_get(uint16_t, rte_eth_bond_8023ad_conf*)' at rte_eth_bond_8023ad.c:1423:1 has some indirect sub-type changes:
    parameter 2 of type 'rte_eth_bond_8023ad_conf*' has sub-type changes:
      in pointed to type 'struct rte_eth_bond_8023ad_conf' at rte_eth_bond_8023ad.h:131:1:
        type size hasn't changed
        2 data member insertions:
          'uint8_t rte_eth_bond_8023ad_conf::lacp_timeout_control', at offset 352 (in bits) at rte_eth_bond_8023ad.h:142:1
          'uint8_t rte_eth_bond_8023ad_conf::reserved_8s[3]', at offset 360 (in bits) at rte_eth_bond_8023ad.h:144:1

Error: ABI issue reported for 'abidiff --suppr devtools/libabigail.abignore --no-added-syms --headers-dir1 reference/usr/local/include --headers-dir2 install/usr/local/include reference/dump/librte_net_bond.dump install/dump/librte_net_bond.dump'
ABIDIFF_ABI_CHANGE, this change requires a review (abidiff flagged this as a potential issue).
^ permalink raw reply [relevance 4%]
* [PATCH v3 4/4] crypto: modify return value for asym session create
2022-02-03 16:04 1% ` [PATCH v3 1/4] crypto: use single buffer for asymmetric session Ciara Power
@ 2022-02-03 16:04 2% ` Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2022-02-03 16:04 UTC (permalink / raw)
To: dev; +Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty
Rather than the asym session create function returning a session on
success and a NULL value on error, it is modified to return int
values: 0 on success, or -EINVAL/-ENOTSUP/-ENOMEM on failure.
The session to be used is passed as input.
This adds clarity on the failure of the create function, which enables
treating the -ENOTSUP return as TEST_SKIPPED in test apps.
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Anoob Joseph <anoobj@marvell.com>
---
v3:
- Fixed variable declarations, putting initialised variable last.
- Made function comment for return value more generic.
- Fixed log to include line break.
- Added documentation.
---
app/test-crypto-perf/cperf_ops.c | 12 ++-
app/test/test_cryptodev_asym.c | 132 +++++++++++++-----------
doc/guides/prog_guide/cryptodev_lib.rst | 6 +-
doc/guides/rel_notes/release_22_03.rst | 3 +-
lib/cryptodev/rte_cryptodev.c | 27 ++---
lib/cryptodev/rte_cryptodev.h | 11 +-
6 files changed, 104 insertions(+), 87 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 948dc0f608..1486298931 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -734,7 +734,9 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform auth_xform;
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
+ void *asym_sess = NULL;
struct rte_crypto_asym_xform xform = {0};
+ int ret;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -744,11 +746,13 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(sess_mp, dev_id, &xform);
- if (sess == NULL)
+ ret = rte_cryptodev_asym_session_create(&asym_sess,
+ sess_mp, dev_id, &xform);
+ if (ret < 0) {
+ RTE_LOG(ERR, USER1, "Asym session create failed\n");
return NULL;
-
- return sess;
+ }
+ return asym_sess;
}
#ifdef RTE_LIB_SECURITY
/*
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index a81d6292f6..2edf8b5b42 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -315,7 +315,7 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
uint8_t input[TEST_DATA_SIZE] = {0};
uint8_t *result = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
xform_tc.next = NULL;
xform_tc.xform_type = data_tc->modex.xform_type;
@@ -450,14 +450,14 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(ts_params->session_mpool,
- dev_id, &xform_tc);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(&sess,
+ ts_params->session_mpool, dev_id, &xform_tc);
+ if (ret < 0) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -644,9 +644,9 @@ test_rsa_sign_verify(void)
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
uint8_t dev_id = ts_params->valid_devs[0];
- void *sess;
+ void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -659,12 +659,12 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &rsa_xform);
-
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &rsa_xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -685,9 +685,9 @@ test_rsa_enc_dec(void)
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
uint8_t dev_id = ts_params->valid_devs[0];
- void *sess;
+ void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with exponent key only,
* Check in PMD feature flag for RSA exponent key type support.
@@ -700,11 +700,11 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &rsa_xform);
-
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &rsa_xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -726,9 +726,9 @@ test_rsa_sign_verify_crt(void)
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
uint8_t dev_id = ts_params->valid_devs[0];
- void *sess;
+ void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
@@ -740,12 +740,12 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &rsa_xform_crt);
-
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &rsa_xform_crt);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -767,9 +767,9 @@ test_rsa_enc_dec_crt(void)
struct crypto_testsuite_params_asym *ts_params = &testsuite_params;
struct rte_mempool *sess_mpool = ts_params->session_mpool;
uint8_t dev_id = ts_params->valid_devs[0];
- void *sess;
+ void *sess = NULL;
struct rte_cryptodev_info dev_info;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
/* Test case supports op with quintuple format key only,
* Check in PMD feature flag for RSA quintuple key type support.
@@ -781,12 +781,12 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &rsa_xform_crt);
-
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &rsa_xform_crt);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1050,7 +1050,7 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
@@ -1077,12 +1077,13 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1135,7 +1136,7 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1157,12 +1158,13 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1218,7 +1220,7 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
@@ -1248,12 +1250,13 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1309,7 +1312,7 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_op *asym_op = NULL;
struct rte_crypto_op *op = NULL, *result_op = NULL;
void *sess = NULL;
- int status = TEST_SUCCESS;
+ int ret, status = TEST_SUCCESS;
uint8_t out_pub_key[TEST_DH_MOD_LEN];
uint8_t out_prv_key[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform pub_key_xform;
@@ -1339,12 +1342,13 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1430,12 +1434,13 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &modinv_xform);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &modinv_xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1556,13 +1561,14 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &modex_xform);
- if (!sess) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &modex_xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u "
"FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
@@ -1668,13 +1674,14 @@ test_dsa_sign(void)
uint8_t r[TEST_DH_MOD_LEN];
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
+ int ret;
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &dsa_xform);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool, dev_id, &dsa_xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto error_exit;
}
/* set up crypto op data structure */
@@ -1805,7 +1812,7 @@ test_ecdsa_sign_verify(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -1850,12 +1857,13 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool,
+ dev_id, &xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
@@ -2009,7 +2017,7 @@ test_ecpm(enum curve curve_id)
struct rte_crypto_asym_op *asym_op;
struct rte_cryptodev_info dev_info;
struct rte_crypto_op *op = NULL;
- int status = TEST_SUCCESS, ret;
+ int ret, status = TEST_SUCCESS;
switch (curve_id) {
case SECP192R1:
@@ -2054,12 +2062,12 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
- if (sess == NULL) {
+ ret = rte_cryptodev_asym_session_create(&sess, sess_mpool, dev_id, &xform);
+ if (ret < 0) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
"Session creation failed\n");
- status = TEST_FAILED;
+ status = (ret == -ENOTSUP) ? TEST_SKIPPED : TEST_FAILED;
goto exit;
}
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 62bd3577f5..8e16461dc6 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1236,10 +1236,10 @@ crypto operations is similar except change to respective op and xform setup).
* Create asym crypto session and initialize it for the crypto device.
* The session structure is hidden from the app, so void * is used.
*/
- void *asym_session;
- asym_session = rte_cryptodev_asym_session_create(asym_session_pool,
+ void *asym_session = NULL;
+ ret = rte_cryptodev_asym_session_create(&asym_session, asym_session_pool,
cdev_id, &modex_xform);
- if (asym_session == NULL)
+ if (ret < 0)
rte_exit(EXIT_FAILURE, "Session could not be created\n");
/* Get a burst of crypto operations. */
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 1022f77828..195a7efdd5 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -110,7 +110,8 @@ API Changes
create a mempool with element size to hold the generic asym session header,
along with the max size for a device private session data, and user data size.
``rte_cryptodev_asym_session_init`` was removed as this initialisation is
- now done by ``rte_cryptodev_asym_session_create``.
+ now done by ``rte_cryptodev_asym_session_create``, which was updated to
+ return an integer value to indicate initialisation errors.
ABI Changes
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index 0d816ed4a9..005f0e7952 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -1912,9 +1912,9 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
return sess;
}
-void *
-rte_cryptodev_asym_session_create(struct rte_mempool *mp, uint8_t dev_id,
- struct rte_crypto_asym_xform *xforms)
+int
+rte_cryptodev_asym_session_create(void **session, struct rte_mempool *mp,
+ uint8_t dev_id, struct rte_crypto_asym_xform *xforms)
{
struct rte_cryptodev_asym_session *sess;
uint32_t session_priv_data_sz;
@@ -1926,18 +1926,18 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp, uint8_t dev_id,
if (!rte_cryptodev_is_valid_dev(dev_id)) {
CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return NULL;
+ return -EINVAL;
}
session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
dev_id);
dev = rte_cryptodev_pmd_get_dev(dev_id);
if (dev == NULL)
- return NULL;
+ return -EINVAL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
- return NULL;
+ return -EINVAL;
}
pool_priv = rte_mempool_get_priv(mp);
@@ -1945,22 +1945,23 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp, uint8_t dev_id,
if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
CDEV_LOG_DEBUG(
"The private session data size used when creating the mempool is smaller than this device's private session data.");
- return NULL;
+ return -EINVAL;
}
/* Verify if provided mempool can hold elements big enough. */
if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
- return NULL;
+ return -EINVAL;
}
/* Allocate a session structure from the session pool */
- if (rte_mempool_get(mp, (void **)&sess)) {
+ if (rte_mempool_get(mp, session)) {
CDEV_LOG_ERR("couldn't get object from session mempool");
- return NULL;
+ return -ENOMEM;
}
+ sess = *session;
sess->driver_id = dev->driver_id;
sess->user_data_sz = pool_priv->user_data_sz;
sess->max_priv_session_sz = pool_priv->max_priv_session_sz;
@@ -1970,7 +1971,7 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp, uint8_t dev_id,
*/
memset(sess->sess_private_data, 0, session_priv_data_sz + sess->user_data_sz);
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, -ENOTSUP);
if (sess->sess_private_data[0] == 0) {
ret = dev->dev_ops->asym_session_configure(dev,
@@ -1980,12 +1981,12 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp, uint8_t dev_id,
CDEV_LOG_ERR(
"dev_id %d failed to configure session details",
dev_id);
- return NULL;
+ return ret;
}
}
rte_cryptodev_trace_asym_session_create(mp, sess);
- return sess;
+ return 0;
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 6a4d6d9934..9a75936963 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -990,18 +990,21 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
/**
* Create asymmetric crypto session header (generic with no private data)
*
+ * @param session void ** for session to be used
* @param mempool mempool to allocate asymmetric session
* objects from
* @param dev_id ID of device that we want the session to be used on
* @param xforms Asymmetric crypto transform operations to apply on flow
* processed with this session
* @return
- * - On success return pointer to asym-session
- * - On failure returns NULL
+ * - 0 on success.
+ * - -EINVAL on invalid arguments.
+ * - -ENOMEM on memory error for session allocation.
+ * - -ENOTSUP if device doesn't support session configuration.
*/
__rte_experimental
-void *
-rte_cryptodev_asym_session_create(struct rte_mempool *mempool,
+int
+rte_cryptodev_asym_session_create(void **session, struct rte_mempool *mempool,
uint8_t dev_id, struct rte_crypto_asym_xform *xforms);
/**
--
2.25.1
^ permalink raw reply [relevance 2%]
* [PATCH v3 1/4] crypto: use single buffer for asymmetric session
@ 2022-02-03 16:04 1% ` Ciara Power
2022-02-03 16:04 2% ` [PATCH v3 4/4] crypto: modify return value for asym session create Ciara Power
1 sibling, 0 replies; 200+ results
From: Ciara Power @ 2022-02-03 16:04 UTC (permalink / raw)
To: dev
Cc: roy.fan.zhang, gakhil, anoobj, mdr, Ciara Power, Declan Doherty,
Ankur Dwivedi, Tejasree Kondoj, John Griffin, Fiona Trahe,
Deepak Kumar Jain
Rather than using a session buffer that contains pointers to private
session data elsewhere, have a single session buffer.
This session is created for a driver ID, and the mempool element
contains space for the max session private data needed for any driver.
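Conceptually the session object goes from a table of per-driver
pointers to one flat allocation (a layout sketch only, not the exact
DPDK struct definitions; MAX_DRIVERS is a hypothetical bound for the
sketch):

	#include <stdint.h>
	#define MAX_DRIVERS 8               /* hypothetical, for the sketch */

	/* Old model: the mempool object holds per-driver pointers, each
	 * pointing at a separate private-data object allocated elsewhere. */
	struct asym_session_old {
		void *sess_private_data[MAX_DRIVERS];   /* by driver_id */
	};

	/* New model: a single mempool element holds the header and the
	 * private data in-line, sized at pool creation for the largest
	 * private session size of any available device. */
	struct asym_session_new {
		uint8_t driver_id;            /* driver session was made for */
		uint16_t max_priv_session_sz; /* in-line private capacity */
		uint8_t sess_private_data[];  /* driver data follows */
	};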
Signed-off-by: Ciara Power <ciara.power@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
---
v2:
- Renamed function typedef from "free" to "clear" as session private
data isn't being freed in that function.
- Moved user data API to separate patch.
- Minor fixes to comments, formatting, return values.
v3:
- Corrected formatting of struct comments.
- Increased size of max_priv_session_sz to uint16_t.
- Removed trace for asym session init function that was
previously removed.
- Added documentation.
---
app/test-crypto-perf/cperf_ops.c | 14 +-
app/test/test_cryptodev_asym.c | 200 ++++---------------
doc/guides/prog_guide/cryptodev_lib.rst | 55 ++---
doc/guides/rel_notes/release_22_03.rst | 7 +
drivers/crypto/cnxk/cn10k_cryptodev_ops.c | 6 +-
drivers/crypto/cnxk/cn9k_cryptodev_ops.c | 6 +-
drivers/crypto/cnxk/cnxk_cryptodev_ops.c | 11 +-
drivers/crypto/octeontx/otx_cryptodev_ops.c | 29 +--
drivers/crypto/openssl/rte_openssl_pmd.c | 5 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 23 +--
drivers/crypto/qat/qat_asym.c | 53 ++---
lib/cryptodev/cryptodev_pmd.h | 17 +-
lib/cryptodev/cryptodev_trace_points.c | 6 +-
lib/cryptodev/rte_cryptodev.c | 167 ++++++++++------
lib/cryptodev/rte_cryptodev.h | 72 ++++---
lib/cryptodev/rte_cryptodev_trace.h | 21 +-
lib/cryptodev/version.map | 5 +-
17 files changed, 258 insertions(+), 439 deletions(-)
diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index d975ae1ab8..bdc5dc9544 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -735,7 +735,6 @@ cperf_create_session(struct rte_mempool *sess_mp,
struct rte_crypto_sym_xform aead_xform;
struct rte_cryptodev_sym_session *sess = NULL;
struct rte_crypto_asym_xform xform = {0};
- int rc;
if (options->op_type == CPERF_ASYM_MODEX) {
xform.next = NULL;
@@ -745,19 +744,10 @@ cperf_create_session(struct rte_mempool *sess_mp,
xform.modex.exponent.data = perf_mod_e;
xform.modex.exponent.length = sizeof(perf_mod_e);
- sess = (void *)rte_cryptodev_asym_session_create(sess_mp);
+ sess = (void *)rte_cryptodev_asym_session_create(sess_mp, dev_id, &xform);
if (sess == NULL)
return NULL;
- rc = rte_cryptodev_asym_session_init(dev_id, (void *)sess,
- &xform, priv_mp);
- if (rc < 0) {
- if (sess != NULL) {
- rte_cryptodev_asym_session_clear(dev_id,
- (void *)sess);
- rte_cryptodev_asym_session_free((void *)sess);
- }
- return NULL;
- }
+
return sess;
}
#ifdef RTE_LIB_SECURITY
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 68f4d8e7a6..f7c2fd2588 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -450,7 +450,8 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
}
if (!sessionless) {
- sess = rte_cryptodev_asym_session_create(ts_params->session_mpool);
+ sess = rte_cryptodev_asym_session_create(ts_params->session_mpool,
+ dev_id, &xform_tc);
if (!sess) {
snprintf(test_msg, ASYM_TEST_MSG_LEN,
"line %u "
@@ -460,15 +461,6 @@ test_cryptodev_asym_op(struct crypto_testsuite_params_asym *ts_params,
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform_tc,
- ts_params->session_mpool) < 0) {
- snprintf(test_msg, ASYM_TEST_MSG_LEN,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
rte_crypto_op_attach_asym_session(op, sess);
} else {
asym_op->xform = &xform_tc;
@@ -667,18 +659,11 @@ test_rsa_sign_verify(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &rsa_xform);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -686,7 +671,6 @@ test_rsa_sign_verify(void)
status = queue_ops_rsa_sign_verify(sess);
error_exit:
-
rte_cryptodev_asym_session_clear(dev_id, sess);
rte_cryptodev_asym_session_free(sess);
@@ -716,17 +700,10 @@ test_rsa_enc_dec(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &rsa_xform);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for enc_dec\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec\n");
status = TEST_FAILED;
goto error_exit;
}
@@ -763,22 +740,15 @@ test_rsa_sign_verify_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &rsa_xform_crt);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"sign_verify_crt\n");
status = TEST_FAILED;
- return status;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "sign_verify_crt\n");
- status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_sign_verify(sess);
error_exit:
@@ -811,21 +781,15 @@ test_rsa_enc_dec_crt(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &rsa_xform_crt);
if (!sess) {
RTE_LOG(ERR, USER1, "Session creation failed for "
"enc_dec_crt\n");
- return TEST_FAILED;
- }
-
- if (rte_cryptodev_asym_session_init(dev_id, sess, &rsa_xform_crt,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1, "Unable to config asym session for "
- "enc_dec_crt\n");
status = TEST_FAILED;
goto error_exit;
}
+
status = queue_ops_rsa_enc_dec(sess);
error_exit:
@@ -924,7 +888,6 @@ testsuite_setup(void)
/* configure qp */
ts_params->qp_conf.nb_descriptors = DEFAULT_NUM_OPS_INFLIGHT;
ts_params->qp_conf.mp_session = ts_params->session_mpool;
- ts_params->qp_conf.mp_session_private = ts_params->session_mpool;
for (qp_id = 0; qp_id < info.max_nb_queue_pairs; qp_id++) {
TEST_ASSERT_SUCCESS(rte_cryptodev_queue_pair_setup(
dev_id, qp_id, &ts_params->qp_conf,
@@ -933,21 +896,9 @@ testsuite_setup(void)
qp_id, dev_id);
}
- /* setup asym session pool */
- unsigned int session_size = RTE_MAX(
- rte_cryptodev_asym_get_private_session_size(dev_id),
- rte_cryptodev_asym_get_header_session_size());
- /*
- * Create mempool with TEST_NUM_SESSIONS * 2,
- * to include the session headers
- */
- ts_params->session_mpool = rte_mempool_create(
- "test_asym_sess_mp",
- TEST_NUM_SESSIONS * 2,
- session_size,
- 0, 0, NULL, NULL, NULL,
- NULL, SOCKET_ID_ANY,
- 0);
+ ts_params->session_mpool = rte_cryptodev_asym_session_pool_create(
+ "test_asym_sess_mp", TEST_NUM_SESSIONS * 2, 0,
+ SOCKET_ID_ANY);
TEST_ASSERT_NOT_NULL(ts_params->session_mpool,
"session mempool allocation failed");
@@ -1104,14 +1055,6 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform xform = *xfrm;
uint8_t peer[] = "01234567890123456789012345678901234567890123456789";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1134,11 +1077,11 @@ test_dh_gen_shared_sec(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.shared_secret.data = output;
asym_op->dh.shared_secret.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1196,14 +1139,6 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1222,11 +1157,11 @@ test_dh_gen_priv_key(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.priv_key.data = output;
asym_op->dh.priv_key.length = sizeof(output);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1287,14 +1222,6 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
uint8_t output[TEST_DH_MOD_LEN];
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1321,11 +1248,11 @@ test_dh_gen_pub_key(struct rte_crypto_asym_xform *xfrm)
0);
asym_op->dh.priv_key = dh_test_params.priv_key;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1388,15 +1315,6 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
struct rte_crypto_asym_xform pub_key_xform;
struct rte_crypto_asym_xform xform = *xfrm;
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* set up crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1420,11 +1338,12 @@ test_dh_gen_kp(struct rte_crypto_asym_xform *xfrm)
asym_op->dh.pub_key.length = sizeof(out_pub_key);
asym_op->dh.priv_key.data = out_prv_key;
asym_op->dh.priv_key.length = sizeof(out_prv_key);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
+ "line %u FAILED: %s", __LINE__,
+ "Session creation failed");
status = TEST_FAILED;
goto error_exit;
}
@@ -1511,7 +1430,7 @@ test_mod_inv(void)
return TEST_SKIPPED;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &modinv_xform);
if (!sess) {
RTE_LOG(ERR, USER1, "line %u "
"FAILED: %s", __LINE__,
@@ -1520,15 +1439,6 @@ test_mod_inv(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modinv_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* generate crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (!op) {
@@ -1646,7 +1556,7 @@ test_mod_exp(void)
goto error_exit;
}
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &modex_xform);
if (!sess) {
RTE_LOG(ERR, USER1,
"line %u "
@@ -1656,15 +1566,6 @@ test_mod_exp(void)
goto error_exit;
}
- if (rte_cryptodev_asym_session_init(dev_id, sess, &modex_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
asym_op = op->asym;
memcpy(input, base, sizeof(base));
asym_op->modex.base.data = input;
@@ -1768,7 +1669,7 @@ test_dsa_sign(void)
uint8_t s[TEST_DH_MOD_LEN];
uint8_t dgst[] = "35d81554afaad2cf18f3a1770d5fedc4ea5be344";
- sess = rte_cryptodev_asym_session_create(sess_mpool);
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &dsa_xform);
if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
@@ -1797,15 +1698,6 @@ test_dsa_sign(void)
debug_hexdump(stdout, "priv_key: ", dsa_xform.dsa.x.data,
dsa_xform.dsa.x.length);
- if (rte_cryptodev_asym_session_init(dev_id, sess, &dsa_xform,
- sess_mpool) < 0) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s",
- __LINE__, "unabled to config sym session");
- status = TEST_FAILED;
- goto error_exit;
- }
-
/* attach asymmetric crypto session to crypto operations */
rte_crypto_op_attach_asym_session(op, sess);
asym_op->dsa.op_type = RTE_CRYPTO_ASYM_OP_SIGN;
@@ -1941,15 +1833,6 @@ test_ecdsa_sign_verify(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -1967,11 +1850,11 @@ test_ecdsa_sign_verify(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
@@ -2154,15 +2037,6 @@ test_ecpm(enum curve curve_id)
rte_cryptodev_info_get(dev_id, &dev_info);
- sess = rte_cryptodev_asym_session_create(sess_mpool);
- if (sess == NULL) {
- RTE_LOG(ERR, USER1,
- "line %u FAILED: %s", __LINE__,
- "Session creation failed\n");
- status = TEST_FAILED;
- goto exit;
- }
-
/* Setup crypto op data structure */
op = rte_crypto_op_alloc(op_mpool, RTE_CRYPTO_OP_TYPE_ASYMMETRIC);
if (op == NULL) {
@@ -2180,11 +2054,11 @@ test_ecpm(enum curve curve_id)
xform.xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM;
xform.ec.curve_id = input_params.curve;
- if (rte_cryptodev_asym_session_init(dev_id, sess, &xform,
- sess_mpool) < 0) {
+ sess = rte_cryptodev_asym_session_create(sess_mpool, dev_id, &xform);
+ if (sess == NULL) {
RTE_LOG(ERR, USER1,
"line %u FAILED: %s", __LINE__,
- "Unable to config asym session\n");
+ "Session creation failed\n");
status = TEST_FAILED;
goto exit;
}
diff --git a/doc/guides/prog_guide/cryptodev_lib.rst b/doc/guides/prog_guide/cryptodev_lib.rst
index 8766bc34a9..f8f8562f4c 100644
--- a/doc/guides/prog_guide/cryptodev_lib.rst
+++ b/doc/guides/prog_guide/cryptodev_lib.rst
@@ -1038,20 +1038,17 @@ It is the application's responsibility to create and manage the session mempools
Application using both symmetric and asymmetric sessions should allocate and maintain
different sessions pools for each type.
-An application can use ``rte_cryptodev_get_asym_session_private_size()`` to
-get the private size of asymmetric session on a given crypto device. This
-function would allow an application to calculate the max device asymmetric
-session size of all crypto devices to create a single session mempool.
-If instead an application creates multiple asymmetric session mempools,
-the Crypto device framework also provides ``rte_cryptodev_asym_get_header_session_size()`` to get
-the size of an uninitialized session.
+An application can use ``rte_cryptodev_asym_session_pool_create()`` to create a mempool
+with a specified number of elements. The element size will allow for the session header,
+and the max private session size.
+The max private session size is chosen based on available crypto devices,
+the biggest private session size is used. This means any of those devices can be used,
+and the mempool element will have available space for its private session data.
Once the session mempools have been created, ``rte_cryptodev_asym_session_create()``
-is used to allocate an uninitialized asymmetric session from the given mempool.
-The session then must be initialized using ``rte_cryptodev_asym_session_init()``
-for each of the required crypto devices. An asymmetric transform chain
-is used to specify the operation and its parameters. See the section below for
-details on transforms.
+is used to allocate and initialize an asymmetric session from the given mempool.
+An asymmetric transform chain is used to specify the operation and its parameters.
+See the section below for details on transforms.
When a session is no longer used, user must call ``rte_cryptodev_asym_session_clear()``
for each of the crypto devices that are using the session, to free all driver
@@ -1162,21 +1159,14 @@ crypto operations is similar except change to respective op and xform setup).
uint8_t cdev_id = rte_cryptodev_get_dev_id(crypto_name);
- /* Get private asym session data size. */
- asym_session_size = rte_cryptodev_get_asym_private_session_size(cdev_id);
-
/*
- * Create session mempool, with two objects per session,
- * one for the session header and another one for the
- * private asym session data for the crypto device.
+ * Create session mempool, this will create elements big enough
+ * to hold the generic session header,
+ * and the max private session size of the available devices.
*/
- asym_session_pool = rte_mempool_create("asym_session_pool",
- MAX_ASYM_SESSIONS * 2,
- asym_session_size,
- 0,
- 0, NULL, NULL, NULL,
- NULL, socket_id,
- 0);
+ asym_session_pool = rte_cryptodev_asym_session_pool_create(
+ "asym_session_pool", MAX_ASYM_SESSIONS, 0, 0,
+ socket_id);
/* Configure the crypto device. */
struct rte_cryptodev_config conf = {
@@ -1190,8 +1180,7 @@ crypto operations is similar except change to respective op and xform setup).
if (rte_cryptodev_configure(cdev_id, &conf) < 0)
rte_exit(EXIT_FAILURE, "Failed to configure cryptodev %u", cdev_id);
- if (rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
- socket_id, asym_session_pool) < 0)
+ if (rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf, socket_id) < 0)
rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");
if (rte_cryptodev_start(cdev_id) < 0)
@@ -1226,15 +1215,11 @@ crypto operations is similar except change to respective op and xform setup).
};
/* Create asym crypto session and initialize it for the crypto device. */
struct rte_cryptodev_asym_session *asym_session;
- asym_session = rte_cryptodev_asym_session_create(asym_session_pool);
+ asym_session = rte_cryptodev_asym_session_create(asym_session_pool,
+ cdev_id, &modex_xform);
if (asym_session == NULL)
rte_exit(EXIT_FAILURE, "Session could not be created\n");
- if (rte_cryptodev_asym_session_init(cdev_id, asym_session,
- &modex_xform, asym_session_pool) < 0)
- rte_exit(EXIT_FAILURE, "Session could not be initialized "
- "for the crypto device\n");
-
/* Get a burst of crypto operations. */
struct rte_crypto_op *crypto_ops[1];
if (rte_crypto_op_bulk_alloc(crypto_op_pool,
@@ -1245,11 +1230,11 @@ crypto operations is similar except change to respective op and xform setup).
/* Set up the crypto operations. */
struct rte_crypto_asym_op *asym_op = crypto_ops[0]->asym;
- /* calculate mod exp of value 0xf8 */
+ /* calculate mod exp of value 0xf8 */
static unsigned char base[] = {0xF8};
asym_op->modex.base.data = base;
asym_op->modex.base.length = sizeof(base);
- asym_op->modex.base.iova = base;
+ asym_op->modex.base.iova = base;
/* Attach the asym crypto session to the operation */
rte_crypto_op_attach_asym_session(op, asym_session);
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 3bc0630c7c..de8d8ce4e9 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -100,6 +100,13 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* cryptodev: The asym session handling was modified to use a single buffer.
+ A ``rte_cryptodev_asym_session_pool_create`` function was added to
+ create a mempool with element size to hold the generic asym session header,
+ along with the max size for a device private session data.
+ ``rte_cryptodev_asym_session_init`` was removed as this initialisation is
+ now done by ``rte_cryptodev_asym_session_create``.
+
ABI Changes
-----------
diff --git a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
index d217bbf383..7390f976c6 100644
--- a/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn10k_cryptodev_ops.c
@@ -157,8 +157,7 @@ cn10k_cpt_fill_inst(struct cnxk_cpt_qp *qp, struct rte_crypto_op *ops[],
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- ae_sess = get_asym_session_private_data(
- asym_op->session, cn10k_cryptodev_driver_id);
+ ae_sess = get_asym_session_private_data(asym_op->session);
ret = cnxk_ae_enqueue(qp, op, infl_req, &inst[0],
ae_sess);
if (unlikely(ret))
@@ -431,8 +430,7 @@ cn10k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn10k_cryptodev_driver_id);
+ sess = get_asym_session_private_data(op->session);
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
index ac1953b66d..59a06af30e 100644
--- a/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cn9k_cryptodev_ops.c
@@ -138,8 +138,7 @@ cn9k_cpt_inst_prep(struct cnxk_cpt_qp *qp, struct rte_crypto_op *op,
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
asym_op = op->asym;
- sess = get_asym_session_private_data(
- asym_op->session, cn9k_cryptodev_driver_id);
+ sess = get_asym_session_private_data(asym_op->session);
ret = cnxk_ae_enqueue(qp, op, infl_req, inst, sess);
inst->w7.u64 = sess->cpt_inst_w7;
} else {
@@ -453,8 +452,7 @@ cn9k_cpt_dequeue_post_process(struct cnxk_cpt_qp *qp, struct rte_crypto_op *cop,
uintptr_t *mdata = infl_req->mdata;
struct cnxk_ae_sess *sess;
- sess = get_asym_session_private_data(
- op->session, cn9k_cryptodev_driver_id);
+ sess = get_asym_session_private_data(op->session);
cnxk_ae_post_process(cop, sess, (uint8_t *)mdata[0]);
}
diff --git a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
index 67a2d9b08e..0d561ba0f3 100644
--- a/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
+++ b/drivers/crypto/cnxk/cnxk_cryptodev_ops.c
@@ -657,7 +657,7 @@ cnxk_ae_session_clear(struct rte_cryptodev *dev,
struct rte_mempool *sess_mp;
struct cnxk_ae_sess *priv;
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = get_asym_session_private_data(sess);
if (priv == NULL)
return;
@@ -667,7 +667,6 @@ cnxk_ae_session_clear(struct rte_cryptodev *dev,
/* Reset and free object back to pool */
memset(priv, 0, cnxk_ae_session_size_get(dev));
sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
rte_mempool_put(sess_mp, priv);
}
@@ -679,15 +678,10 @@ cnxk_ae_session_cfg(struct rte_cryptodev *dev,
{
struct cnxk_cpt_vf *vf = dev->data->dev_private;
struct roc_cpt *roc_cpt = &vf->cpt;
- struct cnxk_ae_sess *priv;
+ struct cnxk_ae_sess *priv = get_asym_session_private_data(sess);
union cpt_inst_w7 w7;
int ret;
- if (rte_mempool_get(pool, (void **)&priv))
- return -ENOMEM;
-
- memset(priv, 0, sizeof(struct cnxk_ae_sess));
-
ret = cnxk_ae_fill_session_parameters(priv, xform);
if (ret) {
rte_mempool_put(pool, priv);
@@ -699,7 +693,6 @@ cnxk_ae_session_cfg(struct rte_cryptodev *dev,
priv->cpt_inst_w7 = w7.u64;
priv->cnxk_fpm_iova = vf->cnxk_fpm_iova;
priv->ec_grp = vf->ec_grp;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
diff --git a/drivers/crypto/octeontx/otx_cryptodev_ops.c b/drivers/crypto/octeontx/otx_cryptodev_ops.c
index f7ca8a8a8e..22c54dde68 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_ops.c
+++ b/drivers/crypto/octeontx/otx_cryptodev_ops.c
@@ -375,35 +375,24 @@ otx_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
}
static int
-otx_cpt_asym_session_cfg(struct rte_cryptodev *dev,
+otx_cpt_asym_session_cfg(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform __rte_unused,
struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
+ struct rte_mempool *pool __rte_unused)
{
- struct cpt_asym_sess_misc *priv;
+ struct cpt_asym_sess_misc *priv = get_asym_session_private_data(sess);
int ret;
CPT_PMD_INIT_FUNC_TRACE();
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
ret = cpt_fill_asym_session_parameters(priv, xform);
if (ret) {
CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
return ret;
}
priv->cpt_inst_w7 = 0;
- set_asym_session_private_data(sess, dev->driver_id, priv);
return 0;
}
@@ -412,11 +401,10 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
CPT_PMD_INIT_FUNC_TRACE();
- priv = get_asym_session_private_data(sess, dev->driver_id);
+ priv = get_asym_session_private_data(sess);
if (priv == NULL)
return;
@@ -424,9 +412,6 @@ otx_cpt_asym_session_clear(struct rte_cryptodev *dev,
/* Free resources allocated during session configure */
cpt_free_asym_session_parameters(priv);
memset(priv, 0, otx_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
}
static __rte_always_inline void * __rte_hot
@@ -471,8 +456,7 @@ otx_cpt_enq_single_asym(struct cpt_instance *instance,
return NULL;
}
- sess = get_asym_session_private_data(asym_op->session,
- otx_cryptodev_driver_id);
+ sess = get_asym_session_private_data(asym_op->session);
/* Store phys_addr of the mdata to meta_buf */
params.meta_buf = rte_mempool_virt2iova(mdata);
@@ -852,8 +836,7 @@ otx_cpt_asym_post_process(struct rte_crypto_op *cop,
struct rte_crypto_asym_op *op = cop->asym;
struct cpt_asym_sess_misc *sess;
- sess = get_asym_session_private_data(op->session,
- otx_cryptodev_driver_id);
+ sess = get_asym_session_private_data(op->session);
switch (sess->xfrm_type) {
case RTE_CRYPTO_ASYM_XFORM_RSA:
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 5794ed8159..1e7e5f6849 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -747,10 +747,7 @@ get_session(struct openssl_qp *qp, struct rte_crypto_op *op)
cryptodev_driver_id);
} else {
if (likely(op->asym->session != NULL))
- asym_sess = (struct openssl_asym_session *)
- get_asym_session_private_data(
- op->asym->session,
- cryptodev_driver_id);
+ asym_sess = get_asym_session_private_data(op->asym->session);
if (asym_sess == NULL)
op->status =
RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 52715f86f8..061fdd2837 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -1120,7 +1120,7 @@ static int
openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_mempool *mempool __rte_unused)
{
void *asym_sess_private_data;
int ret;
@@ -1130,25 +1130,14 @@ openssl_pmd_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
return -EINVAL;
}
- if (rte_mempool_get(mempool, &asym_sess_private_data)) {
- CDEV_LOG_ERR(
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
+ asym_sess_private_data = get_asym_session_private_data(sess);
ret = openssl_set_asym_session_parameters(asym_sess_private_data,
xform);
if (ret != 0) {
OPENSSL_LOG(ERR, "failed configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(mempool, asym_sess_private_data);
return ret;
}
- set_asym_session_private_data(sess, dev->driver_id,
- asym_sess_private_data);
-
return 0;
}
@@ -1206,19 +1195,15 @@ static void openssl_reset_asym_session(struct openssl_asym_session *sess)
* so it doesn't leave key material behind
*/
static void
-openssl_pmd_asym_session_clear(struct rte_cryptodev *dev,
+openssl_pmd_asym_session_clear(struct rte_cryptodev *dev __rte_unused,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = get_asym_session_private_data(sess);
/* Zero out the whole structure */
if (sess_priv) {
openssl_reset_asym_session(sess_priv);
memset(sess_priv, 0, sizeof(struct openssl_asym_session));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
}
}
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 09d8761c5f..32d3f01a18 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -491,9 +491,7 @@ qat_asym_build_request(void *in_op,
op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- ctx = (struct qat_asym_session *)
- get_asym_session_private_data(
- op->asym->session, qat_asym_driver_id);
+ ctx = get_asym_session_private_data(op->asym->session);
if (unlikely(ctx == NULL)) {
QAT_LOG(ERR, "Session has not been created for this device");
goto error;
@@ -711,8 +709,7 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
if (rx_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- ctx = (struct qat_asym_session *)get_asym_session_private_data(
- rx_op->asym->session, qat_asym_driver_id);
+ ctx = get_asym_session_private_data(rx_op->asym->session);
qat_asym_collect_response(rx_op, cookie, ctx->xform);
} else if (rx_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS) {
qat_asym_collect_response(rx_op, cookie, rx_op->asym->xform);
@@ -726,61 +723,43 @@ qat_asym_process_response(void **op, uint8_t *resp,
}
int
-qat_asym_session_configure(struct rte_cryptodev *dev,
+qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
struct rte_crypto_asym_xform *xform,
struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *mempool)
+ struct rte_mempool *mempool __rte_unused)
{
- int err = 0;
- void *sess_private_data;
struct qat_asym_session *session;
- if (rte_mempool_get(mempool, &sess_private_data)) {
- QAT_LOG(ERR,
- "Couldn't get object from session mempool");
- return -ENOMEM;
- }
-
- session = sess_private_data;
+ session = get_asym_session_private_data(sess);
if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODEX) {
if (xform->modex.exponent.length == 0 ||
xform->modex.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod exp input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_MODINV) {
if (xform->modinv.modulus.length == 0) {
QAT_LOG(ERR, "Invalid mod inv input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type == RTE_CRYPTO_ASYM_XFORM_RSA) {
if (xform->rsa.n.length == 0) {
QAT_LOG(ERR, "Invalid rsa input parameter");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
} else if (xform->xform_type >= RTE_CRYPTO_ASYM_XFORM_TYPE_LIST_END
|| xform->xform_type <= RTE_CRYPTO_ASYM_XFORM_NONE) {
QAT_LOG(ERR, "Invalid asymmetric crypto xform");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
} else {
QAT_LOG(ERR, "Asymmetric crypto xform not implemented");
- err = -EINVAL;
- goto error;
+ return -EINVAL;
}
session->xform = xform;
- qat_asym_build_req_tmpl(sess_private_data);
- set_asym_session_private_data(sess, dev->driver_id,
- sess_private_data);
+ qat_asym_build_req_tmpl(session);
return 0;
-error:
- rte_mempool_put(mempool, sess_private_data);
- return err;
}
unsigned int qat_asym_session_get_private_size(
@@ -793,15 +772,9 @@ void
qat_asym_session_clear(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess)
{
- uint8_t index = dev->driver_id;
- void *sess_priv = get_asym_session_private_data(sess, index);
+ void *sess_priv = get_asym_session_private_data(sess);
struct qat_asym_session *s = (struct qat_asym_session *)sess_priv;
- if (sess_priv) {
+ if (sess_priv)
memset(s, 0, qat_asym_session_get_private_size(dev));
- struct rte_mempool *sess_mp = rte_mempool_from_obj(sess_priv);
-
- set_asym_session_private_data(sess, index, NULL);
- rte_mempool_put(sess_mp, sess_priv);
- }
}
diff --git a/lib/cryptodev/cryptodev_pmd.h b/lib/cryptodev/cryptodev_pmd.h
index b9146f652c..474d447496 100644
--- a/lib/cryptodev/cryptodev_pmd.h
+++ b/lib/cryptodev/cryptodev_pmd.h
@@ -340,12 +340,12 @@ typedef int (*cryptodev_asym_configure_session_t)(struct rte_cryptodev *dev,
typedef void (*cryptodev_sym_free_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_sym_session *sess);
/**
- * Free asymmetric session private data.
+ * Clear asymmetric session private data.
*
* @param dev Crypto device pointer
* @param sess Cryptodev session structure
*/
-typedef void (*cryptodev_asym_free_session_t)(struct rte_cryptodev *dev,
+typedef void (*cryptodev_asym_clear_session_t)(struct rte_cryptodev *dev,
struct rte_cryptodev_asym_session *sess);
/**
* Perform actual crypto processing (encrypt/digest or auth/decrypt)
@@ -429,7 +429,7 @@ struct rte_cryptodev_ops {
/**< Configure asymmetric Crypto session. */
cryptodev_sym_free_session_t sym_session_clear;
/**< Clear a Crypto sessions private data. */
- cryptodev_asym_free_session_t asym_session_clear;
+ cryptodev_asym_clear_session_t asym_session_clear;
/**< Clear a Crypto sessions private data. */
union {
cryptodev_sym_cpu_crypto_process_t sym_cpu_process;
@@ -628,16 +628,9 @@ set_sym_session_private_data(struct rte_cryptodev_sym_session *sess,
}
static inline void *
-get_asym_session_private_data(const struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id) {
- return sess->sess_private_data[driver_id];
-}
-
-static inline void
-set_asym_session_private_data(struct rte_cryptodev_asym_session *sess,
- uint8_t driver_id, void *private_data)
+get_asym_session_private_data(struct rte_cryptodev_asym_session *sess)
{
- sess->sess_private_data[driver_id] = private_data;
+ return sess->sess_private_data;
}
#endif /* _CRYPTODEV_PMD_H_ */
diff --git a/lib/cryptodev/cryptodev_trace_points.c b/lib/cryptodev/cryptodev_trace_points.c
index 5d58951fd5..d23b30edd8 100644
--- a/lib/cryptodev/cryptodev_trace_points.c
+++ b/lib/cryptodev/cryptodev_trace_points.c
@@ -24,6 +24,9 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_queue_pair_setup,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_pool_create,
lib.cryptodev.sym.pool.create)
+RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_pool_create,
+ lib.cryptodev.asym.pool.create)
+
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_create,
lib.cryptodev.sym.create)
@@ -39,9 +42,6 @@ RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_free,
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_init,
lib.cryptodev.sym.init)
-RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_asym_session_init,
- lib.cryptodev.asym.init)
-
RTE_TRACE_POINT_REGISTER(rte_cryptodev_trace_sym_session_clear,
lib.cryptodev.sym.clear)
diff --git a/lib/cryptodev/rte_cryptodev.c b/lib/cryptodev/rte_cryptodev.c
index a40536c5ea..d260f79bbc 100644
--- a/lib/cryptodev/rte_cryptodev.c
+++ b/lib/cryptodev/rte_cryptodev.c
@@ -195,7 +195,7 @@ const char *rte_crypto_asym_op_strings[] = {
};
/**
- * The private data structure stored in the session mempool private data.
+ * The private data structure stored in the sym session mempool private data.
*/
struct rte_cryptodev_sym_session_pool_private_data {
uint16_t nb_drivers;
@@ -204,6 +204,14 @@ struct rte_cryptodev_sym_session_pool_private_data {
/**< session user data will be placed after sess_data */
};
+/**
+ * The private data structure stored in the asym session mempool private data.
+ */
+struct rte_cryptodev_asym_session_pool_private_data {
+ uint16_t max_priv_session_sz;
+ /**< Size of private session data used when creating mempool */
+};
+
int
rte_cryptodev_get_cipher_algo_enum(enum rte_crypto_cipher_algorithm *algo_enum,
const char *algo_string)
@@ -1751,47 +1759,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
return 0;
}
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mp)
-{
- struct rte_cryptodev *dev;
- uint8_t index;
- int ret;
-
- if (!rte_cryptodev_is_valid_dev(dev_id)) {
- CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
- return -EINVAL;
- }
-
- dev = rte_cryptodev_pmd_get_dev(dev_id);
-
- if (sess == NULL || xforms == NULL || dev == NULL)
- return -EINVAL;
-
- index = dev->driver_id;
-
- RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure,
- -ENOTSUP);
-
- if (sess->sess_private_data[index] == NULL) {
- ret = dev->dev_ops->asym_session_configure(dev,
- xforms,
- sess, mp);
- if (ret < 0) {
- CDEV_LOG_ERR(
- "dev_id %d failed to configure session details",
- dev_id);
- return ret;
- }
- }
-
- rte_cryptodev_trace_asym_session_init(dev_id, sess, xforms, mp);
- return 0;
-}
-
struct rte_mempool *
rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t user_data_size,
@@ -1834,6 +1801,53 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
return mp;
}
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id)
+{
+ struct rte_mempool *mp;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ uint32_t obj_sz, obj_sz_aligned;
+ uint8_t dev_id, priv_sz, max_priv_sz = 0;
+
+ for (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++)
+ if (rte_cryptodev_is_valid_dev(dev_id)) {
+ priv_sz = rte_cryptodev_asym_get_private_session_size(dev_id);
+ if (priv_sz > max_priv_sz)
+ max_priv_sz = priv_sz;
+ }
+ if (max_priv_sz == 0) {
+ CDEV_LOG_INFO("Could not set max private session size\n");
+ return NULL;
+ }
+
+ obj_sz = rte_cryptodev_asym_get_header_session_size() + max_priv_sz;
+ obj_sz_aligned = RTE_ALIGN_CEIL(obj_sz, RTE_CACHE_LINE_SIZE);
+
+ mp = rte_mempool_create(name, nb_elts, obj_sz_aligned, cache_size,
+ (uint32_t)(sizeof(*pool_priv)),
+ NULL, NULL, NULL, NULL,
+ socket_id, 0);
+ if (mp == NULL) {
+ CDEV_LOG_ERR("%s(name=%s) failed, rte_errno=%d\n",
+ __func__, name, rte_errno);
+ return NULL;
+ }
+
+ pool_priv = rte_mempool_get_priv(mp);
+ if (!pool_priv) {
+ CDEV_LOG_ERR("%s(name=%s) failed to get private data\n",
+ __func__, name);
+ rte_mempool_free(mp);
+ return NULL;
+ }
+ pool_priv->max_priv_session_sz = max_priv_sz;
+
+ rte_cryptodev_trace_asym_session_pool_create(name, nb_elts,
+ cache_size, mp);
+ return mp;
+}
+
static unsigned int
rte_cryptodev_sym_session_data_size(struct rte_cryptodev_sym_session *sess)
{
@@ -1895,19 +1909,43 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mp)
}
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mp)
+rte_cryptodev_asym_session_create(struct rte_mempool *mp, uint8_t dev_id,
+ struct rte_crypto_asym_xform *xforms)
{
struct rte_cryptodev_asym_session *sess;
- unsigned int session_size =
+ uint32_t session_priv_data_sz;
+ struct rte_cryptodev_asym_session_pool_private_data *pool_priv;
+ unsigned int session_header_size =
rte_cryptodev_asym_get_header_session_size();
+ struct rte_cryptodev *dev;
+ int ret;
+
+ if (!rte_cryptodev_is_valid_dev(dev_id)) {
+ CDEV_LOG_ERR("Invalid dev_id=%" PRIu8, dev_id);
+ return NULL;
+ }
+ session_priv_data_sz = rte_cryptodev_asym_get_private_session_size(
+ dev_id);
+ dev = rte_cryptodev_pmd_get_dev(dev_id);
+
+ if (dev == NULL)
+ return NULL;
if (!mp) {
CDEV_LOG_ERR("invalid mempool\n");
return NULL;
}
+ pool_priv = rte_mempool_get_priv(mp);
+
+ if (pool_priv->max_priv_session_sz < session_priv_data_sz) {
+ CDEV_LOG_DEBUG(
+ "The private session data size used when creating the mempool is smaller than this device's private session data.");
+ return NULL;
+ }
+
/* Verify if provided mempool can hold elements big enough. */
- if (mp->elt_size < session_size) {
+ if (mp->elt_size < session_header_size + session_priv_data_sz) {
CDEV_LOG_ERR(
"mempool elements too small to hold session objects");
return NULL;
@@ -1919,10 +1957,27 @@ rte_cryptodev_asym_session_create(struct rte_mempool *mp)
return NULL;
}
+ sess->driver_id = dev->driver_id;
+ sess->max_priv_session_sz = pool_priv->max_priv_session_sz;
+
/* Clear device session pointer.
* Include the flag indicating presence of private data
*/
- memset(sess, 0, session_size);
+ memset(sess->sess_private_data, 0, session_priv_data_sz);
+
+ RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->asym_session_configure, NULL);
+
+ if (sess->sess_private_data[0] == 0) {
+ ret = dev->dev_ops->asym_session_configure(dev,
+ xforms,
+ sess, mp);
+ if (ret < 0) {
+ CDEV_LOG_ERR(
+ "dev_id %d failed to configure session details",
+ dev_id);
+ return NULL;
+ }
+ }
rte_cryptodev_trace_asym_session_create(mp, sess);
return sess;
@@ -2009,20 +2064,11 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess)
int
rte_cryptodev_asym_session_free(struct rte_cryptodev_asym_session *sess)
{
- uint8_t i;
- void *sess_priv;
struct rte_mempool *sess_mp;
if (sess == NULL)
return -EINVAL;
- /* Check that all device private data has been freed */
- for (i = 0; i < nb_drivers; i++) {
- sess_priv = get_asym_session_private_data(sess, i);
- if (sess_priv != NULL)
- return -EBUSY;
- }
-
/* Return session to mempool */
sess_mp = rte_mempool_from_obj(sess);
rte_mempool_put(sess_mp, sess);
@@ -2061,12 +2107,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
unsigned int
rte_cryptodev_asym_get_header_session_size(void)
{
- /*
- * Header contains pointers to the private data
- * of all registered drivers, and a flag which
- * indicates presence of private data
- */
- return ((sizeof(void *) * nb_drivers) + sizeof(uint8_t));
+ return sizeof(struct rte_cryptodev_asym_session);
}
unsigned int
@@ -2092,7 +2133,6 @@ unsigned int
rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
{
struct rte_cryptodev *dev;
- unsigned int header_size = sizeof(void *) * nb_drivers;
unsigned int priv_sess_size;
if (!rte_cryptodev_is_valid_dev(dev_id))
@@ -2104,11 +2144,8 @@ rte_cryptodev_asym_get_private_session_size(uint8_t dev_id)
return 0;
priv_sess_size = (*dev->dev_ops->asym_session_get_size)(dev);
- if (priv_sess_size < header_size)
- return header_size;
return priv_sess_size;
-
}
int
diff --git a/lib/cryptodev/rte_cryptodev.h b/lib/cryptodev/rte_cryptodev.h
index 59ea5a54df..8bd85f1575 100644
--- a/lib/cryptodev/rte_cryptodev.h
+++ b/lib/cryptodev/rte_cryptodev.h
@@ -919,9 +919,13 @@ struct rte_cryptodev_sym_session {
};
/** Cryptodev asymmetric crypto session */
-struct rte_cryptodev_asym_session {
- __extension__ void *sess_private_data[0];
- /**< Private asymmetric session material */
+__extension__ struct rte_cryptodev_asym_session {
+ uint8_t driver_id;
+ /**< Session driver ID. */
+ uint16_t max_priv_session_sz;
+ /**< Size of private session data used when creating mempool */
+ uint8_t padding[5];
+ uint8_t sess_private_data[0];
};
/**
@@ -956,6 +960,29 @@ rte_cryptodev_sym_session_pool_create(const char *name, uint32_t nb_elts,
uint32_t elt_size, uint32_t cache_size, uint16_t priv_size,
int socket_id);
+/**
+ * Create an asymmetric session mempool.
+ *
+ * @param name
+ * The unique mempool name.
+ * @param nb_elts
+ * The number of elements in the mempool.
+ * @param cache_size
+ * The number of per-lcore cache elements
+ * @param socket_id
+ * The *socket_id* argument is the socket identifier in the case of
+ * NUMA. The value can be *SOCKET_ID_ANY* if there is no NUMA
+ * constraint for the reserved zone.
+ *
+ * @return
+ * - On success return mempool
+ * - On failure returns NULL
+ */
+__rte_experimental
+struct rte_mempool *
+rte_cryptodev_asym_session_pool_create(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, int socket_id);
+
/**
* Create symmetric crypto session header (generic with no private data)
*
@@ -973,13 +1000,17 @@ rte_cryptodev_sym_session_create(struct rte_mempool *mempool);
*
* @param mempool mempool to allocate asymmetric session
* objects from
+ * @param dev_id ID of device that we want the session to be used on
+ * @param xforms Asymmetric crypto transform operations to apply on flow
+ * processed with this session
* @return
* - On success return pointer to asym-session
* - On failure returns NULL
*/
__rte_experimental
struct rte_cryptodev_asym_session *
-rte_cryptodev_asym_session_create(struct rte_mempool *mempool);
+rte_cryptodev_asym_session_create(struct rte_mempool *mempool,
+ uint8_t dev_id, struct rte_crypto_asym_xform *xforms);
/**
* Frees symmetric crypto session header, after checking that all
@@ -997,8 +1028,7 @@ int
rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess);
/**
- * Frees asymmetric crypto session header, after checking that all
- * the device private data has been freed, returning it
+ * Frees asymmetric crypto session header, returning it
* to its original mempool.
*
* @param sess Session header to be freed.
@@ -1006,7 +1036,6 @@ rte_cryptodev_sym_session_free(struct rte_cryptodev_sym_session *sess);
* @return
* - 0 if successful.
* - -EINVAL if session is NULL.
- * - -EBUSY if not all device private data has been freed.
*/
__rte_experimental
int
@@ -1034,28 +1063,6 @@ rte_cryptodev_sym_session_init(uint8_t dev_id,
struct rte_crypto_sym_xform *xforms,
struct rte_mempool *mempool);
-/**
- * Initialize asymmetric session on a device with specific asymmetric xform
- *
- * @param dev_id ID of device that we want the session to be used on
- * @param sess Session to be set up on a device
- * @param xforms Asymmetric crypto transform operations to apply on flow
- * processed with this session
- * @param mempool Mempool to be used for internal allocation.
- *
- * @return
- * - On success, zero.
- * - -EINVAL if input parameters are invalid.
- * - -ENOTSUP if crypto device does not support the crypto transform.
- * - -ENOMEM if the private session could not be allocated.
- */
-__rte_experimental
-int
-rte_cryptodev_asym_session_init(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess,
- struct rte_crypto_asym_xform *xforms,
- struct rte_mempool *mempool);
-
/**
* Frees private data for the device id, based on its device type,
* returning it to its mempool. It is the application's responsibility
@@ -1075,11 +1082,10 @@ rte_cryptodev_sym_session_clear(uint8_t dev_id,
struct rte_cryptodev_sym_session *sess);
/**
- * Frees resources held by asymmetric session during rte_cryptodev_session_init
+ * Clear private data held by asymmetric session.
*
* @param dev_id ID of device that uses the asymmetric session.
- * @param sess Asymmetric session setup on device using
- * rte_cryptodev_session_init
+ * @param sess Asymmetric session setup on device.
* @return
* - 0 if successful.
* - -EINVAL if device is invalid or session is NULL.
@@ -1116,7 +1122,7 @@ rte_cryptodev_sym_get_existing_header_session_size(
struct rte_cryptodev_sym_session *sess);
/**
- * Get the size of the asymmetric session header, for all registered drivers.
+ * Get the size of the asymmetric session header.
*
* @return
* Size of the asymmetric header session.
diff --git a/lib/cryptodev/rte_cryptodev_trace.h b/lib/cryptodev/rte_cryptodev_trace.h
index d1f4f069a3..befbaf7f44 100644
--- a/lib/cryptodev/rte_cryptodev_trace.h
+++ b/lib/cryptodev/rte_cryptodev_trace.h
@@ -83,6 +83,16 @@ RTE_TRACE_POINT(
rte_trace_point_emit_u16(sess->user_data_sz);
)
+RTE_TRACE_POINT(
+ rte_cryptodev_trace_asym_session_pool_create,
+ RTE_TRACE_POINT_ARGS(const char *name, uint32_t nb_elts,
+ uint32_t cache_size, void *mempool),
+ rte_trace_point_emit_string(name);
+ rte_trace_point_emit_u32(nb_elts);
+ rte_trace_point_emit_u32(cache_size);
+ rte_trace_point_emit_ptr(mempool);
+)
+
RTE_TRACE_POINT(
rte_cryptodev_trace_asym_session_create,
RTE_TRACE_POINT_ARGS(void *mempool,
@@ -117,17 +127,6 @@ RTE_TRACE_POINT(
rte_trace_point_emit_ptr(mempool);
)
-RTE_TRACE_POINT(
- rte_cryptodev_trace_asym_session_init,
- RTE_TRACE_POINT_ARGS(uint8_t dev_id,
- struct rte_cryptodev_asym_session *sess, void *xforms,
- void *mempool),
- rte_trace_point_emit_u8(dev_id);
- rte_trace_point_emit_ptr(sess);
- rte_trace_point_emit_ptr(xforms);
- rte_trace_point_emit_ptr(mempool);
-)
-
RTE_TRACE_POINT(
rte_cryptodev_trace_sym_session_clear,
RTE_TRACE_POINT_ARGS(uint8_t dev_id, void *sess),
diff --git a/lib/cryptodev/version.map b/lib/cryptodev/version.map
index c50745fa8c..eaea976f21 100644
--- a/lib/cryptodev/version.map
+++ b/lib/cryptodev/version.map
@@ -58,7 +58,6 @@ EXPERIMENTAL {
rte_cryptodev_asym_session_clear;
rte_cryptodev_asym_session_create;
rte_cryptodev_asym_session_free;
- rte_cryptodev_asym_session_init;
rte_cryptodev_asym_xform_capability_check_modlen;
rte_cryptodev_asym_xform_capability_check_optype;
rte_cryptodev_sym_cpu_crypto_process;
@@ -81,7 +80,6 @@ EXPERIMENTAL {
__rte_cryptodev_trace_sym_session_free;
__rte_cryptodev_trace_asym_session_free;
__rte_cryptodev_trace_sym_session_init;
- __rte_cryptodev_trace_asym_session_init;
__rte_cryptodev_trace_sym_session_clear;
__rte_cryptodev_trace_asym_session_clear;
__rte_cryptodev_trace_dequeue_burst;
@@ -104,6 +102,9 @@ EXPERIMENTAL {
rte_cryptodev_remove_deq_callback;
rte_cryptodev_remove_enq_callback;
+ # added 22.03
+ rte_cryptodev_asym_session_pool_create;
+ __rte_cryptodev_trace_asym_session_pool_create;
};
INTERNAL {
--
2.25.1
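For readers tracking the API change: after this patch, asymmetric session setup collapses from the old create-plus-init pair into a single create call, and the mempool comes from the new dedicated helper. A minimal usage sketch against the signatures in this patch (error handling trimmed; assumes a valid, asym-capable dev_id and a filled-in xform, e.g. in an application's init path):
    struct rte_crypto_asym_xform xform = {0}; /* fill in e.g. modexp params */
    struct rte_cryptodev_asym_session *sess;
    struct rte_mempool *pool;
    uint8_t dev_id = 0; /* assumed valid asym-capable device */
    /* Pool elements are sized from the session header plus the largest
     * private session size across all probed devices. */
    pool = rte_cryptodev_asym_session_pool_create("asym_sess_pool",
            1024 /* nb_elts */, 128 /* cache_size */, SOCKET_ID_ANY);
    if (pool == NULL)
            return -1;
    /* Create and configure the session for dev_id in one step; the
     * former rte_cryptodev_asym_session_init() call is gone. */
    sess = rte_cryptodev_asym_session_create(pool, dev_id, &xform);
    if (sess == NULL)
            return -1;
    /* ... attach 'sess' to asymmetric rte_crypto_ops and enqueue/dequeue ... */
    rte_cryptodev_asym_session_free(sess);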
^ permalink raw reply [relevance 1%]
* RE: [PATCH v6 20/50] pdump: remove unneeded header includes
2022-02-02 16:00 3% ` Bruce Richardson
@ 2022-02-02 16:45 0% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2022-02-02 16:45 UTC (permalink / raw)
To: Bruce Richardson, Stephen Hemminger; +Cc: Sean Morrissey, Reshma Pattan, dev
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Wednesday, 2 February 2022 17.01
>
> On Wed, Feb 02, 2022 at 07:54:58AM -0800, Stephen Hemminger wrote:
> > On Wed, 2 Feb 2022 09:47:32 +0000
> > Sean Morrissey <sean.morrissey@intel.com> wrote:
> >
> > > These header includes have been flagged by the iwyu_tool
> > > and removed.
> > >
> > > Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
> > > ---
[...]
> >
> > > diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
> > > index 6efa0274f2..41c4b7800b 100644
> > > --- a/lib/pdump/rte_pdump.h
> > > +++ b/lib/pdump/rte_pdump.h
> > > @@ -13,8 +13,6 @@
> > > */
> > >
> > > #include <stdint.h>
> > > -#include <rte_mempool.h>
> > > -#include <rte_ring.h>
> > > #include <rte_bpf.h>
> > >
> > > #ifdef __cplusplus
> >
> > This header does use rte_mempool and rte_ring in rte_pdump_enable().
> > Not sure why IWYU thinks they should be removed.
>
> Because they are only used as pointer types, not as structures themselves.
> Normally in cases like this, I would put in just "struct rte_mempool;" at
> the top of the file rather than including a whole header just for one
> structure.
I don't think we should introduce such a hack!
If a module uses something from a library, it makes sense to include the header file for the library.
Putting in "struct rte_mempool;" is essentially copy-pasting from the library, although only a structure. What happens if the type changes or disappears, or depends on some #ifdef? It could have one type in some cases and another type in other cases - e.g. the atomic counters in the mbuf once had different types, depending on compile time flags. The copy-pasted code would not get fixed if the type evolved over time.
If only using one function from a library, you probably wouldn't copy the function prototype instead of including the library header file.
Let's focus on the speed of compiled DPDK code, not the speed of compiling DPDK code. Code readability and a lower probability of introducing bugs are far more important than compilation time!
Cleaning up code is also worthwhile, so the iwyu_tool initiative is still a good one.
>
> > Since this is an API header, changing it here risks breaking an
> > application.
> >
> Good point. Should we avoid removing headers from public headers in case of
> application breakage? It's safer, but it means that we will likely still be
> including far too many headers in multiple places. If we do remove them, it
> would not be an ABI break, just an API one, of sorts.
The application only breaks at compile time, and it should be easy for the application developer to see what is missing.
I vote for removing unused headers in public files too - not considering it an API breakage.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v6 20/50] pdump: remove unneeded header includes
@ 2022-02-02 16:00 3% ` Bruce Richardson
2022-02-02 16:45 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2022-02-02 16:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Sean Morrissey, Reshma Pattan, dev
On Wed, Feb 02, 2022 at 07:54:58AM -0800, Stephen Hemminger wrote:
> On Wed, 2 Feb 2022 09:47:32 +0000
> Sean Morrissey <sean.morrissey@intel.com> wrote:
>
> > These header includes have been flagged by the iwyu_tool
> > and removed.
> >
> > Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
> > ---
> > lib/pdump/rte_pdump.c | 1 -
> > lib/pdump/rte_pdump.h | 2 --
> > 2 files changed, 3 deletions(-)
> >
> > diff --git a/lib/pdump/rte_pdump.c b/lib/pdump/rte_pdump.c
> > index af450695ec..b3a62df591 100644
> > --- a/lib/pdump/rte_pdump.c
> > +++ b/lib/pdump/rte_pdump.c
> > @@ -2,7 +2,6 @@
> > * Copyright(c) 2016-2018 Intel Corporation
> > */
> >
> > -#include <rte_memcpy.h>
> > #include <rte_mbuf.h>
> > #include <rte_ethdev.h>
> > #include <rte_lcore.h>
>
> Yes, this code doesn't use rte_memcpy so yes, remove it.
>
> > diff --git a/lib/pdump/rte_pdump.h b/lib/pdump/rte_pdump.h
> > index 6efa0274f2..41c4b7800b 100644
> > --- a/lib/pdump/rte_pdump.h
> > +++ b/lib/pdump/rte_pdump.h
> > @@ -13,8 +13,6 @@
> > */
> >
> > #include <stdint.h>
> > -#include <rte_mempool.h>
> > -#include <rte_ring.h>
> > #include <rte_bpf.h>
> >
> > #ifdef __cplusplus
>
> This header does use rte_mempool and rte_ring in rte_pdump_enable().
> Not sure why IWYU thinks they should be removed.
Because they are only used as pointer types, not as structures themselves.
Normally in cases like this, I would put in just "struct rte_mempool;" at
the top of the file rather than including a whole header just for one
structure.
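To make the alternative concrete, the forward-declaration variant would look roughly like this in rte_pdump.h (a sketch; the prototype is abbreviated from the real one):
    /* Forward declarations: enough for a header that only passes
     * these types around as opaque pointers. */
    struct rte_mempool;
    struct rte_ring;
    int rte_pdump_enable(uint16_t port, uint16_t queue, uint32_t flags,
                         struct rte_ring *ring, struct rte_mempool *mp,
                         void *filter);
The full structure definitions are then only pulled in by the translation units that actually dereference them.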
> Since this is an API header, changing it here risks breaking an application.
>
Good point. Should we avoid removing headers from public headers in case of
application breakage? It's safer, but it means that we will likely still be
including far too many headers in multiple places. If we do remove them, it
would not be an ABI break, just an API one, of sorts.
/Bruce
^ permalink raw reply [relevance 3%]
* Re: [EXT] Re: [PATCH v3 4/4] security: add IPsec option for IP reassembly
2022-02-02 9:15 3% ` [EXT] " Akhil Goyal
@ 2022-02-02 14:04 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-02-02 14:04 UTC (permalink / raw)
To: Akhil Goyal, dev, Radu Nicolau, mdr, David Marchand
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, Jerin Jacob Kollanukkaran, stephen
On 2/2/2022 9:15 AM, Akhil Goyal wrote:
>> On 1/30/2022 5:59 PM, Akhil Goyal wrote:
>>> A new option is added in IPsec to enable and attempt reassembly
>>> of inbound packets.
>>>
>>> Signed-off-by: Akhil Goyal <gakhil@marvell.com>
>>> ---
>>> devtools/libabigail.abignore | 14 ++++++++++++++
>>> lib/security/rte_security.h | 12 +++++++++++-
>>
>>
>> +Radu for review
>>
>>> 2 files changed, 25 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
>>> index 90f449c43a..c6e304282f 100644
>>> --- a/devtools/libabigail.abignore
>>> +++ b/devtools/libabigail.abignore
>>> @@ -16,3 +16,17 @@
>>> [suppress_type]
>>> name = rte_eth_dev_info
>>> has_data_member_inserted_between = {offset_of(reserved_64s), end}
>>> +
>>> +; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
>>> +[suppress_type]
>>> + name = rte_ipsec_sa_prm
>>> + name = rte_security_ipsec_sa_options
>>> + has_data_member_inserted_between = {offset_of(reserved_opts), end}
>>> +
>>> +[suppress_type]
>>> + name = rte_security_capability
>>> + has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
>>> +
>>> +[suppress_type]
>>> + name = rte_security_session_conf
>>> + has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
>
> Could not find any better way to suppress the ABI warning.
> Any better idea?
>
+David for it, who knows abigail better.
>>> diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
>>> index 1228b6c8b1..168b837a82 100644
>>> --- a/lib/security/rte_security.h
>>> +++ b/lib/security/rte_security.h
>>> @@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
>>> */
>>> uint32_t l4_csum_enable : 1;
>>>
>>> + /** Enable reassembly on incoming packets.
>>> + *
>>> + * * 1: Enable driver to try reassembly of encrypted IP packets for
>>> + * this SA, if supported by the driver. This feature will work
>>> + * only if rx_offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is set in
>>> + * inline Ethernet device.
>>> + * * 0: Disable reassembly of packets (default).
>>> + */
>>> + uint32_t reass_en : 1;
>>> +
>>> /** Reserved bit fields for future extension
>>> *
>>> * User should ensure reserved_opts is cleared as it may change in
>>> @@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
>>> *
>>> * Note: Reduce number of bits in reserved_opts for every new option.
>>> */
>>> - uint32_t reserved_opts : 18;
>>> + uint32_t reserved_opts : 17;
>>> };
>>>
>>> /** IPSec security association direction */
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
2022-02-01 12:52 3% ` Ferruh Yigit
@ 2022-02-02 11:44 0% ` Ray Kinsella
2022-02-10 22:16 3% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2022-02-02 11:44 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Kalesh A P, dev, ajit.khaparde, asafp, David Marchand,
Thomas Monjalon, Andrew Rybchenko
Ferruh Yigit <ferruh.yigit@intel.com> writes:
> On 1/28/2022 12:48 PM, Kalesh A P wrote:
>> From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
>> Adding support for the device reset and recovery events in the
>> rte_eth_event framework. FW error and FW reset conditions would be
>> managed internally by the PMD without needing application intervention.
>> In such cases, PMD would need reset/recovery events to notify application
>> that PMD is undergoing a reset.
>> While most of the recovery process is transparent to the application since
>> most of the driver ensures recovery from FW reset or FW error conditions,
>> the application will have to reprogram any flows which were offloaded to
>> the underlying hardware.
>> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
>> Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
>> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
>> ---
>> doc/guides/prog_guide/poll_mode_drv.rst | 24 ++++++++++++++++++++++++
>> lib/ethdev/rte_ethdev.h | 18 ++++++++++++++++++
>> 2 files changed, 42 insertions(+)
>> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
>> index 6831289..9ecc0e4 100644
>> --- a/doc/guides/prog_guide/poll_mode_drv.rst
>> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
>> @@ -623,3 +623,27 @@ by application.
>> The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
>> the application to handle reset event. It is duty of application to
>> handle all synchronization before it calls rte_eth_dev_reset().
>> +
>> +Error recovery support
>> +~~~~~~~~~~~~~~~~~~~~~~
>> +
>> +When the PMD detects a FW reset or error condition, it may try to recover
>> +from the error without needing the application intervention. In such cases,
>> +PMD would need events to notify the application that it is undergoing
>> +an error recovery.
>> +
>> +The PMD should trigger RTE_ETH_EVENT_ERR_RECOVERING event to notify the
>> +application that PMD detected a FW reset or FW error condition. PMD may
>> +try to recover from the error by itself. Data path may be quiesced and
>> +control path operations may fail during the recovery period. The application
>> +should stop polling till it receives RTE_ETH_EVENT_RECOVERED event from the PMD.
>> +
>> +The PMD should trigger RTE_ETH_EVENT_RECOVERED event to notify the application
>> +that it has recovered from the error condition. PMD re-configures the port
>> +to the state prior to the error condition. Control path and data path are up now.
>> +Since the device has undergone a reset, flow rules offloaded prior to reset
>> +may be lost and the application should recreate the rules again.
>> +
>> +The PMD should trigger RTE_ETH_EVENT_INTR_RMV event to notify the application
>> +that it has failed to recover from the error condition. The device may not be
>> +usable anymore.
>> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
>> index 147cc1c..a46819f 100644
>> --- a/lib/ethdev/rte_ethdev.h
>> +++ b/lib/ethdev/rte_ethdev.h
>> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
>> RTE_ETH_EVENT_DESTROY, /**< port is released */
>> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
>> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
>> + RTE_ETH_EVENT_ERR_RECOVERING,
>> + /**< port recovering from an error
>> + *
>> + * PMD detected a FW reset or error condition.
>> + * PMD will try to recover from the error.
>> + * Data path may be quiesced and Control path operations
>> + * may fail at this time.
>> + */
>> + RTE_ETH_EVENT_RECOVERED,
>> + /**< port recovered from an error
>> + *
>> + * PMD has recovered from the error condition.
>> + * Control path and Data path are up now.
>> + * PMD re-configures the port to the state prior to the error.
>> + * Since the device has undergone a reset, flow rules
>> + * offloaded prior to reset may be lost and
>> + * the application should recreate the rules again.
>> + */
>> RTE_ETH_EVENT_MAX /**< max value of this enum */
>
>
> Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
> to evaluate if it is a false positive:
>
>
> 1 function with some indirect sub-type change:
> [C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
> parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
> underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
> in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
> parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
> type size hasn't changed
> 2 enumerator insertions:
> 'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
> 'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
> 1 enumerator change:
> 'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
I don't immediately see the problem that this would cause.
There are no array sizes etc dependent on the value of MAX for instance.
Looks safe?
--
Regards, Ray K
^ permalink raw reply [relevance 0%]
* [PATCH v4] mempool: fix mempool cache flushing algorithm
2022-01-19 14:52 3% ` [PATCH v2] mempool: fix put objects to mempool with cache Morten Brørup
2022-01-19 15:03 3% ` [PATCH v3] " Morten Brørup
@ 2022-02-02 10:33 3% ` Morten Brørup
2 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2022-02-02 10:33 UTC (permalink / raw)
To: olivier.matz, andrew.rybchenko
Cc: bruce.richardson, jerinjacobk, dev, Morten Brørup
This patch fixes the rte_mempool_do_generic_put() caching algorithm,
which was fundamentally wrong, causing multiple performance issues when
flushing.
Although the bugs do have serious performance implications when
flushing, the function did not fail when flushing (or otherwise).
Backporting could be considered optional.
The algorithm was:
1. Add the objects to the cache
2. Anything greater than the cache size (if it crosses the cache flush
threshold) is flushed to the ring.
Please note that the description in the source code said that it kept
"cache min value" objects after flushing, but the function actually kept
the cache full after flushing, which the above description reflects.
Now, the algorithm is:
1. If the objects cannot be added to the cache without crossing the
flush threshold, flush the cache to the ring.
2. Add the objects to the cache.
This patch fixes these bugs:
1. The cache was still full after flushing.
In the opposite direction, i.e. when getting objects from the cache, the
cache is refilled to full level when it crosses the low watermark (which
happens to be zero).
Similarly, the cache should be flushed to empty level when it crosses
the high watermark (which happens to be 1.5 x the size of the cache).
The existing flushing behaviour was suboptimal for real applications,
because crossing the low or high watermark typically happens when the
application is in a state where the number of put/get events are out of
balance, e.g. when absorbing a burst of packets into a QoS queue
(getting more mbufs from the mempool), or when a burst of packets is
trickling out from the QoS queue (putting the mbufs back into the
mempool).
Now, the mempool cache is completely flushed when crossing the flush
threshold, so only the newly put (hot) objects remain in the mempool
cache afterwards.
This bug degraded performance caused by too frequent flushing.
Consider this application scenario:
Either, an lcore thread in the application is in a state of balance,
where it uses the mempool cache within its flush/refill boundaries; in
this situation, the flush method is less important, and this fix is
irrelevant.
Or, an lcore thread in the application is out of balance (either
permanently or temporarily), and mostly gets or puts objects from/to the
mempool. If it mostly puts objects, not flushing all of the objects will
cause more frequent flushing. This is the scenario addressed by this
fix. E.g.:
Cache size=256, flushthresh=384 (1.5x size), initial len=256;
application burst len=32.
If there are "size" objects in the cache after flushing, the cache is
flushed at every 4th burst.
If the cache is flushed completely, the cache is only flushed at every
16th burst.
As you can see, this bug caused the cache to be flushed 4x too
frequently in this example.
And when/if the application thread breaks its pattern of continuously
putting objects, and suddenly starts to get objects instead, it will
either get objects already in the cache, or the get() function will
refill the cache.
The concept of not flushing the cache completely was probably based on
an assumption that it is more likely for an application's lcore thread
to get() after flushing than to put() after flushing.
I strongly disagree with this assumption! If an application thread is
continuously putting so much that it overflows the cache, it is much
more likely to keep putting than it is to start getting. If in doubt,
consider how CPU branch predictors work: When the application has done
something many times consecutively, the branch predictor will expect the
application to do the same again, rather than suddenly do something
else.
Also, if you consider the description of the algorithm in the source
code, and agree that "cache min value" cannot mean "cache size", the
function did not behave as intended. This in itself is a bug.
2. The flush threshold comparison was off by one.
It must be "len > flushthresh", not "len >= flushthresh".
Consider a flush multiplier of 1 instead of 1.5; the cache would be
flushed already when reaching size objects, not when exceeding size
objects. In other words, the cache would not be able to hold "size"
objects, which is clearly a bug.
Now, flushing is triggered when the flush threshold is exceeded, not
when reached.
This bug degraded performance due to premature flushing. In my example
above, this bug caused flushing every 3rd burst instead of every 4th.
3. The most recent (hot) objects were flushed, leaving the oldest (cold)
objects in the mempool cache.
This bug degraded performance, because flushing prevented immediate
reuse of the (hot) objects already in the CPU cache.
Now, the existing (cold) objects in the mempool cache are flushed before
the new (hot) objects are added to the mempool cache.
4. With RTE_LIBRTE_MEMPOOL_DEBUG defined, the return value of
rte_mempool_ops_enqueue_bulk() was not checked when flushing the cache.
Now, it is checked in both locations where used; and obviously still
only if RTE_LIBRTE_MEMPOOL_DEBUG is defined.
v2 changes:
- Not adding the new objects to the mempool cache before flushing it
also allows the memory allocated for the mempool cache to be reduced
from 3 x to 2 x RTE_MEMPOOL_CACHE_MAX_SIZE.
However, such a change would break the ABI, so it was removed in v2.
- The mempool cache should be cache line aligned for the benefit of the
copying method, which on some CPU architectures performs worse on data
crossing a cache boundary.
However, such a change would break the ABI, so it was removed in v2;
and yet another alternative copying method replaced the rte_memcpy().
v3 changes:
- Actually remove my modifications of the rte_mempool_cache structure.
v4 changes:
- Updated patch title to reflect that the scope of the patch is only
mempool cache flushing.
- Do not replace rte_memcpy() with alternative copying method. This was
a pure optimization, not a fix.
- Elaborate even more on the bugs fixed by the modifications.
- Added 4th bullet item to the patch description, regarding
rte_mempool_ops_enqueue_bulk() with RTE_LIBRTE_MEMPOOL_DEBUG.
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/mempool/rte_mempool.h | 34 ++++++++++++++++++++++------------
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1e7a3c1527..e7e09e48fc 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1344,31 +1344,41 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
goto ring_enqueue;
- cache_objs = &cache->objs[cache->len];
+ /* If the request itself is too big for the cache */
+ if (unlikely(n > cache->flushthresh))
+ goto ring_enqueue;
/*
* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
+ * 1. If the objects cannot be added to the cache without
+ * crossing the flush threshold, flush the cache to the ring.
+ * 2. Add the objects to the cache.
*/
- /* Add elements back into the cache */
- rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
+ if (cache->len + n <= cache->flushthresh) {
+ cache_objs = &cache->objs[cache->len];
- cache->len += n;
+ cache->len += n;
+ } else {
+ cache_objs = &cache->objs[0];
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ if (rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len) < 0)
+ rte_panic("cannot put objects in mempool\n");
+#else
+ rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
+#endif
+ cache->len = n;
}
+ /* Add the objects to the cache. */
+ rte_memcpy(cache_objs, obj_table, sizeof(void *) * n);
+
return;
ring_enqueue:
- /* push remaining objects in ring */
+ /* Put the objects into the ring */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
rte_panic("cannot put objects in mempool\n");
--
2.17.1
^ permalink raw reply [relevance 3%]
* RE: [EXT] Re: [PATCH v3 4/4] security: add IPsec option for IP reassembly
@ 2022-02-02 9:15 3% ` Akhil Goyal
2022-02-02 14:04 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-02-02 9:15 UTC (permalink / raw)
To: Ferruh Yigit, dev, Radu Nicolau, mdr
Cc: Anoob Joseph, matan, konstantin.ananyev, thomas,
andrew.rybchenko, rosen.xu, Jerin Jacob Kollanukkaran, stephen
> On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> > A new option is added in IPsec to enable and attempt reassembly
> > of inbound packets.
> >
> > Signed-off-by: Akhil Goyal <gakhil@marvell.com>
> > ---
> > devtools/libabigail.abignore | 14 ++++++++++++++
> > lib/security/rte_security.h | 12 +++++++++++-
>
>
> +Radu for review
>
> > 2 files changed, 25 insertions(+), 1 deletion(-)
> >
> > diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore
> > index 90f449c43a..c6e304282f 100644
> > --- a/devtools/libabigail.abignore
> > +++ b/devtools/libabigail.abignore
> > @@ -16,3 +16,17 @@
> > [suppress_type]
> > name = rte_eth_dev_info
> > has_data_member_inserted_between = {offset_of(reserved_64s), end}
> > +
> > +; Ignore fields inserted in place of reserved_opts of rte_security_ipsec_sa_options
> > +[suppress_type]
> > + name = rte_ipsec_sa_prm
> > + name = rte_security_ipsec_sa_options
> > + has_data_member_inserted_between = {offset_of(reserved_opts), end}
> > +
> > +[suppress_type]
> > + name = rte_security_capability
> > + has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
> > +
> > +[suppress_type]
> > + name = rte_security_session_conf
> > + has_data_member_inserted_between = {offset_of(reserved_opts), (offset_of(reserved_opts) + 18)}
Could not find any better way to suppress the ABI warning.
Any better idea?
> > diff --git a/lib/security/rte_security.h b/lib/security/rte_security.h
> > index 1228b6c8b1..168b837a82 100644
> > --- a/lib/security/rte_security.h
> > +++ b/lib/security/rte_security.h
> > @@ -264,6 +264,16 @@ struct rte_security_ipsec_sa_options {
> > */
> > uint32_t l4_csum_enable : 1;
> >
> > + /** Enable reassembly on incoming packets.
> > + *
> > + * * 1: Enable driver to try reassembly of encrypted IP packets for
> > + * this SA, if supported by the driver. This feature will work
> > + * only if rx_offload RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY is set in
> > + * inline Ethernet device.
> > + * * 0: Disable reassembly of packets (default).
> > + */
> > + uint32_t reass_en : 1;
> > +
> > /** Reserved bit fields for future extension
> > *
> > * User should ensure reserved_opts is cleared as it may change in
> > @@ -271,7 +281,7 @@ struct rte_security_ipsec_sa_options {
> > *
> > * Note: Reduce number of bits in reserved_opts for every new option.
> > */
> > - uint32_t reserved_opts : 18;
> > + uint32_t reserved_opts : 17;
> > };
> >
> > /** IPSec security association direction */
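For context, wiring the proposed pieces together would look roughly like this (names as posted in this series, not a final API):
    /* Request IP reassembly on RX for the inline Ethernet device. */
    struct rte_eth_conf port_conf = {0};
    port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY;
    /* Opt a specific inline IPsec SA in to reassembly; reserved_opts
     * must stay cleared, per the comment in the patch. */
    struct rte_security_ipsec_sa_options opts = {0};
    opts.reass_en = 1;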
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3 0/4] ethdev: introduce IP reassembly offload
2022-01-30 17:59 4% ` [PATCH v3 0/4] " Akhil Goyal
@ 2022-02-01 14:10 0% ` Ferruh Yigit
2022-02-04 22:13 4% ` [PATCH v4 0/3] " Akhil Goyal
2 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-02-01 14:10 UTC (permalink / raw)
To: Akhil Goyal, dev
Cc: anoobj, matan, konstantin.ananyev, thomas, andrew.rybchenko,
rosen.xu, jerinj, stephen, mdr
On 1/30/2022 5:59 PM, Akhil Goyal wrote:
> As discussed in the RFC[1] sent in 21.11, a new offload is
> introduced in ethdev for IP reassembly.
>
> This patchset adds the IP reassembly RX offload.
> Currently, the offload is tested along with inline IPsec processing.
> It can also be updated as a standalone offload without IPsec, if there
> is some hardware available to test it.
> The patchset is tested on cnxk platform. The driver implementation
> and a test app are added as separate patchsets.
>
Can you please share the links of those sets?
> [1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
>
> changes in v3:
> - incorporated comments from Andrew and Stephen Hemminger
>
> changes in v2:
> - added abi ignore exceptions for modifications in reserved fields.
> Added a crude way to subside the rte_security and rte_ipsec ABI issue.
> Please suggest a better way.
> - incorporated Konstantin's comment for extra checks in new API
> introduced.
> - converted static mbuf ol_flag to mbuf dynflag (Konstantin)
> - added a get API for reassembly configuration (Konstantin)
> - Fixed checkpatch issues.
> - Dynfield is NOT split into 2 parts as it would cause an extra fetch in
> case of IP reassembly failure.
> - Application patches are split into a separate series.
>
>
> Akhil Goyal (4):
> ethdev: introduce IP reassembly offload
> ethdev: add dev op to set/get IP reassembly configuration
> ethdev: add mbuf dynfield for incomplete IP reassembly
> security: add IPsec option for IP reassembly
>
> devtools/libabigail.abignore | 19 ++++++
> doc/guides/nics/features.rst | 12 ++++
> lib/ethdev/ethdev_driver.h | 45 +++++++++++++++
> lib/ethdev/rte_ethdev.c | 109 +++++++++++++++++++++++++++++++++++
> lib/ethdev/rte_ethdev.h | 100 +++++++++++++++++++++++++++++++-
> lib/ethdev/version.map | 5 ++
> lib/security/rte_security.h | 12 +++-
> 7 files changed, 300 insertions(+), 2 deletions(-)
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events
@ 2022-02-01 12:52 3% ` Ferruh Yigit
2022-02-02 11:44 0% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2022-02-01 12:52 UTC (permalink / raw)
To: Kalesh A P, dev
Cc: ajit.khaparde, asafp, David Marchand, Ray Kinsella,
Thomas Monjalon, Andrew Rybchenko
On 1/28/2022 12:48 PM, Kalesh A P wrote:
> From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
>
> Adding support for the device reset and recovery events in the
> rte_eth_event framework. FW error and FW reset conditions would be
> managed internally by the PMD without needing application intervention.
> In such cases, PMD would need reset/recovery events to notify application
> that PMD is undergoing a reset.
>
> While most of the recovery process is transparent to the application since
> most of the driver ensures recovery from FW reset or FW error conditions,
> the application will have to reprogram any flows which were offloaded to
> the underlying hardware.
>
> Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
> Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
> ---
> doc/guides/prog_guide/poll_mode_drv.rst | 24 ++++++++++++++++++++++++
> lib/ethdev/rte_ethdev.h | 18 ++++++++++++++++++
> 2 files changed, 42 insertions(+)
>
> diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
> index 6831289..9ecc0e4 100644
> --- a/doc/guides/prog_guide/poll_mode_drv.rst
> +++ b/doc/guides/prog_guide/poll_mode_drv.rst
> @@ -623,3 +623,27 @@ by application.
> The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
> the application to handle reset event. It is duty of application to
> handle all synchronization before it calls rte_eth_dev_reset().
> +
> +Error recovery support
> +~~~~~~~~~~~~~~~~~~~~~~
> +
> +When the PMD detects a FW reset or error condition, it may try to recover
> +from the error without needing the application intervention. In such cases,
> +PMD would need events to notify the application that it is undergoing
> +an error recovery.
> +
> +The PMD should trigger RTE_ETH_EVENT_ERR_RECOVERING event to notify the
> +application that PMD detected a FW reset or FW error condition. PMD may
> +try to recover from the error by itself. Data path may be quiesced and
> +control path operations may fail during the recovery period. The application
> +should stop polling till it receives RTE_ETH_EVENT_RECOVERED event from the PMD.
> +
> +The PMD should trigger RTE_ETH_EVENT_RECOVERED event to notify the application
> +that it has recovered from the error condition. PMD re-configures the port
> +to the state prior to the error condition. Control path and data path are up now.
> +Since the device has undergone a reset, flow rules offloaded prior to reset
> +may be lost and the application should recreate the rules again.
> +
> +The PMD should trigger RTE_ETH_EVENT_INTR_RMV event to notify the application
> +that it has failed to recover from the error condition. The device may not be
> +usable anymore.
> diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
> index 147cc1c..a46819f 100644
> --- a/lib/ethdev/rte_ethdev.h
> +++ b/lib/ethdev/rte_ethdev.h
> @@ -3818,6 +3818,24 @@ enum rte_eth_event_type {
> RTE_ETH_EVENT_DESTROY, /**< port is released */
> RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */
> RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */
> + RTE_ETH_EVENT_ERR_RECOVERING,
> + /**< port recovering from an error
> + *
> + * PMD detected a FW reset or error condition.
> + * PMD will try to recover from the error.
> + * Data path may be quiesced and Control path operations
> + * may fail at this time.
> + */
> + RTE_ETH_EVENT_RECOVERED,
> + /**< port recovered from an error
> + *
> + * PMD has recovered from the error condition.
> + * Control path and Data path are up now.
> + * PMD re-configures the port to the state prior to the error.
> + * Since the device has undergone a reset, flow rules
> + * offloaded prior to reset may be lost and
> + * the application should recreate the rules again.
> + */
> RTE_ETH_EVENT_MAX /**< max value of this enum */
Also ABI check complains about 'RTE_ETH_EVENT_MAX' value check, cc'ed more people
to evaluate if it is a false positive:
1 function with some indirect sub-type change:
[C] 'function int rte_eth_dev_callback_register(uint16_t, rte_eth_event_type, rte_eth_dev_cb_fn, void*)' at rte_ethdev.c:4637:1 has some indirect sub-type changes:
parameter 3 of type 'typedef rte_eth_dev_cb_fn' has sub-type changes:
underlying type 'int (typedef uint16_t, enum rte_eth_event_type, void*, void*)*' changed:
in pointed to type 'function type int (typedef uint16_t, enum rte_eth_event_type, void*, void*)':
parameter 2 of type 'enum rte_eth_event_type' has sub-type changes:
type size hasn't changed
2 enumerator insertions:
'rte_eth_event_type::RTE_ETH_EVENT_ERR_RECOVERING' value '11'
'rte_eth_event_type::RTE_ETH_EVENT_RECOVERED' value '12'
1 enumerator change:
'rte_eth_event_type::RTE_ETH_EVENT_MAX' from value '11' to '13' at rte_ethdev.h:3807:1
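Setting the ABI question aside, an application would consume the proposed events along these lines (a sketch; the app_* helpers are hypothetical):
    static int
    app_event_cb(uint16_t port_id, enum rte_eth_event_type type,
                 void *cb_arg, void *ret_param)
    {
            RTE_SET_USED(cb_arg);
            RTE_SET_USED(ret_param);
            switch (type) {
            case RTE_ETH_EVENT_ERR_RECOVERING:
                    /* Quiesce: stop polling this port until recovery ends. */
                    app_stop_polling(port_id);              /* hypothetical */
                    break;
            case RTE_ETH_EVENT_RECOVERED:
                    /* Rules offloaded before the reset may be lost. */
                    app_replay_flow_rules(port_id);         /* hypothetical */
                    app_resume_polling(port_id);            /* hypothetical */
                    break;
            default:
                    break;
            }
            return 0;
    }
    static void
    app_register_recovery_events(uint16_t port_id)
    {
            /* e.g. called once per port after rte_eth_dev_configure() */
            rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_ERR_RECOVERING,
                            app_event_cb, NULL);
            rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RECOVERED,
                            app_event_cb, NULL);
    }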
^ permalink raw reply [relevance 3%]
* Minutes of tech-board meeting: 2022-01-26
@ 2022-02-01 9:18 3% Richardson, Bruce
0 siblings, 0 replies; 200+ results
From: Richardson, Bruce @ 2022-02-01 9:18 UTC (permalink / raw)
To: dev; +Cc: techboard
Tech Board Attendees:
Bruce, Aaron, Ferruh, Jerin, Maxime, Thomas, Kevin, Konstantin, Olivier, Stephen, Honnappa, Hemant
Agenda Items
--------------
* Tech writer
* No updates on this
* Getting a new program manager from LF - may need a reminder about hiring the Tech writer.
* UNH Statement of Work
* Needs to be closed out for 2022
* Final call for items list for UNH for coming weeks
* After this week, going to gov board
* For suggested items, either email to Aaron or add directly to doc.
* There will be review - Techboard to review once doc is cleaned up a bit.
* Current version: https://docs.google.com/document/d/1l38GZwaMuIu8hq3kMjoAHDU_LanXXTt-LYS4y7Sv4t8/edit
* GPL license files in DTS
* Working on changing license headers for files in DTS (to SPDX tags)
* Some files missing headers - will be added
* 4 - 5 files missing GPL license. These files will not be linked into applications
* Files should be reviewed and checked/approved for exceptions
* Action: Create license directory/files in DTS which will be merged into main DPDK license files once merge of DTS and DPDK is done.
* Action: For GPL scripts, refer the question through the gov board for advice regarding Python scripts included in other scripts.
* L3fwd in testpmd
* Reviewed previous decision.
* Confirmed nothing to be done, not to be considered for this release. Patches deferred in patchwork.
* Events
- decisions on events no longer being planned by separate marketing committee
- Tech board input is required in event planning from now on.
- in-person event is planned for Europe (Userspace 2022).
- provisionally planned for Bordeaux, most likely in mid-September (2nd week)
* Traffic Gen
- previous discussion incomplete
- consider revisiting next time if Harry can attend
Items Noted for Reference:
--------------------------
FYI: perf thread example is now removed from repository.
FYI: ICC support is broken on next-net. May need to drop support completely. Ferruh tracking.
FYI: Discussion on ABI versioning of structs passed from libs to drivers, e.g. ethdev to net drivers
Ref: http://inbox.dpdk.org/dev/CALBAE1Po3-7KUN+WecJRDjbbYDi40mPRmVf8YTaFDyDPj-WnGQ@mail.gmail.com/T/#m2371800d0b6848b76c8666dd771de66544ebdd3a
^ permalink raw reply [relevance 3%]
* [PATCH v5] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-23 21:20 8% ` [PATCH v4] " Michael Barker
2022-01-25 10:33 0% ` Ray Kinsella
@ 2022-01-31 0:05 8% ` Michael Barker
2022-02-12 14:00 0% ` Thomas Monjalon
1 sibling, 1 reply; 200+ results
From: Michael Barker @ 2022-01-31 0:05 UTC (permalink / raw)
To: dev; +Cc: Michael Barker, Ray Kinsella
When compiling with clang using -Wpedantic (or -Wgcc-compat), the use of
diagnose_if kicks up a warning:
.../include/rte_interrupts.h:623:1: error: 'diagnose_if' is a clang
extension [-Werror,-Wgcc-compat]
__rte_internal
^
.../include/rte_compat.h:36:16: note: expanded from macro '__rte_internal'
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
This change ignores the '-Wgcc-compat' warning in the specific location
where the warning occurs. It is safe to do in this circumstance as the
specific macro is only defined when using the clang compiler.
Signed-off-by: Michael Barker <mikeb01@gmail.com>
---
lib/eal/include/rte_compat.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index 2718612cce..9556bbf4d0 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -33,8 +33,11 @@ section(".text.internal")))
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
#define __rte_internal \
+_Pragma("GCC diagnostic push") \
+_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
-section(".text.internal")))
+section(".text.internal"))) \
+_Pragma("GCC diagnostic pop")
#else
--
2.25.1
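To illustrate what the macro guards and where the pragma applies (the symbol below is hypothetical):
    #include <rte_compat.h>
    __rte_internal
    int rte_foo_get_state(void); /* hypothetical internal-only symbol */
    int app_code(void)
    {
            /*
             * Under clang without -DALLOW_INTERNAL_API, diagnose_if makes
             * this call site fail with "Symbol is not public ABI". The
             * _Pragma push/pop added by this patch only silences
             * -Wgcc-compat at the point where __rte_internal itself
             * expands, so -Wpedantic builds no longer complain that
             * diagnose_if is a clang extension.
             */
            return rte_foo_get_state();
    }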
^ permalink raw reply [relevance 8%]
* [PATCH v3 0/4] ethdev: introduce IP reassembly offload
2022-01-20 16:26 4% ` [PATCH v2 0/4] " Akhil Goyal
@ 2022-01-30 17:59 4% ` Akhil Goyal
` (2 more replies)
1 sibling, 3 replies; 200+ results
From: Akhil Goyal @ 2022-01-30 17:59 UTC (permalink / raw)
To: dev
Cc: anoobj, matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, rosen.xu, jerinj, stephen, mdr, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly RX offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated as a standalone offload without IPsec, if there
is some hardware available to test it.
The patchset is tested on cnxk platform. The driver implementation
and a test app are added as separate patchsets.
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
changes in v3:
- incorporated comments from Andrew and Stephen Hemminger
changes in v2:
- added abi ignore exceptions for modifications in reserved fields.
Added a crude way to subside the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin)
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
Akhil Goyal (4):
ethdev: introduce IP reassembly offload
ethdev: add dev op to set/get IP reassembly configuration
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 19 ++++++
doc/guides/nics/features.rst | 12 ++++
lib/ethdev/ethdev_driver.h | 45 +++++++++++++++
lib/ethdev/rte_ethdev.c | 109 +++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 100 +++++++++++++++++++++++++++++++-
lib/ethdev/version.map | 5 ++
lib/security/rte_security.h | 12 +++-
7 files changed, 300 insertions(+), 2 deletions(-)
--
2.25.1
^ permalink raw reply [relevance 4%]
* RE: [PATCH v3] mempool: fix put objects to mempool with cache
2022-01-24 15:39 3% ` Olivier Matz
@ 2022-01-28 9:37 0% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2022-01-28 9:37 UTC (permalink / raw)
To: Olivier Matz; +Cc: andrew.rybchenko, bruce.richardson, jerinjacobk, dev
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Monday, 24 January 2022 16.39
>
> Hi Morten,
>
> On Wed, Jan 19, 2022 at 04:03:01PM +0100, Morten Brørup wrote:
> > mempool: fix put objects to mempool with cache
> >
> > This patch optimizes the rte_mempool_do_generic_put() caching
> algorithm,
> > and fixes a bug in it.
>
> I think we should avoid grouping fixes and optimizations in one
> patch. The main reason is that fixes aims to be backported, which
> is not the case of optimizations.
OK. I'll separate them.
>
> > The existing algorithm was:
> > 1. Add the objects to the cache
> > 2. Anything greater than the cache size (if it crosses the cache
> flush
> > threshold) is flushed to the ring.
> >
> > Please note that the description in the source code said that it kept
> > "cache min value" objects after flushing, but the function actually
> kept
> > "size" objects, which is reflected in the above description.
> >
> > Now, the algorithm is:
> > 1. If the objects cannot be added to the cache without crossing the
> > flush threshold, flush the cache to the ring.
> > 2. Add the objects to the cache.
> >
> > This patch changes these details:
> >
> > 1. Bug: The cache was still full after flushing.
> > In the opposite direction, i.e. when getting objects from the cache,
> the
> > cache is refilled to full level when it crosses the low watermark
> (which
> > happens to be zero).
> > Similarly, the cache should be flushed to empty level when it crosses
> > the high watermark (which happens to be 1.5 x the size of the cache).
> > The existing flushing behaviour was suboptimal for real applications,
> > because crossing the low or high watermark typically happens when the
> > application is in a state where the number of put/get events are out
> of
> > balance, e.g. when absorbing a burst of packets into a QoS queue
> > (getting more mbufs from the mempool), or when a burst of packets is
> > trickling out from the QoS queue (putting the mbufs back into the
> > mempool).
> > NB: When the application is in a state where put/get events are in
> > balance, the cache should remain within its low and high watermarks,
> and
> > the algorithms for refilling/flushing the cache should not come into
> > play.
> > Now, the mempool cache is completely flushed when crossing the flush
> > threshold, so only the newly put (hot) objects remain in the mempool
> > cache afterwards.
>
> I'm not sure we should call this behavior a bug. What is the impact
> on applications, from a user perspective? Can it break a use-case, or
> have an important performance impact?
It doesn't break anything.
But it doesn't behave as intended (according to its description in the source code), so I do consider it a bug! Any professional tester, when seeing an implementation that doesn't do what is intended, would also flag the implementation as faulty.
It has a performance impact: It causes many more mempool cache flushes than was intended. I have elaborated by an example here: http://inbox.dpdk.org/dev/98CBD80474FA8B44BF855DF32C47DC35D86E54@smartserver.smartshare.dk/T/#t
>
>
> > 2. Minor bug: The flush threshold comparison has been corrected; it
> must
> > be "len > flushthresh", not "len >= flushthresh".
> > Reasoning: Consider a flush multiplier of 1 instead of 1.5; the cache
> > would be flushed already when reaching size elements, not when
> exceeding
> > size elements.
> > Now, flushing is triggered when the flush threshold is exceeded, not
> > when reached.
>
> Same here, we should ask ourselves what is the impact before calling
> it a bug.
It's a classic off-by-one bug.
It only impacts performance, causing premature mempool cache flushing.
Referring to my example in the RFC discussion, this bug causes flushing every 3rd application put() instead of every 4th.
>
>
> > 3. Optimization: The most recent (hot) objects are flushed, leaving
> the
> > oldest (cold) objects in the mempool cache.
> > This is bad for CPUs with a small L1 cache, because when they get
> > objects from the mempool after the mempool cache has been flushed,
> they
> > get cold objects instead of hot objects.
> > Now, the existing (cold) objects in the mempool cache are flushed
> before
> > the new (hot) objects are added to the mempool cache.
> >
> > 4. Optimization: Using the x86 variant of rte_memcpy() is inefficient
> > here, where n is relatively small and unknown at compile time.
> > Now, it has been replaced by an alternative copying method, optimized
> > for the fact that most Ethernet PMDs operate in bursts of 4 or 8
> mbufs
> > or multiples thereof.
>
> For these optimizations, do you have an idea of what is the performance
> gain? Ideally (I understand it is not always possible), each
> optimization
> is done separately, and its impact is measured.
Regarding 3: I don't have access to hardware with a CPU with a small L1 cache. But the algorithm was structurally wrong, so I think it should be fixed. Not working with such hardware ourselves, I labeled it an "optimization"... if the patch came from someone with affected hardware, it could reasonably have been labeled a "bug fix".
Regarding 4: I'll stick with rte_memcpy() in the "fix" patch, and provide a separate optimization patch with performance information.
>
>
> > v2 changes:
> >
> > - Not adding the new objects to the mempool cache before flushing it
> > also allows the memory allocated for the mempool cache to be reduced
> > from 3 x to 2 x RTE_MEMPOOL_CACHE_MAX_SIZE.
> > However, such a change would break the ABI, so it was removed in
> v2.
> >
> > - The mempool cache should be cache line aligned for the benefit of
> the
> > copying method, which on some CPU architectures performs worse on
> data
> > crossing a cache boundary.
> > However, such a change would break the ABI, so it was removed in
> v2;
> > and yet another alternative copying method replaced the rte_memcpy().
>
> OK, we may want to keep this in mind for the next abi breakage.
Sounds good.
>
>
> >
> > v3 changes:
> >
> > - Actually remove my modifications of the rte_mempool_cache
> structure.
> >
> > Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> > ---
> > lib/mempool/rte_mempool.h | 51 +++++++++++++++++++++++++++++--------
> --
> > 1 file changed, 38 insertions(+), 13 deletions(-)
> >
> > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > index 1e7a3c1527..7b364cfc74 100644
> > --- a/lib/mempool/rte_mempool.h
> > +++ b/lib/mempool/rte_mempool.h
> > @@ -1334,6 +1334,7 @@ static __rte_always_inline void
> > rte_mempool_do_generic_put(struct rte_mempool *mp, void * const
> *obj_table,
> > unsigned int n, struct rte_mempool_cache *cache)
> > {
> > + uint32_t index;
> > void **cache_objs;
> >
> > /* increment stat now, adding in mempool always success */
> > @@ -1344,31 +1345,56 @@ rte_mempool_do_generic_put(struct rte_mempool
> *mp, void * const *obj_table,
> > if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
> > goto ring_enqueue;
> >
> > - cache_objs = &cache->objs[cache->len];
> > + /* If the request itself is too big for the cache */
> > + if (unlikely(n > cache->flushthresh))
> > + goto ring_enqueue;
> >
> > /*
> > * The cache follows the following algorithm
> > - * 1. Add the objects to the cache
> > - * 2. Anything greater than the cache min value (if it crosses
> the
> > - * cache flush threshold) is flushed to the ring.
> > + * 1. If the objects cannot be added to the cache without
> > + * crossing the flush threshold, flush the cache to the ring.
> > + * 2. Add the objects to the cache.
> > */
> >
> > - /* Add elements back into the cache */
> > - rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
> > + if (cache->len + n <= cache->flushthresh) {
> > + cache_objs = &cache->objs[cache->len];
> >
> > - cache->len += n;
> > + cache->len += n;
> > + } else {
> > + cache_objs = cache->objs;
> >
> > - if (cache->len >= cache->flushthresh) {
> > - rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
> > - cache->len - cache->size);
> > - cache->len = cache->size;
> > +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> > + if (rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache-
> >len) < 0)
> > + rte_panic("cannot put objects in mempool\n");
> > +#else
> > + rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
> > +#endif
> > + cache->len = n;
> > + }
> > +
> > + /* Add the objects to the cache. */
> > + for (index = 0; index < (n & ~0x3); index += 4) {
> > + cache_objs[index] = obj_table[index];
> > + cache_objs[index + 1] = obj_table[index + 1];
> > + cache_objs[index + 2] = obj_table[index + 2];
> > + cache_objs[index + 3] = obj_table[index + 3];
> > + }
> > + switch (n & 0x3) {
> > + case 3:
> > + cache_objs[index] = obj_table[index];
> > + index++; /* fallthrough */
> > + case 2:
> > + cache_objs[index] = obj_table[index];
> > + index++; /* fallthrough */
> > + case 1:
> > + cache_objs[index] = obj_table[index];
> > }
> >
> > return;
> >
> > ring_enqueue:
> >
> > - /* push remaining objects in ring */
> > + /* Put the objects into the ring */
> > #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> > if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
> > rte_panic("cannot put objects in mempool\n");
> > @@ -1377,7 +1403,6 @@ rte_mempool_do_generic_put(struct rte_mempool
> *mp, void * const *obj_table,
> > #endif
> > }
> >
> > -
> > /**
> > * Put several objects back in the mempool.
> > *
> > --
> > 2.17.1
> >
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
@ 2022-01-27 9:34 3% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2022-01-27 9:34 UTC (permalink / raw)
To: Alexander Kozyrev
Cc: Ajit Khaparde, dpdk-dev, Ori Kam,
NBU-Contact-Thomas Monjalon (EXTERNAL),
Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
Qi Zhang, Jerin Jacob
On Thu, Jan 27, 2022 at 3:32 AM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> On Tuesday, January 25, 2022 13:44 Jerin Jacob <jerinjacobk@gmail.com> wrote:
> > On Tue, Jan 25, 2022 at 6:58 AM Alexander Kozyrev <akozyrev@nvidia.com>
> > wrote:
> > >
> > > On Monday, January 24, 2022 12:41 Ajit Khaparde
> > <ajit.khaparde@broadcom.com> wrote:
> > > > On Mon, Jan 24, 2022 at 6:37 AM Jerin Jacob <jerinjacobk@gmail.com>
> > > > wrote:
> > > > >
> >
> > > Ok, I'll adopt this wording in the v3.
> > >
> > > > > > + *
> > > > > > + * @param port_id
> > > > > > + * Port identifier of Ethernet device.
> > > > > > + * @param[in] port_attr
> > > > > > + * Port configuration attributes.
> > > > > > + * @param[out] error
> > > > > > + * Perform verbose error reporting if not NULL.
> > > > > > + * PMDs initialize this structure in case of error only.
> > > > > > + *
> > > > > > + * @return
> > > > > > + * 0 on success, a negative errno value otherwise and rte_errno is
> > set.
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +int
> > > > > > +rte_flow_configure(uint16_t port_id,
> > > > >
> > > > > Should we couple, setting resource limit hint to configure function as
> > > > > if we add future items in
> > > > > configuration, we may pain to manage all state. Instead how about,
> > > > > rte_flow_resource_reserve_hint_set()?
> > > > +1
> > > Port attributes are the hints; the PMD can safely ignore anything that is not
> > supported or deemed unreasonable.
> > > Having several functions to call instead of one configuration function seems
> > like a burden to me.
> >
> > If we add a lot of features which have different state, it will be
> > difficult to manage.
> > Since it is the slow path and an OPTIONAL API, IMO it should be fine to
> > have a separate API for a specific purpose
> > to have a clean interface.
>
> This approach contradicts the DPDK way of configuring devices.
> If you look at the rte_eth_dev_configure or rte_eth_rx_queue_setup API
> you will see that the configuration is propagated via config structures.
> I would like to conform to this approach with my new API as well.
There is a subtle difference: those are mandatory APIs, i.e. the application must
call those APIs to use the subsequent ones.
I am OK with introducing rte_flow_configure() for such use cases.
Probably, we can add these parameters in rte_flow_configure() for the
new features.
And make it a mandatory API in the next ABI to avoid application breakage.
Also, please update the git commit description to mention adding the
configure state
for the rte_flow API.
BTW: your queue patch [3/3] probably needs to add the nb_queue
parameter to configure,
so the driver knows the number of queues needed upfront, like the ethdev API scheme.
>
> Another question is how to deal with interdependencies with separate hints?
> There could be some resources that require other resources to be present.
> Or one resource shares the hardware registers with another one and needs to
> be accounted for. That is not easy to do with separate function calls.
I got the use case now.
>
> > >
> > > >
> > > > >
> > > > >
> > > > > > + const struct rte_flow_port_attr *port_attr,
> > > > > > + struct rte_flow_error *error);
> > > > >
> > > > > I think, we should have _get function to get those limit numbers
> > otherwise,
> > > > > we can not write portable applications as the return value is kind of
> > > > > boolean now if
> > > > > don't define exact values for rte_errno for reasons.
> > > > +1
> > > We had this discussion in RFC. The limits will vary from NIC to NIC and from
> > system to
> > > system, depending on hardware capabilities and amount of free memory
> > for example.
> > > It is easier to reject a configuration with a clear error description as we do
> > for flow creation.
> >
> > In that case, we can return a "defined" return value or "defined"
> > errno to capture this case so that
> > the application can make forward progress to differentiate between an API
> > failure vs not having enough resources,
> > and move on.
>
> I think you are right and it will be useful to provide some hardware capabilities.
> I'll add something like rte_flow_info_get() to obtain available flow rule resources.
Ack.
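For concreteness, a hypothetical usage sketch of how the two calls discussed above could combine; the info/attr field names, the exact signatures and the app_nb_counters/port_id variables are assumptions based on this thread, not a final API:

	struct rte_flow_port_info info;
	struct rte_flow_port_attr attr = { 0 };
	struct rte_flow_error error;

	/* Query the available flow rule resources first... */
	if (rte_flow_info_get(port_id, &info, &error) != 0)
		rte_exit(EXIT_FAILURE, "info_get: %s\n", error.message);

	/* ...then keep the pre-configuration hints within the reported
	 * limits (app_nb_counters is the application's own estimate). */
	attr.nb_counters = RTE_MIN(app_nb_counters, info.max_nb_counters);
	if (rte_flow_configure(port_id, &attr, &error) != 0)
		rte_exit(EXIT_FAILURE, "configure: %s\n", error.message);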
^ permalink raw reply [relevance 3%]
* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-26 13:41 0% ` Bruce Richardson
@ 2022-01-26 15:12 0% ` Ori Kam
0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2022-01-26 15:12 UTC (permalink / raw)
To: Bruce Richardson
Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
Jerin Jacob, Alexander Kozyrev, dpdk-dev, Ivan Malov,
Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
Jerin Jacob, Ajit Khaparde, David Marchand, Olivier Matz,
Stephen Hemminger
> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Wednesday, January 26, 2022 3:41 PM
> To: Ori Kam <orika@nvidia.com>
> Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
>
> On Wed, Jan 26, 2022 at 12:19:43PM +0000, Ori Kam wrote:
> >
> >
> > > -----Original Message-----
> > > From: Thomas Monjalon <thomas@monjalon.net>
> > > Sent: Wednesday, January 26, 2022 1:22 PM
> > > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> > >
> > > 26/01/2022 11:52, Bruce Richardson:
> > > > The scenario is as follows. Suppose we have the initial state as below:
> > > >
> > > > struct x_dev_cfg {
> > > > int x;
> > > > };
> > > >
> > > > int
> > > > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > > > {
> > > > struct x_dev *dev = x_devs[dev_id];
> > > > // some setup/config may go here
> > > > return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
> > > > }
> > > >
> > > > Now, supposing we need to add in a new field into the config structure, a
> > > > very common occurrence. This will indeed break the ABI, so we need to use
> > > > ABI versioning, to ensure that apps passing in the old structure, only call
> > > > a function which expects the old structure. Therefore, we need a copy of
> > > > the old structure, and a function to work on it. This gives this result:
> > > >
> > > > struct x_dev_cfg {
> > > > int x;
> > > > bool flag; // new field;
> > > > };
> > > >
> > > > struct x_dev_cfg_v22 { // needed for ABI-versioned function
> > > > int x;
> > > > };
> > > >
> > > > /* this function will only be called by *newly-linked* code, which uses
> > > > * the new structure */
> > > > int
> > > > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > > > {
> > > > struct x_dev *dev = x_devs[dev_id];
> > > > // some setup/config may go here
> > > > return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
> > > > }
> > > >
> > > > /* this function is called by apps linked against old version */
> > > > int
> > > > x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
> > > > {
> > > > struct x_dev *dev = x_devs[dev_id];
> > > > // some setup/config may go here
> > > > return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
> > > > }
> > > >
> > > > With the above library code, we have different functions using the
> > > > different structures, so ABI compatibility is preserved - apps passing in a
> > > > 4-byte struct call a function using the 4-byte struct, while newer apps can
> > > > use the 8-byte version.
> > > >
> > > > The final part of the puzzle is then how drivers react to this change.
> > > > Originally, all drivers only use "x" in the config structure because that
> > > > is all that there is. That will still continue to work fine in the above
> > > > case, as both 4-byte and 8-byte structs have the same x value at the same
> > > > offset. i.e. no driver updates for x_dev is needed.
> > > >
> > > > On the other hand, if there are drivers that do want/need the new field,
> > > > they can also get to use it, but they do need to check for its presence
> > > > before they do so, i.e they would work as below:
> > > >
> > > > if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
> > > > // use flags field
> > > > }
> > > >
> > > > Hope this is clear now.
> > >
> > > Yes, this is the kind of explanation we need in our guideline doc.
> > > Alternatives can be documented as well.
> > > If we can list pros/cons in the doc, it will be easier to choose
> > > the best approach and to explain the choice during code review.
> > >
> > >
> > Thank you very much for the clear explanation.
> >
> > The drawback is that we also need to duplicate the functions.
> > Using the flags/version approach we only need to create new structures,
> > and from the application's point of view it knows what extra fields it gets.
> > (I agree that application knowledge has downsides but also advantages.)
> >
> > In the case of flags/version your example will look like this (this is for the record, and maybe other
> > developers are interested):
> >
> > struct x_dev_cfg { //original struct
> > int ver;
> > int x;
> > };
> >
> > struct x_dev_cfg_v2 { // new struct
> > int ver;
> > int x;
> > bool flag; // new field;
> > };
> >
> >
> > The function is always the same function:
> > int
> > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > {
> > struct x_dev *dev = x_devs[dev_id];
> > // some setup/config may go here
> > return dev->configure(cfg);
> > }
> >
> > When calling this function with old struct:
> > x_dev_cfg(id, (struct x_dev_cfg *)cfg);
> >
> > When calling this function with new struct:
> > x_dev_cfg(id, (struct x_dev_cfg *)cfg_v2);
> >
> > In PMD:
> > if (cfg->ver >= 2)
> > // version 2 logic
> > else if (cfg->ver >= 0)
> > // base version logic
> >
> >
> > When using flags it gives even more control since pmd can tell exactly what
> > features are required.
> >
> > All options have pros/cons.
> > I vote for the version one.
> >
> > We can have a poll 😊
> > Or, like Thomas said, list pros and cons and each subsystem can
> > have its own selection.
>
> The biggest issue I have with this version approach is how is the user
> meant to know what version number to put into the structure? When the user
> upgrades from one version of DPDK to the next, are they expected to manually update
> their version numbers in all their structures? If they don't, they then may
> be mystified if they use the newer fields and find that they "don't work"
> because they forgot that they need to update the version field to the newer
> version at the same time. The reason I prefer the size field is that it is
> impossible for the end user to mess things up, and the entirety of the
> mechanism is internal, and hidden from the user.
>
The solution is simple: when you define a new struct, you document in the struct what
the version number should be.
You can also define that 0 means the latest one, so applications that are writing code which
is size agnostic will just set 0 all the time.
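A minimal sketch of that convention (illustrative only; x_dev_cfg and X_DEV_CFG_VER_LATEST are hypothetical names, not an existing DPDK API):

	#define X_DEV_CFG_VER_LATEST 2 /* bumped whenever the struct grows */

	int
	x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
	{
		/* 0 means "latest": size-agnostic callers need not track
		 * version numbers across DPDK upgrades. */
		uint32_t ver = cfg->ver != 0 ? cfg->ver : X_DEV_CFG_VER_LATEST;

		if (ver >= 2) {
			/* version 2 logic, may read the new fields */
		} else {
			/* base version logic */
		}
		return 0;
	}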
> Regards,
> /Bruce
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-26 12:19 0% ` Ori Kam
@ 2022-01-26 13:41 0% ` Bruce Richardson
2022-01-26 15:12 0% ` Ori Kam
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2022-01-26 13:41 UTC (permalink / raw)
To: Ori Kam
Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
Jerin Jacob, Alexander Kozyrev, dpdk-dev, Ivan Malov,
Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
Jerin Jacob, Ajit Khaparde, David Marchand, Olivier Matz,
Stephen Hemminger
On Wed, Jan 26, 2022 at 12:19:43PM +0000, Ori Kam wrote:
>
>
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Sent: Wednesday, January 26, 2022 1:22 PM
> > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> >
> > 26/01/2022 11:52, Bruce Richardson:
> > > The scenario is as follows. Suppose we have the initial state as below:
> > >
> > > struct x_dev_cfg {
> > > int x;
> > > };
> > >
> > > int
> > > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > > {
> > > struct x_dev *dev = x_devs[dev_id];
> > > // some setup/config may go here
> > > return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
> > > }
> > >
> > > Now, supposing we need to add in a new field into the config structure, a
> > > very common occurrence. This will indeed break the ABI, so we need to use
> > > ABI versioning, to ensure that apps passing in the old structure, only call
> > > a function which expects the old structure. Therefore, we need a copy of
> > > the old structure, and a function to work on it. This gives this result:
> > >
> > > struct x_dev_cfg {
> > > int x;
> > > bool flag; // new field;
> > > };
> > >
> > > struct x_dev_cfg_v22 { // needed for ABI-versioned function
> > > int x;
> > > };
> > >
> > > /* this function will only be called by *newly-linked* code, which uses
> > > * the new structure */
> > > int
> > > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > > {
> > > struct x_dev *dev = x_devs[dev_id];
> > > // some setup/config may go here
> > > return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
> > > }
> > >
> > > /* this function is called by apps linked against old version */
> > > int
> > > x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
> > > {
> > > struct x_dev *dev = x_devs[dev_id];
> > > // some setup/config may go here
> > > return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
> > > }
> > >
> > > With the above library code, we have different functions using the
> > > different structures, so ABI compatibility is preserved - apps passing in a
> > > 4-byte struct call a function using the 4-byte struct, while newer apps can
> > > use the 8-byte version.
> > >
> > > The final part of the puzzle is then how drivers react to this change.
> > > Originally, all drivers only use "x" in the config structure because that
> > > is all that there is. That will still continue to work fine in the above
> > > case, as both 4-byte and 8-byte structs have the same x value at the same
> > > offset. i.e. no driver updates for x_dev is needed.
> > >
> > > On the other hand, if there are drivers that do want/need the new field,
> > > they can also get to use it, but they do need to check for its presence
> > > before they do so, i.e they would work as below:
> > >
> > > if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
> > > // use flags field
> > > }
> > >
> > > Hope this is clear now.
> >
> > Yes, this is the kind of explanation we need in our guideline doc.
> > Alternatives can be documented as well.
> > If we can list pros/cons in the doc, it will be easier to choose
> > the best approach and to explain the choice during code review.
> >
> >
> Thank you very much for the clear explanation.
>
> The drawback is that we also need to duplicate the functions.
> Using the flags/version approach we only need to create new structures,
> and from the application's point of view it knows what extra fields it gets.
> (I agree that application knowledge has downsides but also advantages.)
>
> In the case of flags/version your example will look like this (this is for the record, and maybe other
> developers are interested):
>
> struct x_dev_cfg { //original struct
> int ver;
> int x;
> };
>
> struct x_dev_cfg_v2 { // new struct
> int ver;
> int x;
> bool flag; // new field;
> };
>
>
> The function is always the same function:
> int
> x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> {
> struct x_dev *dev = x_devs[dev_id];
> // some setup/config may go here
> return dev->configure(cfg);
> }
>
> When calling this function with old struct:
> x_dev_cfg(id, (struct x_dev_cfg *)cfg);
>
> When calling this function with new struct:
> x_dev_cfg(id, (struct x_dev_cfg *)cfg_v2);
>
> In PMD:
> if (cfg->ver >= 2)
> // version 2 logic
> else if (cfg->ver >= 0)
> // base version logic
>
>
> When using flags it gives even more control since pmd can tell exactly what
> features are required.
>
> All options have pros/cons.
> I vote for the version one.
>
> We can have a poll 😊
> Or, like Thomas said, list pros and cons and each subsystem can
> have its own selection.
The biggest issue I have with this version approach is how is the user
meant to know what version number to put into the structure? When the user
upgrades from one version of DPDK to the next, are they expected to manually update
their version numbers in all their structures? If they don't, they then may
be mystified if they use the newer fields and find that they "don't work"
because they forgot that they need to update the version field to the newer
version at the same time. The reason I prefer the size field is that it is
impossible for the end user to mess things up, and the entirety of the
mechanism is internal, and hidden from the user.
Regards,
/Bruce
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-26 11:21 0% ` Thomas Monjalon
@ 2022-01-26 12:19 0% ` Ori Kam
2022-01-26 13:41 0% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2022-01-26 12:19 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon (EXTERNAL), Bruce Richardson
Cc: Jerin Jacob, Alexander Kozyrev, dpdk-dev, Ivan Malov,
Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
Jerin Jacob, Ajit Khaparde, David Marchand, Olivier Matz,
Stephen Hemminger
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Wednesday, January 26, 2022 1:22 PM
> Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
>
> 26/01/2022 11:52, Bruce Richardson:
> > The scenario is as follows. Suppose we have the initial state as below:
> >
> > struct x_dev_cfg {
> > int x;
> > };
> >
> > int
> > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > {
> > struct x_dev *dev = x_devs[dev_id];
> > // some setup/config may go here
> > return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
> > }
> >
> > Now, supposing we need to add in a new field into the config structure, a
> > very common occurrence. This will indeed break the ABI, so we need to use
> > ABI versioning, to ensure that apps passing in the old structure, only call
> > a function which expects the old structure. Therefore, we need a copy of
> > the old structure, and a function to work on it. This gives this result:
> >
> > struct x_dev_cfg {
> > int x;
> > bool flag; // new field;
> > };
> >
> > struct x_dev_cfg_v22 { // needed for ABI-versioned function
> > int x;
> > };
> >
> > /* this function will only be called by *newly-linked* code, which uses
> > * the new structure */
> > int
> > x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> > {
> > struct x_dev *dev = x_devs[dev_id];
> > // some setup/config may go here
> > return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
> > }
> >
> > /* this function is called by apps linked against old version */
> > int
> > x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
> > {
> > struct x_dev *dev = x_devs[dev_id];
> > // some setup/config may go here
> > return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
> > }
> >
> > With the above library code, we have different functions using the
> > different structures, so ABI compatibility is preserved - apps passing in a
> > 4-byte struct call a function using the 4-byte struct, while newer apps can
> > use the 8-byte version.
> >
> > The final part of the puzzle is then how drivers react to this change.
> > Originally, all drivers only use "x" in the config structure because that
> > is all that there is. That will still continue to work fine in the above
> > case, as both 4-byte and 8-byte structs have the same x value at the same
> > offset. i.e. no driver updates for x_dev is needed.
> >
> > On the other hand, if there are drivers that do want/need the new field,
> > they can also get to use it, but they do need to check for its presence
> > before they do so, i.e they would work as below:
> >
> > if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
> > // use flags field
> > }
> >
> > Hope this is clear now.
>
> Yes, this is the kind of explanation we need in our guideline doc.
> Alternatives can be documented as well.
> If we can list pros/cons in the doc, it will be easier to choose
> the best approach and to explain the choice during code review.
>
>
Thank you very much for the clear explanation.
The drawback is that we also need to duplicate the functions.
Using the flags/version approach we only need to create new structures,
and from the application's point of view it knows what extra fields it gets.
(I agree that application knowledge has downsides but also advantages.)
In the case of flags/version your example will look like this (this is for the record, and maybe other
developers are interested):
struct x_dev_cfg { //original struct
int ver;
int x;
};
struct x_dev_cfg_v2 { // new struct
int ver;
int x;
bool flag; // new field;
};
The function is always the same function:
int
x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
{
struct x_dev *dev = x_devs[dev_id];
// some setup/config may go here
return dev->configure(cfg);
}
When calling this function with old struct:
x_dev_cfg(id, (struct x_dev_cfg *)cfg);
When calling this function with new struct:
x_dev_cfg(id, (struct x_dev_cfg *)cfg_v2);
In PMD:
if (cfg->ver >= 2)
// version 2 logic
else if (cfg->ver >= 0)
// base version logic
When using flags it gives even more control since pmd can tell exactly what
features are required.
All options have pros/cons.
I vote for the version one.
We can have a poll 😊
Or, like Thomas said, list pros and cons and each subsystem can
have its own selection.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-26 10:52 4% ` Bruce Richardson
@ 2022-01-26 11:21 0% ` Thomas Monjalon
2022-01-26 12:19 0% ` Ori Kam
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-01-26 11:21 UTC (permalink / raw)
To: Ori Kam, Bruce Richardson
Cc: Jerin Jacob, Alexander Kozyrev, dpdk-dev, Ivan Malov,
Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
Jerin Jacob, Ajit Khaparde, David Marchand, Olivier Matz,
Stephen Hemminger
26/01/2022 11:52, Bruce Richardson:
> The scenario is as follows. Suppose we have the initial state as below:
>
> struct x_dev_cfg {
> int x;
> };
>
> int
> x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> {
> struct x_dev *dev = x_devs[dev_id];
> // some setup/config may go here
> return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
> }
>
> Now, supposing we need to add in a new field into the config structure, a
> very common occurrence. This will indeed break the ABI, so we need to use
> ABI versioning, to ensure that apps passing in the old structure, only call
> a function which expects the old structure. Therefore, we need a copy of
> the old structure, and a function to work on it. This gives this result:
>
> struct x_dev_cfg {
> int x;
> bool flag; // new field;
> };
>
> struct x_dev_cfg_v22 { // needed for ABI-versioned function
> int x;
> };
>
> /* this function will only be called by *newly-linked* code, which uses
> * the new structure */
> int
> x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
> {
> struct x_dev *dev = x_devs[dev_id];
> // some setup/config may go here
> return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
> }
>
> /* this function is called by apps linked against old version */
> int
> x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
> {
> struct x_dev *dev = x_devs[dev_id];
> // some setup/config may go here
> return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
> }
>
> With the above library code, we have different functions using the
> different structures, so ABI compatibility is preserved - apps passing in a
> 4-byte struct call a function using the 4-byte struct, while newer apps can
> use the 8-byte version.
>
> The final part of the puzzle is then how drivers react to this change.
> Originally, all drivers only use "x" in the config structure because that
> is all that there is. That will still continue to work fine in the above
> case, as both 4-byte and 8-byte structs have the same x value at the same
> offset. i.e. no driver updates for x_dev is needed.
>
> On the other hand, if there are drivers that do want/need the new field,
> they can also get to use it, but they do need to check for its presence
> before they do so, i.e they would work as below:
>
> if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
> // use flags field
> }
>
> Hope this is clear now.
Yes, this is the kind of explanation we need in our guideline doc.
Alternatives can be documented as well.
If we can list pros/cons in the doc, it will be easier to choose
the best approach and to explain the choice during code review.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-26 9:45 0% ` Ori Kam
@ 2022-01-26 10:52 4% ` Bruce Richardson
2022-01-26 11:21 0% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2022-01-26 10:52 UTC (permalink / raw)
To: Ori Kam
Cc: Jerin Jacob, NBU-Contact-Thomas Monjalon (EXTERNAL),
Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger
On Wed, Jan 26, 2022 at 09:45:18AM +0000, Ori Kam wrote:
>
>
> > -----Original Message-----
> > From: Bruce Richardson <bruce.richardson@intel.com>
> > Sent: Tuesday, January 25, 2022 8:14 PM
> > To: Ori Kam <orika@nvidia.com>
> > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> >
> > On Tue, Jan 25, 2022 at 06:09:42PM +0000, Bruce Richardson wrote:
> > > On Tue, Jan 25, 2022 at 03:58:45PM +0000, Ori Kam wrote:
> > > > Hi Bruce,
> > > >
> > > > > -----Original Message----- From: Bruce Richardson
> > > > > <bruce.richardson@intel.com> Sent: Monday, January 24, 2022 8:09 PM
> > > > > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow
> > > > > pre-configuration hints
> > > > >
> > > > > On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > > > > > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon
> > > > > > <thomas@monjalon.net> wrote:
> > > > > > >
> > > > > > > 24/01/2022 15:36, Jerin Jacob:
> > > > > > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> > > > > > > > <akozyrev@nvidia.com> wrote:
> > > > > > > > > +struct rte_flow_port_attr { + /** + * Version
> > > > > > > > > of the struct layout, should be 0. + */ +
> > > > > > > > > uint32_t version;
> > > > > > > >
> > > > > > > > Why version number? Across DPDK, we are using dynamic function
> > > > > > > > versioning, I think, that would be sufficient for ABI
> > > > > > > > versioning
> > > > > > >
> > > > > > > Function versioning is not ideal when the structure is accessed
> > > > > > > in many places like many drivers and library functions.
> > > > > > >
> > > > > > > The idea of this version field (which can be a bitfield) is to
> > > > > > > update it when some new features are added, so the users of the
> > > > > > > struct can check if a feature is there before trying to use it.
> > > > > > > It means a bit more code in the functions, but avoid duplicating
> > > > > > > functions as in function versioning.
> > > > > > >
> > > > > > > Another approach was suggested by Bruce, and applied to dmadev.
> > > > > > > It is assuming we only add new fields at the end (no removal),
> > > > > > > and focus on the size of the struct. By passing sizeof as an
> > > > > > > extra parameter, the function knows which fields are OK to use.
> > > > > > > Example:
> > > > > > > http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> > > > > >
> > > > > > + @Richardson, Bruce Either approach is fine, No strong opinion.
> > > > > > We can have one approach and use it across DPDK for consistency.
> > > > > >
> > > > >
> > > > > In general I prefer the size-based approach, mainly because of its
> > > > > simplicity. However, some other reasons why we may want to choose it:
> > > > >
> > > > > * It's completely hidden from the end user, and there is no need for
> > > > > an extra struct field that needs to be filled in
> > > > >
> > > > > * Related to that, for the version-field approach, if the field is
> > > > > present in a user-allocated struct, then you probably need to start
> > > > > preventing user error via: - having the external struct not have the
> > > > > field and use a separate internal struct to add in the version info
> > > > > after the fact in the versioned function. Alternatively, - provide a
> > > > > separate init function for each structure to fill in the version
> > > > > field appropriately
> > > > >
> > > > > * In general, using the size-based approach like in the linked
> > > > > example is more resilient since it's compiler-inserted, so there is
> > > > > reduced chance of error.
> > > > >
> > > > > * A sizeof field allows simple-enough handling in the drivers -
> > > > > especially since it does not allow removed fields. Each driver only
> > > > > needs to check that the size passed in is greater than that expected,
> > > > > thereby allowing us to have both updated and non-updated drivers
> > > > > co-existing simultaneously. [For a version field, the same scheme
> > > > > could also work if we keep the no-delete rule, but for a bitmask
> > > > > field, I believe things may get more complex in terms of checking]
> > > > >
> > > > > In terms of the limitations of using sizeof - requiring new fields to
> > > > > always go on the end, and preventing shrinking the struct - I think
> > > > > that the simplicity gains far outweigh the impact of these
> > > > > strictions.
> > > > >
> > > > > * Adding fields to struct is far more common than wanting to remove
> > > > > one
> > > > >
> > > > > * So long as the added field is at the end, even if the struct size
> > > > > doesn't change the scheme can still work as the versioned function
> > > > > for the old struct can ensure that the extra field is appropriately
> > > > > zeroed (rather than random) on entry into any driver function
> > > > >
> > > >
> > > > Zero can be a valid value so this is may result in an issue.
> > > >
> > >
> > > In this instance, I was using zero as a neutral, default-option value. If
> > > having zero as the default causes problems, we can always make the
> > > structure size change to force a new size value.
> > >
> > > > > * If we do want to remove a field, the space simply needs to be
> > > > > marked as reserved in the struct, until the next ABI break release,
> > > > > when it can be compacted. Again, function versioning can take care of
> > > > > appropriately zeroing this field on return, if necessary.
> > > > >
> > > >
> > > > This means that PMD will have to change just for removal of a field I
> > > > would say removal is not allowed.
> > > >
> > > > > My 2c from considering this for the implementation in dmadev. :-)
> > > >
> > > > Some concerns I have about your suggestion: 1. The size of the struct
> > > > is dependent on the system, for example Assume this struct { Uint16_t
> > > a; Uint32_t b; Uint8_t c; Uint32_t d; } In case of a 32-bit machine the
> > > size will be 128 bytes, while on a 64-bit machine it will be 96
> > >
> > > Actually, I believe that in just about every system we support it will be
> > > 4x4B i.e. 16 bytes in size. How do you compute 96 or 128 byte sizes? In
> > > any case, the actual size value doesn't matter in practice, since all
> > > sizes should be computed by the compiler using sizeof, rather than
> > > hard-coded.
> > >
You are correct, my mistake with the numbers.
> I still think there might be some issue but I can't think of anything.
> So dropping it.
>
> > > >
> > > > 2. ABI breakage, as far as I know changing size of a struct is ABI
> > > > breakage, since if the application got the size from previous version
> > > > and for example created array or allocated memory then using the new
> > > > structure will result in memory override.
> > > >
> > > > I know that flags/version is not easy since it means creating new
> > > > Structure for each change. I prefer to declare that size can change
> > > between DPDK releases is allowed, but as long as we say ABI breakage is
> > > > forbidden then I don't think your solution is valid. And we must go
> > > > with the version/flags and create new structure for each change.
> > > >
> > >
> > > whatever approach is taken for this, I believe we will always need to
> > > create a new structure for the changes. This is because only functions
> > > can be versioned, not structures. The only question therefore becomes how
> > > to pass ABI version information, and therefore by extension structure
> > > version information across a library to driver boundary. This has to be
> > > an extra field somewhere, either in a structure or as a function
> > > parameter. I'd prefer not in the structure as it exposes it to the user.
> > > In terms of the field value, it can either be explicit version info as
> > > version number or version flags, or implicit versioning via "size". Based
> > > off the "YAGNI" principle, I really would prefer just using sizes, as
> > > it's far easier to manage and work with for all concerned, and requires
> > > no additional documentation for the programmer or driver developer to
> > > understand.
> > >
> > As a third alternative that I would find acceptable, we could also just
> > take the approach of passing the ABI version explicitly across the function
> > call i.e. 22 for DPDK_21.11. I'd find this ok too on the basis that it's
> > largely self explanatory, and can be inserted automatically by the compiler
> > - again reducing chances of errors. [However, I also believe that using
> > sizes is still simpler again, which is why it's still my first choice! :-)]
> >
>
> Just to make sure I fully understand your suggestion.
> We will create a new struct for each change.
> The function will stay the same.
> For example I had the following:
>
> struct base {
> uint32_t x;
> };
>
> void function(struct base *input)
> {
> inner_func(input, sizeof(struct base));
> }
>
> Now I'm adding a new member, so it will look like this:
> struct new {
> uint32_t x;
> uint32_t y;
> };
>
> When I want to call the function I need to cast
> function((struct base *)new);
>
> Right?
>
> This means that in both cases sizeof will return the same value,
> What am I missing?
>
The scenario is as follows. Suppose we have the initial state as below:
struct x_dev_cfg {
int x;
};
int
x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
{
struct x_dev *dev = x_devs[dev_id];
// some setup/config may go here
return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) == 4
}
Now, supposing we need to add in a new field into the config structure, a
very common occurrence. This will indeed break the ABI, so we need to use
ABI versioning, to ensure that apps passing in the old structure, only call
a function which expects the old structure. Therefore, we need a copy of
the old structure, and a function to work on it. This gives this result:
struct x_dev_cfg {
int x;
bool flag; // new field;
};
struct x_dev_cfg_v22 { // needed for ABI-versioned function
int x;
};
/* this function will only be called by *newly-linked* code, which uses
* the new structure */
int
x_dev_cfg(int dev_id, struct x_dev_cfg *cfg)
{
struct x_dev *dev = x_devs[dev_id];
// some setup/config may go here
return dev->configure(cfg, sizeof(*cfg)); // sizeof(*cfg) is now 8
}
/* this function is called by apps linked against old version */
int
x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
{
struct x_dev *dev = x_devs[dev_id];
// some setup/config may go here
return dev->configure((void *)cfg, sizeof(*cfg)); // sizeof(*cfg) is still 4
}
With the above library code, we have different functions using the
different structures, so ABI compatibility is preserved - apps passing in a
4-byte struct call a function using the 4-byte struct, while newer apps can
use the 8-byte version.
The final part of the puzzle is then how drivers react to this change.
Originally, all drivers only use "x" in the config structure because that
is all that there is. That will still continue to work fine in the above
case, as both 4-byte and 8-byte structs have the same x value at the same
offset. i.e. no driver updates for x_dev is needed.
On the other hand, if there are drivers that do want/need the new field,
they can also get to use it, but they do need to check for its presence
before they do so, i.e they would work as below:
if (size_param > sizeof(struct x_dev_cfg_v22)) { // or "== sizeof(struct x_dev_cfg)"
// use flags field
}
Hope this is clear now.
/Bruce
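As a footnote, a sketch of how such a function pair is typically wired up with DPDK's symbol versioning macros from rte_function_versioning.h; the version numbers and the _v23 suffix here are illustrative assumptions, and the matching version.map entries are omitted:

	#include <rte_function_versioning.h>

	/* Default symbol: newly-linked applications get the new layout. */
	int
	x_dev_cfg_v23(int dev_id, struct x_dev_cfg *cfg)
	{
		struct x_dev *dev = x_devs[dev_id];
		return dev->configure(cfg, sizeof(*cfg));
	}
	BIND_DEFAULT_SYMBOL(x_dev_cfg, _v23, 23);

	/* Versioned symbol: apps linked against the old ABI keep the
	 * 4-byte structure behaviour. */
	int
	x_dev_cfg_v22(int dev_id, struct x_dev_cfg_v22 *cfg)
	{
		struct x_dev *dev = x_devs[dev_id];
		return dev->configure((void *)cfg, sizeof(*cfg));
	}
	VERSION_SYMBOL(x_dev_cfg, _v22, 22);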
^ permalink raw reply [relevance 4%]
* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-25 18:14 3% ` Bruce Richardson
@ 2022-01-26 9:45 0% ` Ori Kam
2022-01-26 10:52 4% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2022-01-26 9:45 UTC (permalink / raw)
To: Bruce Richardson
Cc: Jerin Jacob, NBU-Contact-Thomas Monjalon (EXTERNAL),
Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger
> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Tuesday, January 25, 2022 8:14 PM
> To: Ori Kam <orika@nvidia.com>
> Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
>
> On Tue, Jan 25, 2022 at 06:09:42PM +0000, Bruce Richardson wrote:
> > On Tue, Jan 25, 2022 at 03:58:45PM +0000, Ori Kam wrote:
> > > Hi Bruce,
> > >
> > > > -----Original Message----- From: Bruce Richardson
> > > > <bruce.richardson@intel.com> Sent: Monday, January 24, 2022 8:09 PM
> > > > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow
> > > > pre-configuration hints
> > > >
> > > > On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > > > > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon
> > > > > <thomas@monjalon.net> wrote:
> > > > > >
> > > > > > 24/01/2022 15:36, Jerin Jacob:
> > > > > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> > > > > > > <akozyrev@nvidia.com> wrote:
> > > > > > > > +struct rte_flow_port_attr { + /** + * Version
> > > > > > > > of the struct layout, should be 0. + */ +
> > > > > > > > uint32_t version;
> > > > > > >
> > > > > > > Why version number? Across DPDK, we are using dynamic function
> > > > > > > versioning, I think, that would be sufficient for ABI
> > > > > > > versioning
> > > > > >
> > > > > > Function versioning is not ideal when the structure is accessed
> > > > > > in many places like many drivers and library functions.
> > > > > >
> > > > > > The idea of this version field (which can be a bitfield) is to
> > > > > > update it when some new features are added, so the users of the
> > > > > > struct can check if a feature is there before trying to use it.
> > > > > > It means a bit more code in the functions, but avoid duplicating
> > > > > > functions as in function versioning.
> > > > > >
> > > > > > Another approach was suggested by Bruce, and applied to dmadev.
> > > > > > It is assuming we only add new fields at the end (no removal),
> > > > > > and focus on the size of the struct. By passing sizeof as an
> > > > > > extra parameter, the function knows which fields are OK to use.
> > > > > > Example:
> > > > > > http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> > > > >
> > > > > + @Richardson, Bruce Either approach is fine, No strong opinion.
> > > > > We can have one approach and use it across DPDK for consistency.
> > > > >
> > > >
> > > > In general I prefer the size-based approach, mainly because of its
> > > > simplicity. However, some other reasons why we may want to choose it:
> > > >
> > > > * It's completely hidden from the end user, and there is no need for
> > > > an extra struct field that needs to be filled in
> > > >
> > > > * Related to that, for the version-field approach, if the field is
> > > > present in a user-allocated struct, then you probably need to start
> > > > preventing user error via: - having the external struct not have the
> > > > field and use a separate internal struct to add in the version info
> > > > after the fact in the versioned function. Alternatively, - provide a
> > > > separate init function for each structure to fill in the version
> > > > field appropriately
> > > >
> > > > * In general, using the size-based approach like in the linked
> > > > example is more resilient since it's compiler-inserted, so there is
> > > > reduced chance of error.
> > > >
> > > > * A sizeof field allows simple-enough handling in the drivers -
> > > > especially since it does not allow removed fields. Each driver only
> > > > needs to check that the size passed in is greater than that expected,
> > > > thereby allowing us to have both updated and non-updated drivers
> > > > co-existing simultaneously. [For a version field, the same scheme
> > > > could also work if we keep the no-delete rule, but for a bitmask
> > > > field, I believe things may get more complex in terms of checking]
> > > >
> > > > In terms of the limitations of using sizeof - requiring new fields to
> > > > always go on the end, and preventing shrinking the struct - I think
> > > > that the simplicity gains far outweigh the impact of these
> > > > strictions.
> > > >
> > > > * Adding fields to struct is far more common than wanting to remove
> > > > one
> > > >
> > > > * So long as the added field is at the end, even if the struct size
> > > > doesn't change the scheme can still work as the versioned function
> > > > for the old struct can ensure that the extra field is appropriately
> > > > zeroed (rather than random) on entry into any driver function
> > > >
> > >
> > > Zero can be a valid value so this is may result in an issue.
> > >
> >
> > In this instance, I was using zero as a neutral, default-option value. If
> > having zero as the default causes problems, we can always make the
> > structure size change to force a new size value.
> >
> > > > * If we do want to remove a field, the space simply needs to be
> > > > marked as reserved in the struct, until the next ABI break release,
> > > > when it can be compacted. Again, function versioning can take care of
> > > > appropriately zeroing this field on return, if necessary.
> > > >
> > >
> > > This means that PMD will have to change just for removal of a field I
> > > would say removal is not allowed.
> > >
> > > > My 2c from considering this for the implementation in dmadev. :-)
> > >
> > > Some concerns I have about your suggestion: 1. The size of the struct
> > > is dependent on the system, for example Assume this struct { Uint16_t
> > > a; Uint32_t b; Uint8_t c; Uint32_t d; } In case of a 32-bit machine the
> > > size will be 128 bytes, while on a 64-bit machine it will be 96
> >
> > Actually, I believe that in just about every system we support it will be
> > 4x4B i.e. 16 bytes in size. How do you compute 96 or 128 byte sizes? In
> > any case, the actual size value doesn't matter in practice, since all
> > sizes should be computed by the compiler using sizeof, rather than
> > hard-coded.
> >
You are correct, my mistake with the numbers.
I still think there might be some issue but I can't think of anything.
So dropping it.
> > >
> > > 2. ABI breakage, as far as I know changing size of a struct is ABI
> > > breakage, since if the application got the size from previous version
> > > and for example created array or allocated memory then using the new
> > > structure will result in memory override.
> > >
> > > I know that flags/version is not easy since it means creating new
> > > Structure for each change. I prefer to declare that size can change
> > > between DPDK releases is allowed, but as long as we say ABI breakage is
> > > forbidden then I don't think your solution is valid. And we must go
> > > with the version/flags and create new structure for each change.
> > >
> >
> > whatever approach is taken for this, I believe we will always need to
> > create a new structure for the changes. This is because only functions
> > can be versioned, not structures. The only question therefore becomes how
> > to pass ABI version information, and therefore by extension structure
> > version information across a library to driver boundary. This has to be
> > an extra field somewhere, either in a structure or as a function
> > parameter. I'd prefer not in the structure as it exposes it to the user.
> > In terms of the field value, it can either be explicit version info as
> > version number or version flags, or implicit versioning via "size". Based
> > off the "YAGNI" principle, I really would prefer just using sizes, as
> > it's far easier to manage and work with for all concerned, and requires
> > no additional documentation for the programmer or driver developer to
> > understand.
> >
> As a third alternative that I would find acceptable, we could also just
> take the approach of passing the ABI version explicitly across the function
> call i.e. 22 for DPDK_21.11. I'd find this ok too on the basis that it's
> largely self explanatory, and can be inserted automatically by the compiler
> - again reducing chances of errors. [However, I also believe that using
> sizes is still simpler again, which is why it's still my first choice! :-)]
>
Just to make sure I fully understand your suggestion.
We will create a new struct for each change.
The function will stay the same.
For example I had the following:
struct base {
uint32_t x;
};
void function(struct base *input)
{
inner_func(input, sizeof(struct base));
}
Now I'm adding a new member, so it will look like this:
struct new {
uint32_t x;
uint32_t y;
};
When I want to call the function I need to cast
function((struct base *)new);
Right?
This means that in both cases sizeof will return the same value,
What am I missing?
> /Bruce
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-25 18:09 3% ` Bruce Richardson
@ 2022-01-25 18:14 3% ` Bruce Richardson
2022-01-26 9:45 0% ` Ori Kam
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2022-01-25 18:14 UTC (permalink / raw)
To: Ori Kam
Cc: Jerin Jacob, NBU-Contact-Thomas Monjalon (EXTERNAL),
Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger
On Tue, Jan 25, 2022 at 06:09:42PM +0000, Bruce Richardson wrote:
> On Tue, Jan 25, 2022 at 03:58:45PM +0000, Ori Kam wrote:
> > Hi Bruce,
> >
> > > -----Original Message----- From: Bruce Richardson
> > > <bruce.richardson@intel.com> Sent: Monday, January 24, 2022 8:09 PM
> > > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow
> > > pre-configuration hints
> > >
> > > On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > > > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon
> > > > <thomas@monjalon.net> wrote:
> > > > >
> > > > > 24/01/2022 15:36, Jerin Jacob:
> > > > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> > > > > > <akozyrev@nvidia.com> wrote:
> > > > > > > +struct rte_flow_port_attr { + /** + * Version
> > > > > > > of the struct layout, should be 0. + */ +
> > > > > > > uint32_t version;
> > > > > >
> > > > > > Why version number? Across DPDK, we are using dynamic function
> > > > > > versioning, I think, that would be sufficient for ABI
> > > > > > versioning
> > > > >
> > > > > Function versioning is not ideal when the structure is accessed
> > > > > in many places like many drivers and library functions.
> > > > >
> > > > > The idea of this version field (which can be a bitfield) is to
> > > > > update it when some new features are added, so the users of the
> > > > > struct can check if a feature is there before trying to use it.
> > > > > It means a bit more code in the functions, but avoid duplicating
> > > > > functions as in function versioning.
> > > > >
> > > > > Another approach was suggested by Bruce, and applied to dmadev.
> > > > > It is assuming we only add new fields at the end (no removal),
> > > > > and focus on the size of the struct. By passing sizeof as an
> > > > > extra parameter, the function knows which fields are OK to use.
> > > > > Example:
> > > > > http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> > > >
> > > > + @Richardson, Bruce Either approach is fine, No strong opinion.
> > > > We can have one approach and use it across DPDK for consistency.
> > > >
> > >
> > > In general I prefer the size-based approach, mainly because of its
> > > simplicity. However, some other reasons why we may want to choose it:
> > >
> > > * It's completely hidden from the end user, and there is no need for
> > > an extra struct field that needs to be filled in
> > >
> > > * Related to that, for the version-field approach, if the field is
> > > present in a user-allocated struct, then you probably need to start
> > > preventing user error via: - having the external struct not have the
> > > field and use a separate internal struct to add in the version info
> > > after the fact in the versioned function. Alternatively, - provide a
> > > separate init function for each structure to fill in the version
> > > field appropriately
> > >
> > > * In general, using the size-based approach like in the linked
> > > example is more resilient since it's compiler-inserted, so there is
> > > reduced chance of error.
> > >
> > > * A sizeof field allows simple-enough handling in the drivers -
> > > especially since it does not allow removed fields. Each driver only
> > > needs to check that the size passed in is greater than that expected,
> > > thereby allowing us to have both updated and non-updated drivers
> > > co-existing simultaneously. [For a version field, the same scheme
> > > could also work if we keep the no-delete rule, but for a bitmask
> > > field, I believe things may get more complex in terms of checking]
> > >
> > > In terms of the limitations of using sizeof - requiring new fields to
> > > always go on the end, and preventing shrinking the struct - I think
> > > that the simplicity gains far outweigh the impact of these
> > > strictions.
> > >
> > > * Adding fields to struct is far more common than wanting to remove
> > > one
> > >
> > > * So long as the added field is at the end, even if the struct size
> > > doesn't change the scheme can still work as the versioned function
> > > for the old struct can ensure that the extra field is appropriately
> > > zeroed (rather than random) on entry into any driver function
> > >
> >
> > Zero can be a valid value so this is may result in an issue.
> >
>
> In this instance, I was using zero as a neutral, default-option value. If
> having zero as the default causes problems, we can always make the
> structure size change to force a new size value.
>
> > > * If we do want to remove a field, the space simply needs to be
> > > marked as reserved in the struct, until the next ABI break release,
> > > when it can be compacted. Again, function versioning can take care of
> > > appropriately zeroing this field on return, if necessary.
> > >
> >
> > This means that PMD will have to change just for removal of a field I
> > would say removal is not allowed.
> >
> > > My 2c from considering this for the implementation in dmadev. :-)
> >
> > Some concerns I have about your suggestion: 1. The size of the struct
> > is dependent on the system, for example Assume this struct { Uint16_t
> > a; Uint32_t b; Uint8_t c; Uint32_t d; } In case of a 32-bit machine the
> > size will be 128 bytes, while on a 64-bit machine it will be 96
>
> Actually, I believe that in just about every system we support it will be
> 4x4B i.e. 16 bytes in size. How do you compute 96 or 128 byte sizes? In
> any case, the actual size value doesn't matter in practice, since all
> sizes should be computed by the compiler using sizeof, rather than
> hard-coded.
>
> >
> > 2. ABI breakage, as far as I know changing size of a struct is ABI
> > breakage, since if the application got the size from previous version
> > and for example created array or allocated memory then using the new
> > structure will result in memory override.
> >
> > I know that flags/version is not easy since it means creating new
> > Structure for each change. I prefer to declare that size can change
> > between DPDK releases is allowed, but as long as we say ABI breakage is
> > forbidden then I don't think your solution is valid. And we must go
> > with the version/flags and create new structure for each change.
> >
>
> Whatever approach is taken for this, I believe we will always need to
> create a new structure for the changes. This is because only functions
> can be versioned, not structures. The only question therefore becomes how
> to pass ABI version information, and therefore by extension structure
> version information across a library to driver boundary. This has to be
> an extra field somewhere, either in a structure or as a function
> parameter. I'd prefer not in the structure as it exposes it to the user.
> In terms of the field value, it can either be explicit version info as
> version number or version flags, or implicit versioning via "size". Based
> off the "YAGNI" principle, I really would prefer just using sizes, as
> it's far easier to manage and work with for all concerned, and requires
> no additional documentation for the programmer or driver developer to
> understand.
>
As a third alternative that I would find acceptable, we could also just
take the approach of passing the ABI version explicitly across the function
call i.e. 22 for DPDK_21.11. I'd find this ok too on the basis that it's
largely self-explanatory, and can be inserted automatically by the compiler
- again reducing chances of errors. [However, I also believe that using
sizes is still simpler again, which is why it's still my first choice! :-)]
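For illustration, a minimal sketch of how that could look (hypothetical
names, not from any patch): the public header wraps the real call in a
macro so the compiler inserts the caller's ABI version automatically:

#define RTE_CALLER_ABI_VERSION 22 /* assumed constant; e.g. 22 for DPDK 21.11 */

int rte_flow_configure_v(uint16_t port_id,
                         const struct rte_flow_port_attr *attr,
                         unsigned int abi_version,
                         struct rte_flow_error *error);

#define rte_flow_configure(port_id, attr, error) \
        rte_flow_configure_v((port_id), (attr), RTE_CALLER_ABI_VERSION, (error))

The driver can then branch on abi_version exactly as it would on a size
value.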
/Bruce
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-25 15:58 4% ` Ori Kam
@ 2022-01-25 18:09 3% ` Bruce Richardson
2022-01-25 18:14 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Bruce Richardson @ 2022-01-25 18:09 UTC (permalink / raw)
To: Ori Kam
Cc: Jerin Jacob, NBU-Contact-Thomas Monjalon (EXTERNAL),
Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger
On Tue, Jan 25, 2022 at 03:58:45PM +0000, Ori Kam wrote:
> Hi Bruce,
>
> > -----Original Message-----
> > From: Bruce Richardson <bruce.richardson@intel.com>
> > Sent: Monday, January 24, 2022 8:09 PM
> > Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
> >
> > On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > > >
> > > > 24/01/2022 15:36, Jerin Jacob:
> > > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > > > > +struct rte_flow_port_attr {
> > > > > > + /**
> > > > > > + * Version of the struct layout, should be 0.
> > > > > > + */
> > > > > > + uint32_t version;
> > > > >
> > > > > Why version number? Across DPDK, we are using dynamic function
> > > > > versioning, I think, that would be sufficient for ABI versioning
> > > >
> > > > Function versioning is not ideal when the structure is accessed
> > > > in many places like many drivers and library functions.
> > > >
> > > > The idea of this version field (which can be a bitfield)
> > > > is to update it when some new features are added,
> > > > so the users of the struct can check if a feature is there
> > > > before trying to use it.
> > > > It means a bit more code in the functions, but avoid duplicating functions
> > > > as in function versioning.
> > > >
> > > > Another approach was suggested by Bruce, and applied to dmadev.
> > > > It is assuming we only add new fields at the end (no removal),
> > > > and focus on the size of the struct.
> > > > By passing sizeof as an extra parameter, the function knows
> > > > which fields are OK to use.
> > > > Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> > >
> > > + @Richardson, Bruce
> > > Either approach is fine, No strong opinion. We can have one approach
> > > and use it across DPDK for consistency.
> > >
> >
> > In general I prefer the size-based approach, mainly because of its
> > simplicity. However, some other reasons why we may want to choose it:
> >
> > * It's completely hidden from the end user, and there is no need for an
> > extra struct field that needs to be filled in
> >
> > * Related to that, for the version-field approach, if the field is present
> > in a user-allocated struct, then you probably need to start preventing user
> > error via:
> > - having the external struct not have the field and use a separate
> > internal struct to add in the version info after the fact in the
> > versioned function. Alternatively,
> > - provide a separate init function for each structure to fill in the
> > version field appropriately
> >
> > * In general, using the size-based approach like in the linked example is
> > more resilient since it's compiler-inserted, so there is reduced chance
> > of error.
> >
> > * A sizeof field allows simple-enough handling in the drivers - especially
> > since it does not allow removed fields. Each driver only needs to check
> > that the size passed in is greater than that expected, thereby allowing
> > us to have both updated and non-updated drivers co-existing simultaneously.
> > [For a version field, the same scheme could also work if we keep the
> > no-delete rule, but for a bitmask field, I believe things may get more
> > complex in terms of checking]
> >
> > In terms of the limitations of using sizeof - requiring new fields to
> > always go on the end, and preventing shrinking the struct - I think that the
> > simplicity gains far outweigh the impact of these restrictions.
> >
> > * Adding fields to struct is far more common than wanting to remove one
> >
> > * So long as the added field is at the end, even if the struct size doesn't
> > change the scheme can still work as the versioned function for the old
> > struct can ensure that the extra field is appropriately zeroed (rather than
> > random) on entry into any driver function
> >
>
> Zero can be a valid value, so this may result in an issue.
>
In this instance, I was using zero as a neutral, default-option value. If
having zero as the default causes problems, we can always make the
structure size change to force a new size value.
> > * If we do want to remove a field, the space simply needs to be marked as
> > reserved in the struct, until the next ABI break release, when it can be
> > compacted. Again, function versioning can take care of appropriately
> > zeroing this field on return, if necessary.
> >
>
> This means that the PMD will have to change just for the removal of a
> field. I would say removal is not allowed.
>
> > My 2c from considering this for the implementation in dmadev. :-)
>
> Some concerns I have about your suggestion:
> 1. The size of the struct is dependent on the system. For example,
> assume this struct:
> {
>     uint16_t a;
>     uint32_t b;
>     uint8_t c;
>     uint32_t d;
> }
> In case of a 32-bit machine the size will be 128 bytes, while on a
> 64-bit machine it will be 96
Actually, I believe that in just about every system we support it will be
4x4B i.e. 16 bytes in size. How do you compute 96 or 128 byte sizes? In any
case, the actual size value doesn't matter in practice, since all sizes
should be computed by the compiler using sizeof, rather than hard-coded.
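For reference, a quick check of that layout (a standalone sketch; natural
alignment assumed, as on the ABIs we support):

#include <stdint.h>
#include <stdio.h>

struct s {
        uint16_t a; /* offset 0 */
        uint32_t b; /* offset 4, after 2 bytes of padding */
        uint8_t  c; /* offset 8 */
        uint32_t d; /* offset 12, after 3 bytes of padding */
};

int main(void)
{
        printf("%zu\n", sizeof(struct s)); /* prints 16 on common 32- and 64-bit ABIs */
        return 0;
}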
>
> 2. ABI breakage: as far as I know, changing the size of a struct is an ABI
> breakage, since if the application got the size from a previous version and,
> for example, created an array or allocated memory, then using the new
> structure will result in a memory override.
>
> I know that flags/version is not easy since it means creating a new
> structure for each change. I would prefer to declare that changing the size
> between DPDK releases is allowed, but as long as we say ABI breakage is
> forbidden then I don't think your solution is valid.
> And we must go with the version/flags and create a new structure for each
> change.
>
Whatever approach is taken for this, I believe we will always need to
create a new structure for the changes. This is because only functions can
be versioned, not structures. The only question therefore becomes how to
pass ABI version information, and therefore by extension structure version
information across a library to driver boundary. This has to be an extra
field somewhere, either in a structure or as a function parameter. I'd
prefer not in the structure as it exposes it to the user. In terms of the
field value, it can either be explicit version info as version number or
version flags, or implicit versioning via "size". Based off the "YAGNI"
principle, I really would prefer just using sizes, as it's far easier to
manage and work with for all concerned, and requires no additional
documentation for the programmer or driver developer to understand.
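To make the size-based variant concrete, a minimal sketch (assumed names,
not from an actual patch): the public API is a static inline wrapper, so
sizeof() is inserted by the compiler from the headers the application was
built against:

int __rte_flow_configure(uint16_t port_id,
                         const struct rte_flow_port_attr *attr,
                         size_t attr_size,
                         struct rte_flow_error *error);

static inline int
rte_flow_configure(uint16_t port_id,
                   const struct rte_flow_port_attr *attr,
                   struct rte_flow_error *error)
{
        /* attr_size carries the struct version across the library/driver
         * boundary without the user ever seeing it */
        return __rte_flow_configure(port_id, attr, sizeof(*attr), error);
}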
Regards,
/Bruce
^ permalink raw reply [relevance 3%]
* RE: [RFC 1/3] ethdev: support GRE optional fields
2022-01-25 14:29 0% ` Ferruh Yigit
@ 2022-01-25 16:03 0% ` Ori Kam
0 siblings, 0 replies; 200+ results
From: Ori Kam @ 2022-01-25 16:03 UTC (permalink / raw)
To: Ferruh Yigit, Sean Zhang (Networking SW),
NBU-Contact-Thomas Monjalon (EXTERNAL),
Matan Azrad
Cc: Andrew Rybchenko, dev
Hi Ferruh,
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Tuesday, January 25, 2022 4:29 PM
> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
>
> On 1/25/2022 1:06 PM, Ori Kam wrote:
> > Hi Ferruh,
> >
> >> -----Original Message-----
> >> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
> >>
> >> On 1/25/2022 9:49 AM, Sean Zhang (Networking SW) wrote:
> >>> Hi,
> >>>
> >>>> -----Original Message-----
> >>>> From: Ori Kam <orika@nvidia.com>
> >>>> Sent: Wednesday, January 19, 2022 6:57 PM
> >>>> To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
> >>>> Sean Zhang (Networking SW) <xiazhang@nvidia.com>; Matan Azrad
> >>>> <matan@nvidia.com>; Ferruh Yigit <ferruh.yigit@intel.com>
> >>>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; dev@dpdk.org
> >>>> Subject: RE: [RFC 1/3] ethdev: support GRE optional fields
> >>>>
> >>>> Hi,
> >>>>
> >>>>> -----Original Message-----
> >>>>> From: Thomas Monjalon <thomas@monjalon.net>
> >>>>> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
> >>>>>
> >>>>> 19/01/2022 10:53, Ferruh Yigit:
> >>>>>> On 12/30/2021 3:08 AM, Sean Zhang wrote:
> >>>>>>> --- a/lib/ethdev/rte_flow.h
> >>>>>>> +++ b/lib/ethdev/rte_flow.h
> >>>>>>> /**
> >>>>>>> + * RTE_FLOW_ITEM_TYPE_GRE_OPTION.
> >>>>>>> + *
> >>>>>>> + * Matches GRE optional fields in header.
> >>>>>>> + */
> >>>>>>> +struct rte_gre_hdr_option {
> >>>>>>> + rte_be16_t checksum;
> >>>>>>> + rte_be32_t key;
> >>>>>>> + rte_be32_t sequence;
> >>>>>>> +};
> >>>>>>> +
> >>>>>>
> >>>>>> Hi Ori, Andrew,
> >>>>>>
> >>>>>> The decision was to have protocol structs in the net library and
> >>>>>> flow structs use from there, wasn't it?
> >>>>>> (Btw, a deprecation notice is still pending to clear some existing
> >>>>>> ones)
> >>>>>>
> >>>>>> So for the GRE optional fields, what about having a struct in the
> >>>> 'rte_gre.h'?
> >>>>>> (Also perhaps an GRE extended protocol header can be defined
> >>>>>> combining 'rte_gre_hdr' and optional fields struct.) Later flow API
> >>>>>> struct can embed that struct.
> >>>>>
> >>>>> +1 for using librte_net.
> >>>>> This addition in rte_flow looks to be a mistake.
> >>>>> Please fix the next version.
> >>>>>
> >>>> Nice idea,
> >>>> but my main concern is how the header is defined in the header file.
> >>>> Since some of the fields are optional, this will look something like this:
> >>>> gre_hdr_option_checksum {
> >>>> rte_be_16_t checksum;
> >>>> }
> >>>>
> >>>> gre_hdr_option_key {
> >>>> rte_be_32_t key;
> >>>> }
> >>>>
> >>>> gre_hdr_option_sequence {
> >>>> rte_be_32_t sequence;
> >>>> }
> >>>>
> >>>> I don't want to have so many rte_flow_items. As more and more protocols
> >>>> have optional data, it doesn't make sense to create an item for each.
> >>>>
> >>>> If I'm looking at it from an ideal place, I would like the optional
> >>>> fields to be part of the original item.
> >>>> For example in test pmd I would like to write:
> >>>> Eth / ipv4 / udp / gre flags is key & checksum checksum is yyy key is xxx / end
> >>>> And not:
> >>>> Eth / ipv4 / udp / gre flags is key & checksum / gre_option checksum is yyy key is xxx / end
> >>>> This means that the structure will look like this:
> >>>> struct rte_flow_item_gre {
> >>>> union {
> >>>> struct {
> >>>> /**
> >>>> * Checksum (1b), reserved 0 (12b), version (3b).
> >>>> * Refer to RFC 2784.
> >>>> */
> >>>> rte_be16_t c_rsvd0_ver;
> >>>> rte_be16_t protocol; /**< Protocol type. */
> >>>> }
> >>>> struct rte_gre_hdr hdr
> >>>> }
> >>>> rte_be_16_t checksum;
> >>>> rte_be_32_t key;
> >>>> rte_be_32_t sequence;
> >>>> };
> >>>> The main issue with this is that it breaks the ABI. Maybe to solve this
> >>>> we can create a new structure, gre_ext?
> >>>>
> >>>> In any case, I think we should consider how we can allow adding members
> >>>> to structures without ABI breakage.
> >>>>
> >>>> Best,
> >>>> Ori
> >>>
> >>> Thanks for the comments and suggestion.
> >>> So the acceptable solution is to have new structs defined in rte_gre.h?
> >>> struct gre_hdr_opt_checksum {
> >>> rte_be_16_t checksum;
> >>> }
> >>>
> >>> struct gre_hdr_opt_key {
> >>> rte_be_32_t key;
> >>> }
> >>>
> >>> struct gre_hdr_opt_sequence {
> >>> rte_be_32_t sequence;
> >>> }
> >>>
> >>> And to add new struct gre_ext defined in rte_flow.h:
> >>> struct gre_ext {
> >>> struct rte_gre_hdr hdr;
> >>> struct gre_hdr_opt_checksum checksum;
> >>> struct gre_hdr_opt_key key;
> >>> struct gre_hdr_opt_sequence seq;
> >>> };
> >>>
> >>> And we use struct gre_ext for this newly added flow item gre_option.
> >>>
> >>
> >> What about having a struct for 'options' and using it in the flow item for options,
> >> like:
> >>
> >> struct gre_hdr_opt {
> >> struct gre_hdr_opt_checksum checksum;
> >> struct gre_hdr_opt_key key;
> >> struct gre_hdr_opt_sequence seq;
> >> }
> >>
> >> struct gre_hdr_ext {
> >> struct rte_gre_hdr hdr;
> >> struct gre_hdr_opt;
> >> }
> >>
> >> struct rte_flow_item_gre_opt {
> >> struct gre_hdr_opt hdr;
> >> }
> >
> > From my understanding the header should reflect structures
> > as they appear in the spec.
> >
> > If we look at the spec, from my understanding each of those items is stand-alone.
> > It is possible to have just key or key and seq or any other combination.
> > So the struct you suggested is not a valid struct in the GRE header.
> >
>
> If it is not a valid header representation, please forget about it.
>
> But this means the initially suggested 'struct gre_ext' is wrong, right?
>
> So should 'rte_flow_item_gre_opt' use separate structs, like:
>
> struct rte_flow_item_gre_opt {
> struct gre_hdr_opt_checksum checksum;
> struct gre_hdr_opt_key key;
> struct gre_hdr_opt_sequence seq;
> }
>
Yes, this is the last suggestion from Sean; the only difference is that
he created a new item, gre_ext, that holds the struct you listed above
plus the GRE header. This means that from rte_flow there is only one GRE
item (the old one can be deprecated).
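For clarity, a sketch of where this seems to be converging (struct and
field names assumed, following the definitions discussed above):

/* rte_gre.h: one struct per stand-alone optional field */
struct rte_gre_hdr_opt_checksum { rte_be16_t checksum; };
struct rte_gre_hdr_opt_key      { rte_be32_t key; };
struct rte_gre_hdr_opt_sequence { rte_be32_t sequence; };

/* rte_flow.h: a single GRE item carrying the base header plus options */
struct rte_flow_item_gre_ext {
        struct rte_gre_hdr hdr;
        struct rte_gre_hdr_opt_checksum checksum;
        struct rte_gre_hdr_opt_key key;
        struct rte_gre_hdr_opt_sequence sequence;
};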
Ori
>
> > If you are O.K with adding such a struct to the gre file, I will also be O.K with it.
> >
> > Best,
> > Ori
> >>
> >>> Correct me if my understanding is not right.
> >>>
> >>> Thanks,
> >>> Sean
> >>>
> >>>
> >
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-24 18:08 3% ` Bruce Richardson
2022-01-25 1:14 0% ` Alexander Kozyrev
@ 2022-01-25 15:58 4% ` Ori Kam
2022-01-25 18:09 3% ` Bruce Richardson
1 sibling, 1 reply; 200+ results
From: Ori Kam @ 2022-01-25 15:58 UTC (permalink / raw)
To: Bruce Richardson, Jerin Jacob
Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
Alexander Kozyrev, dpdk-dev, Ivan Malov, Andrew Rybchenko,
Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
Ajit Khaparde, David Marchand, Olivier Matz, Stephen Hemminger
Hi Bruce,
> -----Original Message-----
> From: Bruce Richardson <bruce.richardson@intel.com>
> Sent: Monday, January 24, 2022 8:09 PM
> Subject: Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
>
> On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> > >
> > > 24/01/2022 15:36, Jerin Jacob:
> > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > > > +struct rte_flow_port_attr {
> > > > > + /**
> > > > > + * Version of the struct layout, should be 0.
> > > > > + */
> > > > > + uint32_t version;
> > > >
> > > > Why version number? Across DPDK, we are using dynamic function
> > > > versioning, I think, that would be sufficient for ABI versioning
> > >
> > > Function versioning is not ideal when the structure is accessed
> > > in many places like many drivers and library functions.
> > >
> > > The idea of this version field (which can be a bitfield)
> > > is to update it when some new features are added,
> > > so the users of the struct can check if a feature is there
> > > before trying to use it.
> > > It means a bit more code in the functions, but avoid duplicating functions
> > > as in function versioning.
> > >
> > > Another approach was suggested by Bruce, and applied to dmadev.
> > > It is assuming we only add new fields at the end (no removal),
> > > and focus on the size of the struct.
> > > By passing sizeof as an extra parameter, the function knows
> > > which fields are OK to use.
> > > Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> >
> > + @Richardson, Bruce
> > Either approach is fine, No strong opinion. We can have one approach
> > and use it across DPDK for consistency.
> >
>
> In general I prefer the size-based approach, mainly because of its
> simplicity. However, some other reasons why we may want to choose it:
>
> * It's completely hidden from the end user, and there is no need for an
> extra struct field that needs to be filled in
>
> * Related to that, for the version-field approach, if the field is present
> in a user-allocated struct, then you probably need to start preventing user
> error via:
> - having the external struct not have the field and use a separate
> internal struct to add in the version info after the fact in the
> versioned function. Alternatively,
> - provide a separate init function for each structure to fill in the
> version field appropriately
>
> * In general, using the size-based approach like in the linked example is
> more resilient since it's compiler-inserted, so there is reduced chance
> of error.
>
> * A sizeof field allows simple-enough handling in the drivers - especially
> since it does not allow removed fields. Each driver only needs to check
> that the size passed in is greater than that expected, thereby allowing
> us to have both updated and non-updated drivers co-existing simultaneously.
> [For a version field, the same scheme could also work if we keep the
> no-delete rule, but for a bitmask field, I believe things may get more
> complex in terms of checking]
>
> In terms of the limitations of using sizeof - requiring new fields to
> always go on the end, and preventing shrinking the struct - I think that the
> simplicity gains far outweigh the impact of these restrictions.
>
> * Adding fields to struct is far more common than wanting to remove one
>
> * So long as the added field is at the end, even if the struct size doesn't
> change the scheme can still work as the versioned function for the old
> struct can ensure that the extra field is appropriately zeroed (rather than
> random) on entry into any driver function
>
Zero can be a valid value, so this may result in an issue.
> * If we do want to remove a field, the space simply needs to be marked as
> reserved in the struct, until the next ABI break release, when it can be
> compacted. Again, function versioning can take care of appropriately
> zeroing this field on return, if necessary.
>
This means that the PMD will have to change just for the removal of a
field. I would say removal is not allowed.
> My 2c from considering this for the implementation in dmadev. :-)
Some concerns I have about your suggestion:
1. The size of the struct is dependent on the system. For example,
assume this struct:
{
    uint16_t a;
    uint32_t b;
    uint8_t c;
    uint32_t d;
}
In case of a 32-bit machine the size will be 128 bytes, while on a
64-bit machine it will be 96
2. ABI breakage: as far as I know, changing the size of a struct is an ABI
breakage, since if the application got the size from a previous version and,
for example, created an array or allocated memory, then using the new
structure will result in a memory override.
I know that flags/version is not easy since it means creating a new
structure for each change. I would prefer to declare that changing the size
between DPDK releases is allowed, but as long as we say ABI breakage is
forbidden then I don't think your solution is valid.
And we must go with the version/flags and create a new structure for each
change.
Best,
Ori
>
> /Bruce
^ permalink raw reply [relevance 4%]
* Re: [RFC 1/3] ethdev: support GRE optional fields
2022-01-25 13:06 0% ` Ori Kam
@ 2022-01-25 14:29 0% ` Ferruh Yigit
2022-01-25 16:03 0% ` Ori Kam
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2022-01-25 14:29 UTC (permalink / raw)
To: Ori Kam, Sean Zhang (Networking SW),
NBU-Contact-Thomas Monjalon (EXTERNAL),
Matan Azrad
Cc: Andrew Rybchenko, dev
On 1/25/2022 1:06 PM, Ori Kam wrote:
> Hi Ferruh,
>
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
>>
>> On 1/25/2022 9:49 AM, Sean Zhang (Networking SW) wrote:
>>> Hi,
>>>
>>>> -----Original Message-----
>>>> From: Ori Kam <orika@nvidia.com>
>>>> Sent: Wednesday, January 19, 2022 6:57 PM
>>>> To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
>>>> Sean Zhang (Networking SW) <xiazhang@nvidia.com>; Matan Azrad
>>>> <matan@nvidia.com>; Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; dev@dpdk.org
>>>> Subject: RE: [RFC 1/3] ethdev: support GRE optional fields
>>>>
>>>> Hi,
>>>>
>>>>> -----Original Message-----
>>>>> From: Thomas Monjalon <thomas@monjalon.net>
>>>>> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
>>>>>
>>>>> 19/01/2022 10:53, Ferruh Yigit:
>>>>>> On 12/30/2021 3:08 AM, Sean Zhang wrote:
>>>>>>> --- a/lib/ethdev/rte_flow.h
>>>>>>> +++ b/lib/ethdev/rte_flow.h
>>>>>>> /**
>>>>>>> + * RTE_FLOW_ITEM_TYPE_GRE_OPTION.
>>>>>>> + *
>>>>>>> + * Matches GRE optional fields in header.
>>>>>>> + */
>>>>>>> +struct rte_gre_hdr_option {
>>>>>>> + rte_be16_t checksum;
>>>>>>> + rte_be32_t key;
>>>>>>> + rte_be32_t sequence;
>>>>>>> +};
>>>>>>> +
>>>>>>
>>>>>> Hi Ori, Andrew,
>>>>>>
>>>>>> The decision was to have protocol structs in the net library and
>>>>>> flow structs use from there, wasn't it?
>>>>>> (Btw, a deprecation notice is still pending to clear some existing
>>>>>> ones)
>>>>>>
>>>>>> So for the GRE optional fields, what about having a struct in the
>>>> 'rte_gre.h'?
>>>>>> (Also perhaps an GRE extended protocol header can be defined
>>>>>> combining 'rte_gre_hdr' and optional fields struct.) Later flow API
>>>>>> struct can embed that struct.
>>>>>
>>>>> +1 for using librte_net.
>>>>> This addition in rte_flow looks to be a mistake.
>>>>> Please fix the next version.
>>>>>
>>>> Nice idea,
>>>> but my main concern is how the header is defined in the header file.
>>>> Since some of the fields are optional, this will look something like this:
>>>> gre_hdr_option_checksum {
>>>> rte_be_16_t checksum;
>>>> }
>>>>
>>>> gre_hdr_option_key {
>>>> rte_be_32_t key;
>>>> }
>>>>
>>>> gre_hdr_option_sequence {
>>>> rte_be_32_t sequence;
>>>> }
>>>>
>>>> I don't want to have so many rte_flow_items. As more and more protocols
>>>> have optional data, it doesn't make sense to create an item for each.
>>>>
>>>> If I'm looking at it from an ideal place, I would like the optional
>>>> fields to be part of the original item.
>>>> For example in test pmd I would like to write:
>>>> Eth / ipv4 / udp / gre flags is key & checksum checksum is yyy key is xxx / end
>>>> And not:
>>>> Eth / ipv4 / udp / gre flags is key & checksum / gre_option checksum is yyy key is xxx / end
>>>> This means that the structure will look like this:
>>>> struct rte_flow_item_gre {
>>>> union {
>>>> struct {
>>>> /**
>>>> * Checksum (1b), reserved 0 (12b), version (3b).
>>>> * Refer to RFC 2784.
>>>> */
>>>> rte_be16_t c_rsvd0_ver;
>>>> rte_be16_t protocol; /**< Protocol type. */
>>>> }
>>>> struct rte_gre_hdr hdr
>>>> }
>>>> rte_be_16_t checksum;
>>>> rte_be_32_t key;
>>>> rte_be_32_t sequence;
>>>> };
>>>> The main issue with this is that it breaks the ABI. Maybe to solve this
>>>> we can create a new structure, gre_ext?
>>>>
>>>> In any case, I think we should consider how we can allow adding members
>>>> to structures without ABI breakage.
>>>>
>>>> Best,
>>>> Ori
>>>
>>> Thanks for the comments and suggestion.
>>> So the acceptable solution is to have new structs defined in rte_gre.h?
>>> struct gre_hdr_opt_checksum {
>>> rte_be_16_t checksum;
>>> }
>>>
>>> struct gre_hdr_opt_key {
>>> rte_be_32_t key;
>>> }
>>>
>>> struct gre_hdr_opt_sequence {
>>> rte_be_32_t sequence;
>>> }
>>>
>>> And to add new struct gre_ext defined in rte_flow.h:
>>> struct gre_ext {
>>> struct rte_gre_hdr hdr;
>>> struct gre_hdr_opt_checksum checksum;
>>> struct gre_hdr_opt_key key;
>>> struct gre_hdr_opt_sequence seq;
>>> };
>>>
>>> And we use struct gre_ext for this newly added flow item gre_option.
>>>
>>
>> What about having a struct for 'options' and using it in the flow item for options,
>> like:
>>
>> struct gre_hdr_opt {
>> struct gre_hdr_opt_checksum checksum;
>> struct gre_hdr_opt_key key;
>> struct gre_hdr_opt_sequence seq;
>> }
>>
>> struct gre_hdr_ext {
>> struct rte_gre_hdr hdr;
>> struct gre_hdr_opt;
>> }
>>
>> struct rte_flow_item_gre_opt {
>> struct gre_hdr_opt hdr;
>> }
>
> From my understanding the header should reflect structures
> as they appear in the spec.
>
> If we look at the spec, from my understanding each of those items is stand-alone.
> It is possible to have just key or key and seq or any other combination.
> So the struct you suggested is not a valid struct in the GRE header.
>
If it is not a valid header representation, please forget about it.
But this means the initially suggested 'struct gre_ext' is wrong, right?
So should 'rte_flow_item_gre_opt' use separate structs, like:
struct rte_flow_item_gre_opt {
struct gre_hdr_opt_checksum checksum;
struct gre_hdr_opt_key key;
struct gre_hdr_opt_sequence seq;
}
> If you are O.K with adding such a struct to the gre file, I will also be O.K with it.
>
> Best,
> Ori
>>
>>> Correct me if my understanding is not right.
>>>
>>> Thanks,
>>> Sean
>>>
>>>
>
^ permalink raw reply [relevance 0%]
* RE: [RFC 1/3] ethdev: support GRE optional fields
2022-01-25 11:37 0% ` Ferruh Yigit
@ 2022-01-25 13:06 0% ` Ori Kam
2022-01-25 14:29 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2022-01-25 13:06 UTC (permalink / raw)
To: Ferruh Yigit, Sean Zhang (Networking SW),
NBU-Contact-Thomas Monjalon (EXTERNAL),
Matan Azrad
Cc: Andrew Rybchenko, dev
Hi Ferruh,
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
>
> On 1/25/2022 9:49 AM, Sean Zhang (Networking SW) wrote:
> > Hi,
> >
> >> -----Original Message-----
> >> From: Ori Kam <orika@nvidia.com>
> >> Sent: Wednesday, January 19, 2022 6:57 PM
> >> To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
> >> Sean Zhang (Networking SW) <xiazhang@nvidia.com>; Matan Azrad
> >> <matan@nvidia.com>; Ferruh Yigit <ferruh.yigit@intel.com>
> >> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; dev@dpdk.org
> >> Subject: RE: [RFC 1/3] ethdev: support GRE optional fields
> >>
> >> Hi,
> >>
> >>> -----Original Message-----
> >>> From: Thomas Monjalon <thomas@monjalon.net>
> >>> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
> >>>
> >>> 19/01/2022 10:53, Ferruh Yigit:
> >>>> On 12/30/2021 3:08 AM, Sean Zhang wrote:
> >>>>> --- a/lib/ethdev/rte_flow.h
> >>>>> +++ b/lib/ethdev/rte_flow.h
> >>>>> /**
> >>>>> + * RTE_FLOW_ITEM_TYPE_GRE_OPTION.
> >>>>> + *
> >>>>> + * Matches GRE optional fields in header.
> >>>>> + */
> >>>>> +struct rte_gre_hdr_option {
> >>>>> + rte_be16_t checksum;
> >>>>> + rte_be32_t key;
> >>>>> + rte_be32_t sequence;
> >>>>> +};
> >>>>> +
> >>>>
> >>>> Hi Ori, Andrew,
> >>>>
> >>>> The decision was to have protocol structs in the net library and
> >>>> flow structs use from there, wasn't it?
> >>>> (Btw, a deprecation notice is still pending to clear some existing
> >>>> ones)
> >>>>
> >>>> So for the GRE optional fields, what about having a struct in the
> >> 'rte_gre.h'?
> >>>> (Also perhaps an GRE extended protocol header can be defined
> >>>> combining 'rte_gre_hdr' and optional fields struct.) Later flow API
> >>>> struct can embed that struct.
> >>>
> >>> +1 for using librte_net.
> >>> This addition in rte_flow looks to be a mistake.
> >>> Please fix the next version.
> >>>
> >> Nice idea,
> >> but my main concern is how the header is defined in the header file.
> >> Since some of the fields are optional, this will look something like this:
> >> gre_hdr_option_checksum {
> >> rte_be_16_t checksum;
> >> }
> >>
> >> gre_hdr_option_key {
> >> rte_be_32_t key;
> >> }
> >>
> >> gre_hdr_option_sequence {
> >> rte_be_32_t sequence;
> >> }
> >>
> >> I don't want to have so many rte_flow_items. As more and more protocols
> >> have optional data, it doesn't make sense to create an item for each.
> >>
> >> If I'm looking at it from an ideal place, I would like the optional
> >> fields to be part of the original item.
> >> For example in test pmd I would like to write:
> >> Eth / ipv4 / udp / gre flags is key & checksum checksum is yyy key is xxx / end
> >> And not:
> >> Eth / ipv4 / udp / gre flags is key & checksum / gre_option checksum is yyy key is xxx / end
> >> This means that the structure will look like this:
> >> struct rte_flow_item_gre {
> >> union {
> >> struct {
> >> /**
> >> * Checksum (1b), reserved 0 (12b), version (3b).
> >> * Refer to RFC 2784.
> >> */
> >> rte_be16_t c_rsvd0_ver;
> >> rte_be16_t protocol; /**< Protocol type. */
> >> }
> >> struct rte_gre_hdr hdr
> >> }
> >> rte_be_16_t checksum;
> >> rte_be_32_t key;
> >> rte_be_32_t sequence;
> >> };
> >> The main issue with this is that it breaks the ABI. Maybe to solve this
> >> we can create a new structure, gre_ext?
> >>
> >> In any case, I think we should consider how we can allow adding members
> >> to structures without ABI breakage.
> >>
> >> Best,
> >> Ori
> >
> > Thanks for the comments and suggestion.
> > So the acceptable solution is to have new structs defined in rte_gre.h?
> > struct gre_hdr_opt_checksum {
> > rte_be_16_t checksum;
> > }
> >
> > struct gre_hdr_opt_key {
> > rte_be_32_t key;
> > }
> >
> > struct gre_hdr_opt_sequence {
> > rte_be_32_t sequence;
> > }
> >
> > And to add new struct gre_ext defined in rte_flow.h:
> > struct gre_ext {
> > struct rte_gre_hdr hdr;
> > struct gre_hdr_opt_checksum checksum;
> > struct gre_hdr_opt_key key;
> > struct gre_hdr_opt_sequence seq;
> > };
> >
> > And we use struct gre_ext for this newly added flow item gre_option.
> >
>
> What about having a struct for 'options' and using it in the flow item for options,
> like:
>
> struct gre_hdr_opt {
> struct gre_hdr_opt_checksum checksum;
> struct gre_hdr_opt_key key;
> struct gre_hdr_opt_sequence seq;
> }
>
> struct gre_hdr_ext {
> struct rte_gre_hdr hdr;
> struct gre_hdr_opt;
> }
>
> struct rte_flow_item_gre_opt {
> struct gre_hdr_opt hdr;
> }
From my understanding the header should reflect structures
as they appear in the spec.
If we look at the spec, from my understanding each of those items is stand-alone.
It is possible to have just key or key and seq or any other combination.
So the struct you suggested is not a valid struct in the GRE header.
If you are O.K with adding such a struct to the gre file, I will also be O.K with it.
Best,
Ori
>
> > Correct me if my understanding is not right.
> >
> > Thanks,
> > Sean
> >
> >
^ permalink raw reply [relevance 0%]
* Re: [RFC 1/3] ethdev: support GRE optional fields
2022-01-25 9:49 0% ` Sean Zhang (Networking SW)
@ 2022-01-25 11:37 0% ` Ferruh Yigit
2022-01-25 13:06 0% ` Ori Kam
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2022-01-25 11:37 UTC (permalink / raw)
To: Sean Zhang (Networking SW),
Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
Matan Azrad
Cc: Andrew Rybchenko, dev
On 1/25/2022 9:49 AM, Sean Zhang (Networking SW) wrote:
> Hi,
>
>> -----Original Message-----
>> From: Ori Kam <orika@nvidia.com>
>> Sent: Wednesday, January 19, 2022 6:57 PM
>> To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
>> Sean Zhang (Networking SW) <xiazhang@nvidia.com>; Matan Azrad
>> <matan@nvidia.com>; Ferruh Yigit <ferruh.yigit@intel.com>
>> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; dev@dpdk.org
>> Subject: RE: [RFC 1/3] ethdev: support GRE optional fields
>>
>> Hi,
>>
>>> -----Original Message-----
>>> From: Thomas Monjalon <thomas@monjalon.net>
>>> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
>>>
>>> 19/01/2022 10:53, Ferruh Yigit:
>>>> On 12/30/2021 3:08 AM, Sean Zhang wrote:
>>>>> --- a/lib/ethdev/rte_flow.h
>>>>> +++ b/lib/ethdev/rte_flow.h
>>>>> /**
>>>>> + * RTE_FLOW_ITEM_TYPE_GRE_OPTION.
>>>>> + *
>>>>> + * Matches GRE optional fields in header.
>>>>> + */
>>>>> +struct rte_gre_hdr_option {
>>>>> + rte_be16_t checksum;
>>>>> + rte_be32_t key;
>>>>> + rte_be32_t sequence;
>>>>> +};
>>>>> +
>>>>
>>>> Hi Ori, Andrew,
>>>>
>>>> The decision was to have protocol structs in the net library and
>>>> flow structs use from there, wasn't it?
>>>> (Btw, a deprecation notice is still pending to clear some existing
>>>> ones)
>>>>
>>>> So for the GRE optional fields, what about having a struct in the
>> 'rte_gre.h'?
>>>> (Also perhaps an GRE extended protocol header can be defined
>>>> combining 'rte_gre_hdr' and optional fields struct.) Later flow API
>>>> struct can embed that struct.
>>>
>>> +1 for using librte_net.
>>> This addition in rte_flow looks to be a mistake.
>>> Please fix the next version.
>>>
>> Nice idea,
>> but my main concern is how the header is defined in the header file.
>> Since some of the fields are optional, this will look something like this:
>> gre_hdr_option_checksum {
>> rte_be_16_t checksum;
>> }
>>
>> gre_hdr_option_key {
>> rte_be_32_t key;
>> }
>>
>> gre_hdr_option_sequence {
>> rte_be_32_t sequence;
>> }
>>
>> I don't want to have so many rte_flow_items. As more and more protocols
>> have optional data, it doesn't make sense to create an item for each.
>>
>> If I'm looking at it from an ideal place, I would like the optional
>> fields to be part of the original item.
>> For example in test pmd I would like to write:
>> Eth / ipv4 / udp / gre flags is key & checksum checksum is yyy key is xxx / end
>> And not:
>> Eth / ipv4 / udp / gre flags is key & checksum / gre_option checksum is yyy key is xxx / end
>> This means that the structure will look like this:
>> struct rte_flow_item_gre {
>> union {
>> struct {
>> /**
>> * Checksum (1b), reserved 0 (12b), version (3b).
>> * Refer to RFC 2784.
>> */
>> rte_be16_t c_rsvd0_ver;
>> rte_be16_t protocol; /**< Protocol type. */
>> }
>> struct rte_gre_hdr hdr
>> }
>> rte_be_16_t checksum;
>> rte_be_32_t key;
>> rte_be_32_t sequence;
>> };
>> The main issue with this is that it breaks the ABI. Maybe to solve this
>> we can create a new structure, gre_ext?
>>
>> In any case, I think we should consider how we can allow adding members
>> to structures without ABI breakage.
>>
>> Best,
>> Ori
>
> Thanks for the comments and suggestion.
> So the acceptable solution is to have new structs defined in rte_gre.h?
> struct gre_hdr_opt_checksum {
> rte_be_16_t checksum;
> }
>
> struct gre_hdr_opt_key {
> rte_be_32_t key;
> }
>
> struct gre_hdr_opt_sequence {
> rte_be_32_t sequence;
> }
>
> And to add new struct gre_ext defined in rte_flow.h:
> struct gre_ext {
> struct rte_gre_hdr hdr;
> struct gre_hdr_opt_checksum checksum;
> struct gre_hdr_opt_key key;
> struct gre_hdr_opt_sequence seq;
> };
>
> And we use struct gre_ext for this newly added flow item gre_option.
>
What about having a struct for 'options' and using it in the flow item for options,
like:
struct gre_hdr_opt {
struct gre_hdr_opt_checksum checksum;
struct gre_hdr_opt_key key;
struct gre_hdr_opt_sequence seq;
}
struct gre_hdr_ext {
struct rte_gre_hdr hdr;
struct gre_hdr_opt;
}
struct rte_flow_item_gre_opt {
struct gre_hdr_opt hdr;
}
> Correct me if my understanding is not right.
>
> Thanks,
> Sean
>
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-23 21:20 8% ` [PATCH v4] " Michael Barker
@ 2022-01-25 10:33 0% ` Ray Kinsella
2022-01-31 0:05 8% ` [PATCH v5] " Michael Barker
1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2022-01-25 10:33 UTC (permalink / raw)
To: Michael Barker, Stephen Hemminger; +Cc: dev
Michael Barker <mikeb01@gmail.com> writes:
> When compiling with clang using -Wall (or -Wgcc-compat) the use of
> diagnose_if kicks up a warning:
>
> .../include/rte_interrupts.h:623:1: error: 'diagnose_if' is a clang
> extension [-Werror,-Wgcc-compat]
> __rte_internal
> ^
> .../include/rte_compat.h:36:16: note: expanded from macro '__rte_internal'
> __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
>
> This change ignores the '-Wgcc-compat' warning in the specific location
> where the warning occurs. It is safe to do in this circumstance as the
> specific macro is only defined when using the clang compiler.
>
> Signed-off-by: Michael Barker <mikeb01@gmail.com>
> ---
> lib/eal/include/rte_compat.h | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
> index 2718612cce..9556bbf4d0 100644
> --- a/lib/eal/include/rte_compat.h
> +++ b/lib/eal/include/rte_compat.h
> @@ -33,8 +33,11 @@ section(".text.internal")))
> #elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
Why doesn't the __has_attribute take care of this?
I would have thought that gcc would check for the attribute, find it
doesn't support it and ignore the whole thing?
>
> #define __rte_internal \
> +_Pragma("GCC diagnostic push") \
> +_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
> __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> -section(".text.internal")))
> +section(".text.internal"))) \
> +_Pragma("GCC diagnostic pop")
>
> #else
--
Regards, Ray K
^ permalink raw reply [relevance 0%]
* RE: [RFC 1/3] ethdev: support GRE optional fields
2022-01-19 10:56 4% ` Ori Kam
@ 2022-01-25 9:49 0% ` Sean Zhang (Networking SW)
2022-01-25 11:37 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Sean Zhang (Networking SW) @ 2022-01-25 9:49 UTC (permalink / raw)
To: Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
Matan Azrad, Ferruh Yigit
Cc: Andrew Rybchenko, dev
Hi,
> -----Original Message-----
> From: Ori Kam <orika@nvidia.com>
> Sent: Wednesday, January 19, 2022 6:57 PM
> To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>;
> Sean Zhang (Networking SW) <xiazhang@nvidia.com>; Matan Azrad
> <matan@nvidia.com>; Ferruh Yigit <ferruh.yigit@intel.com>
> Cc: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>; dev@dpdk.org
> Subject: RE: [RFC 1/3] ethdev: support GRE optional fields
>
> Hi,
>
> > -----Original Message-----
> > From: Thomas Monjalon <thomas@monjalon.net>
> > Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
> >
> > 19/01/2022 10:53, Ferruh Yigit:
> > > On 12/30/2021 3:08 AM, Sean Zhang wrote:
> > > > --- a/lib/ethdev/rte_flow.h
> > > > +++ b/lib/ethdev/rte_flow.h
> > > > /**
> > > > + * RTE_FLOW_ITEM_TYPE_GRE_OPTION.
> > > > + *
> > > > + * Matches GRE optional fields in header.
> > > > + */
> > > > +struct rte_gre_hdr_option {
> > > > + rte_be16_t checksum;
> > > > + rte_be32_t key;
> > > > + rte_be32_t sequence;
> > > > +};
> > > > +
> > >
> > > Hi Ori, Andrew,
> > >
> > > The decision was to have protocol structs in the net library and
> > > flow structs use from there, wasn't it?
> > > (Btw, a deprecation notice is still pending to clear some existing
> > > ones)
> > >
> > > So for the GRE optional fields, what about having a struct in the
> 'rte_gre.h'?
> > > (Also perhaps an GRE extended protocol header can be defined
> > > combining 'rte_gre_hdr' and optional fields struct.) Later flow API
> > > struct can embed that struct.
> >
> > +1 for using librte_net.
> > This addition in rte_flow looks to be a mistake.
> > Please fix the next version.
> >
> Nice idea,
> but my main concern is how the header is defined in the header file.
> Since some of the fields are optional, this will look something like this:
> gre_hdr_option_checksum {
> rte_be_16_t checksum;
> }
>
> gre_hdr_option_key {
> rte_be_32_t key;
> }
>
> gre_hdr_option_sequence {
> rte_be_32_t sequence;
> }
>
> I don't want to have so many rte_flow_items. As more and more protocols
> have optional data, it doesn't make sense to create an item for each.
>
> If I'm looking at it from an ideal place, I would like the optional
> fields to be part of the original item.
> For example in test pmd I would like to write:
> Eth / ipv4 / udp / gre flags is key & checksum checksum is yyy key is xxx / end
> And not:
> Eth / ipv4 / udp / gre flags is key & checksum / gre_option checksum is yyy key is xxx / end
> This means that the structure will look like this:
> struct rte_flow_item_gre {
> union {
> struct {
> /**
> * Checksum (1b), reserved 0 (12b), version (3b).
> * Refer to RFC 2784.
> */
> rte_be16_t c_rsvd0_ver;
> rte_be16_t protocol; /**< Protocol type. */
> }
> struct rte_gre_hdr hdr
> }
> rte_be_16_t checksum;
> rte_be_32_t key;
> rte_be_32_t sequence;
> };
> The main issue with this is that it breaks the ABI. Maybe to solve this
> we can create a new structure, gre_ext?
>
> In any case, I think we should consider how we can allow adding members
> to structures without ABI breakage.
>
> Best,
> Ori
Thanks for the comments and suggestion.
So the acceptable solution is to have new structs defined in rte_gre.h?
struct gre_hdr_opt_checksum {
rte_be_16_t checksum;
}
struct gre_hdr_opt_key {
rte_be_32_t key;
}
struct gre_hdr_opt_sequence {
rte_be_32_t sequence;
}
And to add new struct gre_ext defined in rte_flow.h:
struct gre_ext {
struct rte_gre_hdr hdr;
struct gre_hdr_opt_checksum checksum;
struct gre_hdr_opt_key key;
struct gre_hdr_opt_sequence seq;
};
And we use struct gre_ext for this newly added flow item gre_option.
Correct me if my understanding is not right.
Thanks,
Sean
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-24 17:40 0% ` Ajit Khaparde
@ 2022-01-25 1:28 0% ` Alexander Kozyrev
0 siblings, 1 reply; 200+ results
From: Alexander Kozyrev @ 2022-01-25 1:28 UTC (permalink / raw)
To: Ajit Khaparde, Jerin Jacob
Cc: dpdk-dev, Ori Kam, NBU-Contact-Thomas Monjalon (EXTERNAL),
Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
Qi Zhang, Jerin Jacob
On Monday, January 24, 2022 12:41 Ajit Khaparde <ajit.khaparde@broadcom.com> wrote:
> On Mon, Jan 24, 2022 at 6:37 AM Jerin Jacob <jerinjacobk@gmail.com>
> wrote:
> >
> > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> <akozyrev@nvidia.com> wrote:
> > >
> > > The flow rules creation/destruction at a large scale incurs a performance
> > > penalty and may negatively impact the packet processing when used
> > > as part of the datapath logic. This is mainly because software/hardware
> > > resources are allocated and prepared during the flow rule creation.
> > >
> > > In order to optimize the insertion rate, PMD may use some hints
> provided
> > > by the application at the initialization phase. The rte_flow_configure()
> > > function allows to pre-allocate all the needed resources beforehand.
> > > These resources can be used at a later stage without costly allocations.
> > > Every PMD may use only the subset of hints and ignore unused ones or
> > > fail in case the requested configuration is not supported.
> > >
> > > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > > ---
> >
> > >
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Flow engine port configuration attributes.
> > > + */
> > > +__extension__
> >
> > Is this __extension__ required ?
No, it is no longer required as I removed the bitfield from this structure. Thanks for catching that.
> >
> > > +struct rte_flow_port_attr {
> > > + /**
> > > + * Version of the struct layout, should be 0.
> > > + */
> > > + uint32_t version;
> >
> > Why version number? Across DPDK, we are using dynamic function
> > versioning, I think, that would
> > be sufficient for ABI versioning
> >
> > > + /**
> > > + * Number of counter actions pre-configured.
> > > + * If set to 0, PMD will allocate counters dynamically.
> > > + * @see RTE_FLOW_ACTION_TYPE_COUNT
> > > + */
> > > + uint32_t nb_counters;
> > > + /**
> > > + * Number of aging actions pre-configured.
> > > + * If set to 0, PMD will allocate aging dynamically.
> > > + * @see RTE_FLOW_ACTION_TYPE_AGE
> > > + */
> > > + uint32_t nb_aging;
> > > + /**
> > > + * Number of traffic metering actions pre-configured.
> > > + * If set to 0, PMD will allocate meters dynamically.
> > > + * @see RTE_FLOW_ACTION_TYPE_METER
> > > + */
> > > + uint32_t nb_meters;
> > > +};
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Configure flow rules module.
> > > + * To pre-allocate resources as per the flow port attributes
> > > + * this configuration function must be called before any flow rule is
> created.
> > > + * Must be called only after Ethernet device is configured, but may be
> called
> > > + * before or after the device is started as long as there are no flow rules.
> > > + * No other rte_flow function should be called while this function is
> invoked.
> > > + * This function can be called again to change the configuration.
> > > + * Some PMDs may not support re-configuration at all,
> > > + * or may only allow increasing the number of resources allocated.
> >
> > Following comment from Ivan looks good to me
> >
> > * Pre-configure the port's flow API engine.
> > *
> > * This API can only be invoked before the application
> > * starts using the rest of the flow library functions.
> > *
> > * The API can be invoked multiple times to change the
> > * settings. The port, however, may reject the changes.
Ok, I'll adopt this wording in the v3.
> > > + *
> > > + * @param port_id
> > > + * Port identifier of Ethernet device.
> > > + * @param[in] port_attr
> > > + * Port configuration attributes.
> > > + * @param[out] error
> > > + * Perform verbose error reporting if not NULL.
> > > + * PMDs initialize this structure in case of error only.
> > > + *
> > > + * @return
> > > + * 0 on success, a negative errno value otherwise and rte_errno is set.
> > > + */
> > > +__rte_experimental
> > > +int
> > > +rte_flow_configure(uint16_t port_id,
> >
> > Should we couple setting the resource limit hints to the configure
> > function? If we add future items to the configuration, we may have pain
> > managing all the state. Instead, how about
> > rte_flow_resource_reserve_hint_set()?
> +1
Port attributes are hints; the PMD can safely ignore anything that is not supported or deemed unreasonable.
Having several functions to call instead of one configuration function seems like a burden to me.
>
> >
> >
> > > + const struct rte_flow_port_attr *port_attr,
> > > + struct rte_flow_error *error);
> >
> > I think we should have a _get function to get those limit numbers;
> > otherwise, we cannot write portable applications, as the return value is
> > kind of boolean now if we don't define exact rte_errno values for the
> > failure reasons.
> +1
We had this discussion in the RFC. The limits will vary from NIC to NIC and
from system to system, depending on hardware capabilities and the amount of
free memory, for example. It is easier to reject a configuration with a
clear error description, as we do for flow creation.
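For reference, a usage sketch of the hinted configuration as proposed in
this v2 (values illustrative only; the API is as in the patch under
discussion, and the error handler is a hypothetical application helper):

struct rte_flow_port_attr attr = {
        .version = 0,
        .nb_counters = 1 << 16, /* hint: up to 64K counter actions */
        .nb_aging = 0,          /* no hint: allocate dynamically */
        .nb_meters = 1024,
};
struct rte_flow_error error;

/* after rte_eth_dev_configure(), before any flow rule is created */
if (rte_flow_configure(port_id, &attr, &error) != 0)
        handle_flow_error(&error); /* hypothetical helper */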
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-24 18:08 3% ` Bruce Richardson
@ 2022-01-25 1:14 0% ` Alexander Kozyrev
2022-01-25 15:58 4% ` Ori Kam
1 sibling, 0 replies; 200+ results
From: Alexander Kozyrev @ 2022-01-25 1:14 UTC (permalink / raw)
To: Bruce Richardson, Jerin Jacob
Cc: NBU-Contact-Thomas Monjalon (EXTERNAL),
dpdk-dev, Ori Kam, Ivan Malov, Andrew Rybchenko, Ferruh Yigit,
mohammad.abdul.awal, Qi Zhang, Jerin Jacob, Ajit Khaparde,
David Marchand, Olivier Matz, Stephen Hemminger
On Monday, January 24, 2022 13:09 Bruce Richardson <bruce.richardson@intel.com> wrote:
> On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> > On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon
> <thomas@monjalon.net> wrote:
> > >
> > > 24/01/2022 15:36, Jerin Jacob:
> > > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev
> <akozyrev@nvidia.com> wrote:
> > > > > +struct rte_flow_port_attr {
> > > > > + /**
> > > > > + * Version of the struct layout, should be 0.
> > > > > + */
> > > > > + uint32_t version;
> > > >
> > > > Why version number? Across DPDK, we are using dynamic function
> > > > versioning, I think, that would be sufficient for ABI versioning
> > >
> > > Function versioning is not ideal when the structure is accessed
> > > in many places like many drivers and library functions.
> > >
> > > The idea of this version field (which can be a bitfield)
> > > is to update it when some new features are added,
> > > so the users of the struct can check if a feature is there
> > > before trying to use it.
> > > It means a bit more code in the functions, but avoid duplicating functions
> > > as in function versioning.
> > >
> > > Another approach was suggested by Bruce, and applied to dmadev.
> > > It is assuming we only add new fields at the end (no removal),
> > > and focus on the size of the struct.
> > > By passing sizeof as an extra parameter, the function knows
> > > which fields are OK to use.
> > > Example:
> http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
> >
> > + @Richardson, Bruce
> > Either approach is fine, No strong opinion. We can have one approach
> > and use it across DPDK for consistency.
> >
>
> In general I prefer the size-based approach, mainly because of its
> simplicity. However, some other reasons why we may want to choose it:
>
> * It's completely hidden from the end user, and there is no need for an
> extra struct field that needs to be filled in
>
> * Related to that, for the version-field approach, if the field is present
> in a user-allocated struct, then you probably need to start preventing user
> error via:
> - having the external struct not have the field and use a separate
> internal struct to add in the version info after the fact in the
> versioned function. Alternatively,
> - provide a separate init function for each structure to fill in the
> version field appropriately
>
> * In general, using the size-based approach like in the linked example is
> more resilient since it's compiler-inserted, so there is reduced chance
> of error.
>
> * A sizeof field allows simple-enough handling in the drivers - especially
> since it does not allow removed fields. Each driver only needs to check
> that the size passed in is greater than that expected, thereby allowing
> us to have both updated and non-updated drivers co-existing
> simultaneously.
> [For a version field, the same scheme could also work if we keep the
> no-delete rule, but for a bitmask field, I believe things may get more
> complex in terms of checking]
>
> In terms of the limitations of using sizeof - requiring new fields to
> always go on the end, and preventing shrinking the struct - I think that the
> simplicity gains far outweigh the impact of these restrictions.
>
> * Adding fields to struct is far more common than wanting to remove one
>
> * So long as the added field is at the end, even if the struct size doesn't
> change the scheme can still work as the versioned function for the old
> struct can ensure that the extra field is appropriately zeroed (rather than
> random) on entry into any driver function
>
> * If we do want to remove a field, the space simply needs to be marked as
> reserved in the struct, until the next ABI break release, when it can be
> compacted. Again, function versioning can take care of appropriately
> zeroing this field on return, if necessary.
>
> My 2c from considering this for the implementation in dmadev. :-)
>
> /Bruce
Thank you for the suggestions. I have no objections to adopting a size-based approach.
I can keep versions or switch to sizeof as long as we can agree on some uniform way.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-24 17:46 0% ` Jerin Jacob
@ 2022-01-24 18:08 3% ` Bruce Richardson
2022-01-25 1:14 0% ` Alexander Kozyrev
2022-01-25 15:58 4% ` Ori Kam
0 siblings, 2 replies; 200+ results
From: Bruce Richardson @ 2022-01-24 18:08 UTC (permalink / raw)
To: Jerin Jacob
Cc: Thomas Monjalon, Alexander Kozyrev, dpdk-dev, Ori Kam,
Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
Qi Zhang, Jerin Jacob, Ajit Khaparde, David Marchand,
Olivier Matz, Stephen Hemminger
On Mon, Jan 24, 2022 at 11:16:15PM +0530, Jerin Jacob wrote:
> On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon <thomas@monjalon.net> wrote:
> >
> > 24/01/2022 15:36, Jerin Jacob:
> > > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > > +struct rte_flow_port_attr {
> > > > + /**
> > > > + * Version of the struct layout, should be 0.
> > > > + */
> > > > + uint32_t version;
> > >
> > > Why version number? Across DPDK, we are using dynamic function
> > > versioning, I think, that would be sufficient for ABI versioning
> >
> > Function versioning is not ideal when the structure is accessed
> > in many places like many drivers and library functions.
> >
> > The idea of this version field (which can be a bitfield)
> > is to update it when some new features are added,
> > so the users of the struct can check if a feature is there
> > before trying to use it.
> > It means a bit more code in the functions, but avoid duplicating functions
> > as in function versioning.
> >
> > Another approach was suggested by Bruce, and applied to dmadev.
> > It is assuming we only add new fields at the end (no removal),
> > and focus on the size of the struct.
> > By passing sizeof as an extra parameter, the function knows
> > which fields are OK to use.
> > Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
>
> + @Richardson, Bruce
> Either approach is fine, No strong opinion. We can have one approach
> and use it across DPDK for consistency.
>
In general I prefer the size-based approach, mainly because of its
simplicity. However, some other reasons why we may want to choose it:
* It's completely hidden from the end user, and there is no need for an
extra struct field that needs to be filled in
* Related to that, for the version-field approach, if the field is present
in a user-allocated struct, then you probably need to start preventing user
error via:
- having the external struct not have the field and use a separate
internal struct to add in the version info after the fact in the
versioned function. Alternatively,
- provide a separate init function for each structure to fill in the
version field appropriately
* In general, using the size-based approach like in the linked example is
more resilient since it's compiler-inserted, so there is reduced chance
of error.
* A sizeof field allows simple-enough handling in the drivers - especially
since it does not allow removed fields. Each driver only needs to check
that the size passed in is greater than that expected, thereby allowing
us to have both updated and non-updated drivers co-existing simultaneously.
[For a version field, the same scheme could also work if we keep the
no-delete rule, but for a bitmask field, I believe things may get more
complex in terms of checking]
In terms of the limitations of using sizeof - requiring new fields to
always go on the end, and preventing shrinking the struct - I think that the
simplicity gains far outweigh the impact of these restrictions.
* Adding fields to struct is far more common than wanting to remove one
* So long as the added field is at the end, even if the struct size doesn't
change the scheme can still work as the versioned function for the old
struct can ensure that the extra field is appropriately zeroed (rather than
random) on entry into any driver function
* If we do want to remove a field, the space simply needs to be marked as
reserved in the struct, until the next ABI break release, when it can be
compacted. Again, function versioning can take care of appropriately
zeroing this field on return, if necessary.
My 2c from considering this for the implementation in dmadev. :-)
/Bruce
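To make the size-based scheme concrete, here is a minimal sketch in C
(names are invented for illustration; the real implementation is the
dmadev code linked in the quoted message above):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Invented example API, not real DPDK code. */
struct foo_conf {
	uint32_t nb_queues;  /* present since the first release */
	uint32_t nb_meters;  /* appended later; old binaries don't know it */
};

static int
__foo_configure(const struct foo_conf *conf, size_t conf_sz)
{
	struct foo_conf full;

	memset(&full, 0, sizeof(full));
	/* Copy only the fields the caller's binary knew about; any newer
	 * field stays zeroed, so drivers see a sane default. */
	memcpy(&full, conf, conf_sz > sizeof(full) ? sizeof(full) : conf_sz);

	/* ... hand "full" to the driver: a driver predating nb_meters
	 * ignores it, and a newer driver sees 0 from old callers ... */
	return 0;
}

/* The wrapper makes the compiler insert the size the caller was built
 * against, which is what "compiler-inserted" refers to above. */
#define foo_configure(conf) __foo_configure((conf), sizeof(*(conf)))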
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-24 17:35 0% ` Thomas Monjalon
@ 2022-01-24 17:46 0% ` Jerin Jacob
2022-01-24 18:08 3% ` Bruce Richardson
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2022-01-24 17:46 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Alexander Kozyrev, dpdk-dev, Ori Kam, Ivan Malov,
Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal, Qi Zhang,
Jerin Jacob, Ajit Khaparde, Richardson, Bruce, David Marchand,
Olivier Matz, Stephen Hemminger
On Mon, Jan 24, 2022 at 11:05 PM Thomas Monjalon <thomas@monjalon.net> wrote:
>
> 24/01/2022 15:36, Jerin Jacob:
> > On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > > +struct rte_flow_port_attr {
> > > + /**
> > > + * Version of the struct layout, should be 0.
> > > + */
> > > + uint32_t version;
> >
> > Why version number? Across DPDK, we are using dynamic function
> > versioning, I think, that would be sufficient for ABI versioning
>
> Function versioning is not ideal when the structure is accessed
> in many places like many drivers and library functions.
>
> The idea of this version field (which can be a bitfield)
> is to update it when some new features are added,
> so the users of the struct can check if a feature is there
> before trying to use it.
> It means a bit more code in the functions, but avoid duplicating functions
> as in function versioning.
>
> Another approach was suggested by Bruce, and applied to dmadev.
> It is assuming we only add new fields at the end (no removal),
> and focus on the size of the struct.
> By passing sizeof as an extra parameter, the function knows
> which fields are OK to use.
> Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
+ @Richardson, Bruce
Either approach is fine, No strong opinion. We can have one approach
and use it across DPDK for consistency.
>
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-24 14:36 3% ` Jerin Jacob
2022-01-24 17:35 0% ` Thomas Monjalon
@ 2022-01-24 17:40 0% ` Ajit Khaparde
2022-01-25 1:28 0% ` Alexander Kozyrev
1 sibling, 1 reply; 200+ results
From: Ajit Khaparde @ 2022-01-24 17:40 UTC (permalink / raw)
To: Jerin Jacob
Cc: Alexander Kozyrev, dpdk-dev, Ori Kam, Thomas Monjalon,
Ivan Malov, Andrew Rybchenko, Ferruh Yigit, mohammad.abdul.awal,
Qi Zhang, Jerin Jacob
On Mon, Jan 24, 2022 at 6:37 AM Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> >
> > The flow rules creation/destruction at a large scale incurs a performance
> > penalty and may negatively impact the packet processing when used
> > as part of the datapath logic. This is mainly because software/hardware
> > resources are allocated and prepared during the flow rule creation.
> >
> > In order to optimize the insertion rate, PMD may use some hints provided
> > by the application at the initialization phase. The rte_flow_configure()
> > function allows to pre-allocate all the needed resources beforehand.
> > These resources can be used at a later stage without costly allocations.
> > Every PMD may use only the subset of hints and ignore unused ones or
> > fail in case the requested configuration is not supported.
> >
> > Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> > ---
>
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Flow engine port configuration attributes.
> > + */
> > +__extension__
>
> Is this __extension__ required ?
>
>
> > +struct rte_flow_port_attr {
> > + /**
> > + * Version of the struct layout, should be 0.
> > + */
> > + uint32_t version;
>
> Why version number? Across DPDK, we are using dynamic function
> versioning, I think, that would
> be sufficient for ABI versioning
>
> > + /**
> > + * Number of counter actions pre-configured.
> > + * If set to 0, PMD will allocate counters dynamically.
> > + * @see RTE_FLOW_ACTION_TYPE_COUNT
> > + */
> > + uint32_t nb_counters;
> > + /**
> > + * Number of aging actions pre-configured.
> > + * If set to 0, PMD will allocate aging dynamically.
> > + * @see RTE_FLOW_ACTION_TYPE_AGE
> > + */
> > + uint32_t nb_aging;
> > + /**
> > + * Number of traffic metering actions pre-configured.
> > + * If set to 0, PMD will allocate meters dynamically.
> > + * @see RTE_FLOW_ACTION_TYPE_METER
> > + */
> > + uint32_t nb_meters;
> > +};
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Configure flow rules module.
> > + * To pre-allocate resources as per the flow port attributes
> > + * this configuration function must be called before any flow rule is created.
> > + * Must be called only after Ethernet device is configured, but may be called
> > + * before or after the device is started as long as there are no flow rules.
> > + * No other rte_flow function should be called while this function is invoked.
> > + * This function can be called again to change the configuration.
> > + * Some PMDs may not support re-configuration at all,
> > + * or may only allow increasing the number of resources allocated.
>
> Following comment from Ivan looks good to me
>
> * Pre-configure the port's flow API engine.
> *
> * This API can only be invoked before the application
> * starts using the rest of the flow library functions.
> *
> * The API can be invoked multiple times to change the
> * settings. The port, however, may reject the changes.
>
> > + *
> > + * @param port_id
> > + * Port identifier of Ethernet device.
> > + * @param[in] port_attr
> > + * Port configuration attributes.
> > + * @param[out] error
> > + * Perform verbose error reporting if not NULL.
> > + * PMDs initialize this structure in case of error only.
> > + *
> > + * @return
> > + * 0 on success, a negative errno value otherwise and rte_errno is set.
> > + */
> > +__rte_experimental
> > +int
> > +rte_flow_configure(uint16_t port_id,
>
> Should we couple setting the resource limit hints to the configure
> function? If we add future items to the configuration, it may become
> painful to manage all the state. Instead, how about
> rte_flow_resource_reserve_hint_set()?
+1
>
>
> > + const struct rte_flow_port_attr *port_attr,
> > + struct rte_flow_error *error);
>
> I think we should have a _get function to retrieve those limit numbers;
> otherwise, we cannot write portable applications, as the return value is
> effectively boolean if we don't
> define exact rte_errno values for the failure reasons.
+1
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
2022-01-24 14:36 3% ` Jerin Jacob
@ 2022-01-24 17:35 0% ` Thomas Monjalon
2022-01-24 17:46 0% ` Jerin Jacob
2022-01-24 17:40 0% ` Ajit Khaparde
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-01-24 17:35 UTC (permalink / raw)
To: Jerin Jacob
Cc: Alexander Kozyrev, dev, Ori Kam, Ivan Malov, Andrew Rybchenko,
Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
Ajit Khaparde, bruce.richardson, david.marchand, olivier.matz,
stephen
24/01/2022 15:36, Jerin Jacob:
> On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
> > +struct rte_flow_port_attr {
> > + /**
> > + * Version of the struct layout, should be 0.
> > + */
> > + uint32_t version;
>
> Why version number? Across DPDK, we are using dynamic function
> versioning, I think, that would be sufficient for ABI versioning
Function versioning is not ideal when the structure is accessed
in many places like many drivers and library functions.
The idea of this version field (which can be a bitfield)
is to update it when some new features are added,
so the users of the struct can check if a feature is there
before trying to use it.
It means a bit more code in the functions, but avoids duplicating functions
as function versioning does.
Another approach was suggested by Bruce and applied to dmadev.
It assumes we only add new fields at the end (no removal)
and focuses on the size of the struct.
By passing sizeof as an extra parameter, the function knows
which fields are OK to use.
Example: http://code.dpdk.org/dpdk/v21.11/source/lib/dmadev/rte_dmadev.c#L476
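For comparison, a minimal sketch of the version-bitfield idea described
earlier in this message (names are invented for illustration, not a real
DPDK API):

#include <stdint.h>

#define FOO_ATTR_HAS_METERS (UINT32_C(1) << 0) /* nb_meters is valid */

struct foo_port_attr {
	uint32_t version;     /* bitmask of features the caller filled in */
	uint32_t nb_counters;
	uint32_t nb_meters;   /* only meaningful with FOO_ATTR_HAS_METERS */
};

static int
foo_configure(const struct foo_port_attr *attr)
{
	uint32_t nb_meters = 0;

	/* Every accessor tests the bit before touching the newer field,
	 * instead of providing a second, versioned copy of the function. */
	if (attr->version & FOO_ATTR_HAS_METERS)
		nb_meters = attr->nb_meters;

	/* ... pre-allocate attr->nb_counters counters, nb_meters meters ... */
	(void)nb_meters;
	return 0;
}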
^ permalink raw reply [relevance 0%]
* Re: [PATCH v3] mempool: fix the description of some function return values
@ 2022-01-24 17:04 3% ` Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2022-01-24 17:04 UTC (permalink / raw)
To: Zhiheng Chen; +Cc: Andrew Rybchenko, dev
Hi Zhiheng,
Thank you for your patch proposal.
On Thu, Dec 23, 2021 at 10:07:41AM +0000, Zhiheng Chen wrote:
> In rte_mempool_ring.c, the committer uses the symbol ENOBUFS to
> describe the return value of function common_ring_sc_dequeue,
> but in rte_mempool.h, the symbol ENOENT is used to describe
> the return value of function rte_mempool_get. If the user of
> dpdk uses the symbol ENOENT as the judgment condition of
> the return value, it may cause some abnormal phenomena
> in their own programs, such as when the mempool space is exhausted.
The issue I see with this approach is that currently there
is no standard error code across mempool drivers' dequeue functions:
bucket: -ENOBUFS
cn10k: -ENOENT
cn9k: -ENOENT
dpaa: -1, -ENOBUFS
dpaa2: -1, -ENOENT, -ENOBUFS
octeontx: -ENOMEM
ring: -ENOBUFS
stack: -ENOBUFS
After your patch, the drivers do not match the documentation.
I agree it would be better to return the same code for the same error,
whatever driver is used. But I think we should keep the possibility
for a driver to return another code. For instance, it could be a
hardware error in the case of a hardware mempool driver.
I see 2 possibilities:
1/ simplest one: relax the documentation and do not talk about -ENOENT or
-ENOBUFS; just say a negative value is an error
2/ fix drivers and doc
Mempool drivers should be modified first, knowing that changing
them is an ABI modification (which I think is acceptable, because the
error code already varies depending on the driver). Then, this patch could be applied.
For reference, note that the documentation was probably right initially,
but the behavior changed in commit cfa7c9e6fc1f ("ring: make bulk and
burst return values consistent"), returning -ENOBUFS instead of -ENOENT
on dequeue error.
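Until this is settled, a portable application can only rely on the sign
of the return value, along the lines of this sketch:

#include <rte_mempool.h>

/* Treat any negative return as failure rather than matching a specific
 * errno, since drivers currently disagree (see the list above). */
static void *
pool_alloc(struct rte_mempool *mp)
{
	void *obj;

	if (rte_mempool_get(mp, &obj) < 0)
		return NULL; /* could be -ENOBUFS, -ENOENT, -ENOMEM, ... */
	return obj;
}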
> v2:
> * Update the descriptions of underlying functions.
>
> v3:
> * Correct the description that the return value cannot be greater than 0
> * Update the description of the dequeue function prototype
>
> Signed-off-by: Zhiheng Chen <chenzhiheng0227@gmail.com>
> ---
> lib/mempool/rte_mempool.h | 34 ++++++++++++++++++++++------------
> 1 file changed, 22 insertions(+), 12 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 1e7a3c1527..cae81d8a32 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -447,6 +447,16 @@ typedef int (*rte_mempool_enqueue_t)(struct rte_mempool *mp,
>
> /**
> * Dequeue an object from the external pool.
> + *
> + * @param mp
> + * Pointer to the memory pool.
> + * @param obj_table
> + * Pointer to a table of void * pointers (objects).
> + * @param n
> + * Number of objects to get.
> + * @return
> + * - 0: Success; got n objects.
> + * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
Also, we should have, in addition to -ENOBUFS:
- <0: Another driver-specific error code (-errno)
This comment applies for the other functions below.
> */
> typedef int (*rte_mempool_dequeue_t)(struct rte_mempool *mp,
> void **obj_table, unsigned int n);
> @@ -738,7 +748,7 @@ rte_mempool_ops_alloc(struct rte_mempool *mp);
> * Number of objects to get.
> * @return
> * - 0: Success; got n objects.
> - * - <0: Error; code of dequeue function.
> + * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
> */
> static inline int
> rte_mempool_ops_dequeue_bulk(struct rte_mempool *mp,
> @@ -1452,8 +1462,8 @@ rte_mempool_put(struct rte_mempool *mp, void *obj)
> * @param cache
> * A pointer to a mempool cache structure. May be NULL if not needed.
> * @return
> - * - >=0: Success; number of objects supplied.
> - * - <0: Error; code of ring dequeue function.
> + * - 0: Success; got n objects.
> + * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
> */
> static __rte_always_inline int
> rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> @@ -1521,7 +1531,7 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> * Get several objects from the mempool.
> *
> * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
> * the local cache and common pool are empty, even if cache from other
> * lcores are full.
> *
> @@ -1534,8 +1544,8 @@ rte_mempool_do_generic_get(struct rte_mempool *mp, void **obj_table,
> * @param cache
> * A pointer to a mempool cache structure. May be NULL if not needed.
> * @return
> - * - 0: Success; objects taken.
> - * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + * - 0: Success; got n objects.
> + * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
> */
> static __rte_always_inline int
> rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> @@ -1557,7 +1567,7 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> * mempool creation time (see flags).
> *
> * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
> * the local cache and common pool are empty, even if cache from other
> * lcores are full.
> *
> @@ -1568,8 +1578,8 @@ rte_mempool_generic_get(struct rte_mempool *mp, void **obj_table,
> * @param n
> * The number of objects to get from the mempool to obj_table.
> * @return
> - * - 0: Success; objects taken
> - * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + * - 0: Success; got n objects.
> + * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
> */
> static __rte_always_inline int
> rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
> @@ -1588,7 +1598,7 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
> * mempool creation (see flags).
> *
> * If cache is enabled, objects will be retrieved first from cache,
> - * subsequently from the common pool. Note that it can return -ENOENT when
> + * subsequently from the common pool. Note that it can return -ENOBUFS when
> * the local cache and common pool are empty, even if cache from other
> * lcores are full.
> *
> @@ -1597,8 +1607,8 @@ rte_mempool_get_bulk(struct rte_mempool *mp, void **obj_table, unsigned int n)
> * @param obj_p
> * A pointer to a void * pointer (object) that will be filled.
> * @return
> - * - 0: Success; objects taken.
> - * - -ENOENT: Not enough entries in the mempool; no object is retrieved.
> + * - 0: Success; got n objects.
> + * - -ENOBUFS: Not enough entries in the mempool; no object is retrieved.
> */
> static __rte_always_inline int
> rte_mempool_get(struct rte_mempool *mp, void **obj_p)
> --
> 2.32.0
>
Thanks,
Olivier
^ permalink raw reply [relevance 3%]
* Re: [PATCH v3] mempool: fix put objects to mempool with cache
2022-01-19 15:03 3% ` [PATCH v3] " Morten Brørup
@ 2022-01-24 15:39 3% ` Olivier Matz
2022-01-28 9:37 0% ` Morten Brørup
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2022-01-24 15:39 UTC (permalink / raw)
To: Morten Brørup; +Cc: andrew.rybchenko, bruce.richardson, jerinjacobk, dev
Hi Morten,
On Wed, Jan 19, 2022 at 04:03:01PM +0100, Morten Brørup wrote:
> mempool: fix put objects to mempool with cache
>
> This patch optimizes the rte_mempool_do_generic_put() caching algorithm,
> and fixes a bug in it.
I think we should avoid grouping fixes and optimizations in one
patch. The main reason is that fixes aim to be backported, which
is not the case for optimizations.
> The existing algorithm was:
> 1. Add the objects to the cache
> 2. Anything greater than the cache size (if it crosses the cache flush
> threshold) is flushed to the ring.
>
> Please note that the description in the source code said that it kept
> "cache min value" objects after flushing, but the function actually kept
> "size" objects, which is reflected in the above description.
>
> Now, the algorithm is:
> 1. If the objects cannot be added to the cache without crossing the
> flush threshold, flush the cache to the ring.
> 2. Add the objects to the cache.
>
> This patch changes these details:
>
> 1. Bug: The cache was still full after flushing.
> In the opposite direction, i.e. when getting objects from the cache, the
> cache is refilled to full level when it crosses the low watermark (which
> happens to be zero).
> Similarly, the cache should be flushed to empty level when it crosses
> the high watermark (which happens to be 1.5 x the size of the cache).
> The existing flushing behaviour was suboptimal for real applications,
> because crossing the low or high watermark typically happens when the
> application is in a state where the number of put/get events are out of
> balance, e.g. when absorbing a burst of packets into a QoS queue
> (getting more mbufs from the mempool), or when a burst of packets is
> trickling out from the QoS queue (putting the mbufs back into the
> mempool).
> NB: When the application is in a state where put/get events are in
> balance, the cache should remain within its low and high watermarks, and
> the algorithms for refilling/flushing the cache should not come into
> play.
> Now, the mempool cache is completely flushed when crossing the flush
> threshold, so only the newly put (hot) objects remain in the mempool
> cache afterwards.
I'm not sure we should call this behavior a bug. What is the impact
on applications, from a user perspective? Can it break a use-case, or
have a significant performance impact?
> 2. Minor bug: The flush threshold comparison has been corrected; it must
> be "len > flushthresh", not "len >= flushthresh".
> Reasoning: Consider a flush multiplier of 1 instead of 1.5; the cache
> would be flushed already when reaching size elements, not when exceeding
> size elements.
> Now, flushing is triggered when the flush threshold is exceeded, not
> when reached.
Same here; we should ask ourselves what the impact is before calling
it a bug.
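To make the comparison change concrete, a small sketch with illustrative
numbers (assuming a cache size of 256 and a flush multiplier of 1.0, so
flushthresh == 256; not the patch's code):

#include <stdbool.h>
#include <stdint.h>

static bool
should_flush_old(uint32_t len, uint32_t flushthresh)
{
	return len >= flushthresh; /* fires when len merely reaches 256 */
}

static bool
should_flush_new(uint32_t len, uint32_t flushthresh)
{
	return len > flushthresh;  /* fires first at len == 257 */
}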
> 3. Optimization: The most recent (hot) objects are flushed, leaving the
> oldest (cold) objects in the mempool cache.
> This is bad for CPUs with a small L1 cache, because when they get
> objects from the mempool after the mempool cache has been flushed, they
> get cold objects instead of hot objects.
> Now, the existing (cold) objects in the mempool cache are flushed before
> the new (hot) objects are added the to the mempool cache.
>
> 4. Optimization: Using the x86 variant of rte_memcpy() is inefficient
> here, where n is relatively small and unknown at compile time.
> Now, it has been replaced by an alternative copying method, optimized
> for the fact that most Ethernet PMDs operate in bursts of 4 or 8 mbufs
> or multiples thereof.
For these optimizations, do you have an idea of the performance
gain? Ideally (I understand it is not always possible), each optimization
would be done separately, and its impact measured.
> v2 changes:
>
> - Not adding the new objects to the mempool cache before flushing it
> also allows the memory allocated for the mempool cache to be reduced
> from 3 x to 2 x RTE_MEMPOOL_CACHE_MAX_SIZE.
> However, this change would break the ABI, so it was removed in v2.
>
> - The mempool cache should be cache line aligned for the benefit of the
> copying method, which on some CPU architectures performs worse on data
> crossing a cache boundary.
> However, this change would break the ABI, so it was removed in v2;
> and yet another alternative copying method replaced the rte_memcpy().
OK, we may want to keep this in mind for the next ABI breakage.
>
> v3 changes:
>
> - Actually remove my modifications of the rte_mempool_cache structure.
>
> Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
> ---
> lib/mempool/rte_mempool.h | 51 +++++++++++++++++++++++++++++----------
> 1 file changed, 38 insertions(+), 13 deletions(-)
>
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 1e7a3c1527..7b364cfc74 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -1334,6 +1334,7 @@ static __rte_always_inline void
> rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
> unsigned int n, struct rte_mempool_cache *cache)
> {
> + uint32_t index;
> void **cache_objs;
>
> /* increment stat now, adding in mempool always success */
> @@ -1344,31 +1345,56 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
> if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
> goto ring_enqueue;
>
> - cache_objs = &cache->objs[cache->len];
> + /* If the request itself is too big for the cache */
> + if (unlikely(n > cache->flushthresh))
> + goto ring_enqueue;
>
> /*
> * The cache follows the following algorithm
> - * 1. Add the objects to the cache
> - * 2. Anything greater than the cache min value (if it crosses the
> - * cache flush threshold) is flushed to the ring.
> + * 1. If the objects cannot be added to the cache without
> + * crossing the flush threshold, flush the cache to the ring.
> + * 2. Add the objects to the cache.
> */
>
> - /* Add elements back into the cache */
> - rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
> + if (cache->len + n <= cache->flushthresh) {
> + cache_objs = &cache->objs[cache->len];
>
> - cache->len += n;
> + cache->len += n;
> + } else {
> + cache_objs = cache->objs;
>
> - if (cache->len >= cache->flushthresh) {
> - rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
> - cache->len - cache->size);
> - cache->len = cache->size;
> +#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> + if (rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len) < 0)
> + rte_panic("cannot put objects in mempool\n");
> +#else
> + rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
> +#endif
> + cache->len = n;
> + }
> +
> + /* Add the objects to the cache. */
> + for (index = 0; index < (n & ~0x3); index += 4) {
> + cache_objs[index] = obj_table[index];
> + cache_objs[index + 1] = obj_table[index + 1];
> + cache_objs[index + 2] = obj_table[index + 2];
> + cache_objs[index + 3] = obj_table[index + 3];
> + }
> + switch (n & 0x3) {
> + case 3:
> + cache_objs[index] = obj_table[index];
> + index++; /* fallthrough */
> + case 2:
> + cache_objs[index] = obj_table[index];
> + index++; /* fallthrough */
> + case 1:
> + cache_objs[index] = obj_table[index];
> }
>
> return;
>
> ring_enqueue:
>
> - /* push remaining objects in ring */
> + /* Put the objects into the ring */
> #ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
> rte_panic("cannot put objects in mempool\n");
> @@ -1377,7 +1403,6 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
> #endif
> }
>
> -
> /**
> * Put several objects back in the mempool.
> *
> --
> 2.17.1
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints
@ 2022-01-24 14:36 3% ` Jerin Jacob
2022-01-24 17:35 0% ` Thomas Monjalon
2022-01-24 17:40 0% ` Ajit Khaparde
0 siblings, 2 replies; 200+ results
From: Jerin Jacob @ 2022-01-24 14:36 UTC (permalink / raw)
To: Alexander Kozyrev
Cc: dpdk-dev, Ori Kam, Thomas Monjalon, Ivan Malov, Andrew Rybchenko,
Ferruh Yigit, mohammad.abdul.awal, Qi Zhang, Jerin Jacob,
Ajit Khaparde
On Tue, Jan 18, 2022 at 9:01 PM Alexander Kozyrev <akozyrev@nvidia.com> wrote:
>
> The flow rules creation/destruction at a large scale incurs a performance
> penalty and may negatively impact the packet processing when used
> as part of the datapath logic. This is mainly because software/hardware
> resources are allocated and prepared during the flow rule creation.
>
> In order to optimize the insertion rate, PMD may use some hints provided
> by the application at the initialization phase. The rte_flow_configure()
> function allows to pre-allocate all the needed resources beforehand.
> These resources can be used at a later stage without costly allocations.
> Every PMD may use only the subset of hints and ignore unused ones or
> fail in case the requested configuration is not supported.
>
> Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
> ---
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Flow engine port configuration attributes.
> + */
> +__extension__
Is this __extension__ required?
> +struct rte_flow_port_attr {
> + /**
> + * Version of the struct layout, should be 0.
> + */
> + uint32_t version;
Why a version number? Across DPDK we are using dynamic function
versioning; I think that would
be sufficient for ABI versioning.
> + /**
> + * Number of counter actions pre-configured.
> + * If set to 0, PMD will allocate counters dynamically.
> + * @see RTE_FLOW_ACTION_TYPE_COUNT
> + */
> + uint32_t nb_counters;
> + /**
> + * Number of aging actions pre-configured.
> + * If set to 0, PMD will allocate aging dynamically.
> + * @see RTE_FLOW_ACTION_TYPE_AGE
> + */
> + uint32_t nb_aging;
> + /**
> + * Number of traffic metering actions pre-configured.
> + * If set to 0, PMD will allocate meters dynamically.
> + * @see RTE_FLOW_ACTION_TYPE_METER
> + */
> + uint32_t nb_meters;
> +};
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Configure flow rules module.
> + * To pre-allocate resources as per the flow port attributes
> + * this configuration function must be called before any flow rule is created.
> + * Must be called only after Ethernet device is configured, but may be called
> + * before or after the device is started as long as there are no flow rules.
> + * No other rte_flow function should be called while this function is invoked.
> + * This function can be called again to change the configuration.
> + * Some PMDs may not support re-configuration at all,
> + * or may only allow increasing the number of resources allocated.
The following comment from Ivan looks good to me:
* Pre-configure the port's flow API engine.
*
* This API can only be invoked before the application
* starts using the rest of the flow library functions.
*
* The API can be invoked multiple times to change the
* settings. The port, however, may reject the changes.
> + *
> + * @param port_id
> + * Port identifier of Ethernet device.
> + * @param[in] port_attr
> + * Port configuration attributes.
> + * @param[out] error
> + * Perform verbose error reporting if not NULL.
> + * PMDs initialize this structure in case of error only.
> + *
> + * @return
> + * 0 on success, a negative errno value otherwise and rte_errno is set.
> + */
> +__rte_experimental
> +int
> +rte_flow_configure(uint16_t port_id,
Should we couple setting the resource limit hints to the configure
function? If we add future items to the configuration, it may become
painful to manage all the state. Instead, how about
rte_flow_resource_reserve_hint_set()?
> + const struct rte_flow_port_attr *port_attr,
> + struct rte_flow_error *error);
I think we should have a _get function to retrieve those limit numbers;
otherwise, we cannot write portable applications, as the return value is
effectively boolean if we don't
define exact rte_errno values for the failure reasons.
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/ethdev/rte_flow_driver.h b/lib/ethdev/rte_flow_driver.h
> index f691b04af4..5f722f1a39 100644
> --- a/lib/ethdev/rte_flow_driver.h
> +++ b/lib/ethdev/rte_flow_driver.h
> @@ -152,6 +152,11 @@ struct rte_flow_ops {
> (struct rte_eth_dev *dev,
> const struct rte_flow_item_flex_handle *handle,
> struct rte_flow_error *error);
> + /** See rte_flow_configure() */
> + int (*configure)
> + (struct rte_eth_dev *dev,
> + const struct rte_flow_port_attr *port_attr,
> + struct rte_flow_error *err);
> };
>
> /**
> diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map
> index c2fb0669a4..7645796739 100644
> --- a/lib/ethdev/version.map
> +++ b/lib/ethdev/version.map
> @@ -256,6 +256,9 @@ EXPERIMENTAL {
> rte_flow_flex_item_create;
> rte_flow_flex_item_release;
> rte_flow_pick_transfer_proxy;
> +
> + # added in 22.03
> + rte_flow_configure;
> };
>
> INTERNAL {
> --
> 2.18.2
>
^ permalink raw reply [relevance 3%]
* Re: [PATCH v2 1/1] mempool: implement index-based per core cache
2022-01-21 7:36 4% ` Morten Brørup
@ 2022-01-24 13:05 0% ` Ray Kinsella
0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2022-01-24 13:05 UTC (permalink / raw)
To: Morten Brørup
Cc: Honnappa Nagarahalli, Dharmik Thakkar, Olivier Matz,
Andrew Rybchenko, dev, Ruifeng Wang, Beilei Xing, nd
Morten Brørup <mb@smartsharesystems.com> writes:
> +Ray Kinsella, ABI Policy maintainer
>
>> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
>> Sent: Friday, 21 January 2022 07.01
>>
>> >
>> > +CC Beilei as i40e maintainer
>> >
>> > > From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
>> > > Sent: Thursday, 13 January 2022 06.37
>> > >
>> > > Current mempool per core cache implementation stores pointers to
>> mbufs
>> > > On 64b architectures, each pointer consumes 8B This patch replaces
>> it
>> > > with index-based implementation, where in each buffer is addressed
>> by
>> > > (pool base address + index) It reduces the amount of memory/cache
>> > > required for per core cache
>> > >
>> > > L3Fwd performance testing reveals minor improvements in the cache
>> > > performance (L1 and L2 misses reduced by 0.60%) with no change in
>> > > throughput
>> > >
>> > > Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>> > > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
>> > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
>> > > ---
>> > > lib/mempool/rte_mempool.h | 150
>> +++++++++++++++++++++++++-
>> > > lib/mempool/rte_mempool_ops_default.c | 7 ++
>> > > 2 files changed, 156 insertions(+), 1 deletion(-)
>> > >
>> > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
>> > > index 1e7a3c15273c..f2403fbc97a7 100644
>> > > --- a/lib/mempool/rte_mempool.h
>> > > +++ b/lib/mempool/rte_mempool.h
>> > > @@ -50,6 +50,10 @@
>> > > #include <rte_memcpy.h>
>> > > #include <rte_common.h>
>> > >
>> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
>> > > +#include <rte_vect.h>
>> > > +#endif
>> > > +
>> > > #include "rte_mempool_trace_fp.h"
>> > >
>> > > #ifdef __cplusplus
>> > > @@ -239,6 +243,9 @@ struct rte_mempool {
>> > > int32_t ops_index;
>> > >
>> > > struct rte_mempool_cache *local_cache; /**< Per-lcore local cache
>> */
>> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
>> > > + void *pool_base_value; /**< Base value to calculate indices */
>> > > +#endif
>> > >
>> > > uint32_t populated_size; /**< Number of populated
>> > > objects. */
>> > > struct rte_mempool_objhdr_list elt_list; /**< List of objects in
>> > > pool */ @@ -1314,7 +1321,22 @@ rte_mempool_cache_flush(struct
>> > > rte_mempool_cache *cache,
>> > > if (cache == NULL || cache->len == 0)
>> > > return;
>> > > rte_mempool_trace_cache_flush(cache, mp);
>> > > +
>> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
>> > > + unsigned int i;
>> > > + unsigned int cache_len = cache->len;
>> > > + void *obj_table[RTE_MEMPOOL_CACHE_MAX_SIZE * 3];
>> > > + void *base_value = mp->pool_base_value;
>> > > + uint32_t *cache_objs = (uint32_t *) cache->objs;
>> >
>> > Hi Dharmik and Honnappa,
>> >
>> > The essence of this patch is based on recasting the type of the objs
>> field in the
>> > rte_mempool_cache structure from an array of pointers to an array of
>> > uint32_t.
>> >
>> > However, this effectively breaks the ABI, because the
>> rte_mempool_cache
>> > structure is public and part of the API.
>> The patch does not change the public structure, the new member is under
>> compile time flag, not sure how it breaks the ABI.
>>
>> >
>> > Some drivers [1] even bypass the mempool API and access the
>> > rte_mempool_cache structure directly, assuming that the objs array in
>> the
>> > cache is an array of pointers. So you cannot recast the fields in the
>> > rte_mempool_cache structure the way this patch requires.
>> IMO, those drivers are at fault. The mempool cache structure is public
>> only because the APIs are inline. We should still maintain modularity
>> and not use the members of structures belonging to another library
>> directly. A similar effort involving rte_ring was not accepted sometime
>> back [1]
>>
>> [1]
>> http://inbox.dpdk.org/dev/DBAPR08MB5814907968595EE56F5E20A798390@DBAPR0
>> 8MB5814.eurprd08.prod.outlook.com/
>>
>> >
>> > Although I do consider bypassing an API's accessor functions
>> "spaghetti
>> > code", this driver's behavior is formally acceptable as long as the
>> > rte_mempool_cache structure is not marked as internal.
>> >
>> > I really liked your idea of using indexes instead of pointers, so I'm
>> very sorry to
>> > shoot it down. :-(
>> >
>> > [1]: E.g. the Intel i40e PMD,
>> >
>> http://code.dpdk.org/dpdk/latest/source/drivers/net/i40e/i40e_rxtx_vec_
>> avx
>> > 512.c#L25
>> It is possible to throw an error when this feature is enabled in this
>> file. Alternatively, this PMD could implement the code for index based
>> mempool.
>>
>
> I agree with both your points, Honnappa.
>
> The ABI remains intact, and only changes when this feature is enabled at compile time.
>
> In addition to your suggestions, I propose that the patch modifies the objs type in the mempool cache structure itself, instead of type casting it through an access variable. This should throw an error when compiling an application that accesses it as a pointer array instead of a uint32_t array - like the affected Intel PMDs.
>
> The updated objs field in the mempool cache structure should have the same size when compiled as the original objs field, so this feature doesn't change anything else in the ABI, only the type of the mempool cache objects.
>
> Also, the description of the feature should stress that applications accessing the cache objects directly will fail miserably.
Thanks for CC'ing me Morten.
My 2c is that I would be slow to support this patch, as it introduces
code paths that are harder (impossible?) to test regularly. So yes, it
is optional, but in that case are we just automatically adding dead code?
I would ask if a runtime option would not make more sense for this.
Also, we can't automatically assume that what the PMDs are doing breaks
an unwritten rule (breaking abstractions); I would guess they are doing
it for solid performance reasons. If so, that would further support
my point about making the mempool runtime-configurable and query-able
(is this mempool a bucket of indexes or of pointers, etc.), enabling the
PMDs to ask rather than assume.
Like Morten, I like the idea: saving memory and reducing cache misses
with indexes is all good IMHO.
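For reference, the conversion at the heart of the proposal is tiny; a
rough sketch (illustrative helpers, not the patch's code):

#include <stdint.h>

/* Store 32-bit offsets from the pool base in the per-core cache instead
 * of 8-byte pointers, halving the cache footprint on 64-bit CPUs. */
static inline uint32_t
obj_to_index(const void *pool_base, const void *obj)
{
	return (uint32_t)((uintptr_t)obj - (uintptr_t)pool_base);
}

static inline void *
index_to_obj(const void *pool_base, uint32_t idx)
{
	return (void *)((uintptr_t)pool_base + idx);
}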
--
Regards, Ray K
^ permalink raw reply [relevance 0%]
* [PATCH v6 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
@ 2022-01-24 12:28 3% ` Xiaoyun Li
0 siblings, 0 replies; 200+ results
From: Xiaoyun Li @ 2022-01-24 12:28 UTC (permalink / raw)
To: ktraynor, Aman.Deep.Singh, ferruh.yigit, olivier.matz, mb,
konstantin.ananyev, stephen, vladimir.medvedkin
Cc: dev, Xiaoyun Li, Aman Singh, Sunil Pai G
Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
UDP/TCP checksum in mbuf which can be over multi-segments.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Sunil Pai G <sunil.pai.g@intel.com>
---
doc/guides/rel_notes/release_22_03.rst | 11 ++
lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
2 files changed, 197 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..785fd22001 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,14 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added functions to calculate UDP/TCP checksum in mbuf.**
+
+ * Added the following functions to calculate UDP/TCP checksum of packets
+ which can be over multi-segments:
+ - ``rte_ipv4_udptcp_cksum_mbuf()``
+ - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
+ - ``rte_ipv6_udptcp_cksum_mbuf()``
+ - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
Removed Items
-------------
@@ -84,6 +92,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
+ ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
+ ``rte_ipv6_udptcp_cksum_mbuf_verify()``
ABI Changes
-----------
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index c575250852..534f401d26 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Compute the IPv4 UDP/TCP checksum of a packet.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv4 UDP or TCP checksum.
*
@@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Verify the IPv4 UDP/TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
+ * (i.e. no checksum).
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/**
* IPv6 Header
*/
@@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process the IPv6 UDP or TCP checksum of a packet.
+ *
+ * The IPv6 header must not be followed by extension headers. The layer 4
+ * checksum must be set to 0 in the L4 header by the caller.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv6 UDP or TCP checksum.
*
@@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Validate the IPv6 UDP or TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
+ * this is either invalid or means no checksum in some situations. See 8.1
+ * (Upper-Layer Checksums) in RFC 8200.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/** IPv6 fragment extension header. */
#define RTE_IPV6_EHDR_MF_SHIFT 0
#define RTE_IPV6_EHDR_MF_MASK 1
--
2.25.1
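A hypothetical usage sketch of the new IPv4 helper (application code
assumed, not part of the patch; it also assumes the L3 and L4 headers sit
in the first mbuf segment, so the pointer casts below are valid):

#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_udp.h>

/* Compute the UDP checksum of a possibly multi-segment IPv4 packet
 * whose IP header starts at byte offset l3_off. */
static void
set_udp_cksum(struct rte_mbuf *m, uint16_t l3_off)
{
	struct rte_ipv4_hdr *ip;
	struct rte_udp_hdr *udp;
	uint16_t l4_off;

	ip = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *, l3_off);
	l4_off = l3_off + rte_ipv4_hdr_len(ip);
	udp = rte_pktmbuf_mtod_offset(m, struct rte_udp_hdr *, l4_off);
	udp->dgram_cksum = 0; /* checksum field must be zero beforehand */
	udp->dgram_cksum = rte_ipv4_udptcp_cksum_mbuf(m, ip, l4_off);
}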
^ permalink raw reply [relevance 3%]
* Re: [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
2022-01-21 10:22 0% ` Ferruh Yigit
@ 2022-01-24 5:12 0% ` Hemant Agrawal
2022-02-12 14:01 0% ` Yanling Song
1 sibling, 0 replies; 200+ results
From: Hemant Agrawal @ 2022-01-24 5:12 UTC (permalink / raw)
To: Ferruh Yigit, Yanling Song
Cc: dev, yanling.song, yanggan, xuyun, stephen, lihuisong,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Hemant Agrawal
On 1/21/2022 3:52 PM, Ferruh Yigit wrote:
> On 1/21/2022 9:27 AM, Yanling Song wrote:
>> On Wed, 19 Jan 2022 16:56:52 +0000
>> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>
>>> On 12/30/2021 6:08 AM, Yanling Song wrote:
>>>> The patchsets introduce SPNIC driver for Ramaxel's SPNxx serial NIC
>>>> cards into DPDK 22.03. Ramaxel Memory Technology is a company which
>>>> supply a lot of electric products: storage, communication, PCB...
>>>> SPNxxx is a serial PCIE interface NIC cards:
>>>> SPN110: 2 PORTs *25G
>>>> SPN120: 4 PORTs *25G
>>>> SPN130: 2 PORTs *100G
>>>
>>> Hi Yanling,
>>>
>>> As far as I can see hnic (from Huawei) and this spnic drivers are
>>> alike, what is the relation between these two?
>>>
>> It is hard to create a brand new driver from scratch, so we referenced
>> the hinic driver when developing spnic.
>>
>
> That is OK, but given the similarity of the code you may consider
> keeping the original code Copyright; I didn't investigate at
> that level but cc'ed the hinic maintainers for info.
> Also cc'ed Hemant for guidance.
>
>
Yes, if a large part of the code is derived from an existing driver, it is
advisable to keep the original copyright. You can add your copyright as
well if you have changed the code.
Alternatively, you can provide a reference in the file header stating that
this file is derived from file xx, which has license type Y.
> But my question was more related to the HW: is there any relation between
> the hinic HW and the spnic HW? Like one being derived from the other, etc.
>
>>>> The following is main features of our SPNIC:
>>>> - TSO
>>>> - LRO
>>>> - Flow control
>>>> - SR-IOV(Partially supported)
>>>> - VLAN offload
>>>> - VLAN filter
>>>> - CRC offload
>>>> - Promiscuous mode
>>>> - RSS
>>>>
>>>> v6->v5, No real changes:
>>>> 1. Move the fix of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS from patch 26
>>>> to patch 2; 2. Change the description of patch 26.
>>>>
>>>> v5->v4:
>>>> 1. Add prefix "spinc_" for external functions;
>>>> 2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
>>>> 3. Do not use void* for keeping the type information
>>>>
>>>> v3->v4:
>>>> 1. Fix ABI test failure;
>>>> 2. Remove some descriptions in spnic.rst.
>>>>
>>>> v2->v3:
>>>> 1. Fix clang compiling failure.
>>>>
>>>> v1->v2:
>>>> 1. Fix coding style issues and compiling failures;
>>>> 2. Only support linux in meson.build;
>>>> 3. Use CLOCK_MONOTONIC_COARSE instead of
>>>> CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW; 4. Fix time_before();
>>>> 5. Remove redundant checks in spnic_dev_configure();
>>>>
>>>> Yanling Song (26):
>>>> drivers/net: introduce a new PMD driver
>>>> net/spnic: initialize the HW interface
>>>> net/spnic: add mbox message channel
>>>> net/spnic: introduce event queue
>>>> net/spnic: add mgmt module
>>>> net/spnic: add cmdq and work queue
>>>> net/spnic: add interface handling cmdq message
>>>> net/spnic: add hardware info initialization
>>>> net/spnic: support MAC and link event handling
>>>> net/spnic: add function info initialization
>>>> net/spnic: add queue pairs context initialization
>>>> net/spnic: support mbuf handling of Tx/Rx
>>>> net/spnic: support Rx congfiguration
>>>> net/spnic: add port/vport enable
>>>> net/spnic: support IO packets handling
>>>> net/spnic: add device configure/version/info
>>>> net/spnic: support RSS configuration update and get
>>>> net/spnic: support VLAN filtering and offloading
>>>> net/spnic: support promiscuous and allmulticast Rx modes
>>>> net/spnic: support flow control
>>>> net/spnic: support getting Tx/Rx queues info
>>>> net/spnic: net/spnic: support xstats statistics
>>>> net/spnic: support VFIO interrupt
>>>> net/spnic: support Tx/Rx queue start/stop
>>>> net/spnic: add doc infrastructure
>>>> net/spnic: fixes unsafe C style code
>>>
>>> <...>
>>
>
^ permalink raw reply [relevance 0%]
* [PATCH v4] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-23 21:07 8% ` [PATCH v3] " Michael Barker
@ 2022-01-23 21:20 8% ` Michael Barker
2022-01-25 10:33 0% ` Ray Kinsella
2022-01-31 0:05 8% ` [PATCH v5] " Michael Barker
0 siblings, 2 replies; 200+ results
From: Michael Barker @ 2022-01-23 21:20 UTC (permalink / raw)
To: dev; +Cc: Michael Barker, Ray Kinsella
When compiling with clang using -Wall (or -Wgcc-compat) the use of
diagnose_if kicks up a warning:
.../include/rte_interrupts.h:623:1: error: 'diagnose_if' is a clang
extension [-Werror,-Wgcc-compat]
__rte_internal
^
.../include/rte_compat.h:36:16: note: expanded from macro '__rte_internal'
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
This change ignores the '-Wgcc-compat' warning in the specific location
where the warning occurs. It is safe to do so in this circumstance, as the
specific macro is only defined when using the clang compiler.
Signed-off-by: Michael Barker <mikeb01@gmail.com>
---
lib/eal/include/rte_compat.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index 2718612cce..9556bbf4d0 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -33,8 +33,11 @@ section(".text.internal")))
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
#define __rte_internal \
+_Pragma("GCC diagnostic push") \
+_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
-section(".text.internal")))
+section(".text.internal"))) \
+_Pragma("GCC diagnostic pop")
#else
--
2.25.1
^ permalink raw reply [relevance 8%]
* Re: [PATCH v2] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-20 14:16 0% ` Thomas Monjalon
@ 2022-01-23 21:17 0% ` Michael Barker
0 siblings, 0 replies; 200+ results
From: Michael Barker @ 2022-01-23 21:17 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: dev, Ray Kinsella
On Fri, 21 Jan 2022 at 03:16, Thomas Monjalon <thomas@monjalon.net> wrote:
> 18/01/2022 00:23, Michael Barker:
> > When using clang with -Wall the use of diagnose_if kicks up a warning,
>
> Please could you copy the warning in the commit log?
>
I've updated the commit log to be more descriptive (and included the
associated warning).
> > requiring all dpdk includes to be wrapped with the pragma. This change
> > isolates the ignore just the appropriate location and makes it easier
> > for users to apply -Wall,-Werror
>
> Please could you explain how it is related to -Wgcc-compat?
>
I'm currently working on some code that makes use of DPDK, which is built
with '-Wall,-Werror' enabled. When using the clang toolchain the build
fails as a result of the macro that this patch updates. The workaround
in my application is to wrap all of the DPDK header includes in a pragma
to disable the warnings (see below). This has the unfortunate side effect
of disabling this warning across all of the included DPDK headers, which
is not ideal. Hence this patch, which disables the warning just in the
location where it occurs.
#if defined(__clang__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wgcc-compat"
#endif
#include <rte_ethdev.h>
#if defined(__clang__)
#pragma GCC diagnostic pop
#endif
>
> [...]
> > #define __rte_internal \
> > +_Pragma("GCC diagnostic push") \
> > +_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
> > __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> > -section(".text.internal")))
> > +section(".text.internal"))) \
> > +_Pragma("GCC diagnostic pop")
>
>
>
>
^ permalink raw reply [relevance 0%]
* [PATCH v3] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-17 23:23 4% ` [PATCH v2] " Michael Barker
2022-01-20 14:16 0% ` Thomas Monjalon
@ 2022-01-23 21:07 8% ` Michael Barker
2022-01-23 21:20 8% ` [PATCH v4] " Michael Barker
1 sibling, 1 reply; 200+ results
From: Michael Barker @ 2022-01-23 21:07 UTC (permalink / raw)
To: dev; +Cc: Michael Barker, Ray Kinsella
When compiling with clang using -Wall (or -Wgcc-compat) the use of diagnose_if kicks up a warning:
.../include/rte_interrupts.h:623:1: error: 'diagnose_if' is a clang extension [-Werror,-Wgcc-compat]
__rte_internal
^
.../include/rte_compat.h:36:16: note: expanded from macro '__rte_internal'
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
This change ignores the '-Wgcc-compat' warning in the specific location
where the warning occurs. It is safe to do so in this circumstance, as the
specific macro is only defined when using the clang compiler.
Signed-off-by: Michael Barker <mikeb01@gmail.com>
---
lib/eal/include/rte_compat.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index 2718612cce..9556bbf4d0 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -33,8 +33,11 @@ section(".text.internal")))
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
#define __rte_internal \
+_Pragma("GCC diagnostic push") \
+_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
-section(".text.internal")))
+section(".text.internal"))) \
+_Pragma("GCC diagnostic pop")
#else
--
2.25.1
^ permalink raw reply [relevance 8%]
* Re: [PATCH v3] eventdev/eth_rx: add event port get api
2022-01-22 17:14 4% ` [PATCH v3] eventdev/eth_rx: " Naga Harish K S V
@ 2022-01-23 15:32 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2022-01-23 15:32 UTC (permalink / raw)
To: Naga Harish K S V; +Cc: Jayatheerthan, Jay, dpdk-dev
On Sat, Jan 22, 2022 at 10:44 PM Naga Harish K S V
<s.v.naga.harish.k@intel.com> wrote:
>
> This patch introduces new api for retrieving event port id
> of eth rx adapter.
>
> Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
> Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
>
> ---
> v3:
> * update commit message head line
Applied to dpdk-next-net-eventdev/for-main with the following changes. Thanks
1) api -> API in git commit message
2) Added following in "New Features" section in
doc/guides/rel_notes/release_22_03.rst
* **Added an API to retrieve event port id of eth rx adapter.**
A new API, ``rte_event_eth_rx_adapter_event_port_get()``, was added.
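A hypothetical usage sketch (application code, not part of the patch):

#include <errno.h>
#include <stdio.h>

#include <rte_event_eth_rx_adapter.h>

static void
show_adapter_port(uint8_t id)
{
	uint8_t evport;
	int ret = rte_event_eth_rx_adapter_event_port_get(id, &evport);

	if (ret == 0)
		printf("adapter %u enqueues on event port %u\n", id, evport);
	else if (ret == -ESRCH)
		printf("adapter %u does not use a service function\n", id);
	else
		printf("adapter %u: error %d\n", id, ret);
}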
>
> v2:
> * address review comments
>
> v1:
> * initial implementation
> ---
> doc/guides/rel_notes/release_22_03.rst | 2 ++
> lib/eventdev/rte_event_eth_rx_adapter.c | 20 ++++++++++++++++++++
> lib/eventdev/rte_event_eth_rx_adapter.h | 20 ++++++++++++++++++++
> lib/eventdev/version.map | 3 +++
> 4 files changed, 45 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index 6d99d1eaa9..288d94c0e6 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -83,6 +83,8 @@ API Changes
> This section is a comment. Do not overwrite or remove it.
> Also, make sure to start the actual text at the margin.
> =======================================================
> +* eventdev: Added new API ``rte_event_eth_rx_adapter_event_port_get``,
> + to retrieve event port id of eth rx adapter.
>
>
> ABI Changes
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
> index f946137b25..ae1e260c08 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.c
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.c
> @@ -3123,6 +3123,26 @@ rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
> return rx_adapter->service_inited ? 0 : -ESRCH;
> }
>
> +int
> +rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
> +{
> + struct event_eth_rx_adapter *rx_adapter;
> +
> + if (rxa_memzone_lookup())
> + return -ENOMEM;
> +
> + RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
> +
> + rx_adapter = rxa_id_to_adapter(id);
> + if (rx_adapter == NULL || event_port_id == NULL)
> + return -EINVAL;
> +
> + if (rx_adapter->service_inited)
> + *event_port_id = rx_adapter->event_port_id;
> +
> + return rx_adapter->service_inited ? 0 : -ESRCH;
> +}
> +
> int
> rte_event_eth_rx_adapter_cb_register(uint8_t id,
> uint16_t eth_dev_id,
> diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
> index 9546d792e9..3608a7b2cf 100644
> --- a/lib/eventdev/rte_event_eth_rx_adapter.h
> +++ b/lib/eventdev/rte_event_eth_rx_adapter.h
> @@ -37,6 +37,7 @@
> * - rte_event_eth_rx_adapter_queue_conf_get()
> * - rte_event_eth_rx_adapter_queue_stats_get()
> * - rte_event_eth_rx_adapter_queue_stats_reset()
> + * - rte_event_eth_rx_adapter_event_port_get()
> *
> * The application creates an ethernet to event adapter using
> * rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
> @@ -684,6 +685,25 @@ rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
> uint16_t eth_dev_id,
> uint16_t rx_queue_id);
>
> +/**
> + * Retrieve the event port ID of an adapter. If the adapter doesn't use
> + * a rte_service function, this function returns -ESRCH.
> + *
> + * @param id
> + * Adapter identifier.
> + *
> + * @param [out] event_port_id
> + * A pointer to a uint8_t, to be filled in with the port id.
> + *
> + * @return
> + * - 0: Success
> + * - <0: Error code on failure, if the adapter doesn't use a rte_service
> + * function, this function returns -ESRCH.
> + */
> +__rte_experimental
> +int
> +rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
> index ade1f1182e..cd5dada07f 100644
> --- a/lib/eventdev/version.map
> +++ b/lib/eventdev/version.map
> @@ -105,6 +105,9 @@ EXPERIMENTAL {
> rte_event_eth_rx_adapter_queue_conf_get;
> rte_event_eth_rx_adapter_queue_stats_get;
> rte_event_eth_rx_adapter_queue_stats_reset;
> +
> + # added in 22.03
> + rte_event_eth_rx_adapter_event_port_get;
> };
>
> INTERNAL {
> --
> 2.25.1
>
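A minimal usage sketch of the new call, for context; the adapter id (0), the
includes and the error handling are illustrative only, and the adapter is
assumed to use the service-core (SW) transfer path (otherwise -ESRCH is
returned, as documented above):

    #include <stdio.h>
    #include <errno.h>
    #include <rte_event_eth_rx_adapter.h>

    uint8_t evport;
    int ret = rte_event_eth_rx_adapter_event_port_get(0, &evport);
    if (ret == 0)
        printf("Rx adapter 0 enqueues via event port %u\n", evport);
    else if (ret == -ESRCH)
        printf("Rx adapter 0 does not use a service function\n");
    else
        printf("query failed: %d\n", ret);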
^ permalink raw reply [relevance 0%]
* [PATCH v3] eventdev/eth_rx: add event port get api
2022-01-22 17:07 4% ` [PATCH v2] " Naga Harish K S V
@ 2022-01-22 17:14 4% ` Naga Harish K S V
2022-01-23 15:32 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Naga Harish K S V @ 2022-01-22 17:14 UTC (permalink / raw)
To: jerinjacobk, jay.jayatheerthan; +Cc: dev
This patch introduces a new API for retrieving the event port ID
of the eth Rx adapter.
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
---
v3:
* update commit message head line
v2:
* address review comments
v1:
* initial implementation
---
doc/guides/rel_notes/release_22_03.rst | 2 ++
lib/eventdev/rte_event_eth_rx_adapter.c | 20 ++++++++++++++++++++
lib/eventdev/rte_event_eth_rx_adapter.h | 20 ++++++++++++++++++++
lib/eventdev/version.map | 3 +++
4 files changed, 45 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..288d94c0e6 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -83,6 +83,8 @@ API Changes
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=======================================================
+* eventdev: Added new API ``rte_event_eth_rx_adapter_event_port_get``,
+ to retrieve event port id of eth rx adapter.
ABI Changes
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index f946137b25..ae1e260c08 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -3123,6 +3123,26 @@ rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
return rx_adapter->service_inited ? 0 : -ESRCH;
}
+int
+rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+ struct event_eth_rx_adapter *rx_adapter;
+
+ if (rxa_memzone_lookup())
+ return -ENOMEM;
+
+ RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+ rx_adapter = rxa_id_to_adapter(id);
+ if (rx_adapter == NULL || event_port_id == NULL)
+ return -EINVAL;
+
+ if (rx_adapter->service_inited)
+ *event_port_id = rx_adapter->event_port_id;
+
+ return rx_adapter->service_inited ? 0 : -ESRCH;
+}
+
int
rte_event_eth_rx_adapter_cb_register(uint8_t id,
uint16_t eth_dev_id,
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
index 9546d792e9..3608a7b2cf 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.h
+++ b/lib/eventdev/rte_event_eth_rx_adapter.h
@@ -37,6 +37,7 @@
* - rte_event_eth_rx_adapter_queue_conf_get()
* - rte_event_eth_rx_adapter_queue_stats_get()
* - rte_event_eth_rx_adapter_queue_stats_reset()
+ * - rte_event_eth_rx_adapter_event_port_get()
*
* The application creates an ethernet to event adapter using
* rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
@@ -684,6 +685,25 @@ rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
uint16_t eth_dev_id,
uint16_t rx_queue_id);
+/**
+ * Retrieve the event port ID of an adapter. If the adapter doesn't use
+ * a rte_service function, this function returns -ESRCH.
+ *
+ * @param id
+ * Adapter identifier.
+ *
+ * @param [out] event_port_id
+ * A pointer to a uint8_t, to be filled in with the port id.
+ *
+ * @return
+ * - 0: Success
+ * - <0: Error code on failure, if the adapter doesn't use a rte_service
+ * function, this function returns -ESRCH.
+ */
+__rte_experimental
+int
+rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index ade1f1182e..cd5dada07f 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -105,6 +105,9 @@ EXPERIMENTAL {
rte_event_eth_rx_adapter_queue_conf_get;
rte_event_eth_rx_adapter_queue_stats_get;
rte_event_eth_rx_adapter_queue_stats_reset;
+
+ # added in 22.03
+ rte_event_eth_rx_adapter_event_port_get;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 4%]
* [PATCH v2] eventdev/rx_adapter: add event port get api
@ 2022-01-22 17:07 4% ` Naga Harish K S V
2022-01-22 17:14 4% ` [PATCH v3] eventdev/eth_rx: " Naga Harish K S V
0 siblings, 1 reply; 200+ results
From: Naga Harish K S V @ 2022-01-22 17:07 UTC (permalink / raw)
To: jerinjacobk, jay.jayatheerthan; +Cc: dev
This patch introduces a new API for retrieving the event port ID
of the eth Rx adapter.
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
---
doc/guides/rel_notes/release_22_03.rst | 2 ++
lib/eventdev/rte_event_eth_rx_adapter.c | 20 ++++++++++++++++++++
lib/eventdev/rte_event_eth_rx_adapter.h | 20 ++++++++++++++++++++
lib/eventdev/version.map | 3 +++
4 files changed, 45 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..288d94c0e6 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -83,6 +83,8 @@ API Changes
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=======================================================
+* eventdev: Added new API ``rte_event_eth_rx_adapter_event_port_get``,
+ to retrieve event port id of eth rx adapter.
ABI Changes
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index f946137b25..ae1e260c08 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -3123,6 +3123,26 @@ rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
return rx_adapter->service_inited ? 0 : -ESRCH;
}
+int
+rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+ struct event_eth_rx_adapter *rx_adapter;
+
+ if (rxa_memzone_lookup())
+ return -ENOMEM;
+
+ RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+ rx_adapter = rxa_id_to_adapter(id);
+ if (rx_adapter == NULL || event_port_id == NULL)
+ return -EINVAL;
+
+ if (rx_adapter->service_inited)
+ *event_port_id = rx_adapter->event_port_id;
+
+ return rx_adapter->service_inited ? 0 : -ESRCH;
+}
+
int
rte_event_eth_rx_adapter_cb_register(uint8_t id,
uint16_t eth_dev_id,
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
index 9546d792e9..3608a7b2cf 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.h
+++ b/lib/eventdev/rte_event_eth_rx_adapter.h
@@ -37,6 +37,7 @@
* - rte_event_eth_rx_adapter_queue_conf_get()
* - rte_event_eth_rx_adapter_queue_stats_get()
* - rte_event_eth_rx_adapter_queue_stats_reset()
+ * - rte_event_eth_rx_adapter_event_port_get()
*
* The application creates an ethernet to event adapter using
* rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
@@ -684,6 +685,25 @@ rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
uint16_t eth_dev_id,
uint16_t rx_queue_id);
+/**
+ * Retrieve the event port ID of an adapter. If the adapter doesn't use
+ * a rte_service function, this function returns -ESRCH.
+ *
+ * @param id
+ * Adapter identifier.
+ *
+ * @param [out] event_port_id
+ * A pointer to a uint8_t, to be filled in with the port id.
+ *
+ * @return
+ * - 0: Success
+ * - <0: Error code on failure, if the adapter doesn't use a rte_service
+ * function, this function returns -ESRCH.
+ */
+__rte_experimental
+int
+rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index ade1f1182e..cd5dada07f 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -105,6 +105,9 @@ EXPERIMENTAL {
rte_event_eth_rx_adapter_queue_conf_get;
rte_event_eth_rx_adapter_queue_stats_get;
rte_event_eth_rx_adapter_queue_stats_reset;
+
+ # added in 22.03
+ rte_event_eth_rx_adapter_event_port_get;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 4%]
* [PATCH v2] eventdev/rx_adapter: add event port get api
@ 2022-01-22 17:02 4% Naga Harish K S V
0 siblings, 0 replies; 200+ results
From: Naga Harish K S V @ 2022-01-22 17:02 UTC (permalink / raw)
To: jerinjacobk, jay.jayatheerthan; +Cc: dev
This patch introduces a new API for retrieving the event port ID
of the eth Rx adapter.
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
---
doc/guides/rel_notes/release_22_03.rst | 2 ++
lib/eventdev/rte_event_eth_rx_adapter.c | 20 ++++++++++++++++++++
lib/eventdev/rte_event_eth_rx_adapter.h | 20 ++++++++++++++++++++
lib/eventdev/version.map | 3 +++
4 files changed, 45 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..288d94c0e6 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -83,6 +83,8 @@ API Changes
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=======================================================
+* eventdev: Added new API ``rte_event_eth_rx_adapter_event_port_get``,
+ to retrieve event port id of eth rx adapter.
ABI Changes
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index f946137b25..ae1e260c08 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -3123,6 +3123,26 @@ rte_event_eth_rx_adapter_service_id_get(uint8_t id, uint32_t *service_id)
return rx_adapter->service_inited ? 0 : -ESRCH;
}
+int
+rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+ struct event_eth_rx_adapter *rx_adapter;
+
+ if (rxa_memzone_lookup())
+ return -ENOMEM;
+
+ RTE_EVENT_ETH_RX_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+ rx_adapter = rxa_id_to_adapter(id);
+ if (rx_adapter == NULL || event_port_id == NULL)
+ return -EINVAL;
+
+ if (rx_adapter->service_inited)
+ *event_port_id = rx_adapter->event_port_id;
+
+ return rx_adapter->service_inited ? 0 : -ESRCH;
+}
+
int
rte_event_eth_rx_adapter_cb_register(uint8_t id,
uint16_t eth_dev_id,
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.h b/lib/eventdev/rte_event_eth_rx_adapter.h
index 9546d792e9..3608a7b2cf 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.h
+++ b/lib/eventdev/rte_event_eth_rx_adapter.h
@@ -37,6 +37,7 @@
* - rte_event_eth_rx_adapter_queue_conf_get()
* - rte_event_eth_rx_adapter_queue_stats_get()
* - rte_event_eth_rx_adapter_queue_stats_reset()
+ * - rte_event_eth_rx_adapter_event_port_get()
*
* The application creates an ethernet to event adapter using
* rte_event_eth_rx_adapter_create_ext() or rte_event_eth_rx_adapter_create()
@@ -684,6 +685,25 @@ rte_event_eth_rx_adapter_queue_stats_reset(uint8_t id,
uint16_t eth_dev_id,
uint16_t rx_queue_id);
+/**
+ * Retrieve the event port ID of an adapter. If the adapter doesn't use
+ * a rte_service function, this function returns -ESRCH.
+ *
+ * @param id
+ * Adapter identifier.
+ *
+ * @param [out] event_port_id
+ * A pointer to a uint8_t, to be filled in with the port id.
+ *
+ * @return
+ * - 0: Success
+ * - <0: Error code on failure, if the adapter doesn't use a rte_service
+ * function, this function returns -ESRCH.
+ */
+__rte_experimental
+int
+rte_event_eth_rx_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index ade1f1182e..cd5dada07f 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -105,6 +105,9 @@ EXPERIMENTAL {
rte_event_eth_rx_adapter_queue_conf_get;
rte_event_eth_rx_adapter_queue_stats_get;
rte_event_eth_rx_adapter_queue_stats_reset;
+
+ # added in 22.03
+ rte_event_eth_rx_adapter_event_port_get;
};
INTERNAL {
--
2.25.1
^ permalink raw reply [relevance 4%]
* RE: [EXT] [PATCH v2 4/8] crypto/dpaa2_sec: support AES-GMAC
@ 2022-01-21 11:29 3% ` Akhil Goyal
2022-02-08 14:15 0% ` Gagandeep Singh
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2022-01-21 11:29 UTC (permalink / raw)
To: Gagandeep Singh, dev; +Cc: Akhil Goyal
> From: Akhil Goyal <akhil.goyal@nxp.com>
>
> This patch supports AES_GMAC algorithm for DPAA2
> driver.
>
> Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
> Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
> ---
> doc/guides/cryptodevs/features/dpaa2_sec.ini | 1 +
> drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 14 ++++++++-
> drivers/crypto/dpaa2_sec/dpaa2_sec_priv.h | 30 ++++++++++++++++++++
> lib/cryptodev/rte_crypto_sym.h | 4 ++-
> 4 files changed, 47 insertions(+), 2 deletions(-)
This patch should be split in two - the cryptodev change should be a separate patch.
> diff --git a/lib/cryptodev/rte_crypto_sym.h b/lib/cryptodev/rte_crypto_sym.h
> index daa090b978..4644fa3e25 100644
> --- a/lib/cryptodev/rte_crypto_sym.h
> +++ b/lib/cryptodev/rte_crypto_sym.h
> @@ -467,8 +467,10 @@ enum rte_crypto_aead_algorithm {
> /**< AES algorithm in CCM mode. */
> RTE_CRYPTO_AEAD_AES_GCM,
> /**< AES algorithm in GCM mode. */
> - RTE_CRYPTO_AEAD_CHACHA20_POLY1305
> + RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
> /**< Chacha20 cipher with poly1305 authenticator */
> + RTE_CRYPTO_AEAD_AES_GMAC
> + /**< AES algorithm in GMAC mode. */
> };
AES-GMAC is also defined as an auth algo. It may be removed, but that would be
an ABI break.
Is it not possible to use AES-GMAC as an auth algo?
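For reference, expressing AES-GMAC through the existing auth enum looks
roughly like this; the key buffer, the iv_offset variable, and the
AES-128 / 12-byte IV / 16-byte tag sizes are illustrative assumptions, not
taken from this patch:

    struct rte_crypto_sym_xform auth_xform = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .auth = {
            .op = RTE_CRYPTO_AUTH_OP_GENERATE,
            .algo = RTE_CRYPTO_AUTH_AES_GMAC,
            .key = { .data = key, .length = 16 },        /* AES-128 key (assumed) */
            .iv = { .offset = iv_offset, .length = 12 }, /* 96-bit IV */
            .digest_length = 16,                         /* full GMAC tag */
        },
    };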
^ permalink raw reply [relevance 3%]
* Re: [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
2022-01-21 9:27 0% ` Yanling Song
@ 2022-01-21 10:22 0% ` Ferruh Yigit
2022-01-24 5:12 0% ` Hemant Agrawal
2022-02-12 14:01 0% ` Yanling Song
0 siblings, 2 replies; 200+ results
From: Ferruh Yigit @ 2022-01-21 10:22 UTC (permalink / raw)
To: Yanling Song
Cc: dev, yanling.song, yanggan, xuyun, stephen, lihuisong,
Ziyang Xuan, Xiaoyun Wang, Guoyang Zhou, Hemant Agrawal
On 1/21/2022 9:27 AM, Yanling Song wrote:
> On Wed, 19 Jan 2022 16:56:52 +0000
> Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
>> On 12/30/2021 6:08 AM, Yanling Song wrote:
>>> The patchsets introduce the SPNIC driver for Ramaxel's SPNxx series NIC
>>> cards into DPDK 22.03. Ramaxel Memory Technology is a company which
>>> supplies a range of electronic products: storage, communication, PCB...
>>> SPNxxx is a series of PCIe interface NIC cards:
>>> SPN110: 2 PORTs *25G
>>> SPN120: 4 PORTs *25G
>>> SPN130: 2 PORTs *100G
>>>
>>
>> Hi Yanling,
>>
>> As far as I can see hinic (from Huawei) and this spnic driver are
>> alike; what is the relation between the two?
>>
> It is hard to create a brand new driver from scratch, so we referenced
> the hinic driver when developing spnic.
>
That is OK, but given the similarity of the code you may consider
keeping the original code copyright; I didn't investigate at
that level but cc'ed the hinic maintainers for info.
Also cc'ed Hemant for guidance.
But my question was more about the HW: is there any relation between
the hinic HW and the spnic HW? Like one being derived from the other, etc.
>>> The following are the main features of our SPNIC:
>>> - TSO
>>> - LRO
>>> - Flow control
>>> - SR-IOV(Partially supported)
>>> - VLAN offload
>>> - VLAN filter
>>> - CRC offload
>>> - Promiscuous mode
>>> - RSS
>>>
>>> v6->v5, No real changes:
>>> 1. Move the fix of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS from patch 26
>>> to patch 2; 2. Change the description of patch 26.
>>>
>>> v5->v4:
>>> 1. Add prefix "spinc_" for external functions;
>>> 2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
>>> 3. Do not use void* for keeping the type information
>>>
>>> v3->v4:
>>> 1. Fix ABI test failure;
>>> 2. Remove some descriptions in spnic.rst.
>>>
>>> v2->v3:
>>> 1. Fix clang compiling failure.
>>>
>>> v1->v2:
>>> 1. Fix coding style issues and compiling failures;
>>> 2. Only support linux in meson.build;
>>> 3. Use CLOCK_MONOTONIC_COARSE instead of
>>> CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW; 4. Fix time_before();
>>> 5. Remove redundant checks in spnic_dev_configure();
>>>
>>> Yanling Song (26):
>>> drivers/net: introduce a new PMD driver
>>> net/spnic: initialize the HW interface
>>> net/spnic: add mbox message channel
>>> net/spnic: introduce event queue
>>> net/spnic: add mgmt module
>>> net/spnic: add cmdq and work queue
>>> net/spnic: add interface handling cmdq message
>>> net/spnic: add hardware info initialization
>>> net/spnic: support MAC and link event handling
>>> net/spnic: add function info initialization
>>> net/spnic: add queue pairs context initialization
>>> net/spnic: support mbuf handling of Tx/Rx
>>> net/spnic: support Rx configuration
>>> net/spnic: add port/vport enable
>>> net/spnic: support IO packets handling
>>> net/spnic: add device configure/version/info
>>> net/spnic: support RSS configuration update and get
>>> net/spnic: support VLAN filtering and offloading
>>> net/spnic: support promiscuous and allmulticast Rx modes
>>> net/spnic: support flow control
>>> net/spnic: support getting Tx/Rx queues info
>>> net/spnic: support xstats statistics
>>> net/spnic: support VFIO interrupt
>>> net/spnic: support Tx/Rx queue start/stop
>>> net/spnic: add doc infrastructure
>>> net/spnic: fixes unsafe C style code
>>
>> <...>
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
2022-01-19 16:56 0% ` Ferruh Yigit
@ 2022-01-21 9:27 0% ` Yanling Song
2022-01-21 10:22 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Yanling Song @ 2022-01-21 9:27 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, yanling.song, yanggan, xuyun, stephen, lihuisong
On Wed, 19 Jan 2022 16:56:52 +0000
Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> On 12/30/2021 6:08 AM, Yanling Song wrote:
> > The patchsets introduce the SPNIC driver for Ramaxel's SPNxx series NIC
> > cards into DPDK 22.03. Ramaxel Memory Technology is a company which
> > supplies a range of electronic products: storage, communication, PCB...
> > SPNxxx is a series of PCIe interface NIC cards:
> > SPN110: 2 PORTs *25G
> > SPN120: 4 PORTs *25G
> > SPN130: 2 PORTs *100G
> >
>
> Hi Yanling,
>
> As far as I can see hinic (from Huawei) and this spnic driver are
> alike; what is the relation between the two?
>
It is hard to create a brand new driver from scratch, so we referenced
the hinic driver when developing spnic.
> > The following are the main features of our SPNIC:
> > - TSO
> > - LRO
> > - Flow control
> > - SR-IOV(Partially supported)
> > - VLAN offload
> > - VLAN filter
> > - CRC offload
> > - Promiscuous mode
> > - RSS
> >
> > v6->v5, No real changes:
> > 1. Move the fix of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS from patch 26
> > to patch 2; 2. Change the description of patch 26.
> >
> > v5->v4:
> > 1. Add prefix "spinc_" for external functions;
> > 2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
> > 3. Do not use void* for keeping the type information
> >
> > v3->v4:
> > 1. Fix ABI test failure;
> > 2. Remove some descriptions in spnic.rst.
> >
> > v2->v3:
> > 1. Fix clang compiling failure.
> >
> > v1->v2:
> > 1. Fix coding style issues and compiling failures;
> > 2. Only support linux in meson.build;
> > 3. Use CLOCK_MONOTONIC_COARSE instead of
> > CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW; 4. Fix time_before();
> > 5. Remove redundant checks in spnic_dev_configure();
> >
> > Yanling Song (26):
> > drivers/net: introduce a new PMD driver
> > net/spnic: initialize the HW interface
> > net/spnic: add mbox message channel
> > net/spnic: introduce event queue
> > net/spnic: add mgmt module
> > net/spnic: add cmdq and work queue
> > net/spnic: add interface handling cmdq message
> > net/spnic: add hardware info initialization
> > net/spnic: support MAC and link event handling
> > net/spnic: add function info initialization
> > net/spnic: add queue pairs context initialization
> > net/spnic: support mbuf handling of Tx/Rx
> > net/spnic: support Rx configuration
> > net/spnic: add port/vport enable
> > net/spnic: support IO packets handling
> > net/spnic: add device configure/version/info
> > net/spnic: support RSS configuration update and get
> > net/spnic: support VLAN filtering and offloading
> > net/spnic: support promiscuous and allmulticast Rx modes
> > net/spnic: support flow control
> > net/spnic: support getting Tx/Rx queues info
> > net/spnic: support xstats statistics
> > net/spnic: support VFIO interrupt
> > net/spnic: support Tx/Rx queue start/stop
> > net/spnic: add doc infrastructure
> > net/spnic: fixes unsafe C style code
>
> <...>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 1/1] mempool: implement index-based per core cache
2022-01-21 6:01 3% ` Honnappa Nagarahalli
2022-01-21 7:36 4% ` Morten Brørup
@ 2022-01-21 9:12 0% ` Bruce Richardson
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2022-01-21 9:12 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Morten Brørup, Dharmik Thakkar, Olivier Matz,
Andrew Rybchenko, dev, nd, Ruifeng Wang, Beilei Xing
On Fri, Jan 21, 2022 at 06:01:23AM +0000, Honnappa Nagarahalli wrote:
>
> >
> > +CC Beilei as i40e maintainer
> >
> > > From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com] Sent:
> > > Thursday, 13 January 2022 06.37
> > >
> > > Current mempool per core cache implementation stores pointers to
> > > mbufs On 64b architectures, each pointer consumes 8B This patch
> > > replaces it with index-based implementation, where in each buffer is
> > > addressed by (pool base address + index) It reduces the amount of
> > > memory/cache required for per core cache
> > >
> > > L3Fwd performance testing reveals minor improvements in the cache
> > > performance (L1 and L2 misses reduced by 0.60%) with no change in
> > > throughput
> > >
> > > Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com> Reviewed-by:
> > > Ruifeng Wang <ruifeng.wang@arm.com> --- lib/mempool/rte_mempool.h
> > > | 150 +++++++++++++++++++++++++-
> > > lib/mempool/rte_mempool_ops_default.c | 7 ++ 2 files changed, 156
> > > insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > > index 1e7a3c15273c..f2403fbc97a7 100644 ---
> > > a/lib/mempool/rte_mempool.h +++ b/lib/mempool/rte_mempool.h @@ -50,6
> > > +50,10 @@ #include <rte_memcpy.h> #include <rte_common.h>
> > >
> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE +#include <rte_vect.h>
> > > +#endif + #include "rte_mempool_trace_fp.h"
> > >
> > > #ifdef __cplusplus @@ -239,6 +243,9 @@ struct rte_mempool { int32_t
> > > ops_index;
> > >
> > > struct rte_mempool_cache *local_cache; /**< Per-lcore local cache
> > > */ +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE + void
> > > *pool_base_value; /**< Base value to calculate indices */ +#endif
> > >
> > > uint32_t populated_size; /**< Number of populated objects.
> > > */ struct rte_mempool_objhdr_list elt_list; /**< List of objects in
> > > pool */ @@ -1314,7 +1321,22 @@ rte_mempool_cache_flush(struct
> > > rte_mempool_cache *cache, if (cache == NULL || cache->len == 0)
> > > return; rte_mempool_trace_cache_flush(cache, mp); + +#ifdef
> > > RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE + unsigned int i; +
> > > unsigned int cache_len = cache->len; + void
> > > *obj_table[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; + void *base_value =
> > > mp->pool_base_value; + uint32_t *cache_objs = (uint32_t *)
> > > cache->objs;
> >
> > Hi Dharmik and Honnappa,
> >
> > The essence of this patch is based on recasting the type of the objs
> > field in the rte_mempool_cache structure from an array of pointers to
> > an array of uint32_t.
> >
> > However, this effectively breaks the ABI, because the rte_mempool_cache
> > structure is public and part of the API.
> The patch does not change the public structure, the new member is under
> compile time flag, not sure how it breaks the ABI.
>
> >
> > Some drivers [1] even bypass the mempool API and access the
> > rte_mempool_cache structure directly, assuming that the objs array in
> > the cache is an array of pointers. So you cannot recast the fields in
> > the rte_mempool_cache structure the way this patch requires.
> IMO, those drivers are at fault. The mempool cache structure is public
> only because the APIs are inline. We should still maintain modularity and
> not use the members of structures belonging to another library directly.
> A similar effort involving rte_ring was not accepted sometime back [1]
>
> [1]
> http://inbox.dpdk.org/dev/DBAPR08MB5814907968595EE56F5E20A798390@DBAPR08MB5814.eurprd08.prod.outlook.com/
>
> >
> > Although I do consider bypassing an API's accessor functions "spaghetti
> > code", this driver's behavior is formally acceptable as long as the
> > rte_mempool_cache structure is not marked as internal.
> >
> > I really liked your idea of using indexes instead of pointers, so I'm
> > very sorry to shoot it down. :-(
> >
> > [1]: E.g. the Intel i40e PMD,
> > http://code.dpdk.org/dpdk/latest/source/drivers/net/i40e/i40e_rxtx_vec_avx
> > 512.c#L25
> It is possible to throw an error when this feature is enabled in this
> file. Alternatively, this PMD could implement the code for index based
> mempool.
>
Yes, it can implement it, and if this model gets put in mempool it probably
will [even if it's just a fallback to the mempool code in that case].
However, I would object to adding this model to the library right now if it
cannot be proved to show some benefit in a real-world case. As I understand
it, the only benefit seen has been in unit test cases? I want to ensure
that any perf improvements we put in have some real-world
applicability - the amount of applicability will depend on the scope and
impact - and by the same token that we don't reject simplifications or
improvements on the basis that they *might* cause issues, if all perf data
fails to show any problem.
So for this patch, can we get some perf numbers for an app where it does
show the value of it? L3fwd is a very trivial app, and as such is usually
fairly reliable in showing perf benefits of optimizations if they exist.
Perhaps for this case we need something with a bigger cache footprint?
Regards,
/Bruce
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 1/1] mempool: implement index-based per core cache
2022-01-21 6:01 3% ` Honnappa Nagarahalli
@ 2022-01-21 7:36 4% ` Morten Brørup
2022-01-24 13:05 0% ` Ray Kinsella
2022-01-21 9:12 0% ` Bruce Richardson
1 sibling, 1 reply; 200+ results
From: Morten Brørup @ 2022-01-21 7:36 UTC (permalink / raw)
To: Honnappa Nagarahalli, Dharmik Thakkar, Olivier Matz,
Andrew Rybchenko, Ray Kinsella
Cc: dev, nd, Ruifeng Wang, Beilei Xing, nd
+Ray Kinsella, ABI Policy maintainer
> From: Honnappa Nagarahalli [mailto:Honnappa.Nagarahalli@arm.com]
> Sent: Friday, 21 January 2022 07.01
>
> >
> > +CC Beilei as i40e maintainer
> >
> > > From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
> > > Sent: Thursday, 13 January 2022 06.37
> > >
> > > Current mempool per core cache implementation stores pointers to
> mbufs
> > > On 64b architectures, each pointer consumes 8B This patch replaces
> it
> > > with index-based implementation, where in each buffer is addressed
> by
> > > (pool base address + index) It reduces the amount of memory/cache
> > > required for per core cache
> > >
> > > L3Fwd performance testing reveals minor improvements in the cache
> > > performance (L1 and L2 misses reduced by 0.60%) with no change in
> > > throughput
> > >
> > > Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > ---
> > > lib/mempool/rte_mempool.h | 150
> +++++++++++++++++++++++++-
> > > lib/mempool/rte_mempool_ops_default.c | 7 ++
> > > 2 files changed, 156 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > > index 1e7a3c15273c..f2403fbc97a7 100644
> > > --- a/lib/mempool/rte_mempool.h
> > > +++ b/lib/mempool/rte_mempool.h
> > > @@ -50,6 +50,10 @@
> > > #include <rte_memcpy.h>
> > > #include <rte_common.h>
> > >
> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > > +#include <rte_vect.h>
> > > +#endif
> > > +
> > > #include "rte_mempool_trace_fp.h"
> > >
> > > #ifdef __cplusplus
> > > @@ -239,6 +243,9 @@ struct rte_mempool {
> > > int32_t ops_index;
> > >
> > > struct rte_mempool_cache *local_cache; /**< Per-lcore local cache
> */
> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > > + void *pool_base_value; /**< Base value to calculate indices */
> > > +#endif
> > >
> > > uint32_t populated_size; /**< Number of populated
> > > objects. */
> > > struct rte_mempool_objhdr_list elt_list; /**< List of objects in
> > > pool */ @@ -1314,7 +1321,22 @@ rte_mempool_cache_flush(struct
> > > rte_mempool_cache *cache,
> > > if (cache == NULL || cache->len == 0)
> > > return;
> > > rte_mempool_trace_cache_flush(cache, mp);
> > > +
> > > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > > + unsigned int i;
> > > + unsigned int cache_len = cache->len;
> > > + void *obj_table[RTE_MEMPOOL_CACHE_MAX_SIZE * 3];
> > > + void *base_value = mp->pool_base_value;
> > > + uint32_t *cache_objs = (uint32_t *) cache->objs;
> >
> > Hi Dharmik and Honnappa,
> >
> > The essence of this patch is based on recasting the type of the objs
> field in the
> > rte_mempool_cache structure from an array of pointers to an array of
> > uint32_t.
> >
> > However, this effectively breaks the ABI, because the
> rte_mempool_cache
> > structure is public and part of the API.
> The patch does not change the public structure, the new member is under
> compile time flag, not sure how it breaks the ABI.
>
> >
> > Some drivers [1] even bypass the mempool API and access the
> > rte_mempool_cache structure directly, assuming that the objs array in
> the
> > cache is an array of pointers. So you cannot recast the fields in the
> > rte_mempool_cache structure the way this patch requires.
> IMO, those drivers are at fault. The mempool cache structure is public
> only because the APIs are inline. We should still maintain modularity
> and not use the members of structures belonging to another library
> directly. A similar effort involving rte_ring was not accepted sometime
> back [1]
>
> [1]
> http://inbox.dpdk.org/dev/DBAPR08MB5814907968595EE56F5E20A798390@DBAPR0
> 8MB5814.eurprd08.prod.outlook.com/
>
> >
> > Although I do consider bypassing an API's accessor functions
> "spaghetti
> > code", this driver's behavior is formally acceptable as long as the
> > rte_mempool_cache structure is not marked as internal.
> >
> > I really liked your idea of using indexes instead of pointers, so I'm
> very sorry to
> > shoot it down. :-(
> >
> > [1]: E.g. the Intel i40e PMD,
> >
> http://code.dpdk.org/dpdk/latest/source/drivers/net/i40e/i40e_rxtx_vec_
> avx
> > 512.c#L25
> It is possible to throw an error when this feature is enabled in this
> file. Alternatively, this PMD could implement the code for index based
> mempool.
>
I agree with both your points, Honnappa.
The ABI remains intact, and only changes when this feature is enabled at compile time.
In addition to your suggestions, I propose that the patch modifies the objs type in the mempool cache structure itself, instead of type casting it through an access variable. This should throw an error when compiling an application that accesses it as a pointer array instead of a uint32_t array - like the affected Intel PMDs.
The updated objs field in the mempool cache structure should have the same size when compiled as the original objs field, so this feature doesn't change anything else in the ABI, only the type of the mempool cache objects.
Also, the description of the feature should stress that applications accessing the cache objects directly will fail miserably.
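A sketch of that proposal, assuming a 64-bit target so that twice as many
uint32_t entries occupy exactly the bytes the pointer array used
(illustration only, not the actual patch):

    /* Inside struct rte_mempool_cache; same byte size either way. */
    #ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
    uint32_t objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3 * 2]; /* indices */
    #else
    void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3];        /* pointers */
    #endif

With this, code that dereferences cache->objs[i] as a pointer fails to
compile when the index-based mode is enabled, surfacing the affected PMDs
at build time rather than at run time.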
^ permalink raw reply [relevance 4%]
* RE: [PATCH v2 1/1] mempool: implement index-based per core cache
2022-01-20 8:21 3% ` Morten Brørup
@ 2022-01-21 6:01 3% ` Honnappa Nagarahalli
2022-01-21 7:36 4% ` Morten Brørup
2022-01-21 9:12 0% ` Bruce Richardson
0 siblings, 2 replies; 200+ results
From: Honnappa Nagarahalli @ 2022-01-21 6:01 UTC (permalink / raw)
To: Morten Brørup, Dharmik Thakkar, Olivier Matz, Andrew Rybchenko
Cc: dev, nd, Ruifeng Wang, Beilei Xing, nd
>
> +CC Beilei as i40e maintainer
>
> > From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
> > Sent: Thursday, 13 January 2022 06.37
> >
> > Current mempool per core cache implementation stores pointers to mbufs
> > On 64b architectures, each pointer consumes 8B This patch replaces it
> > with index-based implementation, where in each buffer is addressed by
> > (pool base address + index) It reduces the amount of memory/cache
> > required for per core cache
> >
> > L3Fwd performance testing reveals minor improvements in the cache
> > performance (L1 and L2 misses reduced by 0.60%) with no change in
> > throughput
> >
> > Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > ---
> > lib/mempool/rte_mempool.h | 150 +++++++++++++++++++++++++-
> > lib/mempool/rte_mempool_ops_default.c | 7 ++
> > 2 files changed, 156 insertions(+), 1 deletion(-)
> >
> > diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> > index 1e7a3c15273c..f2403fbc97a7 100644
> > --- a/lib/mempool/rte_mempool.h
> > +++ b/lib/mempool/rte_mempool.h
> > @@ -50,6 +50,10 @@
> > #include <rte_memcpy.h>
> > #include <rte_common.h>
> >
> > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > +#include <rte_vect.h>
> > +#endif
> > +
> > #include "rte_mempool_trace_fp.h"
> >
> > #ifdef __cplusplus
> > @@ -239,6 +243,9 @@ struct rte_mempool {
> > int32_t ops_index;
> >
> > struct rte_mempool_cache *local_cache; /**< Per-lcore local cache */
> > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > + void *pool_base_value; /**< Base value to calculate indices */
> > +#endif
> >
> > uint32_t populated_size; /**< Number of populated
> > objects. */
> > struct rte_mempool_objhdr_list elt_list; /**< List of objects in
> > pool */ @@ -1314,7 +1321,22 @@ rte_mempool_cache_flush(struct
> > rte_mempool_cache *cache,
> > if (cache == NULL || cache->len == 0)
> > return;
> > rte_mempool_trace_cache_flush(cache, mp);
> > +
> > +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> > + unsigned int i;
> > + unsigned int cache_len = cache->len;
> > + void *obj_table[RTE_MEMPOOL_CACHE_MAX_SIZE * 3];
> > + void *base_value = mp->pool_base_value;
> > + uint32_t *cache_objs = (uint32_t *) cache->objs;
>
> Hi Dharmik and Honnappa,
>
> The essence of this patch is based on recasting the type of the objs field in the
> rte_mempool_cache structure from an array of pointers to an array of
> uint32_t.
>
> However, this effectively breaks the ABI, because the rte_mempool_cache
> structure is public and part of the API.
The patch does not change the public structure; the new member is under a compile-time flag, so I am not sure how it breaks the ABI.
>
> Some drivers [1] even bypass the mempool API and access the
> rte_mempool_cache structure directly, assuming that the objs array in the
> cache is an array of pointers. So you cannot recast the fields in the
> rte_mempool_cache structure the way this patch requires.
IMO, those drivers are at fault. The mempool cache structure is public only because the APIs are inline. We should still maintain modularity and not use the members of structures belonging to another library directly. A similar effort involving rte_ring was not accepted some time back [1].
[1] http://inbox.dpdk.org/dev/DBAPR08MB5814907968595EE56F5E20A798390@DBAPR08MB5814.eurprd08.prod.outlook.com/
>
> Although I do consider bypassing an API's accessor functions "spaghetti
> code", this driver's behavior is formally acceptable as long as the
> rte_mempool_cache structure is not marked as internal.
>
> I really liked your idea of using indexes instead of pointers, so I'm very sorry to
> shoot it down. :-(
>
> [1]: E.g. the Intel i40e PMD,
> http://code.dpdk.org/dpdk/latest/source/drivers/net/i40e/i40e_rxtx_vec_avx
> 512.c#L25
It is possible to throw an error when this feature is enabled in this file. Alternatively, this PMD could implement the code for an index-based mempool.
>
> -Morten
^ permalink raw reply [relevance 3%]
* RE: [EXT] Re: [PATCH v2 1/4] ethdev: introduce IP reassembly offload
2022-01-20 16:45 3% ` Stephen Hemminger
@ 2022-01-20 17:11 0% ` Akhil Goyal
0 siblings, 0 replies; 200+ results
From: Akhil Goyal @ 2022-01-20 17:11 UTC (permalink / raw)
To: Stephen Hemminger
Cc: dev, Anoob Joseph, radu.nicolau, declan.doherty, hemant.agrawal,
matan, konstantin.ananyev, thomas, ferruh.yigit,
andrew.rybchenko, olivier.matz, rosen.xu,
Jerin Jacob Kollanukkaran
> On Thu, 20 Jan 2022 21:56:24 +0530
> Akhil Goyal <gakhil@marvell.com> wrote:
>
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this structure may change without prior notice.
> > + *
> > + * A structure used to set IP reassembly configuration.
> > + *
> > + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> > + * the PMD will attempt IP reassembly for the received packets as per
> > + * properties defined in this structure:
> > + *
> > + */
> > +struct rte_eth_ip_reass_params {
> > + /** Maximum time in ms which PMD can wait for other fragments. */
> > + uint32_t reass_timeout;
> > + /** Maximum number of fragments that can be reassembled. */
> > + uint16_t max_frags;
> > + /**
> > + * Flags to enable reassembly of packet types -
> > + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> > + */
> > + uint16_t flags;
> > +};
> > +
>
> Actually, this is not experimental. You are embedding this in dev_info
> and dev_info is not experimental; therefore the reassembly parameters
> can never change without breaking ABI of dev_info.
Agreed, will remove the experimental tag from this struct.
^ permalink raw reply [relevance 0%]
* Re: [PATCH v2 1/4] ethdev: introduce IP reassembly offload
@ 2022-01-20 16:45 3% ` Stephen Hemminger
2022-01-20 17:11 0% ` [EXT] " Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2022-01-20 16:45 UTC (permalink / raw)
To: Akhil Goyal
Cc: dev, anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj
On Thu, 20 Jan 2022 21:56:24 +0530
Akhil Goyal <gakhil@marvell.com> wrote:
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this structure may change without prior notice.
> + *
> + * A structure used to set IP reassembly configuration.
> + *
> + * If RTE_ETH_RX_OFFLOAD_IP_REASSEMBLY flag is set in offloads field,
> + * the PMD will attempt IP reassembly for the received packets as per
> + * properties defined in this structure:
> + *
> + */
> +struct rte_eth_ip_reass_params {
> + /** Maximum time in ms which PMD can wait for other fragments. */
> + uint32_t reass_timeout;
> + /** Maximum number of fragments that can be reassembled. */
> + uint16_t max_frags;
> + /**
> + * Flags to enable reassembly of packet types -
> + * RTE_ETH_DEV_REASSEMBLY_F_xxx.
> + */
> + uint16_t flags;
> +};
> +
Actually, this is not experimental. You are embedding this in dev_info
and dev_info is not experimental; therefore the reassembly parameters
can never change without breaking the ABI of dev_info.
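For what it's worth, the expected application-side use of the structure is
small; a sketch follows, in which the setter name and the
RTE_ETH_DEV_REASSEMBLY_F_IPV4 flag value are assumptions for illustration
(the quoted patch only defines the parameter structure):

    struct rte_eth_ip_reass_params conf = {
        .reass_timeout = 100, /* wait up to 100 ms for missing fragments */
        .max_frags = 4,       /* reassemble at most 4 fragments */
        .flags = RTE_ETH_DEV_REASSEMBLY_F_IPV4, /* flag name assumed */
    };
    /* Setter name assumed; the series adds a set/get dev op for this. */
    int ret = rte_eth_ip_reassembly_conf_set(port_id, &conf);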
^ permalink raw reply [relevance 3%]
* [PATCH v2 0/4] ethdev: introduce IP reassembly offload
@ 2022-01-20 16:26 4% ` Akhil Goyal
2022-01-30 17:59 4% ` [PATCH v3 0/4] " Akhil Goyal
0 siblings, 2 replies; 200+ results
From: Akhil Goyal @ 2022-01-20 16:26 UTC (permalink / raw)
To: dev
Cc: anoobj, radu.nicolau, declan.doherty, hemant.agrawal, matan,
konstantin.ananyev, thomas, ferruh.yigit, andrew.rybchenko,
olivier.matz, rosen.xu, jerinj, Akhil Goyal
As discussed in the RFC[1] sent in 21.11, a new offload is
introduced in ethdev for IP reassembly.
This patchset adds the IP reassembly Rx offload.
Currently, the offload is tested along with inline IPsec processing.
It can also be updated to work as a standalone offload without IPsec, if there
is hardware available to test it.
The patchset is tested on the cnxk platform. The driver implementation
and a test app are added as separate patchsets.
[1]: http://patches.dpdk.org/project/dpdk/patch/20210823100259.1619886-1-gakhil@marvell.com/
changes in v2:
- added ABI ignore exceptions for modifications in reserved fields.
Added a crude way to work around the rte_security and rte_ipsec ABI issue.
Please suggest a better way.
- incorporated Konstantin's comment for extra checks in new API
introduced.
- converted static mbuf ol_flag to mbuf dynflag (Konstantin); see the sketch below
- added a get API for reassembly configuration (Konstantin)
- Fixed checkpatch issues.
- Dynfield is NOT split into 2 parts as it would cause an extra fetch in
case of IP reassembly failure.
- Application patches are split into a separate series.
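Regarding the dynflag item above, a sketch of how an application would test
the resulting per-packet flag; the registered flag name is an assumption,
since the series itself defines the real one:

    /* Once at init: resolve the dynamic flag bit by name (name assumed). */
    int reass_incomplete_bit =
        rte_mbuf_dynflag_lookup("rte_eth_ip_reass_incomplete", NULL);
    if (reass_incomplete_bit < 0)
        return; /* flag not registered: offload not enabled */
    uint64_t reass_incomplete_mask = RTE_BIT64(reass_incomplete_bit);

    /* Per packet: check whether reassembly completed. */
    if (mb->ol_flags & reass_incomplete_mask) {
        /* Fragments were received but could not be merged. */
    }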
Akhil Goyal (4):
ethdev: introduce IP reassembly offload
ethdev: add dev op to set/get IP reassembly configuration
ethdev: add mbuf dynfield for incomplete IP reassembly
security: add IPsec option for IP reassembly
devtools/libabigail.abignore | 19 ++++++
doc/guides/nics/features.rst | 11 ++++
lib/ethdev/ethdev_driver.h | 45 ++++++++++++++
lib/ethdev/rte_ethdev.c | 110 +++++++++++++++++++++++++++++++++++
lib/ethdev/rte_ethdev.h | 104 ++++++++++++++++++++++++++++++++-
lib/ethdev/version.map | 5 ++
lib/security/rte_security.h | 12 +++-
7 files changed, 304 insertions(+), 2 deletions(-)
--
2.25.1
^ permalink raw reply [relevance 4%]
* Re: [PATCH v2] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-17 23:23 4% ` [PATCH v2] " Michael Barker
@ 2022-01-20 14:16 0% ` Thomas Monjalon
2022-01-23 21:17 0% ` Michael Barker
2022-01-23 21:07 8% ` [PATCH v3] " Michael Barker
1 sibling, 1 reply; 200+ results
From: Thomas Monjalon @ 2022-01-20 14:16 UTC (permalink / raw)
To: Michael Barker; +Cc: dev, Ray Kinsella
18/01/2022 00:23, Michael Barker:
> When using clang with -Wall the use of diagnose_if kicks up a warning,
Please could you copy the warning in the commit log?
> requiring all dpdk includes to be wrapped with the pragma. This change
> isolates the ignore to just the appropriate location and makes it easier
> for users to apply -Wall,-Werror
Please could you explain how it is related to -Wgcc-compat?
[...]
> #define __rte_internal \
> +_Pragma("GCC diagnostic push") \
> +_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
> __attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
> -section(".text.internal")))
> +section(".text.internal"))) \
> +_Pragma("GCC diagnostic pop")
^ permalink raw reply [relevance 0%]
* RE: [PATCH v2 1/1] mempool: implement index-based per core cache
@ 2022-01-20 8:21 3% ` Morten Brørup
2022-01-21 6:01 3% ` Honnappa Nagarahalli
0 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2022-01-20 8:21 UTC (permalink / raw)
To: Dharmik Thakkar, honnappa.nagarahalli, Olivier Matz, Andrew Rybchenko
Cc: dev, nd, ruifeng.wang, Beilei Xing
+CC Beilei as i40e maintainer
> From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
> Sent: Thursday, 13 January 2022 06.37
>
> Current mempool per core cache implementation stores pointers to mbufs
> On 64b architectures, each pointer consumes 8B
> This patch replaces it with index-based implementation,
> where in each buffer is addressed by (pool base address + index)
> It reduces the amount of memory/cache required for per core cache
>
> L3Fwd performance testing reveals minor improvements in the cache
> performance (L1 and L2 misses reduced by 0.60%)
> with no change in throughput
>
> Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---
> lib/mempool/rte_mempool.h | 150 +++++++++++++++++++++++++-
> lib/mempool/rte_mempool_ops_default.c | 7 ++
> 2 files changed, 156 insertions(+), 1 deletion(-)
>
> diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
> index 1e7a3c15273c..f2403fbc97a7 100644
> --- a/lib/mempool/rte_mempool.h
> +++ b/lib/mempool/rte_mempool.h
> @@ -50,6 +50,10 @@
> #include <rte_memcpy.h>
> #include <rte_common.h>
>
> +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> +#include <rte_vect.h>
> +#endif
> +
> #include "rte_mempool_trace_fp.h"
>
> #ifdef __cplusplus
> @@ -239,6 +243,9 @@ struct rte_mempool {
> int32_t ops_index;
>
> struct rte_mempool_cache *local_cache; /**< Per-lcore local cache
> */
> +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> + void *pool_base_value; /**< Base value to calculate indices */
> +#endif
>
> uint32_t populated_size; /**< Number of populated
> objects. */
> struct rte_mempool_objhdr_list elt_list; /**< List of objects in
> pool */
> @@ -1314,7 +1321,22 @@ rte_mempool_cache_flush(struct rte_mempool_cache
> *cache,
> if (cache == NULL || cache->len == 0)
> return;
> rte_mempool_trace_cache_flush(cache, mp);
> +
> +#ifdef RTE_MEMPOOL_INDEX_BASED_LCORE_CACHE
> + unsigned int i;
> + unsigned int cache_len = cache->len;
> + void *obj_table[RTE_MEMPOOL_CACHE_MAX_SIZE * 3];
> + void *base_value = mp->pool_base_value;
> + uint32_t *cache_objs = (uint32_t *) cache->objs;
Hi Dharmik and Honnappa,
The essence of this patch is based on recasting the type of the objs field in the rte_mempool_cache structure from an array of pointers to an array of uint32_t.
However, this effectively breaks the ABI, because the rte_mempool_cache structure is public and part of the API.
Some drivers [1] even bypass the mempool API and access the rte_mempool_cache structure directly, assuming that the objs array in the cache is an array of pointers. So you cannot recast the fields in the rte_mempool_cache structure the way this patch requires.
Although I do consider bypassing an API's accessor functions "spaghetti code", this driver's behavior is formally acceptable as long as the rte_mempool_cache structure is not marked as internal.
I really liked your idea of using indexes instead of pointers, so I'm very sorry to shoot it down. :-(
[1]: E.g. the Intel i40e PMD, http://code.dpdk.org/dpdk/latest/source/drivers/net/i40e/i40e_rxtx_vec_avx512.c#L25
-Morten
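For readers skimming the thread, the idea under discussion reduces to this
address arithmetic (a sketch, assuming all objects of the pool live in one
contiguous area starting at pool_base_value):

    /* Encode: store a 32-bit offset instead of an 8-byte pointer. */
    uint32_t idx = (uint32_t)((uintptr_t)obj -
                              (uintptr_t)mp->pool_base_value);

    /* Decode: rebuild the pointer from base + offset. */
    void *obj2 = (void *)((uintptr_t)mp->pool_base_value + idx);

Halving each cache entry is where the memory/cache saving comes from, and
recasting the objs array to make this work is exactly the ABI concern
raised above.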
^ permalink raw reply [relevance 3%]
* Re: [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
2021-12-30 6:08 2% [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
@ 2022-01-19 16:56 0% ` Ferruh Yigit
2022-01-21 9:27 0% ` Yanling Song
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2022-01-19 16:56 UTC (permalink / raw)
To: Yanling Song, dev; +Cc: yanling.song, yanggan, xuyun, stephen, lihuisong
On 12/30/2021 6:08 AM, Yanling Song wrote:
> The patchsets introduce the SPNIC driver for Ramaxel's SPNxx series NIC cards into DPDK 22.03.
> Ramaxel Memory Technology is a company which supplies a range of electronic products:
> storage, communication, PCB...
> SPNxxx is a series of PCIe interface NIC cards:
> SPN110: 2 PORTs *25G
> SPN120: 4 PORTs *25G
> SPN130: 2 PORTs *100G
>
Hi Yanling,
As far as I can see hinic (from Huawei) and this spnic driver are alike;
what is the relation between the two?
> The following are the main features of our SPNIC:
> - TSO
> - LRO
> - Flow control
> - SR-IOV(Partially supported)
> - VLAN offload
> - VLAN filter
> - CRC offload
> - Promiscuous mode
> - RSS
>
> v6->v5, No real changes:
> 1. Move the fix of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS from patch 26 to patch 2;
> 2. Change the description of patch 26.
>
> v5->v4:
> 1. Add prefix "spinc_" for external functions;
> 2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
> 3. Do not use void* for keeping the type information
>
> v3->v4:
> 1. Fix ABI test failure;
> 2. Remove some descriptions in spnic.rst.
>
> v2->v3:
> 1. Fix clang compiling failure.
>
> v1->v2:
> 1. Fix coding style issues and compiling failures;
> 2. Only support linux in meson.build;
> 3. Use CLOCK_MONOTONIC_COARSE instead of CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW;
> 4. Fix time_before();
> 5. Remove redundant checks in spnic_dev_configure();
>
> Yanling Song (26):
> drivers/net: introduce a new PMD driver
> net/spnic: initialize the HW interface
> net/spnic: add mbox message channel
> net/spnic: introduce event queue
> net/spnic: add mgmt module
> net/spnic: add cmdq and work queue
> net/spnic: add interface handling cmdq message
> net/spnic: add hardware info initialization
> net/spnic: support MAC and link event handling
> net/spnic: add function info initialization
> net/spnic: add queue pairs context initialization
> net/spnic: support mbuf handling of Tx/Rx
> net/spnic: support Rx configuration
> net/spnic: add port/vport enable
> net/spnic: support IO packets handling
> net/spnic: add device configure/version/info
> net/spnic: support RSS configuration update and get
> net/spnic: support VLAN filtering and offloading
> net/spnic: support promiscuous and allmulticast Rx modes
> net/spnic: support flow control
> net/spnic: support getting Tx/Rx queues info
> net/spnic: support xstats statistics
> net/spnic: support VFIO interrupt
> net/spnic: support Tx/Rx queue start/stop
> net/spnic: add doc infrastructure
> net/spnic: fixes unsafe C style code
<...>
^ permalink raw reply [relevance 0%]
* [PATCH v3] mempool: fix put objects to mempool with cache
2022-01-19 14:52 3% ` [PATCH v2] mempool: fix put objects to mempool with cache Morten Brørup
@ 2022-01-19 15:03 3% ` Morten Brørup
2022-01-24 15:39 3% ` Olivier Matz
2022-02-02 10:33 3% ` [PATCH v4] mempool: fix mempool cache flushing algorithm Morten Brørup
2 siblings, 1 reply; 200+ results
From: Morten Brørup @ 2022-01-19 15:03 UTC (permalink / raw)
To: olivier.matz, andrew.rybchenko
Cc: bruce.richardson, jerinjacobk, dev, Morten Brørup
This patch optimizes the rte_mempool_do_generic_put() caching algorithm,
and fixes a bug in it.
The existing algorithm was:
1. Add the objects to the cache
2. Anything greater than the cache size (if it crosses the cache flush
threshold) is flushed to the ring.
Please note that the description in the source code said that it kept
"cache min value" objects after flushing, but the function actually kept
"size" objects, which is reflected in the above description.
Now, the algorithm is:
1. If the objects cannot be added to the cache without crossing the
flush threshold, flush the cache to the ring.
2. Add the objects to the cache.
This patch changes these details:
1. Bug: The cache was still full after flushing.
In the opposite direction, i.e. when getting objects from the cache, the
cache is refilled to full level when it crosses the low watermark (which
happens to be zero).
Similarly, the cache should be flushed to empty level when it crosses
the high watermark (which happens to be 1.5 x the size of the cache).
The existing flushing behaviour was suboptimal for real applications,
because crossing the low or high watermark typically happens when the
application is in a state where the number of put/get events are out of
balance, e.g. when absorbing a burst of packets into a QoS queue
(getting more mbufs from the mempool), or when a burst of packets is
trickling out from the QoS queue (putting the mbufs back into the
mempool).
NB: When the application is in a state where put/get events are in
balance, the cache should remain within its low and high watermarks, and
the algorithms for refilling/flushing the cache should not come into
play.
Now, the mempool cache is completely flushed when crossing the flush
threshold, so only the newly put (hot) objects remain in the mempool
cache afterwards.
2. Minor bug: The flush threshold comparison has been corrected; it must
be "len > flushthresh", not "len >= flushthresh".
Reasoning: Consider a flush multiplier of 1 instead of 1.5; the cache
would be flushed already when reaching size elements, not when exceeding
size elements.
Now, flushing is triggered when the flush threshold is exceeded, not
when reached.
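As a worked example, using the 1.5 flush multiplier mentioned above: with
size = 256, flushthresh = 384. With "len >= flushthresh" the cache is
flushed as soon as it holds 384 objects; with "len > flushthresh" it is
flushed only at 385, i.e. when the threshold is genuinely exceeded.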
3. Optimization: The most recent (hot) objects are flushed, leaving the
oldest (cold) objects in the mempool cache.
This is bad for CPUs with a small L1 cache, because when they get
objects from the mempool after the mempool cache has been flushed, they
get cold objects instead of hot objects.
Now, the existing (cold) objects in the mempool cache are flushed before
the new (hot) objects are added to the mempool cache.
4. Optimization: Using the x86 variant of rte_memcpy() is inefficient
here, where n is relatively small and unknown at compile time.
Now, it has been replaced by an alternative copying method, optimized
for the fact that most Ethernet PMDs operate in bursts of 4 or 8 mbufs
or multiples thereof.
v2 changes:
- Not adding the new objects to the mempool cache before flushing it
also allows the memory allocated for the mempool cache to be reduced
from 3 x to 2 x RTE_MEMPOOL_CACHE_MAX_SIZE.
However, such a change would break the ABI, so it was removed in v2.
- The mempool cache should be cache line aligned for the benefit of the
copying method, which on some CPU architectures performs worse on data
crossing a cache boundary.
However, such a change would break the ABI, so it was removed in v2;
and yet another alternative copying method replaced the rte_memcpy().
v3 changes:
- Actually remove my modifications of the rte_mempool_cache structure.
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/mempool/rte_mempool.h | 51 +++++++++++++++++++++++++++++----------
1 file changed, 38 insertions(+), 13 deletions(-)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1e7a3c1527..7b364cfc74 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -1334,6 +1334,7 @@ static __rte_always_inline void
rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
unsigned int n, struct rte_mempool_cache *cache)
{
+ uint32_t index;
void **cache_objs;
/* increment stat now, adding in mempool always success */
@@ -1344,31 +1345,56 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
goto ring_enqueue;
- cache_objs = &cache->objs[cache->len];
+ /* If the request itself is too big for the cache */
+ if (unlikely(n > cache->flushthresh))
+ goto ring_enqueue;
/*
* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
+ * 1. If the objects cannot be added to the cache without
+ * crossing the flush threshold, flush the cache to the ring.
+ * 2. Add the objects to the cache.
*/
- /* Add elements back into the cache */
- rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
+ if (cache->len + n <= cache->flushthresh) {
+ cache_objs = &cache->objs[cache->len];
- cache->len += n;
+ cache->len += n;
+ } else {
+ cache_objs = cache->objs;
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ if (rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len) < 0)
+ rte_panic("cannot put objects in mempool\n");
+#else
+ rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
+#endif
+ cache->len = n;
+ }
+
+ /* Add the objects to the cache. */
+ for (index = 0; index < (n & ~0x3); index += 4) {
+ cache_objs[index] = obj_table[index];
+ cache_objs[index + 1] = obj_table[index + 1];
+ cache_objs[index + 2] = obj_table[index + 2];
+ cache_objs[index + 3] = obj_table[index + 3];
+ }
+ switch (n & 0x3) {
+ case 3:
+ cache_objs[index] = obj_table[index];
+ index++; /* fallthrough */
+ case 2:
+ cache_objs[index] = obj_table[index];
+ index++; /* fallthrough */
+ case 1:
+ cache_objs[index] = obj_table[index];
}
return;
ring_enqueue:
- /* push remaining objects in ring */
+ /* Put the objects into the ring */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
rte_panic("cannot put objects in mempool\n");
@@ -1377,7 +1403,6 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
#endif
}
-
/**
* Put several objects back in the mempool.
*
--
2.17.1
^ permalink raw reply [relevance 3%]
* [PATCH v2] mempool: fix put objects to mempool with cache
@ 2022-01-19 14:52 3% ` Morten Brørup
2022-01-19 15:03 3% ` [PATCH v3] " Morten Brørup
2022-02-02 10:33 3% ` [PATCH v4] mempool: fix mempool cache flushing algorithm Morten Brørup
2 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2022-01-19 14:52 UTC (permalink / raw)
To: olivier.matz, andrew.rybchenko
Cc: bruce.richardson, jerinjacobk, dev, Morten Brørup
This patch optimizes the rte_mempool_do_generic_put() caching algorithm,
and fixes a bug in it.
The existing algorithm was:
1. Add the objects to the cache
2. Anything greater than the cache size (if it crosses the cache flush
threshold) is flushed to the ring.
Please note that the description in the source code said that it kept
"cache min value" objects after flushing, but the function actually kept
"size" objects, which is reflected in the above description.
Now, the algorithm is:
1. If the objects cannot be added to the cache without crossing the
flush threshold, flush the cache to the ring.
2. Add the objects to the cache.
This patch changes these details:
1. Bug: The cache was still full after flushing.
In the opposite direction, i.e. when getting objects from the cache, the
cache is refilled to full level when it crosses the low watermark (which
happens to be zero).
Similarly, the cache should be flushed to empty level when it crosses
the high watermark (which happens to be 1.5 x the size of the cache).
The existing flushing behaviour was suboptimal for real applications,
because crossing the low or high watermark typically happens when the
application is in a state where the number of put/get events are out of
balance, e.g. when absorbing a burst of packets into a QoS queue
(getting more mbufs from the mempool), or when a burst of packets is
trickling out from the QoS queue (putting the mbufs back into the
mempool).
NB: When the application is in a state where put/get events are in
balance, the cache should remain within its low and high watermarks, and
the algorithms for refilling/flushing the cache should not come into
play.
Now, the mempool cache is completely flushed when crossing the flush
threshold, so only the newly put (hot) objects remain in the mempool
cache afterwards.
2. Minor bug: The flush threshold comparison has been corrected; it must
be "len > flushthresh", not "len >= flushthresh".
Reasoning: Consider a flush multiplier of 1 instead of 1.5; the cache
would be flushed already when reaching size elements, not when exceeding
size elements.
Now, flushing is triggered when the flush threshold is exceeded, not
when reached.
3. Optimization: The most recent (hot) objects are flushed, leaving the
oldest (cold) objects in the mempool cache.
This is bad for CPUs with a small L1 cache, because when they get
objects from the mempool after the mempool cache has been flushed, they
get cold objects instead of hot objects.
Now, the existing (cold) objects in the mempool cache are flushed before
the new (hot) objects are added to the mempool cache.
4. Optimization: Using the x86 variant of rte_memcpy() is inefficient
here, where n is relatively small and unknown at compile time.
Now, it has been replaced by an alternative copying method, optimized
for the fact that most Ethernet PMDs operate in bursts of 4 or 8 mbufs
or multiples thereof.
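For readers skimming the thread, here is the replacement copy loop pulled out
of the diff below as a standalone sketch (the helper name copy_objs is mine,
not part of the patch):

static inline void
copy_objs(void **cache_objs, void * const *obj_table, unsigned int n)
{
	unsigned int index;

	/* Copy four pointers per iteration, matching typical PMD burst sizes. */
	for (index = 0; index < (n & ~0x3); index += 4) {
		cache_objs[index] = obj_table[index];
		cache_objs[index + 1] = obj_table[index + 1];
		cache_objs[index + 2] = obj_table[index + 2];
		cache_objs[index + 3] = obj_table[index + 3];
	}
	/* Copy the 0-3 remaining pointers. */
	switch (n & 0x3) {
	case 3:
		cache_objs[index] = obj_table[index];
		index++; /* fallthrough */
	case 2:
		cache_objs[index] = obj_table[index];
		index++; /* fallthrough */
	case 1:
		cache_objs[index] = obj_table[index];
	}
}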
v2 changes:
- Not adding the new objects to the mempool cache before flushing it
also allows the memory allocated for the mempool cache to be reduced
from 3 x to 2 x RTE_MEMPOOL_CACHE_MAX_SIZE.
However, such a change would break the ABI, so it was removed in v2.
- The mempool cache should be cache line aligned for the benefit of the
copying method, which on some CPU architectures performs worse on data
crossing a cache boundary.
However, such a change would break the ABI, so it was removed in v2;
instead, an alternative copying method replaced the rte_memcpy().
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
---
lib/mempool/rte_mempool.h | 54 +++++++++++++++++++++++++++++----------
1 file changed, 40 insertions(+), 14 deletions(-)
diff --git a/lib/mempool/rte_mempool.h b/lib/mempool/rte_mempool.h
index 1e7a3c1527..8a7067ee5b 100644
--- a/lib/mempool/rte_mempool.h
+++ b/lib/mempool/rte_mempool.h
@@ -94,7 +94,8 @@ struct rte_mempool_cache {
* Cache is allocated to this size to allow it to overflow in certain
* cases to avoid needless emptying of cache.
*/
- void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 3]; /**< Cache objects */
+ void *objs[RTE_MEMPOOL_CACHE_MAX_SIZE * 2] __rte_cache_aligned;
+ /**< Cache objects */
} __rte_cache_aligned;
/**
@@ -1334,6 +1335,7 @@ static __rte_always_inline void
rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
unsigned int n, struct rte_mempool_cache *cache)
{
+ uint32_t index;
void **cache_objs;
/* increment stat now, adding in mempool always success */
@@ -1344,31 +1346,56 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
if (unlikely(cache == NULL || n > RTE_MEMPOOL_CACHE_MAX_SIZE))
goto ring_enqueue;
- cache_objs = &cache->objs[cache->len];
+ /* If the request itself is too big for the cache */
+ if (unlikely(n > cache->flushthresh))
+ goto ring_enqueue;
/*
* The cache follows the following algorithm
- * 1. Add the objects to the cache
- * 2. Anything greater than the cache min value (if it crosses the
- * cache flush threshold) is flushed to the ring.
+ * 1. If the objects cannot be added to the cache without
+ * crossing the flush threshold, flush the cache to the ring.
+ * 2. Add the objects to the cache.
*/
- /* Add elements back into the cache */
- rte_memcpy(&cache_objs[0], obj_table, sizeof(void *) * n);
+ if (cache->len + n <= cache->flushthresh) {
+ cache_objs = &cache->objs[cache->len];
- cache->len += n;
+ cache->len += n;
+ } else {
+ cache_objs = cache->objs;
- if (cache->len >= cache->flushthresh) {
- rte_mempool_ops_enqueue_bulk(mp, &cache->objs[cache->size],
- cache->len - cache->size);
- cache->len = cache->size;
+#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
+ if (rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len) < 0)
+ rte_panic("cannot put objects in mempool\n");
+#else
+ rte_mempool_ops_enqueue_bulk(mp, cache_objs, cache->len);
+#endif
+ cache->len = n;
+ }
+
+ /* Add the objects to the cache. */
+ for (index = 0; index < (n & ~0x3); index += 4) {
+ cache_objs[index] = obj_table[index];
+ cache_objs[index + 1] = obj_table[index + 1];
+ cache_objs[index + 2] = obj_table[index + 2];
+ cache_objs[index + 3] = obj_table[index + 3];
+ }
+ switch (n & 0x3) {
+ case 3:
+ cache_objs[index] = obj_table[index];
+ index++; /* fallthrough */
+ case 2:
+ cache_objs[index] = obj_table[index];
+ index++; /* fallthrough */
+ case 1:
+ cache_objs[index] = obj_table[index];
}
return;
ring_enqueue:
- /* push remaining objects in ring */
+ /* Put the objects into the ring */
#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
if (rte_mempool_ops_enqueue_bulk(mp, obj_table, n) < 0)
rte_panic("cannot put objects in mempool\n");
@@ -1377,7 +1404,6 @@ rte_mempool_do_generic_put(struct rte_mempool *mp, void * const *obj_table,
#endif
}
-
/**
* Put several objects back in the mempool.
*
--
2.17.1
^ permalink raw reply [relevance 3%]
* RE: [RFC 1/3] ethdev: support GRE optional fields
@ 2022-01-19 10:56 4% ` Ori Kam
2022-01-25 9:49 0% ` Sean Zhang (Networking SW)
0 siblings, 1 reply; 200+ results
From: Ori Kam @ 2022-01-19 10:56 UTC (permalink / raw)
To: NBU-Contact-Thomas Monjalon (EXTERNAL),
Sean Zhang (Networking SW),
Matan Azrad, Ferruh Yigit
Cc: Andrew Rybchenko, dev
Hi,
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Subject: Re: [RFC 1/3] ethdev: support GRE optional fields
>
> 19/01/2022 10:53, Ferruh Yigit:
> > On 12/30/2021 3:08 AM, Sean Zhang wrote:
> > > --- a/lib/ethdev/rte_flow.h
> > > +++ b/lib/ethdev/rte_flow.h
> > > /**
> > > + * RTE_FLOW_ITEM_TYPE_GRE_OPTION.
> > > + *
> > > + * Matches GRE optional fields in header.
> > > + */
> > > +struct rte_gre_hdr_option {
> > > + rte_be16_t checksum;
> > > + rte_be32_t key;
> > > + rte_be32_t sequence;
> > > +};
> > > +
> >
> > Hi Ori, Andrew,
> >
> > The decision was to have protocol structs in the net library and flow structs
> > use from there, wasn't it?
> > (Btw, a deprecation notice is still pending to clear some existing ones)
> >
> > So for the GRE optional fields, what about having a struct in the 'rte_gre.h'?
> > (Also perhaps an GRE extended protocol header can be defined combining
> > 'rte_gre_hdr' and optional fields struct.)
> > Later flow API struct can embed that struct.
>
> +1 for using librte_net.
> This addition in rte_flow looks to be a mistake.
> Please fix the next version.
>
Nice idea,
but my main concern is that the item should match how the header is defined.
Since some of the fields are optional, this will look something like this:
struct gre_hdr_option_checksum {
	rte_be16_t checksum;
};

struct gre_hdr_option_key {
	rte_be32_t key;
};

struct gre_hdr_option_sequence {
	rte_be32_t sequence;
};
I don't want to have so many rte_flow_items.
As more and more protocols have optional data, it doesn't make sense to create a new item for each.
Ideally, I would like the optional fields to be part of the original item.
For example, in testpmd I would like to write:
eth / ipv4 / udp / gre flags is key & checksum checksum is yyy key is xxx / end
and not:
eth / ipv4 / udp / gre flags is key & checksum / gre_option checksum is yyy key is xxx / end
This means that the structure will look like this:
struct rte_flow_item_gre {
	union {
		struct {
			/**
			 * Checksum (1b), reserved 0 (12b), version (3b).
			 * Refer to RFC 2784.
			 */
			rte_be16_t c_rsvd0_ver;
			rte_be16_t protocol; /**< Protocol type. */
		};
		struct rte_gre_hdr hdr;
	};
	rte_be16_t checksum;
	rte_be32_t key;
	rte_be32_t sequence;
};
The main issue with this is that it breaks the ABI.
Maybe to solve this we can create a new structure, gre_ext?
In any case, I think we should consider how to allow adding members to
structures without breaking the ABI.
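One possible shape for such an extended structure, purely as a sketch (the
gre_ext name comes from the question above, but the layout here is
hypothetical):

struct rte_flow_item_gre_ext {
	struct rte_gre_hdr hdr;  /* mandatory fields */
	rte_be16_t checksum;     /* valid if the C bit is set */
	rte_be32_t key;          /* valid if the K bit is set */
	rte_be32_t sequence;     /* valid if the S bit is set */
};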
Best,
Ori
^ permalink raw reply [relevance 4%]
* RE: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
2022-01-17 5:39 0% ` Hu, Jiayu
@ 2022-01-19 2:18 0% ` Xia, Chenbo
0 siblings, 0 replies; 200+ results
From: Xia, Chenbo @ 2022-01-19 2:18 UTC (permalink / raw)
To: Hu, Jiayu, dev
Cc: maxime.coquelin, i.maximets, Richardson, Bruce, Van Haaren,
Harry, Pai G, Sunil, Mcnamara, John, Ding, Xuan, Jiang, Cheng1,
liangma
> -----Original Message-----
> From: Hu, Jiayu <jiayu.hu@intel.com>
> Sent: Monday, January 17, 2022 1:40 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>; dev@dpdk.org
> Cc: maxime.coquelin@redhat.com; i.maximets@ovn.org; Richardson, Bruce
> <bruce.richardson@intel.com>; Van Haaren, Harry <harry.van.haaren@intel.com>;
> Pai G, Sunil <sunil.pai.g@intel.com>; Mcnamara, John <john.mcnamara@intel.com>;
> Ding, Xuan <xuan.ding@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>;
> liangma@liangbit.com
> Subject: RE: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
>
> Hi Chenbo,
>
> Please see replies inline.
>
> Thanks,
> Jiayu
>
> > -----Original Message-----
> > From: Xia, Chenbo <chenbo.xia@intel.com>
> > > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > > index 33d023aa39..44073499bc 100644
> > > --- a/examples/vhost/main.c
> > > +++ b/examples/vhost/main.c
> > > @@ -24,8 +24,9 @@
> > > #include <rte_ip.h>
> > > #include <rte_tcp.h>
> > > #include <rte_pause.h>
> > > +#include <rte_dmadev.h>
> > > +#include <rte_vhost_async.h>
> > >
> > > -#include "ioat.h"
> > > #include "main.h"
> > >
> > > #ifndef MAX_QUEUES
> > > @@ -56,6 +57,14 @@
> > > #define RTE_TEST_TX_DESC_DEFAULT 512
> > >
> > > #define INVALID_PORT_ID 0xFF
> > > +#define INVALID_DMA_ID -1
> > > +
> > > +#define MAX_VHOST_DEVICE 1024
> > > +#define DMA_RING_SIZE 4096
> > > +
> > > +struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
> > > +struct rte_vhost_async_dma_info
> > dma_config[RTE_DMADEV_DEFAULT_MAX];
> > > +static int dma_count;
> > >
> > > /* mask of enabled ports */
> > > static uint32_t enabled_port_mask = 0;
> > > @@ -96,8 +105,6 @@ static int builtin_net_driver;
> > >
> > > static int async_vhost_driver;
> > >
> > > -static char *dma_type;
> > > -
> > > /* Specify timeout (in useconds) between retries on RX. */
> > > static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;
> > > /* Specify the number of retries on RX. */
> > > @@ -196,13 +203,134 @@ struct vhost_bufftable
> > *vhost_txbuff[RTE_MAX_LCORE *
> > > MAX_VHOST_DEVICE];
> > > #define MBUF_TABLE_DRAIN_TSC((rte_get_tsc_hz() + US_PER_S - 1) \
> > > / US_PER_S * BURST_TX_DRAIN_US)
> > >
> > > +static inline bool
> > > +is_dma_configured(int16_t dev_id)
> > > +{
> > > +int i;
> > > +
> > > +for (i = 0; i < dma_count; i++) {
> > > +if (dma_config[i].dev_id == dev_id) {
> > > +return true;
> > > +}
> > > +}
> > > +return false;
> > > +}
> > > +
> > > static inline int
> > > open_dma(const char *value)
> > > {
> > > -if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0)
> > > -return open_ioat(value);
> > > +struct dma_for_vhost *dma_info = dma_bind;
> > > +char *input = strndup(value, strlen(value) + 1);
> > > +char *addrs = input;
> > > +char *ptrs[2];
> > > +char *start, *end, *substr;
> > > +int64_t vid, vring_id;
> > > +
> > > +struct rte_dma_info info;
> > > +struct rte_dma_conf dev_config = { .nb_vchans = 1 };
> > > +struct rte_dma_vchan_conf qconf = {
> > > +.direction = RTE_DMA_DIR_MEM_TO_MEM,
> > > +.nb_desc = DMA_RING_SIZE
> > > +};
> > > +
> > > +int dev_id;
> > > +int ret = 0;
> > > +uint16_t i = 0;
> > > +char *dma_arg[MAX_VHOST_DEVICE];
> > > +int args_nr;
> > > +
> > > +while (isblank(*addrs))
> > > +addrs++;
> > > +if (*addrs == '\0') {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +/* process DMA devices within bracket. */
> > > +addrs++;
> > > +substr = strtok(addrs, ";]");
> > > +if (!substr) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +args_nr = rte_strsplit(substr, strlen(substr),
> > > +dma_arg, MAX_VHOST_DEVICE, ',');
> > > +if (args_nr <= 0) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +while (i < args_nr) {
> > > +char *arg_temp = dma_arg[i];
> > > +uint8_t sub_nr;
> > > +
> > > +sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> > > +if (sub_nr != 2) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +start = strstr(ptrs[0], "txd");
> > > +if (start == NULL) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +start += 3;
> > > +vid = strtol(start, &end, 0);
> > > +if (end == start) {
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +vring_id = 0 + VIRTIO_RXQ;
> >
> > No need to introduce vring_id, it's always VIRTIO_RXQ
>
> I will remove it later.
>
> >
> > > +
> > > +dev_id = rte_dma_get_dev_id_by_name(ptrs[1]);
> > > +if (dev_id < 0) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "Fail to find
> > DMA %s.\n",
> > > ptrs[1]);
> > > +ret = -1;
> > > +goto out;
> > > +} else if (is_dma_configured(dev_id)) {
> > > +goto done;
> > > +}
> > > +
> >
> > Please call rte_dma_info_get before configure to make sure
> > info.max_vchans >=1
>
> Do you suggest using "rte_dma_info_get() and info.max_vchans == 0" to indicate
> that the device is not configured, rather than using is_dma_configured()?
No, I mean when you configure the dmadev with one vchan, make sure it does have
at least one vchan, even though the 'vchan == 0' case can hardly happen.
Just like the function call sequence in test_dmadev_instance(), test_dmadev.c.
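A minimal sketch of that call sequence (the helper name is mine; error
handling is abbreviated):

static int
configure_dma_dev(int16_t dev_id, uint16_t nb_desc)
{
	struct rte_dma_info info;
	struct rte_dma_conf dev_conf = { .nb_vchans = 1 };
	struct rte_dma_vchan_conf vchan_conf = {
		.direction = RTE_DMA_DIR_MEM_TO_MEM,
		.nb_desc = nb_desc,
	};

	/* Query capabilities before configuring, as suggested above. */
	if (rte_dma_info_get(dev_id, &info) != 0 || info.max_vchans < 1)
		return -1;
	if (rte_dma_configure(dev_id, &dev_conf) != 0)
		return -1;
	if (rte_dma_vchan_setup(dev_id, 0, &vchan_conf) != 0)
		return -1;
	return rte_dma_start(dev_id);
}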
>
> >
> > > +if (rte_dma_configure(dev_id, &dev_config) != 0) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "Fail to configure
> > DMA %d.\n",
> > > dev_id);
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +if (rte_dma_vchan_setup(dev_id, 0, &qconf) != 0) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "Fail to set up
> > DMA %d.\n",
> > > dev_id);
> > > +ret = -1;
> > > +goto out;
> > > +}
> > >
> > > -return -1;
> > > +rte_dma_info_get(dev_id, &info);
> > > +if (info.nb_vchans != 1) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "DMA %d has no
> > queues.\n",
> > > dev_id);
> >
> > Then the above means the number of vchan is not configured.
> >
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +if (rte_dma_start(dev_id) != 0) {
> > > +RTE_LOG(ERR, VHOST_CONFIG, "Fail to start
> > DMA %u.\n",
> > > dev_id);
> > > +ret = -1;
> > > +goto out;
> > > +}
> > > +
> > > +dma_config[dma_count].dev_id = dev_id;
> > > +dma_config[dma_count].max_vchans = 1;
> > > +dma_config[dma_count++].max_desc = DMA_RING_SIZE;
> > > +
> > > +done:
> > > +(dma_info + vid)->dmas[vring_id].dev_id = dev_id;
> > > +i++;
> > > +}
> > > +out:
> > > +free(input);
> > > +return ret;
> > > }
> > >
> > > /*
> > > @@ -500,8 +628,6 @@ enum {
> > > OPT_CLIENT_NUM,
> > > #define OPT_BUILTIN_NET_DRIVER "builtin-net-driver"
> > > OPT_BUILTIN_NET_DRIVER_NUM,
> > > -#define OPT_DMA_TYPE "dma-type"
> > > -OPT_DMA_TYPE_NUM,
> > > #define OPT_DMAS "dmas"
> > > OPT_DMAS_NUM,
> > > };
> > > @@ -539,8 +665,6 @@ us_vhost_parse_args(int argc, char **argv)
> > > NULL, OPT_CLIENT_NUM},
> > > {OPT_BUILTIN_NET_DRIVER, no_argument,
> > > NULL, OPT_BUILTIN_NET_DRIVER_NUM},
> > > -{OPT_DMA_TYPE, required_argument,
> > > -NULL, OPT_DMA_TYPE_NUM},
> > > {OPT_DMAS, required_argument,
> > > NULL, OPT_DMAS_NUM},
> > > {NULL, 0, 0, 0},
> > > @@ -661,10 +785,6 @@ us_vhost_parse_args(int argc, char **argv)
> > > }
> > > break;
> > >
> > > -case OPT_DMA_TYPE_NUM:
> > > -dma_type = optarg;
> > > -break;
> > > -
> > > case OPT_DMAS_NUM:
> > > if (open_dma(optarg) == -1) {
> > > RTE_LOG(INFO, VHOST_CONFIG,
> > > @@ -841,9 +961,10 @@ complete_async_pkts(struct vhost_dev *vdev)
> > > {
> > > struct rte_mbuf *p_cpl[MAX_PKT_BURST];
> > > uint16_t complete_count;
> > > +int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> > >
> > > complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> > > -VIRTIO_RXQ, p_cpl,
> > MAX_PKT_BURST);
> > > +VIRTIO_RXQ, p_cpl, MAX_PKT_BURST,
> > dma_id, 0);
> > > if (complete_count) {
> > > free_pkts(p_cpl, complete_count);
> > > __atomic_sub_fetch(&vdev->pkts_inflight, complete_count,
> > > __ATOMIC_SEQ_CST);
> > > @@ -883,11 +1004,12 @@ drain_vhost(struct vhost_dev *vdev)
> > >
> > > if (builtin_net_driver) {
> > > ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
> > > -} else if (async_vhost_driver) {
> > > +} else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > > uint16_t enqueue_fail = 0;
> > > +int16_t dma_id = dma_bind[vdev-
> > >vid].dmas[VIRTIO_RXQ].dev_id;
> > >
> > > complete_async_pkts(vdev);
> > > -ret = rte_vhost_submit_enqueue_burst(vdev->vid,
> > VIRTIO_RXQ, m,
> > > nr_xmit);
> > > +ret = rte_vhost_submit_enqueue_burst(vdev->vid,
> > VIRTIO_RXQ, m,
> > > nr_xmit, dma_id, 0);
> > > __atomic_add_fetch(&vdev->pkts_inflight, ret,
> > __ATOMIC_SEQ_CST);
> > >
> > > enqueue_fail = nr_xmit - ret;
> > > @@ -905,7 +1027,7 @@ drain_vhost(struct vhost_dev *vdev)
> > > __ATOMIC_SEQ_CST);
> > > }
> > >
> > > -if (!async_vhost_driver)
> > > +if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > > free_pkts(m, nr_xmit);
> > > }
> > >
> > > @@ -1211,12 +1333,13 @@ drain_eth_rx(struct vhost_dev *vdev)
> > > if (builtin_net_driver) {
> > > enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
> > > pkts, rx_count);
> > > -} else if (async_vhost_driver) {
> > > +} else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > > uint16_t enqueue_fail = 0;
> > > +int16_t dma_id = dma_bind[vdev-
> > >vid].dmas[VIRTIO_RXQ].dev_id;
> > >
> > > complete_async_pkts(vdev);
> > > enqueue_count = rte_vhost_submit_enqueue_burst(vdev-
> > >vid,
> > > -VIRTIO_RXQ, pkts, rx_count);
> > > +VIRTIO_RXQ, pkts, rx_count, dma_id,
> > 0);
> > > __atomic_add_fetch(&vdev->pkts_inflight, enqueue_count,
> > > __ATOMIC_SEQ_CST);
> > >
> > > enqueue_fail = rx_count - enqueue_count;
> > > @@ -1235,7 +1358,7 @@ drain_eth_rx(struct vhost_dev *vdev)
> > > __ATOMIC_SEQ_CST);
> > > }
> > >
> > > -if (!async_vhost_driver)
> > > +if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > > free_pkts(pkts, rx_count);
> > > }
> > >
> > > @@ -1387,18 +1510,20 @@ destroy_device(int vid)
> > > "(%d) device has been removed from data core\n",
> > > vdev->vid);
> > >
> > > -if (async_vhost_driver) {
> > > +if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> > > uint16_t n_pkt = 0;
> > > +int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > > struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> > >
> > > while (vdev->pkts_inflight) {
> > > n_pkt = rte_vhost_clear_queue_thread_unsafe(vid,
> > VIRTIO_RXQ,
> > > -m_cpl, vdev->pkts_inflight);
> > > +m_cpl, vdev->pkts_inflight,
> > dma_id, 0);
> > > free_pkts(m_cpl, n_pkt);
> > > __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt,
> > > __ATOMIC_SEQ_CST);
> > > }
> > >
> > > rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> > > +dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> > > }
> > >
> > > rte_free(vdev);
> > > @@ -1468,20 +1593,14 @@ new_device(int vid)
> > > "(%d) device has been added to data core %d\n",
> > > vid, vdev->coreid);
> > >
> > > -if (async_vhost_driver) {
> > > -struct rte_vhost_async_config config = {0};
> > > -struct rte_vhost_async_channel_ops channel_ops;
> > > -
> > > -if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0) {
> > > -channel_ops.transfer_data = ioat_transfer_data_cb;
> > > -channel_ops.check_completed_copies =
> > > -ioat_check_completed_copies_cb;
> > > -
> > > -config.features = RTE_VHOST_ASYNC_INORDER;
> > > +if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> > > +int ret;
> > >
> > > -return rte_vhost_async_channel_register(vid,
> > VIRTIO_RXQ,
> > > -config, &channel_ops);
> > > +ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> > > +if (ret == 0) {
> > > +dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled =
> > true;
> > > }
> > > +return ret;
> > > }
> > >
> > > return 0;
> > > @@ -1502,14 +1621,15 @@ vring_state_changed(int vid, uint16_t
> > queue_id, int
> > > enable)
> > > if (queue_id != VIRTIO_RXQ)
> > > return 0;
> > >
> > > -if (async_vhost_driver) {
> > > +if (dma_bind[vid].dmas[queue_id].async_enabled) {
> > > if (!enable) {
> > > uint16_t n_pkt = 0;
> > > +int16_t dma_id =
> > dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > > struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> > >
> > > while (vdev->pkts_inflight) {
> > > n_pkt =
> > rte_vhost_clear_queue_thread_unsafe(vid,
> > > queue_id,
> > > -m_cpl, vdev-
> > >pkts_inflight);
> > > +m_cpl, vdev-
> > >pkts_inflight, dma_id,
> > > 0);
> > > free_pkts(m_cpl, n_pkt);
> > > __atomic_sub_fetch(&vdev->pkts_inflight,
> > n_pkt,
> > > __ATOMIC_SEQ_CST);
> > > }
> > > @@ -1657,6 +1777,25 @@ create_mbuf_pool(uint16_t nr_port, uint32_t
> > > nr_switch_core, uint32_t mbuf_size,
> > > rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
> > > }
> > >
> > > +static void
> > > +init_dma(void)
> > > +{
> > > +int i;
> > > +
> > > +for (i = 0; i < MAX_VHOST_DEVICE; i++) {
> > > +int j;
> > > +
> > > +for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
> > > +dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
> > > +dma_bind[i].dmas[j].async_enabled = false;
> > > +}
> > > +}
> > > +
> > > +for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> > > +dma_config[i].dev_id = INVALID_DMA_ID;
> > > +}
> > > +}
> > > +
> > > /*
> > > * Main function, does initialisation and calls the per-lcore functions.
> > > */
> > > @@ -1679,6 +1818,9 @@ main(int argc, char *argv[])
> > > argc -= ret;
> > > argv += ret;
> > >
> > > +/* initialize dma structures */
> > > +init_dma();
> > > +
> > > /* parse app arguments */
> > > ret = us_vhost_parse_args(argc, argv);
> > > if (ret < 0)
> > > @@ -1754,6 +1896,20 @@ main(int argc, char *argv[])
> > > if (client_mode)
> > > flags |= RTE_VHOST_USER_CLIENT;
> > >
> > > +if (async_vhost_driver) {
> > > +if (rte_vhost_async_dma_configure(dma_config, dma_count)
> > < 0) {
> > > +RTE_LOG(ERR, VHOST_PORT, "Failed to configure
> > DMA in
> > > vhost.\n");
> > > +for (i = 0; i < dma_count; i++) {
> > > +if (dma_config[i].dev_id != INVALID_DMA_ID)
> > {
> > > +rte_dma_stop(dma_config[i].dev_id);
> > > +dma_config[i].dev_id =
> > INVALID_DMA_ID;
> > > +}
> > > +}
> > > +dma_count = 0;
> > > +async_vhost_driver = false;
> > > +}
> > > +}
> > > +
> > > /* Register vhost user driver to handle vhost messages. */
> > > for (i = 0; i < nb_sockets; i++) {
> > > char *file = socket_files + i * PATH_MAX;
> > > diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> > > index e7b1ac60a6..b4a453e77e 100644
> > > --- a/examples/vhost/main.h
> > > +++ b/examples/vhost/main.h
> > > @@ -8,6 +8,7 @@
> > > #include <sys/queue.h>
> > >
> > > #include <rte_ether.h>
> > > +#include <rte_pci.h>
> > >
> > > /* Macros for printing using RTE_LOG */
> > > #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
> > > @@ -79,6 +80,16 @@ struct lcore_info {
> > > struct vhost_dev_tailq_list vdev_list;
> > > };
> > >
> > > +struct dma_info {
> > > +struct rte_pci_addr addr;
> > > +int16_t dev_id;
> > > +bool async_enabled;
> > > +};
> > > +
> > > +struct dma_for_vhost {
> > > +struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> > > +};
> > > +
> > > /* we implement non-extra virtio net features */
> > > #define VIRTIO_NET_FEATURES0
> > >
> > > diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
> > > index 3efd5e6540..87a637f83f 100644
> > > --- a/examples/vhost/meson.build
> > > +++ b/examples/vhost/meson.build
> > > @@ -12,13 +12,9 @@ if not is_linux
> > > endif
> > >
> > > deps += 'vhost'
> > > +deps += 'dmadev'
> > > allow_experimental_apis = true
> > > sources = files(
> > > 'main.c',
> > > 'virtio_net.c',
> > > )
> > > -
> > > -if dpdk_conf.has('RTE_RAW_IOAT')
> > > - deps += 'raw_ioat'
> > > - sources += files('ioat.c')
> > > -endif
> > > diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
> > > index cdb37a4814..8107329400 100644
> > > --- a/lib/vhost/meson.build
> > > +++ b/lib/vhost/meson.build
> > > @@ -33,7 +33,8 @@ headers = files(
> > > 'rte_vhost_async.h',
> > > 'rte_vhost_crypto.h',
> > > )
> > > +
> > > driver_sdk_headers = files(
> > > 'vdpa_driver.h',
> > > )
> > > -deps += ['ethdev', 'cryptodev', 'hash', 'pci']
> > > +deps += ['ethdev', 'cryptodev', 'hash', 'pci', 'dmadev']
> > > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> > > index a87ea6ba37..23a7a2d8b3 100644
> > > --- a/lib/vhost/rte_vhost_async.h
> > > +++ b/lib/vhost/rte_vhost_async.h
> > > @@ -27,70 +27,12 @@ struct rte_vhost_iov_iter {
> > > };
> > >
> > > /**
> > > - * dma transfer status
> > > + * DMA device information
> > > */
> > > -struct rte_vhost_async_status {
> > > -/** An array of application specific data for source memory */
> > > -uintptr_t *src_opaque_data;
> > > -/** An array of application specific data for destination memory */
> > > -uintptr_t *dst_opaque_data;
> > > -};
> > > -
> > > -/**
> > > - * dma operation callbacks to be implemented by applications
> > > - */
> > > -struct rte_vhost_async_channel_ops {
> > > -/**
> > > - * instruct async engines to perform copies for a batch of packets
> > > - *
> > > - * @param vid
> > > - * id of vhost device to perform data copies
> > > - * @param queue_id
> > > - * queue id to perform data copies
> > > - * @param iov_iter
> > > - * an array of IOV iterators
> > > - * @param opaque_data
> > > - * opaque data pair sending to DMA engine
> > > - * @param count
> > > - * number of elements in the "descs" array
> > > - * @return
> > > - * number of IOV iterators processed, negative value means error
> > > - */
> > > -int32_t (*transfer_data)(int vid, uint16_t queue_id,
> > > -struct rte_vhost_iov_iter *iov_iter,
> > > -struct rte_vhost_async_status *opaque_data,
> > > -uint16_t count);
> > > -/**
> > > - * check copy-completed packets from the async engine
> > > - * @param vid
> > > - * id of vhost device to check copy completion
> > > - * @param queue_id
> > > - * queue id to check copy completion
> > > - * @param opaque_data
> > > - * buffer to receive the opaque data pair from DMA engine
> > > - * @param max_packets
> > > - * max number of packets could be completed
> > > - * @return
> > > - * number of async descs completed, negative value means error
> > > - */
> > > -int32_t (*check_completed_copies)(int vid, uint16_t queue_id,
> > > -struct rte_vhost_async_status *opaque_data,
> > > -uint16_t max_packets);
> > > -};
> > > -
> > > -/**
> > > - * async channel features
> > > - */
> > > -enum {
> > > -RTE_VHOST_ASYNC_INORDER = 1U << 0,
> > > -};
> > > -
> > > -/**
> > > - * async channel configuration
> > > - */
> > > -struct rte_vhost_async_config {
> > > -uint32_t features;
> > > -uint32_t rsvd[2];
> > > +struct rte_vhost_async_dma_info {
> > > +int16_t dev_id;/* DMA device ID */
> > > +uint16_t max_vchans;/* max number of vchan */
> > > +uint16_t max_desc;/* max desc number of vchan */
> > > };
> > >
> > > /**
> > > @@ -100,17 +42,11 @@ struct rte_vhost_async_config {
> > > * vhost device id async channel to be attached to
> > > * @param queue_id
> > > * vhost queue id async channel to be attached to
> > > - * @param config
> > > - * Async channel configuration structure
> > > - * @param ops
> > > - * Async channel operation callbacks
> > > * @return
> > > * 0 on success, -1 on failures
> > > */
> > > __rte_experimental
> > > -int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > > -struct rte_vhost_async_config config,
> > > -struct rte_vhost_async_channel_ops *ops);
> > > +int rte_vhost_async_channel_register(int vid, uint16_t queue_id);
> > >
> > > /**
> > > * Unregister an async channel for a vhost queue
> > > @@ -136,17 +72,11 @@ int rte_vhost_async_channel_unregister(int vid,
> > uint16_t
> > > queue_id);
> > > * vhost device id async channel to be attached to
> > > * @param queue_id
> > > * vhost queue id async channel to be attached to
> > > - * @param config
> > > - * Async channel configuration
> > > - * @param ops
> > > - * Async channel operation callbacks
> > > * @return
> > > * 0 on success, -1 on failures
> > > */
> > > __rte_experimental
> > > -int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > queue_id,
> > > -struct rte_vhost_async_config config,
> > > -struct rte_vhost_async_channel_ops *ops);
> > > +int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > > queue_id);
> > >
> > > /**
> > > * Unregister an async channel for a vhost queue without performing any
> > > @@ -179,12 +109,17 @@ int
> > rte_vhost_async_channel_unregister_thread_unsafe(int
> > > vid,
> > > * array of packets to be enqueued
> > > * @param count
> > > * packets num to be enqueued
> > > + * @param dma_id
> > > + * the identifier of the DMA device
> > > + * @param vchan
> > > + * the identifier of virtual DMA channel
> > > * @return
> > > * num of packets enqueued
> > > */
> > > __rte_experimental
> > > uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> > > -struct rte_mbuf **pkts, uint16_t count);
> > > +struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > > +uint16_t vchan);
> >
> > All dma_id in the API should be uint16_t. Otherwise you need to check if
> valid.
>
> Yes, you are right. Although dma_id is defined as int16_t and the DMA library
> checks if it is valid, vhost doesn't handle DMA failure and we need to make
> sure dma_id is valid before using it. And even if vhost handled DMA errors, a
> better place to check for an invalid dma_id is before passing it to the DMA
> library anyway. I will add the check later.
>
> >
> > >
> > > /**
> > > * This function checks async completion status for a specific vhost
> > > @@ -199,12 +134,17 @@ uint16_t rte_vhost_submit_enqueue_burst(int
> > vid,
> > > uint16_t queue_id,
> > > * blank array to get return packet pointer
> > > * @param count
> > > * size of the packet array
> > > + * @param dma_id
> > > + * the identifier of the DMA device
> > > + * @param vchan
> > > + * the identifier of virtual DMA channel
> > > * @return
> > > * num of packets returned
> > > */
> > > __rte_experimental
> > > uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> > > -struct rte_mbuf **pkts, uint16_t count);
> > > +struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > > +uint16_t vchan);
> > >
> > > /**
> > > * This function returns the amount of in-flight packets for the vhost
> > > @@ -235,11 +175,32 @@ int rte_vhost_async_get_inflight(int vid, uint16_t
> > > queue_id);
> > > * Blank array to get return packet pointer
> > > * @param count
> > > * Size of the packet array
> > > + * @param dma_id
> > > + * the identifier of the DMA device
> > > + * @param vchan
> > > + * the identifier of virtual DMA channel
> > > * @return
> > > * Number of packets returned
> > > */
> > > __rte_experimental
> > > uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> > > -struct rte_mbuf **pkts, uint16_t count);
> > > +struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > > +uint16_t vchan);
> > > +/**
> > > + * The DMA vChannels used in asynchronous data path must be
> > configured
> > > + * first. So this function needs to be called before enabling DMA
> > > + * acceleration for vring. If this function fails, asynchronous data path
> > > + * cannot be enabled for any vring further.
> > > + *
> > > + * @param dmas
> > > + * DMA information
> > > + * @param count
> > > + * Element number of 'dmas'
> > > + * @return
> > > + * 0 on success, and -1 on failure
> > > + */
> > > +__rte_experimental
> > > +int rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info
> > *dmas,
> > > +uint16_t count);
> >
> > I think based on the current design, vhost can use every vchan if the user
> > app lets it. So the max_desc and max_vchans can just be got from dmadev
> > APIs? Then there's no need to introduce the new ABI struct
> > rte_vhost_async_dma_info.
>
> Yes, no need to introduce struct rte_vhost_async_dma_info. We can either use
> struct rte_dma_info, as suggested by Maxime, or query from the dma library
> via device id. Since dma device configuration is left to applications, I
> prefer to use rte_dma_info directly. What do you think?
If you only use rte_dma_info as the input param, you will also need to call a
dmadev API to get the dmadev ID in rte_vhost_async_dma_configure (or pass both
rte_dma_info and the dmadev ID). So I suggest using only the dmadev ID as input.
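As a sketch, the simplified prototype could then be just (hypothetical
signature; vhost would query everything else from the dmadev lib):

int rte_vhost_async_dma_configure(int16_t dma_id, uint16_t vchan_id);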
>
> >
> > And about max_desc: I see in the dmadev lib you can get a vchan's max_desc,
> > but you may use a nb_desc (<= max_desc) to configure the vchan. And IIUC,
> > vhost wants to know the nb_desc instead of max_desc?
>
> True, nb_desc is better than max_desc. But the dma library doesn't provide a
> function to query nb_desc for every vchannel. And rte_dma_info cannot be used
> in rte_vhost_async_dma_configure() if vhost uses nb_desc. So the only way is
> to require users to provide nb_desc for every vchannel, and that will
> introduce a new struct. Is it really needed?
>
Since the dmadev lib does not currently provide a way to query the real nb_desc
for a vchan, I think we can just use max_desc.
But ideally, if the dmadev lib provided such a way, the configured nb_desc and
nb_vchans should be used to configure the vhost lib.
@Bruce, could you add such a way to the dmadev lib? As it stands, users do not
know the real configured nb_desc of a vchan.
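For reference, the alignment in question (from the patch above) only rounds
the tracking-array size up to a power of two so that a ring mask can be used:

	if (!rte_is_power_of_2(max_desc))
		max_desc = rte_align32pow2(max_desc);
	/* then: ring_size = max_desc; ring_mask = max_desc - 1 */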
> >
> > >
> > > #endif /* _RTE_VHOST_ASYNC_H_ */
> > > diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> > > index a7ef7f1976..1202ba9c1a 100644
> > > --- a/lib/vhost/version.map
> > > +++ b/lib/vhost/version.map
> > > @@ -84,6 +84,9 @@ EXPERIMENTAL {
> > >
> > > # added in 21.11
> > > rte_vhost_get_monitor_addr;
> > > +
> > > +# added in 22.03
> > > +rte_vhost_async_dma_configure;
> > > };
> > >
> > > INTERNAL {
> > > diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> > > index 13a9bb9dd1..32f37f4851 100644
> > > --- a/lib/vhost/vhost.c
> > > +++ b/lib/vhost/vhost.c
> > > @@ -344,6 +344,7 @@ vhost_free_async_mem(struct vhost_virtqueue *vq)
> > > return;
> > >
> > > rte_free(vq->async->pkts_info);
> > > +rte_free(vq->async->pkts_cmpl_flag);
> > >
> > > rte_free(vq->async->buffers_packed);
> > > vq->async->buffers_packed = NULL;
> > > @@ -1626,8 +1627,7 @@ rte_vhost_extern_callback_register(int vid,
> > > }
> > >
> > > static __rte_always_inline int
> > > -async_channel_register(int vid, uint16_t queue_id,
> > > -struct rte_vhost_async_channel_ops *ops)
> > > +async_channel_register(int vid, uint16_t queue_id)
> > > {
> > > struct virtio_net *dev = get_device(vid);
> > > struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> > > @@ -1656,6 +1656,14 @@ async_channel_register(int vid, uint16_t
> > queue_id,
> > > goto out_free_async;
> > > }
> > >
> > > +async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size *
> > sizeof(bool),
> > > +RTE_CACHE_LINE_SIZE, node);
> > > +if (!async->pkts_cmpl_flag) {
> > > +VHOST_LOG_CONFIG(ERR, "failed to allocate async
> > pkts_cmpl_flag
> > > (vid %d, qid: %d)\n",
> > > +vid, queue_id);
> >
> > qid: %u
> >
> > > +goto out_free_async;
> > > +}
> > > +
> > > if (vq_is_packed(dev)) {
> > > async->buffers_packed = rte_malloc_socket(NULL,
> > > vq->size * sizeof(struct
> > vring_used_elem_packed),
> > > @@ -1676,9 +1684,6 @@ async_channel_register(int vid, uint16_t
> > queue_id,
> > > }
> > > }
> > >
> > > -async->ops.check_completed_copies = ops-
> > >check_completed_copies;
> > > -async->ops.transfer_data = ops->transfer_data;
> > > -
> > > vq->async = async;
> > >
> > > return 0;
> > > @@ -1691,15 +1696,13 @@ async_channel_register(int vid, uint16_t
> > queue_id,
> > > }
> > >
> > > int
> > > -rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > > -struct rte_vhost_async_config config,
> > > -struct rte_vhost_async_channel_ops *ops)
> > > +rte_vhost_async_channel_register(int vid, uint16_t queue_id)
> > > {
> > > struct vhost_virtqueue *vq;
> > > struct virtio_net *dev = get_device(vid);
> > > int ret;
> > >
> > > -if (dev == NULL || ops == NULL)
> > > +if (dev == NULL)
> > > return -1;
> > >
> > > if (queue_id >= VHOST_MAX_VRING)
> > > @@ -1710,33 +1713,20 @@ rte_vhost_async_channel_register(int vid,
> > uint16_t
> > > queue_id,
> > > if (unlikely(vq == NULL || !dev->async_copy))
> > > return -1;
> > >
> > > -if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> > > -VHOST_LOG_CONFIG(ERR,
> > > -"async copy is not supported on non-inorder mode "
> > > -"(vid %d, qid: %d)\n", vid, queue_id);
> > > -return -1;
> > > -}
> > > -
> > > -if (unlikely(ops->check_completed_copies == NULL ||
> > > -ops->transfer_data == NULL))
> > > -return -1;
> > > -
> > > rte_spinlock_lock(&vq->access_lock);
> > > -ret = async_channel_register(vid, queue_id, ops);
> > > +ret = async_channel_register(vid, queue_id);
> > > rte_spinlock_unlock(&vq->access_lock);
> > >
> > > return ret;
> > > }
> > >
> > > int
> > > -rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > queue_id,
> > > -struct rte_vhost_async_config config,
> > > -struct rte_vhost_async_channel_ops *ops)
> > > +rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t
> > queue_id)
> > > {
> > > struct vhost_virtqueue *vq;
> > > struct virtio_net *dev = get_device(vid);
> > >
> > > -if (dev == NULL || ops == NULL)
> > > +if (dev == NULL)
> > > return -1;
> > >
> > > if (queue_id >= VHOST_MAX_VRING)
> > > @@ -1747,18 +1737,7 @@
> > rte_vhost_async_channel_register_thread_unsafe(int vid,
> > > uint16_t queue_id,
> > > if (unlikely(vq == NULL || !dev->async_copy))
> > > return -1;
> > >
> > > -if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> > > -VHOST_LOG_CONFIG(ERR,
> > > -"async copy is not supported on non-inorder mode "
> > > -"(vid %d, qid: %d)\n", vid, queue_id);
> > > -return -1;
> > > -}
> > > -
> > > -if (unlikely(ops->check_completed_copies == NULL ||
> > > -ops->transfer_data == NULL))
> > > -return -1;
> > > -
> > > -return async_channel_register(vid, queue_id, ops);
> > > +return async_channel_register(vid, queue_id);
> > > }
> > >
> > > int
> > > @@ -1835,6 +1814,83 @@
> > rte_vhost_async_channel_unregister_thread_unsafe(int
> > > vid, uint16_t queue_id)
> > > return 0;
> > > }
> > >
> > > +static __rte_always_inline void
> > > +vhost_free_async_dma_mem(void)
> > > +{
> > > +uint16_t i;
> > > +
> > > +for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> > > +struct async_dma_info *dma = &dma_copy_track[i];
> > > +int16_t j;
> > > +
> > > +if (dma->max_vchans == 0) {
> > > +continue;
> > > +}
> > > +
> > > +for (j = 0; j < dma->max_vchans; j++) {
> > > +rte_free(dma->vchans[j].metadata);
> > > +}
> > > +rte_free(dma->vchans);
> > > +dma->vchans = NULL;
> > > +dma->max_vchans = 0;
> > > +}
> > > +}
> > > +
> > > +int
> > > +rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas,
> > uint16_t
> > > count)
> > > +{
> > > +uint16_t i;
> > > +
> > > +if (!dmas) {
> > > +VHOST_LOG_CONFIG(ERR, "Invalid DMA configuration
> > parameter.\n");
> > > +return -1;
> > > +}
> > > +
> > > +for (i = 0; i < count; i++) {
> > > +struct async_dma_vchan_info *vchans;
> > > +int16_t dev_id;
> > > +uint16_t max_vchans;
> > > +uint16_t max_desc;
> > > +uint16_t j;
> > > +
> > > +dev_id = dmas[i].dev_id;
> > > +max_vchans = dmas[i].max_vchans;
> > > +max_desc = dmas[i].max_desc;
> > > +
> > > +if (!rte_is_power_of_2(max_desc)) {
> > > +max_desc = rte_align32pow2(max_desc);
> > > +}
> >
> > I think when aligning to a power of 2, it should not exceed max_desc?
>
> Aligned max_desc is used to allocate the context tracking array. We only need
> to guarantee that the size of the array for every vchannel is >= max_desc. So
> it's OK to have a greater array size than max_desc.
>
> > And based on above comment, if this max_desc is nb_desc configured for
> > vchanl, you should just make sure the nb_desc be power-of-2.
> >
> > > +
> > > +vchans = rte_zmalloc(NULL, sizeof(struct
> > async_dma_vchan_info) *
> > > max_vchans,
> > > +RTE_CACHE_LINE_SIZE);
> > > +if (vchans == NULL) {
> > > +VHOST_LOG_CONFIG(ERR, "Failed to allocate vchans
> > for dma-
> > > %d."
> > > +" Cannot enable async data-path.\n",
> > dev_id);
> > > +vhost_free_async_dma_mem();
> > > +return -1;
> > > +}
> > > +
> > > +for (j = 0; j < max_vchans; j++) {
> > > +vchans[j].metadata = rte_zmalloc(NULL, sizeof(bool *)
> > *
> > > max_desc,
> > > +RTE_CACHE_LINE_SIZE);
> > > +if (!vchans[j].metadata) {
> > > +VHOST_LOG_CONFIG(ERR, "Failed to allocate
> > metadata for
> > > "
> > > +"dma-%d vchan-%u\n",
> > dev_id, j);
> > > +vhost_free_async_dma_mem();
> > > +return -1;
> > > +}
> > > +
> > > +vchans[j].ring_size = max_desc;
> > > +vchans[j].ring_mask = max_desc - 1;
> > > +}
> > > +
> > > +dma_copy_track[dev_id].vchans = vchans;
> > > +dma_copy_track[dev_id].max_vchans = max_vchans;
> > > +}
> > > +
> > > +return 0;
> > > +}
> > > +
> > > int
> > > rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
> > > {
> > > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> > > index 7085e0885c..d9bda34e11 100644
> > > --- a/lib/vhost/vhost.h
> > > +++ b/lib/vhost/vhost.h
> > > @@ -19,6 +19,7 @@
> > > #include <rte_ether.h>
> > > #include <rte_rwlock.h>
> > > #include <rte_malloc.h>
> > > +#include <rte_dmadev.h>
> > >
> > > #include "rte_vhost.h"
> > > #include "rte_vdpa.h"
> > > @@ -50,6 +51,7 @@
> > >
> > > #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST)
> > > #define VHOST_MAX_ASYNC_VEC 2048
> > > +#define VHOST_ASYNC_DMA_BATCHING_SIZE 32
> > >
> > > #define PACKED_DESC_ENQUEUE_USED_FLAG(w)\
> > > ((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED |
> > VRING_DESC_F_WRITE) : \
> > > @@ -119,6 +121,41 @@ struct vring_used_elem_packed {
> > > uint32_t count;
> > > };
> > >
> > > +struct async_dma_vchan_info {
> > > +/* circular array to track copy metadata */
> > > +bool **metadata;
> >
> > If the metadata will only be flags, maybe just use some
> > name called XXX_flag
>
> Sure, I will rename it.
>
> >
> > > +
> > > +/* max elements in 'metadata' */
> > > +uint16_t ring_size;
> > > +/* ring index mask for 'metadata' */
> > > +uint16_t ring_mask;
> > > +
> > > +/* batching copies before a DMA doorbell */
> > > +uint16_t nr_batching;
> > > +
> > > +/**
> > > + * DMA virtual channel lock. Although it is able to bind DMA
> > > + * virtual channels to data plane threads, vhost control plane
> > > + * thread could call data plane functions too, thus causing
> > > + * DMA device contention.
> > > + *
> > > + * For example, in VM exit case, vhost control plane thread needs
> > > + * to clear in-flight packets before disable vring, but there could
> > > + * be anotther data plane thread is enqueuing packets to the same
> > > + * vring with the same DMA virtual channel. But dmadev PMD
> > functions
> > > + * are lock-free, so the control plane and data plane threads
> > > + * could operate the same DMA virtual channel at the same time.
> > > + */
> > > +rte_spinlock_t dma_lock;
> > > +};
> > > +
> > > +struct async_dma_info {
> > > +uint16_t max_vchans;
> > > +struct async_dma_vchan_info *vchans;
> > > +};
> > > +
> > > +extern struct async_dma_info
> > dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> > > +
> > > /**
> > > * inflight async packet information
> > > */
> > > @@ -129,9 +166,6 @@ struct async_inflight_info {
> > > };
> > >
> > > struct vhost_async {
> > > -/* operation callbacks for DMA */
> > > -struct rte_vhost_async_channel_ops ops;
> > > -
> > > struct rte_vhost_iov_iter iov_iter[VHOST_MAX_ASYNC_IT];
> > > struct rte_vhost_iovec iovec[VHOST_MAX_ASYNC_VEC];
> > > uint16_t iter_idx;
> > > @@ -139,6 +173,19 @@ struct vhost_async {
> > >
> > > /* data transfer status */
> > > struct async_inflight_info *pkts_info;
> > > +/**
> > > + * packet reorder array. "true" indicates that DMA
> > > + * device completes all copies for the packet.
> > > + *
> > > + * Note that this array could be written by multiple
> > > + * threads at the same time. For example, two threads
> > > + * enqueue packets to the same virtqueue with their
> > > + * own DMA devices. However, since offloading is
> > > + * per-packet basis, each packet flag will only be
> > > + * written by one thread. And single byte write is
> > > + * atomic, so no lock is needed.
> > > + */
> > > +bool *pkts_cmpl_flag;
> > > uint16_t pkts_idx;
> > > uint16_t pkts_inflight_n;
> > > union {
> > > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> > > index b3d954aab4..9f81fc9733 100644
> > > --- a/lib/vhost/virtio_net.c
> > > +++ b/lib/vhost/virtio_net.c
> > > @@ -11,6 +11,7 @@
> > > #include <rte_net.h>
> > > #include <rte_ether.h>
> > > #include <rte_ip.h>
> > > +#include <rte_dmadev.h>
> > > #include <rte_vhost.h>
> > > #include <rte_tcp.h>
> > > #include <rte_udp.h>
> > > @@ -25,6 +26,9 @@
> > >
> > > #define MAX_BATCH_LEN 256
> > >
> > > +/* DMA device copy operation tracking array. */
> > > +struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> > > +
> > > static __rte_always_inline bool
> > > rxvq_is_mergeable(struct virtio_net *dev)
> > > {
> > > @@ -43,6 +47,108 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx,
> > uint32_t
> > > nr_vring)
> > > return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
> > > }
> > >
> > > +static __rte_always_inline uint16_t
> > > +vhost_async_dma_transfer(struct vhost_virtqueue *vq, int16_t dma_id,
> > > +uint16_t vchan, uint16_t head_idx,
> > > +struct rte_vhost_iov_iter *pkts, uint16_t nr_pkts)
> > > +{
> > > +struct async_dma_vchan_info *dma_info =
> > > &dma_copy_track[dma_id].vchans[vchan];
> > > +uint16_t ring_mask = dma_info->ring_mask;
> > > +uint16_t pkt_idx;
> > > +
> > > +rte_spinlock_lock(&dma_info->dma_lock);
> > > +
> > > +for (pkt_idx = 0; pkt_idx < nr_pkts; pkt_idx++) {
> > > +struct rte_vhost_iovec *iov = pkts[pkt_idx].iov;
> > > +int copy_idx = 0;
> > > +uint16_t nr_segs = pkts[pkt_idx].nr_segs;
> > > +uint16_t i;
> > > +
> > > +if (rte_dma_burst_capacity(dma_id, vchan) < nr_segs) {
> > > +goto out;
> > > +}
> > > +
> > > +for (i = 0; i < nr_segs; i++) {
> > > +/**
> > > + * We have checked the available space before
> > submit copies
> > > to DMA
> > > + * vChannel, so we don't handle error here.
> > > + */
> > > +copy_idx = rte_dma_copy(dma_id, vchan,
> > > (rte_iova_t)iov[i].src_addr,
> > > +(rte_iova_t)iov[i].dst_addr, iov[i].len,
> > > +RTE_DMA_OP_FLAG_LLC);
> >
> > This assumes rte_dma_copy will always succeed if there's available space.
> >
> > But the API doxygen says:
> >
> > * @return
> > * - 0..UINT16_MAX: index of enqueued job.
> > * - -ENOSPC: if no space left to enqueue.
> > * - other values < 0 on failure.
> >
> > So it should consider other vendor-specific errors.
>
> Error handling is not free here. Specifically, SW fallback is a way to handle
> failed copy operations. But it requires vhost to track the VA for every source
> and destination buffer for every copy. The DMA library uses IOVA, so vhost
> only prepares IOVAs for the copies of every packet in the async data-path. In
> the case of IOVA as PA, the prepared IOVAs cannot be used as a SW fallback,
> which means vhost would need to store the VA for every copy of every packet
> too, even if no errors will ever happen or IOVA is VA.
>
> I am thinking that the only usable DMA engines in vhost are CBDMA and DSA; is
> it worth the cost for "future HW"? If other vendors' HW appears in the future,
> is it OK to add the support later? Or is there any way to get the VA from an
> IOVA?
Let's investigate how much of a performance drop the error handling will bring and see...
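If we do add error handling, a sketch of a defensive wrapper could look like
this (the helper name and the return convention are illustrative only):

static inline int
dma_copy_checked(int16_t dma_id, uint16_t vchan, rte_iova_t src,
		rte_iova_t dst, uint32_t len)
{
	int ret = rte_dma_copy(dma_id, vchan, src, dst, len,
			RTE_DMA_OP_FLAG_LLC);

	if (ret >= 0)
		return ret;	/* index of the enqueued job */
	if (ret == -ENOSPC)
		return -1;	/* ring full: stop submitting, retry later */
	return -2;		/* vendor-specific error: needs SW fallback */
}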
Thanks,
Chenbo
>
> Thanks,
> Jiayu
> >
> > Thanks,
> > Chenbo
> >
> >
>
^ permalink raw reply [relevance 0%]
* [PATCH v2] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
2022-01-17 23:14 4% [PATCH] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if Michael Barker
@ 2022-01-17 23:23 4% ` Michael Barker
2022-01-20 14:16 0% ` Thomas Monjalon
2022-01-23 21:07 8% ` [PATCH v3] " Michael Barker
0 siblings, 2 replies; 200+ results
From: Michael Barker @ 2022-01-17 23:23 UTC (permalink / raw)
To: dev; +Cc: Michael Barker, Ray Kinsella
When using clang with -Wall, the use of diagnose_if kicks up a warning,
requiring all dpdk includes to be wrapped with the pragma. This change
isolates the ignore to just the appropriate location and makes it easier
for users to apply -Wall,-Werror.
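For context, this is roughly what consumers had to do around every DPDK
include before this change (illustrative; the header name is just an example):

#pragma clang diagnostic push
#pragma clang diagnostic ignored "-Wgcc-compat"
#include <rte_memcpy.h>
#pragma clang diagnostic pop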
Signed-off-by: Michael Barker <mikeb01@gmail.com>
---
lib/eal/include/rte_compat.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index 2718612cce..9556bbf4d0 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -33,8 +33,11 @@ section(".text.internal")))
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
#define __rte_internal \
+_Pragma("GCC diagnostic push") \
+_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
-section(".text.internal")))
+section(".text.internal"))) \
+_Pragma("GCC diagnostic pop")
#else
--
2.25.1
^ permalink raw reply [relevance 4%]
* [PATCH] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if.
@ 2022-01-17 23:14 4% Michael Barker
2022-01-17 23:23 4% ` [PATCH v2] " Michael Barker
0 siblings, 1 reply; 200+ results
From: Michael Barker @ 2022-01-17 23:14 UTC (permalink / raw)
To: dev; +Cc: Michael Barker, Ray Kinsella
Signed-off-by: Michael Barker <mikeb01@gmail.com>
---
lib/eal/include/rte_compat.h | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/lib/eal/include/rte_compat.h b/lib/eal/include/rte_compat.h
index 2718612cce..9556bbf4d0 100644
--- a/lib/eal/include/rte_compat.h
+++ b/lib/eal/include/rte_compat.h
@@ -33,8 +33,11 @@ section(".text.internal")))
#elif !defined ALLOW_INTERNAL_API && __has_attribute(diagnose_if) /* For clang */
#define __rte_internal \
+_Pragma("GCC diagnostic push") \
+_Pragma("GCC diagnostic ignored \"-Wgcc-compat\"") \
__attribute__((diagnose_if(1, "Symbol is not public ABI", "error"), \
-section(".text.internal")))
+section(".text.internal"))) \
+_Pragma("GCC diagnostic pop")
#else
--
2.25.1
^ permalink raw reply [relevance 4%]
* RE: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
2022-01-14 6:30 3% ` Xia, Chenbo
@ 2022-01-17 5:39 0% ` Hu, Jiayu
2022-01-19 2:18 0% ` Xia, Chenbo
0 siblings, 1 reply; 200+ results
From: Hu, Jiayu @ 2022-01-17 5:39 UTC (permalink / raw)
To: Xia, Chenbo, dev
Cc: maxime.coquelin, i.maximets, Richardson, Bruce, Van Haaren,
Harry, Pai G, Sunil, Mcnamara, John, Ding, Xuan, Jiang, Cheng1,
liangma
Hi Chenbo,
Please see replies inline.
Thanks,
Jiayu
> -----Original Message-----
> From: Xia, Chenbo <chenbo.xia@intel.com>
> > diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> > index 33d023aa39..44073499bc 100644
> > --- a/examples/vhost/main.c
> > +++ b/examples/vhost/main.c
> > @@ -24,8 +24,9 @@
> > #include <rte_ip.h>
> > #include <rte_tcp.h>
> > #include <rte_pause.h>
> > +#include <rte_dmadev.h>
> > +#include <rte_vhost_async.h>
> >
> > -#include "ioat.h"
> > #include "main.h"
> >
> > #ifndef MAX_QUEUES
> > @@ -56,6 +57,14 @@
> > #define RTE_TEST_TX_DESC_DEFAULT 512
> >
> > #define INVALID_PORT_ID 0xFF
> > +#define INVALID_DMA_ID -1
> > +
> > +#define MAX_VHOST_DEVICE 1024
> > +#define DMA_RING_SIZE 4096
> > +
> > +struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
> > +struct rte_vhost_async_dma_info
> dma_config[RTE_DMADEV_DEFAULT_MAX];
> > +static int dma_count;
> >
> > /* mask of enabled ports */
> > static uint32_t enabled_port_mask = 0;
> > @@ -96,8 +105,6 @@ static int builtin_net_driver;
> >
> > static int async_vhost_driver;
> >
> > -static char *dma_type;
> > -
> > /* Specify timeout (in useconds) between retries on RX. */
> > static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;
> > /* Specify the number of retries on RX. */
> > @@ -196,13 +203,134 @@ struct vhost_bufftable
> *vhost_txbuff[RTE_MAX_LCORE *
> > MAX_VHOST_DEVICE];
> > #define MBUF_TABLE_DRAIN_TSC ((rte_get_tsc_hz() + US_PER_S - 1) \
> > / US_PER_S * BURST_TX_DRAIN_US)
> >
> > +static inline bool
> > +is_dma_configured(int16_t dev_id)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < dma_count; i++) {
> > + if (dma_config[i].dev_id == dev_id) {
> > + return true;
> > + }
> > + }
> > + return false;
> > +}
> > +
> > static inline int
> > open_dma(const char *value)
> > {
> > - if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0)
> > - return open_ioat(value);
> > + struct dma_for_vhost *dma_info = dma_bind;
> > + char *input = strndup(value, strlen(value) + 1);
> > + char *addrs = input;
> > + char *ptrs[2];
> > + char *start, *end, *substr;
> > + int64_t vid, vring_id;
> > +
> > + struct rte_dma_info info;
> > + struct rte_dma_conf dev_config = { .nb_vchans = 1 };
> > + struct rte_dma_vchan_conf qconf = {
> > + .direction = RTE_DMA_DIR_MEM_TO_MEM,
> > + .nb_desc = DMA_RING_SIZE
> > + };
> > +
> > + int dev_id;
> > + int ret = 0;
> > + uint16_t i = 0;
> > + char *dma_arg[MAX_VHOST_DEVICE];
> > + int args_nr;
> > +
> > + while (isblank(*addrs))
> > + addrs++;
> > + if (*addrs == '\0') {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + /* process DMA devices within bracket. */
> > + addrs++;
> > + substr = strtok(addrs, ";]");
> > + if (!substr) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + args_nr = rte_strsplit(substr, strlen(substr),
> > + dma_arg, MAX_VHOST_DEVICE, ',');
> > + if (args_nr <= 0) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + while (i < args_nr) {
> > + char *arg_temp = dma_arg[i];
> > + uint8_t sub_nr;
> > +
> > + sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> > + if (sub_nr != 2) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + start = strstr(ptrs[0], "txd");
> > + if (start == NULL) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + start += 3;
> > + vid = strtol(start, &end, 0);
> > + if (end == start) {
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + vring_id = 0 + VIRTIO_RXQ;
>
> No need to introduce vring_id, it's always VIRTIO_RXQ
I will remove it later.
>
> > +
> > + dev_id = rte_dma_get_dev_id_by_name(ptrs[1]);
> > + if (dev_id < 0) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "Fail to find
> DMA %s.\n",
> > ptrs[1]);
> > + ret = -1;
> > + goto out;
> > + } else if (is_dma_configured(dev_id)) {
> > + goto done;
> > + }
> > +
>
> Please call rte_dma_info_get before configure to make sure
> info.max_vchans >=1
Do you suggest using "rte_dma_info_get() and info.max_vchans == 0" to indicate
that the device is not configured, rather than using is_dma_configured()?
>
> > + if (rte_dma_configure(dev_id, &dev_config) != 0) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "Fail to configure
> DMA %d.\n",
> > dev_id);
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + if (rte_dma_vchan_setup(dev_id, 0, &qconf) != 0) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "Fail to set up
> DMA %d.\n",
> > dev_id);
> > + ret = -1;
> > + goto out;
> > + }
> >
> > - return -1;
> > + rte_dma_info_get(dev_id, &info);
> > + if (info.nb_vchans != 1) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "DMA %d has no
> queues.\n",
> > dev_id);
>
> Then the above means the number of vchan is not configured.
>
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + if (rte_dma_start(dev_id) != 0) {
> > + RTE_LOG(ERR, VHOST_CONFIG, "Fail to start
> DMA %u.\n",
> > dev_id);
> > + ret = -1;
> > + goto out;
> > + }
> > +
> > + dma_config[dma_count].dev_id = dev_id;
> > + dma_config[dma_count].max_vchans = 1;
> > + dma_config[dma_count++].max_desc = DMA_RING_SIZE;
> > +
> > +done:
> > + (dma_info + vid)->dmas[vring_id].dev_id = dev_id;
> > + i++;
> > + }
> > +out:
> > + free(input);
> > + return ret;
> > }
> >
> > /*
> > @@ -500,8 +628,6 @@ enum {
> > OPT_CLIENT_NUM,
> > #define OPT_BUILTIN_NET_DRIVER "builtin-net-driver"
> > OPT_BUILTIN_NET_DRIVER_NUM,
> > -#define OPT_DMA_TYPE "dma-type"
> > - OPT_DMA_TYPE_NUM,
> > #define OPT_DMAS "dmas"
> > OPT_DMAS_NUM,
> > };
> > @@ -539,8 +665,6 @@ us_vhost_parse_args(int argc, char **argv)
> > NULL, OPT_CLIENT_NUM},
> > {OPT_BUILTIN_NET_DRIVER, no_argument,
> > NULL, OPT_BUILTIN_NET_DRIVER_NUM},
> > - {OPT_DMA_TYPE, required_argument,
> > - NULL, OPT_DMA_TYPE_NUM},
> > {OPT_DMAS, required_argument,
> > NULL, OPT_DMAS_NUM},
> > {NULL, 0, 0, 0},
> > @@ -661,10 +785,6 @@ us_vhost_parse_args(int argc, char **argv)
> > }
> > break;
> >
> > - case OPT_DMA_TYPE_NUM:
> > - dma_type = optarg;
> > - break;
> > -
> > case OPT_DMAS_NUM:
> > if (open_dma(optarg) == -1) {
> > RTE_LOG(INFO, VHOST_CONFIG,
> > @@ -841,9 +961,10 @@ complete_async_pkts(struct vhost_dev *vdev)
> > {
> > struct rte_mbuf *p_cpl[MAX_PKT_BURST];
> > uint16_t complete_count;
> > + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> >
> > complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> > - VIRTIO_RXQ, p_cpl, MAX_PKT_BURST);
> > + VIRTIO_RXQ, p_cpl, MAX_PKT_BURST, dma_id, 0);
> > if (complete_count) {
> > free_pkts(p_cpl, complete_count);
> > __atomic_sub_fetch(&vdev->pkts_inflight, complete_count,
> > __ATOMIC_SEQ_CST);
> > @@ -883,11 +1004,12 @@ drain_vhost(struct vhost_dev *vdev)
> >
> > if (builtin_net_driver) {
> > ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
> > - } else if (async_vhost_driver) {
> > + } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > uint16_t enqueue_fail = 0;
> > + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> >
> > complete_async_pkts(vdev);
> > - ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m, nr_xmit);
> > + ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m, nr_xmit, dma_id, 0);
> > __atomic_add_fetch(&vdev->pkts_inflight, ret, __ATOMIC_SEQ_CST);
> >
> > enqueue_fail = nr_xmit - ret;
> > @@ -905,7 +1027,7 @@ drain_vhost(struct vhost_dev *vdev)
> > __ATOMIC_SEQ_CST);
> > }
> >
> > - if (!async_vhost_driver)
> > + if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > free_pkts(m, nr_xmit);
> > }
> >
> > @@ -1211,12 +1333,13 @@ drain_eth_rx(struct vhost_dev *vdev)
> > if (builtin_net_driver) {
> > enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
> > pkts, rx_count);
> > - } else if (async_vhost_driver) {
> > + } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> > uint16_t enqueue_fail = 0;
> > + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
> >
> > complete_async_pkts(vdev);
> > enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
> > - VIRTIO_RXQ, pkts, rx_count);
> > + VIRTIO_RXQ, pkts, rx_count, dma_id, 0);
> > __atomic_add_fetch(&vdev->pkts_inflight, enqueue_count,
> > __ATOMIC_SEQ_CST);
> >
> > enqueue_fail = rx_count - enqueue_count;
> > @@ -1235,7 +1358,7 @@ drain_eth_rx(struct vhost_dev *vdev)
> > __ATOMIC_SEQ_CST);
> > }
> >
> > - if (!async_vhost_driver)
> > + if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> > free_pkts(pkts, rx_count);
> > }
> >
> > @@ -1387,18 +1510,20 @@ destroy_device(int vid)
> > "(%d) device has been removed from data core\n",
> > vdev->vid);
> >
> > - if (async_vhost_driver) {
> > + if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> > uint16_t n_pkt = 0;
> > + int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> >
> > while (vdev->pkts_inflight) {
> > n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
> > - m_cpl, vdev->pkts_inflight);
> > + m_cpl, vdev->pkts_inflight, dma_id, 0);
> > free_pkts(m_cpl, n_pkt);
> > __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt,
> > __ATOMIC_SEQ_CST);
> > }
> >
> > rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> > + dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> > }
> >
> > rte_free(vdev);
> > @@ -1468,20 +1593,14 @@ new_device(int vid)
> > "(%d) device has been added to data core %d\n",
> > vid, vdev->coreid);
> >
> > - if (async_vhost_driver) {
> > - struct rte_vhost_async_config config = {0};
> > - struct rte_vhost_async_channel_ops channel_ops;
> > -
> > - if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0) {
> > - channel_ops.transfer_data = ioat_transfer_data_cb;
> > - channel_ops.check_completed_copies =
> > - ioat_check_completed_copies_cb;
> > -
> > - config.features = RTE_VHOST_ASYNC_INORDER;
> > + if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> > + int ret;
> >
> > - return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
> > - config, &channel_ops);
> > + ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> > + if (ret == 0) {
> > + dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = true;
> > }
> > + return ret;
> > }
> >
> > return 0;
> > @@ -1502,14 +1621,15 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
> > if (queue_id != VIRTIO_RXQ)
> > return 0;
> >
> > - if (async_vhost_driver) {
> > + if (dma_bind[vid].dmas[queue_id].async_enabled) {
> > if (!enable) {
> > uint16_t n_pkt = 0;
> > + int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> > struct rte_mbuf *m_cpl[vdev->pkts_inflight];
> >
> > while (vdev->pkts_inflight) {
> > n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id,
> > - m_cpl, vdev->pkts_inflight);
> > + m_cpl, vdev->pkts_inflight, dma_id, 0);
> > free_pkts(m_cpl, n_pkt);
> > __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt, __ATOMIC_SEQ_CST);
> > }
> > @@ -1657,6 +1777,25 @@ create_mbuf_pool(uint16_t nr_port, uint32_t nr_switch_core, uint32_t mbuf_size,
> > rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
> > }
> >
> > +static void
> > +init_dma(void)
> > +{
> > + int i;
> > +
> > + for (i = 0; i < MAX_VHOST_DEVICE; i++) {
> > + int j;
> > +
> > + for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
> > + dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
> > + dma_bind[i].dmas[j].async_enabled = false;
> > + }
> > + }
> > +
> > + for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> > + dma_config[i].dev_id = INVALID_DMA_ID;
> > + }
> > +}
> > +
> > /*
> > * Main function, does initialisation and calls the per-lcore functions.
> > */
> > @@ -1679,6 +1818,9 @@ main(int argc, char *argv[])
> > argc -= ret;
> > argv += ret;
> >
> > + /* initialize dma structures */
> > + init_dma();
> > +
> > /* parse app arguments */
> > ret = us_vhost_parse_args(argc, argv);
> > if (ret < 0)
> > @@ -1754,6 +1896,20 @@ main(int argc, char *argv[])
> > if (client_mode)
> > flags |= RTE_VHOST_USER_CLIENT;
> >
> > + if (async_vhost_driver) {
> > + if (rte_vhost_async_dma_configure(dma_config, dma_count) < 0) {
> > + RTE_LOG(ERR, VHOST_PORT, "Failed to configure DMA in vhost.\n");
> > + for (i = 0; i < dma_count; i++) {
> > + if (dma_config[i].dev_id != INVALID_DMA_ID) {
> > + rte_dma_stop(dma_config[i].dev_id);
> > + dma_config[i].dev_id = INVALID_DMA_ID;
> > + }
> > + }
> > + dma_count = 0;
> > + async_vhost_driver = false;
> > + }
> > + }
> > +
> > /* Register vhost user driver to handle vhost messages. */
> > for (i = 0; i < nb_sockets; i++) {
> > char *file = socket_files + i * PATH_MAX;
> > diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> > index e7b1ac60a6..b4a453e77e 100644
> > --- a/examples/vhost/main.h
> > +++ b/examples/vhost/main.h
> > @@ -8,6 +8,7 @@
> > #include <sys/queue.h>
> >
> > #include <rte_ether.h>
> > +#include <rte_pci.h>
> >
> > /* Macros for printing using RTE_LOG */
> > #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
> > @@ -79,6 +80,16 @@ struct lcore_info {
> > struct vhost_dev_tailq_list vdev_list;
> > };
> >
> > +struct dma_info {
> > + struct rte_pci_addr addr;
> > + int16_t dev_id;
> > + bool async_enabled;
> > +};
> > +
> > +struct dma_for_vhost {
> > + struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> > +};
> > +
> > /* we implement non-extra virtio net features */
> > #define VIRTIO_NET_FEATURES 0
> >
> > diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
> > index 3efd5e6540..87a637f83f 100644
> > --- a/examples/vhost/meson.build
> > +++ b/examples/vhost/meson.build
> > @@ -12,13 +12,9 @@ if not is_linux
> > endif
> >
> > deps += 'vhost'
> > +deps += 'dmadev'
> > allow_experimental_apis = true
> > sources = files(
> > 'main.c',
> > 'virtio_net.c',
> > )
> > -
> > -if dpdk_conf.has('RTE_RAW_IOAT')
> > - deps += 'raw_ioat'
> > - sources += files('ioat.c')
> > -endif
> > diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
> > index cdb37a4814..8107329400 100644
> > --- a/lib/vhost/meson.build
> > +++ b/lib/vhost/meson.build
> > @@ -33,7 +33,8 @@ headers = files(
> > 'rte_vhost_async.h',
> > 'rte_vhost_crypto.h',
> > )
> > +
> > driver_sdk_headers = files(
> > 'vdpa_driver.h',
> > )
> > -deps += ['ethdev', 'cryptodev', 'hash', 'pci']
> > +deps += ['ethdev', 'cryptodev', 'hash', 'pci', 'dmadev']
> > diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> > index a87ea6ba37..23a7a2d8b3 100644
> > --- a/lib/vhost/rte_vhost_async.h
> > +++ b/lib/vhost/rte_vhost_async.h
> > @@ -27,70 +27,12 @@ struct rte_vhost_iov_iter {
> > };
> >
> > /**
> > - * dma transfer status
> > + * DMA device information
> > */
> > -struct rte_vhost_async_status {
> > - /** An array of application specific data for source memory */
> > - uintptr_t *src_opaque_data;
> > - /** An array of application specific data for destination memory */
> > - uintptr_t *dst_opaque_data;
> > -};
> > -
> > -/**
> > - * dma operation callbacks to be implemented by applications
> > - */
> > -struct rte_vhost_async_channel_ops {
> > - /**
> > - * instruct async engines to perform copies for a batch of packets
> > - *
> > - * @param vid
> > - * id of vhost device to perform data copies
> > - * @param queue_id
> > - * queue id to perform data copies
> > - * @param iov_iter
> > - * an array of IOV iterators
> > - * @param opaque_data
> > - * opaque data pair sending to DMA engine
> > - * @param count
> > - * number of elements in the "descs" array
> > - * @return
> > - * number of IOV iterators processed, negative value means error
> > - */
> > - int32_t (*transfer_data)(int vid, uint16_t queue_id,
> > - struct rte_vhost_iov_iter *iov_iter,
> > - struct rte_vhost_async_status *opaque_data,
> > - uint16_t count);
> > - /**
> > - * check copy-completed packets from the async engine
> > - * @param vid
> > - * id of vhost device to check copy completion
> > - * @param queue_id
> > - * queue id to check copy completion
> > - * @param opaque_data
> > - * buffer to receive the opaque data pair from DMA engine
> > - * @param max_packets
> > - * max number of packets could be completed
> > - * @return
> > - * number of async descs completed, negative value means error
> > - */
> > - int32_t (*check_completed_copies)(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_status *opaque_data,
> > - uint16_t max_packets);
> > -};
> > -
> > -/**
> > - * async channel features
> > - */
> > -enum {
> > - RTE_VHOST_ASYNC_INORDER = 1U << 0,
> > -};
> > -
> > -/**
> > - * async channel configuration
> > - */
> > -struct rte_vhost_async_config {
> > - uint32_t features;
> > - uint32_t rsvd[2];
> > +struct rte_vhost_async_dma_info {
> > + int16_t dev_id; /* DMA device ID */
> > + uint16_t max_vchans; /* max number of vchan */
> > + uint16_t max_desc; /* max desc number of vchan */
> > };
> >
> > /**
> > @@ -100,17 +42,11 @@ struct rte_vhost_async_config {
> > * vhost device id async channel to be attached to
> > * @param queue_id
> > * vhost queue id async channel to be attached to
> > - * @param config
> > - * Async channel configuration structure
> > - * @param ops
> > - * Async channel operation callbacks
> > * @return
> > * 0 on success, -1 on failures
> > */
> > __rte_experimental
> > -int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_config config,
> > - struct rte_vhost_async_channel_ops *ops);
> > +int rte_vhost_async_channel_register(int vid, uint16_t queue_id);
> >
> > /**
> > * Unregister an async channel for a vhost queue
> > @@ -136,17 +72,11 @@ int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id);
> > * vhost device id async channel to be attached to
> > * @param queue_id
> > * vhost queue id async channel to be attached to
> > - * @param config
> > - * Async channel configuration
> > - * @param ops
> > - * Async channel operation callbacks
> > * @return
> > * 0 on success, -1 on failures
> > */
> > __rte_experimental
> > -int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_config config,
> > - struct rte_vhost_async_channel_ops *ops);
> > +int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id);
> >
> > /**
> > * Unregister an async channel for a vhost queue without performing any
> > @@ -179,12 +109,17 @@ int rte_vhost_async_channel_unregister_thread_unsafe(int vid,
> > * array of packets to be enqueued
> > * @param count
> > * packets num to be enqueued
> > + * @param dma_id
> > + * the identifier of the DMA device
> > + * @param vchan
> > + * the identifier of virtual DMA channel
> > * @return
> > * num of packets enqueued
> > */
> > __rte_experimental
> > uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> > - struct rte_mbuf **pkts, uint16_t count);
> > + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > + uint16_t vchan);
>
> All dma_id in the API should be uint16_t. Otherwise you need to check if valid.
Yes, you are right. Although dma_id is defined as int16_t and the DMA library checks
that it is valid, vhost doesn't handle DMA failures, so we need to make sure dma_id
is valid before using it. And even if vhost handled DMA errors, the better place to
catch an invalid dma_id would still be before passing it to the DMA library. I will add the check later.
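Something along these lines at the entry of the data path functions (an
illustrative sketch; the exact bound and log wording are assumptions, not part
of the patch):

    if (unlikely(dma_id < 0 || dma_id >= RTE_DMADEV_DEFAULT_MAX)) {
        VHOST_LOG_DATA(ERR, "(%d) invalid dma id %d\n", vid, dma_id);
        return 0;
    }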
>
> >
> > /**
> > * This function checks async completion status for a specific vhost
> > @@ -199,12 +134,17 @@ uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> > * blank array to get return packet pointer
> > * @param count
> > * size of the packet array
> > + * @param dma_id
> > + * the identifier of the DMA device
> > + * @param vchan
> > + * the identifier of virtual DMA channel
> > * @return
> > * num of packets returned
> > */
> > __rte_experimental
> > uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> > - struct rte_mbuf **pkts, uint16_t count);
> > + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > + uint16_t vchan);
> >
> > /**
> > * This function returns the amount of in-flight packets for the vhost
> > @@ -235,11 +175,32 @@ int rte_vhost_async_get_inflight(int vid, uint16_t queue_id);
> > * Blank array to get return packet pointer
> > * @param count
> > * Size of the packet array
> > + * @param dma_id
> > + * the identifier of the DMA device
> > + * @param vchan
> > + * the identifier of virtual DMA channel
> > * @return
> > * Number of packets returned
> > */
> > __rte_experimental
> > uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> > - struct rte_mbuf **pkts, uint16_t count);
> > + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> > + uint16_t vchan);
> > +/**
> > + * The DMA vChannels used in asynchronous data path must be configured
> > + * first. So this function needs to be called before enabling DMA
> > + * acceleration for a vring. If this function fails, the asynchronous data path
> > + * cannot be enabled for any vring afterwards.
> > + *
> > + * @param dmas
> > + * DMA information
> > + * @param count
> > + * Element number of 'dmas'
> > + * @return
> > + * 0 on success, and -1 on failure
> > + */
> > +__rte_experimental
> > +int rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas,
> > + uint16_t count);
>
> I think based on current design, vhost can use every vchan if user app let it.
> So the max_desc and max_vchans can just be got from dmadev APIs? Then
> there's
> no need to introduce the new ABI struct rte_vhost_async_dma_info.
Yes, there is no need to introduce struct rte_vhost_async_dma_info. We can either use
struct rte_dma_info, as suggested by Maxime, or query from the DMA library
via device id. Since DMA device configuration is left to applications, I prefer to
use rte_dma_info directly. What do you think?
>
> And about max_desc: I see in the dmadev lib that you can get a vchan's max_desc,
> but you may use a nb_desc (<= max_desc) to configure the vchan. And IIUC, vhost
> wants to know the nb_desc instead of max_desc?
True, nb_desc is better than max_desc. But the DMA library doesn't provide a function
to query nb_desc for every vchannel, and rte_dma_info cannot be used in
rte_vhost_async_dma_configure() if vhost uses nb_desc. So the only way is
to require users to provide nb_desc for every vchannel, which would introduce
a new struct. Is it really needed?
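For comparison, querying the DMA library from inside vhost would look roughly
like this (a sketch only; note rte_dma_info reports nb_vchans and max_desc, but
not the nb_desc actually used to configure each vchannel, which is exactly the
gap discussed above):

    struct rte_dma_info info;

    if (rte_dma_info_get(dev_id, &info) != 0)
        return -1;
    max_vchans = info.nb_vchans;
    /* info.max_desc is only an upper bound on the configured nb_desc. */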
>
> >
> > #endif /* _RTE_VHOST_ASYNC_H_ */
> > diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> > index a7ef7f1976..1202ba9c1a 100644
> > --- a/lib/vhost/version.map
> > +++ b/lib/vhost/version.map
> > @@ -84,6 +84,9 @@ EXPERIMENTAL {
> >
> > # added in 21.11
> > rte_vhost_get_monitor_addr;
> > +
> > + # added in 22.03
> > + rte_vhost_async_dma_configure;
> > };
> >
> > INTERNAL {
> > diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> > index 13a9bb9dd1..32f37f4851 100644
> > --- a/lib/vhost/vhost.c
> > +++ b/lib/vhost/vhost.c
> > @@ -344,6 +344,7 @@ vhost_free_async_mem(struct vhost_virtqueue *vq)
> > return;
> >
> > rte_free(vq->async->pkts_info);
> > + rte_free(vq->async->pkts_cmpl_flag);
> >
> > rte_free(vq->async->buffers_packed);
> > vq->async->buffers_packed = NULL;
> > @@ -1626,8 +1627,7 @@ rte_vhost_extern_callback_register(int vid,
> > }
> >
> > static __rte_always_inline int
> > -async_channel_register(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_channel_ops *ops)
> > +async_channel_register(int vid, uint16_t queue_id)
> > {
> > struct virtio_net *dev = get_device(vid);
> > struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> > @@ -1656,6 +1656,14 @@ async_channel_register(int vid, uint16_t queue_id,
> > goto out_free_async;
> > }
> >
> > + async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size * sizeof(bool),
> > + RTE_CACHE_LINE_SIZE, node);
> > + if (!async->pkts_cmpl_flag) {
> > + VHOST_LOG_CONFIG(ERR, "failed to allocate async pkts_cmpl_flag (vid %d, qid: %d)\n",
> > + vid, queue_id);
>
> qid: %u
>
> > + goto out_free_async;
> > + }
> > +
> > if (vq_is_packed(dev)) {
> > async->buffers_packed = rte_malloc_socket(NULL,
> > + vq->size * sizeof(struct vring_used_elem_packed),
> > @@ -1676,9 +1684,6 @@ async_channel_register(int vid, uint16_t queue_id,
> > }
> > }
> >
> > - async->ops.check_completed_copies = ops->check_completed_copies;
> > - async->ops.transfer_data = ops->transfer_data;
> > -
> > vq->async = async;
> >
> > return 0;
> > @@ -1691,15 +1696,13 @@ async_channel_register(int vid, uint16_t queue_id,
> > }
> >
> > int
> > -rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_config config,
> > - struct rte_vhost_async_channel_ops *ops)
> > +rte_vhost_async_channel_register(int vid, uint16_t queue_id)
> > {
> > struct vhost_virtqueue *vq;
> > struct virtio_net *dev = get_device(vid);
> > int ret;
> >
> > - if (dev == NULL || ops == NULL)
> > + if (dev == NULL)
> > return -1;
> >
> > if (queue_id >= VHOST_MAX_VRING)
> > @@ -1710,33 +1713,20 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> > if (unlikely(vq == NULL || !dev->async_copy))
> > return -1;
> >
> > - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> > - VHOST_LOG_CONFIG(ERR,
> > - "async copy is not supported on non-inorder mode "
> > - "(vid %d, qid: %d)\n", vid, queue_id);
> > - return -1;
> > - }
> > -
> > - if (unlikely(ops->check_completed_copies == NULL ||
> > - ops->transfer_data == NULL))
> > - return -1;
> > -
> > rte_spinlock_lock(&vq->access_lock);
> > - ret = async_channel_register(vid, queue_id, ops);
> > + ret = async_channel_register(vid, queue_id);
> > rte_spinlock_unlock(&vq->access_lock);
> >
> > return ret;
> > }
> >
> > int
> > -rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id,
> > - struct rte_vhost_async_config config,
> > - struct rte_vhost_async_channel_ops *ops)
> > +rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id)
> > {
> > struct vhost_virtqueue *vq;
> > struct virtio_net *dev = get_device(vid);
> >
> > - if (dev == NULL || ops == NULL)
> > + if (dev == NULL)
> > return -1;
> >
> > if (queue_id >= VHOST_MAX_VRING)
> > @@ -1747,18 +1737,7 @@ rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id,
> > if (unlikely(vq == NULL || !dev->async_copy))
> > return -1;
> >
> > - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> > - VHOST_LOG_CONFIG(ERR,
> > - "async copy is not supported on non-inorder mode "
> > - "(vid %d, qid: %d)\n", vid, queue_id);
> > - return -1;
> > - }
> > -
> > - if (unlikely(ops->check_completed_copies == NULL ||
> > - ops->transfer_data == NULL))
> > - return -1;
> > -
> > - return async_channel_register(vid, queue_id, ops);
> > + return async_channel_register(vid, queue_id);
> > }
> >
> > int
> > @@ -1835,6 +1814,83 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id)
> > return 0;
> > }
> >
> > +static __rte_always_inline void
> > +vhost_free_async_dma_mem(void)
> > +{
> > + uint16_t i;
> > +
> > + for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> > + struct async_dma_info *dma = &dma_copy_track[i];
> > + int16_t j;
> > +
> > + if (dma->max_vchans == 0) {
> > + continue;
> > + }
> > +
> > + for (j = 0; j < dma->max_vchans; j++) {
> > + rte_free(dma->vchans[j].metadata);
> > + }
> > + rte_free(dma->vchans);
> > + dma->vchans = NULL;
> > + dma->max_vchans = 0;
> > + }
> > +}
> > +
> > +int
> > +rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas, uint16_t count)
> > +{
> > + uint16_t i;
> > +
> > + if (!dmas) {
> > + VHOST_LOG_CONFIG(ERR, "Invalid DMA configuration
> parameter.\n");
> > + return -1;
> > + }
> > +
> > + for (i = 0; i < count; i++) {
> > + struct async_dma_vchan_info *vchans;
> > + int16_t dev_id;
> > + uint16_t max_vchans;
> > + uint16_t max_desc;
> > + uint16_t j;
> > +
> > + dev_id = dmas[i].dev_id;
> > + max_vchans = dmas[i].max_vchans;
> > + max_desc = dmas[i].max_desc;
> > +
> > + if (!rte_is_power_of_2(max_desc)) {
> > + max_desc = rte_align32pow2(max_desc);
> > + }
>
> I think when aligning to a power of 2, the result should not exceed max_desc?
The aligned max_desc is used to allocate the context tracking array. We only need
to guarantee that the array size for every vchannel is >= max_desc, so it's
OK for the array to be larger than max_desc.
> And based on the above comment, if this max_desc is the nb_desc configured for
> the vchan, you should just make sure that nb_desc is a power of 2.
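To make the intent concrete (a sketch using the names from the patch): the
power-of-2 padding exists so the ring index can be masked instead of taking a
modulo, at the cost of a somewhat larger tracking array:

    uint16_t ring_size = rte_align32pow2(max_desc); /* e.g. 3000 -> 4096 */
    uint16_t ring_mask = ring_size - 1;
    uint16_t slot = idx & ring_mask; /* same as idx % ring_size, but cheaper */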
>
> > +
> > + vchans = rte_zmalloc(NULL, sizeof(struct async_dma_vchan_info) * max_vchans,
> > + RTE_CACHE_LINE_SIZE);
> > + if (vchans == NULL) {
> > + VHOST_LOG_CONFIG(ERR, "Failed to allocate vchans
> for dma-
> > %d."
> > + " Cannot enable async data-path.\n",
> dev_id);
> > + vhost_free_async_dma_mem();
> > + return -1;
> > + }
> > +
> > + for (j = 0; j < max_vchans; j++) {
> > + vchans[j].metadata = rte_zmalloc(NULL, sizeof(bool *) * max_desc,
> > + RTE_CACHE_LINE_SIZE);
> > + if (!vchans[j].metadata) {
> > + VHOST_LOG_CONFIG(ERR, "Failed to allocate
> metadata for
> > "
> > + "dma-%d vchan-%u\n",
> dev_id, j);
> > + vhost_free_async_dma_mem();
> > + return -1;
> > + }
> > +
> > + vchans[j].ring_size = max_desc;
> > + vchans[j].ring_mask = max_desc - 1;
> > + }
> > +
> > + dma_copy_track[dev_id].vchans = vchans;
> > + dma_copy_track[dev_id].max_vchans = max_vchans;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > int
> > rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
> > {
> > diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> > index 7085e0885c..d9bda34e11 100644
> > --- a/lib/vhost/vhost.h
> > +++ b/lib/vhost/vhost.h
> > @@ -19,6 +19,7 @@
> > #include <rte_ether.h>
> > #include <rte_rwlock.h>
> > #include <rte_malloc.h>
> > +#include <rte_dmadev.h>
> >
> > #include "rte_vhost.h"
> > #include "rte_vdpa.h"
> > @@ -50,6 +51,7 @@
> >
> > #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST)
> > #define VHOST_MAX_ASYNC_VEC 2048
> > +#define VHOST_ASYNC_DMA_BATCHING_SIZE 32
> >
> > #define PACKED_DESC_ENQUEUE_USED_FLAG(w) \
> > ((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED |
> VRING_DESC_F_WRITE) : \
> > @@ -119,6 +121,41 @@ struct vring_used_elem_packed {
> > uint32_t count;
> > };
> >
> > +struct async_dma_vchan_info {
> > + /* circular array to track copy metadata */
> > + bool **metadata;
>
> If the metadata will only be flags, maybe just use some
> name called XXX_flag
Sure, I will rename it.
>
> > +
> > + /* max elements in 'metadata' */
> > + uint16_t ring_size;
> > + /* ring index mask for 'metadata' */
> > + uint16_t ring_mask;
> > +
> > + /* batching copies before a DMA doorbell */
> > + uint16_t nr_batching;
> > +
> > + /**
> > + * DMA virtual channel lock. Although it is possible to bind DMA
> > + * virtual channels to data plane threads, the vhost control plane
> > + * thread could call data plane functions too, thus causing
> > + * DMA device contention.
> > + *
> > + * For example, in the VM exit case, the vhost control plane thread needs
> > + * to clear in-flight packets before disabling the vring, but another
> > + * data plane thread could be enqueuing packets to the same vring
> > + * with the same DMA virtual channel. But dmadev PMD functions
> > + * are lock-free, so the control plane and data plane threads
> > + * could operate the same DMA virtual channel at the same time.
> > + */
> > + rte_spinlock_t dma_lock;
> > +};
> > +
> > +struct async_dma_info {
> > + uint16_t max_vchans;
> > + struct async_dma_vchan_info *vchans;
> > +};
> > +
> > +extern struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> > +
> > /**
> > * inflight async packet information
> > */
> > @@ -129,9 +166,6 @@ struct async_inflight_info {
> > };
> >
> > struct vhost_async {
> > - /* operation callbacks for DMA */
> > - struct rte_vhost_async_channel_ops ops;
> > -
> > struct rte_vhost_iov_iter iov_iter[VHOST_MAX_ASYNC_IT];
> > struct rte_vhost_iovec iovec[VHOST_MAX_ASYNC_VEC];
> > uint16_t iter_idx;
> > @@ -139,6 +173,19 @@ struct vhost_async {
> >
> > /* data transfer status */
> > struct async_inflight_info *pkts_info;
> > + /**
> > + * packet reorder array. "true" indicates that DMA
> > + * device completes all copies for the packet.
> > + *
> > + * Note that this array could be written by multiple
> > + * threads at the same time. For example, two threads
> > + * enqueue packets to the same virtqueue with their
> > + * own DMA devices. However, since offloading is done
> > + * on a per-packet basis, each packet's flag will only be
> > + * written by one thread. And single byte write is
> > + * atomic, so no lock is needed.
> > + */
> > + bool *pkts_cmpl_flag;
> > uint16_t pkts_idx;
> > uint16_t pkts_inflight_n;
> > union {
> > diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> > index b3d954aab4..9f81fc9733 100644
> > --- a/lib/vhost/virtio_net.c
> > +++ b/lib/vhost/virtio_net.c
> > @@ -11,6 +11,7 @@
> > #include <rte_net.h>
> > #include <rte_ether.h>
> > #include <rte_ip.h>
> > +#include <rte_dmadev.h>
> > #include <rte_vhost.h>
> > #include <rte_tcp.h>
> > #include <rte_udp.h>
> > @@ -25,6 +26,9 @@
> >
> > #define MAX_BATCH_LEN 256
> >
> > +/* DMA device copy operation tracking array. */
> > +struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> > +
> > static __rte_always_inline bool
> > rxvq_is_mergeable(struct virtio_net *dev)
> > {
> > @@ -43,6 +47,108 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
> > return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
> > }
> >
> > +static __rte_always_inline uint16_t
> > +vhost_async_dma_transfer(struct vhost_virtqueue *vq, int16_t dma_id,
> > + uint16_t vchan, uint16_t head_idx,
> > + struct rte_vhost_iov_iter *pkts, uint16_t nr_pkts)
> > +{
> > + struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan];
> > + uint16_t ring_mask = dma_info->ring_mask;
> > + uint16_t pkt_idx;
> > +
> > + rte_spinlock_lock(&dma_info->dma_lock);
> > +
> > + for (pkt_idx = 0; pkt_idx < nr_pkts; pkt_idx++) {
> > + struct rte_vhost_iovec *iov = pkts[pkt_idx].iov;
> > + int copy_idx = 0;
> > + uint16_t nr_segs = pkts[pkt_idx].nr_segs;
> > + uint16_t i;
> > +
> > + if (rte_dma_burst_capacity(dma_id, vchan) < nr_segs) {
> > + goto out;
> > + }
> > +
> > + for (i = 0; i < nr_segs; i++) {
> > + /**
> > + * We have checked the available space before submitting copies to the
> > + * DMA vChannel, so we don't handle errors here.
> > + */
> > + copy_idx = rte_dma_copy(dma_id, vchan, (rte_iova_t)iov[i].src_addr,
> > + (rte_iova_t)iov[i].dst_addr, iov[i].len,
> > + RTE_DMA_OP_FLAG_LLC);
>
> This assumes rte_dma_copy will always succeed if there's available space.
>
> But the API doxygen says:
>
> * @return
> * - 0..UINT16_MAX: index of enqueued job.
> * - -ENOSPC: if no space left to enqueue.
> * - other values < 0 on failure.
>
> So it should consider other vendor-specific errors.
Error handling is not free here. Specifically, a SW fallback is one way to handle failed
copy operations, but it requires vhost to track the VA of every source and destination
buffer for every copy. The DMA library uses IOVA, so vhost only prepares IOVAs for the
copies of every packet in the async data path. In the IOVA-as-PA case, the prepared IOVAs
cannot be used for a SW fallback, which means vhost would need to store the VA for every
copy of every packet too, even if no error ever happens or IOVA is VA.
I am thinking that the only usable DMA engines in vhost today are CBDMA and DSA; is it
worth the cost for "future HW"? If other vendors' HW appears in the future, is it OK to
add the support later? Or is there any way to get the VA from an IOVA?
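For the record, fuller handling of the documented return codes would look roughly
like this (sketch only - it still has no SW fallback, which is the cost described
above):

    copy_idx = rte_dma_copy(dma_id, vchan, (rte_iova_t)iov[i].src_addr,
            (rte_iova_t)iov[i].dst_addr, iov[i].len, RTE_DMA_OP_FLAG_LLC);
    if (unlikely(copy_idx < 0)) {
        if (copy_idx != -ENOSPC)
            VHOST_LOG_DATA(ERR, "DMA copy failed on dev %d\n", dma_id);
        break; /* stop submitting this packet's segments */
    }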
Thanks,
Jiayu
>
> Thanks,
> Chenbo
>
>
^ permalink raw reply [relevance 0%]
* RE: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
@ 2022-01-14 6:30 3% ` Xia, Chenbo
2022-01-17 5:39 0% ` Hu, Jiayu
0 siblings, 1 reply; 200+ results
From: Xia, Chenbo @ 2022-01-14 6:30 UTC (permalink / raw)
To: Hu, Jiayu, dev
Cc: maxime.coquelin, i.maximets, Richardson, Bruce, Van Haaren,
Harry, Pai G, Sunil, Mcnamara, John, Ding, Xuan, Jiang, Cheng1,
liangma
Hi Jiayu,
This is the first round of review; I'll spend time on the OVS patches later and look back.
> -----Original Message-----
> From: Hu, Jiayu <jiayu.hu@intel.com>
> Sent: Friday, December 31, 2021 5:55 AM
> To: dev@dpdk.org
> Cc: maxime.coquelin@redhat.com; i.maximets@ovn.org; Xia, Chenbo
> <chenbo.xia@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>; Van
> Haaren, Harry <harry.van.haaren@intel.com>; Pai G, Sunil
> <sunil.pai.g@intel.com>; Mcnamara, John <john.mcnamara@intel.com>; Ding, Xuan
> <xuan.ding@intel.com>; Jiang, Cheng1 <cheng1.jiang@intel.com>;
> liangma@liangbit.com; Hu, Jiayu <jiayu.hu@intel.com>
> Subject: [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath
>
> Since dmadev is introduced in 21.11, to avoid the overhead of vhost DMA
> abstraction layer and simplify application logics, this patch integrates
> dmadev in asynchronous data path.
>
> Signed-off-by: Jiayu Hu <jiayu.hu@intel.com>
> Signed-off-by: Sunil Pai G <sunil.pai.g@intel.com>
> ---
> doc/guides/prog_guide/vhost_lib.rst | 70 ++++-----
> examples/vhost/Makefile | 2 +-
> examples/vhost/ioat.c | 218 --------------------------
> examples/vhost/ioat.h | 63 --------
> examples/vhost/main.c | 230 +++++++++++++++++++++++-----
> examples/vhost/main.h | 11 ++
> examples/vhost/meson.build | 6 +-
> lib/vhost/meson.build | 3 +-
> lib/vhost/rte_vhost_async.h | 121 +++++----------
> lib/vhost/version.map | 3 +
> lib/vhost/vhost.c | 130 +++++++++++-----
> lib/vhost/vhost.h | 53 ++++++-
> lib/vhost/virtio_net.c | 206 +++++++++++++++++++------
> 13 files changed, 587 insertions(+), 529 deletions(-)
> delete mode 100644 examples/vhost/ioat.c
> delete mode 100644 examples/vhost/ioat.h
>
> diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst
> index 76f5d303c9..bdce7cbf02 100644
> --- a/doc/guides/prog_guide/vhost_lib.rst
> +++ b/doc/guides/prog_guide/vhost_lib.rst
> @@ -218,38 +218,12 @@ The following is an overview of some key Vhost API functions:
>
> Enable or disable zero copy feature of the vhost crypto backend.
>
> -* ``rte_vhost_async_channel_register(vid, queue_id, config, ops)``
> +* ``rte_vhost_async_channel_register(vid, queue_id)``
>
> Register an async copy device channel for a vhost queue after vring
Since dmadev is here, let's just use 'DMA device' instead of 'copy device'
> - is enabled. Following device ``config`` must be specified together
> - with the registration:
> + is enabled.
>
> - * ``features``
> -
> - This field is used to specify async copy device features.
> -
> - ``RTE_VHOST_ASYNC_INORDER`` represents the async copy device can
> - guarantee the order of copy completion is the same as the order
> - of copy submission.
> -
> - Currently, only ``RTE_VHOST_ASYNC_INORDER`` capable device is
> - supported by vhost.
> -
> - Applications must provide following ``ops`` callbacks for vhost lib to
> - work with the async copy devices:
> -
> - * ``transfer_data(vid, queue_id, descs, opaque_data, count)``
> -
> - vhost invokes this function to submit copy data to the async devices.
> - For non-async_inorder capable devices, ``opaque_data`` could be used
> - for identifying the completed packets.
> -
> - * ``check_completed_copies(vid, queue_id, opaque_data, max_packets)``
> -
> - vhost invokes this function to get the copy data completed by async
> - devices.
> -
> -* ``rte_vhost_async_channel_register_thread_unsafe(vid, queue_id, config, ops)``
> +* ``rte_vhost_async_channel_register_thread_unsafe(vid, queue_id)``
>
> Register an async copy device channel for a vhost queue without
> performing any locking.
> @@ -277,18 +251,13 @@ The following is an overview of some key Vhost API functions:
> This function is only safe to call in vhost callback functions
> (i.e., struct rte_vhost_device_ops).
>
> -* ``rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count, comp_pkts, comp_count)``
> +* ``rte_vhost_submit_enqueue_burst(vid, queue_id, pkts, count, dma_id, dma_vchan)``
>
> Submit an enqueue request to transmit ``count`` packets from host to guest
> - by async data path. Successfully enqueued packets can be transfer completed
> - or being occupied by DMA engines; transfer completed packets are returned in
> - ``comp_pkts``, but others are not guaranteed to finish, when this API
> - call returns.
> + by async data path. Applications must not free the packets submitted for
> + enqueue until the packets are completed.
>
> - Applications must not free the packets submitted for enqueue until the
> - packets are completed.
> -
> -* ``rte_vhost_poll_enqueue_completed(vid, queue_id, pkts, count)``
> +* ``rte_vhost_poll_enqueue_completed(vid, queue_id, pkts, count, dma_id, dma_vchan)``
>
> Poll enqueue completion status from async data path. Completed packets
> are returned to applications through ``pkts``.
> @@ -298,7 +267,7 @@ The following is an overview of some key Vhost API functions:
> This function returns the amount of in-flight packets for the vhost
> queue using async acceleration.
>
> -* ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count)``
> +* ``rte_vhost_clear_queue_thread_unsafe(vid, queue_id, **pkts, count, dma_id, dma_vchan)``
>
> Clear inflight packets which are submitted to DMA engine in vhost async data
> path. Completed packets are returned to applications through ``pkts``.
> @@ -442,3 +411,26 @@ Finally, a set of device ops is defined for device specific operations:
> * ``get_notify_area``
>
> Called to get the notify area info of the queue.
> +
> +Vhost asynchronous data path
> +----------------------------
> +
> +Vhost asynchronous data path leverages DMA devices to offload memory
> +copies from the CPU and it is implemented in an asynchronous way. It
> +enables applications, like OVS, to save CPU cycles and hide memory copy
> +overhead, thus achieving higher throughput.
> +
> +Vhost doesn't manage DMA devices and applications, like OVS, need to
> +manage and configure DMA devices. Applications need to tell vhost what
> +DMA devices to use in every data path function call. This design enables
> +the flexibility for applications to dynamically use DMA channels in
> +different function modules, not limited in vhost.
> +
> +In addition, vhost supports M:N mapping between vrings and DMA virtual
> +channels. Specifically, one vring can use multiple different DMA channels
> +and one DMA channel can be shared by multiple vrings at the same time.
> +The reason of enabling one vring to use multiple DMA channels is that
> +it's possible that more than one dataplane threads enqueue packets to
> +the same vring with their own DMA virtual channels. Besides, the number
> +of DMA devices is limited. For the purpose of scaling, it's necessary to
> +support sharing DMA channels among vrings.
> diff --git a/examples/vhost/Makefile b/examples/vhost/Makefile
> index 587ea2ab47..975a5dfe40 100644
> --- a/examples/vhost/Makefile
> +++ b/examples/vhost/Makefile
> @@ -5,7 +5,7 @@
> APP = vhost-switch
>
> # all source are stored in SRCS-y
> -SRCS-y := main.c virtio_net.c ioat.c
> +SRCS-y := main.c virtio_net.c
>
> PKGCONF ?= pkg-config
>
> diff --git a/examples/vhost/ioat.c b/examples/vhost/ioat.c
> deleted file mode 100644
> index 9aeeb12fd9..0000000000
> --- a/examples/vhost/ioat.c
> +++ /dev/null
> @@ -1,218 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2010-2020 Intel Corporation
> - */
> -
> -#include <sys/uio.h>
> -#ifdef RTE_RAW_IOAT
> -#include <rte_rawdev.h>
> -#include <rte_ioat_rawdev.h>
> -
> -#include "ioat.h"
> -#include "main.h"
> -
> -struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
> -
> -struct packet_tracker {
> - unsigned short size_track[MAX_ENQUEUED_SIZE];
> - unsigned short next_read;
> - unsigned short next_write;
> - unsigned short last_remain;
> - unsigned short ioat_space;
> -};
> -
> -struct packet_tracker cb_tracker[MAX_VHOST_DEVICE];
> -
> -int
> -open_ioat(const char *value)
> -{
> - struct dma_for_vhost *dma_info = dma_bind;
> - char *input = strndup(value, strlen(value) + 1);
> - char *addrs = input;
> - char *ptrs[2];
> - char *start, *end, *substr;
> - int64_t vid, vring_id;
> - struct rte_ioat_rawdev_config config;
> - struct rte_rawdev_info info = { .dev_private = &config };
> - char name[32];
> - int dev_id;
> - int ret = 0;
> - uint16_t i = 0;
> - char *dma_arg[MAX_VHOST_DEVICE];
> - int args_nr;
> -
> - while (isblank(*addrs))
> - addrs++;
> - if (*addrs == '\0') {
> - ret = -1;
> - goto out;
> - }
> -
> - /* process DMA devices within bracket. */
> - addrs++;
> - substr = strtok(addrs, ";]");
> - if (!substr) {
> - ret = -1;
> - goto out;
> - }
> - args_nr = rte_strsplit(substr, strlen(substr),
> - dma_arg, MAX_VHOST_DEVICE, ',');
> - if (args_nr <= 0) {
> - ret = -1;
> - goto out;
> - }
> - while (i < args_nr) {
> - char *arg_temp = dma_arg[i];
> - uint8_t sub_nr;
> - sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> - if (sub_nr != 2) {
> - ret = -1;
> - goto out;
> - }
> -
> - start = strstr(ptrs[0], "txd");
> - if (start == NULL) {
> - ret = -1;
> - goto out;
> - }
> -
> - start += 3;
> - vid = strtol(start, &end, 0);
> - if (end == start) {
> - ret = -1;
> - goto out;
> - }
> -
> - vring_id = 0 + VIRTIO_RXQ;
> - if (rte_pci_addr_parse(ptrs[1],
> - &(dma_info + vid)->dmas[vring_id].addr) < 0) {
> - ret = -1;
> - goto out;
> - }
> -
> - rte_pci_device_name(&(dma_info + vid)->dmas[vring_id].addr,
> - name, sizeof(name));
> - dev_id = rte_rawdev_get_dev_id(name);
> - if (dev_id == (uint16_t)(-ENODEV) ||
> - dev_id == (uint16_t)(-EINVAL)) {
> - ret = -1;
> - goto out;
> - }
> -
> - if (rte_rawdev_info_get(dev_id, &info, sizeof(config)) < 0 ||
> - strstr(info.driver_name, "ioat") == NULL) {
> - ret = -1;
> - goto out;
> - }
> -
> - (dma_info + vid)->dmas[vring_id].dev_id = dev_id;
> - (dma_info + vid)->dmas[vring_id].is_valid = true;
> - config.ring_size = IOAT_RING_SIZE;
> - config.hdls_disable = true;
> - if (rte_rawdev_configure(dev_id, &info, sizeof(config)) < 0) {
> - ret = -1;
> - goto out;
> - }
> - rte_rawdev_start(dev_id);
> - cb_tracker[dev_id].ioat_space = IOAT_RING_SIZE - 1;
> - dma_info->nr++;
> - i++;
> - }
> -out:
> - free(input);
> - return ret;
> -}
> -
> -int32_t
> -ioat_transfer_data_cb(int vid, uint16_t queue_id,
> - struct rte_vhost_iov_iter *iov_iter,
> - struct rte_vhost_async_status *opaque_data, uint16_t count)
> -{
> - uint32_t i_iter;
> - uint16_t dev_id = dma_bind[vid].dmas[queue_id * 2 + VIRTIO_RXQ].dev_id;
> - struct rte_vhost_iov_iter *iter = NULL;
> - unsigned long i_seg;
> - unsigned short mask = MAX_ENQUEUED_SIZE - 1;
> - unsigned short write = cb_tracker[dev_id].next_write;
> -
> - if (!opaque_data) {
> - for (i_iter = 0; i_iter < count; i_iter++) {
> - iter = iov_iter + i_iter;
> - i_seg = 0;
> - if (cb_tracker[dev_id].ioat_space < iter->nr_segs)
> - break;
> - while (i_seg < iter->nr_segs) {
> - rte_ioat_enqueue_copy(dev_id,
> - (uintptr_t)(iter->iov[i_seg].src_addr),
> - (uintptr_t)(iter->iov[i_seg].dst_addr),
> - iter->iov[i_seg].len,
> - 0,
> - 0);
> - i_seg++;
> - }
> - write &= mask;
> - cb_tracker[dev_id].size_track[write] = iter->nr_segs;
> - cb_tracker[dev_id].ioat_space -= iter->nr_segs;
> - write++;
> - }
> - } else {
> - /* Opaque data is not supported */
> - return -1;
> - }
> - /* ring the doorbell */
> - rte_ioat_perform_ops(dev_id);
> - cb_tracker[dev_id].next_write = write;
> - return i_iter;
> -}
> -
> -int32_t
> -ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
> - struct rte_vhost_async_status *opaque_data,
> - uint16_t max_packets)
> -{
> - if (!opaque_data) {
> - uintptr_t dump[255];
> - int n_seg;
> - unsigned short read, write;
> - unsigned short nb_packet = 0;
> - unsigned short mask = MAX_ENQUEUED_SIZE - 1;
> - unsigned short i;
> -
> - uint16_t dev_id = dma_bind[vid].dmas[queue_id * 2
> - + VIRTIO_RXQ].dev_id;
> - n_seg = rte_ioat_completed_ops(dev_id, 255, NULL, NULL, dump, dump);
> - if (n_seg < 0) {
> - RTE_LOG(ERR,
> - VHOST_DATA,
> - "fail to poll completed buf on IOAT device %u",
> - dev_id);
> - return 0;
> - }
> - if (n_seg == 0)
> - return 0;
> -
> - cb_tracker[dev_id].ioat_space += n_seg;
> - n_seg += cb_tracker[dev_id].last_remain;
> -
> - read = cb_tracker[dev_id].next_read;
> - write = cb_tracker[dev_id].next_write;
> - for (i = 0; i < max_packets; i++) {
> - read &= mask;
> - if (read == write)
> - break;
> - if (n_seg >= cb_tracker[dev_id].size_track[read]) {
> - n_seg -= cb_tracker[dev_id].size_track[read];
> - read++;
> - nb_packet++;
> - } else {
> - break;
> - }
> - }
> - cb_tracker[dev_id].next_read = read;
> - cb_tracker[dev_id].last_remain = n_seg;
> - return nb_packet;
> - }
> - /* Opaque data is not supported */
> - return -1;
> -}
> -
> -#endif /* RTE_RAW_IOAT */
> diff --git a/examples/vhost/ioat.h b/examples/vhost/ioat.h
> deleted file mode 100644
> index d9bf717e8d..0000000000
> --- a/examples/vhost/ioat.h
> +++ /dev/null
> @@ -1,63 +0,0 @@
> -/* SPDX-License-Identifier: BSD-3-Clause
> - * Copyright(c) 2010-2020 Intel Corporation
> - */
> -
> -#ifndef _IOAT_H_
> -#define _IOAT_H_
> -
> -#include <rte_vhost.h>
> -#include <rte_pci.h>
> -#include <rte_vhost_async.h>
> -
> -#define MAX_VHOST_DEVICE 1024
> -#define IOAT_RING_SIZE 4096
> -#define MAX_ENQUEUED_SIZE 4096
> -
> -struct dma_info {
> - struct rte_pci_addr addr;
> - uint16_t dev_id;
> - bool is_valid;
> -};
> -
> -struct dma_for_vhost {
> - struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> - uint16_t nr;
> -};
> -
> -#ifdef RTE_RAW_IOAT
> -int open_ioat(const char *value);
> -
> -int32_t
> -ioat_transfer_data_cb(int vid, uint16_t queue_id,
> - struct rte_vhost_iov_iter *iov_iter,
> - struct rte_vhost_async_status *opaque_data, uint16_t count);
> -
> -int32_t
> -ioat_check_completed_copies_cb(int vid, uint16_t queue_id,
> - struct rte_vhost_async_status *opaque_data,
> - uint16_t max_packets);
> -#else
> -static int open_ioat(const char *value __rte_unused)
> -{
> - return -1;
> -}
> -
> -static int32_t
> -ioat_transfer_data_cb(int vid __rte_unused, uint16_t queue_id __rte_unused,
> - struct rte_vhost_iov_iter *iov_iter __rte_unused,
> - struct rte_vhost_async_status *opaque_data __rte_unused,
> - uint16_t count __rte_unused)
> -{
> - return -1;
> -}
> -
> -static int32_t
> -ioat_check_completed_copies_cb(int vid __rte_unused,
> - uint16_t queue_id __rte_unused,
> - struct rte_vhost_async_status *opaque_data __rte_unused,
> - uint16_t max_packets __rte_unused)
> -{
> - return -1;
> -}
> -#endif
> -#endif /* _IOAT_H_ */
> diff --git a/examples/vhost/main.c b/examples/vhost/main.c
> index 33d023aa39..44073499bc 100644
> --- a/examples/vhost/main.c
> +++ b/examples/vhost/main.c
> @@ -24,8 +24,9 @@
> #include <rte_ip.h>
> #include <rte_tcp.h>
> #include <rte_pause.h>
> +#include <rte_dmadev.h>
> +#include <rte_vhost_async.h>
>
> -#include "ioat.h"
> #include "main.h"
>
> #ifndef MAX_QUEUES
> @@ -56,6 +57,14 @@
> #define RTE_TEST_TX_DESC_DEFAULT 512
>
> #define INVALID_PORT_ID 0xFF
> +#define INVALID_DMA_ID -1
> +
> +#define MAX_VHOST_DEVICE 1024
> +#define DMA_RING_SIZE 4096
> +
> +struct dma_for_vhost dma_bind[MAX_VHOST_DEVICE];
> +struct rte_vhost_async_dma_info dma_config[RTE_DMADEV_DEFAULT_MAX];
> +static int dma_count;
>
> /* mask of enabled ports */
> static uint32_t enabled_port_mask = 0;
> @@ -96,8 +105,6 @@ static int builtin_net_driver;
>
> static int async_vhost_driver;
>
> -static char *dma_type;
> -
> /* Specify timeout (in useconds) between retries on RX. */
> static uint32_t burst_rx_delay_time = BURST_RX_WAIT_US;
> /* Specify the number of retries on RX. */
> @@ -196,13 +203,134 @@ struct vhost_bufftable *vhost_txbuff[RTE_MAX_LCORE * MAX_VHOST_DEVICE];
> #define MBUF_TABLE_DRAIN_TSC ((rte_get_tsc_hz() + US_PER_S - 1) \
> / US_PER_S * BURST_TX_DRAIN_US)
>
> +static inline bool
> +is_dma_configured(int16_t dev_id)
> +{
> + int i;
> +
> + for (i = 0; i < dma_count; i++) {
> + if (dma_config[i].dev_id == dev_id) {
> + return true;
> + }
> + }
> + return false;
> +}
> +
> static inline int
> open_dma(const char *value)
> {
> - if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0)
> - return open_ioat(value);
> + struct dma_for_vhost *dma_info = dma_bind;
> + char *input = strndup(value, strlen(value) + 1);
> + char *addrs = input;
> + char *ptrs[2];
> + char *start, *end, *substr;
> + int64_t vid, vring_id;
> +
> + struct rte_dma_info info;
> + struct rte_dma_conf dev_config = { .nb_vchans = 1 };
> + struct rte_dma_vchan_conf qconf = {
> + .direction = RTE_DMA_DIR_MEM_TO_MEM,
> + .nb_desc = DMA_RING_SIZE
> + };
> +
> + int dev_id;
> + int ret = 0;
> + uint16_t i = 0;
> + char *dma_arg[MAX_VHOST_DEVICE];
> + int args_nr;
> +
> + while (isblank(*addrs))
> + addrs++;
> + if (*addrs == '\0') {
> + ret = -1;
> + goto out;
> + }
> +
> + /* process DMA devices within bracket. */
> + addrs++;
> + substr = strtok(addrs, ";]");
> + if (!substr) {
> + ret = -1;
> + goto out;
> + }
> +
> + args_nr = rte_strsplit(substr, strlen(substr),
> + dma_arg, MAX_VHOST_DEVICE, ',');
> + if (args_nr <= 0) {
> + ret = -1;
> + goto out;
> + }
> +
> + while (i < args_nr) {
> + char *arg_temp = dma_arg[i];
> + uint8_t sub_nr;
> +
> + sub_nr = rte_strsplit(arg_temp, strlen(arg_temp), ptrs, 2, '@');
> + if (sub_nr != 2) {
> + ret = -1;
> + goto out;
> + }
> +
> + start = strstr(ptrs[0], "txd");
> + if (start == NULL) {
> + ret = -1;
> + goto out;
> + }
> +
> + start += 3;
> + vid = strtol(start, &end, 0);
> + if (end == start) {
> + ret = -1;
> + goto out;
> + }
> +
> + vring_id = 0 + VIRTIO_RXQ;
No need to introduce vring_id, it's always VIRTIO_RXQ
> +
> + dev_id = rte_dma_get_dev_id_by_name(ptrs[1]);
> + if (dev_id < 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Fail to find DMA %s.\n",
> ptrs[1]);
> + ret = -1;
> + goto out;
> + } else if (is_dma_configured(dev_id)) {
> + goto done;
> + }
> +
Please call rte_dma_info_get before configure to make sure info.max_vchans >=1
> + if (rte_dma_configure(dev_id, &dev_config) != 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Fail to configure DMA %d.\n",
> dev_id);
> + ret = -1;
> + goto out;
> + }
> +
> + if (rte_dma_vchan_setup(dev_id, 0, &qconf) != 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Fail to set up DMA %d.\n",
> dev_id);
> + ret = -1;
> + goto out;
> + }
>
> - return -1;
> + rte_dma_info_get(dev_id, &info);
> + if (info.nb_vchans != 1) {
> + RTE_LOG(ERR, VHOST_CONFIG, "DMA %d has no queues.\n",
> dev_id);
Then the above means the number of vchan is not configured.
> + ret = -1;
> + goto out;
> + }
> +
> + if (rte_dma_start(dev_id) != 0) {
> + RTE_LOG(ERR, VHOST_CONFIG, "Fail to start DMA %u.\n",
> dev_id);
> + ret = -1;
> + goto out;
> + }
> +
> + dma_config[dma_count].dev_id = dev_id;
> + dma_config[dma_count].max_vchans = 1;
> + dma_config[dma_count++].max_desc = DMA_RING_SIZE;
> +
> +done:
> + (dma_info + vid)->dmas[vring_id].dev_id = dev_id;
> + i++;
> + }
> +out:
> + free(input);
> + return ret;
> }
>
> /*
> @@ -500,8 +628,6 @@ enum {
> OPT_CLIENT_NUM,
> #define OPT_BUILTIN_NET_DRIVER "builtin-net-driver"
> OPT_BUILTIN_NET_DRIVER_NUM,
> -#define OPT_DMA_TYPE "dma-type"
> - OPT_DMA_TYPE_NUM,
> #define OPT_DMAS "dmas"
> OPT_DMAS_NUM,
> };
> @@ -539,8 +665,6 @@ us_vhost_parse_args(int argc, char **argv)
> NULL, OPT_CLIENT_NUM},
> {OPT_BUILTIN_NET_DRIVER, no_argument,
> NULL, OPT_BUILTIN_NET_DRIVER_NUM},
> - {OPT_DMA_TYPE, required_argument,
> - NULL, OPT_DMA_TYPE_NUM},
> {OPT_DMAS, required_argument,
> NULL, OPT_DMAS_NUM},
> {NULL, 0, 0, 0},
> @@ -661,10 +785,6 @@ us_vhost_parse_args(int argc, char **argv)
> }
> break;
>
> - case OPT_DMA_TYPE_NUM:
> - dma_type = optarg;
> - break;
> -
> case OPT_DMAS_NUM:
> if (open_dma(optarg) == -1) {
> RTE_LOG(INFO, VHOST_CONFIG,
> @@ -841,9 +961,10 @@ complete_async_pkts(struct vhost_dev *vdev)
> {
> struct rte_mbuf *p_cpl[MAX_PKT_BURST];
> uint16_t complete_count;
> + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
>
> complete_count = rte_vhost_poll_enqueue_completed(vdev->vid,
> - VIRTIO_RXQ, p_cpl, MAX_PKT_BURST);
> + VIRTIO_RXQ, p_cpl, MAX_PKT_BURST, dma_id, 0);
> if (complete_count) {
> free_pkts(p_cpl, complete_count);
> __atomic_sub_fetch(&vdev->pkts_inflight, complete_count,
> __ATOMIC_SEQ_CST);
> @@ -883,11 +1004,12 @@ drain_vhost(struct vhost_dev *vdev)
>
> if (builtin_net_driver) {
> ret = vs_enqueue_pkts(vdev, VIRTIO_RXQ, m, nr_xmit);
> - } else if (async_vhost_driver) {
> + } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> uint16_t enqueue_fail = 0;
> + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
>
> complete_async_pkts(vdev);
> - ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m,
> nr_xmit);
> + ret = rte_vhost_submit_enqueue_burst(vdev->vid, VIRTIO_RXQ, m,
> nr_xmit, dma_id, 0);
> __atomic_add_fetch(&vdev->pkts_inflight, ret, __ATOMIC_SEQ_CST);
>
> enqueue_fail = nr_xmit - ret;
> @@ -905,7 +1027,7 @@ drain_vhost(struct vhost_dev *vdev)
> __ATOMIC_SEQ_CST);
> }
>
> - if (!async_vhost_driver)
> + if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> free_pkts(m, nr_xmit);
> }
>
> @@ -1211,12 +1333,13 @@ drain_eth_rx(struct vhost_dev *vdev)
> if (builtin_net_driver) {
> enqueue_count = vs_enqueue_pkts(vdev, VIRTIO_RXQ,
> pkts, rx_count);
> - } else if (async_vhost_driver) {
> + } else if (dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled) {
> uint16_t enqueue_fail = 0;
> + int16_t dma_id = dma_bind[vdev->vid].dmas[VIRTIO_RXQ].dev_id;
>
> complete_async_pkts(vdev);
> enqueue_count = rte_vhost_submit_enqueue_burst(vdev->vid,
> - VIRTIO_RXQ, pkts, rx_count);
> + VIRTIO_RXQ, pkts, rx_count, dma_id, 0);
> __atomic_add_fetch(&vdev->pkts_inflight, enqueue_count,
> __ATOMIC_SEQ_CST);
>
> enqueue_fail = rx_count - enqueue_count;
> @@ -1235,7 +1358,7 @@ drain_eth_rx(struct vhost_dev *vdev)
> __ATOMIC_SEQ_CST);
> }
>
> - if (!async_vhost_driver)
> + if (!dma_bind[vdev->vid].dmas[VIRTIO_RXQ].async_enabled)
> free_pkts(pkts, rx_count);
> }
>
> @@ -1387,18 +1510,20 @@ destroy_device(int vid)
> "(%d) device has been removed from data core\n",
> vdev->vid);
>
> - if (async_vhost_driver) {
> + if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
> uint16_t n_pkt = 0;
> + int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> struct rte_mbuf *m_cpl[vdev->pkts_inflight];
>
> while (vdev->pkts_inflight) {
> n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, VIRTIO_RXQ,
> - m_cpl, vdev->pkts_inflight);
> + m_cpl, vdev->pkts_inflight, dma_id, 0);
> free_pkts(m_cpl, n_pkt);
> __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt,
> __ATOMIC_SEQ_CST);
> }
>
> rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
> + dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
> }
>
> rte_free(vdev);
> @@ -1468,20 +1593,14 @@ new_device(int vid)
> "(%d) device has been added to data core %d\n",
> vid, vdev->coreid);
>
> - if (async_vhost_driver) {
> - struct rte_vhost_async_config config = {0};
> - struct rte_vhost_async_channel_ops channel_ops;
> -
> - if (dma_type != NULL && strncmp(dma_type, "ioat", 4) == 0) {
> - channel_ops.transfer_data = ioat_transfer_data_cb;
> - channel_ops.check_completed_copies =
> - ioat_check_completed_copies_cb;
> -
> - config.features = RTE_VHOST_ASYNC_INORDER;
> + if (dma_bind[vid].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
> + int ret;
>
> - return rte_vhost_async_channel_register(vid, VIRTIO_RXQ,
> - config, &channel_ops);
> + ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
> + if (ret == 0) {
> + dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = true;
> }
> + return ret;
> }
>
> return 0;
> @@ -1502,14 +1621,15 @@ vring_state_changed(int vid, uint16_t queue_id, int enable)
> if (queue_id != VIRTIO_RXQ)
> return 0;
>
> - if (async_vhost_driver) {
> + if (dma_bind[vid].dmas[queue_id].async_enabled) {
> if (!enable) {
> uint16_t n_pkt = 0;
> + int16_t dma_id = dma_bind[vid].dmas[VIRTIO_RXQ].dev_id;
> struct rte_mbuf *m_cpl[vdev->pkts_inflight];
>
> while (vdev->pkts_inflight) {
> n_pkt = rte_vhost_clear_queue_thread_unsafe(vid, queue_id,
> - m_cpl, vdev->pkts_inflight);
> + m_cpl, vdev->pkts_inflight, dma_id, 0);
> free_pkts(m_cpl, n_pkt);
> __atomic_sub_fetch(&vdev->pkts_inflight, n_pkt, __ATOMIC_SEQ_CST);
> }
> @@ -1657,6 +1777,25 @@ create_mbuf_pool(uint16_t nr_port, uint32_t nr_switch_core, uint32_t mbuf_size,
> rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
> }
>
> +static void
> +init_dma(void)
> +{
> + int i;
> +
> + for (i = 0; i < MAX_VHOST_DEVICE; i++) {
> + int j;
> +
> + for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
> + dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
> + dma_bind[i].dmas[j].async_enabled = false;
> + }
> + }
> +
> + for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> + dma_config[i].dev_id = INVALID_DMA_ID;
> + }
> +}
> +
> /*
> * Main function, does initialisation and calls the per-lcore functions.
> */
> @@ -1679,6 +1818,9 @@ main(int argc, char *argv[])
> argc -= ret;
> argv += ret;
>
> + /* initialize dma structures */
> + init_dma();
> +
> /* parse app arguments */
> ret = us_vhost_parse_args(argc, argv);
> if (ret < 0)
> @@ -1754,6 +1896,20 @@ main(int argc, char *argv[])
> if (client_mode)
> flags |= RTE_VHOST_USER_CLIENT;
>
> + if (async_vhost_driver) {
> + if (rte_vhost_async_dma_configure(dma_config, dma_count) < 0) {
> + RTE_LOG(ERR, VHOST_PORT, "Failed to configure DMA in
> vhost.\n");
> + for (i = 0; i < dma_count; i++) {
> + if (dma_config[i].dev_id != INVALID_DMA_ID) {
> + rte_dma_stop(dma_config[i].dev_id);
> + dma_config[i].dev_id = INVALID_DMA_ID;
> + }
> + }
> + dma_count = 0;
> + async_vhost_driver = false;
> + }
> + }
> +
> /* Register vhost user driver to handle vhost messages. */
> for (i = 0; i < nb_sockets; i++) {
> char *file = socket_files + i * PATH_MAX;
> diff --git a/examples/vhost/main.h b/examples/vhost/main.h
> index e7b1ac60a6..b4a453e77e 100644
> --- a/examples/vhost/main.h
> +++ b/examples/vhost/main.h
> @@ -8,6 +8,7 @@
> #include <sys/queue.h>
>
> #include <rte_ether.h>
> +#include <rte_pci.h>
>
> /* Macros for printing using RTE_LOG */
> #define RTE_LOGTYPE_VHOST_CONFIG RTE_LOGTYPE_USER1
> @@ -79,6 +80,16 @@ struct lcore_info {
> struct vhost_dev_tailq_list vdev_list;
> };
>
> +struct dma_info {
> + struct rte_pci_addr addr;
> + int16_t dev_id;
> + bool async_enabled;
> +};
> +
> +struct dma_for_vhost {
> + struct dma_info dmas[RTE_MAX_QUEUES_PER_PORT * 2];
> +};
> +
> /* we implement non-extra virtio net features */
> #define VIRTIO_NET_FEATURES 0
>
> diff --git a/examples/vhost/meson.build b/examples/vhost/meson.build
> index 3efd5e6540..87a637f83f 100644
> --- a/examples/vhost/meson.build
> +++ b/examples/vhost/meson.build
> @@ -12,13 +12,9 @@ if not is_linux
> endif
>
> deps += 'vhost'
> +deps += 'dmadev'
> allow_experimental_apis = true
> sources = files(
> 'main.c',
> 'virtio_net.c',
> )
> -
> -if dpdk_conf.has('RTE_RAW_IOAT')
> - deps += 'raw_ioat'
> - sources += files('ioat.c')
> -endif
> diff --git a/lib/vhost/meson.build b/lib/vhost/meson.build
> index cdb37a4814..8107329400 100644
> --- a/lib/vhost/meson.build
> +++ b/lib/vhost/meson.build
> @@ -33,7 +33,8 @@ headers = files(
> 'rte_vhost_async.h',
> 'rte_vhost_crypto.h',
> )
> +
> driver_sdk_headers = files(
> 'vdpa_driver.h',
> )
> -deps += ['ethdev', 'cryptodev', 'hash', 'pci']
> +deps += ['ethdev', 'cryptodev', 'hash', 'pci', 'dmadev']
> diff --git a/lib/vhost/rte_vhost_async.h b/lib/vhost/rte_vhost_async.h
> index a87ea6ba37..23a7a2d8b3 100644
> --- a/lib/vhost/rte_vhost_async.h
> +++ b/lib/vhost/rte_vhost_async.h
> @@ -27,70 +27,12 @@ struct rte_vhost_iov_iter {
> };
>
> /**
> - * dma transfer status
> + * DMA device information
> */
> -struct rte_vhost_async_status {
> - /** An array of application specific data for source memory */
> - uintptr_t *src_opaque_data;
> - /** An array of application specific data for destination memory */
> - uintptr_t *dst_opaque_data;
> -};
> -
> -/**
> - * dma operation callbacks to be implemented by applications
> - */
> -struct rte_vhost_async_channel_ops {
> - /**
> - * instruct async engines to perform copies for a batch of packets
> - *
> - * @param vid
> - * id of vhost device to perform data copies
> - * @param queue_id
> - * queue id to perform data copies
> - * @param iov_iter
> - * an array of IOV iterators
> - * @param opaque_data
> - * opaque data pair sending to DMA engine
> - * @param count
> - * number of elements in the "descs" array
> - * @return
> - * number of IOV iterators processed, negative value means error
> - */
> - int32_t (*transfer_data)(int vid, uint16_t queue_id,
> - struct rte_vhost_iov_iter *iov_iter,
> - struct rte_vhost_async_status *opaque_data,
> - uint16_t count);
> - /**
> - * check copy-completed packets from the async engine
> - * @param vid
> - * id of vhost device to check copy completion
> - * @param queue_id
> - * queue id to check copy completion
> - * @param opaque_data
> - * buffer to receive the opaque data pair from DMA engine
> - * @param max_packets
> - * max number of packets could be completed
> - * @return
> - * number of async descs completed, negative value means error
> - */
> - int32_t (*check_completed_copies)(int vid, uint16_t queue_id,
> - struct rte_vhost_async_status *opaque_data,
> - uint16_t max_packets);
> -};
> -
> -/**
> - * async channel features
> - */
> -enum {
> - RTE_VHOST_ASYNC_INORDER = 1U << 0,
> -};
> -
> -/**
> - * async channel configuration
> - */
> -struct rte_vhost_async_config {
> - uint32_t features;
> - uint32_t rsvd[2];
> +struct rte_vhost_async_dma_info {
> + int16_t dev_id; /* DMA device ID */
> + uint16_t max_vchans; /* max number of vchan */
> + uint16_t max_desc; /* max desc number of vchan */
> };
>
> /**
> @@ -100,17 +42,11 @@ struct rte_vhost_async_config {
> * vhost device id async channel to be attached to
> * @param queue_id
> * vhost queue id async channel to be attached to
> - * @param config
> - * Async channel configuration structure
> - * @param ops
> - * Async channel operation callbacks
> * @return
> * 0 on success, -1 on failures
> */
> __rte_experimental
> -int rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> - struct rte_vhost_async_config config,
> - struct rte_vhost_async_channel_ops *ops);
> +int rte_vhost_async_channel_register(int vid, uint16_t queue_id);
>
> /**
> * Unregister an async channel for a vhost queue
> @@ -136,17 +72,11 @@ int rte_vhost_async_channel_unregister(int vid, uint16_t queue_id);
> * vhost device id async channel to be attached to
> * @param queue_id
> * vhost queue id async channel to be attached to
> - * @param config
> - * Async channel configuration
> - * @param ops
> - * Async channel operation callbacks
> * @return
> * 0 on success, -1 on failures
> */
> __rte_experimental
> -int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id,
> - struct rte_vhost_async_config config,
> - struct rte_vhost_async_channel_ops *ops);
> +int rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id);
>
> /**
> * Unregister an async channel for a vhost queue without performing any
> @@ -179,12 +109,17 @@ int rte_vhost_async_channel_unregister_thread_unsafe(int vid,
> * array of packets to be enqueued
> * @param count
> * packets num to be enqueued
> + * @param dma_id
> + * the identifier of the DMA device
> + * @param vchan
> + * the identifier of virtual DMA channel
> * @return
> * num of packets enqueued
> */
> __rte_experimental
> uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count);
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan);
All dma_id parameters in the API should be uint16_t; otherwise you need
to check that the value passed in is valid.
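As a rough sketch, assuming the int16_t signature is kept, the check
could look like this (vhost_dma_id_is_valid is a hypothetical helper;
rte_dma_is_valid() is the existing dmadev lookup):

    #include <stdbool.h>
    #include <rte_dmadev.h>

    /* Reject negative IDs up front, then let dmadev confirm that the
     * device actually exists. */
    static inline bool
    vhost_dma_id_is_valid(int16_t dma_id)
    {
        return dma_id >= 0 && rte_dma_is_valid(dma_id);
    }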
>
> /**
> * This function checks async completion status for a specific vhost
> @@ -199,12 +134,17 @@ uint16_t rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> * blank array to get return packet pointer
> * @param count
> * size of the packet array
> + * @param dma_id
> + * the identifier of the DMA device
> + * @param vchan
> + * the identifier of virtual DMA channel
> * @return
> * num of packets returned
> */
> __rte_experimental
> uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count);
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan);
>
> /**
> * This function returns the amount of in-flight packets for the vhost
> @@ -235,11 +175,32 @@ int rte_vhost_async_get_inflight(int vid, uint16_t queue_id);
> * Blank array to get return packet pointer
> * @param count
> * Size of the packet array
> + * @param dma_id
> + * the identifier of the DMA device
> + * @param vchan
> + * the identifier of virtual DMA channel
> * @return
> * Number of packets returned
> */
> __rte_experimental
> uint16_t rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count);
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan);
> +/**
> + * The DMA vChannels used in asynchronous data path must be configured
> + * first. So this function needs to be called before enabling DMA
> + * acceleration for vring. If this function fails, asynchronous data path
> + * cannot be enabled for any vring further.
> + *
> + * @param dmas
> + * DMA information
> + * @param count
> + * Element number of 'dmas'
> + * @return
> + * 0 on success, and -1 on failure
> + */
> +__rte_experimental
> +int rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas,
> + uint16_t count);
I think, based on the current design, vhost can use every vchan if the
user app lets it. So max_desc and max_vchans can just be obtained from
the dmadev APIs? Then there is no need to introduce the new ABI struct
rte_vhost_async_dma_info.
And about max_desc: looking at the dmadev lib, you can get a vchan's
max_desc, but you may use an nb_desc (<= max_desc) to configure the
vchan. And IIUC, vhost wants to know nb_desc instead of max_desc?
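A minimal sketch of that alternative, using the existing dmadev
rte_dma_info_get() API (query_dma_limits is a hypothetical helper; note
that the per-vchan nb_desc mentioned above is configured by the app and,
IIUC, is not queryable through rte_dma_info):

    #include <rte_dmadev.h>

    static int
    query_dma_limits(int16_t dma_id, uint16_t *max_vchans, uint16_t *max_desc)
    {
        struct rte_dma_info info;

        if (rte_dma_info_get(dma_id, &info) != 0)
            return -1;

        *max_vchans = info.max_vchans;
        *max_desc = info.max_desc; /* upper bound, not the configured nb_desc */
        return 0;
    }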
>
> #endif /* _RTE_VHOST_ASYNC_H_ */
> diff --git a/lib/vhost/version.map b/lib/vhost/version.map
> index a7ef7f1976..1202ba9c1a 100644
> --- a/lib/vhost/version.map
> +++ b/lib/vhost/version.map
> @@ -84,6 +84,9 @@ EXPERIMENTAL {
>
> # added in 21.11
> rte_vhost_get_monitor_addr;
> +
> + # added in 22.03
> + rte_vhost_async_dma_configure;
> };
>
> INTERNAL {
> diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
> index 13a9bb9dd1..32f37f4851 100644
> --- a/lib/vhost/vhost.c
> +++ b/lib/vhost/vhost.c
> @@ -344,6 +344,7 @@ vhost_free_async_mem(struct vhost_virtqueue *vq)
> return;
>
> rte_free(vq->async->pkts_info);
> + rte_free(vq->async->pkts_cmpl_flag);
>
> rte_free(vq->async->buffers_packed);
> vq->async->buffers_packed = NULL;
> @@ -1626,8 +1627,7 @@ rte_vhost_extern_callback_register(int vid,
> }
>
> static __rte_always_inline int
> -async_channel_register(int vid, uint16_t queue_id,
> - struct rte_vhost_async_channel_ops *ops)
> +async_channel_register(int vid, uint16_t queue_id)
> {
> struct virtio_net *dev = get_device(vid);
> struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> @@ -1656,6 +1656,14 @@ async_channel_register(int vid, uint16_t queue_id,
> goto out_free_async;
> }
>
> + async->pkts_cmpl_flag = rte_zmalloc_socket(NULL, vq->size * sizeof(bool),
> + RTE_CACHE_LINE_SIZE, node);
> + if (!async->pkts_cmpl_flag) {
> + VHOST_LOG_CONFIG(ERR, "failed to allocate async pkts_cmpl_flag (vid %d, qid: %d)\n",
> + vid, queue_id);
qid should be printed with %u, since queue_id is unsigned.
> + goto out_free_async;
> + }
> +
> if (vq_is_packed(dev)) {
> async->buffers_packed = rte_malloc_socket(NULL,
> vq->size * sizeof(struct vring_used_elem_packed),
> @@ -1676,9 +1684,6 @@ async_channel_register(int vid, uint16_t queue_id,
> }
> }
>
> - async->ops.check_completed_copies = ops->check_completed_copies;
> - async->ops.transfer_data = ops->transfer_data;
> -
> vq->async = async;
>
> return 0;
> @@ -1691,15 +1696,13 @@ async_channel_register(int vid, uint16_t queue_id,
> }
>
> int
> -rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> - struct rte_vhost_async_config config,
> - struct rte_vhost_async_channel_ops *ops)
> +rte_vhost_async_channel_register(int vid, uint16_t queue_id)
> {
> struct vhost_virtqueue *vq;
> struct virtio_net *dev = get_device(vid);
> int ret;
>
> - if (dev == NULL || ops == NULL)
> + if (dev == NULL)
> return -1;
>
> if (queue_id >= VHOST_MAX_VRING)
> @@ -1710,33 +1713,20 @@ rte_vhost_async_channel_register(int vid, uint16_t queue_id,
> if (unlikely(vq == NULL || !dev->async_copy))
> return -1;
>
> - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> - VHOST_LOG_CONFIG(ERR,
> - "async copy is not supported on non-inorder mode "
> - "(vid %d, qid: %d)\n", vid, queue_id);
> - return -1;
> - }
> -
> - if (unlikely(ops->check_completed_copies == NULL ||
> - ops->transfer_data == NULL))
> - return -1;
> -
> rte_spinlock_lock(&vq->access_lock);
> - ret = async_channel_register(vid, queue_id, ops);
> + ret = async_channel_register(vid, queue_id);
> rte_spinlock_unlock(&vq->access_lock);
>
> return ret;
> }
>
> int
> -rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id,
> - struct rte_vhost_async_config config,
> - struct rte_vhost_async_channel_ops *ops)
> +rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id)
> {
> struct vhost_virtqueue *vq;
> struct virtio_net *dev = get_device(vid);
>
> - if (dev == NULL || ops == NULL)
> + if (dev == NULL)
> return -1;
>
> if (queue_id >= VHOST_MAX_VRING)
> @@ -1747,18 +1737,7 @@ rte_vhost_async_channel_register_thread_unsafe(int vid, uint16_t queue_id,
> if (unlikely(vq == NULL || !dev->async_copy))
> return -1;
>
> - if (unlikely(!(config.features & RTE_VHOST_ASYNC_INORDER))) {
> - VHOST_LOG_CONFIG(ERR,
> - "async copy is not supported on non-inorder mode "
> - "(vid %d, qid: %d)\n", vid, queue_id);
> - return -1;
> - }
> -
> - if (unlikely(ops->check_completed_copies == NULL ||
> - ops->transfer_data == NULL))
> - return -1;
> -
> - return async_channel_register(vid, queue_id, ops);
> + return async_channel_register(vid, queue_id);
> }
>
> int
> @@ -1835,6 +1814,83 @@ rte_vhost_async_channel_unregister_thread_unsafe(int vid, uint16_t queue_id)
> return 0;
> }
>
> +static __rte_always_inline void
> +vhost_free_async_dma_mem(void)
> +{
> + uint16_t i;
> +
> + for (i = 0; i < RTE_DMADEV_DEFAULT_MAX; i++) {
> + struct async_dma_info *dma = &dma_copy_track[i];
> + int16_t j;
> +
> + if (dma->max_vchans == 0) {
> + continue;
> + }
> +
> + for (j = 0; j < dma->max_vchans; j++) {
> + rte_free(dma->vchans[j].metadata);
> + }
> + rte_free(dma->vchans);
> + dma->vchans = NULL;
> + dma->max_vchans = 0;
> + }
> +}
> +
> +int
> +rte_vhost_async_dma_configure(struct rte_vhost_async_dma_info *dmas, uint16_t count)
> +{
> + uint16_t i;
> +
> + if (!dmas) {
> + VHOST_LOG_CONFIG(ERR, "Invalid DMA configuration parameter.\n");
> + return -1;
> + }
> +
> + for (i = 0; i < count; i++) {
> + struct async_dma_vchan_info *vchans;
> + int16_t dev_id;
> + uint16_t max_vchans;
> + uint16_t max_desc;
> + uint16_t j;
> +
> + dev_id = dmas[i].dev_id;
> + max_vchans = dmas[i].max_vchans;
> + max_desc = dmas[i].max_desc;
> +
> + if (!rte_is_power_of_2(max_desc)) {
> + max_desc = rte_align32pow2(max_desc);
> + }
I think when aligning to a power of 2, the result should not exceed
max_desc? And based on the above comment, if this max_desc is the
nb_desc configured for the vchan, you should just make sure that nb_desc
is a power of 2.
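A sketch of the rounding this implies, assuming max_desc is a hard upper
bound (ring_size_from_max_desc is a hypothetical helper;
rte_align32prevpow2() is the round-down counterpart of rte_align32pow2()
in rte_common.h):

    #include <rte_common.h>

    static inline uint16_t
    ring_size_from_max_desc(uint16_t max_desc)
    {
        /* Round down, not up, so the ring never exceeds max_desc. */
        if (!rte_is_power_of_2(max_desc))
            return (uint16_t)rte_align32prevpow2(max_desc);
        return max_desc;
    }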
> +
> + vchans = rte_zmalloc(NULL, sizeof(struct async_dma_vchan_info) * max_vchans,
> + RTE_CACHE_LINE_SIZE);
> + if (vchans == NULL) {
> + VHOST_LOG_CONFIG(ERR, "Failed to allocate vchans for dma-
> %d."
> + " Cannot enable async data-path.\n", dev_id);
> + vhost_free_async_dma_mem();
> + return -1;
> + }
> +
> + for (j = 0; j < max_vchans; j++) {
> + vchans[j].metadata = rte_zmalloc(NULL, sizeof(bool *) * max_desc,
> + RTE_CACHE_LINE_SIZE);
> + if (!vchans[j].metadata) {
> + VHOST_LOG_CONFIG(ERR, "Failed to allocate metadata for
> "
> + "dma-%d vchan-%u\n", dev_id, j);
> + vhost_free_async_dma_mem();
> + return -1;
> + }
> +
> + vchans[j].ring_size = max_desc;
> + vchans[j].ring_mask = max_desc - 1;
> + }
> +
> + dma_copy_track[dev_id].vchans = vchans;
> + dma_copy_track[dev_id].max_vchans = max_vchans;
> + }
> +
> + return 0;
> +}
> +
> int
> rte_vhost_async_get_inflight(int vid, uint16_t queue_id)
> {
> diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
> index 7085e0885c..d9bda34e11 100644
> --- a/lib/vhost/vhost.h
> +++ b/lib/vhost/vhost.h
> @@ -19,6 +19,7 @@
> #include <rte_ether.h>
> #include <rte_rwlock.h>
> #include <rte_malloc.h>
> +#include <rte_dmadev.h>
>
> #include "rte_vhost.h"
> #include "rte_vdpa.h"
> @@ -50,6 +51,7 @@
>
> #define VHOST_MAX_ASYNC_IT (MAX_PKT_BURST)
> #define VHOST_MAX_ASYNC_VEC 2048
> +#define VHOST_ASYNC_DMA_BATCHING_SIZE 32
>
> #define PACKED_DESC_ENQUEUE_USED_FLAG(w) \
> ((w) ? (VRING_DESC_F_AVAIL | VRING_DESC_F_USED | VRING_DESC_F_WRITE) : \
> @@ -119,6 +121,41 @@ struct vring_used_elem_packed {
> uint32_t count;
> };
>
> +struct async_dma_vchan_info {
> + /* circular array to track copy metadata */
> + bool **metadata;
If the metadata will only ever hold flags, maybe use a name like
XXX_flag instead.
> +
> + /* max elements in 'metadata' */
> + uint16_t ring_size;
> + /* ring index mask for 'metadata' */
> + uint16_t ring_mask;
> +
> + /* batching copies before a DMA doorbell */
> + uint16_t nr_batching;
> +
> + /**
> + * DMA virtual channel lock. Although it is able to bind DMA
> + * virtual channels to data plane threads, vhost control plane
> + * thread could call data plane functions too, thus causing
> + * DMA device contention.
> + *
> + * For example, in VM exit case, vhost control plane thread needs
> + * to clear in-flight packets before disabling the vring, but there could
> + * be another data plane thread enqueuing packets to the same
> + * vring with the same DMA virtual channel. But dmadev PMD functions
> + * are lock-free, so the control plane and data plane threads
> + * could operate the same DMA virtual channel at the same time.
> + */
> + rte_spinlock_t dma_lock;
> +};
> +
> +struct async_dma_info {
> + uint16_t max_vchans;
> + struct async_dma_vchan_info *vchans;
> +};
> +
> +extern struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> +
> /**
> * inflight async packet information
> */
> @@ -129,9 +166,6 @@ struct async_inflight_info {
> };
>
> struct vhost_async {
> - /* operation callbacks for DMA */
> - struct rte_vhost_async_channel_ops ops;
> -
> struct rte_vhost_iov_iter iov_iter[VHOST_MAX_ASYNC_IT];
> struct rte_vhost_iovec iovec[VHOST_MAX_ASYNC_VEC];
> uint16_t iter_idx;
> @@ -139,6 +173,19 @@ struct vhost_async {
>
> /* data transfer status */
> struct async_inflight_info *pkts_info;
> + /**
> + * packet reorder array. "true" indicates that DMA
> + * device completes all copies for the packet.
> + *
> + * Note that this array could be written by multiple
> + * threads at the same time. For example, two threads
> + * enqueue packets to the same virtqueue with their
> + * own DMA devices. However, since offloading is done
> + * on a per-packet basis, each packet flag will only be
> + * written by one thread. And single byte write is
> + * atomic, so no lock is needed.
> + */
> + bool *pkts_cmpl_flag;
> uint16_t pkts_idx;
> uint16_t pkts_inflight_n;
> union {
> diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
> index b3d954aab4..9f81fc9733 100644
> --- a/lib/vhost/virtio_net.c
> +++ b/lib/vhost/virtio_net.c
> @@ -11,6 +11,7 @@
> #include <rte_net.h>
> #include <rte_ether.h>
> #include <rte_ip.h>
> +#include <rte_dmadev.h>
> #include <rte_vhost.h>
> #include <rte_tcp.h>
> #include <rte_udp.h>
> @@ -25,6 +26,9 @@
>
> #define MAX_BATCH_LEN 256
>
> +/* DMA device copy operation tracking array. */
> +struct async_dma_info dma_copy_track[RTE_DMADEV_DEFAULT_MAX];
> +
> static __rte_always_inline bool
> rxvq_is_mergeable(struct virtio_net *dev)
> {
> @@ -43,6 +47,108 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
> return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
> }
>
> +static __rte_always_inline uint16_t
> +vhost_async_dma_transfer(struct vhost_virtqueue *vq, int16_t dma_id,
> + uint16_t vchan, uint16_t head_idx,
> + struct rte_vhost_iov_iter *pkts, uint16_t nr_pkts)
> +{
> + struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan];
> + uint16_t ring_mask = dma_info->ring_mask;
> + uint16_t pkt_idx;
> +
> + rte_spinlock_lock(&dma_info->dma_lock);
> +
> + for (pkt_idx = 0; pkt_idx < nr_pkts; pkt_idx++) {
> + struct rte_vhost_iovec *iov = pkts[pkt_idx].iov;
> + int copy_idx = 0;
> + uint16_t nr_segs = pkts[pkt_idx].nr_segs;
> + uint16_t i;
> +
> + if (rte_dma_burst_capacity(dma_id, vchan) < nr_segs) {
> + goto out;
> + }
> +
> + for (i = 0; i < nr_segs; i++) {
> + /**
> + * We have checked the available space before submitting copies to DMA
> + * vChannel, so we don't handle error here.
> + */
> + copy_idx = rte_dma_copy(dma_id, vchan, (rte_iova_t)iov[i].src_addr,
> + (rte_iova_t)iov[i].dst_addr, iov[i].len,
> + RTE_DMA_OP_FLAG_LLC);
This assumes rte_dma_copy will always succeed if there's available space.
But the API doxygen says:
* @return
* - 0..UINT16_MAX: index of enqueued job.
* - -ENOSPC: if no space left to enqueue.
* - other values < 0 on failure.
So it should consider other vendor-specific errors.
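A rough sketch of handling both cases inside the segment loop above
(illustrative only; dma_id, vchan, iov, i, copy_idx and the out label
all come from the quoted function):

    int ret;

    ret = rte_dma_copy(dma_id, vchan, (rte_iova_t)iov[i].src_addr,
            (rte_iova_t)iov[i].dst_addr, iov[i].len, RTE_DMA_OP_FLAG_LLC);
    if (unlikely(ret < 0)) {
        /* -ENOSPC: the ring filled despite the capacity check, so stop
         * submitting and retry later. Any other negative value is a
         * vendor-specific failure, and the packet is not transferred. */
        goto out;
    }
    copy_idx = ret;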
Thanks,
Chenbo
> +
> + /**
> + * Only store packet completion flag address in the last copy's
> + * slot, and other slots are set to NULL.
> + */
> + if (unlikely(i == (nr_segs - 1))) {
> + dma_info->metadata[copy_idx & ring_mask] =
> + &vq->async->pkts_cmpl_flag[head_idx % vq->size];
> + }
> + }
> +
> + dma_info->nr_batching += nr_segs;
> + if (unlikely(dma_info->nr_batching >= VHOST_ASYNC_DMA_BATCHING_SIZE)) {
> + rte_dma_submit(dma_id, vchan);
> + dma_info->nr_batching = 0;
> + }
> +
> + head_idx++;
> + }
> +
> +out:
> + if (dma_info->nr_batching > 0) {
> + rte_dma_submit(dma_id, vchan);
> + dma_info->nr_batching = 0;
> + }
> + rte_spinlock_unlock(&dma_info->dma_lock);
> +
> + return pkt_idx;
> +}
> +
> +static __rte_always_inline uint16_t
> +vhost_async_dma_check_completed(int16_t dma_id, uint16_t vchan, uint16_t max_pkts)
> +{
> + struct async_dma_vchan_info *dma_info = &dma_copy_track[dma_id].vchans[vchan];
> + uint16_t ring_mask = dma_info->ring_mask;
> + uint16_t last_idx = 0;
> + uint16_t nr_copies;
> + uint16_t copy_idx;
> + uint16_t i;
> +
> + rte_spinlock_lock(&dma_info->dma_lock);
> +
> + /**
> + * Since all memory is pinned and addresses should be valid,
> + * we don't check errors.
> + */
> + nr_copies = rte_dma_completed(dma_id, vchan, max_pkts, &last_idx, NULL);
> + if (nr_copies == 0) {
> + goto out;
> + }
> +
> + copy_idx = last_idx - nr_copies + 1;
> + for (i = 0; i < nr_copies; i++) {
> + bool *flag;
> +
> + flag = dma_info->metadata[copy_idx & ring_mask];
> + if (flag) {
> + /**
> + * Mark the packet flag as received. The flag
> + * could belong to another virtqueue but write
> + * is atomic.
> + */
> + *flag = true;
> + dma_info->metadata[copy_idx & ring_mask] = NULL;
> + }
> + copy_idx++;
> + }
> +
> +out:
> + rte_spinlock_unlock(&dma_info->dma_lock);
> + return nr_copies;
> +}
> +
> static inline void
> do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
> {
> @@ -1449,9 +1555,9 @@ store_dma_desc_info_packed(struct vring_used_elem_packed *s_ring,
> }
>
> static __rte_noinline uint32_t
> -virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> - struct vhost_virtqueue *vq, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint32_t count)
> +virtio_dev_rx_async_submit_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> + uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count,
> + int16_t dma_id, uint16_t vchan)
> {
> struct buf_vector buf_vec[BUF_VECTOR_MAX];
> uint32_t pkt_idx = 0;
> @@ -1503,17 +1609,16 @@ virtio_dev_rx_async_submit_split(struct virtio_net *dev,
> if (unlikely(pkt_idx == 0))
> return 0;
>
> - n_xfer = async->ops.transfer_data(dev->vid, queue_id, async->iov_iter, 0, pkt_idx);
> - if (unlikely(n_xfer < 0)) {
> - VHOST_LOG_DATA(ERR, "(%d) %s: failed to transfer data for queue id %d.\n",
> - dev->vid, __func__, queue_id);
> - n_xfer = 0;
> - }
> + n_xfer = vhost_async_dma_transfer(vq, dma_id, vchan, async->pkts_idx, async->iov_iter,
> + pkt_idx);
>
> pkt_err = pkt_idx - n_xfer;
> if (unlikely(pkt_err)) {
> uint16_t num_descs = 0;
>
> + VHOST_LOG_DATA(DEBUG, "(%d) %s: failed to transfer %u packets for queue %u.\n",
> + dev->vid, __func__, pkt_err, queue_id);
> +
> /* update number of completed packets */
> pkt_idx = n_xfer;
>
> @@ -1656,13 +1761,13 @@ dma_error_handler_packed(struct vhost_virtqueue *vq, uint16_t slot_idx,
> }
>
> static __rte_noinline uint32_t
> -virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> - struct vhost_virtqueue *vq, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint32_t count)
> +virtio_dev_rx_async_submit_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> + uint16_t queue_id, struct rte_mbuf **pkts, uint32_t count,
> + int16_t dma_id, uint16_t vchan)
> {
> uint32_t pkt_idx = 0;
> uint32_t remained = count;
> - int32_t n_xfer;
> + uint16_t n_xfer;
> uint16_t num_buffers;
> uint16_t num_descs;
>
> @@ -1670,6 +1775,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> struct async_inflight_info *pkts_info = async->pkts_info;
> uint32_t pkt_err = 0;
> uint16_t slot_idx = 0;
> + uint16_t head_idx = async->pkts_idx % vq->size;
>
> do {
> rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
> @@ -1694,19 +1800,17 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
> if (unlikely(pkt_idx == 0))
> return 0;
>
> - n_xfer = async->ops.transfer_data(dev->vid, queue_id, async->iov_iter, 0, pkt_idx);
> - if (unlikely(n_xfer < 0)) {
> - VHOST_LOG_DATA(ERR, "(%d) %s: failed to transfer data for queue id %d.\n",
> - dev->vid, __func__, queue_id);
> - n_xfer = 0;
> - }
> -
> - pkt_err = pkt_idx - n_xfer;
> + n_xfer = vhost_async_dma_transfer(vq, dma_id, vchan, head_idx,
> + async->iov_iter, pkt_idx);
>
> async_iter_reset(async);
>
> - if (unlikely(pkt_err))
> + pkt_err = pkt_idx - n_xfer;
> + if (unlikely(pkt_err)) {
> + VHOST_LOG_DATA(DEBUG, "(%d) %s: failed to transfer %u packets for queue %u.\n",
> + dev->vid, __func__, pkt_err, queue_id);
> dma_error_handler_packed(vq, slot_idx, pkt_err, &pkt_idx);
> + }
>
> if (likely(vq->shadow_used_idx)) {
> /* keep used descriptors. */
> @@ -1826,28 +1930,37 @@ write_back_completed_descs_packed(struct vhost_virtqueue *vq,
>
> static __rte_always_inline uint16_t
> vhost_poll_enqueue_completed(struct virtio_net *dev, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count)
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan)
> {
> struct vhost_virtqueue *vq = dev->virtqueue[queue_id];
> struct vhost_async *async = vq->async;
> struct async_inflight_info *pkts_info = async->pkts_info;
> - int32_t n_cpl;
> + uint16_t nr_cpl_pkts = 0;
> uint16_t n_descs = 0, n_buffers = 0;
> uint16_t start_idx, from, i;
>
> - n_cpl = async->ops.check_completed_copies(dev->vid, queue_id, 0, count);
> - if (unlikely(n_cpl < 0)) {
> - VHOST_LOG_DATA(ERR, "(%d) %s: failed to check completed copies for queue id %d.\n",
> - dev->vid, __func__, queue_id);
> - return 0;
> - }
> -
> - if (n_cpl == 0)
> - return 0;
> + /* Check completed copies for the given DMA vChannel */
> + vhost_async_dma_check_completed(dma_id, vchan, count);
>
> start_idx = async_get_first_inflight_pkt_idx(vq);
>
> - for (i = 0; i < n_cpl; i++) {
> + /**
> + * Calculate the number of copy completed packets.
> + * Note that there may be completed packets even if
> + * no copies are reported done by the given DMA vChannel,
> + * as DMA vChannels could be shared by other threads.
> + */
> + from = start_idx;
> + while (vq->async->pkts_cmpl_flag[from] && count--) {
> + vq->async->pkts_cmpl_flag[from] = false;
> + from++;
> + if (from >= vq->size)
> + from -= vq->size;
> + nr_cpl_pkts++;
> + }
> +
> + for (i = 0; i < nr_cpl_pkts; i++) {
> from = (start_idx + i) % vq->size;
> /* Only used with packed ring */
> n_buffers += pkts_info[from].nr_buffers;
> @@ -1856,7 +1969,7 @@ vhost_poll_enqueue_completed(struct virtio_net *dev, uint16_t queue_id,
> pkts[i] = pkts_info[from].mbuf;
> }
>
> - async->pkts_inflight_n -= n_cpl;
> + async->pkts_inflight_n -= nr_cpl_pkts;
>
> if (likely(vq->enabled && vq->access_ok)) {
> if (vq_is_packed(dev)) {
> @@ -1877,12 +1990,13 @@ vhost_poll_enqueue_completed(struct virtio_net *dev, uint16_t queue_id,
> }
> }
>
> - return n_cpl;
> + return nr_cpl_pkts;
> }
>
> uint16_t
> rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count)
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan)
> {
> struct virtio_net *dev = get_device(vid);
> struct vhost_virtqueue *vq;
> @@ -1908,7 +2022,7 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
>
> rte_spinlock_lock(&vq->access_lock);
>
> - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
> + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan);
>
> rte_spinlock_unlock(&vq->access_lock);
>
> @@ -1917,7 +2031,8 @@ rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
>
> uint16_t
> rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count)
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan)
> {
> struct virtio_net *dev = get_device(vid);
> struct vhost_virtqueue *vq;
> @@ -1941,14 +2056,14 @@ rte_vhost_clear_queue_thread_unsafe(int vid, uint16_t queue_id,
> return 0;
> }
>
> - n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count);
> + n_pkts_cpl = vhost_poll_enqueue_completed(dev, queue_id, pkts, count, dma_id, vchan);
>
> return n_pkts_cpl;
> }
>
> static __rte_always_inline uint32_t
> virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint32_t count)
> + struct rte_mbuf **pkts, uint32_t count, int16_t dma_id, uint16_t vchan)
> {
> struct vhost_virtqueue *vq;
> uint32_t nb_tx = 0;
> @@ -1980,10 +2095,10 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
>
> if (vq_is_packed(dev))
> nb_tx = virtio_dev_rx_async_submit_packed(dev, vq, queue_id,
> - pkts, count);
> + pkts, count, dma_id, vchan);
> else
> nb_tx = virtio_dev_rx_async_submit_split(dev, vq, queue_id,
> - pkts, count);
> + pkts, count, dma_id, vchan);
>
> out:
> if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
> @@ -1997,7 +2112,8 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, uint16_t queue_id,
>
> uint16_t
> rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> - struct rte_mbuf **pkts, uint16_t count)
> + struct rte_mbuf **pkts, uint16_t count, int16_t dma_id,
> + uint16_t vchan)
> {
> struct virtio_net *dev = get_device(vid);
>
> @@ -2011,7 +2127,7 @@ rte_vhost_submit_enqueue_burst(int vid, uint16_t queue_id,
> return 0;
> }
>
> - return virtio_dev_rx_async_submit(dev, queue_id, pkts, count);
> + return virtio_dev_rx_async_submit(dev, queue_id, pkts, count, dma_id, vchan);
> }
>
> static inline bool
> --
> 2.25.1
^ permalink raw reply [relevance 3%]
* RE: [PATCH 00/12] add packet generator library and example app
@ 2022-01-12 16:18 3% ` Morten Brørup
0 siblings, 0 replies; 200+ results
From: Morten Brørup @ 2022-01-12 16:18 UTC (permalink / raw)
To: Bruce Richardson, Ronan Randles; +Cc: dev, harry.van.haaren
> From: Bruce Richardson [mailto:bruce.richardson@intel.com]
> Sent: Tuesday, 14 December 2021 15.58
>
> On Tue, Dec 14, 2021 at 02:12:30PM +0000, Ronan Randles wrote:
> > This patchset introduces a Gen library for DPDK. This library
> provides an easy
> > way to generate traffic in order to test software based network
> components.
> >
> > This library enables the basic functionality required in the traffic
> generator.
> > This includes: raw data setting, packet Tx and Rx, creation and
> destruction of a
> > Gen instance and various types of data parsing.
> > This functionality is implemented in "lib/gen/rte_gen.c". IPv4
> parsing
> > functionality is also added in "lib/net/rte_ip.c", this is then used
> in the gen
> > library.
> >
> > A sample app is included in "examples/generator" which shows the use
> of the gen
> > library in making a traffic generator. This can be used to generate
> traffic by
> > running the dpdk-generator generator executable. This sample app
> supports
> > runtime stats reporting (/gen/stats) and line rate limiting
> > (/gen/mpps,<target traffic rate in mpps>) through telemetry.py.
> >
> > As more features are added to the gen library, the sample application
> will
> > become more powerful through the "/gen/packet" string parameter
> > (currently supports IP and Ether address setting). This will allow
> every
> > application to generate more complex traffic types in the future
> without
> > changing API.
> >
>
> I think this is great to see, and sounds a good addition to DPDK. One
> thing
> to address in any v2 is to add more documentation for both the library
> and
> the example app. You need a chapter on the lib added to the programmers
> guide to help others use the library from their code, and a chapter on
> the
> generator example in the example apps guide.
>
> More general question - if we do have a traffic generator in DPDK,
> would it
> be better in the "app" rather than the examples one? If it's only going
> to
> ever stay a simple example of using the lib, examples might be fine,
> but I
> suspect that it will get quite complicated if people start using it and
> adding more features, in which case a move to the "app" folder might be
> more appropriate. Thoughts?
>
> /Bruce
If adding a traffic generator lib/app to DPDK itself, it should be able to evolve freely, unencumbered by the DPDK ABI/API stability requirements.
Also, it MUST be optional when building DPDK for production purposes. Consider the security perspective: If a network appliance based on DPDK is compromised by a hacker, you don't want it to include a traffic generator.
-Morten
^ permalink raw reply [relevance 3%]
* [PATCH v3] ethdev: mark old macros as deprecated
@ 2022-01-12 14:36 1% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2022-01-12 14:36 UTC (permalink / raw)
To: Thomas Monjalon, Andrew Rybchenko, Hemant Agrawal,
Tyler Retzlaff, Chenbo Xia, Jerin Jacob
Cc: dev, Ferruh Yigit, Stephen Hemminger
Old macros are kept for backward compatibility, but this causes old macro
usage to sneak in silently.
Mark the old macros as deprecated. The downside is that this will cause
some noise for applications that are still using the old macros.
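As an illustration of the effect (link_is_up is a hypothetical caller;
RTE_DEPRECATED() wraps the old name in a compile-time warning pragma, so
the macro still expands to the new value and existing code keeps
building):

    #include <rte_ethdev.h>

    static int
    link_is_up(const struct rte_eth_link *link)
    {
        /* Emits "ETH_LINK_UP is deprecated" at compile time, but
         * still evaluates as RTE_ETH_LINK_UP. */
        return link->link_status == ETH_LINK_UP;
    }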
Fixes: 295968d17407 ("ethdev: add namespace")
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
v2:
* Release notes updated
v3:
* Update 22.03 release note
---
doc/guides/rel_notes/release_22_03.rst | 3 +
lib/ethdev/rte_ethdev.h | 474 +++++++++++++------------
2 files changed, 247 insertions(+), 230 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa94a..16c66c0641d4 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -84,6 +84,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* ethdev: Old public macros and enumeration constants without ``RTE_ETH_`` prefix,
+ which are kept for backward compatibility, are marked as deprecated.
+
ABI Changes
-----------
diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h
index fa299c8ad70e..147cc1ced36a 100644
--- a/lib/ethdev/rte_ethdev.h
+++ b/lib/ethdev/rte_ethdev.h
@@ -288,76 +288,78 @@ struct rte_eth_stats {
* Device supported speeds bitmap flags
*/
#define RTE_ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */
-#define ETH_LINK_SPEED_AUTONEG RTE_ETH_LINK_SPEED_AUTONEG
#define RTE_ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */
-#define ETH_LINK_SPEED_FIXED RTE_ETH_LINK_SPEED_FIXED
#define RTE_ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */
-#define ETH_LINK_SPEED_10M_HD RTE_ETH_LINK_SPEED_10M_HD
#define RTE_ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */
-#define ETH_LINK_SPEED_10M RTE_ETH_LINK_SPEED_10M
#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */
-#define ETH_LINK_SPEED_100M_HD RTE_ETH_LINK_SPEED_100M_HD
#define RTE_ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */
-#define ETH_LINK_SPEED_100M RTE_ETH_LINK_SPEED_100M
#define RTE_ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */
-#define ETH_LINK_SPEED_1G RTE_ETH_LINK_SPEED_1G
#define RTE_ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */
-#define ETH_LINK_SPEED_2_5G RTE_ETH_LINK_SPEED_2_5G
#define RTE_ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */
-#define ETH_LINK_SPEED_5G RTE_ETH_LINK_SPEED_5G
#define RTE_ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */
-#define ETH_LINK_SPEED_10G RTE_ETH_LINK_SPEED_10G
#define RTE_ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */
-#define ETH_LINK_SPEED_20G RTE_ETH_LINK_SPEED_20G
#define RTE_ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */
-#define ETH_LINK_SPEED_25G RTE_ETH_LINK_SPEED_25G
#define RTE_ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */
-#define ETH_LINK_SPEED_40G RTE_ETH_LINK_SPEED_40G
#define RTE_ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */
-#define ETH_LINK_SPEED_50G RTE_ETH_LINK_SPEED_50G
#define RTE_ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */
-#define ETH_LINK_SPEED_56G RTE_ETH_LINK_SPEED_56G
#define RTE_ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */
-#define ETH_LINK_SPEED_100G RTE_ETH_LINK_SPEED_100G
#define RTE_ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */
-#define ETH_LINK_SPEED_200G RTE_ETH_LINK_SPEED_200G
/**@}*/
+#define ETH_LINK_SPEED_AUTONEG RTE_DEPRECATED(ETH_LINK_SPEED_AUTONEG) RTE_ETH_LINK_SPEED_AUTONEG
+#define ETH_LINK_SPEED_FIXED RTE_DEPRECATED(ETH_LINK_SPEED_FIXED) RTE_ETH_LINK_SPEED_FIXED
+#define ETH_LINK_SPEED_10M_HD RTE_DEPRECATED(ETH_LINK_SPEED_10M_HD) RTE_ETH_LINK_SPEED_10M_HD
+#define ETH_LINK_SPEED_10M RTE_DEPRECATED(ETH_LINK_SPEED_10M) RTE_ETH_LINK_SPEED_10M
+#define ETH_LINK_SPEED_100M_HD RTE_DEPRECATED(ETH_LINK_SPEED_100M_HD) RTE_ETH_LINK_SPEED_100M_HD
+#define ETH_LINK_SPEED_100M RTE_DEPRECATED(ETH_LINK_SPEED_100M) RTE_ETH_LINK_SPEED_100M
+#define ETH_LINK_SPEED_1G RTE_DEPRECATED(ETH_LINK_SPEED_1G) RTE_ETH_LINK_SPEED_1G
+#define ETH_LINK_SPEED_2_5G RTE_DEPRECATED(ETH_LINK_SPEED_2_5G) RTE_ETH_LINK_SPEED_2_5G
+#define ETH_LINK_SPEED_5G RTE_DEPRECATED(ETH_LINK_SPEED_5G) RTE_ETH_LINK_SPEED_5G
+#define ETH_LINK_SPEED_10G RTE_DEPRECATED(ETH_LINK_SPEED_10G) RTE_ETH_LINK_SPEED_10G
+#define ETH_LINK_SPEED_20G RTE_DEPRECATED(ETH_LINK_SPEED_20G) RTE_ETH_LINK_SPEED_20G
+#define ETH_LINK_SPEED_25G RTE_DEPRECATED(ETH_LINK_SPEED_25G) RTE_ETH_LINK_SPEED_25G
+#define ETH_LINK_SPEED_40G RTE_DEPRECATED(ETH_LINK_SPEED_40G) RTE_ETH_LINK_SPEED_40G
+#define ETH_LINK_SPEED_50G RTE_DEPRECATED(ETH_LINK_SPEED_50G) RTE_ETH_LINK_SPEED_50G
+#define ETH_LINK_SPEED_56G RTE_DEPRECATED(ETH_LINK_SPEED_56G) RTE_ETH_LINK_SPEED_56G
+#define ETH_LINK_SPEED_100G RTE_DEPRECATED(ETH_LINK_SPEED_100G) RTE_ETH_LINK_SPEED_100G
+#define ETH_LINK_SPEED_200G RTE_DEPRECATED(ETH_LINK_SPEED_200G) RTE_ETH_LINK_SPEED_200G
+
/**@{@name Link speed
* Ethernet numeric link speeds in Mbps
*/
#define RTE_ETH_SPEED_NUM_NONE 0 /**< Not defined */
-#define ETH_SPEED_NUM_NONE RTE_ETH_SPEED_NUM_NONE
#define RTE_ETH_SPEED_NUM_10M 10 /**< 10 Mbps */
-#define ETH_SPEED_NUM_10M RTE_ETH_SPEED_NUM_10M
#define RTE_ETH_SPEED_NUM_100M 100 /**< 100 Mbps */
-#define ETH_SPEED_NUM_100M RTE_ETH_SPEED_NUM_100M
#define RTE_ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */
-#define ETH_SPEED_NUM_1G RTE_ETH_SPEED_NUM_1G
#define RTE_ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */
-#define ETH_SPEED_NUM_2_5G RTE_ETH_SPEED_NUM_2_5G
#define RTE_ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */
-#define ETH_SPEED_NUM_5G RTE_ETH_SPEED_NUM_5G
#define RTE_ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */
-#define ETH_SPEED_NUM_10G RTE_ETH_SPEED_NUM_10G
#define RTE_ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */
-#define ETH_SPEED_NUM_20G RTE_ETH_SPEED_NUM_20G
#define RTE_ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */
-#define ETH_SPEED_NUM_25G RTE_ETH_SPEED_NUM_25G
#define RTE_ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */
-#define ETH_SPEED_NUM_40G RTE_ETH_SPEED_NUM_40G
#define RTE_ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */
-#define ETH_SPEED_NUM_50G RTE_ETH_SPEED_NUM_50G
#define RTE_ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */
-#define ETH_SPEED_NUM_56G RTE_ETH_SPEED_NUM_56G
#define RTE_ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */
-#define ETH_SPEED_NUM_100G RTE_ETH_SPEED_NUM_100G
#define RTE_ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */
-#define ETH_SPEED_NUM_200G RTE_ETH_SPEED_NUM_200G
#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
-#define ETH_SPEED_NUM_UNKNOWN RTE_ETH_SPEED_NUM_UNKNOWN
/**@}*/
+#define ETH_SPEED_NUM_NONE RTE_DEPRECATED(ETH_SPEED_NUM_NONE) RTE_ETH_SPEED_NUM_NONE
+#define ETH_SPEED_NUM_10M RTE_DEPRECATED(ETH_SPEED_NUM_10M) RTE_ETH_SPEED_NUM_10M
+#define ETH_SPEED_NUM_100M RTE_DEPRECATED(ETH_SPEED_NUM_100M) RTE_ETH_SPEED_NUM_100M
+#define ETH_SPEED_NUM_1G RTE_DEPRECATED(ETH_SPEED_NUM_1G) RTE_ETH_SPEED_NUM_1G
+#define ETH_SPEED_NUM_2_5G RTE_DEPRECATED(ETH_SPEED_NUM_2_5G) RTE_ETH_SPEED_NUM_2_5G
+#define ETH_SPEED_NUM_5G RTE_DEPRECATED(ETH_SPEED_NUM_5G) RTE_ETH_SPEED_NUM_5G
+#define ETH_SPEED_NUM_10G RTE_DEPRECATED(ETH_SPEED_NUM_10G) RTE_ETH_SPEED_NUM_10G
+#define ETH_SPEED_NUM_20G RTE_DEPRECATED(ETH_SPEED_NUM_20G) RTE_ETH_SPEED_NUM_20G
+#define ETH_SPEED_NUM_25G RTE_DEPRECATED(ETH_SPEED_NUM_25G) RTE_ETH_SPEED_NUM_25G
+#define ETH_SPEED_NUM_40G RTE_DEPRECATED(ETH_SPEED_NUM_40G) RTE_ETH_SPEED_NUM_40G
+#define ETH_SPEED_NUM_50G RTE_DEPRECATED(ETH_SPEED_NUM_50G) RTE_ETH_SPEED_NUM_50G
+#define ETH_SPEED_NUM_56G RTE_DEPRECATED(ETH_SPEED_NUM_56G) RTE_ETH_SPEED_NUM_56G
+#define ETH_SPEED_NUM_100G RTE_DEPRECATED(ETH_SPEED_NUM_100G) RTE_ETH_SPEED_NUM_100G
+#define ETH_SPEED_NUM_200G RTE_DEPRECATED(ETH_SPEED_NUM_200G) RTE_ETH_SPEED_NUM_200G
+#define ETH_SPEED_NUM_UNKNOWN RTE_DEPRECATED(ETH_SPEED_NUM_UNKNOWN) RTE_ETH_SPEED_NUM_UNKNOWN
+
/**
* A structure used to retrieve link-level information of an Ethernet port.
*/
@@ -373,20 +375,21 @@ struct rte_eth_link {
* Constants used in link management.
*/
#define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */
-#define ETH_LINK_HALF_DUPLEX RTE_ETH_LINK_HALF_DUPLEX
#define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */
-#define ETH_LINK_FULL_DUPLEX RTE_ETH_LINK_FULL_DUPLEX
#define RTE_ETH_LINK_DOWN 0 /**< Link is down (see link_status). */
-#define ETH_LINK_DOWN RTE_ETH_LINK_DOWN
#define RTE_ETH_LINK_UP 1 /**< Link is up (see link_status). */
-#define ETH_LINK_UP RTE_ETH_LINK_UP
#define RTE_ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */
-#define ETH_LINK_FIXED RTE_ETH_LINK_FIXED
#define RTE_ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */
-#define ETH_LINK_AUTONEG RTE_ETH_LINK_AUTONEG
#define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
/**@}*/
+#define ETH_LINK_HALF_DUPLEX RTE_DEPRECATED(ETH_LINK_HALF_DUPLEX) RTE_ETH_LINK_HALF_DUPLEX
+#define ETH_LINK_FULL_DUPLEX RTE_DEPRECATED(ETH_LINK_FULL_DUPLEX) RTE_ETH_LINK_FULL_DUPLEX
+#define ETH_LINK_DOWN RTE_DEPRECATED(ETH_LINK_DOWN) RTE_ETH_LINK_DOWN
+#define ETH_LINK_UP RTE_DEPRECATED(ETH_LINK_UP) RTE_ETH_LINK_UP
+#define ETH_LINK_FIXED RTE_DEPRECATED(ETH_LINK_FIXED) RTE_ETH_LINK_FIXED
+#define ETH_LINK_AUTONEG RTE_DEPRECATED(ETH_LINK_AUTONEG) RTE_ETH_LINK_AUTONEG
+
/**
* A structure used to configure the ring threshold registers of an Rx/Tx
* queue for an Ethernet port.
@@ -401,13 +404,14 @@ struct rte_eth_thresh {
* @see rte_eth_conf.rxmode.mq_mode.
*/
#define RTE_ETH_MQ_RX_RSS_FLAG RTE_BIT32(0) /**< Enable RSS. @see rte_eth_rss_conf */
-#define ETH_MQ_RX_RSS_FLAG RTE_ETH_MQ_RX_RSS_FLAG
#define RTE_ETH_MQ_RX_DCB_FLAG RTE_BIT32(1) /**< Enable DCB. */
-#define ETH_MQ_RX_DCB_FLAG RTE_ETH_MQ_RX_DCB_FLAG
#define RTE_ETH_MQ_RX_VMDQ_FLAG RTE_BIT32(2) /**< Enable VMDq. */
-#define ETH_MQ_RX_VMDQ_FLAG RTE_ETH_MQ_RX_VMDQ_FLAG
/**@}*/
+#define ETH_MQ_RX_RSS_FLAG RTE_DEPRECATED(ETH_MQ_RX_RSS_FLAG) RTE_ETH_MQ_RX_RSS_FLAG
+#define ETH_MQ_RX_DCB_FLAG RTE_DEPRECATED(ETH_MQ_RX_DCB_FLAG) RTE_ETH_MQ_RX_DCB_FLAG
+#define ETH_MQ_RX_VMDQ_FLAG RTE_DEPRECATED(ETH_MQ_RX_VMDQ_FLAG) RTE_ETH_MQ_RX_VMDQ_FLAG
+
/**
* A set of values to identify what method is to be used to route
* packets to multiple queues.
@@ -434,14 +438,14 @@ enum rte_eth_rx_mq_mode {
RTE_ETH_MQ_RX_VMDQ_FLAG,
};
-#define ETH_MQ_RX_NONE RTE_ETH_MQ_RX_NONE
-#define ETH_MQ_RX_RSS RTE_ETH_MQ_RX_RSS
-#define ETH_MQ_RX_DCB RTE_ETH_MQ_RX_DCB
-#define ETH_MQ_RX_DCB_RSS RTE_ETH_MQ_RX_DCB_RSS
-#define ETH_MQ_RX_VMDQ_ONLY RTE_ETH_MQ_RX_VMDQ_ONLY
-#define ETH_MQ_RX_VMDQ_RSS RTE_ETH_MQ_RX_VMDQ_RSS
-#define ETH_MQ_RX_VMDQ_DCB RTE_ETH_MQ_RX_VMDQ_DCB
-#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_ETH_MQ_RX_VMDQ_DCB_RSS
+#define ETH_MQ_RX_NONE RTE_DEPRECATED(ETH_MQ_RX_NONE) RTE_ETH_MQ_RX_NONE
+#define ETH_MQ_RX_RSS RTE_DEPRECATED(ETH_MQ_RX_RSS) RTE_ETH_MQ_RX_RSS
+#define ETH_MQ_RX_DCB RTE_DEPRECATED(ETH_MQ_RX_DCB) RTE_ETH_MQ_RX_DCB
+#define ETH_MQ_RX_DCB_RSS RTE_DEPRECATED(ETH_MQ_RX_DCB_RSS) RTE_ETH_MQ_RX_DCB_RSS
+#define ETH_MQ_RX_VMDQ_ONLY RTE_DEPRECATED(ETH_MQ_RX_VMDQ_ONLY) RTE_ETH_MQ_RX_VMDQ_ONLY
+#define ETH_MQ_RX_VMDQ_RSS RTE_DEPRECATED(ETH_MQ_RX_VMDQ_RSS) RTE_ETH_MQ_RX_VMDQ_RSS
+#define ETH_MQ_RX_VMDQ_DCB RTE_DEPRECATED(ETH_MQ_RX_VMDQ_DCB) RTE_ETH_MQ_RX_VMDQ_DCB
+#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_DEPRECATED(ETH_MQ_RX_VMDQ_DCB_RSS) RTE_ETH_MQ_RX_VMDQ_DCB_RSS
/**
* A set of values to identify what method is to be used to transmit
@@ -453,10 +457,11 @@ enum rte_eth_tx_mq_mode {
RTE_ETH_MQ_TX_VMDQ_DCB, /**< For Tx side,both DCB and VT is on. */
RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};
-#define ETH_MQ_TX_NONE RTE_ETH_MQ_TX_NONE
-#define ETH_MQ_TX_DCB RTE_ETH_MQ_TX_DCB
-#define ETH_MQ_TX_VMDQ_DCB RTE_ETH_MQ_TX_VMDQ_DCB
-#define ETH_MQ_TX_VMDQ_ONLY RTE_ETH_MQ_TX_VMDQ_ONLY
+
+#define ETH_MQ_TX_NONE RTE_DEPRECATED(ETH_MQ_TX_NONE) RTE_ETH_MQ_TX_NONE
+#define ETH_MQ_TX_DCB RTE_DEPRECATED(ETH_MQ_TX_DCB) RTE_ETH_MQ_TX_DCB
+#define ETH_MQ_TX_VMDQ_DCB RTE_DEPRECATED(ETH_MQ_TX_VMDQ_DCB) RTE_ETH_MQ_TX_VMDQ_DCB
+#define ETH_MQ_TX_VMDQ_ONLY RTE_DEPRECATED(ETH_MQ_TX_VMDQ_ONLY) RTE_ETH_MQ_TX_VMDQ_ONLY
/**
* A structure used to configure the Rx features of an Ethernet port.
@@ -490,10 +495,10 @@ enum rte_vlan_type {
RTE_ETH_VLAN_TYPE_MAX,
};
-#define ETH_VLAN_TYPE_UNKNOWN RTE_ETH_VLAN_TYPE_UNKNOWN
-#define ETH_VLAN_TYPE_INNER RTE_ETH_VLAN_TYPE_INNER
-#define ETH_VLAN_TYPE_OUTER RTE_ETH_VLAN_TYPE_OUTER
-#define ETH_VLAN_TYPE_MAX RTE_ETH_VLAN_TYPE_MAX
+#define ETH_VLAN_TYPE_UNKNOWN RTE_DEPRECATED(ETH_VLAN_TYPE_UNKNOWN) RTE_ETH_VLAN_TYPE_UNKNOWN
+#define ETH_VLAN_TYPE_INNER RTE_DEPRECATED(ETH_VLAN_TYPE_INNER) RTE_ETH_VLAN_TYPE_INNER
+#define ETH_VLAN_TYPE_OUTER RTE_DEPRECATED(ETH_VLAN_TYPE_OUTER) RTE_ETH_VLAN_TYPE_OUTER
+#define ETH_VLAN_TYPE_MAX RTE_DEPRECATED(ETH_VLAN_TYPE_MAX) RTE_ETH_VLAN_TYPE_MAX
/**
* A structure used to describe a VLAN filter.
@@ -566,69 +571,70 @@ struct rte_eth_rss_conf {
* fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
*/
#define RTE_ETH_RSS_IPV4 RTE_BIT64(2)
-#define ETH_RSS_IPV4 RTE_ETH_RSS_IPV4
#define RTE_ETH_RSS_FRAG_IPV4 RTE_BIT64(3)
-#define ETH_RSS_FRAG_IPV4 RTE_ETH_RSS_FRAG_IPV4
#define RTE_ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4)
-#define ETH_RSS_NONFRAG_IPV4_TCP RTE_ETH_RSS_NONFRAG_IPV4_TCP
#define RTE_ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5)
-#define ETH_RSS_NONFRAG_IPV4_UDP RTE_ETH_RSS_NONFRAG_IPV4_UDP
#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6)
-#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_ETH_RSS_NONFRAG_IPV4_SCTP
#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
-#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_ETH_RSS_NONFRAG_IPV4_OTHER
#define RTE_ETH_RSS_IPV6 RTE_BIT64(8)
-#define ETH_RSS_IPV6 RTE_ETH_RSS_IPV6
#define RTE_ETH_RSS_FRAG_IPV6 RTE_BIT64(9)
-#define ETH_RSS_FRAG_IPV6 RTE_ETH_RSS_FRAG_IPV6
#define RTE_ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10)
-#define ETH_RSS_NONFRAG_IPV6_TCP RTE_ETH_RSS_NONFRAG_IPV6_TCP
#define RTE_ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11)
-#define ETH_RSS_NONFRAG_IPV6_UDP RTE_ETH_RSS_NONFRAG_IPV6_UDP
#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12)
-#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_ETH_RSS_NONFRAG_IPV6_SCTP
#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
-#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_ETH_RSS_NONFRAG_IPV6_OTHER
#define RTE_ETH_RSS_L2_PAYLOAD RTE_BIT64(14)
-#define ETH_RSS_L2_PAYLOAD RTE_ETH_RSS_L2_PAYLOAD
#define RTE_ETH_RSS_IPV6_EX RTE_BIT64(15)
-#define ETH_RSS_IPV6_EX RTE_ETH_RSS_IPV6_EX
#define RTE_ETH_RSS_IPV6_TCP_EX RTE_BIT64(16)
-#define ETH_RSS_IPV6_TCP_EX RTE_ETH_RSS_IPV6_TCP_EX
#define RTE_ETH_RSS_IPV6_UDP_EX RTE_BIT64(17)
-#define ETH_RSS_IPV6_UDP_EX RTE_ETH_RSS_IPV6_UDP_EX
#define RTE_ETH_RSS_PORT RTE_BIT64(18)
-#define ETH_RSS_PORT RTE_ETH_RSS_PORT
#define RTE_ETH_RSS_VXLAN RTE_BIT64(19)
-#define ETH_RSS_VXLAN RTE_ETH_RSS_VXLAN
#define RTE_ETH_RSS_GENEVE RTE_BIT64(20)
-#define ETH_RSS_GENEVE RTE_ETH_RSS_GENEVE
#define RTE_ETH_RSS_NVGRE RTE_BIT64(21)
-#define ETH_RSS_NVGRE RTE_ETH_RSS_NVGRE
#define RTE_ETH_RSS_GTPU RTE_BIT64(23)
-#define ETH_RSS_GTPU RTE_ETH_RSS_GTPU
#define RTE_ETH_RSS_ETH RTE_BIT64(24)
-#define ETH_RSS_ETH RTE_ETH_RSS_ETH
#define RTE_ETH_RSS_S_VLAN RTE_BIT64(25)
-#define ETH_RSS_S_VLAN RTE_ETH_RSS_S_VLAN
#define RTE_ETH_RSS_C_VLAN RTE_BIT64(26)
-#define ETH_RSS_C_VLAN RTE_ETH_RSS_C_VLAN
#define RTE_ETH_RSS_ESP RTE_BIT64(27)
-#define ETH_RSS_ESP RTE_ETH_RSS_ESP
#define RTE_ETH_RSS_AH RTE_BIT64(28)
-#define ETH_RSS_AH RTE_ETH_RSS_AH
#define RTE_ETH_RSS_L2TPV3 RTE_BIT64(29)
-#define ETH_RSS_L2TPV3 RTE_ETH_RSS_L2TPV3
#define RTE_ETH_RSS_PFCP RTE_BIT64(30)
-#define ETH_RSS_PFCP RTE_ETH_RSS_PFCP
#define RTE_ETH_RSS_PPPOE RTE_BIT64(31)
-#define ETH_RSS_PPPOE RTE_ETH_RSS_PPPOE
#define RTE_ETH_RSS_ECPRI RTE_BIT64(32)
-#define ETH_RSS_ECPRI RTE_ETH_RSS_ECPRI
#define RTE_ETH_RSS_MPLS RTE_BIT64(33)
-#define ETH_RSS_MPLS RTE_ETH_RSS_MPLS
#define RTE_ETH_RSS_IPV4_CHKSUM RTE_BIT64(34)
-#define ETH_RSS_IPV4_CHKSUM RTE_ETH_RSS_IPV4_CHKSUM
+
+#define ETH_RSS_IPV4 RTE_DEPRECATED(ETH_RSS_IPV4) RTE_ETH_RSS_IPV4
+#define ETH_RSS_FRAG_IPV4 RTE_DEPRECATED(ETH_RSS_FRAG_IPV4) RTE_ETH_RSS_FRAG_IPV4
+#define ETH_RSS_NONFRAG_IPV4_TCP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_TCP) RTE_ETH_RSS_NONFRAG_IPV4_TCP
+#define ETH_RSS_NONFRAG_IPV4_UDP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_UDP) RTE_ETH_RSS_NONFRAG_IPV4_UDP
+#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_SCTP) RTE_ETH_RSS_NONFRAG_IPV4_SCTP
+#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_OTHER) RTE_ETH_RSS_NONFRAG_IPV4_OTHER
+#define ETH_RSS_IPV6 RTE_DEPRECATED(ETH_RSS_IPV6) RTE_ETH_RSS_IPV6
+#define ETH_RSS_FRAG_IPV6 RTE_DEPRECATED(ETH_RSS_FRAG_IPV6) RTE_ETH_RSS_FRAG_IPV6
+#define ETH_RSS_NONFRAG_IPV6_TCP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_TCP) RTE_ETH_RSS_NONFRAG_IPV6_TCP
+#define ETH_RSS_NONFRAG_IPV6_UDP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_UDP) RTE_ETH_RSS_NONFRAG_IPV6_UDP
+#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_SCTP) RTE_ETH_RSS_NONFRAG_IPV6_SCTP
+#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_OTHER) RTE_ETH_RSS_NONFRAG_IPV6_OTHER
+#define ETH_RSS_L2_PAYLOAD RTE_DEPRECATED(ETH_RSS_L2_PAYLOAD) RTE_ETH_RSS_L2_PAYLOAD
+#define ETH_RSS_IPV6_EX RTE_DEPRECATED(ETH_RSS_IPV6_EX) RTE_ETH_RSS_IPV6_EX
+#define ETH_RSS_IPV6_TCP_EX RTE_DEPRECATED(ETH_RSS_IPV6_TCP_EX) RTE_ETH_RSS_IPV6_TCP_EX
+#define ETH_RSS_IPV6_UDP_EX RTE_DEPRECATED(ETH_RSS_IPV6_UDP_EX) RTE_ETH_RSS_IPV6_UDP_EX
+#define ETH_RSS_PORT RTE_DEPRECATED(ETH_RSS_PORT) RTE_ETH_RSS_PORT
+#define ETH_RSS_VXLAN RTE_DEPRECATED(ETH_RSS_VXLAN) RTE_ETH_RSS_VXLAN
+#define ETH_RSS_GENEVE RTE_DEPRECATED(ETH_RSS_GENEVE) RTE_ETH_RSS_GENEVE
+#define ETH_RSS_NVGRE RTE_DEPRECATED(ETH_RSS_NVGRE) RTE_ETH_RSS_NVGRE
+#define ETH_RSS_GTPU RTE_DEPRECATED(ETH_RSS_GTPU) RTE_ETH_RSS_GTPU
+#define ETH_RSS_ETH RTE_DEPRECATED(ETH_RSS_ETH) RTE_ETH_RSS_ETH
+#define ETH_RSS_S_VLAN RTE_DEPRECATED(ETH_RSS_S_VLAN) RTE_ETH_RSS_S_VLAN
+#define ETH_RSS_C_VLAN RTE_DEPRECATED(ETH_RSS_C_VLAN) RTE_ETH_RSS_C_VLAN
+#define ETH_RSS_ESP RTE_DEPRECATED(ETH_RSS_ESP) RTE_ETH_RSS_ESP
+#define ETH_RSS_AH RTE_DEPRECATED(ETH_RSS_AH) RTE_ETH_RSS_AH
+#define ETH_RSS_L2TPV3 RTE_DEPRECATED(ETH_RSS_L2TPV3) RTE_ETH_RSS_L2TPV3
+#define ETH_RSS_PFCP RTE_DEPRECATED(ETH_RSS_PFCP) RTE_ETH_RSS_PFCP
+#define ETH_RSS_PPPOE RTE_DEPRECATED(ETH_RSS_PPPOE) RTE_ETH_RSS_PPPOE
+#define ETH_RSS_ECPRI RTE_DEPRECATED(ETH_RSS_ECPRI) RTE_ETH_RSS_ECPRI
+#define ETH_RSS_MPLS RTE_DEPRECATED(ETH_RSS_MPLS) RTE_ETH_RSS_MPLS
+#define ETH_RSS_IPV4_CHKSUM RTE_DEPRECATED(ETH_RSS_IPV4_CHKSUM) RTE_ETH_RSS_IPV4_CHKSUM
/**
* The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header.
@@ -643,7 +649,7 @@ struct rte_eth_rss_conf {
* it takes the reserved value 0 as input for the hash function.
*/
#define RTE_ETH_RSS_L4_CHKSUM RTE_BIT64(35)
-#define ETH_RSS_L4_CHKSUM RTE_ETH_RSS_L4_CHKSUM
+#define ETH_RSS_L4_CHKSUM RTE_DEPRECATED(ETH_RSS_L4_CHKSUM) RTE_ETH_RSS_L4_CHKSUM
/*
* We use the following macros to combine with above RTE_ETH_RSS_* for
@@ -655,21 +661,22 @@ struct rte_eth_rss_conf {
* them are added.
*/
#define RTE_ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
-#define ETH_RSS_L3_SRC_ONLY RTE_ETH_RSS_L3_SRC_ONLY
#define RTE_ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
-#define ETH_RSS_L3_DST_ONLY RTE_ETH_RSS_L3_DST_ONLY
#define RTE_ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
-#define ETH_RSS_L4_SRC_ONLY RTE_ETH_RSS_L4_SRC_ONLY
#define RTE_ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
-#define ETH_RSS_L4_DST_ONLY RTE_ETH_RSS_L4_DST_ONLY
#define RTE_ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
-#define ETH_RSS_L2_SRC_ONLY RTE_ETH_RSS_L2_SRC_ONLY
#define RTE_ETH_RSS_L2_DST_ONLY RTE_BIT64(58)
-#define ETH_RSS_L2_DST_ONLY RTE_ETH_RSS_L2_DST_ONLY
+
+#define ETH_RSS_L3_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L3_SRC_ONLY) RTE_ETH_RSS_L3_SRC_ONLY
+#define ETH_RSS_L3_DST_ONLY RTE_DEPRECATED(ETH_RSS_L3_DST_ONLY) RTE_ETH_RSS_L3_DST_ONLY
+#define ETH_RSS_L4_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L4_SRC_ONLY) RTE_ETH_RSS_L4_SRC_ONLY
+#define ETH_RSS_L4_DST_ONLY RTE_DEPRECATED(ETH_RSS_L4_DST_ONLY) RTE_ETH_RSS_L4_DST_ONLY
+#define ETH_RSS_L2_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L2_SRC_ONLY) RTE_ETH_RSS_L2_SRC_ONLY
+#define ETH_RSS_L2_DST_ONLY RTE_DEPRECATED(ETH_RSS_L2_DST_ONLY) RTE_ETH_RSS_L2_DST_ONLY
/*
* Only select IPV6 address prefix as RSS input set according to
- * https:tools.ietf.org/html/rfc6052
+ * https://tools.ietf.org/html/rfc6052
* Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
* RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
*/
@@ -694,26 +701,27 @@ struct rte_eth_rss_conf {
* can be performed on according to PMD and device capabilities.
*/
#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (UINT64_C(0) << 50)
-#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_ETH_RSS_LEVEL_PMD_DEFAULT
+#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_DEPRECATED(ETH_RSS_LEVEL_PMD_DEFAULT) RTE_ETH_RSS_LEVEL_PMD_DEFAULT
/**
* level 1, requests RSS to be performed on the outermost packet
* encapsulation level.
*/
#define RTE_ETH_RSS_LEVEL_OUTERMOST (UINT64_C(1) << 50)
-#define ETH_RSS_LEVEL_OUTERMOST RTE_ETH_RSS_LEVEL_OUTERMOST
+#define ETH_RSS_LEVEL_OUTERMOST RTE_DEPRECATED(ETH_RSS_LEVEL_OUTERMOST) RTE_ETH_RSS_LEVEL_OUTERMOST
/**
* level 2, requests RSS to be performed on the specified inner packet
* encapsulation level, from outermost to innermost (lower to higher values).
*/
#define RTE_ETH_RSS_LEVEL_INNERMOST (UINT64_C(2) << 50)
-#define ETH_RSS_LEVEL_INNERMOST RTE_ETH_RSS_LEVEL_INNERMOST
#define RTE_ETH_RSS_LEVEL_MASK (UINT64_C(3) << 50)
-#define ETH_RSS_LEVEL_MASK RTE_ETH_RSS_LEVEL_MASK
+
+#define ETH_RSS_LEVEL_INNERMOST RTE_DEPRECATED(ETH_RSS_LEVEL_INNERMOST) RTE_ETH_RSS_LEVEL_INNERMOST
+#define ETH_RSS_LEVEL_MASK RTE_DEPRECATED(ETH_RSS_LEVEL_MASK) RTE_ETH_RSS_LEVEL_MASK
#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
-#define ETH_RSS_LEVEL(rss_hf) RTE_ETH_RSS_LEVEL(rss_hf)
+#define ETH_RSS_LEVEL(rss_hf) RTE_DEPRECATED(ETH_RSS_LEVEL(rss_hf)) RTE_ETH_RSS_LEVEL(rss_hf)
/**
* For input set change of hash filter, if SRC_ONLY and DST_ONLY of
@@ -740,122 +748,122 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
#define RTE_ETH_RSS_IPV6_PRE32 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE32)
-#define ETH_RSS_IPV6_PRE32 RTE_ETH_RSS_IPV6_PRE32
+#define ETH_RSS_IPV6_PRE32 RTE_DEPRECATED(ETH_RSS_IPV6_PRE32) RTE_ETH_RSS_IPV6_PRE32
#define RTE_ETH_RSS_IPV6_PRE40 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE40)
-#define ETH_RSS_IPV6_PRE40 RTE_ETH_RSS_IPV6_PRE40
+#define ETH_RSS_IPV6_PRE40 RTE_DEPRECATED(ETH_RSS_IPV6_PRE40) RTE_ETH_RSS_IPV6_PRE40
#define RTE_ETH_RSS_IPV6_PRE48 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE48)
-#define ETH_RSS_IPV6_PRE48 RTE_ETH_RSS_IPV6_PRE48
+#define ETH_RSS_IPV6_PRE48 RTE_DEPRECATED(ETH_RSS_IPV6_PRE48) RTE_ETH_RSS_IPV6_PRE48
#define RTE_ETH_RSS_IPV6_PRE56 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE56)
-#define ETH_RSS_IPV6_PRE56 RTE_ETH_RSS_IPV6_PRE56
+#define ETH_RSS_IPV6_PRE56 RTE_DEPRECATED(ETH_RSS_IPV6_PRE56) RTE_ETH_RSS_IPV6_PRE56
#define RTE_ETH_RSS_IPV6_PRE64 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE64)
-#define ETH_RSS_IPV6_PRE64 RTE_ETH_RSS_IPV6_PRE64
+#define ETH_RSS_IPV6_PRE64 RTE_DEPRECATED(ETH_RSS_IPV6_PRE64) RTE_ETH_RSS_IPV6_PRE64
#define RTE_ETH_RSS_IPV6_PRE96 ( \
RTE_ETH_RSS_IPV6 | \
RTE_ETH_RSS_L3_PRE96)
-#define ETH_RSS_IPV6_PRE96 RTE_ETH_RSS_IPV6_PRE96
+#define ETH_RSS_IPV6_PRE96 RTE_DEPRECATED(ETH_RSS_IPV6_PRE96) RTE_ETH_RSS_IPV6_PRE96
#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE32)
-#define ETH_RSS_IPV6_PRE32_UDP RTE_ETH_RSS_IPV6_PRE32_UDP
+#define ETH_RSS_IPV6_PRE32_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_UDP) RTE_ETH_RSS_IPV6_PRE32_UDP
#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE40)
-#define ETH_RSS_IPV6_PRE40_UDP RTE_ETH_RSS_IPV6_PRE40_UDP
+#define ETH_RSS_IPV6_PRE40_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_UDP) RTE_ETH_RSS_IPV6_PRE40_UDP
#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE48)
-#define ETH_RSS_IPV6_PRE48_UDP RTE_ETH_RSS_IPV6_PRE48_UDP
+#define ETH_RSS_IPV6_PRE48_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_UDP) RTE_ETH_RSS_IPV6_PRE48_UDP
#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE56)
-#define ETH_RSS_IPV6_PRE56_UDP RTE_ETH_RSS_IPV6_PRE56_UDP
+#define ETH_RSS_IPV6_PRE56_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_UDP) RTE_ETH_RSS_IPV6_PRE56_UDP
#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE64)
-#define ETH_RSS_IPV6_PRE64_UDP RTE_ETH_RSS_IPV6_PRE64_UDP
+#define ETH_RSS_IPV6_PRE64_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_UDP) RTE_ETH_RSS_IPV6_PRE64_UDP
#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_L3_PRE96)
-#define ETH_RSS_IPV6_PRE96_UDP RTE_ETH_RSS_IPV6_PRE96_UDP
+#define ETH_RSS_IPV6_PRE96_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_UDP) RTE_ETH_RSS_IPV6_PRE96_UDP
#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE32)
-#define ETH_RSS_IPV6_PRE32_TCP RTE_ETH_RSS_IPV6_PRE32_TCP
+#define ETH_RSS_IPV6_PRE32_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_TCP) RTE_ETH_RSS_IPV6_PRE32_TCP
#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE40)
-#define ETH_RSS_IPV6_PRE40_TCP RTE_ETH_RSS_IPV6_PRE40_TCP
+#define ETH_RSS_IPV6_PRE40_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_TCP) RTE_ETH_RSS_IPV6_PRE40_TCP
#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE48)
-#define ETH_RSS_IPV6_PRE48_TCP RTE_ETH_RSS_IPV6_PRE48_TCP
+#define ETH_RSS_IPV6_PRE48_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_TCP) RTE_ETH_RSS_IPV6_PRE48_TCP
#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE56)
-#define ETH_RSS_IPV6_PRE56_TCP RTE_ETH_RSS_IPV6_PRE56_TCP
+#define ETH_RSS_IPV6_PRE56_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_TCP) RTE_ETH_RSS_IPV6_PRE56_TCP
#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE64)
-#define ETH_RSS_IPV6_PRE64_TCP RTE_ETH_RSS_IPV6_PRE64_TCP
+#define ETH_RSS_IPV6_PRE64_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_TCP) RTE_ETH_RSS_IPV6_PRE64_TCP
#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_L3_PRE96)
-#define ETH_RSS_IPV6_PRE96_TCP RTE_ETH_RSS_IPV6_PRE96_TCP
+#define ETH_RSS_IPV6_PRE96_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_TCP) RTE_ETH_RSS_IPV6_PRE96_TCP
#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE32)
-#define ETH_RSS_IPV6_PRE32_SCTP RTE_ETH_RSS_IPV6_PRE32_SCTP
+#define ETH_RSS_IPV6_PRE32_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_SCTP) RTE_ETH_RSS_IPV6_PRE32_SCTP
#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE40)
-#define ETH_RSS_IPV6_PRE40_SCTP RTE_ETH_RSS_IPV6_PRE40_SCTP
+#define ETH_RSS_IPV6_PRE40_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_SCTP) RTE_ETH_RSS_IPV6_PRE40_SCTP
#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE48)
-#define ETH_RSS_IPV6_PRE48_SCTP RTE_ETH_RSS_IPV6_PRE48_SCTP
+#define ETH_RSS_IPV6_PRE48_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_SCTP) RTE_ETH_RSS_IPV6_PRE48_SCTP
#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE56)
-#define ETH_RSS_IPV6_PRE56_SCTP RTE_ETH_RSS_IPV6_PRE56_SCTP
+#define ETH_RSS_IPV6_PRE56_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_SCTP) RTE_ETH_RSS_IPV6_PRE56_SCTP
#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE64)
-#define ETH_RSS_IPV6_PRE64_SCTP RTE_ETH_RSS_IPV6_PRE64_SCTP
+#define ETH_RSS_IPV6_PRE64_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_SCTP) RTE_ETH_RSS_IPV6_PRE64_SCTP
#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
RTE_ETH_RSS_L3_PRE96)
-#define ETH_RSS_IPV6_PRE96_SCTP RTE_ETH_RSS_IPV6_PRE96_SCTP
+#define ETH_RSS_IPV6_PRE96_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_SCTP) RTE_ETH_RSS_IPV6_PRE96_SCTP
#define RTE_ETH_RSS_IP ( \
RTE_ETH_RSS_IPV4 | \
@@ -865,35 +873,35 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
RTE_ETH_RSS_FRAG_IPV6 | \
RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
RTE_ETH_RSS_IPV6_EX)
-#define ETH_RSS_IP RTE_ETH_RSS_IP
+#define ETH_RSS_IP RTE_DEPRECATED(ETH_RSS_IP) RTE_ETH_RSS_IP
#define RTE_ETH_RSS_UDP ( \
RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
RTE_ETH_RSS_IPV6_UDP_EX)
-#define ETH_RSS_UDP RTE_ETH_RSS_UDP
+#define ETH_RSS_UDP RTE_DEPRECATED(ETH_RSS_UDP) RTE_ETH_RSS_UDP
#define RTE_ETH_RSS_TCP ( \
RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
RTE_ETH_RSS_IPV6_TCP_EX)
-#define ETH_RSS_TCP RTE_ETH_RSS_TCP
+#define ETH_RSS_TCP RTE_DEPRECATED(ETH_RSS_TCP) RTE_ETH_RSS_TCP
#define RTE_ETH_RSS_SCTP ( \
RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-#define ETH_RSS_SCTP RTE_ETH_RSS_SCTP
+#define ETH_RSS_SCTP RTE_DEPRECATED(ETH_RSS_SCTP) RTE_ETH_RSS_SCTP
#define RTE_ETH_RSS_TUNNEL ( \
RTE_ETH_RSS_VXLAN | \
RTE_ETH_RSS_GENEVE | \
RTE_ETH_RSS_NVGRE)
-#define ETH_RSS_TUNNEL RTE_ETH_RSS_TUNNEL
+#define ETH_RSS_TUNNEL RTE_DEPRECATED(ETH_RSS_TUNNEL) RTE_ETH_RSS_TUNNEL
#define RTE_ETH_RSS_VLAN ( \
RTE_ETH_RSS_S_VLAN | \
RTE_ETH_RSS_C_VLAN)
-#define ETH_RSS_VLAN RTE_ETH_RSS_VLAN
+#define ETH_RSS_VLAN RTE_DEPRECATED(ETH_RSS_VLAN) RTE_ETH_RSS_VLAN
/** Mask of valid RSS hash protocols */
#define RTE_ETH_RSS_PROTO_MASK ( \
@@ -918,7 +926,7 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
RTE_ETH_RSS_GENEVE | \
RTE_ETH_RSS_NVGRE | \
RTE_ETH_RSS_MPLS)
-#define ETH_RSS_PROTO_MASK RTE_ETH_RSS_PROTO_MASK
+#define ETH_RSS_PROTO_MASK RTE_DEPRECATED(ETH_RSS_PROTO_MASK) RTE_ETH_RSS_PROTO_MASK
/*
* Definitions used for redirection table entry size.
@@ -926,84 +934,90 @@ rte_eth_rss_hf_refine(uint64_t rss_hf)
* documentation or the description of relevant functions for more details.
*/
#define RTE_ETH_RSS_RETA_SIZE_64 64
-#define ETH_RSS_RETA_SIZE_64 RTE_ETH_RSS_RETA_SIZE_64
#define RTE_ETH_RSS_RETA_SIZE_128 128
-#define ETH_RSS_RETA_SIZE_128 RTE_ETH_RSS_RETA_SIZE_128
#define RTE_ETH_RSS_RETA_SIZE_256 256
-#define ETH_RSS_RETA_SIZE_256 RTE_ETH_RSS_RETA_SIZE_256
#define RTE_ETH_RSS_RETA_SIZE_512 512
-#define ETH_RSS_RETA_SIZE_512 RTE_ETH_RSS_RETA_SIZE_512
#define RTE_ETH_RETA_GROUP_SIZE 64
-#define RTE_RETA_GROUP_SIZE RTE_ETH_RETA_GROUP_SIZE
+
+#define ETH_RSS_RETA_SIZE_64 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_64) RTE_ETH_RSS_RETA_SIZE_64
+#define ETH_RSS_RETA_SIZE_128 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_128) RTE_ETH_RSS_RETA_SIZE_128
+#define ETH_RSS_RETA_SIZE_256 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_256) RTE_ETH_RSS_RETA_SIZE_256
+#define ETH_RSS_RETA_SIZE_512 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_512) RTE_ETH_RSS_RETA_SIZE_512
+#define RTE_RETA_GROUP_SIZE RTE_DEPRECATED(RTE_RETA_GROUP_SIZE) RTE_ETH_RETA_GROUP_SIZE
/**@{@name VMDq and DCB maximums */
#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDq VLAN filters. */
-#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_ETH_VMDQ_MAX_VLAN_FILTERS
#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */
-#define ETH_DCB_NUM_USER_PRIORITIES RTE_ETH_DCB_NUM_USER_PRIORITIES
#define RTE_ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDq DCB queues. */
-#define ETH_VMDQ_DCB_NUM_QUEUES RTE_ETH_VMDQ_DCB_NUM_QUEUES
#define RTE_ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */
-#define ETH_DCB_NUM_QUEUES RTE_ETH_DCB_NUM_QUEUES
/**@}*/
+#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_DEPRECATED(ETH_VMDQ_MAX_VLAN_FILTERS) RTE_ETH_VMDQ_MAX_VLAN_FILTERS
+#define ETH_DCB_NUM_USER_PRIORITIES RTE_DEPRECATED(ETH_DCB_NUM_USER_PRIORITIES) RTE_ETH_DCB_NUM_USER_PRIORITIES
+#define ETH_VMDQ_DCB_NUM_QUEUES RTE_DEPRECATED(ETH_VMDQ_DCB_NUM_QUEUES) RTE_ETH_VMDQ_DCB_NUM_QUEUES
+#define ETH_DCB_NUM_QUEUES RTE_DEPRECATED(ETH_DCB_NUM_QUEUES) RTE_ETH_DCB_NUM_QUEUES
+
/**@{@name DCB capabilities */
#define RTE_ETH_DCB_PG_SUPPORT RTE_BIT32(0) /**< Priority Group(ETS) support. */
-#define ETH_DCB_PG_SUPPORT RTE_ETH_DCB_PG_SUPPORT
#define RTE_ETH_DCB_PFC_SUPPORT RTE_BIT32(1) /**< Priority Flow Control support. */
-#define ETH_DCB_PFC_SUPPORT RTE_ETH_DCB_PFC_SUPPORT
/**@}*/
+#define ETH_DCB_PG_SUPPORT RTE_DEPRECATED(ETH_DCB_PG_SUPPORT) RTE_ETH_DCB_PG_SUPPORT
+#define ETH_DCB_PFC_SUPPORT RTE_DEPRECATED(ETH_DCB_PFC_SUPPORT) RTE_ETH_DCB_PFC_SUPPORT
+
/**@{@name VLAN offload bits */
#define RTE_ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */
-#define ETH_VLAN_STRIP_OFFLOAD RTE_ETH_VLAN_STRIP_OFFLOAD
#define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
-#define ETH_VLAN_FILTER_OFFLOAD RTE_ETH_VLAN_FILTER_OFFLOAD
#define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
-#define ETH_VLAN_EXTEND_OFFLOAD RTE_ETH_VLAN_EXTEND_OFFLOAD
#define RTE_ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */
-#define ETH_QINQ_STRIP_OFFLOAD RTE_ETH_QINQ_STRIP_OFFLOAD
+
+#define ETH_VLAN_STRIP_OFFLOAD RTE_DEPRECATED(ETH_VLAN_STRIP_OFFLOAD) RTE_ETH_VLAN_STRIP_OFFLOAD
+#define ETH_VLAN_FILTER_OFFLOAD RTE_DEPRECATED(ETH_VLAN_FILTER_OFFLOAD) RTE_ETH_VLAN_FILTER_OFFLOAD
+#define ETH_VLAN_EXTEND_OFFLOAD RTE_DEPRECATED(ETH_VLAN_EXTEND_OFFLOAD) RTE_ETH_VLAN_EXTEND_OFFLOAD
+#define ETH_QINQ_STRIP_OFFLOAD RTE_DEPRECATED(ETH_QINQ_STRIP_OFFLOAD) RTE_ETH_QINQ_STRIP_OFFLOAD
#define RTE_ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */
-#define ETH_VLAN_STRIP_MASK RTE_ETH_VLAN_STRIP_MASK
#define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/
-#define ETH_VLAN_FILTER_MASK RTE_ETH_VLAN_FILTER_MASK
#define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/
-#define ETH_VLAN_EXTEND_MASK RTE_ETH_VLAN_EXTEND_MASK
#define RTE_ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */
-#define ETH_QINQ_STRIP_MASK RTE_ETH_QINQ_STRIP_MASK
#define RTE_ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/
-#define ETH_VLAN_ID_MAX RTE_ETH_VLAN_ID_MAX
/**@}*/
+#define ETH_VLAN_STRIP_MASK RTE_DEPRECATED(ETH_VLAN_STRIP_MASK) RTE_ETH_VLAN_STRIP_MASK
+#define ETH_VLAN_FILTER_MASK RTE_DEPRECATED(ETH_VLAN_FILTER_MASK) RTE_ETH_VLAN_FILTER_MASK
+#define ETH_VLAN_EXTEND_MASK RTE_DEPRECATED(ETH_VLAN_EXTEND_MASK) RTE_ETH_VLAN_EXTEND_MASK
+#define ETH_QINQ_STRIP_MASK RTE_DEPRECATED(ETH_QINQ_STRIP_MASK) RTE_ETH_QINQ_STRIP_MASK
+#define ETH_VLAN_ID_MAX RTE_DEPRECATED(ETH_VLAN_ID_MAX) RTE_ETH_VLAN_ID_MAX
+
/* Definitions used for receive MAC address */
#define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
-#define ETH_NUM_RECEIVE_MAC_ADDR RTE_ETH_NUM_RECEIVE_MAC_ADDR
+#define ETH_NUM_RECEIVE_MAC_ADDR RTE_DEPRECATED(ETH_NUM_RECEIVE_MAC_ADDR) RTE_ETH_NUM_RECEIVE_MAC_ADDR
/* Definitions used for unicast hash */
#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
-#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
+#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_DEPRECATED(ETH_VMDQ_NUM_UC_HASH_ARRAY) RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY
/**@{@name VMDq Rx mode
* @see rte_eth_vmdq_rx_conf.rx_mode
*/
/** Accept untagged packets. */
#define RTE_ETH_VMDQ_ACCEPT_UNTAG RTE_BIT32(0)
-#define ETH_VMDQ_ACCEPT_UNTAG RTE_ETH_VMDQ_ACCEPT_UNTAG
/** Accept packets in multicast table. */
#define RTE_ETH_VMDQ_ACCEPT_HASH_MC RTE_BIT32(1)
-#define ETH_VMDQ_ACCEPT_HASH_MC RTE_ETH_VMDQ_ACCEPT_HASH_MC
/** Accept packets in unicast table. */
#define RTE_ETH_VMDQ_ACCEPT_HASH_UC RTE_BIT32(2)
-#define ETH_VMDQ_ACCEPT_HASH_UC RTE_ETH_VMDQ_ACCEPT_HASH_UC
/** Accept broadcast packets. */
#define RTE_ETH_VMDQ_ACCEPT_BROADCAST RTE_BIT32(3)
-#define ETH_VMDQ_ACCEPT_BROADCAST RTE_ETH_VMDQ_ACCEPT_BROADCAST
/** Multicast promiscuous. */
#define RTE_ETH_VMDQ_ACCEPT_MULTICAST RTE_BIT32(4)
-#define ETH_VMDQ_ACCEPT_MULTICAST RTE_ETH_VMDQ_ACCEPT_MULTICAST
/**@}*/
+#define ETH_VMDQ_ACCEPT_UNTAG RTE_DEPRECATED(ETH_VMDQ_ACCEPT_UNTAG) RTE_ETH_VMDQ_ACCEPT_UNTAG
+#define ETH_VMDQ_ACCEPT_HASH_MC RTE_DEPRECATED(ETH_VMDQ_ACCEPT_HASH_MC) RTE_ETH_VMDQ_ACCEPT_HASH_MC
+#define ETH_VMDQ_ACCEPT_HASH_UC RTE_DEPRECATED(ETH_VMDQ_ACCEPT_HASH_UC) RTE_ETH_VMDQ_ACCEPT_HASH_UC
+#define ETH_VMDQ_ACCEPT_BROADCAST RTE_DEPRECATED(ETH_VMDQ_ACCEPT_BROADCAST) RTE_ETH_VMDQ_ACCEPT_BROADCAST
+#define ETH_VMDQ_ACCEPT_MULTICAST RTE_DEPRECATED(ETH_VMDQ_ACCEPT_MULTICAST) RTE_ETH_VMDQ_ACCEPT_MULTICAST
+
/**
* A structure used to configure 64 entries of Redirection Table of the
* Receive Side Scaling (RSS) feature of an Ethernet port. To configure
@@ -1025,8 +1039,8 @@ enum rte_eth_nb_tcs {
RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
RTE_ETH_8_TCS = 8 /**< 8 TCs with DCB. */
};
-#define ETH_4_TCS RTE_ETH_4_TCS
-#define ETH_8_TCS RTE_ETH_8_TCS
+#define ETH_4_TCS RTE_DEPRECATED(ETH_4_TCS) RTE_ETH_4_TCS
+#define ETH_8_TCS RTE_DEPRECATED(ETH_8_TCS) RTE_ETH_8_TCS
/**
* This enum indicates the possible number of queue pools
@@ -1038,10 +1052,10 @@ enum rte_eth_nb_pools {
RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */
RTE_ETH_64_POOLS = 64 /**< 64 VMDq pools. */
};
-#define ETH_8_POOLS RTE_ETH_8_POOLS
-#define ETH_16_POOLS RTE_ETH_16_POOLS
-#define ETH_32_POOLS RTE_ETH_32_POOLS
-#define ETH_64_POOLS RTE_ETH_64_POOLS
+#define ETH_8_POOLS RTE_DEPRECATED(ETH_8_POOLS) RTE_ETH_8_POOLS
+#define ETH_16_POOLS RTE_DEPRECATED(ETH_16_POOLS) RTE_ETH_16_POOLS
+#define ETH_32_POOLS RTE_DEPRECATED(ETH_32_POOLS) RTE_ETH_32_POOLS
+#define ETH_64_POOLS RTE_DEPRECATED(ETH_64_POOLS) RTE_ETH_64_POOLS
/* This structure may be extended in future. */
struct rte_eth_dcb_rx_conf {
@@ -1364,11 +1378,10 @@ enum rte_eth_fc_mode {
RTE_ETH_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
RTE_ETH_FC_FULL /**< Enable flow control on both side. */
};
-
-#define RTE_FC_NONE RTE_ETH_FC_NONE
-#define RTE_FC_RX_PAUSE RTE_ETH_FC_RX_PAUSE
-#define RTE_FC_TX_PAUSE RTE_ETH_FC_TX_PAUSE
-#define RTE_FC_FULL RTE_ETH_FC_FULL
+#define RTE_FC_NONE RTE_DEPRECATED(RTE_FC_NONE) RTE_ETH_FC_NONE
+#define RTE_FC_RX_PAUSE RTE_DEPRECATED(RTE_FC_RX_PAUSE) RTE_ETH_FC_RX_PAUSE
+#define RTE_FC_TX_PAUSE RTE_DEPRECATED(RTE_FC_TX_PAUSE) RTE_ETH_FC_TX_PAUSE
+#define RTE_FC_FULL RTE_DEPRECATED(RTE_FC_FULL) RTE_ETH_FC_FULL
/**
* A structure used to configure Ethernet flow control parameter.
@@ -1411,17 +1424,16 @@ enum rte_eth_tunnel_type {
RTE_ETH_TUNNEL_TYPE_ECPRI,
RTE_ETH_TUNNEL_TYPE_MAX,
};
-
-#define RTE_TUNNEL_TYPE_NONE RTE_ETH_TUNNEL_TYPE_NONE
-#define RTE_TUNNEL_TYPE_VXLAN RTE_ETH_TUNNEL_TYPE_VXLAN
-#define RTE_TUNNEL_TYPE_GENEVE RTE_ETH_TUNNEL_TYPE_GENEVE
-#define RTE_TUNNEL_TYPE_TEREDO RTE_ETH_TUNNEL_TYPE_TEREDO
-#define RTE_TUNNEL_TYPE_NVGRE RTE_ETH_TUNNEL_TYPE_NVGRE
-#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
-#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_ETH_L2_TUNNEL_TYPE_E_TAG
-#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
-#define RTE_TUNNEL_TYPE_ECPRI RTE_ETH_TUNNEL_TYPE_ECPRI
-#define RTE_TUNNEL_TYPE_MAX RTE_ETH_TUNNEL_TYPE_MAX
+#define RTE_TUNNEL_TYPE_NONE RTE_DEPRECATED(RTE_TUNNEL_TYPE_NONE) RTE_ETH_TUNNEL_TYPE_NONE
+#define RTE_TUNNEL_TYPE_VXLAN RTE_DEPRECATED(RTE_TUNNEL_TYPE_VXLAN) RTE_ETH_TUNNEL_TYPE_VXLAN
+#define RTE_TUNNEL_TYPE_GENEVE RTE_DEPRECATED(RTE_TUNNEL_TYPE_GENEVE) RTE_ETH_TUNNEL_TYPE_GENEVE
+#define RTE_TUNNEL_TYPE_TEREDO RTE_DEPRECATED(RTE_TUNNEL_TYPE_TEREDO) RTE_ETH_TUNNEL_TYPE_TEREDO
+#define RTE_TUNNEL_TYPE_NVGRE RTE_DEPRECATED(RTE_TUNNEL_TYPE_NVGRE) RTE_ETH_TUNNEL_TYPE_NVGRE
+#define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_DEPRECATED(RTE_TUNNEL_TYPE_IP_IN_GRE) RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
+#define RTE_L2_TUNNEL_TYPE_E_TAG RTE_DEPRECATED(RTE_L2_TUNNEL_TYPE_E_TAG) RTE_ETH_L2_TUNNEL_TYPE_E_TAG
+#define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_DEPRECATED(RTE_TUNNEL_TYPE_VXLAN_GPE) RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
+#define RTE_TUNNEL_TYPE_ECPRI RTE_DEPRECATED(RTE_TUNNEL_TYPE_ECPRI) RTE_ETH_TUNNEL_TYPE_ECPRI
+#define RTE_TUNNEL_TYPE_MAX RTE_DEPRECATED(RTE_TUNNEL_TYPE_MAX) RTE_ETH_TUNNEL_TYPE_MAX
/* Deprecated API file for rte_eth_dev_filter_* functions */
#include "rte_eth_ctrl.h"
@@ -1437,9 +1449,9 @@ enum rte_eth_fdir_pballoc_type {
};
#define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type
-#define RTE_FDIR_PBALLOC_64K RTE_ETH_FDIR_PBALLOC_64K
-#define RTE_FDIR_PBALLOC_128K RTE_ETH_FDIR_PBALLOC_128K
-#define RTE_FDIR_PBALLOC_256K RTE_ETH_FDIR_PBALLOC_256K
+#define RTE_FDIR_PBALLOC_64K RTE_DEPRECATED(RTE_FDIR_PBALLOC_64K) RTE_ETH_FDIR_PBALLOC_64K
+#define RTE_FDIR_PBALLOC_128K RTE_DEPRECATED(RTE_FDIR_PBALLOC_128K) RTE_ETH_FDIR_PBALLOC_128K
+#define RTE_FDIR_PBALLOC_256K RTE_DEPRECATED(RTE_FDIR_PBALLOC_256K) RTE_ETH_FDIR_PBALLOC_256K
/**
* Select report mode of FDIR hash information in Rx descriptors.
@@ -1466,7 +1478,6 @@ struct rte_eth_fdir_conf {
/** Flex payload configuration. */
struct rte_eth_fdir_flex_conf flex_conf;
};
-
#define rte_fdir_conf rte_eth_fdir_conf
/**
@@ -1545,57 +1556,58 @@ struct rte_eth_conf {
* Rx offload capabilities of a device.
*/
#define RTE_ETH_RX_OFFLOAD_VLAN_STRIP RTE_BIT64(0)
-#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_ETH_RX_OFFLOAD_VLAN_STRIP
#define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM RTE_BIT64(1)
-#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
#define RTE_ETH_RX_OFFLOAD_UDP_CKSUM RTE_BIT64(2)
-#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_ETH_RX_OFFLOAD_UDP_CKSUM
#define RTE_ETH_RX_OFFLOAD_TCP_CKSUM RTE_BIT64(3)
-#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_ETH_RX_OFFLOAD_TCP_CKSUM
#define RTE_ETH_RX_OFFLOAD_TCP_LRO RTE_BIT64(4)
-#define DEV_RX_OFFLOAD_TCP_LRO RTE_ETH_RX_OFFLOAD_TCP_LRO
#define RTE_ETH_RX_OFFLOAD_QINQ_STRIP RTE_BIT64(5)
-#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_ETH_RX_OFFLOAD_QINQ_STRIP
#define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(6)
-#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
#define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP RTE_BIT64(7)
-#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
#define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT RTE_BIT64(8)
-#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
#define RTE_ETH_RX_OFFLOAD_VLAN_FILTER RTE_BIT64(9)
-#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_ETH_RX_OFFLOAD_VLAN_FILTER
#define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND RTE_BIT64(10)
-#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
#define RTE_ETH_RX_OFFLOAD_SCATTER RTE_BIT64(13)
-#define DEV_RX_OFFLOAD_SCATTER RTE_ETH_RX_OFFLOAD_SCATTER
/**
* Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
#define RTE_ETH_RX_OFFLOAD_TIMESTAMP RTE_BIT64(14)
-#define DEV_RX_OFFLOAD_TIMESTAMP RTE_ETH_RX_OFFLOAD_TIMESTAMP
#define RTE_ETH_RX_OFFLOAD_SECURITY RTE_BIT64(15)
-#define DEV_RX_OFFLOAD_SECURITY RTE_ETH_RX_OFFLOAD_SECURITY
#define RTE_ETH_RX_OFFLOAD_KEEP_CRC RTE_BIT64(16)
-#define DEV_RX_OFFLOAD_KEEP_CRC RTE_ETH_RX_OFFLOAD_KEEP_CRC
#define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM RTE_BIT64(17)
-#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
#define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(18)
-#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
#define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19)
-#define DEV_RX_OFFLOAD_RSS_HASH RTE_ETH_RX_OFFLOAD_RSS_HASH
#define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20)
+#define DEV_RX_OFFLOAD_VLAN_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_STRIP) RTE_ETH_RX_OFFLOAD_VLAN_STRIP
+#define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_IPV4_CKSUM
+#define DEV_RX_OFFLOAD_UDP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_UDP_CKSUM) RTE_ETH_RX_OFFLOAD_UDP_CKSUM
+#define DEV_RX_OFFLOAD_TCP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_TCP_CKSUM) RTE_ETH_RX_OFFLOAD_TCP_CKSUM
+#define DEV_RX_OFFLOAD_TCP_LRO RTE_DEPRECATED(DEV_RX_OFFLOAD_TCP_LRO) RTE_ETH_RX_OFFLOAD_TCP_LRO
+#define DEV_RX_OFFLOAD_QINQ_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_QINQ_STRIP) RTE_ETH_RX_OFFLOAD_QINQ_STRIP
+#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM
+#define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_MACSEC_STRIP) RTE_ETH_RX_OFFLOAD_MACSEC_STRIP
+#define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_DEPRECATED(DEV_RX_OFFLOAD_HEADER_SPLIT) RTE_ETH_RX_OFFLOAD_HEADER_SPLIT
+#define DEV_RX_OFFLOAD_VLAN_FILTER RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_FILTER) RTE_ETH_RX_OFFLOAD_VLAN_FILTER
+#define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_EXTEND) RTE_ETH_RX_OFFLOAD_VLAN_EXTEND
+#define DEV_RX_OFFLOAD_SCATTER RTE_DEPRECATED(DEV_RX_OFFLOAD_SCATTER) RTE_ETH_RX_OFFLOAD_SCATTER
+#define DEV_RX_OFFLOAD_TIMESTAMP RTE_DEPRECATED(DEV_RX_OFFLOAD_TIMESTAMP) RTE_ETH_RX_OFFLOAD_TIMESTAMP
+#define DEV_RX_OFFLOAD_SECURITY RTE_DEPRECATED(DEV_RX_OFFLOAD_SECURITY) RTE_ETH_RX_OFFLOAD_SECURITY
+#define DEV_RX_OFFLOAD_KEEP_CRC RTE_DEPRECATED(DEV_RX_OFFLOAD_KEEP_CRC) RTE_ETH_RX_OFFLOAD_KEEP_CRC
+#define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_SCTP_CKSUM) RTE_ETH_RX_OFFLOAD_SCTP_CKSUM
+#define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_UDP_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM
+#define DEV_RX_OFFLOAD_RSS_HASH RTE_DEPRECATED(DEV_RX_OFFLOAD_RSS_HASH) RTE_ETH_RX_OFFLOAD_RSS_HASH
+
#define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \
RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \
RTE_ETH_RX_OFFLOAD_TCP_CKSUM)
-#define DEV_RX_OFFLOAD_CHECKSUM RTE_ETH_RX_OFFLOAD_CHECKSUM
+#define DEV_RX_OFFLOAD_CHECKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_CHECKSUM) RTE_ETH_RX_OFFLOAD_CHECKSUM
#define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \
RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
-#define DEV_RX_OFFLOAD_VLAN RTE_ETH_RX_OFFLOAD_VLAN
+#define DEV_RX_OFFLOAD_VLAN RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN) RTE_ETH_RX_OFFLOAD_VLAN
/*
* If new Rx offload capabilities are defined, they also must be
@@ -1606,80 +1618,81 @@ struct rte_eth_conf {
* Tx offload capabilities of a device.
*/
#define RTE_ETH_TX_OFFLOAD_VLAN_INSERT RTE_BIT64(0)
-#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_ETH_TX_OFFLOAD_VLAN_INSERT
#define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM RTE_BIT64(1)
-#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
#define RTE_ETH_TX_OFFLOAD_UDP_CKSUM RTE_BIT64(2)
-#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_ETH_TX_OFFLOAD_UDP_CKSUM
#define RTE_ETH_TX_OFFLOAD_TCP_CKSUM RTE_BIT64(3)
-#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_ETH_TX_OFFLOAD_TCP_CKSUM
#define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM RTE_BIT64(4)
-#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
#define RTE_ETH_TX_OFFLOAD_TCP_TSO RTE_BIT64(5)
-#define DEV_TX_OFFLOAD_TCP_TSO RTE_ETH_TX_OFFLOAD_TCP_TSO
#define RTE_ETH_TX_OFFLOAD_UDP_TSO RTE_BIT64(6)
-#define DEV_TX_OFFLOAD_UDP_TSO RTE_ETH_TX_OFFLOAD_UDP_TSO
#define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(7) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
#define RTE_ETH_TX_OFFLOAD_QINQ_INSERT RTE_BIT64(8)
-#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_ETH_TX_OFFLOAD_QINQ_INSERT
#define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO RTE_BIT64(9) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
#define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO RTE_BIT64(10) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
#define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO RTE_BIT64(11) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
#define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO RTE_BIT64(12) /**< Used for tunneling packet. */
-#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
#define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT RTE_BIT64(13)
-#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
/**
* Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
* Tx queue without SW lock.
*/
#define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE RTE_BIT64(14)
-#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
/** Device supports multi segment send. */
#define RTE_ETH_TX_OFFLOAD_MULTI_SEGS RTE_BIT64(15)
-#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_ETH_TX_OFFLOAD_MULTI_SEGS
/**
* Device supports optimization for fast release of mbufs.
* When set application must guarantee that per-queue all mbufs comes from
* the same mempool and has refcnt = 1.
*/
#define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE RTE_BIT64(16)
-#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
#define RTE_ETH_TX_OFFLOAD_SECURITY RTE_BIT64(17)
-#define DEV_TX_OFFLOAD_SECURITY RTE_ETH_TX_OFFLOAD_SECURITY
/**
* Device supports generic UDP tunneled packet TSO.
* Application must set RTE_MBUF_F_TX_TUNNEL_UDP and other mbuf fields required
* for tunnel TSO.
*/
#define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO RTE_BIT64(18)
-#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
/**
* Device supports generic IP tunneled packet TSO.
* Application must set RTE_MBUF_F_TX_TUNNEL_IP and other mbuf fields required
* for tunnel TSO.
*/
#define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO RTE_BIT64(19)
-#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
/** Device supports outer UDP checksum */
#define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(20)
-#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
/**
* Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME
* if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags.
* The mbuf field and flag are registered when the offload is configured.
*/
#define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_BIT64(21)
-#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
/*
* If new Tx offload capabilities are defined, they also must be
* mentioned in rte_tx_offload_names in rte_ethdev.c file.
*/
+#define DEV_TX_OFFLOAD_VLAN_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_VLAN_INSERT) RTE_ETH_TX_OFFLOAD_VLAN_INSERT
+#define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_IPV4_CKSUM) RTE_ETH_TX_OFFLOAD_IPV4_CKSUM
+#define DEV_TX_OFFLOAD_UDP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_CKSUM) RTE_ETH_TX_OFFLOAD_UDP_CKSUM
+#define DEV_TX_OFFLOAD_TCP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_TCP_CKSUM) RTE_ETH_TX_OFFLOAD_TCP_CKSUM
+#define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_SCTP_CKSUM) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM
+#define DEV_TX_OFFLOAD_TCP_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_TCP_TSO) RTE_ETH_TX_OFFLOAD_TCP_TSO
+#define DEV_TX_OFFLOAD_UDP_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_TSO) RTE_ETH_TX_OFFLOAD_UDP_TSO
+#define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM
+#define DEV_TX_OFFLOAD_QINQ_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_QINQ_INSERT) RTE_ETH_TX_OFFLOAD_QINQ_INSERT
+#define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_VXLAN_TNL_TSO) RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO
+#define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_GRE_TNL_TSO) RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO
+#define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_IPIP_TNL_TSO) RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO
+#define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_GENEVE_TNL_TSO) RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO
+#define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_MACSEC_INSERT) RTE_ETH_TX_OFFLOAD_MACSEC_INSERT
+#define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_DEPRECATED(DEV_TX_OFFLOAD_MT_LOCKFREE) RTE_ETH_TX_OFFLOAD_MT_LOCKFREE
+#define DEV_TX_OFFLOAD_MULTI_SEGS RTE_DEPRECATED(DEV_TX_OFFLOAD_MULTI_SEGS) RTE_ETH_TX_OFFLOAD_MULTI_SEGS
+#define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_DEPRECATED(DEV_TX_OFFLOAD_MBUF_FAST_FREE) RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE
+#define DEV_TX_OFFLOAD_SECURITY RTE_DEPRECATED(DEV_TX_OFFLOAD_SECURITY) RTE_ETH_TX_OFFLOAD_SECURITY
+#define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_TNL_TSO) RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO
+#define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_IP_TNL_TSO) RTE_ETH_TX_OFFLOAD_IP_TNL_TSO
+#define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM
+#define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_DEPRECATED(DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP
+
/**@{@name Device capabilities
* Non-offload capabilities reported in rte_eth_dev_info.dev_capa.
*/
@@ -1931,9 +1944,10 @@ struct rte_eth_xstat_name {
};
#define RTE_ETH_DCB_NUM_TCS 8
-#define ETH_DCB_NUM_TCS RTE_ETH_DCB_NUM_TCS
#define RTE_ETH_MAX_VMDQ_POOL 64
-#define ETH_MAX_VMDQ_POOL RTE_ETH_MAX_VMDQ_POOL
+
+#define ETH_DCB_NUM_TCS RTE_DEPRECATED(ETH_DCB_NUM_TCS) RTE_ETH_DCB_NUM_TCS
+#define ETH_MAX_VMDQ_POOL RTE_DEPRECATED(ETH_MAX_VMDQ_POOL) RTE_ETH_MAX_VMDQ_POOL
/**
* A structure used to get the information of queue and
--
2.34.1
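The pattern above relies on RTE_DEPRECATED from rte_common.h which, assuming its usual definition as a GCC warning pragma, makes every use of a legacy name noisy at build time while still expanding to the new value. A minimal sketch of the effect:

/* Sketch only; assumes rte_common.h defines RTE_DEPRECATED(x) as a
 * _Pragma-based "GCC warning" that fires at each expansion site. */
#include <rte_ethdev.h>

void
rss_example(struct rte_eth_conf *conf)
{
	/* Old name: still compiles, but warns "ETH_RSS_IP is deprecated". */
	conf->rx_adv_conf.rss_conf.rss_hf = ETH_RSS_IP;
	/* New name: no warning. */
	conf->rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP;
}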
* [PATCH v5 1/2] eal: add API for bus close
@ 2022-01-10 5:26 3% ` rohit.raj
2022-02-09 11:04 3% ` David Marchand
0 siblings, 1 reply; 200+ results
From: rohit.raj @ 2022-01-10 5:26 UTC (permalink / raw)
To: Bruce Richardson, Ray Kinsella, Dmitry Kozlyuk,
Narcisa Ana Maria Vasile, Dmitry Malloy, Pallavi Kadam
Cc: dev, nipun.gupta, sachin.saxena, hemant.agrawal, Rohit Raj
From: Rohit Raj <rohit.raj@nxp.com>
The current code provides an API for bus probe, but the
corresponding bus close API is missing. This breaks
multi-process scenarios, as objects are not cleaned up
when secondary processes terminate.
This patch adds a new API, rte_bus_close(), to clean up
bus objects which were acquired during probe.
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
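A usage sketch, assuming only the behaviour described above: in the normal
teardown path an application does not call rte_bus_close() itself; it is
invoked from rte_eal_cleanup():

#include <rte_eal.h>

int
main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* ... application work; buses were probed during rte_eal_init() ... */

	/* rte_eal_cleanup() now calls rte_bus_close(), releasing bus
	 * objects that were acquired during probe. */
	return rte_eal_cleanup();
}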
---
Rebased on this patch series:
https://patches.dpdk.org/project/dpdk/list/?series=21049
v5:
* Updated release notes for new feature and API change.
* Added support for error checking while closing bus.
* Added experimental banner for new API.
* Squashed changes related to freebsd and windows into single patch.
* Discarded patch to fix a bug which is already fixed on latest
release.
v4:
* Added comments to clarify responsibility of rte_bus_close.
* Added support for rte_bus_close on freebsd.
* Added support for rte_bus_close on windows.
v3:
* nit: combined nested if statements.
v2:
* Moved rte_bus_close call to rte_eal_cleanup path.
doc/guides/rel_notes/release_22_03.rst | 8 +++++++
lib/eal/common/eal_common_bus.c | 33 +++++++++++++++++++++++++-
lib/eal/freebsd/eal.c | 1 +
lib/eal/include/rte_bus.h | 30 ++++++++++++++++++++++-
lib/eal/linux/eal.c | 8 +++++++
lib/eal/version.map | 3 +++
lib/eal/windows/eal.c | 1 +
7 files changed, 82 insertions(+), 2 deletions(-)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..7417606a2a 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,11 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+ * **Added support to close bus.**
+
+ Added the capability for a user to clean up bus objects which
+ were acquired during bus probe.
+
Removed Items
-------------
@@ -84,6 +89,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+ * eal: Added new API ``rte_bus_close`` to perform cleanup of bus objects
+ which were acquired during bus probe.
+
ABI Changes
-----------
diff --git a/lib/eal/common/eal_common_bus.c b/lib/eal/common/eal_common_bus.c
index baa5b532af..2c3c0a90d2 100644
--- a/lib/eal/common/eal_common_bus.c
+++ b/lib/eal/common/eal_common_bus.c
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2016 NXP
+ * Copyright 2016,2022 NXP
*/
#include <stdio.h>
@@ -85,6 +85,37 @@ rte_bus_probe(void)
return 0;
}
+/* Close all devices of all buses */
+int
+rte_bus_close(void)
+{
+ int ret;
+ struct rte_bus *bus, *vbus = NULL;
+
+ TAILQ_FOREACH(bus, &rte_bus_list, next) {
+ if (!strcmp(bus->name, "vdev")) {
+ vbus = bus;
+ continue;
+ }
+
+ if (bus->close) {
+ ret = bus->close();
+ if (ret)
+ RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
+ bus->name);
+ }
+ }
+
+ if (vbus && vbus->close) {
+ ret = vbus->close();
+ if (ret)
+ RTE_LOG(ERR, EAL, "Bus (%s) close failed.\n",
+ vbus->name);
+ }
+
+ return 0;
+}
+
/* Dump information of a single bus */
static int
bus_dump_one(FILE *f, struct rte_bus *bus)
diff --git a/lib/eal/freebsd/eal.c b/lib/eal/freebsd/eal.c
index a1cd2462db..87d70c6898 100644
--- a/lib/eal/freebsd/eal.c
+++ b/lib/eal/freebsd/eal.c
@@ -984,6 +984,7 @@ rte_eal_cleanup(void)
{
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ rte_bus_close();
rte_service_finalize();
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
diff --git a/lib/eal/include/rte_bus.h b/lib/eal/include/rte_bus.h
index bbbb6efd28..c6211bbd95 100644
--- a/lib/eal/include/rte_bus.h
+++ b/lib/eal/include/rte_bus.h
@@ -1,5 +1,5 @@
/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2016 NXP
+ * Copyright 2016,2022 NXP
*/
#ifndef _RTE_BUS_H_
@@ -66,6 +66,23 @@ typedef int (*rte_bus_scan_t)(void);
*/
typedef int (*rte_bus_probe_t)(void);
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Implementation-specific close function which is responsible for resetting all
+ * detected devices on the bus to a default state, closing UIO nodes or VFIO
+ * groups, and freeing any memory allocated during rte_bus_probe, such as
+ * private resources for the device list.
+ *
+ * This is called while iterating over each registered bus.
+ *
+ * @return
+ * 0 for successful close
+ * !0 for any error while closing
+ */
+typedef int (*rte_bus_close_t)(void);
+
/**
* Device iterator to find a device on a bus.
*
@@ -263,6 +280,7 @@ struct rte_bus {
const char *name; /**< Name of the bus */
rte_bus_scan_t scan; /**< Scan for devices attached to bus */
rte_bus_probe_t probe; /**< Probe devices on bus */
+ rte_bus_close_t close; /**< Close devices on bus */
rte_bus_find_device_t find_device; /**< Find a device on the bus */
rte_bus_plug_t plug; /**< Probe single device for drivers */
rte_bus_unplug_t unplug; /**< Remove single device from driver */
@@ -317,6 +335,16 @@ int rte_bus_scan(void);
*/
int rte_bus_probe(void);
+/**
+ * For each device on the buses, call the device specific close.
+ *
+ * @return
+ * 0 for successful close
+ * !0 otherwise
+ */
+__rte_experimental
+int rte_bus_close(void);
+
/**
* Dump information of all the buses registered with EAL.
*
diff --git a/lib/eal/linux/eal.c b/lib/eal/linux/eal.c
index 60b4924838..5c60131e46 100644
--- a/lib/eal/linux/eal.c
+++ b/lib/eal/linux/eal.c
@@ -1362,6 +1362,14 @@ rte_eal_cleanup(void)
if (rte_eal_process_type() == RTE_PROC_PRIMARY)
rte_memseg_walk(mark_freeable, NULL);
+
+ /* Close all the buses and devices/drivers on them */
+ if (rte_bus_close()) {
+ rte_eal_init_alert("Cannot close devices");
+ rte_errno = ENOTSUP;
+ return -1;
+ }
+
rte_service_finalize();
rte_mp_channel_cleanup();
/* after this point, any DPDK pointers will become dangling */
diff --git a/lib/eal/version.map b/lib/eal/version.map
index ab28c22791..39882dbbd5 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -420,6 +420,9 @@ EXPERIMENTAL {
rte_intr_instance_free;
rte_intr_type_get;
rte_intr_type_set;
+
+ # added in 22.03
+ rte_bus_close;
};
INTERNAL {
diff --git a/lib/eal/windows/eal.c b/lib/eal/windows/eal.c
index 67db7f099a..5915ab6291 100644
--- a/lib/eal/windows/eal.c
+++ b/lib/eal/windows/eal.c
@@ -260,6 +260,7 @@ rte_eal_cleanup(void)
struct internal_config *internal_conf =
eal_get_internal_configuration();
+ rte_bus_close();
eal_intr_thread_cancel();
eal_mem_virt2iova_cleanup();
/* after this point, any DPDK pointers will become dangling */
--
2.17.1
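A minimal sketch of how a bus driver could implement the new callback (the
bus name and stubs here are hypothetical, not taken from the patch):

#include <rte_bus.h>

static int my_bus_scan(void) { return 0; }	/* hypothetical stub */
static int my_bus_probe(void) { return 0; }	/* hypothetical stub */

static struct rte_device *
my_bus_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
		const void *data)
{
	(void)start; (void)cmp; (void)data;
	return NULL;	/* hypothetical stub; a real bus walks its device list */
}

static int
my_bus_close(void)
{
	/* Undo what probe acquired: free per-device private data, close
	 * UIO/VFIO handles, reset devices to a default state. */
	return 0;
}

static struct rte_bus my_bus = {
	.name = "my_bus",
	.scan = my_bus_scan,
	.probe = my_bus_probe,
	.close = my_bus_close,	/* new callback introduced by this patch */
	.find_device = my_bus_find_device,
};

RTE_REGISTER_BUS(my_bus, my_bus);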
* [PATCH v5 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
@ 2022-01-06 16:03 3% ` Xiaoyun Li
0 siblings, 0 replies; 200+ results
From: Xiaoyun Li @ 2022-01-06 16:03 UTC (permalink / raw)
To: Aman.Deep.Singh, ferruh.yigit, olivier.matz, mb,
konstantin.ananyev, stephen, vladimir.medvedkin
Cc: dev, Xiaoyun Li, Aman Singh, Sunil Pai G
Add functions that use rte_raw_cksum_mbuf() to calculate the IPv4/6
UDP/TCP checksum of an mbuf, which may span multiple segments.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Tested-by: Sunil Pai G <sunil.pai.g@intel.com>
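A short usage sketch (it assumes the Ethernet/IPv4/TCP headers sit
contiguously in the first segment and that m->l2_len/m->l3_len are set):
unlike rte_ipv4_udptcp_cksum(), the mbuf variants walk every segment, so
the L4 payload may be split across a chain:

#include <rte_ip.h>
#include <rte_mbuf.h>
#include <rte_tcp.h>

static void
set_tcp_cksum(struct rte_mbuf *m)
{
	uint16_t l4_off = m->l2_len + m->l3_len;
	struct rte_ipv4_hdr *ip4 = rte_pktmbuf_mtod_offset(m,
			struct rte_ipv4_hdr *, m->l2_len);
	struct rte_tcp_hdr *tcp = rte_pktmbuf_mtod_offset(m,
			struct rte_tcp_hdr *, l4_off);

	tcp->cksum = 0;	/* the L4 checksum field must be zero beforehand */
	tcp->cksum = rte_ipv4_udptcp_cksum_mbuf(m, ip4, l4_off);
}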
---
doc/guides/rel_notes/release_22_03.rst | 11 ++
lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
2 files changed, 197 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..785fd22001 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,14 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added functions to calculate UDP/TCP checksum in mbuf.**
+
+ * Added the following functions to calculate UDP/TCP checksum of packets
+ which can be over multi-segments:
+ - ``rte_ipv4_udptcp_cksum_mbuf()``
+ - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
+ - ``rte_ipv6_udptcp_cksum_mbuf()``
+ - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
Removed Items
-------------
@@ -84,6 +92,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
+ ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
+ ``rte_ipv6_udptcp_cksum_mbuf_verify()``
ABI Changes
-----------
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index c575250852..534f401d26 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Compute the IPv4 UDP/TCP checksum of a packet.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv4 UDP or TCP checksum.
*
@@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Verify the IPv4 UDP/TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
+ * (i.e. no checksum).
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/**
* IPv6 Header
*/
@@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process the IPv6 UDP or TCP checksum of a packet.
+ *
+ * The IPv6 header must not be followed by extension headers. The layer 4
+ * checksum must be set to 0 in the L4 header by the caller.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv6 UDP or TCP checksum.
*
@@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Validate the IPv6 UDP or TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
+ * this is either invalid or means no checksum in some situations. See 8.1
+ * (Upper-Layer Checksums) in RFC 8200.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/** IPv6 fragment extension header. */
#define RTE_IPV6_EHDR_MF_SHIFT 0
#define RTE_IPV6_EHDR_MF_MASK 1
--
2.25.1
* Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
2022-01-04 15:40 0% ` Li, Xiaoyun
@ 2022-01-06 12:56 0% ` Singh, Aman Deep
0 siblings, 0 replies; 200+ results
From: Singh, Aman Deep @ 2022-01-06 12:56 UTC (permalink / raw)
To: Li, Xiaoyun, Yigit, Ferruh, olivier.matz, mb, Ananyev,
Konstantin, stephen, Medvedkin, Vladimir
Cc: dev
On 1/4/2022 9:10 PM, Li, Xiaoyun wrote:
> Hi
>
>> -----Original Message-----
>> From: Li, Xiaoyun <xiaoyun.li@intel.com>
>> Sent: Tuesday, January 4, 2022 15:19
>> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Yigit, Ferruh
>> <ferruh.yigit@intel.com>; olivier.matz@6wind.com;
>> mb@smartsharesystems.com; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>; stephen@networkplumber.org;
>> Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>> Cc: dev@dpdk.org
>> Subject: RE: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in
>> mbuf
>>
>> Hi
>>
>>> -----Original Message-----
>>> From: Singh, Aman Deep <aman.deep.singh@intel.com>
>>> Sent: Wednesday, December 15, 2021 11:34
>>> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Yigit, Ferruh
>>> <ferruh.yigit@intel.com>; olivier.matz@6wind.com;
>>> mb@smartsharesystems.com; Ananyev, Konstantin
>>> <konstantin.ananyev@intel.com>; stephen@networkplumber.org;
>>> Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
>>> Cc: dev@dpdk.org
>>> Subject: Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP
>>> cksum in mbuf
>>>
>>>
>>> On 12/3/2021 5:08 PM, Xiaoyun Li wrote:
>>>> Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
>>>> UDP/TCP checksum in mbuf which can be over multi-segments.
>>>>
>>>> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
>>>> ---
>>>> doc/guides/rel_notes/release_22_03.rst | 10 ++
>>>> lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
>>>> lib/net/version.map | 10 ++
>>>> 3 files changed, 206 insertions(+)
>>>>
>>>> diff --git a/doc/guides/rel_notes/release_22_03.rst
>>>> b/doc/guides/rel_notes/release_22_03.rst
>>>> index 6d99d1eaa9..7a082c4427 100644
>>>> --- a/doc/guides/rel_notes/release_22_03.rst
>>>> +++ b/doc/guides/rel_notes/release_22_03.rst
>>>> @@ -55,6 +55,13 @@ New Features
>>>> Also, make sure to start the actual text at the margin.
>>>> =======================================================
>>>>
>>>> +* **Added functions to calculate UDP/TCP checksum in mbuf.**
>>>> + * Added the following functions to calculate UDP/TCP checksum of
>>> packets
>>>> + which can be over multi-segments:
>>>> + - ``rte_ipv4_udptcp_cksum_mbuf()``
>>>> + - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
>>>> + - ``rte_ipv6_udptcp_cksum_mbuf()``
>>>> + - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
>>>>
>>>> Removed Items
>>>> -------------
>>>> @@ -84,6 +91,9 @@ API Changes
>>>> Also, make sure to start the actual text at the margin.
>>>> =======================================================
>>>>
>>>> +* net: added experimental functions
>>>> +``rte_ipv4_udptcp_cksum_mbuf()``,
>>>> + ``rte_ipv4_udptcp_cksum_mbuf_verify()``,
>>>> +``rte_ipv6_udptcp_cksum_mbuf()``,
>>>> + ``rte_ipv6_udptcp_cksum_mbuf_verify()``
>>>>
>>>> ABI Changes
>>>> -----------
>>>> diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h index
>>>> c575250852..534f401d26 100644
>>>> --- a/lib/net/rte_ip.h
>>>> +++ b/lib/net/rte_ip.h
>>>> @@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct
>> rte_ipv4_hdr
>>> *ipv4_hdr, const void *l4_hdr)
>>>> return cksum;
>>>> }
>>>>
>>>> +/**
>>>> + * @internal Calculate the non-complemented IPv4 L4 checksum of a
>>>> +packet */ static inline uint16_t
>>>> +__rte_ipv4_udptcp_cksum_mbuf(const
>>>> +struct rte_mbuf *m,
>>>> + const struct rte_ipv4_hdr *ipv4_hdr,
>>>> + uint16_t l4_off)
>>>> +{
>>>> + uint16_t raw_cksum;
>>>> + uint32_t cksum;
>>>> +
>>>> + if (l4_off > m->pkt_len)
>>>> + return 0;
>>>> +
>>>> + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off,
>>> &raw_cksum))
>>>> + return 0;
>>>> +
>>>> + cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
>>>> +
>>>> + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
>>> At times, even after above operation "cksum" might stay above 16-bits,
>>> ex "cksum = 0x1FFFF" to start with.
>>> Can we consider using "return __rte_raw_cksum_reduce(cksum);"
>> Will use it in next version. Thanks.
>>
>> Also, not related to this patch. It means that __rte_ipv4_udptcp_cksum and
>> __rte_ipv6_udptcp_cksum have the same issue, right?
>> Should anyone fix that?
> Forgot the intent here.
> rte_raw_cksum_mbuf() already calls __rte_raw_cksum_reduce().
> So actually, it's the sum of two uint16_t values, so the case you describe cannot occur. There's no need to call __rte_raw_cksum_reduce().
Got it, thanks. With u16 + u16, at most a 1-bit overflow is possible, so
the effective operation here reduces to:
cksum = ((cksum & 0x10000) >> 16) + (cksum & 0xffff);
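A tiny self-contained check of that claim (a sketch, not part of the patch):
the worst case for u16 + u16 is 0xffff + 0xffff = 0x1fffe, and one fold
already brings it back under 16 bits:

#include <assert.h>
#include <stdint.h>

int
main(void)
{
	uint32_t cksum = (uint32_t)UINT16_MAX + UINT16_MAX;	/* 0x1fffe */

	cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
	assert(cksum <= UINT16_MAX);	/* 0xffff: a single fold suffices */
	return 0;
}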
>>>> +
>>>> + return (uint16_t)cksum;
>>>> +}
>>>> +
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Compute the IPv4 UDP/TCP checksum of a packet.
>>>> + *
>>>> + * @param m
>>>> + * The pointer to the mbuf.
>>>> + * @param ipv4_hdr
>>>> + * The pointer to the contiguous IPv4 header.
>>>> + * @param l4_off
>>>> + * The offset in bytes to start L4 checksum.
>>>> + * @return
>>>> + * The complemented checksum to set in the L4 header.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
>>>> + const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
>>> {
>>>> + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr,
>>> l4_off);
>>>> +
>>>> + cksum = ~cksum;
>>>> +
>>>> + /*
>>>> + * Per RFC 768: If the computed checksum is zero for UDP,
>>>> + * it is transmitted as all ones
>>>> + * (the equivalent in one's complement arithmetic).
>>>> + */
>>>> + if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
>>>> + cksum = 0xffff;
>>>> +
>>>> + return cksum;
>>>> +}
>>>> +
>>>> /**
>>>> * Validate the IPv4 UDP or TCP checksum.
>>>> *
>>>> @@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct
>>> rte_ipv4_hdr *ipv4_hdr,
>>>> return 0;
>>>> }
>>>>
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Verify the IPv4 UDP/TCP checksum of a packet.
>>>> + *
>>>> + * In case of UDP, the caller must first check if
>>>> +udp_hdr->dgram_cksum is 0
>>>> + * (i.e. no checksum).
>>>> + *
>>>> + * @param m
>>>> + * The pointer to the mbuf.
>>>> + * @param ipv4_hdr
>>>> + * The pointer to the contiguous IPv4 header.
>>>> + * @param l4_off
>>>> + * The offset in bytes to start L4 checksum.
>>>> + * @return
>>>> + * Return 0 if the checksum is correct, else -1.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
>>>> + const struct rte_ipv4_hdr *ipv4_hdr,
>>>> + uint16_t l4_off)
>>>> +{
>>>> + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr,
>>> l4_off);
>>>> +
>>>> + if (cksum != 0xffff)
>>>> + return -1;
>>> A cksum other than 0xffff should return an error. Is that the intent, or
>>> am I missing something obvious?
>> This is the intent. This function verifies whether the cksum in the packet is correct.
>>
>> It's different from calling rte_ipv4/6_udptcp_cksum_mbuf(). When calling
>> rte_ipv4/6_udptcp_cksum_mbuf(), you need to set the cksum in the udp/tcp
>> header to 0 first, then calculate the cksum.
>>
>> But here, the user should call this function directly on the original packet.
>> If the udp/tcp cksum is correct, then after the calculation (note that this
>> calls __rte_ipv4_udptcp_cksum_mbuf(), so the result still needs to be
>> complemented), it should be 0xffff, i.e. ~cksum == 0, which means the cksum
>> is correct. You can see rte_ipv4/6_udptcp_cksum_verify() does the same.
>>
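To make the distinction concrete, a minimal sketch of the two flows
(assuming m, ipv4_hdr, udp_hdr and l4_off have already been set up by the
caller, with all headers contiguous):

/* Compute: zero the L4 checksum field first, then fill it in. */
udp_hdr->dgram_cksum = 0;
udp_hdr->dgram_cksum = rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);

/* Verify: call it on the untouched packet; a correct checksum makes the
 * internal raw sum come out as 0xffff, so the helper returns 0. */
int cksum_ok = (rte_ipv4_udptcp_cksum_mbuf_verify(m, ipv4_hdr, l4_off) == 0);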
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> /**
>>>> * IPv6 Header
>>>> */
>>>> @@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct
>> rte_ipv6_hdr
>>> *ipv6_hdr, const void *l4_hdr)
>>>> return cksum;
>>>> }
>>>>
>>>> +/**
>>>> + * @internal Calculate the non-complemented IPv6 L4 checksum of a
>>>> +packet */ static inline uint16_t
>>>> +__rte_ipv6_udptcp_cksum_mbuf(const
>>>> +struct rte_mbuf *m,
>>>> + const struct rte_ipv6_hdr *ipv6_hdr,
>>>> + uint16_t l4_off)
>>>> +{
>>>> + uint16_t raw_cksum;
>>>> + uint32_t cksum;
>>>> +
>>>> + if (l4_off > m->pkt_len)
>>>> + return 0;
>>>> +
>>>> + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off,
>>> &raw_cksum))
>>>> + return 0;
>>>> +
>>>> + cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
>>>> +
>>>> + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
>>> Same, please check if we can opt for __rte_raw_cksum_reduce(cksum)
>>>> +
>>>> + return (uint16_t)cksum;
>>>> +}
>>>> +
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Process the IPv6 UDP or TCP checksum of a packet.
>>>> + *
>>>> + * The IPv6 header must not be followed by extension headers. The
>>>> +layer 4
>>>> + * checksum must be set to 0 in the L4 header by the caller.
>>>> + *
>>>> + * @param m
>>>> + * The pointer to the mbuf.
>>>> + * @param ipv6_hdr
>>>> + * The pointer to the contiguous IPv6 header.
>>>> + * @param l4_off
>>>> + * The offset in bytes to start L4 checksum.
>>>> + * @return
>>>> + * The complemented checksum to set in the L4 header.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline uint16_t
>>>> +rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
>>>> + const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
>>> {
>>>> + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr,
>>> l4_off);
>>>> +
>>>> + cksum = ~cksum;
>>>> +
>>>> + /*
>>>> + * Per RFC 768: If the computed checksum is zero for UDP,
>>>> + * it is transmitted as all ones
>>>> + * (the equivalent in one's complement arithmetic).
>>>> + */
>>>> + if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
>>>> + cksum = 0xffff;
>>>> +
>>>> + return cksum;
>>>> +}
>>>> +
>>>> /**
>>>> * Validate the IPv6 UDP or TCP checksum.
>>>> *
>>>> @@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct
>>> rte_ipv6_hdr *ipv6_hdr,
>>>> return 0;
>>>> }
>>>>
>>>> +/**
>>>> + * @warning
>>>> + * @b EXPERIMENTAL: this API may change without prior notice.
>>>> + *
>>>> + * Validate the IPv6 UDP or TCP checksum of a packet.
>>>> + *
>>>> + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is
>> 0:
>>>> + * this is either invalid or means no checksum in some situations.
>>>> +See 8.1
>>>> + * (Upper-Layer Checksums) in RFC 8200.
>>>> + *
>>>> + * @param m
>>>> + * The pointer to the mbuf.
>>>> + * @param ipv6_hdr
>>>> + * The pointer to the contiguous IPv6 header.
>>>> + * @param l4_off
>>>> + * The offset in bytes to start L4 checksum.
>>>> + * @return
>>>> + * Return 0 if the checksum is correct, else -1.
>>>> + */
>>>> +__rte_experimental
>>>> +static inline int
>>>> +rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
>>>> + const struct rte_ipv6_hdr *ipv6_hdr,
>>>> + uint16_t l4_off)
>>>> +{
>>>> + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr,
>>> l4_off);
>>>> +
>>>> + if (cksum != 0xffff)
>>>> + return -1;
>>>> +
>>>> + return 0;
>>>> +}
>>>> +
>>>> /** IPv6 fragment extension header. */
>>>> #define RTE_IPV6_EHDR_MF_SHIFT 0
>>>> #define RTE_IPV6_EHDR_MF_MASK 1
>>>> diff --git a/lib/net/version.map b/lib/net/version.map index
>>>> 4f4330d1c4..0f2aacdef8 100644
>>>> --- a/lib/net/version.map
>>>> +++ b/lib/net/version.map
>>>> @@ -12,3 +12,13 @@ DPDK_22 {
>>>>
>>>> local: *;
>>>> };
>>>> +
>>>> +EXPERIMENTAL {
>>>> + global:
>>>> +
>>>> + # added in 22.03
>>>> + rte_ipv4_udptcp_cksum_mbuf;
>>>> + rte_ipv4_udptcp_cksum_mbuf_verify;
>>>> + rte_ipv6_udptcp_cksum_mbuf;
>>>> + rte_ipv6_udptcp_cksum_mbuf_verify;
>>>> +};
* RE: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
2022-01-04 15:18 0% ` Li, Xiaoyun
@ 2022-01-04 15:40 0% ` Li, Xiaoyun
2022-01-06 12:56 0% ` Singh, Aman Deep
0 siblings, 1 reply; 200+ results
From: Li, Xiaoyun @ 2022-01-04 15:40 UTC (permalink / raw)
To: Li, Xiaoyun, Singh, Aman Deep, Yigit, Ferruh, olivier.matz, mb,
Ananyev, Konstantin, stephen, Medvedkin, Vladimir
Cc: dev
Hi
> -----Original Message-----
> From: Li, Xiaoyun <xiaoyun.li@intel.com>
> Sent: Tuesday, January 4, 2022 15:19
> To: Singh, Aman Deep <aman.deep.singh@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; olivier.matz@6wind.com;
> mb@smartsharesystems.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; stephen@networkplumber.org;
> Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in
> mbuf
>
> Hi
>
> > -----Original Message-----
> > From: Singh, Aman Deep <aman.deep.singh@intel.com>
> > Sent: Wednesday, December 15, 2021 11:34
> > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Yigit, Ferruh
> > <ferruh.yigit@intel.com>; olivier.matz@6wind.com;
> > mb@smartsharesystems.com; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>; stephen@networkplumber.org;
> > Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> > Cc: dev@dpdk.org
> > Subject: Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP
> > cksum in mbuf
> >
> >
> > On 12/3/2021 5:08 PM, Xiaoyun Li wrote:
> > > Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
> > > UDP/TCP checksum in mbuf which can be over multi-segments.
> > >
> > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > > ---
> > > doc/guides/rel_notes/release_22_03.rst | 10 ++
> > > lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
> > > lib/net/version.map | 10 ++
> > > 3 files changed, 206 insertions(+)
> > >
> > > diff --git a/doc/guides/rel_notes/release_22_03.rst
> > > b/doc/guides/rel_notes/release_22_03.rst
> > > index 6d99d1eaa9..7a082c4427 100644
> > > --- a/doc/guides/rel_notes/release_22_03.rst
> > > +++ b/doc/guides/rel_notes/release_22_03.rst
> > > @@ -55,6 +55,13 @@ New Features
> > > Also, make sure to start the actual text at the margin.
> > > =======================================================
> > >
> > > +* **Added functions to calculate UDP/TCP checksum in mbuf.**
> > > + * Added the following functions to calculate UDP/TCP checksum of packets
> > > + which can be over multi-segments:
> > > + - ``rte_ipv4_udptcp_cksum_mbuf()``
> > > + - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
> > > + - ``rte_ipv6_udptcp_cksum_mbuf()``
> > > + - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
> > >
> > > Removed Items
> > > -------------
> > > @@ -84,6 +91,9 @@ API Changes
> > > Also, make sure to start the actual text at the margin.
> > > =======================================================
> > >
> > > +* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
> > > + ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
> > > + ``rte_ipv6_udptcp_cksum_mbuf_verify()``
> > >
> > > ABI Changes
> > > -----------
> > > diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
> > > index c575250852..534f401d26 100644
> > > --- a/lib/net/rte_ip.h
> > > +++ b/lib/net/rte_ip.h
> > > @@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
> > > return cksum;
> > > }
> > >
> > > +/**
> > > + * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
> > > + */
> > > +static inline uint16_t
> > > +__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > > + const struct rte_ipv4_hdr *ipv4_hdr,
> > > + uint16_t l4_off)
> > > +{
> > > + uint16_t raw_cksum;
> > > + uint32_t cksum;
> > > +
> > > + if (l4_off > m->pkt_len)
> > > + return 0;
> > > +
> > > + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
> > > + return 0;
> > > +
> > > + cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
> > > +
> > > + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
> > At times, even after the above operation, "cksum" might stay above 16 bits,
> > e.g. "cksum = 0x1FFFF" to start with.
> > Can we consider using "return __rte_raw_cksum_reduce(cksum);"?
>
> Will use it in next version. Thanks.
>
> Also, not related to this patch: it means that __rte_ipv4_udptcp_cksum and
> __rte_ipv6_udptcp_cksum have the same issue, right?
> Should someone fix that?
Please ignore my earlier reply; I had forgotten the intent here.
rte_raw_cksum_mbuf() already calls __rte_raw_cksum_reduce(), so raw_cksum is already a reduced uint16_t.
The addition above is therefore uint16_t + uint16_t, which is at most 0x1fffe, and a single fold brings that back into 16 bits. Your 0x1FFFF case cannot occur, so there's no need to call __rte_raw_cksum_reduce() here.
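To make the bound concrete, here is a minimal standalone sketch (not part of the patch) of why the single fold is enough when both inputs are 16-bit values:

  #include <assert.h>
  #include <stdint.h>

  int main(void)
  {
          /* Worst case: both 16-bit inputs are 0xffff. */
          uint32_t cksum = (uint32_t)0xffff + (uint32_t)0xffff; /* 0x1fffe */

          /* The single fold used in __rte_ipv4_udptcp_cksum_mbuf(). */
          cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);

          /* 0x1 + 0xfffe == 0xffff: one fold always fits in 16 bits here. */
          assert(cksum <= 0xffff);
          return 0;
  }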
>
> > > +
> > > + return (uint16_t)cksum;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Compute the IPv4 UDP/TCP checksum of a packet.
> > > + *
> > > + * @param m
> > > + * The pointer to the mbuf.
> > > + * @param ipv4_hdr
> > > + * The pointer to the contiguous IPv4 header.
> > > + * @param l4_off
> > > + * The offset in bytes to start L4 checksum.
> > > + * @return
> > > + * The complemented checksum to set in the L4 header.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > > + const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
> > > +{
> > > + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
> > > +
> > > + cksum = ~cksum;
> > > +
> > > + /*
> > > + * Per RFC 768: If the computed checksum is zero for UDP,
> > > + * it is transmitted as all ones
> > > + * (the equivalent in one's complement arithmetic).
> > > + */
> > > + if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
> > > + cksum = 0xffff;
> > > +
> > > + return cksum;
> > > +}
> > > +
> > > /**
> > > * Validate the IPv4 UDP or TCP checksum.
> > > *
> > > @@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
> > > return 0;
> > > }
> > >
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Verify the IPv4 UDP/TCP checksum of a packet.
> > > + *
> > > + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
> > > + * (i.e. no checksum).
> > > + *
> > > + * @param m
> > > + * The pointer to the mbuf.
> > > + * @param ipv4_hdr
> > > + * The pointer to the contiguous IPv4 header.
> > > + * @param l4_off
> > > + * The offset in bytes to start L4 checksum.
> > > + * @return
> > > + * Return 0 if the checksum is correct, else -1.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> > > + const struct rte_ipv4_hdr *ipv4_hdr,
> > > + uint16_t l4_off)
> > > +{
> > > + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
> > > +
> > > + if (cksum != 0xffff)
> > > + return -1;
> > A cksum other than 0xffff returns an error here. Is that the intent, or am
> > I missing something obvious?
>
> Yes, this is the intent: this function verifies whether the cksum already
> present in the packet is correct.
>
> It's different from calling rte_ipv4/6_udptcp_cksum_mbuf(). When calling
> rte_ipv4/6_udptcp_cksum_mbuf(), you need to set the cksum field in the
> udp/tcp header to 0 first, and then calculate the cksum.
>
> But here, the user should call this function directly on the original packet.
> If the udp/tcp cksum is correct, then after the calculation (note that this
> calls __rte_ipv4_udptcp_cksum_mbuf(), so the result still needs to be
> complemented with ~) the result should be 0xffff, i.e. ~cksum == 0, which
> means the cksum is correct. You can see rte_ipv4/6_udptcp_cksum_verify()
> does the same.
>
> > > +
> > > + return 0;
> > > +}
> > > +
> > > /**
> > > * IPv6 Header
> > > */
> > > @@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
> > > return cksum;
> > > }
> > >
> > > +/**
> > > + * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
> > > + */
> > > +static inline uint16_t
> > > +__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > > + const struct rte_ipv6_hdr *ipv6_hdr,
> > > + uint16_t l4_off)
> > > +{
> > > + uint16_t raw_cksum;
> > > + uint32_t cksum;
> > > +
> > > + if (l4_off > m->pkt_len)
> > > + return 0;
> > > +
> > > + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
> > > + return 0;
> > > +
> > > + cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
> > > +
> > > + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
> > Same, please check if we can opt for __rte_raw_cksum_reduce(cksum)
> > > +
> > > + return (uint16_t)cksum;
> > > +}
> > > +
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Process the IPv6 UDP or TCP checksum of a packet.
> > > + *
> > > + * The IPv6 header must not be followed by extension headers. The layer 4
> > > + * checksum must be set to 0 in the L4 header by the caller.
> > > + *
> > > + * @param m
> > > + * The pointer to the mbuf.
> > > + * @param ipv6_hdr
> > > + * The pointer to the contiguous IPv6 header.
> > > + * @param l4_off
> > > + * The offset in bytes to start L4 checksum.
> > > + * @return
> > > + * The complemented checksum to set in the L4 header.
> > > + */
> > > +__rte_experimental
> > > +static inline uint16_t
> > > +rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > > + const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
> > > +{
> > > + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
> > > +
> > > + cksum = ~cksum;
> > > +
> > > + /*
> > > + * Per RFC 768: If the computed checksum is zero for UDP,
> > > + * it is transmitted as all ones
> > > + * (the equivalent in one's complement arithmetic).
> > > + */
> > > + if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
> > > + cksum = 0xffff;
> > > +
> > > + return cksum;
> > > +}
> > > +
> > > /**
> > > * Validate the IPv6 UDP or TCP checksum.
> > > *
> > > @@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
> > > return 0;
> > > }
> > >
> > > +/**
> > > + * @warning
> > > + * @b EXPERIMENTAL: this API may change without prior notice.
> > > + *
> > > + * Validate the IPv6 UDP or TCP checksum of a packet.
> > > + *
> > > + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
> > > + * this is either invalid or means no checksum in some situations. See 8.1
> > > + * (Upper-Layer Checksums) in RFC 8200.
> > > + *
> > > + * @param m
> > > + * The pointer to the mbuf.
> > > + * @param ipv6_hdr
> > > + * The pointer to the contiguous IPv6 header.
> > > + * @param l4_off
> > > + * The offset in bytes to start L4 checksum.
> > > + * @return
> > > + * Return 0 if the checksum is correct, else -1.
> > > + */
> > > +__rte_experimental
> > > +static inline int
> > > +rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> > > + const struct rte_ipv6_hdr *ipv6_hdr,
> > > + uint16_t l4_off)
> > > +{
> > > + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
> > > +
> > > + if (cksum != 0xffff)
> > > + return -1;
> > > +
> > > + return 0;
> > > +}
> > > +
> > > /** IPv6 fragment extension header. */
> > > #define RTE_IPV6_EHDR_MF_SHIFT 0
> > > #define RTE_IPV6_EHDR_MF_MASK 1
> > > diff --git a/lib/net/version.map b/lib/net/version.map
> > > index 4f4330d1c4..0f2aacdef8 100644
> > > --- a/lib/net/version.map
> > > +++ b/lib/net/version.map
> > > @@ -12,3 +12,13 @@ DPDK_22 {
> > >
> > > local: *;
> > > };
> > > +
> > > +EXPERIMENTAL {
> > > + global:
> > > +
> > > + # added in 22.03
> > > + rte_ipv4_udptcp_cksum_mbuf;
> > > + rte_ipv4_udptcp_cksum_mbuf_verify;
> > > + rte_ipv6_udptcp_cksum_mbuf;
> > > + rte_ipv6_udptcp_cksum_mbuf_verify;
> > > +};
^ permalink raw reply [relevance 0%]
* RE: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
2021-12-15 11:33 0% ` Singh, Aman Deep
@ 2022-01-04 15:18 0% ` Li, Xiaoyun
2022-01-04 15:40 0% ` Li, Xiaoyun
0 siblings, 1 reply; 200+ results
From: Li, Xiaoyun @ 2022-01-04 15:18 UTC (permalink / raw)
To: Singh, Aman Deep, Yigit, Ferruh, olivier.matz, mb, Ananyev,
Konstantin, stephen, Medvedkin, Vladimir
Cc: dev
Hi
> -----Original Message-----
> From: Singh, Aman Deep <aman.deep.singh@intel.com>
> Sent: Wednesday, December 15, 2021 11:34
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Yigit, Ferruh
> <ferruh.yigit@intel.com>; olivier.matz@6wind.com;
> mb@smartsharesystems.com; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; stephen@networkplumber.org;
> Medvedkin, Vladimir <vladimir.medvedkin@intel.com>
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in
> mbuf
>
>
> On 12/3/2021 5:08 PM, Xiaoyun Li wrote:
> > Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6 UDP/TCP
> > checksum in mbuf which can be over multi-segments.
> >
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > ---
> > doc/guides/rel_notes/release_22_03.rst | 10 ++
> > lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
> > lib/net/version.map | 10 ++
> > 3 files changed, 206 insertions(+)
> >
> > diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> > index 6d99d1eaa9..7a082c4427 100644
> > --- a/doc/guides/rel_notes/release_22_03.rst
> > +++ b/doc/guides/rel_notes/release_22_03.rst
> > @@ -55,6 +55,13 @@ New Features
> > Also, make sure to start the actual text at the margin.
> > =======================================================
> >
> > +* **Added functions to calculate UDP/TCP checksum in mbuf.**
> > + * Added the following functions to calculate UDP/TCP checksum of packets
> > + which can be over multi-segments:
> > + - ``rte_ipv4_udptcp_cksum_mbuf()``
> > + - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
> > + - ``rte_ipv6_udptcp_cksum_mbuf()``
> > + - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
> >
> > Removed Items
> > -------------
> > @@ -84,6 +91,9 @@ API Changes
> > Also, make sure to start the actual text at the margin.
> > =======================================================
> >
> > +* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
> > + ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
> > + ``rte_ipv6_udptcp_cksum_mbuf_verify()``
> >
> > ABI Changes
> > -----------
> > diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
> > index c575250852..534f401d26 100644
> > --- a/lib/net/rte_ip.h
> > +++ b/lib/net/rte_ip.h
> > @@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
> > return cksum;
> > }
> >
> > +/**
> > + * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
> > + */
> > +static inline uint16_t
> > +__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > + const struct rte_ipv4_hdr *ipv4_hdr,
> > + uint16_t l4_off)
> > +{
> > + uint16_t raw_cksum;
> > + uint32_t cksum;
> > +
> > + if (l4_off > m->pkt_len)
> > + return 0;
> > +
> > + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
> > + return 0;
> > +
> > + cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
> > +
> > + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
> At times, even after the above operation, "cksum" might stay above 16 bits,
> e.g. "cksum = 0x1FFFF" to start with.
> Can we consider using "return __rte_raw_cksum_reduce(cksum);"?
Will use it in next version. Thanks.
Also, not related to this patch: it means that __rte_ipv4_udptcp_cksum and __rte_ipv6_udptcp_cksum have the same issue, right?
Should someone fix that?
> > +
> > + return (uint16_t)cksum;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Compute the IPv4 UDP/TCP checksum of a packet.
> > + *
> > + * @param m
> > + * The pointer to the mbuf.
> > + * @param ipv4_hdr
> > + * The pointer to the contiguous IPv4 header.
> > + * @param l4_off
> > + * The offset in bytes to start L4 checksum.
> > + * @return
> > + * The complemented checksum to set in the L4 header.
> > + */
> > +__rte_experimental
> > +static inline uint16_t
> > +rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > + const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
> > +{
> > + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
> > +
> > + cksum = ~cksum;
> > +
> > + /*
> > + * Per RFC 768: If the computed checksum is zero for UDP,
> > + * it is transmitted as all ones
> > + * (the equivalent in one's complement arithmetic).
> > + */
> > + if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
> > + cksum = 0xffff;
> > +
> > + return cksum;
> > +}
> > +
> > /**
> > * Validate the IPv4 UDP or TCP checksum.
> > *
> > @@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
> > return 0;
> > }
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Verify the IPv4 UDP/TCP checksum of a packet.
> > + *
> > + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
> > + * (i.e. no checksum).
> > + *
> > + * @param m
> > + * The pointer to the mbuf.
> > + * @param ipv4_hdr
> > + * The pointer to the contiguous IPv4 header.
> > + * @param l4_off
> > + * The offset in bytes to start L4 checksum.
> > + * @return
> > + * Return 0 if the checksum is correct, else -1.
> > + */
> > +__rte_experimental
> > +static inline uint16_t
> > +rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> > + const struct rte_ipv4_hdr *ipv4_hdr,
> > + uint16_t l4_off)
> > +{
> > + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
> > +
> > + if (cksum != 0xffff)
> > + return -1;
> A cksum other than 0xffff returns an error here. Is that the intent, or am I
> missing something obvious?
Yes, this is the intent: this function verifies whether the cksum already present in the packet is correct.
It's different from calling rte_ipv4/6_udptcp_cksum_mbuf(). When calling rte_ipv4/6_udptcp_cksum_mbuf(), you need to set the cksum field in the udp/tcp header to 0 first, and then calculate the cksum.
But here, the user should call this function directly on the original packet. If the udp/tcp cksum is correct, then after the calculation (note that this calls __rte_ipv4_udptcp_cksum_mbuf(), so the result still needs to be complemented with ~) the result should be 0xffff, i.e. ~cksum == 0, which means the cksum is correct. You can see rte_ipv4/6_udptcp_cksum_verify() does the same.
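As a usage sketch of the intended flow (the untagged-Ethernet layout and the offset arithmetic below are illustrative assumptions, not part of this patch):

  #include <rte_ether.h>
  #include <rte_ip.h>
  #include <rte_mbuf.h>

  /* Sketch: verify the L4 checksum of a received IPv4 UDP/TCP packet.
   * Assumes an untagged Ethernet frame with a contiguous IPv4 header in
   * the first segment; a real application must validate both, and for
   * UDP must first check that udp_hdr->dgram_cksum is not 0.
   */
  static int
  rx_l4_cksum_ok(const struct rte_mbuf *m)
  {
          const struct rte_ipv4_hdr *ip;
          uint16_t l4_off;

          ip = rte_pktmbuf_mtod_offset(m, const struct rte_ipv4_hdr *,
                          sizeof(struct rte_ether_hdr));
          /* L4 starts after the Ethernet header plus the IPv4 header (IHL). */
          l4_off = sizeof(struct rte_ether_hdr) + rte_ipv4_hdr_len(ip);

          /* Leave the received checksum in place: folding it in yields
           * 0xffff for an intact packet, which verify maps to 0. */
          return rte_ipv4_udptcp_cksum_mbuf_verify(m, ip, l4_off) == 0;
  }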
> > +
> > + return 0;
> > +}
> > +
> > /**
> > * IPv6 Header
> > */
> > @@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
> > return cksum;
> > }
> >
> > +/**
> > + * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
> > + */
> > +static inline uint16_t
> > +__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > + const struct rte_ipv6_hdr *ipv6_hdr,
> > + uint16_t l4_off)
> > +{
> > + uint16_t raw_cksum;
> > + uint32_t cksum;
> > +
> > + if (l4_off > m->pkt_len)
> > + return 0;
> > +
> > + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
> > + return 0;
> > +
> > + cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
> > +
> > + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
> Same, please check if we can opt for __rte_raw_cksum_reduce(cksum)
> > +
> > + return (uint16_t)cksum;
> > +}
> > +
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Process the IPv6 UDP or TCP checksum of a packet.
> > + *
> > + * The IPv6 header must not be followed by extension headers. The layer 4
> > + * checksum must be set to 0 in the L4 header by the caller.
> > + *
> > + * @param m
> > + * The pointer to the mbuf.
> > + * @param ipv6_hdr
> > + * The pointer to the contiguous IPv6 header.
> > + * @param l4_off
> > + * The offset in bytes to start L4 checksum.
> > + * @return
> > + * The complemented checksum to set in the L4 header.
> > + */
> > +__rte_experimental
> > +static inline uint16_t
> > +rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> > + const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
> > +{
> > + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
> > +
> > + cksum = ~cksum;
> > +
> > + /*
> > + * Per RFC 768: If the computed checksum is zero for UDP,
> > + * it is transmitted as all ones
> > + * (the equivalent in one's complement arithmetic).
> > + */
> > + if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
> > + cksum = 0xffff;
> > +
> > + return cksum;
> > +}
> > +
> > /**
> > * Validate the IPv6 UDP or TCP checksum.
> > *
> > @@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
> > return 0;
> > }
> >
> > +/**
> > + * @warning
> > + * @b EXPERIMENTAL: this API may change without prior notice.
> > + *
> > + * Validate the IPv6 UDP or TCP checksum of a packet.
> > + *
> > + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
> > + * this is either invalid or means no checksum in some situations. See 8.1
> > + * (Upper-Layer Checksums) in RFC 8200.
> > + *
> > + * @param m
> > + * The pointer to the mbuf.
> > + * @param ipv6_hdr
> > + * The pointer to the contiguous IPv6 header.
> > + * @param l4_off
> > + * The offset in bytes to start L4 checksum.
> > + * @return
> > + * Return 0 if the checksum is correct, else -1.
> > + */
> > +__rte_experimental
> > +static inline int
> > +rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> > + const struct rte_ipv6_hdr *ipv6_hdr,
> > + uint16_t l4_off)
> > +{
> > + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
> > +
> > + if (cksum != 0xffff)
> > + return -1;
> > +
> > + return 0;
> > +}
> > +
> > /** IPv6 fragment extension header. */
> > #define RTE_IPV6_EHDR_MF_SHIFT 0
> > #define RTE_IPV6_EHDR_MF_MASK 1
> > diff --git a/lib/net/version.map b/lib/net/version.map
> > index 4f4330d1c4..0f2aacdef8 100644
> > --- a/lib/net/version.map
> > +++ b/lib/net/version.map
> > @@ -12,3 +12,13 @@ DPDK_22 {
> >
> > local: *;
> > };
> > +
> > +EXPERIMENTAL {
> > + global:
> > +
> > + # added in 22.03
> > + rte_ipv4_udptcp_cksum_mbuf;
> > + rte_ipv4_udptcp_cksum_mbuf_verify;
> > + rte_ipv6_udptcp_cksum_mbuf;
> > + rte_ipv6_udptcp_cksum_mbuf_verify;
> > +};
^ permalink raw reply [relevance 0%]
* [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
@ 2021-12-30 6:08 2% Yanling Song
2022-01-19 16:56 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Yanling Song @ 2021-12-30 6:08 UTC (permalink / raw)
To: dev
Cc: songyl, yanling.song, yanggan, xuyun, ferruh.yigit, stephen, lihuisong
This patchset introduces the SPNIC driver for Ramaxel's SPNxxx series NIC cards into DPDK 22.03.
Ramaxel Memory Technology is a company which supplies a wide range of electronic products:
storage, communication, PCB...
SPNxxx is a series of PCIe interface NIC cards:
SPN110: 2 PORTs *25G
SPN120: 4 PORTs *25G
SPN130: 2 PORTs *100G
The following are the main features of our SPNIC:
- TSO
- LRO
- Flow control
- SR-IOV(Partially supported)
- VLAN offload
- VLAN filter
- CRC offload
- Promiscuous mode
- RSS
v5->v6, no real changes:
1. Move the fix of RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS from patch 26 to patch 2;
2. Change the description of patch 26.
v4->v5:
1. Add prefix "spnic_" for external functions;
2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
3. Do not use void* for keeping the type information
v3->v4:
1. Fix ABI test failure;
2. Remove some descriptions in spnic.rst.
v2->v3:
1. Fix clang compiling failure.
v1->v2:
1. Fix coding style issues and compiling failures;
2. Only support linux in meson.build;
3. Use CLOCK_MONOTONIC_COARSE instead of CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW;
4. Fix time_before();
5. Remove redundant checks in spnic_dev_configure();
Yanling Song (26):
drivers/net: introduce a new PMD driver
net/spnic: initialize the HW interface
net/spnic: add mbox message channel
net/spnic: introduce event queue
net/spnic: add mgmt module
net/spnic: add cmdq and work queue
net/spnic: add interface handling cmdq message
net/spnic: add hardware info initialization
net/spnic: support MAC and link event handling
net/spnic: add function info initialization
net/spnic: add queue pairs context initialization
net/spnic: support mbuf handling of Tx/Rx
net/spnic: support Rx configuration
net/spnic: add port/vport enable
net/spnic: support IO packets handling
net/spnic: add device configure/version/info
net/spnic: support RSS configuration update and get
net/spnic: support VLAN filtering and offloading
net/spnic: support promiscuous and allmulticast Rx modes
net/spnic: support flow control
net/spnic: support getting Tx/Rx queues info
net/spnic: support xstats statistics
net/spnic: support VFIO interrupt
net/spnic: support Tx/Rx queue start/stop
net/spnic: add doc infrastructure
net/spnic: fix unsafe C-style code
MAINTAINERS | 6 +
doc/guides/nics/features/spnic.ini | 39 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/spnic.rst | 55 +
drivers/net/meson.build | 1 +
drivers/net/spnic/base/meson.build | 37 +
drivers/net/spnic/base/spnic_cmd.h | 222 ++
drivers/net/spnic/base/spnic_cmdq.c | 875 ++++++
drivers/net/spnic/base/spnic_cmdq.h | 248 ++
drivers/net/spnic/base/spnic_compat.h | 184 ++
drivers/net/spnic/base/spnic_csr.h | 104 +
drivers/net/spnic/base/spnic_eqs.c | 661 +++++
drivers/net/spnic/base/spnic_eqs.h | 102 +
drivers/net/spnic/base/spnic_hw_cfg.c | 201 ++
drivers/net/spnic/base/spnic_hw_cfg.h | 125 +
drivers/net/spnic/base/spnic_hw_comm.c | 483 ++++
drivers/net/spnic/base/spnic_hw_comm.h | 204 ++
drivers/net/spnic/base/spnic_hwdev.c | 514 ++++
drivers/net/spnic/base/spnic_hwdev.h | 143 +
drivers/net/spnic/base/spnic_hwif.c | 770 ++++++
drivers/net/spnic/base/spnic_hwif.h | 155 ++
drivers/net/spnic/base/spnic_mbox.c | 1194 ++++++++
drivers/net/spnic/base/spnic_mbox.h | 202 ++
drivers/net/spnic/base/spnic_mgmt.c | 366 +++
drivers/net/spnic/base/spnic_mgmt.h | 110 +
drivers/net/spnic/base/spnic_nic_cfg.c | 1348 +++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 1110 ++++++++
drivers/net/spnic/base/spnic_nic_event.c | 183 ++
drivers/net/spnic/base/spnic_nic_event.h | 24 +
drivers/net/spnic/base/spnic_wq.c | 138 +
drivers/net/spnic/base/spnic_wq.h | 123 +
drivers/net/spnic/meson.build | 20 +
drivers/net/spnic/spnic_ethdev.c | 3211 ++++++++++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 95 +
drivers/net/spnic/spnic_io.c | 728 +++++
drivers/net/spnic/spnic_io.h | 154 ++
drivers/net/spnic/spnic_rx.c | 937 +++++++
drivers/net/spnic/spnic_rx.h | 326 +++
drivers/net/spnic/spnic_tx.c | 858 ++++++
drivers/net/spnic/spnic_tx.h | 297 ++
drivers/net/spnic/version.map | 3 +
41 files changed, 16557 insertions(+)
create mode 100644 doc/guides/nics/features/spnic.ini
create mode 100644 doc/guides/nics/spnic.rst
create mode 100644 drivers/net/spnic/base/meson.build
create mode 100644 drivers/net/spnic/base/spnic_cmd.h
create mode 100644 drivers/net/spnic/base/spnic_cmdq.c
create mode 100644 drivers/net/spnic/base/spnic_cmdq.h
create mode 100644 drivers/net/spnic/base/spnic_compat.h
create mode 100644 drivers/net/spnic/base/spnic_csr.h
create mode 100644 drivers/net/spnic/base/spnic_eqs.c
create mode 100644 drivers/net/spnic/base/spnic_eqs.h
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.c
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.h
create mode 100644 drivers/net/spnic/base/spnic_hwdev.c
create mode 100644 drivers/net/spnic/base/spnic_hwdev.h
create mode 100644 drivers/net/spnic/base/spnic_hwif.c
create mode 100644 drivers/net/spnic/base/spnic_hwif.h
create mode 100644 drivers/net/spnic/base/spnic_mbox.c
create mode 100644 drivers/net/spnic/base/spnic_mbox.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.c
create mode 100644 drivers/net/spnic/base/spnic_mgmt.h
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_nic_event.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.h
create mode 100644 drivers/net/spnic/base/spnic_wq.c
create mode 100644 drivers/net/spnic/base/spnic_wq.h
create mode 100644 drivers/net/spnic/meson.build
create mode 100644 drivers/net/spnic/spnic_ethdev.c
create mode 100644 drivers/net/spnic/spnic_ethdev.h
create mode 100644 drivers/net/spnic/spnic_io.c
create mode 100644 drivers/net/spnic/spnic_io.h
create mode 100644 drivers/net/spnic/spnic_rx.c
create mode 100644 drivers/net/spnic/spnic_rx.h
create mode 100644 drivers/net/spnic/spnic_tx.c
create mode 100644 drivers/net/spnic/spnic_tx.h
create mode 100644 drivers/net/spnic/version.map
--
2.32.0
^ permalink raw reply [relevance 2%]
* [PATCH v5 00/26] Net/SPNIC: support SPNIC into DPDK 22.03
@ 2021-12-29 13:37 2% Yanling Song
0 siblings, 0 replies; 200+ results
From: Yanling Song @ 2021-12-29 13:37 UTC (permalink / raw)
To: dev
Cc: songyl, yanling.song, yanggan, xuyun, ferruh.yigit, stephen, lihuisong
This patchset introduces the SPNIC driver for Ramaxel's SPNxxx series NIC cards into DPDK 22.03.
Ramaxel Memory Technology is a company which supplies a wide range of electronic products:
storage, communication, PCB...
SPNxxx is a series of PCIe interface NIC cards:
SPN110: 2 PORTs *25G
SPN120: 4 PORTs *25G
SPN130: 2 PORTs *100G
The following are the main features of our SPNIC:
- TSO
- LRO
- Flow control
- SR-IOV(Partially supported)
- VLAN offload
- VLAN filter
- CRC offload
- Promiscuous mode
- RSS
v4->v5:
1. Add prefix "spnic_" for external functions;
2. Remove temporary MACRO: RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS
3. Do not use void* for keeping the type information
v3->v4:
1. Fix ABI test failure;
2. Remove some descriptions in spnic.rst.
v2->v3:
1. Fix clang compiling failure.
v1->v2:
1. Fix coding style issues and compiling failures;
2. Only support linux in meson.build;
3. Use CLOCK_MONOTONIC_COARSE instead of CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW;
4. Fix time_before();
5. Remove redundant checks in spnic_dev_configure();
Yanling Song (26):
drivers/net: introduce a new PMD driver
net/spnic: initialize the HW interface
net/spnic: add mbox message channel
net/spnic: introduce event queue
net/spnic: add mgmt module
net/spnic: add cmdq and work queue
net/spnic: add interface handling cmdq message
net/spnic: add hardware info initialization
net/spnic: support MAC and link event handling
net/spnic: add function info initialization
net/spnic: add queue pairs context initialization
net/spnic: support mbuf handling of Tx/Rx
net/spnic: support Rx configuration
net/spnic: add port/vport enable
net/spnic: support IO packets handling
net/spnic: add device configure/version/info
net/spnic: support RSS configuration update and get
net/spnic: support VLAN filtering and offloading
net/spnic: support promiscuous and allmulticast Rx modes
net/spnic: support flow control
net/spnic: support getting Tx/Rx queues info
net/spnic: support xstats statistics
net/spnic: support VFIO interrupt
net/spnic: support Tx/Rx queue start/stop
net/spnic: add doc infrastructure
net/spnic: fix reviewer comments
MAINTAINERS | 6 +
doc/guides/nics/features/spnic.ini | 39 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/spnic.rst | 55 +
drivers/net/meson.build | 1 +
drivers/net/spnic/base/meson.build | 37 +
drivers/net/spnic/base/spnic_cmd.h | 222 ++
drivers/net/spnic/base/spnic_cmdq.c | 875 ++++++
drivers/net/spnic/base/spnic_cmdq.h | 248 ++
drivers/net/spnic/base/spnic_compat.h | 184 ++
drivers/net/spnic/base/spnic_csr.h | 104 +
drivers/net/spnic/base/spnic_eqs.c | 661 +++++
drivers/net/spnic/base/spnic_eqs.h | 102 +
drivers/net/spnic/base/spnic_hw_cfg.c | 201 ++
drivers/net/spnic/base/spnic_hw_cfg.h | 125 +
drivers/net/spnic/base/spnic_hw_comm.c | 483 ++++
drivers/net/spnic/base/spnic_hw_comm.h | 204 ++
drivers/net/spnic/base/spnic_hwdev.c | 514 ++++
drivers/net/spnic/base/spnic_hwdev.h | 143 +
drivers/net/spnic/base/spnic_hwif.c | 770 ++++++
drivers/net/spnic/base/spnic_hwif.h | 155 ++
drivers/net/spnic/base/spnic_mbox.c | 1194 ++++++++
drivers/net/spnic/base/spnic_mbox.h | 202 ++
drivers/net/spnic/base/spnic_mgmt.c | 366 +++
drivers/net/spnic/base/spnic_mgmt.h | 110 +
drivers/net/spnic/base/spnic_nic_cfg.c | 1348 +++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 1110 ++++++++
drivers/net/spnic/base/spnic_nic_event.c | 183 ++
drivers/net/spnic/base/spnic_nic_event.h | 24 +
drivers/net/spnic/base/spnic_wq.c | 138 +
drivers/net/spnic/base/spnic_wq.h | 123 +
drivers/net/spnic/meson.build | 20 +
drivers/net/spnic/spnic_ethdev.c | 3212 ++++++++++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 95 +
drivers/net/spnic/spnic_io.c | 728 +++++
drivers/net/spnic/spnic_io.h | 154 ++
drivers/net/spnic/spnic_rx.c | 937 +++++++
drivers/net/spnic/spnic_rx.h | 326 +++
drivers/net/spnic/spnic_tx.c | 858 ++++++
drivers/net/spnic/spnic_tx.h | 297 ++
drivers/net/spnic/version.map | 3 +
41 files changed, 16558 insertions(+)
create mode 100644 doc/guides/nics/features/spnic.ini
create mode 100644 doc/guides/nics/spnic.rst
create mode 100644 drivers/net/spnic/base/meson.build
create mode 100644 drivers/net/spnic/base/spnic_cmd.h
create mode 100644 drivers/net/spnic/base/spnic_cmdq.c
create mode 100644 drivers/net/spnic/base/spnic_cmdq.h
create mode 100644 drivers/net/spnic/base/spnic_compat.h
create mode 100644 drivers/net/spnic/base/spnic_csr.h
create mode 100644 drivers/net/spnic/base/spnic_eqs.c
create mode 100644 drivers/net/spnic/base/spnic_eqs.h
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.c
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.h
create mode 100644 drivers/net/spnic/base/spnic_hwdev.c
create mode 100644 drivers/net/spnic/base/spnic_hwdev.h
create mode 100644 drivers/net/spnic/base/spnic_hwif.c
create mode 100644 drivers/net/spnic/base/spnic_hwif.h
create mode 100644 drivers/net/spnic/base/spnic_mbox.c
create mode 100644 drivers/net/spnic/base/spnic_mbox.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.c
create mode 100644 drivers/net/spnic/base/spnic_mgmt.h
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_nic_event.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.h
create mode 100644 drivers/net/spnic/base/spnic_wq.c
create mode 100644 drivers/net/spnic/base/spnic_wq.h
create mode 100644 drivers/net/spnic/meson.build
create mode 100644 drivers/net/spnic/spnic_ethdev.c
create mode 100644 drivers/net/spnic/spnic_ethdev.h
create mode 100644 drivers/net/spnic/spnic_io.c
create mode 100644 drivers/net/spnic/spnic_io.h
create mode 100644 drivers/net/spnic/spnic_rx.c
create mode 100644 drivers/net/spnic/spnic_rx.h
create mode 100644 drivers/net/spnic/spnic_tx.c
create mode 100644 drivers/net/spnic/spnic_tx.h
create mode 100644 drivers/net/spnic/version.map
--
2.32.0
^ permalink raw reply [relevance 2%]
* [PATCH v4 00/25] Net/SPNIC: support SPNIC into DPDK 22.03
@ 2021-12-25 11:28 2% Yanling Song
0 siblings, 0 replies; 200+ results
From: Yanling Song @ 2021-12-25 11:28 UTC (permalink / raw)
To: dev; +Cc: songyl, yanling.song, yanggan, xuyun, ferruh.yigit
This patchset introduces the SPNIC driver for Ramaxel's SPNxxx series NIC cards into DPDK 22.03.
Ramaxel Memory Technology is a company which supplies a wide range of electronic products:
storage, communication, PCB...
SPNxxx is a series of PCIe interface NIC cards:
SPN110: 2 PORTs *25G
SPN120: 4 PORTs *25G
SPN130: 2 PORTs *100G
The following are the main features of our SPNIC:
- TSO
- LRO
- Flow control
- SR-IOV(Partially supported)
- VLAN offload
- VLAN filter
- CRC offload
- Promiscuous mode
- RSS
v3->v4:
1. Fix ABI test failure;
2. Remove some descriptions in spnic.rst.
v2->v3:
1. Fix clang compiling failure.
v1->v2:
1. Fix coding style issues and compiling failures;
2. Only support linux in meson.build;
3. Use CLOCK_MONOTONIC_COARSE instead of CLOCK_MONOTONIC/CLOCK_MONOTONIC_RAW;
4. Fix time_before();
5. Remove redundant checks in spnic_dev_configure();
Yanling Song (25):
drivers/net: introduce a new PMD driver
net/spnic: initialize the HW interface
net/spnic: add mbox message channel
net/spnic: introduce event queue
net/spnic: add mgmt module
net/spnic: add cmdq and work queue
net/spnic: add interface handling cmdq message
net/spnic: add hardware info initialization
net/spnic: support MAC and link event handling
net/spnic: add function info initialization
net/spnic: add queue pairs context initialization
net/spnic: support mbuf handling of Tx/Rx
net/spnic: support Rx configuration
net/spnic: add port/vport enable
net/spnic: support IO packets handling
net/spnic: add device configure/version/info
net/spnic: support RSS configuration update and get
net/spnic: support VLAN filtering and offloading
net/spnic: support promiscuous and allmulticast Rx modes
net/spnic: support flow control
net/spnic: support getting Tx/Rx queues info
net/spnic: support xstats statistics
net/spnic: support VFIO interrupt
net/spnic: support Tx/Rx queue start/stop
net/spnic: add doc infrastructure
MAINTAINERS | 6 +
doc/guides/nics/features/spnic.ini | 39 +
doc/guides/nics/index.rst | 1 +
doc/guides/nics/spnic.rst | 55 +
drivers/net/meson.build | 1 +
drivers/net/spnic/base/meson.build | 37 +
drivers/net/spnic/base/spnic_cmd.h | 222 ++
drivers/net/spnic/base/spnic_cmdq.c | 875 ++++++
drivers/net/spnic/base/spnic_cmdq.h | 248 ++
drivers/net/spnic/base/spnic_compat.h | 184 ++
drivers/net/spnic/base/spnic_csr.h | 104 +
drivers/net/spnic/base/spnic_eqs.c | 661 +++++
drivers/net/spnic/base/spnic_eqs.h | 102 +
drivers/net/spnic/base/spnic_hw_cfg.c | 212 ++
drivers/net/spnic/base/spnic_hw_cfg.h | 125 +
drivers/net/spnic/base/spnic_hw_comm.c | 485 ++++
drivers/net/spnic/base/spnic_hw_comm.h | 204 ++
drivers/net/spnic/base/spnic_hwdev.c | 514 ++++
drivers/net/spnic/base/spnic_hwdev.h | 143 +
drivers/net/spnic/base/spnic_hwif.c | 774 ++++++
drivers/net/spnic/base/spnic_hwif.h | 155 ++
drivers/net/spnic/base/spnic_mbox.c | 1194 ++++++++
drivers/net/spnic/base/spnic_mbox.h | 202 ++
drivers/net/spnic/base/spnic_mgmt.c | 367 +++
drivers/net/spnic/base/spnic_mgmt.h | 110 +
drivers/net/spnic/base/spnic_nic_cfg.c | 1348 +++++++++
drivers/net/spnic/base/spnic_nic_cfg.h | 1110 ++++++++
drivers/net/spnic/base/spnic_nic_event.c | 185 ++
drivers/net/spnic/base/spnic_nic_event.h | 24 +
drivers/net/spnic/base/spnic_wq.c | 139 +
drivers/net/spnic/base/spnic_wq.h | 123 +
drivers/net/spnic/meson.build | 20 +
drivers/net/spnic/spnic_ethdev.c | 3212 ++++++++++++++++++++++
drivers/net/spnic/spnic_ethdev.h | 95 +
drivers/net/spnic/spnic_io.c | 738 +++++
drivers/net/spnic/spnic_io.h | 154 ++
drivers/net/spnic/spnic_rx.c | 937 +++++++
drivers/net/spnic/spnic_rx.h | 326 +++
drivers/net/spnic/spnic_tx.c | 858 ++++++
drivers/net/spnic/spnic_tx.h | 297 ++
drivers/net/spnic/version.map | 3 +
41 files changed, 16589 insertions(+)
create mode 100644 doc/guides/nics/features/spnic.ini
create mode 100644 doc/guides/nics/spnic.rst
create mode 100644 drivers/net/spnic/base/meson.build
create mode 100644 drivers/net/spnic/base/spnic_cmd.h
create mode 100644 drivers/net/spnic/base/spnic_cmdq.c
create mode 100644 drivers/net/spnic/base/spnic_cmdq.h
create mode 100644 drivers/net/spnic/base/spnic_compat.h
create mode 100644 drivers/net/spnic/base/spnic_csr.h
create mode 100644 drivers/net/spnic/base/spnic_eqs.c
create mode 100644 drivers/net/spnic/base/spnic_eqs.h
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_hw_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.c
create mode 100644 drivers/net/spnic/base/spnic_hw_comm.h
create mode 100644 drivers/net/spnic/base/spnic_hwdev.c
create mode 100644 drivers/net/spnic/base/spnic_hwdev.h
create mode 100644 drivers/net/spnic/base/spnic_hwif.c
create mode 100644 drivers/net/spnic/base/spnic_hwif.h
create mode 100644 drivers/net/spnic/base/spnic_mbox.c
create mode 100644 drivers/net/spnic/base/spnic_mbox.h
create mode 100644 drivers/net/spnic/base/spnic_mgmt.c
create mode 100644 drivers/net/spnic/base/spnic_mgmt.h
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.c
create mode 100644 drivers/net/spnic/base/spnic_nic_cfg.h
create mode 100644 drivers/net/spnic/base/spnic_nic_event.c
create mode 100644 drivers/net/spnic/base/spnic_nic_event.h
create mode 100644 drivers/net/spnic/base/spnic_wq.c
create mode 100644 drivers/net/spnic/base/spnic_wq.h
create mode 100644 drivers/net/spnic/meson.build
create mode 100644 drivers/net/spnic/spnic_ethdev.c
create mode 100644 drivers/net/spnic/spnic_ethdev.h
create mode 100644 drivers/net/spnic/spnic_io.c
create mode 100644 drivers/net/spnic/spnic_io.h
create mode 100644 drivers/net/spnic/spnic_rx.c
create mode 100644 drivers/net/spnic/spnic_rx.h
create mode 100644 drivers/net/spnic/spnic_tx.c
create mode 100644 drivers/net/spnic/spnic_tx.h
create mode 100644 drivers/net/spnic/version.map
--
2.32.0
^ permalink raw reply [relevance 2%]
* RE: [RFC] cryptodev: asymmetric crypto random number source
2021-12-13 9:27 0% ` Ramkumar Balu
@ 2021-12-17 15:26 0% ` Kusztal, ArkadiuszX
0 siblings, 0 replies; 200+ results
From: Kusztal, ArkadiuszX @ 2021-12-17 15:26 UTC (permalink / raw)
To: Ramkumar Balu, Akhil Goyal, Anoob Joseph, Zhang, Roy Fan; +Cc: dev
> -----Original Message-----
> From: Ramkumar Balu <rbalu@marvell.com>
> Sent: Monday, December 13, 2021 10:27 AM
> To: Akhil Goyal <gakhil@marvell.com>; Kusztal, ArkadiuszX
> <arkadiuszx.kusztal@intel.com>; Anoob Joseph <anoobj@marvell.com>; Zhang,
> Roy Fan <roy.fan.zhang@intel.com>
> Cc: dev@dpdk.org
> Subject: RE: [RFC] cryptodev: asymmetric crypto random number source
>
> > ++Ram for openssl
> >
> > > ECDSA op:
> > > rte_crypto_param k;
> > > /**< The ECDSA per-message secret number, which is an
> > >integer
> > > * in the interval (1, n-1)
> > > */
> > > DSA op:
> > > No 'k'.
> > >
> > > This one I think have described some time ago:
> > > Only PMD that verifies ECDSA is OCTEON which apparently needs 'k' provided
> by user.
> > > Only PMD that verifies DSA is OpenSSL PMD which will generate its own
> random number internally.
> > >
> > > So in case PMD supports one of these options (or especially when supports
> both) we need to give some information here.
>
> We can have a standard way to represent whether a particular rte_crypto_param
> is set by the application or not. Then, it is up to the PMD to perform the op
> or return an error code if it is unable to proceed.
>
> > >
> > > The most obvious option would be to change rte_crypto_param k ->
> > > rte_crypto_param *k In case (k == NULL) PMD should generate it itself if
> possible, otherwise it should push crypto_op to the response ring with
> appropriate error code.
>
> This case could occur for other params as well. Having a few as nested variables
> and others as pointers could be confusing for memory alloc/dealloc. However,
> rte_crypto_param already has a data pointer inside it which can be used in the
> same manner. For example, in this case (k.data == NULL), the PMD should generate
> the random number if possible, or push the op to the response ring with an error
> code. This can be done without breaking backward compatibility.
> This can be the standard way for PMDs to find out whether a particular
> rte_crypto_param is valid or NULL.
[Arek] Agree, let's keep it as easy as possible; and agreed, it could be useful elsewhere, not only in random-number cases.
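For illustration, a minimal sketch of the (k.data == NULL) convention on the PMD side; pmd_has_rng() and pmd_generate_k() are hypothetical placeholders, not real DPDK APIs:

  #include <errno.h>
  #include <rte_crypto_asym.h>

  /* Hypothetical device capability / RNG helpers, stubbed for the sketch. */
  static int pmd_has_rng(void) { return 0; }
  static void pmd_generate_k(struct rte_crypto_ecdsa_op_param *e) { (void)e; }

  static int
  pmd_handle_ecdsa_sign(struct rte_crypto_asym_op *asym_op)
  {
          struct rte_crypto_ecdsa_op_param *ecdsa = &asym_op->ecdsa;

          if (ecdsa->k.data == NULL) {
                  /* No per-message secret supplied by the application:
                   * generate one internally if the device can, otherwise
                   * fail the op (a real PMD would set op->status and push
                   * it to the response ring instead of returning). */
                  if (!pmd_has_rng())
                          return -ENOTSUP;
                  pmd_generate_k(ecdsa);
          }
          /* ... proceed with signature generation ... */
          return 0;
  }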
>
> > >
> > > Another options would be:
> > > - Extend rte_cryptodev_config and rte_cryptodev_info with
> > > information about random number generator for specific device
> > > (though it would be ABI breakage)
> > > - Provide some kind of callback to get random number from user
> > > (which could be useful for other things like RSA padding as well)
>
> I think the previous solution itself is more straightforward and simpler, unless
> we want to have the ability to configure the random number generator for each
> device.
>
> Thanks,
> Ramkumar Balu
>
^ permalink raw reply [relevance 0%]
* Re: [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
2021-12-03 11:38 3% ` [PATCH v4 1/2] net: add " Xiaoyun Li
@ 2021-12-15 11:33 0% ` Singh, Aman Deep
2022-01-04 15:18 0% ` Li, Xiaoyun
0 siblings, 1 reply; 200+ results
From: Singh, Aman Deep @ 2021-12-15 11:33 UTC (permalink / raw)
To: Xiaoyun Li, ferruh.yigit, olivier.matz, mb, konstantin.ananyev,
stephen, vladimir.medvedkin
Cc: dev
On 12/3/2021 5:08 PM, Xiaoyun Li wrote:
> Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
> UDP/TCP checksum in mbuf which can be over multi-segments.
>
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> ---
> doc/guides/rel_notes/release_22_03.rst | 10 ++
> lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
> lib/net/version.map | 10 ++
> 3 files changed, 206 insertions(+)
>
> diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
> index 6d99d1eaa9..7a082c4427 100644
> --- a/doc/guides/rel_notes/release_22_03.rst
> +++ b/doc/guides/rel_notes/release_22_03.rst
> @@ -55,6 +55,13 @@ New Features
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* **Added functions to calculate UDP/TCP checksum in mbuf.**
> + * Added the following functions to calculate UDP/TCP checksum of packets
> + which can be over multi-segments:
> + - ``rte_ipv4_udptcp_cksum_mbuf()``
> + - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
> + - ``rte_ipv6_udptcp_cksum_mbuf()``
> + - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
>
> Removed Items
> -------------
> @@ -84,6 +91,9 @@ API Changes
> Also, make sure to start the actual text at the margin.
> =======================================================
>
> +* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
> + ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
> + ``rte_ipv6_udptcp_cksum_mbuf_verify()``
>
> ABI Changes
> -----------
> diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
> index c575250852..534f401d26 100644
> --- a/lib/net/rte_ip.h
> +++ b/lib/net/rte_ip.h
> @@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
> return cksum;
> }
>
> +/**
> + * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
> + */
> +static inline uint16_t
> +__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> + const struct rte_ipv4_hdr *ipv4_hdr,
> + uint16_t l4_off)
> +{
> + uint16_t raw_cksum;
> + uint32_t cksum;
> +
> + if (l4_off > m->pkt_len)
> + return 0;
> +
> + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
> + return 0;
> +
> + cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
> +
> + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
At times, even after the above operation, "cksum" might stay above 16 bits,
e.g. "cksum = 0x1FFFF" to start with.
Can we consider using "return __rte_raw_cksum_reduce(cksum);"?
> +
> + return (uint16_t)cksum;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Compute the IPv4 UDP/TCP checksum of a packet.
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @param ipv4_hdr
> + * The pointer to the contiguous IPv4 header.
> + * @param l4_off
> + * The offset in bytes to start L4 checksum.
> + * @return
> + * The complemented checksum to set in the L4 header.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> + const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
> +{
> + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
> +
> + cksum = ~cksum;
> +
> + /*
> + * Per RFC 768: If the computed checksum is zero for UDP,
> + * it is transmitted as all ones
> + * (the equivalent in one's complement arithmetic).
> + */
> + if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
> + cksum = 0xffff;
> +
> + return cksum;
> +}
> +
> /**
> * Validate the IPv4 UDP or TCP checksum.
> *
> @@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
> return 0;
> }
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Verify the IPv4 UDP/TCP checksum of a packet.
> + *
> + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
> + * (i.e. no checksum).
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @param ipv4_hdr
> + * The pointer to the contiguous IPv4 header.
> + * @param l4_off
> + * The offset in bytes to start L4 checksum.
> + * @return
> + * Return 0 if the checksum is correct, else -1.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> + const struct rte_ipv4_hdr *ipv4_hdr,
> + uint16_t l4_off)
> +{
> + uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
> +
> + if (cksum != 0xffff)
> + return -1;
A cksum other than 0xffff returns an error here. Is that the intent, or am I
missing something obvious?
> +
> + return 0;
> +}
> +
> /**
> * IPv6 Header
> */
> @@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
> return cksum;
> }
>
> +/**
> + * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
> + */
> +static inline uint16_t
> +__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> + const struct rte_ipv6_hdr *ipv6_hdr,
> + uint16_t l4_off)
> +{
> + uint16_t raw_cksum;
> + uint32_t cksum;
> +
> + if (l4_off > m->pkt_len)
> + return 0;
> +
> + if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
> + return 0;
> +
> + cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
> +
> + cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
Same, please check if we can opt for __rte_raw_cksum_reduce(cksum)
> +
> + return (uint16_t)cksum;
> +}
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Process the IPv6 UDP or TCP checksum of a packet.
> + *
> + * The IPv6 header must not be followed by extension headers. The layer 4
> + * checksum must be set to 0 in the L4 header by the caller.
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @param ipv6_hdr
> + * The pointer to the contiguous IPv6 header.
> + * @param l4_off
> + * The offset in bytes to start L4 checksum.
> + * @return
> + * The complemented checksum to set in the L4 header.
> + */
> +__rte_experimental
> +static inline uint16_t
> +rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
> + const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
> +{
> + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
> +
> + cksum = ~cksum;
> +
> + /*
> + * Per RFC 768: If the computed checksum is zero for UDP,
> + * it is transmitted as all ones
> + * (the equivalent in one's complement arithmetic).
> + */
> + if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
> + cksum = 0xffff;
> +
> + return cksum;
> +}
> +
> /**
> * Validate the IPv6 UDP or TCP checksum.
> *
> @@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
> return 0;
> }
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change without prior notice.
> + *
> + * Validate the IPv6 UDP or TCP checksum of a packet.
> + *
> + * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
> + * this is either invalid or means no checksum in some situations. See 8.1
> + * (Upper-Layer Checksums) in RFC 8200.
> + *
> + * @param m
> + * The pointer to the mbuf.
> + * @param ipv6_hdr
> + * The pointer to the contiguous IPv6 header.
> + * @param l4_off
> + * The offset in bytes to start L4 checksum.
> + * @return
> + * Return 0 if the checksum is correct, else -1.
> + */
> +__rte_experimental
> +static inline int
> +rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
> + const struct rte_ipv6_hdr *ipv6_hdr,
> + uint16_t l4_off)
> +{
> + uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
> +
> + if (cksum != 0xffff)
> + return -1;
> +
> + return 0;
> +}
> +
> /** IPv6 fragment extension header. */
> #define RTE_IPV6_EHDR_MF_SHIFT 0
> #define RTE_IPV6_EHDR_MF_MASK 1
> diff --git a/lib/net/version.map b/lib/net/version.map
> index 4f4330d1c4..0f2aacdef8 100644
> --- a/lib/net/version.map
> +++ b/lib/net/version.map
> @@ -12,3 +12,13 @@ DPDK_22 {
>
> local: *;
> };
> +
> +EXPERIMENTAL {
> + global:
> +
> + # added in 22.03
> + rte_ipv4_udptcp_cksum_mbuf;
> + rte_ipv4_udptcp_cksum_mbuf_verify;
> + rte_ipv6_udptcp_cksum_mbuf;
> + rte_ipv6_udptcp_cksum_mbuf_verify;
> +};
^ permalink raw reply [relevance 0%]
* [PATCH 2/2] doc: update LTS release cadence
@ 2021-12-13 16:48 5% ` Kevin Traynor
0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2021-12-13 16:48 UTC (permalink / raw)
To: dev, christian.ehrhardt, xuemingl; +Cc: bluca, Kevin Traynor
Regular LTS releases have previously aligned to DPDK main branch
releases so that fixes being backported have already gone through
DPDK main branch release validation.
Now that DPDK main branch has moved to 3 releases per year, the LTS
releases should continue to align with it and follow a similar release
cadence.
Update stable docs to reflect this.
Signed-off-by: Kevin Traynor <ktraynor@redhat.com>
---
doc/guides/contributing/stable.rst | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/doc/guides/contributing/stable.rst b/doc/guides/contributing/stable.rst
index 69d8312b47..9ee7b4b7cc 100644
--- a/doc/guides/contributing/stable.rst
+++ b/doc/guides/contributing/stable.rst
@@ -39,5 +39,5 @@ A Stable Release is used to backport fixes from an ``N`` release back to an
``N-1`` release, for example, from 16.11 to 16.07.
-The duration of a stable is one complete release cycle (3 months). It can be
+The duration of a stable is one complete release cycle (4 months). It can be
longer, up to 1 year, if a maintainer continues to support the stable branch,
or if users supply backported fixes, however the explicit commitment should be
@@ -62,6 +62,8 @@ A LTS release may align with the declaration of a new major ABI version,
please read the :doc:`abi_policy` for more information.
-It is anticipated that there will be at least 4 releases per year of the LTS
-or approximately 1 every 3 months. However, the cadence can be shorter or
+It is anticipated that there will be at least 3 releases per year of the LTS
+or approximately 1 every 4 months. This is done to align with the DPDK main
+branch releases so that fixes have already gone through validation as part of
+the DPDK main branch release validation. However, the cadence can be shorter or
longer depending on the number and criticality of the backported
fixes. Releases should be coordinated with the validation engineers to ensure
--
2.31.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers jerinj
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 4/5] regex/cn9k: use cnxk infrastructure jerinj
@ 2021-12-11 9:04 1% ` jerinj
1 sibling, 0 replies; 200+ results
From: jerinj @ 2021-12-11 9:04 UTC (permalink / raw)
To: dev, Thomas Monjalon, Akhil Goyal, Declan Doherty, Jerin Jacob,
Ruifeng Wang, Jan Viktorin, Bruce Richardson, Ray Kinsella,
Ankur Dwivedi, Anoob Joseph, Radha Mohan Chintakuntla,
Veerasenareddy Burru, Pavan Nikhilesh, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Nalla Pradeep,
Ciara Power, Shijith Thotton, Ashwin Sekhar T K, Anatoly Burakov
Cc: david.marchand, ferruh.yigit
From: Jerin Jacob <jerinj@marvell.com>
As per the deprecation notice, and in view of enabling a unified driver
for octeontx2(cn9k)/octeontx3(cn10k), remove the drivers/octeontx2
drivers and replace them with drivers/cnxk/, which
supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
This patch does the following:
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename config/arm/arm64_octeontx2_linux_gcc as
config/arm/arm64_cn9k_linux_gcc
- Update the documentation and MAINTAINERS to reflect the same.
- Change the reference to OCTEONTX2 as OCTEON 9. Old release notes and
the kernel related documentation is not accounted for this change.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
MAINTAINERS | 37 -
app/test/meson.build | 1 -
app/test/test_cryptodev.c | 7 -
app/test/test_cryptodev.h | 1 -
app/test/test_cryptodev_asym.c | 17 -
app/test/test_eventdev.c | 8 -
config/arm/arm64_cn10k_linux_gcc | 1 -
...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
config/arm/meson.build | 10 +-
devtools/check-abi.sh | 2 +-
doc/guides/cryptodevs/features/octeontx2.ini | 87 -
doc/guides/cryptodevs/index.rst | 1 -
doc/guides/cryptodevs/octeontx2.rst | 188 -
doc/guides/dmadevs/cnxk.rst | 2 +-
doc/guides/eventdevs/features/octeontx2.ini | 30 -
doc/guides/eventdevs/index.rst | 1 -
doc/guides/eventdevs/octeontx2.rst | 178 -
doc/guides/mempool/index.rst | 1 -
doc/guides/mempool/octeontx2.rst | 92 -
doc/guides/nics/cnxk.rst | 4 +-
doc/guides/nics/features/octeontx2.ini | 97 -
doc/guides/nics/features/octeontx2_vec.ini | 48 -
doc/guides/nics/features/octeontx2_vf.ini | 45 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/octeontx2.rst | 465 ---
doc/guides/nics/octeontx_ep.rst | 4 +-
doc/guides/platform/cnxk.rst | 12 +
.../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
.../img/octeontx2_resource_virtualization.svg | 2418 ------------
doc/guides/platform/index.rst | 1 -
doc/guides/platform/octeontx2.rst | 520 ---
doc/guides/rel_notes/deprecation.rst | 17 -
doc/guides/rel_notes/release_19_08.rst | 8 +-
doc/guides/rel_notes/release_19_11.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 1 -
drivers/common/meson.build | 1 -
drivers/common/octeontx2/hw/otx2_nix.h | 1391 -------
drivers/common/octeontx2/hw/otx2_npa.h | 305 --
drivers/common/octeontx2/hw/otx2_npc.h | 503 ---
drivers/common/octeontx2/hw/otx2_ree.h | 27 -
drivers/common/octeontx2/hw/otx2_rvu.h | 219 --
drivers/common/octeontx2/hw/otx2_sdp.h | 184 -
drivers/common/octeontx2/hw/otx2_sso.h | 209 --
drivers/common/octeontx2/hw/otx2_ssow.h | 56 -
drivers/common/octeontx2/hw/otx2_tim.h | 34 -
drivers/common/octeontx2/meson.build | 24 -
drivers/common/octeontx2/otx2_common.c | 216 --
drivers/common/octeontx2/otx2_common.h | 179 -
drivers/common/octeontx2/otx2_dev.c | 1074 ------
drivers/common/octeontx2/otx2_dev.h | 161 -
drivers/common/octeontx2/otx2_io_arm64.h | 114 -
drivers/common/octeontx2/otx2_io_generic.h | 75 -
drivers/common/octeontx2/otx2_irq.c | 288 --
drivers/common/octeontx2/otx2_irq.h | 28 -
drivers/common/octeontx2/otx2_mbox.c | 465 ---
drivers/common/octeontx2/otx2_mbox.h | 1958 ----------
drivers/common/octeontx2/otx2_sec_idev.c | 183 -
drivers/common/octeontx2/otx2_sec_idev.h | 43 -
drivers/common/octeontx2/version.map | 44 -
drivers/crypto/meson.build | 1 -
drivers/crypto/octeontx2/meson.build | 30 -
drivers/crypto/octeontx2/otx2_cryptodev.c | 188 -
drivers/crypto/octeontx2/otx2_cryptodev.h | 63 -
.../octeontx2/otx2_cryptodev_capabilities.c | 924 -----
.../octeontx2/otx2_cryptodev_capabilities.h | 45 -
.../octeontx2/otx2_cryptodev_hw_access.c | 225 --
.../octeontx2/otx2_cryptodev_hw_access.h | 161 -
.../crypto/octeontx2/otx2_cryptodev_mbox.c | 285 --
.../crypto/octeontx2/otx2_cryptodev_mbox.h | 37 -
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 1438 -------
drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 15 -
.../octeontx2/otx2_cryptodev_ops_helper.h | 82 -
drivers/crypto/octeontx2/otx2_cryptodev_qp.h | 46 -
drivers/crypto/octeontx2/otx2_cryptodev_sec.c | 655 ----
drivers/crypto/octeontx2/otx2_cryptodev_sec.h | 64 -
.../crypto/octeontx2/otx2_ipsec_anti_replay.h | 227 --
drivers/crypto/octeontx2/otx2_ipsec_fp.h | 371 --
drivers/crypto/octeontx2/otx2_ipsec_po.h | 447 ---
drivers/crypto/octeontx2/otx2_ipsec_po_ops.h | 167 -
drivers/crypto/octeontx2/otx2_security.h | 37 -
drivers/crypto/octeontx2/version.map | 13 -
drivers/event/cnxk/cn9k_eventdev.c | 10 +
drivers/event/meson.build | 1 -
drivers/event/octeontx2/meson.build | 26 -
drivers/event/octeontx2/otx2_evdev.c | 1900 ----------
drivers/event/octeontx2/otx2_evdev.h | 430 ---
drivers/event/octeontx2/otx2_evdev_adptr.c | 656 ----
.../event/octeontx2/otx2_evdev_crypto_adptr.c | 132 -
.../octeontx2/otx2_evdev_crypto_adptr_rx.h | 77 -
.../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 -
drivers/event/octeontx2/otx2_evdev_irq.c | 272 --
drivers/event/octeontx2/otx2_evdev_selftest.c | 1517 --------
drivers/event/octeontx2/otx2_evdev_stats.h | 286 --
drivers/event/octeontx2/otx2_tim_evdev.c | 735 ----
drivers/event/octeontx2/otx2_tim_evdev.h | 256 --
drivers/event/octeontx2/otx2_tim_worker.c | 192 -
drivers/event/octeontx2/otx2_tim_worker.h | 598 ---
drivers/event/octeontx2/otx2_worker.c | 372 --
drivers/event/octeontx2/otx2_worker.h | 339 --
drivers/event/octeontx2/otx2_worker_dual.c | 345 --
drivers/event/octeontx2/otx2_worker_dual.h | 110 -
drivers/event/octeontx2/version.map | 3 -
drivers/mempool/cnxk/cnxk_mempool.c | 56 +-
drivers/mempool/meson.build | 1 -
drivers/mempool/octeontx2/meson.build | 18 -
drivers/mempool/octeontx2/otx2_mempool.c | 457 ---
drivers/mempool/octeontx2/otx2_mempool.h | 221 --
.../mempool/octeontx2/otx2_mempool_debug.c | 135 -
drivers/mempool/octeontx2/otx2_mempool_irq.c | 303 --
drivers/mempool/octeontx2/otx2_mempool_ops.c | 901 -----
drivers/mempool/octeontx2/version.map | 8 -
drivers/net/cnxk/cn9k_ethdev.c | 15 +
drivers/net/meson.build | 1 -
drivers/net/octeontx2/meson.build | 47 -
drivers/net/octeontx2/otx2_ethdev.c | 2814 --------------
drivers/net/octeontx2/otx2_ethdev.h | 619 ---
drivers/net/octeontx2/otx2_ethdev_debug.c | 811 ----
drivers/net/octeontx2/otx2_ethdev_devargs.c | 215 --
drivers/net/octeontx2/otx2_ethdev_irq.c | 493 ---
drivers/net/octeontx2/otx2_ethdev_ops.c | 589 ---
drivers/net/octeontx2/otx2_ethdev_sec.c | 923 -----
drivers/net/octeontx2/otx2_ethdev_sec.h | 130 -
drivers/net/octeontx2/otx2_ethdev_sec_tx.h | 182 -
drivers/net/octeontx2/otx2_flow.c | 1189 ------
drivers/net/octeontx2/otx2_flow.h | 414 --
drivers/net/octeontx2/otx2_flow_ctrl.c | 252 --
drivers/net/octeontx2/otx2_flow_dump.c | 595 ---
drivers/net/octeontx2/otx2_flow_parse.c | 1239 ------
drivers/net/octeontx2/otx2_flow_utils.c | 969 -----
drivers/net/octeontx2/otx2_link.c | 287 --
drivers/net/octeontx2/otx2_lookup.c | 352 --
drivers/net/octeontx2/otx2_mac.c | 151 -
drivers/net/octeontx2/otx2_mcast.c | 339 --
drivers/net/octeontx2/otx2_ptp.c | 450 ---
drivers/net/octeontx2/otx2_rss.c | 427 ---
drivers/net/octeontx2/otx2_rx.c | 430 ---
drivers/net/octeontx2/otx2_rx.h | 583 ---
drivers/net/octeontx2/otx2_stats.c | 397 --
drivers/net/octeontx2/otx2_tm.c | 3317 -----------------
drivers/net/octeontx2/otx2_tm.h | 176 -
drivers/net/octeontx2/otx2_tx.c | 1077 ------
drivers/net/octeontx2/otx2_tx.h | 791 ----
drivers/net/octeontx2/otx2_vlan.c | 1035 -----
drivers/net/octeontx2/version.map | 3 -
drivers/net/octeontx_ep/otx2_ep_vf.h | 2 +-
drivers/net/octeontx_ep/otx_ep_common.h | 16 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 10 +-
usertools/dpdk-devbind.py | 12 +-
149 files changed, 92 insertions(+), 52124 deletions(-)
rename config/arm/{arm64_octeontx2_linux_gcc => arm64_cn9k_linux_gcc} (84%)
delete mode 100644 doc/guides/cryptodevs/features/octeontx2.ini
delete mode 100644 doc/guides/cryptodevs/octeontx2.rst
delete mode 100644 doc/guides/eventdevs/features/octeontx2.ini
delete mode 100644 doc/guides/eventdevs/octeontx2.rst
delete mode 100644 doc/guides/mempool/octeontx2.rst
delete mode 100644 doc/guides/nics/features/octeontx2.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vec.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vf.ini
delete mode 100644 doc/guides/nics/octeontx2.rst
delete mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
delete mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg
delete mode 100644 doc/guides/platform/octeontx2.rst
delete mode 100644 drivers/common/octeontx2/hw/otx2_nix.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npa.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npc.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ree.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sdp.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sso.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_tim.h
delete mode 100644 drivers/common/octeontx2/meson.build
delete mode 100644 drivers/common/octeontx2/otx2_common.c
delete mode 100644 drivers/common/octeontx2/otx2_common.h
delete mode 100644 drivers/common/octeontx2/otx2_dev.c
delete mode 100644 drivers/common/octeontx2/otx2_dev.h
delete mode 100644 drivers/common/octeontx2/otx2_io_arm64.h
delete mode 100644 drivers/common/octeontx2/otx2_io_generic.h
delete mode 100644 drivers/common/octeontx2/otx2_irq.c
delete mode 100644 drivers/common/octeontx2/otx2_irq.h
delete mode 100644 drivers/common/octeontx2/otx2_mbox.c
delete mode 100644 drivers/common/octeontx2/otx2_mbox.h
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.c
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.h
delete mode 100644 drivers/common/octeontx2/version.map
delete mode 100644 drivers/crypto/octeontx2/meson.build
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_qp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_fp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_security.h
delete mode 100644 drivers/crypto/octeontx2/version.map
delete mode 100644 drivers/event/octeontx2/meson.build
delete mode 100644 drivers/event/octeontx2/otx2_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.c
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.h
delete mode 100644 drivers/event/octeontx2/version.map
delete mode 100644 drivers/mempool/octeontx2/meson.build
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.h
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c
delete mode 100644 drivers/mempool/octeontx2/version.map
delete mode 100644 drivers/net/octeontx2/meson.build
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_flow.c
delete mode 100644 drivers/net/octeontx2/otx2_flow.h
delete mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_dump.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
delete mode 100644 drivers/net/octeontx2/otx2_link.c
delete mode 100644 drivers/net/octeontx2/otx2_lookup.c
delete mode 100644 drivers/net/octeontx2/otx2_mac.c
delete mode 100644 drivers/net/octeontx2/otx2_mcast.c
delete mode 100644 drivers/net/octeontx2/otx2_ptp.c
delete mode 100644 drivers/net/octeontx2/otx2_rss.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.h
delete mode 100644 drivers/net/octeontx2/otx2_stats.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.h
delete mode 100644 drivers/net/octeontx2/otx2_tx.c
delete mode 100644 drivers/net/octeontx2/otx2_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_vlan.c
delete mode 100644 drivers/net/octeontx2/version.map
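After applying the removal, a tree-wide search can confirm that no stray
octeontx2 driver sources remain; a hedged sketch (drivers/net/octeontx_ep/
legitimately keeps otx2-prefixed files, as seen in the diffstat above, so
it is excluded):

    git grep -l "octeontx2" -- drivers ':!drivers/net/octeontx_ep'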
diff --git a/MAINTAINERS b/MAINTAINERS
index 854b81f2a3..336bbb3547 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -534,15 +534,6 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst
-Marvell OCTEON TX2
-M: Jerin Jacob <jerinj@marvell.com>
-M: Nithin Dabilpuram <ndabilpuram@marvell.com>
-F: drivers/common/octeontx2/
-F: drivers/mempool/octeontx2/
-F: doc/guides/platform/img/octeontx2_*
-F: doc/guides/platform/octeontx2.rst
-F: doc/guides/mempool/octeontx2.rst
-
Bus Drivers
-----------
@@ -795,21 +786,6 @@ F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
F: doc/guides/nics/features/mvneta.ini
-Marvell OCTEON TX2
-M: Jerin Jacob <jerinj@marvell.com>
-M: Nithin Dabilpuram <ndabilpuram@marvell.com>
-M: Kiran Kumar K <kirankumark@marvell.com>
-T: git://dpdk.org/next/dpdk-next-net-mrvl
-F: drivers/net/octeontx2/
-F: doc/guides/nics/features/octeontx2*.ini
-F: doc/guides/nics/octeontx2.rst
-
-Marvell OCTEON TX2 - security
-M: Anoob Joseph <anoobj@marvell.com>
-T: git://dpdk.org/next/dpdk-next-crypto
-F: drivers/common/octeontx2/otx2_sec*
-F: drivers/net/octeontx2/otx2_ethdev_sec*
-
Marvell OCTEON TX EP - endpoint
M: Nalla Pradeep <pnalla@marvell.com>
M: Radha Mohan Chintakuntla <radhac@marvell.com>
@@ -1115,13 +1091,6 @@ F: drivers/crypto/nitrox/
F: doc/guides/cryptodevs/nitrox.rst
F: doc/guides/cryptodevs/features/nitrox.ini
-Marvell OCTEON TX2 crypto
-M: Ankur Dwivedi <adwivedi@marvell.com>
-M: Anoob Joseph <anoobj@marvell.com>
-F: drivers/crypto/octeontx2/
-F: doc/guides/cryptodevs/octeontx2.rst
-F: doc/guides/cryptodevs/features/octeontx2.ini
-
Mellanox mlx5
M: Matan Azrad <matan@nvidia.com>
F: drivers/crypto/mlx5/
@@ -1298,12 +1267,6 @@ M: Shijith Thotton <sthotton@marvell.com>
F: drivers/event/cnxk/
F: doc/guides/eventdevs/cnxk.rst
-Marvell OCTEON TX2
-M: Pavan Nikhilesh <pbhagavatula@marvell.com>
-M: Jerin Jacob <jerinj@marvell.com>
-F: drivers/event/octeontx2/
-F: doc/guides/eventdevs/octeontx2.rst
-
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 2b480adfba..344a609a4d 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -341,7 +341,6 @@ driver_test_names = [
'cryptodev_dpaa_sec_autotest',
'cryptodev_dpaa2_sec_autotest',
'cryptodev_null_autotest',
- 'cryptodev_octeontx2_autotest',
'cryptodev_openssl_autotest',
'cryptodev_openssl_asym_autotest',
'cryptodev_qat_autotest',
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 10b48cdadb..293f59b48c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -15615,12 +15615,6 @@ test_cryptodev_octeontx(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX_SYM_PMD));
}
-static int
-test_cryptodev_octeontx2(void)
-{
- return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD));
-}
-
static int
test_cryptodev_caam_jr(void)
{
@@ -15733,7 +15727,6 @@ REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
REGISTER_TEST_COMMAND(cryptodev_ccp_autotest, test_cryptodev_ccp);
REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio);
REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
-REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 90c8287365..70f23a3f67 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -68,7 +68,6 @@
#define CRYPTODEV_NAME_CCP_PMD crypto_ccp
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
#define CRYPTODEV_NAME_OCTEONTX_SYM_PMD crypto_octeontx
-#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9d19a6d6d9..68f4d8e7a6 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -2375,20 +2375,6 @@ test_cryptodev_octeontx_asym(void)
return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
}
-static int
-test_cryptodev_octeontx2_asym(void)
-{
- gbl_driver_id = rte_cryptodev_driver_id_get(
- RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD));
- if (gbl_driver_id == -1) {
- RTE_LOG(ERR, USER1, "OCTEONTX2 PMD must be loaded.\n");
- return TEST_FAILED;
- }
-
- /* Use test suite registered for crypto_octeontx PMD */
- return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
-}
-
static int
test_cryptodev_cn9k_asym(void)
{
@@ -2424,8 +2410,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_TEST_COMMAND(cryptodev_octeontx_asym_autotest,
test_cryptodev_octeontx_asym);
-
-REGISTER_TEST_COMMAND(cryptodev_octeontx2_asym_autotest,
- test_cryptodev_octeontx2_asym);
REGISTER_TEST_COMMAND(cryptodev_cn9k_asym_autotest, test_cryptodev_cn9k_asym);
REGISTER_TEST_COMMAND(cryptodev_cn10k_asym_autotest, test_cryptodev_cn10k_asym);
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 843d9766b0..10028fe11d 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1018,12 +1018,6 @@ test_eventdev_selftest_octeontx(void)
return test_eventdev_selftest_impl("event_octeontx", "");
}
-static int
-test_eventdev_selftest_octeontx2(void)
-{
- return test_eventdev_selftest_impl("event_octeontx2", "");
-}
-
static int
test_eventdev_selftest_dpaa2(void)
{
@@ -1052,8 +1046,6 @@ REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
test_eventdev_selftest_octeontx);
-REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
- test_eventdev_selftest_octeontx2);
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
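The removed octeontx2 autotest entry points have cn9k equivalents already
registered in the hunks above; a usage sketch in the style of the driver
guides (assuming dpdk-test was built in <build_dir>):

    ./<build_dir>/app/dpdk-test
    RTE>>eventdev_selftest_cn9k
    RTE>>cryptodev_cn9k_asym_autotest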
diff --git a/config/arm/arm64_cn10k_linux_gcc b/config/arm/arm64_cn10k_linux_gcc
index 88e5f10945..a3578c03a1 100644
--- a/config/arm/arm64_cn10k_linux_gcc
+++ b/config/arm/arm64_cn10k_linux_gcc
@@ -14,4 +14,3 @@ endian = 'little'
[properties]
platform = 'cn10k'
-disable_drivers = 'common/octeontx2'
diff --git a/config/arm/arm64_octeontx2_linux_gcc b/config/arm/arm64_cn9k_linux_gcc
similarity index 84%
rename from config/arm/arm64_octeontx2_linux_gcc
rename to config/arm/arm64_cn9k_linux_gcc
index 8fbdd3868d..a94b44a551 100644
--- a/config/arm/arm64_octeontx2_linux_gcc
+++ b/config/arm/arm64_cn9k_linux_gcc
@@ -13,5 +13,4 @@ cpu = 'armv8-a'
endian = 'little'
[properties]
-platform = 'octeontx2'
-disable_drivers = 'common/cnxk'
+platform = 'cn9k'
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 213324d262..16e808cdd5 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -139,7 +139,7 @@ implementer_cavium = {
'march_features': ['crc', 'crypto', 'lse'],
'compiler_options': ['-mcpu=octeontx2'],
'flags': [
- ['RTE_MACHINE', '"octeontx2"'],
+ ['RTE_MACHINE', '"cn9k"'],
['RTE_ARM_FEATURE_ATOMICS', true],
['RTE_USE_C11_MEM_MODEL', true],
['RTE_MAX_LCORE', 36],
@@ -340,8 +340,8 @@ soc_n2 = {
'numa': false
}
-soc_octeontx2 = {
- 'description': 'Marvell OCTEON TX2',
+soc_cn9k = {
+ 'description': 'Marvell OCTEON 9',
'implementer': '0x43',
'part_number': '0xb2',
'numa': false
@@ -377,6 +377,7 @@ generic_aarch32: Generic un-optimized build for armv8 aarch32 execution mode.
armada: Marvell ARMADA
bluefield: NVIDIA BlueField
centriq2400: Qualcomm Centriq 2400
+cn9k: Marvell OCTEON 9
cn10k: Marvell OCTEON 10
dpaa: NXP DPAA
emag: Ampere eMAG
@@ -385,7 +386,6 @@ kunpeng920: HiSilicon Kunpeng 920
kunpeng930: HiSilicon Kunpeng 930
n1sdp: Arm Neoverse N1SDP
n2: Arm Neoverse N2
-octeontx2: Marvell OCTEON TX2
stingray: Broadcom Stingray
thunderx2: Marvell ThunderX2 T99
thunderxt88: Marvell ThunderX T88
@@ -399,6 +399,7 @@ socs = {
'armada': soc_armada,
'bluefield': soc_bluefield,
'centriq2400': soc_centriq2400,
+ 'cn9k': soc_cn9k,
'cn10k' : soc_cn10k,
'dpaa': soc_dpaa,
'emag': soc_emag,
@@ -407,7 +408,6 @@ socs = {
'kunpeng930': soc_kunpeng930,
'n1sdp': soc_n1sdp,
'n2': soc_n2,
- 'octeontx2': soc_octeontx2,
'stingray': soc_stingray,
'thunderx2': soc_thunderx2,
'thunderxt88': soc_thunderxt88
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index 5e654189a8..675f10142e 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -48,7 +48,7 @@ for dump in $(find $refdir -name "*.dump"); do
echo "Skipped removed driver $name."
continue
fi
- if grep -qE "\<librte_regex_octeontx2" $dump; then
+ if grep -qE "\<librte_*.*_octeontx2" $dump; then
echo "Skipped removed driver $name."
continue
fi
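The broadened grep pattern above is meant to skip ABI dumps for any removed
octeontx2 library, not only the previously removed regex driver; an
illustrative check (dump names assumed to follow librte_<class>_<driver>,
and \< is the GNU grep word-boundary extension):

    printf '%s\n' librte_regex_octeontx2 librte_net_octeontx2 librte_common_cnxk \
        | grep -E "\<librte_*.*_octeontx2"
    # prints: librte_regex_octeontx2
    #         librte_net_octeontx2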
diff --git a/doc/guides/cryptodevs/features/octeontx2.ini b/doc/guides/cryptodevs/features/octeontx2.ini
deleted file mode 100644
index c54dc9409c..0000000000
--- a/doc/guides/cryptodevs/features/octeontx2.ini
+++ /dev/null
@@ -1,87 +0,0 @@
-;
-; Supported features of the 'octeontx2' crypto driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Symmetric crypto = Y
-Asymmetric crypto = Y
-Sym operation chaining = Y
-HW Accelerated = Y
-Protocol offload = Y
-In Place SGL = Y
-OOP SGL In LB Out = Y
-OOP SGL In SGL Out = Y
-OOP LB In LB Out = Y
-RSA PRIV OP KEY QT = Y
-Digest encrypted = Y
-Symmetric sessionless = Y
-
-;
-; Supported crypto algorithms of 'octeontx2' crypto driver.
-;
-[Cipher]
-NULL = Y
-3DES CBC = Y
-3DES ECB = Y
-AES CBC (128) = Y
-AES CBC (192) = Y
-AES CBC (256) = Y
-AES CTR (128) = Y
-AES CTR (192) = Y
-AES CTR (256) = Y
-AES XTS (128) = Y
-AES XTS (256) = Y
-DES CBC = Y
-KASUMI F8 = Y
-SNOW3G UEA2 = Y
-ZUC EEA3 = Y
-
-;
-; Supported authentication algorithms of 'octeontx2' crypto driver.
-;
-[Auth]
-NULL = Y
-AES GMAC = Y
-KASUMI F9 = Y
-MD5 = Y
-MD5 HMAC = Y
-SHA1 = Y
-SHA1 HMAC = Y
-SHA224 = Y
-SHA224 HMAC = Y
-SHA256 = Y
-SHA256 HMAC = Y
-SHA384 = Y
-SHA384 HMAC = Y
-SHA512 = Y
-SHA512 HMAC = Y
-SNOW3G UIA2 = Y
-ZUC EIA3 = Y
-
-;
-; Supported AEAD algorithms of 'octeontx2' crypto driver.
-;
-[AEAD]
-AES GCM (128) = Y
-AES GCM (192) = Y
-AES GCM (256) = Y
-CHACHA20-POLY1305 = Y
-
-;
-; Supported Asymmetric algorithms of the 'octeontx2' crypto driver.
-;
-[Asymmetric]
-RSA = Y
-DSA =
-Modular Exponentiation = Y
-Modular Inversion =
-Diffie-hellman =
-ECDSA = Y
-ECPM = Y
-
-;
-; Supported Operating systems of the 'octeontx2' crypto driver.
-;
-[OS]
-Linux = Y
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 3dcc2ecd2e..39cca6dbde 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -22,7 +22,6 @@ Crypto Device Drivers
dpaa_sec
kasumi
octeontx
- octeontx2
openssl
mlx5
mvsam
diff --git a/doc/guides/cryptodevs/octeontx2.rst b/doc/guides/cryptodevs/octeontx2.rst
deleted file mode 100644
index 811e61a1f6..0000000000
--- a/doc/guides/cryptodevs/octeontx2.rst
+++ /dev/null
@@ -1,188 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-
-Marvell OCTEON TX2 Crypto Poll Mode Driver
-==========================================
-
-The OCTEON TX2 crypto poll mode driver provides support for offloading
-cryptographic operations to cryptographic accelerator units on the
-**OCTEON TX2** :sup:`®` family of processors (CN9XXX).
-
-More information about OCTEON TX2 SoCs may be obtained from `<https://www.marvell.com>`_
-
-Features
---------
-
-The OCTEON TX2 crypto PMD has support for:
-
-Symmetric Crypto Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Cipher algorithms:
-
-* ``RTE_CRYPTO_CIPHER_NULL``
-* ``RTE_CRYPTO_CIPHER_3DES_CBC``
-* ``RTE_CRYPTO_CIPHER_3DES_ECB``
-* ``RTE_CRYPTO_CIPHER_AES_CBC``
-* ``RTE_CRYPTO_CIPHER_AES_CTR``
-* ``RTE_CRYPTO_CIPHER_AES_XTS``
-* ``RTE_CRYPTO_CIPHER_DES_CBC``
-* ``RTE_CRYPTO_CIPHER_KASUMI_F8``
-* ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2``
-* ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
-
-Hash algorithms:
-
-* ``RTE_CRYPTO_AUTH_NULL``
-* ``RTE_CRYPTO_AUTH_AES_GMAC``
-* ``RTE_CRYPTO_AUTH_KASUMI_F9``
-* ``RTE_CRYPTO_AUTH_MD5``
-* ``RTE_CRYPTO_AUTH_MD5_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA1``
-* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA224``
-* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA256``
-* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA384``
-* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA512``
-* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
-* ``RTE_CRYPTO_AUTH_SNOW3G_UIA2``
-* ``RTE_CRYPTO_AUTH_ZUC_EIA3``
-
-AEAD algorithms:
-
-* ``RTE_CRYPTO_AEAD_AES_GCM``
-* ``RTE_CRYPTO_AEAD_CHACHA20_POLY1305``
-
-Asymmetric Crypto Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-* ``RTE_CRYPTO_ASYM_XFORM_RSA``
-* ``RTE_CRYPTO_ASYM_XFORM_MODEX``
-
-
-Installation
-------------
-
-The OCTEON TX2 crypto PMD may be compiled natively on an OCTEON TX2 platform or
-cross-compiled on an x86 platform.
-
-Refer to :doc:`../platform/octeontx2` for instructions to build your DPDK
-application.
-
-.. note::
-
- The OCTEON TX2 crypto PMD uses services from the kernel mode OCTEON TX2
- crypto PF driver in linux. This driver is included in the OCTEON TX SDK.
-
-Initialization
---------------
-
-List the CPT PF devices available on your OCTEON TX2 platform:
-
-.. code-block:: console
-
- lspci -d:a0fd
-
-``a0fd`` is the CPT PF device id. You should see output similar to:
-
-.. code-block:: console
-
- 0002:10:00.0 Class 1080: Device 177d:a0fd
-
-Set ``sriov_numvfs`` on the CPT PF device, to create a VF:
-
-.. code-block:: console
-
- echo 1 > /sys/bus/pci/drivers/octeontx2-cpt/0002:10:00.0/sriov_numvfs
-
-Bind the CPT VF device to the vfio_pci driver:
-
-.. code-block:: console
-
- echo '177d a0fe' > /sys/bus/pci/drivers/vfio-pci/new_id
- echo 0002:10:00.1 > /sys/bus/pci/devices/0002:10:00.1/driver/unbind
- echo 0002:10:00.1 > /sys/bus/pci/drivers/vfio-pci/bind
-
-Another way to bind the VF would be to use the ``dpdk-devbind.py`` script:
-
-.. code-block:: console
-
- cd <dpdk directory>
- ./usertools/dpdk-devbind.py -u 0002:10:00.1
- ./usertools/dpdk-devbind.py -b vfio-pci 0002:10.00.1
-
-.. note::
-
- * For CN98xx SoC, it is recommended to use even and odd DBDF VFs to achieve
- higher performance as even VF uses one crypto engine and odd one uses
- another crypto engine.
-
- * Ensure that sufficient huge pages are available for your application::
-
- dpdk-hugepages.py --setup 4G --pagesize 512M
-
- Refer to :ref:`linux_gsg_hugepages` for more details.
-
-Debugging Options
------------------
-
-.. _table_octeontx2_crypto_debug_options:
-
-.. table:: OCTEON TX2 crypto PMD debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | CPT | --log-level='pmd\.crypto\.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
-
-Testing
--------
-
-The symmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test
-application:
-
-.. code-block:: console
-
- ./dpdk-test
- RTE>>cryptodev_octeontx2_autotest
-
-The asymmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test
-application:
-
-.. code-block:: console
-
- ./dpdk-test
- RTE>>cryptodev_octeontx2_asym_autotest
-
-
-Lookaside IPsec Support
------------------------
-
-The OCTEON TX2 SoC can accelerate IPsec traffic in lookaside protocol mode,
-with its **cryptographic accelerator (CPT)**. ``OCTEON TX2 crypto PMD`` implements
-this as an ``RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL`` offload.
-
-Refer to :doc:`../prog_guide/rte_security` for more details on protocol offloads.
-
-This feature can be tested with ipsec-secgw sample application.
-
-
-Features supported
-~~~~~~~~~~~~~~~~~~
-
-* IPv4
-* IPv6
-* ESP
-* Tunnel mode
-* Transport mode(IPv4)
-* ESN
-* Anti-replay
-* UDP Encapsulation
-* AES-128/192/256-GCM
-* AES-128/192/256-CBC-SHA1-HMAC
-* AES-128/192/256-CBC-SHA256-128-HMAC
diff --git a/doc/guides/dmadevs/cnxk.rst b/doc/guides/dmadevs/cnxk.rst
index da2dd59071..418b9a9d63 100644
--- a/doc/guides/dmadevs/cnxk.rst
+++ b/doc/guides/dmadevs/cnxk.rst
@@ -7,7 +7,7 @@ CNXK DMA Device Driver
======================
The ``cnxk`` dmadev driver provides a poll-mode driver (PMD) for Marvell DPI DMA
-Hardware Accelerator block found in OCTEONTX2 and OCTEONTX3 family of SoCs.
+Hardware Accelerator block found in OCTEON 9 and OCTEON 10 family of SoCs.
Each DMA queue is exposed as a VF function when SRIOV is enabled.
The block supports following modes of DMA transfers:
diff --git a/doc/guides/eventdevs/features/octeontx2.ini b/doc/guides/eventdevs/features/octeontx2.ini
deleted file mode 100644
index 05b84beb6e..0000000000
--- a/doc/guides/eventdevs/features/octeontx2.ini
+++ /dev/null
@@ -1,30 +0,0 @@
-;
-; Supported features of the 'octeontx2' eventdev driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Scheduling Features]
-queue_qos = Y
-distributed_sched = Y
-queue_all_types = Y
-nonseq_mode = Y
-runtime_port_link = Y
-multiple_queue_port = Y
-carry_flow_id = Y
-maintenance_free = Y
-
-[Eth Rx adapter Features]
-internal_port = Y
-multi_eventq = Y
-
-[Eth Tx adapter Features]
-internal_port = Y
-
-[Crypto adapter Features]
-internal_port_op_new = Y
-internal_port_op_fwd = Y
-internal_port_qp_ev_bind = Y
-
-[Timer adapter Features]
-internal_port = Y
-periodic = Y
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index b11657f7ae..eed19ad28c 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -19,5 +19,4 @@ application through the eventdev API.
dsw
sw
octeontx
- octeontx2
opdl
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
deleted file mode 100644
index 0fa57abfa3..0000000000
--- a/doc/guides/eventdevs/octeontx2.rst
+++ /dev/null
@@ -1,178 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-OCTEON TX2 SSO Eventdev Driver
-===============================
-
-The OCTEON TX2 SSO PMD (**librte_event_octeontx2**) provides poll mode
-eventdev driver support for the inbuilt event device found in the **Marvell OCTEON TX2**
-SoC family.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Features
---------
-
-Features of the OCTEON TX2 SSO PMD are:
-
-- 256 Event queues
-- 26 (dual) and 52 (single) Event ports
-- HW event scheduler
-- Supports 1M flows per event queue
-- Flow based event pipelining
-- Flow pinning support in flow based event pipelining
-- Queue based event pipelining
-- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
-- Event scheduling QoS based on event queue priority
-- Open system with configurable amount of outstanding events limited only by
- DRAM
-- HW accelerated dequeue timeout support to enable power management
-- HW managed event timers support through TIM, with high precision and
- time granularity of 2.5us.
-- Up to 256 TIM rings aka event timer adapters.
-- Up to 8 rings traversed in parallel.
-- HW managed packets enqueued from ethdev to eventdev exposed through event eth
- RX adapter.
-- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
- capability while maintaining receive packet order.
-- Full Rx/Tx offload support defined through ethdev queue config.
-
-Prerequisites and Compilation procedure
----------------------------------------
-
- See :doc:`../platform/octeontx2` for setup information.
-
-
-Runtime Config Options
-----------------------
-
-- ``Maximum number of in-flight events`` (default ``8192``)
-
- In **Marvell OCTEON TX2** the max number of in-flight events are only limited
- by DRAM size, the ``xae_cnt`` devargs parameter is introduced to provide
- upper limit for in-flight events.
- For example::
-
- -a 0002:0e:00.0,xae_cnt=16384
-
-- ``Force legacy mode``
-
- The ``single_ws`` devargs parameter is introduced to force legacy mode i.e
- single workslot mode in SSO and disable the default dual workslot mode.
- For example::
-
- -a 0002:0e:00.0,single_ws=1
-
-- ``Event Group QoS support``
-
- SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
- events. By default the buffers are assigned to the SSO GGRPs to
- satisfy minimum HW requirements. SSO is free to assign the remaining
- buffers to GGRPs based on a preconfigured threshold.
- We can control the QoS of SSO GGRP by modifying the above mentioned
- thresholds. GGRPs that have higher importance can be assigned higher
- thresholds than the rest. The dictionary format is as follows
- [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] expressed in percentages, 0 represents
- default.
- For example::
-
- -a 0002:0e:00.0,qos=[1-50-50-50]
-
-- ``TIM disable NPA``
-
- By default chunks are allocated from NPA then TIM can automatically free
- them when traversing the list of chunks. The ``tim_disable_npa`` devargs
- parameter disables NPA and uses software mempool to manage chunks
- For example::
-
- -a 0002:0e:00.0,tim_disable_npa=1
-
-- ``TIM modify chunk slots``
-
- The ``tim_chnk_slots`` devargs can be used to modify number of chunk slots.
- Chunks are used to store event timers, a chunk can be visualised as an array
- where the last element points to the next chunk and rest of them are used to
- store events. TIM traverses the list of chunks and enqueues the event timers
- to SSO. The default value is 255 and the max value is 4095.
- For example::
-
- -a 0002:0e:00.0,tim_chnk_slots=1023
-
-- ``TIM enable arm/cancel statistics``
-
- The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
- event timer adapter.
- For example::
-
- -a 0002:0e:00.0,tim_stats_ena=1
-
-- ``TIM limit max rings reserved``
-
- The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
- rings i.e. event timer adapter reserved on probe. Since, TIM rings are HW
- resources we can avoid starving other applications by not grabbing all the
- rings.
- For example::
-
- -a 0002:0e:00.0,tim_rings_lmt=5
-
-- ``TIM ring control internal parameters``
-
- When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to
- control each TIM rings internal parameters uniquely. The following dict
- format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents
- default values.
- For Example::
-
- -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:0e:00.0,npa_lock_mask=0xf
-
-- ``Force Rx Back pressure``
-
- Force Rx back pressure when same mempool is used across ethernet device
- connected to event device.
-
- For example::
-
- -a 0002:0e:00.0,force_rx_bp=1
-
-Debugging Options
------------------
-
-.. _table_octeontx2_event_debug_options:
-
-.. table:: OCTEON TX2 event device debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | SSO | --log-level='pmd\.event\.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | TIM | --log-level='pmd\.event\.octeontx2\.timer,8' |
- +---+------------+-------------------------------------------------------+
-
-Limitations
------------
-
-Rx adapter support
-~~~~~~~~~~~~~~~~~~
-
-Using the same mempool for all the ethernet device ports connected to
-event device would cause back pressure to be asserted only on the first
-ethernet device.
-Back pressure is automatically disabled when using same mempool for all the
-ethernet devices connected to event device to override this applications can
-use `force_rx_bp=1` device arguments.
-Using unique mempool per each ethernet device is recommended when they are
-connected to event device.
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
index ce53bc1ac7..e4b6ee7d31 100644
--- a/doc/guides/mempool/index.rst
+++ b/doc/guides/mempool/index.rst
@@ -13,6 +13,5 @@ application through the mempool API.
cnxk
octeontx
- octeontx2
ring
stack
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
deleted file mode 100644
index 1272c1e72b..0000000000
--- a/doc/guides/mempool/octeontx2.rst
+++ /dev/null
@@ -1,92 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-OCTEON TX2 NPA Mempool Driver
-=============================
-
-The OCTEON TX2 NPA PMD (**librte_mempool_octeontx2**) provides mempool
-driver support for the integrated mempool device found in **Marvell OCTEON TX2** SoC family.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Features
---------
-
-OCTEON TX2 NPA PMD supports:
-
-- Up to 128 NPA LFs
-- 1M Pools per LF
-- HW mempool manager
-- Ethdev Rx buffer allocation in HW to save CPU cycles in the Rx path.
-- Ethdev Tx buffer recycling in HW to save CPU cycles in the Tx path.
-
-Prerequisites and Compilation procedure
----------------------------------------
-
- See :doc:`../platform/octeontx2` for setup information.
-
-Pre-Installation Configuration
-------------------------------
-
-
-Runtime Config Options
-~~~~~~~~~~~~~~~~~~~~~~
-
-- ``Maximum number of mempools per application`` (default ``128``)
-
- The maximum number of mempools per application needs to be configured on
- HW during mempool driver initialization. HW can support up to 1M mempools,
- Since each mempool costs set of HW resources, the ``max_pools`` ``devargs``
- parameter is being introduced to configure the number of mempools required
- for the application.
- For example::
-
- -a 0002:02:00.0,max_pools=512
-
- With the above configuration, the driver will set up only 512 mempools for
- the given application to save HW resources.
-
-.. note::
-
- Since this configuration is per application, the end user needs to
- provide ``max_pools`` parameter to the first PCIe device probed by the given
- application.
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:02:00.0,npa_lock_mask=0xf
-
-Debugging Options
-~~~~~~~~~~~~~~~~~
-
-.. _table_octeontx2_mempool_debug_options:
-
-.. table:: OCTEON TX2 mempool debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | NPA | --log-level='pmd\.mempool.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
-
-Standalone mempool device
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
- The ``usertools/dpdk-devbind.py`` script shall enumerate all the mempool devices
- available in the system. In order to avoid, the end user to bind the mempool
- device prior to use ethdev and/or eventdev device, the respective driver
- configures an NPA LF and attach to the first probed ethdev or eventdev device.
- In case, if end user need to run mempool as a standalone device
- (without ethdev or eventdev), end user needs to bind a mempool device using
- ``usertools/dpdk-devbind.py``
-
- Example command to run ``mempool_autotest`` test with standalone OCTEONTX2 NPA device::
-
- echo "mempool_autotest" | <build_dir>/app/test/dpdk-test -c 0xf0 --mbuf-pool-ops-name="octeontx2_npa"
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 84f9865654..2119ba51c8 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -178,7 +178,7 @@ Runtime Config Options
* ``rss_adder<7:0> = flow_tag<7:0>``
Latter one aligns with standard NIC behavior vs former one is a legacy
- RSS adder scheme used in OCTEON TX2 products.
+ RSS adder scheme used in OCTEON 9 products.
By default, the driver runs in the latter mode.
Setting this flag to 1 to select the legacy mode.
@@ -291,7 +291,7 @@ Limitations
The OCTEON CN9K/CN10K SoC family NIC has inbuilt HW assisted external mempool manager.
``net_cnxk`` PMD only works with ``mempool_cnxk`` mempool handler
as it is performance wise most effective way for packet allocation and Tx buffer
-recycling on OCTEON TX2 SoC platform.
+recycling on OCTEON 9 SoC platform.
CRC stripping
~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
deleted file mode 100644
index bf0c2890f2..0000000000
--- a/doc/guides/nics/features/octeontx2.ini
+++ /dev/null
@@ -1,97 +0,0 @@
-;
-; Supported features of the 'octeontx2' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Rx interrupt = Y
-Lock-free Tx queue = Y
-SR-IOV = Y
-Multiprocess aware = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-MTU update = Y
-TSO = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-Inline protocol = Y
-VLAN filter = Y
-Flow control = Y
-Rate limitation = Y
-Scattered Rx = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Timesync = Y
-Timestamp offload = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Stats per queue = Y
-Extended stats = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
-
-[rte_flow items]
-any = Y
-arp_eth_ipv4 = Y
-esp = Y
-eth = Y
-e_tag = Y
-geneve = Y
-gre = Y
-gre_key = Y
-gtpc = Y
-gtpu = Y
-higig2 = Y
-icmp = Y
-ipv4 = Y
-ipv6 = Y
-ipv6_ext = Y
-mpls = Y
-nvgre = Y
-raw = Y
-sctp = Y
-tcp = Y
-udp = Y
-vlan = Y
-vxlan = Y
-vxlan_gpe = Y
-
-[rte_flow actions]
-count = Y
-drop = Y
-flag = Y
-mark = Y
-of_pop_vlan = Y
-of_push_vlan = Y
-of_set_vlan_pcp = Y
-of_set_vlan_vid = Y
-pf = Y
-port_id = Y
-port_representor = Y
-queue = Y
-rss = Y
-security = Y
-vf = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
deleted file mode 100644
index c405db7cf9..0000000000
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ /dev/null
@@ -1,48 +0,0 @@
-;
-; Supported features of the 'octeontx2_vec' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Lock-free Tx queue = Y
-SR-IOV = Y
-Multiprocess aware = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-MTU update = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-VLAN filter = Y
-Flow control = Y
-Rate limitation = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Extended stats = Y
-Stats per queue = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
deleted file mode 100644
index 5ac7a49a5c..0000000000
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ /dev/null
@@ -1,45 +0,0 @@
-;
-; Supported features of the 'octeontx2_vf' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Lock-free Tx queue = Y
-Multiprocess aware = Y
-Rx interrupt = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-TSO = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-Inline protocol = Y
-VLAN filter = Y
-Rate limitation = Y
-Scattered Rx = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Extended stats = Y
-Stats per queue = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 1c94caccea..f48e9f815c 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -52,7 +52,6 @@ Network Interface Controller Drivers
ngbe
null
octeontx
- octeontx2
octeontx_ep
pfe
qede
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
deleted file mode 100644
index 4ce067f2c5..0000000000
--- a/doc/guides/nics/octeontx2.rst
+++ /dev/null
@@ -1,465 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(C) 2019 Marvell International Ltd.
-
-OCTEON TX2 Poll Mode driver
-===========================
-
-The OCTEON TX2 ETHDEV PMD (**librte_net_octeontx2**) provides poll mode ethdev
-driver support for the inbuilt network device found in **Marvell OCTEON TX2**
-SoC family as well as for their virtual functions (VF) in SR-IOV context.
-
-More information can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
-
-Features
---------
-
-Features of the OCTEON TX2 Ethdev PMD are:
-
-- Packet type information
-- Promiscuous mode
-- Jumbo frames
-- SR-IOV VF
-- Lock-free Tx queue
-- Multiple queues for TX and RX
-- Receiver Side Scaling (RSS)
-- MAC/VLAN filtering
-- Multicast MAC filtering
-- Generic flow API
-- Inner and Outer Checksum offload
-- VLAN/QinQ stripping and insertion
-- Port hardware statistics
-- Link state information
-- Link flow control
-- MTU update
-- Scatter-Gather IO support
-- Vector Poll mode driver
-- Debug utilities - Context dump and error interrupt support
-- IEEE1588 timestamping
-- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
-- Support Rx interrupt
-- Inline IPsec processing support
-- :ref:`Traffic Management API <otx2_tmapi>`
-
-Prerequisites
--------------
-
-See :doc:`../platform/octeontx2` for setup information.
-
-
-Driver compilation and testing
-------------------------------
-
-Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
-for details.
-
-#. Running testpmd:
-
- Follow instructions available in the document
- :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
- to run testpmd.
-
- Example output:
-
- .. code-block:: console
-
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
- EAL: Detected 24 lcore(s)
- EAL: Detected 1 NUMA nodes
- EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
- EAL: No available hugepages reported in hugepages-2048kB
- EAL: Probing VFIO support...
- EAL: VFIO support initialized
- EAL: PCI device 0002:02:00.0 on NUMA socket 0
- EAL: probe driver: 177d:a063 net_octeontx2
- EAL: using IOMMU type 1 (Type 1)
- testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
- testpmd: preferred mempool ops selected: octeontx2_npa
- Configuring Port 0 (socket 0)
- PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex
-
- Port 0: link state change event
- Port 0: 36:10:66:88:7A:57
- Checking link statuses...
- Done
- No commandline core given, start packet forwarding
- io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
- Logical Core 9 (socket 0) forwards packets on 1 streams:
- RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
-
- io packet forwarding packets/burst=32
- nb forwarding cores=1 - nb forwarding ports=1
- port 0: RX queue number: 1 Tx queue number: 1
- Rx offloads=0x0 Tx offloads=0x10000
- RX queue: 0
- RX desc=512 - RX free threshold=0
- RX threshold registers: pthresh=0 hthresh=0 wthresh=0
- RX Offloads=0x0
- TX queue: 0
- TX desc=512 - TX free threshold=0
- TX threshold registers: pthresh=0 hthresh=0 wthresh=0
- TX offloads=0x10000 - TX RS bit threshold=0
- Press enter to exit
-
-Runtime Config Options
-----------------------
-
-- ``Rx&Tx scalar mode enable`` (default ``0``)
-
- Ethdev supports both scalar and vector mode, it may be selected at runtime
- using ``scalar_enable`` ``devargs`` parameter.
-
-- ``RSS reta size`` (default ``64``)
-
- RSS redirection table size may be configured during runtime using ``reta_size``
- ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,reta_size=256
-
- With the above configuration, reta table of size 256 is populated.
-
-- ``Flow priority levels`` (default ``3``)
-
- RTE Flow priority levels can be configured during runtime using
- ``flow_max_priority`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,flow_max_priority=10
-
- With the above configuration, priority level was set to 10 (0-9). Max
- priority level supported is 32.
-
-- ``Reserve Flow entries`` (default ``8``)
-
- RTE flow entries can be pre allocated and the size of pre allocation can be
- selected runtime using ``flow_prealloc_size`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,flow_prealloc_size=4
-
- With the above configuration, pre alloc size was set to 4. Max pre alloc
- size supported is 32.
-
-- ``Max SQB buffer count`` (default ``512``)
-
- Send queue descriptor buffer count may be limited during runtime using
- ``max_sqb_count`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,max_sqb_count=64
-
- With the above configuration, each send queue's descriptor buffer count is
- limited to a maximum of 64 buffers.
-
-- ``Switch header enable`` (default ``none``)
-
- A port can be configured to a specific switch header type by using
- ``switch_header`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,switch_header="higig2"
-
- With the above configuration, higig2 will be enabled on that port and the
- traffic on this port should be higig2 traffic only. Supported switch header
- types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2" and "vlan_exdsa".
-
-- ``RSS tag as XOR`` (default ``0``)
-
- C0 HW revision onward, The HW gives an option to configure the RSS adder as
-
- * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``
-
- * ``rss_adder<7:0> = flow_tag<7:0>``
-
- Latter one aligns with standard NIC behavior vs former one is a legacy
- RSS adder scheme used in OCTEON TX2 products.
-
- By default, the driver runs in the latter mode from C0 HW revision onward.
- Setting this flag to 1 to select the legacy mode.
-
- For example to select the legacy mode(RSS tag adder as XOR)::
-
- -a 0002:02:00.0,tag_as_xor=1
-
-- ``Max SPI for inbound inline IPsec`` (default ``1``)
-
- Max SPI supported for inbound inline IPsec processing can be specified by
- ``ipsec_in_max_spi`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,ipsec_in_max_spi=128
-
- With the above configuration, application can enable inline IPsec processing
- on 128 SAs (SPI 0-127).
-
-- ``Lock Rx contexts in NDC cache``
-
- Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
-
- For example::
-
- -a 0002:02:00.0,lock_rx_ctx=1
-
-- ``Lock Tx contexts in NDC cache``
-
- Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
-
- For example::
-
- -a 0002:02:00.0,lock_tx_ctx=1
-
-.. note::
-
- Above devarg parameters are configurable per device, user needs to pass the
- parameters to all the PCIe devices if application requires to configure on
- all the ethdev ports.
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:02:00.0,npa_lock_mask=0xf
-
-.. _otx2_tmapi:
-
-Traffic Management API
-----------------------
-
-OCTEON TX2 PMD supports generic DPDK Traffic Management API which allows to
-configure the following features:
-
-#. Hierarchical scheduling
-#. Single rate - Two color, Two rate - Three color shaping
-
-Both DWRR and Static Priority(SP) hierarchical scheduling is supported.
-
-Every parent can have atmost 10 SP Children and unlimited DWRR children.
-
-Both PF & VF supports traffic management API with PF supporting 6 levels
-and VF supporting 5 levels of topology.
-
-Limitations
------------
-
-``mempool_octeontx2`` external mempool handler dependency
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The OCTEON TX2 SoC family NIC has inbuilt HW assisted external mempool manager.
-``net_octeontx2`` PMD only works with ``mempool_octeontx2`` mempool handler
-as it is performance wise most effective way for packet allocation and Tx buffer
-recycling on OCTEON TX2 SoC platform.
-
-CRC stripping
-~~~~~~~~~~~~~
-
-The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by
-the host interface irrespective of the offload configuration.
-
-Multicast MAC filtering
-~~~~~~~~~~~~~~~~~~~~~~~
-
-``net_octeontx2`` PMD supports multicast mac filtering feature only on physical
-function devices.
-
-SDP interface support
-~~~~~~~~~~~~~~~~~~~~~
-OCTEON TX2 SDP interface support is limited to PF device, No VF support.
-
-Inline Protocol Processing
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-``net_octeontx2`` PMD doesn't support the following features for packets to be
-inline protocol processed.
-- TSO offload
-- VLAN/QinQ offload
-- Fragmentation
-
-Debugging Options
------------------
-
-.. _table_octeontx2_ethdev_debug_options:
-
-.. table:: OCTEON TX2 ethdev debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
- +---+------------+-------------------------------------------------------+
-
-RTE Flow Support
-----------------
-
-The OCTEON TX2 SoC family NIC has support for the following patterns and
-actions.
-
-Patterns:
-
-.. _table_octeontx2_supported_flow_item_types:
-
-.. table:: Item types
-
- +----+--------------------------------+
- | # | Pattern Type |
- +====+================================+
- | 1 | RTE_FLOW_ITEM_TYPE_ETH |
- +----+--------------------------------+
- | 2 | RTE_FLOW_ITEM_TYPE_VLAN |
- +----+--------------------------------+
- | 3 | RTE_FLOW_ITEM_TYPE_E_TAG |
- +----+--------------------------------+
- | 4 | RTE_FLOW_ITEM_TYPE_IPV4 |
- +----+--------------------------------+
- | 5 | RTE_FLOW_ITEM_TYPE_IPV6 |
- +----+--------------------------------+
- | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
- +----+--------------------------------+
- | 7 | RTE_FLOW_ITEM_TYPE_MPLS |
- +----+--------------------------------+
- | 8 | RTE_FLOW_ITEM_TYPE_ICMP |
- +----+--------------------------------+
- | 9 | RTE_FLOW_ITEM_TYPE_UDP |
- +----+--------------------------------+
- | 10 | RTE_FLOW_ITEM_TYPE_TCP |
- +----+--------------------------------+
- | 11 | RTE_FLOW_ITEM_TYPE_SCTP |
- +----+--------------------------------+
- | 12 | RTE_FLOW_ITEM_TYPE_ESP |
- +----+--------------------------------+
- | 13 | RTE_FLOW_ITEM_TYPE_GRE |
- +----+--------------------------------+
- | 14 | RTE_FLOW_ITEM_TYPE_NVGRE |
- +----+--------------------------------+
- | 15 | RTE_FLOW_ITEM_TYPE_VXLAN |
- +----+--------------------------------+
- | 16 | RTE_FLOW_ITEM_TYPE_GTPC |
- +----+--------------------------------+
- | 17 | RTE_FLOW_ITEM_TYPE_GTPU |
- +----+--------------------------------+
- | 18 | RTE_FLOW_ITEM_TYPE_GENEVE |
- +----+--------------------------------+
- | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE |
- +----+--------------------------------+
- | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT |
- +----+--------------------------------+
- | 21 | RTE_FLOW_ITEM_TYPE_VOID |
- +----+--------------------------------+
- | 22 | RTE_FLOW_ITEM_TYPE_ANY |
- +----+--------------------------------+
- | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY |
- +----+--------------------------------+
- | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2 |
- +----+--------------------------------+
- | 25 | RTE_FLOW_ITEM_TYPE_RAW |
- +----+--------------------------------+
-
-.. note::
-
- ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing
- bits in the GRE header are equal to 0.
-
-Actions:
-
-.. _table_octeontx2_supported_ingress_action_types:
-
-.. table:: Ingress action types
-
- +----+-----------------------------------------+
- | # | Action Type |
- +====+=========================================+
- | 1 | RTE_FLOW_ACTION_TYPE_VOID |
- +----+-----------------------------------------+
- | 2 | RTE_FLOW_ACTION_TYPE_MARK |
- +----+-----------------------------------------+
- | 3 | RTE_FLOW_ACTION_TYPE_FLAG |
- +----+-----------------------------------------+
- | 4 | RTE_FLOW_ACTION_TYPE_COUNT |
- +----+-----------------------------------------+
- | 5 | RTE_FLOW_ACTION_TYPE_DROP |
- +----+-----------------------------------------+
- | 6 | RTE_FLOW_ACTION_TYPE_QUEUE |
- +----+-----------------------------------------+
- | 7 | RTE_FLOW_ACTION_TYPE_RSS |
- +----+-----------------------------------------+
- | 8 | RTE_FLOW_ACTION_TYPE_SECURITY |
- +----+-----------------------------------------+
- | 9 | RTE_FLOW_ACTION_TYPE_PF |
- +----+-----------------------------------------+
- | 10 | RTE_FLOW_ACTION_TYPE_VF |
- +----+-----------------------------------------+
- | 11 | RTE_FLOW_ACTION_TYPE_OF_POP_VLAN |
- +----+-----------------------------------------+
- | 12 | RTE_FLOW_ACTION_TYPE_PORT_ID |
- +----+-----------------------------------------+
- | 13 | RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR |
- +----+-----------------------------------------+
-
-.. note::
-
-   ``RTE_FLOW_ACTION_TYPE_PORT_ID`` and ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR``
-   are supported only between a PF and its VFs.
-
-.. _table_octeontx2_supported_egress_action_types:
-
-.. table:: Egress action types
-
- +----+-----------------------------------------+
- | # | Action Type |
- +====+=========================================+
- | 1 | RTE_FLOW_ACTION_TYPE_COUNT |
- +----+-----------------------------------------+
- | 2 | RTE_FLOW_ACTION_TYPE_DROP |
- +----+-----------------------------------------+
- | 3 | RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN |
- +----+-----------------------------------------+
- | 4 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID |
- +----+-----------------------------------------+
- | 5 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP |
- +----+-----------------------------------------+
-
-Custom protocols supported in RTE Flow
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``RTE_FLOW_ITEM_TYPE_RAW`` can be used to parse the following custom protocols.
-
-* ``vlan_exdsa`` and ``exdsa`` can be parsed at L2 level.
-* ``NGIO`` can be parsed at L3 level.
-
-For ``vlan_exdsa`` and ``exdsa``, the port must be configured with the
-respective switch header.
-
-For example::
-
- -a 0002:02:00.0,switch_header="vlan_exdsa"
-
-The following fields of ``struct rte_flow_item_raw`` are used to specify the
-pattern:
-
-- ``relative``: selects the layer at which parsing is done.
-
- - 0 for ``exdsa`` and ``vlan_exdsa``.
-
- - 1 for ``NGIO``.
-
-- ``offset``: the offset in the header where the pattern should be matched.
-- ``length``: the length of the pattern.
-- ``pattern``: the pattern as a byte string.
-
-Example usage in testpmd::
-
-   ./dpdk-testpmd -c 3 -a 0002:02:00.0,switch_header=exdsa -- -i \
- --rx-offloads=0x00080000 --rxq 8 --txq 8
- testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
- spec ab pattern mask ab offset is 4 / end actions queue index 1 / end
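
For reference, a minimal C sketch of the equivalent rule built directly with
the generic rte_flow API (``create_exdsa_raw_rule`` is a hypothetical helper
name, not part of the driver; error handling is elided)::

    #include <stdint.h>
    #include <rte_flow.h>

    /* Mirror the testpmd rule above: match byte 0xab at offset 4, parsed
     * at L2 (relative = 0), and steer matching packets to Rx queue 1. */
    static struct rte_flow *
    create_exdsa_raw_rule(uint16_t port_id, struct rte_flow_error *error)
    {
        static const uint8_t raw_pattern[] = { 0xab };
        const struct rte_flow_attr attr = { .ingress = 1 };
        const struct rte_flow_item_raw raw_spec = {
            .relative = 0,                 /* 0: exdsa/vlan_exdsa (L2) */
            .offset = 4,                   /* byte offset of the match */
            .length = sizeof(raw_pattern), /* pattern length in bytes */
            .pattern = raw_pattern,
        };
        const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            /* NULL mask: the default RAW item mask applies; the testpmd
             * command above supplies an explicit byte mask instead. */
            { .type = RTE_FLOW_ITEM_TYPE_RAW, .spec = &raw_spec },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        const struct rte_flow_action_queue queue = { .index = 1 };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        return rte_flow_create(port_id, &attr, pattern, actions, error);
    }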
diff --git a/doc/guides/nics/octeontx_ep.rst b/doc/guides/nics/octeontx_ep.rst
index b512ccfdab..2ec8a034b5 100644
--- a/doc/guides/nics/octeontx_ep.rst
+++ b/doc/guides/nics/octeontx_ep.rst
@@ -5,7 +5,7 @@ OCTEON TX EP Poll Mode driver
=============================
The OCTEON TX EP ETHDEV PMD (**librte_pmd_octeontx_ep**) provides poll mode
-ethdev driver support for the virtual functions (VF) of **Marvell OCTEON TX2**
+ethdev driver support for the virtual functions (VF) of **Marvell OCTEON 9**
and **Cavium OCTEON TX** families of adapters in SR-IOV context.
More information can be found at `Marvell Official Website
@@ -24,4 +24,4 @@ must be installed separately:
allocates resources such as number of VFs, input/output queues for itself and
the number of i/o queues each VF can use.
-See :doc:`../platform/octeontx2` for SDP interface information which provides PCIe endpoint support for a remote host.
+See :doc:`../platform/cnxk` for information on the SDP interface, which provides PCIe endpoint support for a remote host.
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index 5213df3ccd..97e38c868c 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -13,6 +13,9 @@ More information about CN9K and CN10K SoC can be found at `Marvell Official Webs
Supported OCTEON cnxk SoCs
--------------------------
+- CN93xx
+- CN96xx
+- CN98xx
- CN106xx
- CNF105xx
@@ -583,6 +586,15 @@ Cross Compilation
Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
+CN9K:
+
+.. code-block:: console
+
+ meson build --cross-file config/arm/arm64_cn9k_linux_gcc
+ ninja -C build
+
+CN10K:
+
.. code-block:: console
meson build --cross-file config/arm/arm64_cn10k_linux_gcc
diff --git a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg b/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
deleted file mode 100644
index ecd575947a..0000000000
--- a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
+++ /dev/null
@@ -1,2804 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-
-[... ~2,800 lines of Inkscape-generated SVG markup for the deleted
-octeontx2_packet_flow_hw_accelerators.svg diagram (OCTEON TX2 packet flow
-through hardware accelerators) elided ...]
- x="605.17358"
- y="331.37781" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3"
- width="27.798103"
- height="21.434149"
- x="325.80197"
- y="117.21037" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8"
- width="27.798103"
- height="21.434149"
- x="325.2959"
- y="140.20857" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9"
- width="27.798103"
- height="21.434149"
- x="325.2959"
- y="164.20857" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5"
- width="27.798103"
- height="21.434149"
- x="356.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1"
- width="27.798103"
- height="21.434149"
- x="355.86447"
- y="140.38893" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2"
- width="27.798103"
- height="21.434149"
- x="355.86447"
- y="164.38893" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5"
- width="27.798103"
- height="21.434149"
- x="386.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9"
- width="27.798103"
- height="21.434149"
- x="385.86447"
- y="140.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6"
- width="27.798103"
- height="21.434149"
- x="385.86447"
- y="164.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-9"
- width="27.798103"
- height="21.434149"
- x="416.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-3"
- width="27.798103"
- height="21.434149"
- x="415.86447"
- y="140.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8"
- width="27.798103"
- height="21.434149"
- x="415.86447"
- y="164.38896" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-5"
- width="27.798103"
- height="21.434149"
- x="324.61139"
- y="187.85849" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-0"
- width="27.798103"
- height="21.434149"
- x="355.17996"
- y="188.03886" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-0"
- width="27.798103"
- height="21.434149"
- x="385.17996"
- y="188.03888" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-4"
- width="27.798103"
- height="21.434149"
- x="415.17996"
- y="188.03889" />
- <rect
- style="opacity:1;fill:#d7eef4;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.31139579;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-5"
- width="125.8186"
- height="100.36277"
- x="452.24075"
- y="208.56764" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-9"
- width="27.798103"
- height="21.434149"
- x="456.16949"
- y="213.05098" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-8"
- width="27.798103"
- height="21.434149"
- x="455.66342"
- y="236.04919" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-55"
- width="27.798103"
- height="21.434149"
- x="455.66342"
- y="260.04919" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-7"
- width="27.798103"
- height="21.434149"
- x="486.73807"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-5"
- width="27.798103"
- height="21.434149"
- x="486.23199"
- y="236.22954" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-3"
- width="27.798103"
- height="21.434149"
- x="486.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-2"
- width="27.798103"
- height="21.434149"
- x="516.73804"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-5"
- width="27.798103"
- height="21.434149"
- x="516.23199"
- y="236.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-1"
- width="27.798103"
- height="21.434149"
- x="516.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-9-6"
- width="27.798103"
- height="21.434149"
- x="546.73804"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-3-1"
- width="27.798103"
- height="21.434149"
- x="546.23199"
- y="236.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-7"
- width="27.798103"
- height="21.434149"
- x="546.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-5-1"
- width="27.798103"
- height="21.434149"
- x="454.97891"
- y="283.6991" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-0-6"
- width="27.798103"
- height="21.434149"
- x="485.54749"
- y="283.87946" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-0-7"
- width="27.798103"
- height="21.434149"
- x="515.54749"
- y="283.87949" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-4-2"
- width="27.798103"
- height="21.434149"
- x="545.54749"
- y="283.87952" />
- <g
- id="g5089"
- transform="matrix(0.7206312,0,0,1.0073979,12.37404,-312.02679)"
- style="fill:#ff8080">
- <path
- inkscape:connector-curvature="0"
- d="m 64.439519,501.23542 v 5.43455 h 45.917801 v -5.43455 z"
- style="opacity:1;fill:#ff8080;fill-opacity:1;stroke:#6ba6fd;stroke-width:1.09656608;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:fill markers stroke"
- id="rect4455" />
- <path
- inkscape:connector-curvature="0"
- id="path5083"
- d="m 108.30535,494.82846 c 13.96414,8.6951 13.96414,8.40526 13.96414,8.40526 l -12.46798,9.85445 z"
- style="fill:#ff8080;stroke:#000000;stroke-width:0.53767502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
- </g>
- <g
- id="g5089-4"
- transform="matrix(-0.6745281,0,0,0.97266112,143.12774,-266.3349)"
- style="fill:#000080;fill-opacity:1">
- <path
- inkscape:connector-curvature="0"
- d="m 64.439519,501.23542 v 5.43455 h 45.917801 v -5.43455 z"
- style="opacity:1;fill:#000080;fill-opacity:1;stroke:#6ba6fd;stroke-width:1.09656608;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:fill markers stroke"
- id="rect4455-9" />
- <path
- inkscape:connector-curvature="0"
- id="path5083-2"
- d="m 108.30535,494.82846 c 13.96414,8.6951 13.96414,8.40526 13.96414,8.40526 l -12.46798,9.85445 z"
- style="fill:#000080;stroke:#000000;stroke-width:0.53767502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;fill-opacity:1" />
- </g>
- <flowRoot
- xml:space="preserve"
- id="flowRoot5112"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(52.199711,162.55901)"><flowRegion
- id="flowRegion5114"><rect
- id="rect5116"
- width="28.991377"
- height="19.79899"
- x="22.627417"
- y="64.897125" /></flowRegion><flowPara
- id="flowPara5118">Tx</flowPara></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot5112-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(49.878465,112.26812)"><flowRegion
- id="flowRegion5114-7"><rect
- id="rect5116-7"
- width="28.991377"
- height="19.79899"
- x="22.627417"
- y="64.897125" /></flowRegion><flowPara
- id="flowPara5118-5">Rx</flowPara></flowRoot> <path
- style="fill:none;stroke:#f60300;stroke-width:0.783;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:0.783, 0.78300000000000003;stroke-dashoffset:0;marker-start:url(#Arrow1Sstart);marker-end:url(#TriangleOutS)"
- d="m 116.81066,179.28348 v -11.31903 l -0.37893,-12.93605 0.37893,-5.25526 3.03134,-5.25526 4.16811,-2.82976 8.3362,-1.61701 h 7.19945 l 7.19946,2.02126 3.03135,2.02126 0.37892,2.02125 -0.37892,3.23401 -0.37892,7.27652 -0.37892,8.48927 -0.37892,14.55304"
- id="path8433"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="104.04285"
- y="144.86398"
- id="text9071"><tspan
- sodipodi:role="line"
- id="tspan9069"
- x="104.04285"
- y="144.86398"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333333px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">HW loop back device</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="59.542858"
- y="53.676483"
- id="text9621"><tspan
- sodipodi:role="line"
- id="tspan9619"
- x="59.542858"
- y="65.840889" /></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7-2-7-8-7-2-4-3-9-0-2-9-5-6-7-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="matrix(0.57822568,0,0,0.72506311,454.1297,247.6848)"><flowRegion
- id="flowRegion1855-0-1-3-66-99-9-2-5-4-1-1-1-4-0-5-4"><rect
- id="rect1857-5-1-5-2-6-1-4-9-3-8-1-8-5-7-9-1"
- width="162.09244"
- height="78.764809"
- x="120.20815"
- y="120.75856" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#5500d4"
- id="flowPara9723" /></flowRoot> <path
- style="fill:none;stroke:#fe0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow2Mend)"
- d="m 181.60025,194.22211 12.72792,-7.07106 14.14214,-2.82843 12.02081,0.70711 h 1.41422 v 0"
- id="path9797"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#marker10821)"
- d="m 179.47893,193.51501 3.53554,-14.14214 5.65685,-12.72792 16.97056,-9.19239 8.48528,-9.19238 14.84924,-7.77818 24.04163,-8.48528 18.38478,-6.36396 38.89087,-2.82843 h 12.02082 l -2.12132,-0.7071"
- id="path10453"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:0.70021206;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.70021208, 0.70021208;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3)"
- d="m 299.68795,188.0612 7.97521,-5.53298 8.86135,-2.2132 7.53214,0.5533 h 0.88614 v 0"
- id="path9797-9"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:0.96708673;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.96708673, 0.96708673;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3-1)"
- d="m 300.49277,174.25976 7.49033,-11.23756 8.32259,-4.49504 7.07419,1.12376 h 0.83227 v 0"
- id="path9797-9-7"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#marker12747)"
- d="m 299.68708,196.34344 9.19239,7.77817 7.07107,1.41421 h 4.94974 v 0"
- id="path12737"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:url(#linearGradient14808);stroke-width:4.66056013;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:4.66056002, 4.66056002;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Send)"
- d="m 447.95767,168.30181 c 119.99171,0 119.99171,0 119.99171,0"
- id="path13236"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#808080;stroke-width:0.96708673;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.96708673, 0.96708673000000001;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3-1-6)"
- d="m 529.56098,142.71226 7.49033,-11.23756 8.32259,-4.49504 7.07419,1.12376 h 0.83227 v 0"
- id="path9797-9-7-3"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mend)"
- d="m 612.93538,222.50639 -5.65686,12.72792 -14.84924,3.53553 -14.14213,0.70711"
- id="path16128"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow1Mend);stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0"
- d="m 624.95619,220.38507 -3.53553,13.43502 -12.72792,14.84925 -9.19239,5.65685 -19.09188,2.82843 -1.41422,-0.70711 h -1.41421"
- id="path16130"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow1Mend);stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0"
- d="m 635.56279,221.09217 -7.77817,33.94113 -4.24264,6.36396 -8.48528,3.53553 -10.6066,4.94975 -19.09189,5.65685 -6.36396,3.53554"
- id="path16132"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1.01083219;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.01083222, 1.01083221999999995;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-53)"
- d="m 456.03282,270.85761 -4.96024,14.83162 -13.02062,4.11988 -12.40058,0.82399"
- id="path16128-3"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:0.80101544;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.80101541, 0.80101540999999998;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-99)"
- d="m 341.29831,266.70565 -6.88826,6.70663 -18.08168,1.86296 -17.22065,0.37258"
- id="path16128-6"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00faf5;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mend)"
- d="m 219.78402,264.93279 -6.36396,-9.89949 -3.53554,-16.26346 -7.77817,-8.48528 -8.48528,-4.94975 -4.94975,-2.82842"
- id="path17144"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00db00;stroke-width:1.4;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1.4, 1.39999999999999991;stroke-dashoffset:0;marker-end:url(#marker17156);marker-start:url(#marker17550)"
- d="m 651.11914,221.09217 -7.07107,31.81981 -17.67766,34.64823 -21.21321,26.87005 -80.61017,1.41422 -86.97413,1.41421 -79.90306,-3.53553 -52.3259,1.41421 -24.04163,10.6066 -2.82843,1.41422"
- id="path17146"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#000000;stroke-width:1.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1.3, 1.30000000000000004;stroke-dashoffset:0;marker-start:url(#marker18096);marker-end:url(#marker18508)"
- d="M 659.60442,221.09217 C 656.776,327.86529 656.776,328.5724 656.776,328.5724"
- id="path18086"
- inkscape:connector-curvature="0" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7-2-7-8-7-2"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="matrix(0.57822568,0,0,0.72506311,137.7802,161.1139)"><flowRegion
- id="flowRegion1855-0-1-3-66-99-9"><rect
- id="rect1857-5-1-5-2-6-1"
- width="174.19844"
- height="91.867104"
- x="120.20815"
- y="120.75856" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#5500d4"
- id="flowPara9188-8-4" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="155.96185"
- y="220.07472"
- id="text9071-6"><tspan
- sodipodi:role="line"
- x="158.29518"
- y="220.07472"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2100"> <tspan
- style="fill:#0000ff"
- id="tspan2327">Ethdev Ports </tspan></tspan><tspan
- sodipodi:role="line"
- x="155.96185"
- y="236.74139"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104">(NIX)</tspan></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot2106"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2108"><rect
- id="rect2110"
- width="42.1875"
- height="28.125"
- x="178.125"
- y="71.155365" /></flowRegion><flowPara
- id="flowPara2112" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2114"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2116"><rect
- id="rect2118"
- width="38.28125"
- height="28.90625"
- x="196.09375"
- y="74.280365" /></flowRegion><flowPara
- id="flowPara2120" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2122"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2124"><rect
- id="rect2126"
- width="39.0625"
- height="23.4375"
- x="186.71875"
- y="153.96786" /></flowRegion><flowPara
- id="flowPara2128" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="262.1366"
- y="172.08614"
- id="text9071-6-4"><tspan
- sodipodi:role="line"
- x="264.46994"
- y="172.08614"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0">Ingress </tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="188.75281"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176">Classification</tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="205.41946"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180">(NPC)</tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="222.08614"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178" /><tspan
- sodipodi:role="line"
- x="262.1366"
- y="238.75281"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="261.26727"
- y="254.46307"
- id="text9071-6-4-9"><tspan
- sodipodi:role="line"
- x="263.60062"
- y="254.46307"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-0">Egress </tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="271.12973"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176-8">Classification</tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="287.79642"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180-9">(NPC)</tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="304.46307"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-3" /><tspan
- sodipodi:role="line"
- x="261.26727"
- y="321.12973"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2174-7" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="362.7016"
- y="111.81297"
- id="text9071-4"><tspan
- sodipodi:role="line"
- id="tspan9069-8"
- x="362.7016"
- y="111.81297"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Rx Queues</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="488.21777"
- y="207.21898"
- id="text9071-4-3"><tspan
- sodipodi:role="line"
- id="tspan9069-8-8"
- x="488.21777"
- y="207.21898"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Tx Queues</tspan></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot2311"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2313"><rect
- id="rect2315"
- width="49.21875"
- height="41.40625"
- x="195.3125"
- y="68.811615" /></flowRegion><flowPara
- id="flowPara2317" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2319"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2321"><rect
- id="rect2323"
- width="40.625"
- height="39.0625"
- x="196.09375"
- y="69.592865" /></flowRegion><flowPara
- id="flowPara2325" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="382.20477"
- y="263.74432"
- id="text9071-6-4-6"><tspan
- sodipodi:role="line"
- x="382.20477"
- y="263.74432"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-9">Egress</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="280.41098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176-3">Traffic Manager</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="297.07767"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180-1">(NIX)</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="313.74432"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-6" /><tspan
- sodipodi:role="line"
- x="382.20477"
- y="330.41098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174-8" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="500.98602"
- y="154.02556"
- id="text9071-6-4-0"><tspan
- sodipodi:role="line"
- x="503.31937"
- y="154.02556"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-97">Scheduler </tspan><tspan
- sodipodi:role="line"
- x="500.98602"
- y="170.69223"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2389" /><tspan
- sodipodi:role="line"
- x="500.98602"
- y="187.35889"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2391">SSO</tspan><tspan
- sodipodi:role="line"
- x="500.98602"
- y="204.02556"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-60" /><tspan
- sodipodi:role="line"
- x="500.98602"
- y="220.69223"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174-3" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="571.61627"
- y="119.24016"
- id="text9071-4-2"><tspan
- sodipodi:role="line"
- id="tspan9069-8-82"
- x="571.61627"
- y="119.24016"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Supports both poll mode and/or event mode</tspan><tspan
- sodipodi:role="line"
- x="571.61627"
- y="135.90683"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2416">by configuring scheduler</tspan><tspan
- sodipodi:role="line"
- x="571.61627"
- y="152.57349"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2418" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="638.14227"
- y="192.46773"
- id="text9071-6-4-9-2"><tspan
- sodipodi:role="line"
- x="638.14227"
- y="192.46773"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-3-2">ARMv8</tspan><tspan
- sodipodi:role="line"
- x="638.14227"
- y="209.1344"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2499">Cores</tspan><tspan
- sodipodi:role="line"
- x="638.14227"
- y="225.80106"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2174-7-8" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="180.24902"
- y="325.09399"
- id="text9071-4-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-7"
- x="180.24902"
- y="325.09399"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Hardware Libraries</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="487.8916"
- y="325.91599"
- id="text9071-4-1-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-7-1"
- x="487.8916"
- y="325.91599"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Software Libraries</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="81.178604"
- y="350.03149"
- id="text9071-4-18"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83"
- x="81.178604"
- y="350.03149"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Mempool</tspan><tspan
- sodipodi:role="line"
- x="81.178604"
- y="366.69815"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555">(NPA)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="151.09518"
- y="348.77365"
- id="text9071-4-18-9"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-3"
- x="151.09518"
- y="348.77365"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Timer</tspan><tspan
- sodipodi:role="line"
- x="151.09518"
- y="365.44031"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-9">(TIM)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="222.56393"
- y="347.1174"
- id="text9071-4-18-0"><tspan
- sodipodi:role="line"
- x="222.56393"
- y="347.1174"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90">Crypto</tspan><tspan
- sodipodi:role="line"
- x="222.56393"
- y="363.78406"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601">(CPT)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="289.00229"
- y="347.69473"
- id="text9071-4-18-0-5"><tspan
- sodipodi:role="line"
- x="289.00229"
- y="347.69473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90-9">Compress</tspan><tspan
- sodipodi:role="line"
- x="289.00229"
- y="364.36139"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601-6">(ZIP)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="355.50653"
- y="348.60098"
- id="text9071-4-18-0-5-6"><tspan
- sodipodi:role="line"
- x="355.50653"
- y="348.60098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90-9-5">Shared</tspan><tspan
- sodipodi:role="line"
- x="355.50653"
- y="365.26764"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2645">Memory</tspan><tspan
- sodipodi:role="line"
- x="355.50653"
- y="381.93433"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601-6-1" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="430.31393"
- y="356.4924"
- id="text9071-4-18-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-35"
- x="430.31393"
- y="356.4924"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">SW Ring</tspan><tspan
- sodipodi:role="line"
- x="430.31393"
- y="373.15906"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-6" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="569.37646"
- y="341.1799"
- id="text9071-4-18-2"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-4"
- x="569.37646"
- y="341.1799"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">HASH</tspan><tspan
- sodipodi:role="line"
- x="569.37646"
- y="357.84656"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2742">LPM</tspan><tspan
- sodipodi:role="line"
- x="569.37646"
- y="374.51324"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-2">ACL</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="503.75143"
- y="355.02365"
- id="text9071-4-18-2-3"><tspan
- sodipodi:role="line"
- x="503.75143"
- y="355.02365"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2733">Mbuf</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="639.34521"
- y="355.6174"
- id="text9071-4-18-19"><tspan
- sodipodi:role="line"
- x="639.34521"
- y="355.6174"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2771">De(Frag)</tspan></text>
- </g>
-</svg>
diff --git a/doc/guides/platform/img/octeontx2_resource_virtualization.svg b/doc/guides/platform/img/octeontx2_resource_virtualization.svg
deleted file mode 100644
index bf976b52af..0000000000
--- a/doc/guides/platform/img/octeontx2_resource_virtualization.svg
+++ /dev/null
@@ -1,2418 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
-<!--
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2019 Marvell International Ltd.
-#
--->
-
[... remaining SVG markup snipped; the deleted figure depicted OCTEON TX2 resource
virtualization: the Linux AF driver (octeontx2_af, PF0) with NIX AF, NPA AF, SSO AF,
NPC AF, CPT AF and RVU AF blocks and a CGX-FW Interface to CGX-0/CGX-1/CGX-2,
connected over the AF-PF MBOX to Linux Netdev PF and VF drivers (octeontx2_pf,
octeontx2_vf) that expose NIX LF and NPA LF resources per PFx / PFx-VF0, with
PF-VF MBOX and CGX-x LMAC-y links ...]
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="317.66635"
- y="121.26925"
- id="text8323-1-9"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-3"
- x="317.66635"
- y="131.14769"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716" /><tspan
- sodipodi:role="line"
- x="317.66635"
- y="144.6823"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9400"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9402">DPDK</tspan> Ethdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9398">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="158.21692"
- id="tspan8325-2-7"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">driver</tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="171.75154"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9392" /><tspan
- sodipodi:role="line"
- x="317.66635"
- y="185.28616"
- id="tspan8327-7-8"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#782121;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10515">x</tspan><tspan
- style="font-size:8.12077141px;fill:#782121;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1-0">-VF1</tspan></tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="198.82077"
- id="tspan8329-3-3"
- style="stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3"
- width="59.718147"
- height="12.272857"
- x="295.65872"
- y="185.11246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="313.79312"
- y="191.99756"
- id="text5219-26-1-5-7-6-3-0-1-6-1"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="313.79312"
- y="191.99756"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5-8"
- width="59.718147"
- height="12.272857"
- x="297.33408"
- y="115.96765" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="314.65817"
- y="123.62372"
- id="text5219-26-1-5-7-6-3-0-1-4-7-9"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="314.65817"
- y="123.62372"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0-9"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mstart);marker-start:url(#Arrow1Mstart)"
- d="m 254.54285,205.17648 c 1,29 1,28.5 1,28.5"
- id="path9405"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-1);marker-end:url(#Arrow1Mstart-1)"
- d="m 324.42292,203.92589 c 1,29 1,28.5 1,28.5"
- id="path9405-3"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="408.28308"
- y="265.83011"
- id="text8323-7"><tspan
- sodipodi:role="line"
- id="tspan8321-3"
- x="408.28308"
- y="265.83011"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10440">DPDK</tspan> Ethdev <tspan
- style="font-size:10px;fill:#00d4aa;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8343-5">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="282.49677"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-8">driver</tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="299.16345"
- id="tspan8325-5"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /><tspan
- sodipodi:role="line"
- x="408.28308"
- y="315.83011"
- id="tspan8327-1"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#ff0000;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10517">y</tspan></tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="332.49677"
- id="tspan8329-2" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-3"
- width="71.28923"
- height="15.589548"
- x="376.64825"
- y="319.78531" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="410.92075"
- y="318.27411"
- id="text5219-26-1-5-7-6-3-0-1-62"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="410.92075"
- y="318.27411"
- id="tspan5223-10-9-1-6-8-3-1-0-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-2"
- width="71.28923"
- height="15.589548"
- x="378.64822"
- y="237.03534" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="411.98596"
- y="238.99095"
- id="text5219-26-1-5-7-6-3-0-1-4-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="411.98596"
- y="238.99095"
- id="tspan5223-10-9-1-6-8-3-1-0-8-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="386.21152"
- y="224.15277"
- id="text8319-7-5"><tspan
- sodipodi:role="line"
- id="tspan8317-7-8"
- x="386.21152"
- y="224.15277"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-48);marker-end:url(#Arrow1Mstart-48)"
- d="m 411.29285,204.33011 c 1,29 1,28.5 1,28.5"
- id="path9405-0"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="520.61176"
- y="265.49265"
- id="text8323-7-8"><tspan
- sodipodi:role="line"
- id="tspan8321-3-3"
- x="520.61176"
- y="265.49265"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff2a2a"
- id="tspan10440-2">DPDK</tspan> Eventdev <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343-5-3">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="282.1593"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345-8-6">driver</tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="298.82599"
- id="tspan8325-5-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle" /><tspan
- sodipodi:role="line"
- x="520.61176"
- y="315.49265"
- id="tspan8327-1-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10519">z</tspan></tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="332.1593"
- id="tspan8329-2-1" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-3-6"
- width="71.28923"
- height="15.589548"
- x="484.97693"
- y="319.44785" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="522.95496"
- y="317.94733"
- id="text5219-26-1-5-7-6-3-0-1-62-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="522.95496"
- y="317.94733"
- id="tspan5223-10-9-1-6-8-3-1-0-4-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">TIM LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-2-8"
- width="71.28923"
- height="15.589548"
- x="486.9769"
- y="236.69788" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="524.0202"
- y="238.66432"
- id="text5219-26-1-5-7-6-3-0-1-4-4-3"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="524.0202"
- y="238.66432"
- id="tspan5223-10-9-1-6-8-3-1-0-8-7-6"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">SSO LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="619.6156"
- y="265.47531"
- id="text8323-7-8-3"><tspan
- sodipodi:role="line"
- id="tspan8321-3-3-1"
- x="619.6156"
- y="265.47531"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"> <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff"
- id="tspan10562">Linux </tspan>Crypto <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343-5-3-7">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="282.14197"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345-8-6-8">driver</tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="298.80865"
- id="tspan8325-5-4-3"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle" /><tspan
- sodipodi:role="line"
- x="619.6156"
- y="315.47531"
- id="tspan8327-1-0-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10560">m</tspan></tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="332.14197"
- id="tspan8329-2-1-9" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3-0"
- width="59.718147"
- height="12.272857"
- x="385.10458"
- y="183.92126" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="403.46997"
- y="190.80957"
- id="text5219-26-1-5-7-6-3-0-1-6-1-5"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="403.46997"
- y="190.80957"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5-8-5"
- width="59.718147"
- height="12.272857"
- x="386.77994"
- y="116.77647" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="404.33502"
- y="124.43062"
- id="text5219-26-1-5-7-6-3-0-1-4-7-9-8"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="404.33502"
- y="124.43062"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0-9-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="402.97598"
- y="143.8235"
- id="text8323-1-7"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-1"
- x="402.97598"
- y="143.8235"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11102">DPDK</tspan> Ethdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9396-1">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="157.35812"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-6-5">driver</tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="170.89275"
- id="tspan8327-7-2"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /><tspan
- sodipodi:role="line"
- x="402.97598"
- y="184.42735"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11106">PF<tspan
- style="fill:#a02c2c;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11110">y</tspan><tspan
- style="font-size:8.12077141px;fill:#a02c2c;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1-2">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="197.96198"
- id="tspan8329-3-4"
- style="stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3-0-0"
- width="59.718147"
- height="12.272857"
- x="596.60461"
- y="185.11246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="615.51703"
- y="191.99774"
- id="text5219-26-1-5-7-6-3-0-1-6-1-5-1"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="615.51703"
- y="191.99774"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5-5-2"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">CPT LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="608.00879"
- y="145.05219"
- id="text8323-1-7-3"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-1-5"
- x="608.00879"
- y="145.05219"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716"><tspan
- id="tspan1793"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff2a2a">DPDK</tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11966"> Crypto </tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#0066ff"
- id="tspan9396-1-1">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="158.58681"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716"
- id="tspan8345-6-5-4">driver</tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="172.12143"
- id="tspan8327-7-2-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716" /><tspan
- sodipodi:role="line"
- x="608.00879"
- y="185.65604"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716"
- id="tspan11106-8">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#c83737"
- id="tspan11172">m</tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;fill:#c83737;stroke-width:0.81207716"
- id="tspan8347-1-2-0">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="199.19066"
- id="tspan8329-3-4-0"
- style="stroke-width:0.81207716" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="603.23218"
- y="224.74855"
- id="text8319-7-5-1"><tspan
- sodipodi:role="line"
- id="tspan8317-7-8-4"
- x="603.23218"
- y="224.74855"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-48-6);marker-end:url(#Arrow1Mstart-48-6)"
- d="m 628.31351,204.92589 c 1,29 1,28.5 1,28.5"
- id="path9405-0-2"
- inkscape:connector-curvature="0" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot11473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(46.542857,100.33361)"><flowRegion
- id="flowRegion11475"><rect
- id="rect11477"
- width="90"
- height="14.5"
- x="426"
- y="26.342873" /></flowRegion><flowPara
- id="flowPara11479">DDDpk</flowPara></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="509.60013"
- y="128.17648"
- id="text11483"><tspan
- sodipodi:role="line"
- id="tspan11481"
- x="511.47513"
- y="128.17648"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544">D<tspan
- style="-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal;fill:#005544"
- id="tspan11962">PDK-APP1 with </tspan></tspan><tspan
- sodipodi:role="line"
- x="511.47513"
- y="144.84315"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11485">one ethdev </tspan><tspan
- sodipodi:role="line"
- x="509.60013"
- y="161.50981"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11491">over Linux PF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="533.54285"
- y="158.17648"
- id="text11489"><tspan
- sodipodi:role="line"
- id="tspan11487"
- x="533.54285"
- y="170.34088" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="518.02197"
- y="179.98117"
- id="text11483-6"><tspan
- sodipodi:role="line"
- id="tspan11481-4"
- x="519.42822"
- y="179.98117"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">DPDK-APP2 with </tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="196.64784"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11485-5">Two ethdevs(PF,VF) ,</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="213.3145"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11517">eventdev, timer adapter and</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="229.98117"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11519"> cryptodev</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="246.64784"
- style="font-size:10.66666698px;text-align:center;text-anchor:middle;fill:#00ffff"
- id="tspan11491-6" /></text>
- <path
- style="fill:#005544;stroke:#00ffff;stroke-width:1.02430511;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.02430516, 4.09722065999999963;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mstart-8)"
- d="m 483.99846,150.16496 -112.95349,13.41069 v 0 l -0.48897,-0.53643 h 0.48897"
- id="path11521"
- inkscape:connector-curvature="0" />
- <path
- style="fill:#ff0000;stroke:#ff5555;stroke-width:1.16440296;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.16440301, 2.32880602999999997;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-0)"
- d="m 545.54814,186.52569 c 26.3521,-76.73875 26.3521,-76.73875 26.3521,-76.73875"
- id="path11523"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.41014698;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0-9-0);marker-end:url(#Arrow1Mend-6-8-3-7)"
- d="m 409.29286,341.50531 v 18.3646"
- id="path7614-2-2-8-2"
- inkscape:connector-curvature="0" />
- <rect
- style="opacity:1;fill:url(#linearGradient6997-8-0);fill-opacity:1;stroke:#695400;stroke-width:1.31599998;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1-4-9"
- width="81.505402"
- height="17.62063"
- x="372.79016"
- y="360.37729" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="380.98218"
- y="371.97293"
- id="text8319-7-7-1"><tspan
- sodipodi:role="line"
- id="tspan8317-7-3-1"
- x="380.98218"
- y="371.97293"
- style="font-size:9.33333302px;line-height:1">CGX-x LMAC-y</tspan></text>
- </g>
-</svg>
diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
index 7614e1a368..2ff91a6018 100644
--- a/doc/guides/platform/index.rst
+++ b/doc/guides/platform/index.rst
@@ -15,4 +15,3 @@ The following are platform specific guides and setup information.
dpaa
dpaa2
octeontx
- octeontx2
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
deleted file mode 100644
index 5ab43abbdd..0000000000
--- a/doc/guides/platform/octeontx2.rst
+++ /dev/null
@@ -1,520 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-Marvell OCTEON TX2 Platform Guide
-=================================
-
-This document gives an overview of the **Marvell OCTEON TX2** RVU H/W block,
-the packet flow, and the procedure to build DPDK on the OCTEON TX2 platform.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Supported OCTEON TX2 SoCs
--------------------------
-
-- CN98xx
-- CN96xx
-- CN93xx
-
-OCTEON TX2 Resource Virtualization Unit architecture
-----------------------------------------------------
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram depicts the
-RVU architecture and a resource provisioning example.
-
-.. _figure_octeontx2_resource_virtualization:
-
-.. figure:: img/octeontx2_resource_virtualization.*
-
- OCTEON TX2 Resource virtualization architecture and provisioning example
-
-
-Resource Virtualization Unit (RVU) on Marvell's OCTEON TX2 SoC maps HW
-resources belonging to the network, crypto and other functional blocks onto
-PCI-compatible physical and virtual functions.
-
-Each functional block has multiple local functions (LFs) for
-provisioning to different PCIe devices. RVU supports multiple PCIe SRIOV
-physical functions (PFs) and virtual functions (VFs).
-
-The :numref:`table_octeontx2_rvu_dpdk_mapping` shows the various local
-functions (LFs) provided by the RVU and their functional mapping to
-DPDK subsystems.
-
-.. _table_octeontx2_rvu_dpdk_mapping:
-
-.. table:: RVU managed functional blocks and their mapping to DPDK subsystems
-
- +---+-----+--------------------------------------------------------------+
- | # | LF | DPDK subsystem mapping |
- +===+=====+==============================================================+
- | 1 | NIX | rte_ethdev, rte_tm, rte_event_eth_[rt]x_adapter, rte_security|
- +---+-----+--------------------------------------------------------------+
- | 2 | NPA | rte_mempool |
- +---+-----+--------------------------------------------------------------+
- | 3 | NPC | rte_flow |
- +---+-----+--------------------------------------------------------------+
- | 4 | CPT | rte_cryptodev, rte_event_crypto_adapter |
- +---+-----+--------------------------------------------------------------+
- | 5 | SSO | rte_eventdev |
- +---+-----+--------------------------------------------------------------+
- | 6 | TIM | rte_event_timer_adapter |
- +---+-----+--------------------------------------------------------------+
- | 7 | LBK | rte_ethdev |
- +---+-----+--------------------------------------------------------------+
- | 8 | DPI | rte_rawdev |
- +---+-----+--------------------------------------------------------------+
- | 9 | SDP | rte_ethdev |
- +---+-----+--------------------------------------------------------------+
- | 10| REE | rte_regexdev |
- +---+-----+--------------------------------------------------------------+
-
-PF0 is called the administrative function (AF) and has exclusive
-privileges to provision the RVU functional blocks' LFs to each PF/VF.
-
-PFs/VFs communicate with the AF via a shared memory region (mailbox). Upon
-receiving requests, the AF performs resource provisioning and other HW
-configuration.
-
-The AF is always attached to the host, but PFs/VFs may be used by the host
-kernel itself, attached to VMs, or attached to userspace applications such as
-DPDK. The AF therefore has to handle provisioning/configuration requests sent
-by any device from any domain.
-
-The AF driver does not receive or process any data.
-It is only a configuration driver used in the control path.
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram also shows a
-resource provisioning example (a binding sketch follows the list) where:
-
-1. PFx and PFx-VF0 are bound to the Linux netdev driver.
-2. PFx-VF1 (ethdev driver) is bound to the first DPDK application.
-3. PFy (ethdev driver), PFy-VF0 (ethdev driver), PFz (eventdev driver) and
-   PFm-VF0 (cryptodev driver) are bound to the second DPDK application.
-
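-As an illustration of this provisioning (a sketch, not part of the original
-guide; the PCI addresses below are hypothetical), the PF/VF devices destined
-for DPDK can be bound to ``vfio-pci`` with the standard ``dpdk-devbind.py``
-tool:
-
-.. code-block:: console
-
- # Bind PFx-VF1 for the first DPDK application
- ./usertools/dpdk-devbind.py --bind vfio-pci 0002:02:00.1
- # Bind PFy, PFy-VF0, PFz and PFm-VF0 for the second DPDK application
- ./usertools/dpdk-devbind.py --bind vfio-pci 0002:03:00.0 0002:03:00.1 0002:0e:00.0 0002:10:00.1
-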
-LBK HW Access
--------------
-
-The Loopback HW Unit (LBK) receives packets from NIX-RX and sends packets back to NIX-TX.
-The loopback block has N channels and contains data buffering that is shared across
-all channels. The LBK HW unit is abstracted using the ethdev subsystem, where PF0's
-VFs are exposed as ethdev devices and odd-even pairs of VFs are tied together;
-that is, packets sent on an odd VF are received on the paired even VF and vice versa.
-This enables a HW-accelerated means of communication between two domains,
-with the even VF bound to the first domain and the odd VF bound to the second domain.
-
-Typical application usage models are:
-
-#. Communication between the Linux kernel and DPDK application.
-#. Exception path from a DPDK application to the Linux kernel, as a SW ``KNI`` replacement.
-#. Communication between two different DPDK applications.
-
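-For example (a sketch; the LBK VF PCI addresses are hypothetical), an odd-even
-VF pair can be exercised with ``dpdk-testpmd``, where packets transmitted on
-one port are received on the other:
-
-.. code-block:: console
-
- ./build/app/dpdk-testpmd -a 0002:01:00.4 -a 0002:01:00.5 -- -i
-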
-SDP interface
--------------
-
-The System DPI Packet Interface unit (SDP) provides PCIe endpoint support for a remote
-host to DMA packets into and out of the OCTEON TX2 SoC. The SDP interface comes alive
-only when the OCTEON TX2 SoC is connected in PCIe endpoint mode. It can be used to
-send/receive packets to/from the remote host machine using the input/output queue pairs
-exposed to it. The SDP interface receives input packets from the remote host via NIX-RX
-and sends packets to the remote host using NIX-TX. The remote host machine needs to use
-the corresponding driver (kernel/user mode) to communicate with the SDP interface on the
-OCTEON TX2 SoC. SDP supports a single PCIe SRIOV physical function (PF) and multiple
-virtual functions (VFs). Users can bind the PF or a VF to use the SDP interface, and it
-will be enumerated as an ethdev port.
-
-The primary use case for SDP is to enable the SmartNIC model. Typical usage models are:
-
-#. Communication channel between remote host and OCTEON TX2 SoC over PCIe.
-#. Transfer packets received from network interface to remote host over PCIe and
- vice-versa.
-
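-For example, once bound to a DPDK-compatible driver, the SDP PF/VF ports can
-be listed with the standard device binding tool (a sketch):
-
-.. code-block:: console
-
- ./usertools/dpdk-devbind.py --status-dev net
-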
-OCTEON TX2 packet flow
-----------------------
-
-The :numref:`figure_octeontx2_packet_flow_hw_accelerators` diagram depicts
-the packet flow on OCTEON TX2 SoC in conjunction with use of various HW accelerators.
-
-.. _figure_octeontx2_packet_flow_hw_accelerators:
-
-.. figure:: img/octeontx2_packet_flow_hw_accelerators.*
-
- OCTEON TX2 packet flow in conjunction with use of HW accelerators
-
-HW Offload Drivers
-------------------
-
-This section lists the dataplane H/W blocks available in the OCTEON TX2 SoC.
-
-#. **Ethdev Driver**
- See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
-
-#. **Mempool Driver**
- See :doc:`../mempool/octeontx2` for NPA mempool driver information.
-
-#. **Event Device Driver**
- See :doc:`../eventdevs/octeontx2` for SSO event device driver information.
-
-#. **Crypto Device Driver**
- See :doc:`../cryptodevs/octeontx2` for CPT crypto device driver information.
-
-Procedure to Setup Platform
----------------------------
-
-There are three main prerequisites for setting up DPDK on an OCTEON TX2
-compatible board:
-
-1. **OCTEON TX2 Linux kernel driver**
-
- The dependent kernel drivers can be obtained from the
- `kernel.org <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/marvell/octeontx2>`_.
-
- Alternatively, the Marvell SDK also provides the required kernel drivers.
-
- The Linux kernel should be configured with the following features enabled:
-
-.. code-block:: console
-
- # 64K pages enabled for better performance
- CONFIG_ARM64_64K_PAGES=y
- CONFIG_ARM64_VA_BITS_48=y
- # huge pages support enabled
- CONFIG_HUGETLBFS=y
- CONFIG_HUGETLB_PAGE=y
- # VFIO enabled with TYPE1 IOMMU at minimum
- CONFIG_VFIO_IOMMU_TYPE1=y
- CONFIG_VFIO_VIRQFD=y
- CONFIG_VFIO=y
- CONFIG_VFIO_NOIOMMU=y
- CONFIG_VFIO_PCI=y
- CONFIG_VFIO_PCI_MMAP=y
- # SMMUv3 driver
- CONFIG_ARM_SMMU_V3=y
- # ARMv8.1 LSE atomics
- CONFIG_ARM64_LSE_ATOMICS=y
- # OCTEONTX2 drivers
- CONFIG_OCTEONTX2_MBOX=y
- CONFIG_OCTEONTX2_AF=y
- # Enable if netdev PF driver required
- CONFIG_OCTEONTX2_PF=y
- # Enable if netdev VF driver required
- CONFIG_OCTEONTX2_VF=y
- CONFIG_CRYPTO_DEV_OCTEONTX2_CPT=y
- # Enable if OCTEONTX2 DMA PF driver required
- CONFIG_OCTEONTX2_DPI_PF=n
-
-2. **ARM64 Linux Tool Chain**
-
- For example, the *aarch64* Linaro Toolchain, which can be obtained from
- `here <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/>`_.
-
- Alternatively, the Marvell SDK also provides a GNU GCC toolchain, which is
- optimized for the OCTEON TX2 CPU.
-
-3. **Root file system**
-
- Any *aarch64*-supporting root file system may be used. For example,
- an Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland, which can be obtained
- from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
-
- Alternatively, the Marvell SDK provides a buildroot-based root file system.
- The SDK includes all the above prerequisites necessary to bring up the OCTEON TX2 board.
-
-- Follow the DPDK :doc:`../linux_gsg/index` to setup the basic DPDK environment.
-
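-As part of that setup, huge pages must be reserved. As a sketch (sizes depend
-on the kernel configuration), with a 64K page kernel the default huge page
-size is 512MB, and pages can be reserved and mounted as follows:
-
-.. code-block:: console
-
- echo 8 > /sys/kernel/mm/hugepages/hugepages-524288kB/nr_hugepages
- mkdir -p /mnt/huge
- mount -t hugetlbfs nodev /mnt/huge
-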
-
-Debugging Options
------------------
-
-.. _table_octeontx2_common_debug_options:
-
-.. table:: OCTEON TX2 common debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | Common | --log-level='pmd\.octeontx2\.base,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | Mailbox | --log-level='pmd\.octeontx2\.mbox,8' |
- +---+------------+-------------------------------------------------------+
-
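-These options can be passed directly on the application command line; for
-example (a sketch, with a hypothetical device address):
-
-.. code-block:: console
-
- ./build/app/dpdk-testpmd -a 0002:02:00.0 --log-level='pmd\.octeontx2\.base,8' -- -i
-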
-Debugfs support
-~~~~~~~~~~~~~~~
-
-The **OCTEON TX2 Linux kernel driver** provides support for dumping RVU block
-context or stats using debugfs.
-
-Enable ``debugfs`` by:
-
-1. Compile the kernel with debugfs enabled, i.e. ``CONFIG_DEBUG_FS=y``.
-2. Boot OCTEON TX2 with the debugfs-enabled kernel.
-3. Verify that ``debugfs`` is mounted by default (``mount | grep -i debugfs``) or mount it manually:
-
-.. code-block:: console
-
- # mount -t debugfs none /sys/kernel/debug
-
-Currently ``debugfs`` supports the following RVU blocks: NIX, NPA, NPC, NDC,
-SSO and CGX.
-
-The file structure under ``/sys/kernel/debug`` is as follows:
-
-.. code-block:: console
-
- octeontx2/
- |-- cgx
- | |-- cgx0
- | | '-- lmac0
- | | '-- stats
- | |-- cgx1
- | | |-- lmac0
- | | | '-- stats
- | | '-- lmac1
- | | '-- stats
- | '-- cgx2
- | '-- lmac0
- | '-- stats
- |-- cpt
- | |-- cpt_engines_info
- | |-- cpt_engines_sts
- | |-- cpt_err_info
- | |-- cpt_lfs_info
- | '-- cpt_pc
- |-- nix
- | |-- cq_ctx
- | |-- ndc_rx_cache
- | |-- ndc_rx_hits_miss
- | |-- ndc_tx_cache
- | |-- ndc_tx_hits_miss
- | |-- qsize
- | |-- rq_ctx
- | |-- sq_ctx
- | '-- tx_stall_hwissue
- |-- npa
- | |-- aura_ctx
- | |-- ndc_cache
- | |-- ndc_hits_miss
- | |-- pool_ctx
- | '-- qsize
- |-- npc
- | |-- mcam_info
- | '-- rx_miss_act_stats
- |-- rsrc_alloc
- '-- sso
- |-- hws
- | '-- sso_hws_info
- '-- hwgrp
- |-- sso_hwgrp_aq_thresh
- |-- sso_hwgrp_iaq_walk
- |-- sso_hwgrp_pc
- |-- sso_hwgrp_free_list_walk
- |-- sso_hwgrp_ient_walk
- '-- sso_hwgrp_taq_walk
-
-RVU block LF allocation:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/rsrc_alloc
-
- pcifunc NPA NIX SSO GROUP SSOWS TIM CPT
- PF1 0 0
- PF4 1
- PF13 0, 1 0, 1 0
-
-CGX example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/cgx/cgx2/lmac0/stats
-
- =======Link Status======
- Link is UP 40000 Mbps
- =======RX_STATS======
- Received packets: 0
- Octets of received packets: 0
- Received PAUSE packets: 0
- Received PAUSE and control packets: 0
- Filtered DMAC0 (NIX-bound) packets: 0
- Filtered DMAC0 (NIX-bound) octets: 0
- Packets dropped due to RX FIFO full: 0
- Octets dropped due to RX FIFO full: 0
- Error packets: 0
- Filtered DMAC1 (NCSI-bound) packets: 0
- Filtered DMAC1 (NCSI-bound) octets: 0
- NCSI-bound packets dropped: 0
- NCSI-bound octets dropped: 0
- =======TX_STATS======
- Packets dropped due to excessive collisions: 0
- Packets dropped due to excessive deferral: 0
- Multiple collisions before successful transmission: 0
- Single collisions before successful transmission: 0
- Total octets sent on the interface: 0
- Total frames sent on the interface: 0
- Packets sent with an octet count < 64: 0
- Packets sent with an octet count == 64: 0
- Packets sent with an octet count of 65-127: 0
- Packets sent with an octet count of 128-255: 0
- Packets sent with an octet count of 256-511: 0
- Packets sent with an octet count of 512-1023: 0
- Packets sent with an octet count of 1024-1518: 0
- Packets sent with an octet count of > 1518: 0
- Packets sent to a broadcast DMAC: 0
- Packets sent to the multicast DMAC: 0
- Packets that experienced transmit underflow and were truncated: 0
- Control/PAUSE packets sent: 0
-
-CPT example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/cpt/cpt_pc
-
- CPT instruction requests 0
- CPT instruction latency 0
- CPT NCB read requests 0
- CPT NCB read latency 0
- CPT read requests caused by UC fills 0
- CPT active cycles pc 1395642
- CPT clock count pc 5579867595493
-
-NIX example usage:
-
-.. code-block:: console
-
- Usage: echo <nixlf> [cq number/all] > /sys/kernel/debug/octeontx2/nix/cq_ctx
- cat /sys/kernel/debug/octeontx2/nix/cq_ctx
- echo 0 0 > /sys/kernel/debug/octeontx2/nix/cq_ctx
- cat /sys/kernel/debug/octeontx2/nix/cq_ctx
-
- =====cq_ctx for nixlf:0 and qidx:0 is=====
- W0: base 158ef1a00
-
- W1: wrptr 0
- W1: avg_con 0
- W1: cint_idx 0
- W1: cq_err 0
- W1: qint_idx 0
- W1: bpid 0
- W1: bp_ena 0
-
- W2: update_time 31043
- W2: avg_level 255
- W2: head 0
- W2: tail 0
-
- W3: cq_err_int_ena 5
- W3: cq_err_int 0
- W3: qsize 4
- W3: caching 1
- W3: substream 0x000
- W3: ena 1
- W3: drop_ena 1
- W3: drop 64
- W3: bp 0
-
-NPA example usage:
-
-.. code-block:: console
-
- Usage: echo <npalf> [pool number/all] > /sys/kernel/debug/octeontx2/npa/pool_ctx
- cat /sys/kernel/debug/octeontx2/npa/pool_ctx
- echo 0 0 > /sys/kernel/debug/octeontx2/npa/pool_ctx
- cat /sys/kernel/debug/octeontx2/npa/pool_ctx
-
- ======POOL : 0=======
- W0: Stack base 1375bff00
- W1: ena 1
- W1: nat_align 1
- W1: stack_caching 1
- W1: stack_way_mask 0
- W1: buf_offset 1
- W1: buf_size 19
- W2: stack_max_pages 24315
- W2: stack_pages 24314
- W3: op_pc 267456
- W4: stack_offset 2
- W4: shift 5
- W4: avg_level 255
- W4: avg_con 0
- W4: fc_ena 0
- W4: fc_stype 0
- W4: fc_hyst_bits 0
- W4: fc_up_crossing 0
- W4: update_time 62993
- W5: fc_addr 0
- W6: ptr_start 1593adf00
- W7: ptr_end 180000000
- W8: err_int 0
- W8: err_int_ena 7
- W8: thresh_int 0
- W8: thresh_int_ena 0
- W8: thresh_up 0
- W8: thresh_qint_idx 0
- W8: err_qint_idx 0
-
-NPC example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/npc/mcam_info
-
- NPC MCAM info:
- RX keywidth : 224bits
- TX keywidth : 224bits
-
- MCAM entries : 2048
- Reserved : 158
- Available : 1890
-
- MCAM counters : 512
- Reserved : 1
- Available : 511
-
-SSO example usage:
-
-.. code-block:: console
-
- Usage: echo [<hws>/all] > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info
- echo 0 > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info
-
- ==================================================
- SSOW HWS[0] Arbitration State 0x0
- SSOW HWS[0] Guest Machine Control 0x0
- SSOW HWS[0] SET[0] Group Mask[0] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[1] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[2] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[3] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[0] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[1] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[2] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[3] 0xffffffffffffffff
- ==================================================
-
-Compile DPDK
-------------
-
-DPDK may be compiled either natively on an OCTEON TX2 platform or cross-compiled on
-an x86-based platform.
-
-Native Compilation
-~~~~~~~~~~~~~~~~~~
-
-.. code-block:: console
-
- meson build
- ninja -C build
-
-Cross Compilation
-~~~~~~~~~~~~~~~~~
-
-Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
-
-.. code-block:: console
-
- meson build --cross-file config/arm/arm64_octeontx2_linux_gcc
- ninja -C build
-
-.. note::
-
- By default, meson cross compilation uses the ``aarch64-linux-gnu-gcc`` toolchain.
- If the Marvell toolchain is available, it can be used instead by overriding the
- c, cpp, ar and strip ``binaries`` attributes with the respective Marvell
- toolchain binaries in the ``config/arm/arm64_octeontx2_linux_gcc`` file.
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5581822d10..4e5b23c53d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,20 +125,3 @@ Deprecation Notices
applications should be updated to use the ``dmadev`` library instead,
with the underlying HW-functionality being provided by the ``ioat`` or
``idxd`` dma drivers
-
-* drivers/octeontx2: remove octeontx2 drivers
-
- With a view to enabling a unified driver for ``octeontx2(cn9k)``/``octeontx3(cn10k)``,
- the ``drivers/octeontx2`` drivers will be removed and replaced with ``drivers/cnxk/``,
- which supports both ``octeontx2(cn9k)`` and ``octeontx3(cn10k)`` SoCs.
- This deprecation notice covers the following actions in the DPDK v22.02 release.
-
- #. Replace ``drivers/common/octeontx2/`` with ``drivers/common/cnxk/``
- #. Replace ``drivers/mempool/octeontx2/`` with ``drivers/mempool/cnxk/``
- #. Replace ``drivers/net/octeontx2/`` with ``drivers/net/cnxk/``
- #. Replace ``drivers/event/octeontx2/`` with ``drivers/event/cnxk/``
- #. Replace ``drivers/crypto/octeontx2/`` with ``drivers/crypto/cnxk/``
- #. Rename ``drivers/regex/octeontx2/`` as ``drivers/regex/cn9k/``
- #. Rename ``config/arm/arm64_octeontx2_linux_gcc`` as ``config/arm/arm64_cn9k_linux_gcc``
-
- The last two actions align the naming convention with the cnxk scheme.
diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index 1a0e6111d7..31fcebdf95 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -152,11 +152,11 @@ New Features
``eventdev Tx adapter``, ``eventdev Timer adapter`` and ``rawdev DMA``
drivers for various HW co-processors available in ``OCTEON TX2`` SoC.
- See :doc:`../platform/octeontx2` and driver information:
+ See ``platform/octeontx2`` and driver information:
- * :doc:`../nics/octeontx2`
- * :doc:`../mempool/octeontx2`
- * :doc:`../eventdevs/octeontx2`
+ * ``nics/octeontx2``
+ * ``mempool/octeontx2``
+ * ``eventdevs/octeontx2``
* ``rawdevs/octeontx2_dma``
* **Introduced the Intel NTB PMD.**
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 302b3e5f37..79f3475ae6 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -192,7 +192,7 @@ New Features
Added a new PMD for hardware crypto offload block on ``OCTEON TX2``
SoC.
- See :doc:`../cryptodevs/octeontx2` for more details
+ See ``cryptodevs/octeontx2`` for more details
* **Updated NXP crypto PMDs for PDCP support.**
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index ce93483291..d3d5ebe4dc 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -157,7 +157,6 @@ The following are the application command-line options:
crypto_mvsam
crypto_null
crypto_octeontx
- crypto_octeontx2
crypto_openssl
crypto_qat
crypto_scheduler
diff --git a/drivers/common/meson.build b/drivers/common/meson.build
index 4acbad60b1..ea261dd70a 100644
--- a/drivers/common/meson.build
+++ b/drivers/common/meson.build
@@ -8,5 +8,4 @@ drivers = [
'iavf',
'mvep',
'octeontx',
- 'octeontx2',
]
diff --git a/drivers/common/octeontx2/hw/otx2_nix.h b/drivers/common/octeontx2/hw/otx2_nix.h
deleted file mode 100644
index e3b68505b7..0000000000
--- a/drivers/common/octeontx2/hw/otx2_nix.h
+++ /dev/null
@@ -1,1391 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NIX_HW_H__
-#define __OTX2_NIX_HW_H__
-
-/* Register offsets */
-
-#define NIX_AF_CFG (0x0ull)
-#define NIX_AF_STATUS (0x10ull)
-#define NIX_AF_NDC_CFG (0x18ull)
-#define NIX_AF_CONST (0x20ull)
-#define NIX_AF_CONST1 (0x28ull)
-#define NIX_AF_CONST2 (0x30ull)
-#define NIX_AF_CONST3 (0x38ull)
-#define NIX_AF_SQ_CONST (0x40ull)
-#define NIX_AF_CQ_CONST (0x48ull)
-#define NIX_AF_RQ_CONST (0x50ull)
-#define NIX_AF_PSE_CONST (0x60ull)
-#define NIX_AF_TL1_CONST (0x70ull)
-#define NIX_AF_TL2_CONST (0x78ull)
-#define NIX_AF_TL3_CONST (0x80ull)
-#define NIX_AF_TL4_CONST (0x88ull)
-#define NIX_AF_MDQ_CONST (0x90ull)
-#define NIX_AF_MC_MIRROR_CONST (0x98ull)
-#define NIX_AF_LSO_CFG (0xa8ull)
-#define NIX_AF_BLK_RST (0xb0ull)
-#define NIX_AF_TX_TSTMP_CFG (0xc0ull)
-#define NIX_AF_RX_CFG (0xd0ull)
-#define NIX_AF_AVG_DELAY (0xe0ull)
-#define NIX_AF_CINT_DELAY (0xf0ull)
-#define NIX_AF_RX_MCAST_BASE (0x100ull)
-#define NIX_AF_RX_MCAST_CFG (0x110ull)
-#define NIX_AF_RX_MCAST_BUF_BASE (0x120ull)
-#define NIX_AF_RX_MCAST_BUF_CFG (0x130ull)
-#define NIX_AF_RX_MIRROR_BUF_BASE (0x140ull)
-#define NIX_AF_RX_MIRROR_BUF_CFG (0x148ull)
-#define NIX_AF_LF_RST (0x150ull)
-#define NIX_AF_GEN_INT (0x160ull)
-#define NIX_AF_GEN_INT_W1S (0x168ull)
-#define NIX_AF_GEN_INT_ENA_W1S (0x170ull)
-#define NIX_AF_GEN_INT_ENA_W1C (0x178ull)
-#define NIX_AF_ERR_INT (0x180ull)
-#define NIX_AF_ERR_INT_W1S (0x188ull)
-#define NIX_AF_ERR_INT_ENA_W1S (0x190ull)
-#define NIX_AF_ERR_INT_ENA_W1C (0x198ull)
-#define NIX_AF_RAS (0x1a0ull)
-#define NIX_AF_RAS_W1S (0x1a8ull)
-#define NIX_AF_RAS_ENA_W1S (0x1b0ull)
-#define NIX_AF_RAS_ENA_W1C (0x1b8ull)
-#define NIX_AF_RVU_INT (0x1c0ull)
-#define NIX_AF_RVU_INT_W1S (0x1c8ull)
-#define NIX_AF_RVU_INT_ENA_W1S (0x1d0ull)
-#define NIX_AF_RVU_INT_ENA_W1C (0x1d8ull)
-#define NIX_AF_TCP_TIMER (0x1e0ull)
-#define NIX_AF_RX_DEF_OL2 (0x200ull)
-#define NIX_AF_RX_DEF_OIP4 (0x210ull)
-#define NIX_AF_RX_DEF_IIP4 (0x220ull)
-#define NIX_AF_RX_DEF_OIP6 (0x230ull)
-#define NIX_AF_RX_DEF_IIP6 (0x240ull)
-#define NIX_AF_RX_DEF_OTCP (0x250ull)
-#define NIX_AF_RX_DEF_ITCP (0x260ull)
-#define NIX_AF_RX_DEF_OUDP (0x270ull)
-#define NIX_AF_RX_DEF_IUDP (0x280ull)
-#define NIX_AF_RX_DEF_OSCTP (0x290ull)
-#define NIX_AF_RX_DEF_ISCTP (0x2a0ull)
-#define NIX_AF_RX_DEF_IPSECX(a) (0x2b0ull | (uint64_t)(a) << 3)
-#define NIX_AF_RX_IPSEC_GEN_CFG (0x300ull)
-#define NIX_AF_RX_CPTX_INST_QSEL(a) (0x320ull | (uint64_t)(a) << 3)
-#define NIX_AF_RX_CPTX_CREDIT(a) (0x360ull | (uint64_t)(a) << 3)
-#define NIX_AF_NDC_RX_SYNC (0x3e0ull)
-#define NIX_AF_NDC_TX_SYNC (0x3f0ull)
-#define NIX_AF_AQ_CFG (0x400ull)
-#define NIX_AF_AQ_BASE (0x410ull)
-#define NIX_AF_AQ_STATUS (0x420ull)
-#define NIX_AF_AQ_DOOR (0x430ull)
-#define NIX_AF_AQ_DONE_WAIT (0x440ull)
-#define NIX_AF_AQ_DONE (0x450ull)
-#define NIX_AF_AQ_DONE_ACK (0x460ull)
-#define NIX_AF_AQ_DONE_TIMER (0x470ull)
-#define NIX_AF_AQ_DONE_ENA_W1S (0x490ull)
-#define NIX_AF_AQ_DONE_ENA_W1C (0x498ull)
-#define NIX_AF_RX_LINKX_CFG(a) (0x540ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_SW_SYNC (0x550ull)
-#define NIX_AF_RX_LINKX_WRR_CFG(a) (0x560ull | (uint64_t)(a) << 16)
-#define NIX_AF_EXPR_TX_FIFO_STATUS (0x640ull)
-#define NIX_AF_NORM_TX_FIFO_STATUS (0x648ull)
-#define NIX_AF_SDP_TX_FIFO_STATUS (0x650ull)
-#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x660ull)
-#define NIX_AF_TX_NPC_CAPTURE_INFO (0x668ull)
-#define NIX_AF_TX_NPC_CAPTURE_RESPX(a) (0x680ull | (uint64_t)(a) << 3)
-#define NIX_AF_SEB_ACTIVE_CYCLES_PCX(a) (0x6c0ull | (uint64_t)(a) << 3)
-#define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_HEAD(a) (0x710ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_TAIL(a) (0x720ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_STATUS(a) (0x730ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_NXT_HEAD(a) (0x740ull | (uint64_t)(a) << 16)
-#define NIX_AF_SQM_ACTIVE_CYCLES_PC (0x770ull)
-#define NIX_AF_PSE_CHANNEL_LEVEL (0x800ull)
-#define NIX_AF_PSE_SHAPER_CFG (0x810ull)
-#define NIX_AF_PSE_ACTIVE_CYCLES_PC (0x8c0ull)
-#define NIX_AF_MARK_FORMATX_CTL(a) (0x900ull | (uint64_t)(a) << 18)
-#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xa00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xa10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xa20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xa30ull | (uint64_t)(a) << 16)
-#define NIX_AF_SDP_LINK_CREDIT (0xa40ull)
-#define NIX_AF_SDP_SW_XOFFX(a) (0xa60ull | (uint64_t)(a) << 3)
-#define NIX_AF_SDP_HW_XOFFX(a) (0xac0ull | (uint64_t)(a) << 3)
-#define NIX_AF_TL4X_BP_STATUS(a) (0xb00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xb10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SCHEDULE(a) (0xc00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SHAPE(a) (0xc10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_CIR(a) (0xc20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SHAPE_STATE(a) (0xc50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SW_XOFF(a) (0xc70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_TOPOLOGY(a) (0xc80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG0(a) (0xcc0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG1(a) (0xcc8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG2(a) (0xcd0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG3(a) (0xcd8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xd20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xd30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_RED_PACKETS(a) (0xd40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_RED_BYTES(a) (0xd50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xd60ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xd70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xd80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_GREEN_BYTES(a) (0xd90ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE(a) (0xe10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_CIR(a) (0xe20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PIR(a) (0xe30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHED_STATE(a) (0xe40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE_STATE(a) (0xe50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SW_XOFF(a) (0xe70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_TOPOLOGY(a) (0xe80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PARENT(a) (0xe88ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG0(a) (0xec0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG1(a) (0xec8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG2(a) (0xed0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG3(a) (0xed8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHEDULE(a) \
- (0x1000ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE(a) \
- (0x1010ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_CIR(a) \
- (0x1020ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PIR(a) \
- (0x1030ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHED_STATE(a) \
- (0x1040ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE_STATE(a) \
- (0x1050ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SW_XOFF(a) \
- (0x1070ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_TOPOLOGY(a) \
- (0x1080ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PARENT(a) \
- (0x1088ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG0(a) \
- (0x10c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG1(a) \
- (0x10c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG2(a) \
- (0x10d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG3(a) \
- (0x10d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHEDULE(a) \
- (0x1200ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE(a) \
- (0x1210ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_CIR(a) \
- (0x1220ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PIR(a) \
- (0x1230ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHED_STATE(a) \
- (0x1240ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE_STATE(a) \
- (0x1250ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SW_XOFF(a) \
- (0x1270ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_TOPOLOGY(a) \
- (0x1280ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PARENT(a) \
- (0x1288ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG0(a) \
- (0x12c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG1(a) \
- (0x12c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG2(a) \
- (0x12d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG3(a) \
- (0x12d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHEDULE(a) \
- (0x1400ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE(a) \
- (0x1410ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_CIR(a) \
- (0x1420ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PIR(a) \
- (0x1430ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHED_STATE(a) \
- (0x1440ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE_STATE(a) \
- (0x1450ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SW_XOFF(a) \
- (0x1470ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PARENT(a) \
- (0x1480ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_MD_DEBUG(a) \
- (0x14c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_CFG(a) \
- (0x1600ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_BP_STATUS(a) \
- (0x1610ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) \
- (0x1700ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) \
- (0x1800ull | (uint64_t)(a) << 18 | (uint64_t)(b) << 3)
-#define NIX_AF_TX_MCASTX(a) \
- (0x1900ull | (uint64_t)(a) << 15)
-#define NIX_AF_TX_VTAG_DEFX_CTL(a) \
- (0x1a00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_VTAG_DEFX_DATA(a) \
- (0x1a10ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_BPIDX_STATUS(a) \
- (0x1a20ull | (uint64_t)(a) << 17)
-#define NIX_AF_RX_CHANX_CFG(a) \
- (0x1a30ull | (uint64_t)(a) << 15)
-#define NIX_AF_CINT_TIMERX(a) \
- (0x1a40ull | (uint64_t)(a) << 18)
-#define NIX_AF_LSO_FORMATX_FIELDX(a, b) \
- (0x1b00ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_CFG(a) \
- (0x4000ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_CFG(a) \
- (0x4020ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG2(a) \
- (0x4028ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_BASE(a) \
- (0x4030ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_CFG(a) \
- (0x4040ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_BASE(a) \
- (0x4050ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_CFG(a) \
- (0x4060ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_BASE(a) \
- (0x4070ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG(a) \
- (0x4080ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_PARSE_CFG(a) \
- (0x4090ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_CFG(a) \
- (0x40a0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_CFG(a) \
- (0x40c0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_BASE(a) \
- (0x40d0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_CFG(a) \
- (0x4100ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_BASE(a) \
- (0x4110ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_CFG(a) \
- (0x4120ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_BASE(a) \
- (0x4130ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG0(a) \
- (0x4140ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG1(a) \
- (0x4148ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) \
- (0x4150ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) \
- (0x4158ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) \
- (0x4170ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_STATUS(a) \
- (0x4180ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) \
- (0x4200ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_LOCKX(a, b) \
- (0x4300ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_TX_STATX(a, b) \
- (0x4400ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RX_STATX(a, b) \
- (0x4500ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RSS_GRPX(a, b) \
- (0x4600ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_NPC_MC_RCV (0x4700ull)
-#define NIX_AF_RX_NPC_MC_DROP (0x4710ull)
-#define NIX_AF_RX_NPC_MIRROR_RCV (0x4720ull)
-#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730ull)
-#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) \
- (0x4800ull | (uint64_t)(a) << 16)
-#define NIX_PRIV_AF_INT_CFG (0x8000000ull)
-#define NIX_PRIV_LFX_CFG(a) \
- (0x8000010ull | (uint64_t)(a) << 8)
-#define NIX_PRIV_LFX_INT_CFG(a) \
- (0x8000020ull | (uint64_t)(a) << 8)
-#define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030ull)
-
-#define NIX_LF_RX_SECRETX(a) (0x0ull | (uint64_t)(a) << 3)
-#define NIX_LF_CFG (0x100ull)
-#define NIX_LF_GINT (0x200ull)
-#define NIX_LF_GINT_W1S (0x208ull)
-#define NIX_LF_GINT_ENA_W1C (0x210ull)
-#define NIX_LF_GINT_ENA_W1S (0x218ull)
-#define NIX_LF_ERR_INT (0x220ull)
-#define NIX_LF_ERR_INT_W1S (0x228ull)
-#define NIX_LF_ERR_INT_ENA_W1C (0x230ull)
-#define NIX_LF_ERR_INT_ENA_W1S (0x238ull)
-#define NIX_LF_RAS (0x240ull)
-#define NIX_LF_RAS_W1S (0x248ull)
-#define NIX_LF_RAS_ENA_W1C (0x250ull)
-#define NIX_LF_RAS_ENA_W1S (0x258ull)
-#define NIX_LF_SQ_OP_ERR_DBG (0x260ull)
-#define NIX_LF_MNQ_ERR_DBG (0x270ull)
-#define NIX_LF_SEND_ERR_DBG (0x280ull)
-#define NIX_LF_TX_STATX(a) (0x300ull | (uint64_t)(a) << 3)
-#define NIX_LF_RX_STATX(a) (0x400ull | (uint64_t)(a) << 3)
-#define NIX_LF_OP_SENDX(a) (0x800ull | (uint64_t)(a) << 3)
-#define NIX_LF_RQ_OP_INT (0x900ull)
-#define NIX_LF_RQ_OP_OCTS (0x910ull)
-#define NIX_LF_RQ_OP_PKTS (0x920ull)
-#define NIX_LF_RQ_OP_DROP_OCTS (0x930ull)
-#define NIX_LF_RQ_OP_DROP_PKTS (0x940ull)
-#define NIX_LF_RQ_OP_RE_PKTS (0x950ull)
-#define NIX_LF_OP_IPSEC_DYNO_CNT (0x980ull)
-#define NIX_LF_SQ_OP_INT (0xa00ull)
-#define NIX_LF_SQ_OP_OCTS (0xa10ull)
-#define NIX_LF_SQ_OP_PKTS (0xa20ull)
-#define NIX_LF_SQ_OP_STATUS (0xa30ull)
-#define NIX_LF_SQ_OP_DROP_OCTS (0xa40ull)
-#define NIX_LF_SQ_OP_DROP_PKTS (0xa50ull)
-#define NIX_LF_CQ_OP_INT (0xb00ull)
-#define NIX_LF_CQ_OP_DOOR (0xb30ull)
-#define NIX_LF_CQ_OP_STATUS (0xb40ull)
-#define NIX_LF_QINTX_CNT(a) (0xc00ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_INT(a) (0xc10ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1S(a) (0xc20ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1C(a) (0xc30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_CNT(a) (0xd00ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_WAIT(a) (0xd10ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT(a) (0xd20ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT_W1S(a) (0xd30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1S(a) (0xd40ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1C(a) (0xd50ull | (uint64_t)(a) << 12)
-
-
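As an editorial aside: the indexed register macros above fold the instance
number into fixed bit positions of the MMIO offset. A minimal standalone
sketch, reusing ``NIX_AF_SMQX_CFG`` exactly as defined above, shows the
arithmetic:

.. code-block:: c

   #include <inttypes.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Copied from the register offsets above: the SMQ index lands in bits 16+. */
   #define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16)

   int main(void)
   {
       /* SMQ 3: 0x700 | (3 << 16) = 0x30700 */
       printf("NIX_AF_SMQX_CFG(3) = 0x%" PRIx64 "\n", NIX_AF_SMQX_CFG(3));
       return 0;
   }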
-/* Enum offsets */
-
-#define NIX_TX_VTAGOP_NOP (0x0ull)
-#define NIX_TX_VTAGOP_INSERT (0x1ull)
-#define NIX_TX_VTAGOP_REPLACE (0x2ull)
-
-#define NIX_TX_ACTIONOP_DROP (0x0ull)
-#define NIX_TX_ACTIONOP_UCAST_DEFAULT (0x1ull)
-#define NIX_TX_ACTIONOP_UCAST_CHAN (0x2ull)
-#define NIX_TX_ACTIONOP_MCAST (0x3ull)
-#define NIX_TX_ACTIONOP_DROP_VIOL (0x5ull)
-
-#define NIX_INTF_RX (0x0ull)
-#define NIX_INTF_TX (0x1ull)
-
-#define NIX_TXLAYER_OL3 (0x0ull)
-#define NIX_TXLAYER_OL4 (0x1ull)
-#define NIX_TXLAYER_IL3 (0x2ull)
-#define NIX_TXLAYER_IL4 (0x3ull)
-
-#define NIX_SUBDC_NOP (0x0ull)
-#define NIX_SUBDC_EXT (0x1ull)
-#define NIX_SUBDC_CRC (0x2ull)
-#define NIX_SUBDC_IMM (0x3ull)
-#define NIX_SUBDC_SG (0x4ull)
-#define NIX_SUBDC_MEM (0x5ull)
-#define NIX_SUBDC_JUMP (0x6ull)
-#define NIX_SUBDC_WORK (0x7ull)
-#define NIX_SUBDC_SOD (0xfull)
-
-#define NIX_STYPE_STF (0x0ull)
-#define NIX_STYPE_STT (0x1ull)
-#define NIX_STYPE_STP (0x2ull)
-
-#define NIX_STAT_LF_TX_TX_UCAST (0x0ull)
-#define NIX_STAT_LF_TX_TX_BCAST (0x1ull)
-#define NIX_STAT_LF_TX_TX_MCAST (0x2ull)
-#define NIX_STAT_LF_TX_TX_DROP (0x3ull)
-#define NIX_STAT_LF_TX_TX_OCTS (0x4ull)
-
-#define NIX_STAT_LF_RX_RX_OCTS (0x0ull)
-#define NIX_STAT_LF_RX_RX_UCAST (0x1ull)
-#define NIX_STAT_LF_RX_RX_BCAST (0x2ull)
-#define NIX_STAT_LF_RX_RX_MCAST (0x3ull)
-#define NIX_STAT_LF_RX_RX_DROP (0x4ull)
-#define NIX_STAT_LF_RX_RX_DROP_OCTS (0x5ull)
-#define NIX_STAT_LF_RX_RX_FCS (0x6ull)
-#define NIX_STAT_LF_RX_RX_ERR (0x7ull)
-#define NIX_STAT_LF_RX_RX_DRP_BCAST (0x8ull)
-#define NIX_STAT_LF_RX_RX_DRP_MCAST (0x9ull)
-#define NIX_STAT_LF_RX_RX_DRP_L3BCAST (0xaull)
-#define NIX_STAT_LF_RX_RX_DRP_L3MCAST (0xbull)
-
-#define NIX_SQOPERR_SQ_OOR (0x0ull)
-#define NIX_SQOPERR_SQ_CTX_FAULT (0x1ull)
-#define NIX_SQOPERR_SQ_CTX_POISON (0x2ull)
-#define NIX_SQOPERR_SQ_DISABLED (0x3ull)
-#define NIX_SQOPERR_MAX_SQE_SIZE_ERR (0x4ull)
-#define NIX_SQOPERR_SQE_OFLOW (0x5ull)
-#define NIX_SQOPERR_SQB_NULL (0x6ull)
-#define NIX_SQOPERR_SQB_FAULT (0x7ull)
-
-#define NIX_XQESZ_W64 (0x0ull)
-#define NIX_XQESZ_W16 (0x1ull)
-
-#define NIX_VTAGSIZE_T4 (0x0ull)
-#define NIX_VTAGSIZE_T8 (0x1ull)
-
-#define NIX_RX_ACTIONOP_DROP (0x0ull)
-#define NIX_RX_ACTIONOP_UCAST (0x1ull)
-#define NIX_RX_ACTIONOP_UCAST_IPSEC (0x2ull)
-#define NIX_RX_ACTIONOP_MCAST (0x3ull)
-#define NIX_RX_ACTIONOP_RSS (0x4ull)
-#define NIX_RX_ACTIONOP_PF_FUNC_DROP (0x5ull)
-#define NIX_RX_ACTIONOP_MIRROR (0x6ull)
-
-#define NIX_RX_VTAGACTION_VTAG0_RELPTR (0x0ull)
-#define NIX_RX_VTAGACTION_VTAG1_RELPTR (0x4ull)
-#define NIX_RX_VTAGACTION_VTAG_VALID (0x1ull)
-#define NIX_TX_VTAGACTION_VTAG0_RELPTR \
- (sizeof(struct nix_inst_hdr_s) + 2 * 6)
-#define NIX_TX_VTAGACTION_VTAG1_RELPTR \
- (sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4)
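The two TX vtag relative pointers above skip the NIX instruction header plus
the two 6-byte MAC addresses. With ``struct nix_inst_hdr_s`` as defined later
in this header (a single 64-bit word), they evaluate to byte offsets 20 and 24;
a small sketch confirming the arithmetic:

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Copied from later in this header: one 64-bit word. */
   struct nix_inst_hdr_s {
       uint64_t pf_func    : 16;
       uint64_t sq         : 20;
       uint64_t rsvd_63_36 : 28;
   };

   int main(void)
   {
       /* Instruction header (8 bytes) + dst/src MAC addresses (2 * 6 bytes). */
       printf("VTAG0 relptr = %zu\n", sizeof(struct nix_inst_hdr_s) + 2 * 6);
       printf("VTAG1 relptr = %zu\n", sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4);
       return 0;
   }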
-#define NIX_RQINT_DROP (0x0ull)
-#define NIX_RQINT_RED (0x1ull)
-#define NIX_RQINT_R2 (0x2ull)
-#define NIX_RQINT_R3 (0x3ull)
-#define NIX_RQINT_R4 (0x4ull)
-#define NIX_RQINT_R5 (0x5ull)
-#define NIX_RQINT_R6 (0x6ull)
-#define NIX_RQINT_R7 (0x7ull)
-
-#define NIX_MAXSQESZ_W16 (0x0ull)
-#define NIX_MAXSQESZ_W8 (0x1ull)
-
-#define NIX_LSOALG_NOP (0x0ull)
-#define NIX_LSOALG_ADD_SEGNUM (0x1ull)
-#define NIX_LSOALG_ADD_PAYLEN (0x2ull)
-#define NIX_LSOALG_ADD_OFFSET (0x3ull)
-#define NIX_LSOALG_TCP_FLAGS (0x4ull)
-
-#define NIX_MNQERR_SQ_CTX_FAULT (0x0ull)
-#define NIX_MNQERR_SQ_CTX_POISON (0x1ull)
-#define NIX_MNQERR_SQB_FAULT (0x2ull)
-#define NIX_MNQERR_SQB_POISON (0x3ull)
-#define NIX_MNQERR_TOTAL_ERR (0x4ull)
-#define NIX_MNQERR_LSO_ERR (0x5ull)
-#define NIX_MNQERR_CQ_QUERY_ERR (0x6ull)
-#define NIX_MNQERR_MAX_SQE_SIZE_ERR (0x7ull)
-#define NIX_MNQERR_MAXLEN_ERR (0x8ull)
-#define NIX_MNQERR_SQE_SIZEM1_ZERO (0x9ull)
-
-#define NIX_MDTYPE_RSVD (0x0ull)
-#define NIX_MDTYPE_FLUSH (0x1ull)
-#define NIX_MDTYPE_PMD (0x2ull)
-
-#define NIX_NDC_TX_PORT_LMT (0x0ull)
-#define NIX_NDC_TX_PORT_ENQ (0x1ull)
-#define NIX_NDC_TX_PORT_MNQ (0x2ull)
-#define NIX_NDC_TX_PORT_DEQ (0x3ull)
-#define NIX_NDC_TX_PORT_DMA (0x4ull)
-#define NIX_NDC_TX_PORT_XQE (0x5ull)
-
-#define NIX_NDC_RX_PORT_AQ (0x0ull)
-#define NIX_NDC_RX_PORT_CQ (0x1ull)
-#define NIX_NDC_RX_PORT_CINT (0x2ull)
-#define NIX_NDC_RX_PORT_MC (0x3ull)
-#define NIX_NDC_RX_PORT_PKT (0x4ull)
-#define NIX_NDC_RX_PORT_RQ (0x5ull)
-
-#define NIX_RE_OPCODE_RE_NONE (0x0ull)
-#define NIX_RE_OPCODE_RE_PARTIAL (0x1ull)
-#define NIX_RE_OPCODE_RE_JABBER (0x2ull)
-#define NIX_RE_OPCODE_RE_FCS (0x7ull)
-#define NIX_RE_OPCODE_RE_FCS_RCV (0x8ull)
-#define NIX_RE_OPCODE_RE_TERMINATE (0x9ull)
-#define NIX_RE_OPCODE_RE_RX_CTL (0xbull)
-#define NIX_RE_OPCODE_RE_SKIP (0xcull)
-#define NIX_RE_OPCODE_RE_DMAPKT (0xfull)
-#define NIX_RE_OPCODE_UNDERSIZE (0x10ull)
-#define NIX_RE_OPCODE_OVERSIZE (0x11ull)
-#define NIX_RE_OPCODE_OL2_LENMISM (0x12ull)
-
-#define NIX_REDALG_STD (0x0ull)
-#define NIX_REDALG_SEND (0x1ull)
-#define NIX_REDALG_STALL (0x2ull)
-#define NIX_REDALG_DISCARD (0x3ull)
-
-#define NIX_RX_MCOP_RQ (0x0ull)
-#define NIX_RX_MCOP_RSS (0x1ull)
-
-#define NIX_RX_PERRCODE_NPC_RESULT_ERR (0x2ull)
-#define NIX_RX_PERRCODE_MCAST_FAULT (0x4ull)
-#define NIX_RX_PERRCODE_MIRROR_FAULT (0x5ull)
-#define NIX_RX_PERRCODE_MCAST_POISON (0x6ull)
-#define NIX_RX_PERRCODE_MIRROR_POISON (0x7ull)
-#define NIX_RX_PERRCODE_DATA_FAULT (0x8ull)
-#define NIX_RX_PERRCODE_MEMOUT (0x9ull)
-#define NIX_RX_PERRCODE_BUFS_OFLOW (0xaull)
-#define NIX_RX_PERRCODE_OL3_LEN (0x10ull)
-#define NIX_RX_PERRCODE_OL4_LEN (0x11ull)
-#define NIX_RX_PERRCODE_OL4_CHK (0x12ull)
-#define NIX_RX_PERRCODE_OL4_PORT (0x13ull)
-#define NIX_RX_PERRCODE_IL3_LEN (0x20ull)
-#define NIX_RX_PERRCODE_IL4_LEN (0x21ull)
-#define NIX_RX_PERRCODE_IL4_CHK (0x22ull)
-#define NIX_RX_PERRCODE_IL4_PORT (0x23ull)
-
-#define NIX_SENDCRCALG_CRC32 (0x0ull)
-#define NIX_SENDCRCALG_CRC32C (0x1ull)
-#define NIX_SENDCRCALG_ONES16 (0x2ull)
-
-#define NIX_SENDL3TYPE_NONE (0x0ull)
-#define NIX_SENDL3TYPE_IP4 (0x2ull)
-#define NIX_SENDL3TYPE_IP4_CKSUM (0x3ull)
-#define NIX_SENDL3TYPE_IP6 (0x4ull)
-
-#define NIX_SENDL4TYPE_NONE (0x0ull)
-#define NIX_SENDL4TYPE_TCP_CKSUM (0x1ull)
-#define NIX_SENDL4TYPE_SCTP_CKSUM (0x2ull)
-#define NIX_SENDL4TYPE_UDP_CKSUM (0x3ull)
-
-#define NIX_SENDLDTYPE_LDD (0x0ull)
-#define NIX_SENDLDTYPE_LDT (0x1ull)
-#define NIX_SENDLDTYPE_LDWB (0x2ull)
-
-#define NIX_SENDMEMALG_SET (0x0ull)
-#define NIX_SENDMEMALG_SETTSTMP (0x1ull)
-#define NIX_SENDMEMALG_SETRSLT (0x2ull)
-#define NIX_SENDMEMALG_ADD (0x8ull)
-#define NIX_SENDMEMALG_SUB (0x9ull)
-#define NIX_SENDMEMALG_ADDLEN (0xaull)
-#define NIX_SENDMEMALG_SUBLEN (0xbull)
-#define NIX_SENDMEMALG_ADDMBUF (0xcull)
-#define NIX_SENDMEMALG_SUBMBUF (0xdull)
-
-#define NIX_SENDMEMDSZ_B64 (0x0ull)
-#define NIX_SENDMEMDSZ_B32 (0x1ull)
-#define NIX_SENDMEMDSZ_B16 (0x2ull)
-#define NIX_SENDMEMDSZ_B8 (0x3ull)
-
-#define NIX_SEND_STATUS_GOOD (0x0ull)
-#define NIX_SEND_STATUS_SQ_CTX_FAULT (0x1ull)
-#define NIX_SEND_STATUS_SQ_CTX_POISON (0x2ull)
-#define NIX_SEND_STATUS_SQB_FAULT (0x3ull)
-#define NIX_SEND_STATUS_SQB_POISON (0x4ull)
-#define NIX_SEND_STATUS_SEND_HDR_ERR (0x5ull)
-#define NIX_SEND_STATUS_SEND_EXT_ERR (0x6ull)
-#define NIX_SEND_STATUS_JUMP_FAULT (0x7ull)
-#define NIX_SEND_STATUS_JUMP_POISON (0x8ull)
-#define NIX_SEND_STATUS_SEND_CRC_ERR (0x10ull)
-#define NIX_SEND_STATUS_SEND_IMM_ERR (0x11ull)
-#define NIX_SEND_STATUS_SEND_SG_ERR (0x12ull)
-#define NIX_SEND_STATUS_SEND_MEM_ERR (0x13ull)
-#define NIX_SEND_STATUS_INVALID_SUBDC (0x14ull)
-#define NIX_SEND_STATUS_SUBDC_ORDER_ERR (0x15ull)
-#define NIX_SEND_STATUS_DATA_FAULT (0x16ull)
-#define NIX_SEND_STATUS_DATA_POISON (0x17ull)
-#define NIX_SEND_STATUS_NPC_DROP_ACTION (0x20ull)
-#define NIX_SEND_STATUS_LOCK_VIOL (0x21ull)
-#define NIX_SEND_STATUS_NPC_UCAST_CHAN_ERR (0x22ull)
-#define NIX_SEND_STATUS_NPC_MCAST_CHAN_ERR (0x23ull)
-#define NIX_SEND_STATUS_NPC_MCAST_ABORT (0x24ull)
-#define NIX_SEND_STATUS_NPC_VTAG_PTR_ERR (0x25ull)
-#define NIX_SEND_STATUS_NPC_VTAG_SIZE_ERR (0x26ull)
-#define NIX_SEND_STATUS_SEND_MEM_FAULT (0x27ull)
-
-#define NIX_SQINT_LMT_ERR (0x0ull)
-#define NIX_SQINT_MNQ_ERR (0x1ull)
-#define NIX_SQINT_SEND_ERR (0x2ull)
-#define NIX_SQINT_SQB_ALLOC_FAIL (0x3ull)
-
-#define NIX_XQE_TYPE_INVALID (0x0ull)
-#define NIX_XQE_TYPE_RX (0x1ull)
-#define NIX_XQE_TYPE_RX_IPSECS (0x2ull)
-#define NIX_XQE_TYPE_RX_IPSECH (0x3ull)
-#define NIX_XQE_TYPE_RX_IPSECD (0x4ull)
-#define NIX_XQE_TYPE_SEND (0x8ull)
-
-#define NIX_AQ_COMP_NOTDONE (0x0ull)
-#define NIX_AQ_COMP_GOOD (0x1ull)
-#define NIX_AQ_COMP_SWERR (0x2ull)
-#define NIX_AQ_COMP_CTX_POISON (0x3ull)
-#define NIX_AQ_COMP_CTX_FAULT (0x4ull)
-#define NIX_AQ_COMP_LOCKERR (0x5ull)
-#define NIX_AQ_COMP_SQB_ALLOC_FAIL (0x6ull)
-
-#define NIX_AF_INT_VEC_RVU (0x0ull)
-#define NIX_AF_INT_VEC_GEN (0x1ull)
-#define NIX_AF_INT_VEC_AQ_DONE (0x2ull)
-#define NIX_AF_INT_VEC_AF_ERR (0x3ull)
-#define NIX_AF_INT_VEC_POISON (0x4ull)
-
-#define NIX_AQINT_GEN_RX_MCAST_DROP (0x0ull)
-#define NIX_AQINT_GEN_RX_MIRROR_DROP (0x1ull)
-#define NIX_AQINT_GEN_TL1_DRAIN (0x3ull)
-#define NIX_AQINT_GEN_SMQ_FLUSH_DONE (0x4ull)
-
-#define NIX_AQ_INSTOP_NOP (0x0ull)
-#define NIX_AQ_INSTOP_INIT (0x1ull)
-#define NIX_AQ_INSTOP_WRITE (0x2ull)
-#define NIX_AQ_INSTOP_READ (0x3ull)
-#define NIX_AQ_INSTOP_LOCK (0x4ull)
-#define NIX_AQ_INSTOP_UNLOCK (0x5ull)
-
-#define NIX_AQ_CTYPE_RQ (0x0ull)
-#define NIX_AQ_CTYPE_SQ (0x1ull)
-#define NIX_AQ_CTYPE_CQ (0x2ull)
-#define NIX_AQ_CTYPE_MCE (0x3ull)
-#define NIX_AQ_CTYPE_RSS (0x4ull)
-#define NIX_AQ_CTYPE_DYNO (0x5ull)
-
-#define NIX_COLORRESULT_GREEN (0x0ull)
-#define NIX_COLORRESULT_YELLOW (0x1ull)
-#define NIX_COLORRESULT_RED_SEND (0x2ull)
-#define NIX_COLORRESULT_RED_DROP (0x3ull)
-
-#define NIX_CHAN_LBKX_CHX(a, b) \
- (0x000ull | ((uint64_t)(a) << 8) | (uint64_t)(b))
-#define NIX_CHAN_R4 (0x400ull)
-#define NIX_CHAN_R5 (0x500ull)
-#define NIX_CHAN_R6 (0x600ull)
-#define NIX_CHAN_SDP_CH_END (0x7ffull)
-#define NIX_CHAN_SDP_CH_START (0x700ull)
-#define NIX_CHAN_CGXX_LMACX_CHX(a, b, c) \
- (0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | \
- (uint64_t)(c))
-
-#define NIX_INTF_SDP (0x4ull)
-#define NIX_INTF_CGX0 (0x0ull)
-#define NIX_INTF_CGX1 (0x1ull)
-#define NIX_INTF_CGX2 (0x2ull)
-#define NIX_INTF_LBK0 (0x3ull)
-
-#define NIX_CQERRINT_DOOR_ERR (0x0ull)
-#define NIX_CQERRINT_WR_FULL (0x1ull)
-#define NIX_CQERRINT_CQE_FAULT (0x2ull)
-
-#define NIX_LF_INT_VEC_GINT (0x80ull)
-#define NIX_LF_INT_VEC_ERR_INT (0x81ull)
-#define NIX_LF_INT_VEC_POISON (0x82ull)
-#define NIX_LF_INT_VEC_QINT_END (0x3full)
-#define NIX_LF_INT_VEC_QINT_START (0x0ull)
-#define NIX_LF_INT_VEC_CINT_END (0x7full)
-#define NIX_LF_INT_VEC_CINT_START (0x40ull)
-
-/* Enums definitions */
-
-/* Structures definitions */
-
-/* NIX admin queue instruction structure */
-struct nix_aq_inst_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t lf : 7;
- uint64_t rsvd_23_15 : 9;
- uint64_t cindex : 20;
- uint64_t rsvd_62_44 : 19;
- uint64_t doneint : 1;
- uint64_t res_addr : 64; /* W1 */
-};
-
-/* NIX admin queue result structure */
-struct nix_aq_res_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t compcode : 8;
- uint64_t doneint : 1;
- uint64_t rsvd_63_17 : 47;
- uint64_t rsvd_127_64 : 64; /* W1 */
-};
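Both admin queue structures are laid out as two 64-bit words (128 bits). A
hedged sketch of a compile-time size check, using ``nix_aq_inst_s`` verbatim
and assuming a compiler (GCC/Clang) that packs 64-bit bit-fields the way this
driver expects:

.. code-block:: c

   #include <assert.h> /* static_assert (C11) */
   #include <stdint.h>

   struct nix_aq_inst_s {
       uint64_t op         : 4;
       uint64_t ctype      : 4;
       uint64_t lf         : 7;
       uint64_t rsvd_23_15 : 9;
       uint64_t cindex     : 20;
       uint64_t rsvd_62_44 : 19;
       uint64_t doneint    : 1;
       uint64_t res_addr   : 64; /* W1 */
   };

   static_assert(sizeof(struct nix_aq_inst_s) == 16,
                 "NIX AQ instruction must be exactly two 64-bit words");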
-
-/* NIX completion interrupt context hardware structure */
-struct nix_cint_hw_s {
- uint64_t ecount : 32;
- uint64_t qcount : 16;
- uint64_t intr : 1;
- uint64_t ena : 1;
- uint64_t timer_idx : 8;
- uint64_t rsvd_63_58 : 6;
- uint64_t ecount_wait : 32;
- uint64_t qcount_wait : 16;
- uint64_t time_wait : 8;
- uint64_t rsvd_127_120 : 8;
-};
-
-/* NIX completion queue entry header structure */
-struct nix_cqe_hdr_s {
- uint64_t tag : 32;
- uint64_t q : 20;
- uint64_t rsvd_57_52 : 6;
- uint64_t node : 2;
- uint64_t cqe_type : 4;
-};
-
-/* NIX completion queue context structure */
-struct nix_cq_ctx_s {
- uint64_t base : 64;/* W0 */
- uint64_t rsvd_67_64 : 4;
- uint64_t bp_ena : 1;
- uint64_t rsvd_71_69 : 3;
- uint64_t bpid : 9;
- uint64_t rsvd_83_81 : 3;
- uint64_t qint_idx : 7;
- uint64_t cq_err : 1;
- uint64_t cint_idx : 7;
- uint64_t avg_con : 9;
- uint64_t wrptr : 20;
- uint64_t tail : 20;
- uint64_t head : 20;
- uint64_t avg_level : 8;
- uint64_t update_time : 16;
- uint64_t bp : 8;
- uint64_t drop : 8;
- uint64_t drop_ena : 1;
- uint64_t ena : 1;
- uint64_t rsvd_211_210 : 2;
- uint64_t substream : 20;
- uint64_t caching : 1;
- uint64_t rsvd_235_233 : 3;
- uint64_t qsize : 4;
- uint64_t cq_err_int : 8;
- uint64_t cq_err_int_ena : 8;
-};
-
-/* NIX instruction header structure */
-struct nix_inst_hdr_s {
- uint64_t pf_func : 16;
- uint64_t sq : 20;
- uint64_t rsvd_63_36 : 28;
-};
-
-/* NIX i/o virtual address structure */
-struct nix_iova_s {
- uint64_t addr : 64; /* W0 */
-};
-
-/* NIX IPsec dynamic ordering counter structure */
-struct nix_ipsec_dyno_s {
- uint32_t count : 32; /* W0 */
-};
-
-/* NIX memory value structure */
-struct nix_mem_result_s {
- uint64_t v : 1;
- uint64_t color : 2;
- uint64_t rsvd_63_3 : 61;
-};
-
-/* NIX statistics operation write data structure */
-struct nix_op_q_wdata_s {
- uint64_t rsvd_31_0 : 32;
- uint64_t q : 20;
- uint64_t rsvd_63_52 : 12;
-};
-
-/* NIX queue interrupt context hardware structure */
-struct nix_qint_hw_s {
- uint32_t count : 22;
- uint32_t rsvd_30_22 : 9;
- uint32_t ena : 1;
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_hw_s {
- uint64_t ena : 1;
- uint64_t sso_ena : 1;
- uint64_t ipsech_ena : 1;
- uint64_t ena_wqwd : 1;
- uint64_t cq : 20;
- uint64_t substream : 20;
- uint64_t wqe_aura : 20;
- uint64_t spb_aura : 20;
- uint64_t lpb_aura : 20;
- uint64_t sso_grp : 10;
- uint64_t sso_tt : 2;
- uint64_t pb_caching : 2;
- uint64_t wqe_caching : 1;
- uint64_t xqe_drop_ena : 1;
- uint64_t spb_drop_ena : 1;
- uint64_t lpb_drop_ena : 1;
- uint64_t wqe_skip : 2;
- uint64_t rsvd_127_124 : 4;
- uint64_t rsvd_139_128 : 12;
- uint64_t spb_sizem1 : 6;
- uint64_t rsvd_150_146 : 5;
- uint64_t spb_ena : 1;
- uint64_t lpb_sizem1 : 12;
- uint64_t first_skip : 7;
- uint64_t rsvd_171 : 1;
- uint64_t later_skip : 6;
- uint64_t xqe_imm_size : 6;
- uint64_t rsvd_189_184 : 6;
- uint64_t xqe_imm_copy : 1;
- uint64_t xqe_hdr_split : 1;
- uint64_t xqe_drop : 8;
- uint64_t xqe_pass : 8;
- uint64_t wqe_pool_drop : 8;
- uint64_t wqe_pool_pass : 8;
- uint64_t spb_aura_drop : 8;
- uint64_t spb_aura_pass : 8;
- uint64_t spb_pool_drop : 8;
- uint64_t spb_pool_pass : 8;
- uint64_t lpb_aura_drop : 8;
- uint64_t lpb_aura_pass : 8;
- uint64_t lpb_pool_drop : 8;
- uint64_t lpb_pool_pass : 8;
- uint64_t rsvd_319_288 : 32;
- uint64_t ltag : 24;
- uint64_t good_utag : 8;
- uint64_t bad_utag : 8;
- uint64_t flow_tagw : 6;
- uint64_t rsvd_383_366 : 18;
- uint64_t octs : 48;
- uint64_t rsvd_447_432 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_511_496 : 16;
- uint64_t drop_octs : 48;
- uint64_t rsvd_575_560 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_639_624 : 16;
- uint64_t re_pkts : 48;
- uint64_t rsvd_702_688 : 15;
- uint64_t ena_copy : 1;
- uint64_t rsvd_739_704 : 36;
- uint64_t rq_int : 8;
- uint64_t rq_int_ena : 8;
- uint64_t qint_idx : 7;
- uint64_t rsvd_767_763 : 5;
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_s {
- uint64_t ena : 1;
- uint64_t sso_ena : 1;
- uint64_t ipsech_ena : 1;
- uint64_t ena_wqwd : 1;
- uint64_t cq : 20;
- uint64_t substream : 20;
- uint64_t wqe_aura : 20;
- uint64_t spb_aura : 20;
- uint64_t lpb_aura : 20;
- uint64_t sso_grp : 10;
- uint64_t sso_tt : 2;
- uint64_t pb_caching : 2;
- uint64_t wqe_caching : 1;
- uint64_t xqe_drop_ena : 1;
- uint64_t spb_drop_ena : 1;
- uint64_t lpb_drop_ena : 1;
- uint64_t rsvd_127_122 : 6;
- uint64_t rsvd_139_128 : 12;
- uint64_t spb_sizem1 : 6;
- uint64_t wqe_skip : 2;
- uint64_t rsvd_150_148 : 3;
- uint64_t spb_ena : 1;
- uint64_t lpb_sizem1 : 12;
- uint64_t first_skip : 7;
- uint64_t rsvd_171 : 1;
- uint64_t later_skip : 6;
- uint64_t xqe_imm_size : 6;
- uint64_t rsvd_189_184 : 6;
- uint64_t xqe_imm_copy : 1;
- uint64_t xqe_hdr_split : 1;
- uint64_t xqe_drop : 8;
- uint64_t xqe_pass : 8;
- uint64_t wqe_pool_drop : 8;
- uint64_t wqe_pool_pass : 8;
- uint64_t spb_aura_drop : 8;
- uint64_t spb_aura_pass : 8;
- uint64_t spb_pool_drop : 8;
- uint64_t spb_pool_pass : 8;
- uint64_t lpb_aura_drop : 8;
- uint64_t lpb_aura_pass : 8;
- uint64_t lpb_pool_drop : 8;
- uint64_t lpb_pool_pass : 8;
- uint64_t rsvd_291_288 : 4;
- uint64_t rq_int : 8;
- uint64_t rq_int_ena : 8;
- uint64_t qint_idx : 7;
- uint64_t rsvd_319_315 : 5;
- uint64_t ltag : 24;
- uint64_t good_utag : 8;
- uint64_t bad_utag : 8;
- uint64_t flow_tagw : 6;
- uint64_t rsvd_383_366 : 18;
- uint64_t octs : 48;
- uint64_t rsvd_447_432 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_511_496 : 16;
- uint64_t drop_octs : 48;
- uint64_t rsvd_575_560 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_639_624 : 16;
- uint64_t re_pkts : 48;
- uint64_t rsvd_703_688 : 16;
- uint64_t rsvd_767_704 : 64;/* W11 */
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive side scaling entry structure */
-struct nix_rsse_s {
- uint32_t rq : 20;
- uint32_t rsvd_31_20 : 12;
-};
-
-/* NIX receive action structure */
-struct nix_rx_action_s {
- uint64_t op : 4;
- uint64_t pf_func : 16;
- uint64_t index : 20;
- uint64_t match_id : 16;
- uint64_t flow_key_alg : 5;
- uint64_t rsvd_63_61 : 3;
-};
-
-/* NIX receive immediate sub descriptor structure */
-struct nix_rx_imm_s {
- uint64_t size : 16;
- uint64_t apad : 3;
- uint64_t rsvd_59_19 : 41;
- uint64_t subdc : 4;
-};
-
-/* NIX receive multicast/mirror entry structure */
-struct nix_rx_mce_s {
- uint64_t op : 2;
- uint64_t rsvd_2 : 1;
- uint64_t eol : 1;
- uint64_t index : 20;
- uint64_t rsvd_31_24 : 8;
- uint64_t pf_func : 16;
- uint64_t next : 16;
-};
-
-/* NIX receive parse structure */
-struct nix_rx_parse_s {
- uint64_t chan : 12;
- uint64_t desc_sizem1 : 5;
- uint64_t imm_copy : 1;
- uint64_t express : 1;
- uint64_t wqwd : 1;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t latype : 4;
- uint64_t lbtype : 4;
- uint64_t lctype : 4;
- uint64_t ldtype : 4;
- uint64_t letype : 4;
- uint64_t lftype : 4;
- uint64_t lgtype : 4;
- uint64_t lhtype : 4;
- uint64_t pkt_lenm1 : 16;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t vtag0_valid : 1;
- uint64_t vtag0_gone : 1;
- uint64_t vtag1_valid : 1;
- uint64_t vtag1_gone : 1;
- uint64_t pkind : 6;
- uint64_t rsvd_95_94 : 2;
- uint64_t vtag0_tci : 16;
- uint64_t vtag1_tci : 16;
- uint64_t laflags : 8;
- uint64_t lbflags : 8;
- uint64_t lcflags : 8;
- uint64_t ldflags : 8;
- uint64_t leflags : 8;
- uint64_t lfflags : 8;
- uint64_t lgflags : 8;
- uint64_t lhflags : 8;
- uint64_t eoh_ptr : 8;
- uint64_t wqe_aura : 20;
- uint64_t pb_aura : 20;
- uint64_t match_id : 16;
- uint64_t laptr : 8;
- uint64_t lbptr : 8;
- uint64_t lcptr : 8;
- uint64_t ldptr : 8;
- uint64_t leptr : 8;
- uint64_t lfptr : 8;
- uint64_t lgptr : 8;
- uint64_t lhptr : 8;
- uint64_t vtag0_ptr : 8;
- uint64_t vtag1_ptr : 8;
- uint64_t flow_key_alg : 5;
- uint64_t rsvd_383_341 : 43;
- uint64_t rsvd_447_384 : 64; /* W6 */
-};
-
-/* NIX receive scatter/gather sub descriptor structure */
-struct nix_rx_sg_s {
- uint64_t seg1_size : 16;
- uint64_t seg2_size : 16;
- uint64_t seg3_size : 16;
- uint64_t segs : 2;
- uint64_t rsvd_59_50 : 10;
- uint64_t subdc : 4;
-};
-
-/* NIX receive vtag action structure */
-struct nix_rx_vtag_action_s {
- uint64_t vtag0_relptr : 8;
- uint64_t vtag0_lid : 3;
- uint64_t rsvd_11 : 1;
- uint64_t vtag0_type : 3;
- uint64_t vtag0_valid : 1;
- uint64_t rsvd_31_16 : 16;
- uint64_t vtag1_relptr : 8;
- uint64_t vtag1_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t vtag1_type : 3;
- uint64_t vtag1_valid : 1;
- uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX send completion structure */
-struct nix_send_comp_s {
- uint64_t status : 8;
- uint64_t sqe_id : 16;
- uint64_t rsvd_63_24 : 40;
-};
-
-/* NIX send CRC sub descriptor structure */
-struct nix_send_crc_s {
- uint64_t size : 16;
- uint64_t start : 16;
- uint64_t insert : 16;
- uint64_t rsvd_57_48 : 10;
- uint64_t alg : 2;
- uint64_t subdc : 4;
- uint64_t iv : 32;
- uint64_t rsvd_127_96 : 32;
-};
-
-/* NIX send extended header sub descriptor structure */
-RTE_STD_C11
-union nix_send_ext_w0_u {
- uint64_t u;
- struct {
- uint64_t lso_mps : 14;
- uint64_t lso : 1;
- uint64_t tstmp : 1;
- uint64_t lso_sb : 8;
- uint64_t lso_format : 5;
- uint64_t rsvd_31_29 : 3;
- uint64_t shp_chg : 9;
- uint64_t shp_dis : 1;
- uint64_t shp_ra : 2;
- uint64_t markptr : 8;
- uint64_t markform : 7;
- uint64_t mark_en : 1;
- uint64_t subdc : 4;
- };
-};
-
-RTE_STD_C11
-union nix_send_ext_w1_u {
- uint64_t u;
- struct {
- uint64_t vlan0_ins_ptr : 8;
- uint64_t vlan0_ins_tci : 16;
- uint64_t vlan1_ins_ptr : 8;
- uint64_t vlan1_ins_tci : 16;
- uint64_t vlan0_ins_ena : 1;
- uint64_t vlan1_ins_ena : 1;
- uint64_t rsvd_127_114 : 14;
- };
-};
-
-struct nix_send_ext_s {
- union nix_send_ext_w0_u w0;
- union nix_send_ext_w1_u w1;
-};
-
-/* NIX send header sub descriptor structure */
-RTE_STD_C11
-union nix_send_hdr_w0_u {
- uint64_t u;
- struct {
- uint64_t total : 18;
- uint64_t rsvd_18 : 1;
- uint64_t df : 1;
- uint64_t aura : 20;
- uint64_t sizem1 : 3;
- uint64_t pnc : 1;
- uint64_t sq : 20;
- };
-};
-
-RTE_STD_C11
-union nix_send_hdr_w1_u {
- uint64_t u;
- struct {
- uint64_t ol3ptr : 8;
- uint64_t ol4ptr : 8;
- uint64_t il3ptr : 8;
- uint64_t il4ptr : 8;
- uint64_t ol3type : 4;
- uint64_t ol4type : 4;
- uint64_t il3type : 4;
- uint64_t il4type : 4;
- uint64_t sqe_id : 16;
- };
-};
-
-struct nix_send_hdr_s {
- union nix_send_hdr_w0_u w0;
- union nix_send_hdr_w1_u w1;
-};
-
-/* NIX send immediate sub descriptor structure */
-struct nix_send_imm_s {
- uint64_t size : 16;
- uint64_t apad : 3;
- uint64_t rsvd_59_19 : 41;
- uint64_t subdc : 4;
-};
-
-/* NIX send jump sub descriptor structure */
-struct nix_send_jump_s {
- uint64_t sizem1 : 7;
- uint64_t rsvd_13_7 : 7;
- uint64_t ld_type : 2;
- uint64_t aura : 20;
- uint64_t rsvd_58_36 : 23;
- uint64_t f : 1;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send memory sub descriptor structure */
-struct nix_send_mem_s {
- uint64_t offset : 16;
- uint64_t rsvd_52_16 : 37;
- uint64_t wmem : 1;
- uint64_t dsz : 2;
- uint64_t alg : 4;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send scatter/gather sub descriptor structure */
-RTE_STD_C11
-union nix_send_sg_s {
- uint64_t u;
- struct {
- uint64_t seg1_size : 16;
- uint64_t seg2_size : 16;
- uint64_t seg3_size : 16;
- uint64_t segs : 2;
- uint64_t rsvd_54_50 : 5;
- uint64_t i1 : 1;
- uint64_t i2 : 1;
- uint64_t i3 : 1;
- uint64_t ld_type : 2;
- uint64_t subdc : 4;
- };
-};
-
-/* NIX send work sub descriptor structure */
-struct nix_send_work_s {
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t rsvd_59_44 : 16;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX sq context hardware structure */
-struct nix_sq_ctx_hw_s {
- uint64_t ena : 1;
- uint64_t substream : 20;
- uint64_t max_sqe_size : 2;
- uint64_t sqe_way_mask : 16;
- uint64_t sqb_aura : 20;
- uint64_t gbl_rsvd1 : 5;
- uint64_t cq_id : 20;
- uint64_t cq_ena : 1;
- uint64_t qint_idx : 6;
- uint64_t gbl_rsvd2 : 1;
- uint64_t sq_int : 8;
- uint64_t sq_int_ena : 8;
- uint64_t xoff : 1;
- uint64_t sqe_stype : 2;
- uint64_t gbl_rsvd : 17;
- uint64_t head_sqb : 64;/* W2 */
- uint64_t head_offset : 6;
- uint64_t sqb_dequeue_count : 16;
- uint64_t default_chan : 12;
- uint64_t sdp_mcast : 1;
- uint64_t sso_ena : 1;
- uint64_t dse_rsvd1 : 28;
- uint64_t sqb_enqueue_count : 16;
- uint64_t tail_offset : 6;
- uint64_t lmt_dis : 1;
- uint64_t smq_rr_quantum : 24;
- uint64_t dnq_rsvd1 : 17;
- uint64_t tail_sqb : 64;/* W5 */
- uint64_t next_sqb : 64;/* W6 */
- uint64_t mnq_dis : 1;
- uint64_t smq : 9;
- uint64_t smq_pend : 1;
- uint64_t smq_next_sq : 20;
- uint64_t smq_next_sq_vld : 1;
- uint64_t scm1_rsvd2 : 32;
- uint64_t smenq_sqb : 64;/* W8 */
- uint64_t smenq_offset : 6;
- uint64_t cq_limit : 8;
- uint64_t smq_rr_count : 25;
- uint64_t scm_lso_rem : 18;
- uint64_t scm_dq_rsvd0 : 7;
- uint64_t smq_lso_segnum : 8;
- uint64_t vfi_lso_total : 18;
- uint64_t vfi_lso_sizem1 : 3;
- uint64_t vfi_lso_sb : 8;
- uint64_t vfi_lso_mps : 14;
- uint64_t vfi_lso_vlan0_ins_ena : 1;
- uint64_t vfi_lso_vlan1_ins_ena : 1;
- uint64_t vfi_lso_vld : 1;
- uint64_t smenq_next_sqb_vld : 1;
- uint64_t scm_dq_rsvd1 : 9;
- uint64_t smenq_next_sqb : 64;/* W11 */
- uint64_t seb_rsvd1 : 64;/* W12 */
- uint64_t drop_pkts : 48;
- uint64_t drop_octs_lsw : 16;
- uint64_t drop_octs_msw : 32;
- uint64_t pkts_lsw : 32;
- uint64_t pkts_msw : 16;
- uint64_t octs : 48;
-};
-
-/* NIX send queue context structure */
-struct nix_sq_ctx_s {
- uint64_t ena : 1;
- uint64_t qint_idx : 6;
- uint64_t substream : 20;
- uint64_t sdp_mcast : 1;
- uint64_t cq : 20;
- uint64_t sqe_way_mask : 16;
- uint64_t smq : 9;
- uint64_t cq_ena : 1;
- uint64_t xoff : 1;
- uint64_t sso_ena : 1;
- uint64_t smq_rr_quantum : 24;
- uint64_t default_chan : 12;
- uint64_t sqb_count : 16;
- uint64_t smq_rr_count : 25;
- uint64_t sqb_aura : 20;
- uint64_t sq_int : 8;
- uint64_t sq_int_ena : 8;
- uint64_t sqe_stype : 2;
- uint64_t rsvd_191 : 1;
- uint64_t max_sqe_size : 2;
- uint64_t cq_limit : 8;
- uint64_t lmt_dis : 1;
- uint64_t mnq_dis : 1;
- uint64_t smq_next_sq : 20;
- uint64_t smq_lso_segnum : 8;
- uint64_t tail_offset : 6;
- uint64_t smenq_offset : 6;
- uint64_t head_offset : 6;
- uint64_t smenq_next_sqb_vld : 1;
- uint64_t smq_pend : 1;
- uint64_t smq_next_sq_vld : 1;
- uint64_t rsvd_255_253 : 3;
- uint64_t next_sqb : 64;/* W4 */
- uint64_t tail_sqb : 64;/* W5 */
- uint64_t smenq_sqb : 64;/* W6 */
- uint64_t smenq_next_sqb : 64;/* W7 */
- uint64_t head_sqb : 64;/* W8 */
- uint64_t rsvd_583_576 : 8;
- uint64_t vfi_lso_total : 18;
- uint64_t vfi_lso_sizem1 : 3;
- uint64_t vfi_lso_sb : 8;
- uint64_t vfi_lso_mps : 14;
- uint64_t vfi_lso_vlan0_ins_ena : 1;
- uint64_t vfi_lso_vlan1_ins_ena : 1;
- uint64_t vfi_lso_vld : 1;
- uint64_t rsvd_639_630 : 10;
- uint64_t scm_lso_rem : 18;
- uint64_t rsvd_703_658 : 46;
- uint64_t octs : 48;
- uint64_t rsvd_767_752 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_831_816 : 16;
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t drop_octs : 48;
- uint64_t rsvd_959_944 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_1023_1008 : 16;
-};
-
-/* NIX transmit action structure */
-struct nix_tx_action_s {
- uint64_t op : 4;
- uint64_t rsvd_11_4 : 8;
- uint64_t index : 20;
- uint64_t match_id : 16;
- uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX transmit vtag action structure */
-struct nix_tx_vtag_action_s {
- uint64_t vtag0_relptr : 8;
- uint64_t vtag0_lid : 3;
- uint64_t rsvd_11 : 1;
- uint64_t vtag0_op : 2;
- uint64_t rsvd_15_14 : 2;
- uint64_t vtag0_def : 10;
- uint64_t rsvd_31_26 : 6;
- uint64_t vtag1_relptr : 8;
- uint64_t vtag1_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t vtag1_op : 2;
- uint64_t rsvd_47_46 : 2;
- uint64_t vtag1_def : 10;
- uint64_t rsvd_63_58 : 6;
-};
-
-/* NIX work queue entry header structure */
-struct nix_wqe_hdr_s {
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t node : 2;
- uint64_t q : 14;
- uint64_t wqe_type : 4;
-};
-
-/* NIX Rx flow key algorithm field structure */
-struct nix_rx_flowkey_alg {
- uint64_t key_offset :6;
- uint64_t ln_mask :1;
- uint64_t fn_mask :1;
- uint64_t hdr_offset :8;
- uint64_t bytesm1 :5;
- uint64_t lid :3;
- uint64_t reserved_24_24 :1;
- uint64_t ena :1;
- uint64_t sel_chan :1;
- uint64_t ltype_mask :4;
- uint64_t ltype_match :4;
- uint64_t reserved_35_63 :29;
-};
-
-/* NIX LSO format field structure */
-struct nix_lso_format {
- uint64_t offset : 8;
- uint64_t layer : 2;
- uint64_t rsvd_10_11 : 2;
- uint64_t sizem1 : 2;
- uint64_t rsvd_14_15 : 2;
- uint64_t alg : 3;
- uint64_t rsvd_19_63 : 45;
-};
-
-#define NIX_LSO_FIELD_MAX (8)
-#define NIX_LSO_FIELD_ALG_MASK GENMASK(18, 16)
-#define NIX_LSO_FIELD_SZ_MASK GENMASK(13, 12)
-#define NIX_LSO_FIELD_LY_MASK GENMASK(9, 8)
-#define NIX_LSO_FIELD_OFF_MASK GENMASK(7, 0)
-
-#define NIX_LSO_FIELD_MASK \
- (NIX_LSO_FIELD_OFF_MASK | \
- NIX_LSO_FIELD_LY_MASK | \
- NIX_LSO_FIELD_SZ_MASK | \
- NIX_LSO_FIELD_ALG_MASK)
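``GENMASK`` itself is not defined in this header; the masks above follow the
usual kernel-style convention of a contiguous bit range ``[h:l]``. One common
definition, shown here as an assumption rather than the driver's own:

.. code-block:: c

   /* Assumed helper: set bits h..l inclusive, with 63 >= h >= l >= 0. */
   #define GENMASK(h, l) (((~0ull) >> (63 - (h))) & ((~0ull) << (l)))

   /* e.g. NIX_LSO_FIELD_ALG_MASK == GENMASK(18, 16) == 0x70000 */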
-
-#endif /* __OTX2_NIX_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_npa.h b/drivers/common/octeontx2/hw/otx2_npa.h
deleted file mode 100644
index 2224216c96..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npa.h
+++ /dev/null
@@ -1,305 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPA_HW_H__
-#define __OTX2_NPA_HW_H__
-
-/* Register offsets */
-
-#define NPA_AF_BLK_RST (0x0ull)
-#define NPA_AF_CONST (0x10ull)
-#define NPA_AF_CONST1 (0x18ull)
-#define NPA_AF_LF_RST (0x20ull)
-#define NPA_AF_GEN_CFG (0x30ull)
-#define NPA_AF_NDC_CFG (0x40ull)
-#define NPA_AF_NDC_SYNC (0x50ull)
-#define NPA_AF_INP_CTL (0xd0ull)
-#define NPA_AF_ACTIVE_CYCLES_PC (0xf0ull)
-#define NPA_AF_AVG_DELAY (0x100ull)
-#define NPA_AF_GEN_INT (0x140ull)
-#define NPA_AF_GEN_INT_W1S (0x148ull)
-#define NPA_AF_GEN_INT_ENA_W1S (0x150ull)
-#define NPA_AF_GEN_INT_ENA_W1C (0x158ull)
-#define NPA_AF_RVU_INT (0x160ull)
-#define NPA_AF_RVU_INT_W1S (0x168ull)
-#define NPA_AF_RVU_INT_ENA_W1S (0x170ull)
-#define NPA_AF_RVU_INT_ENA_W1C (0x178ull)
-#define NPA_AF_ERR_INT (0x180ull)
-#define NPA_AF_ERR_INT_W1S (0x188ull)
-#define NPA_AF_ERR_INT_ENA_W1S (0x190ull)
-#define NPA_AF_ERR_INT_ENA_W1C (0x198ull)
-#define NPA_AF_RAS (0x1a0ull)
-#define NPA_AF_RAS_W1S (0x1a8ull)
-#define NPA_AF_RAS_ENA_W1S (0x1b0ull)
-#define NPA_AF_RAS_ENA_W1C (0x1b8ull)
-#define NPA_AF_AQ_CFG (0x600ull)
-#define NPA_AF_AQ_BASE (0x610ull)
-#define NPA_AF_AQ_STATUS (0x620ull)
-#define NPA_AF_AQ_DOOR (0x630ull)
-#define NPA_AF_AQ_DONE_WAIT (0x640ull)
-#define NPA_AF_AQ_DONE (0x650ull)
-#define NPA_AF_AQ_DONE_ACK (0x660ull)
-#define NPA_AF_AQ_DONE_TIMER (0x670ull)
-#define NPA_AF_AQ_DONE_INT (0x680ull)
-#define NPA_AF_AQ_DONE_ENA_W1S (0x690ull)
-#define NPA_AF_AQ_DONE_ENA_W1C (0x698ull)
-#define NPA_AF_LFX_AURAS_CFG(a) (0x4000ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_LOC_AURAS_BASE(a) (0x4010ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 18)
-#define NPA_PRIV_AF_INT_CFG (0x10000ull)
-#define NPA_PRIV_LFX_CFG(a) (0x10010ull | (uint64_t)(a) << 8)
-#define NPA_PRIV_LFX_INT_CFG(a) (0x10020ull | (uint64_t)(a) << 8)
-#define NPA_AF_RVU_LF_CFG_DEBUG (0x10030ull)
-#define NPA_AF_DTX_FILTER_CTL (0x10040ull)
-
-#define NPA_LF_AURA_OP_ALLOCX(a) (0x10ull | (uint64_t)(a) << 3)
-#define NPA_LF_AURA_OP_FREE0 (0x20ull)
-#define NPA_LF_AURA_OP_FREE1 (0x28ull)
-#define NPA_LF_AURA_OP_CNT (0x30ull)
-#define NPA_LF_AURA_OP_LIMIT (0x50ull)
-#define NPA_LF_AURA_OP_INT (0x60ull)
-#define NPA_LF_AURA_OP_THRESH (0x70ull)
-#define NPA_LF_POOL_OP_PC (0x100ull)
-#define NPA_LF_POOL_OP_AVAILABLE (0x110ull)
-#define NPA_LF_POOL_OP_PTR_START0 (0x120ull)
-#define NPA_LF_POOL_OP_PTR_START1 (0x128ull)
-#define NPA_LF_POOL_OP_PTR_END0 (0x130ull)
-#define NPA_LF_POOL_OP_PTR_END1 (0x138ull)
-#define NPA_LF_POOL_OP_INT (0x160ull)
-#define NPA_LF_POOL_OP_THRESH (0x170ull)
-#define NPA_LF_ERR_INT (0x200ull)
-#define NPA_LF_ERR_INT_W1S (0x208ull)
-#define NPA_LF_ERR_INT_ENA_W1C (0x210ull)
-#define NPA_LF_ERR_INT_ENA_W1S (0x218ull)
-#define NPA_LF_RAS (0x220ull)
-#define NPA_LF_RAS_W1S (0x228ull)
-#define NPA_LF_RAS_ENA_W1C (0x230ull)
-#define NPA_LF_RAS_ENA_W1S (0x238ull)
-#define NPA_LF_QINTX_CNT(a) (0x300ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_INT(a) (0x310ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1S(a) (0x320ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1C(a) (0x330ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NPA_AQ_COMP_NOTDONE (0x0ull)
-#define NPA_AQ_COMP_GOOD (0x1ull)
-#define NPA_AQ_COMP_SWERR (0x2ull)
-#define NPA_AQ_COMP_CTX_POISON (0x3ull)
-#define NPA_AQ_COMP_CTX_FAULT (0x4ull)
-#define NPA_AQ_COMP_LOCKERR (0x5ull)
-
-#define NPA_AF_INT_VEC_RVU (0x0ull)
-#define NPA_AF_INT_VEC_GEN (0x1ull)
-#define NPA_AF_INT_VEC_AQ_DONE (0x2ull)
-#define NPA_AF_INT_VEC_AF_ERR (0x3ull)
-#define NPA_AF_INT_VEC_POISON (0x4ull)
-
-#define NPA_AQ_INSTOP_NOP (0x0ull)
-#define NPA_AQ_INSTOP_INIT (0x1ull)
-#define NPA_AQ_INSTOP_WRITE (0x2ull)
-#define NPA_AQ_INSTOP_READ (0x3ull)
-#define NPA_AQ_INSTOP_LOCK (0x4ull)
-#define NPA_AQ_INSTOP_UNLOCK (0x5ull)
-
-#define NPA_AQ_CTYPE_AURA (0x0ull)
-#define NPA_AQ_CTYPE_POOL (0x1ull)
-
-#define NPA_BPINTF_NIX0_RX (0x0ull)
-#define NPA_BPINTF_NIX1_RX (0x1ull)
-
-#define NPA_AURA_ERR_INT_AURA_FREE_UNDER (0x0ull)
-#define NPA_AURA_ERR_INT_AURA_ADD_OVER (0x1ull)
-#define NPA_AURA_ERR_INT_AURA_ADD_UNDER (0x2ull)
-#define NPA_AURA_ERR_INT_POOL_DIS (0x3ull)
-#define NPA_AURA_ERR_INT_R4 (0x4ull)
-#define NPA_AURA_ERR_INT_R5 (0x5ull)
-#define NPA_AURA_ERR_INT_R6 (0x6ull)
-#define NPA_AURA_ERR_INT_R7 (0x7ull)
-
-#define NPA_LF_INT_VEC_ERR_INT (0x40ull)
-#define NPA_LF_INT_VEC_POISON (0x41ull)
-#define NPA_LF_INT_VEC_QINT_END (0x3full)
-#define NPA_LF_INT_VEC_QINT_START (0x0ull)
-
-#define NPA_INPQ_SSO (0x4ull)
-#define NPA_INPQ_TIM (0x5ull)
-#define NPA_INPQ_DPI (0x6ull)
-#define NPA_INPQ_AURA_OP (0xeull)
-#define NPA_INPQ_INTERNAL_RSV (0xfull)
-#define NPA_INPQ_NIX0_RX (0x0ull)
-#define NPA_INPQ_NIX1_RX (0x2ull)
-#define NPA_INPQ_NIX0_TX (0x1ull)
-#define NPA_INPQ_NIX1_TX (0x3ull)
-#define NPA_INPQ_R_END (0xdull)
-#define NPA_INPQ_R_START (0x7ull)
-
-#define NPA_POOL_ERR_INT_OVFLS (0x0ull)
-#define NPA_POOL_ERR_INT_RANGE (0x1ull)
-#define NPA_POOL_ERR_INT_PERR (0x2ull)
-#define NPA_POOL_ERR_INT_R3 (0x3ull)
-#define NPA_POOL_ERR_INT_R4 (0x4ull)
-#define NPA_POOL_ERR_INT_R5 (0x5ull)
-#define NPA_POOL_ERR_INT_R6 (0x6ull)
-#define NPA_POOL_ERR_INT_R7 (0x7ull)
-
-#define NPA_NDC0_PORT_AURA0 (0x0ull)
-#define NPA_NDC0_PORT_AURA1 (0x1ull)
-#define NPA_NDC0_PORT_POOL0 (0x2ull)
-#define NPA_NDC0_PORT_POOL1 (0x3ull)
-#define NPA_NDC0_PORT_STACK0 (0x4ull)
-#define NPA_NDC0_PORT_STACK1 (0x5ull)
-
-#define NPA_LF_ERR_INT_AURA_DIS (0x0ull)
-#define NPA_LF_ERR_INT_AURA_OOR (0x1ull)
-#define NPA_LF_ERR_INT_AURA_FAULT (0xcull)
-#define NPA_LF_ERR_INT_POOL_FAULT (0xdull)
-#define NPA_LF_ERR_INT_STACK_FAULT (0xeull)
-#define NPA_LF_ERR_INT_QINT_FAULT (0xfull)
-
-/* Structures definitions */
-
-/* NPA admin queue instruction structure */
-struct npa_aq_inst_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t lf : 9;
- uint64_t rsvd_23_17 : 7;
- uint64_t cindex : 20;
- uint64_t rsvd_62_44 : 19;
- uint64_t doneint : 1;
- uint64_t res_addr : 64; /* W1 */
-};
-
-/* NPA admin queue result structure */
-struct npa_aq_res_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t compcode : 8;
- uint64_t doneint : 1;
- uint64_t rsvd_63_17 : 47;
- uint64_t rsvd_127_64 : 64; /* W1 */
-};
-
-/* NPA aura operation write data structure */
-struct npa_aura_op_wdata_s {
- uint64_t aura : 20;
- uint64_t rsvd_62_20 : 43;
- uint64_t drop : 1;
-};
-
-/* NPA aura context structure */
-struct npa_aura_s {
- uint64_t pool_addr : 64;/* W0 */
- uint64_t ena : 1;
- uint64_t rsvd_66_65 : 2;
- uint64_t pool_caching : 1;
- uint64_t pool_way_mask : 16;
- uint64_t avg_con : 9;
- uint64_t rsvd_93 : 1;
- uint64_t pool_drop_ena : 1;
- uint64_t aura_drop_ena : 1;
- uint64_t bp_ena : 2;
- uint64_t rsvd_103_98 : 6;
- uint64_t aura_drop : 8;
- uint64_t shift : 6;
- uint64_t rsvd_119_118 : 2;
- uint64_t avg_level : 8;
- uint64_t count : 36;
- uint64_t rsvd_167_164 : 4;
- uint64_t nix0_bpid : 9;
- uint64_t rsvd_179_177 : 3;
- uint64_t nix1_bpid : 9;
- uint64_t rsvd_191_189 : 3;
- uint64_t limit : 36;
- uint64_t rsvd_231_228 : 4;
- uint64_t bp : 8;
- uint64_t rsvd_243_240 : 4;
- uint64_t fc_ena : 1;
- uint64_t fc_up_crossing : 1;
- uint64_t fc_stype : 2;
- uint64_t fc_hyst_bits : 4;
- uint64_t rsvd_255_252 : 4;
- uint64_t fc_addr : 64;/* W4 */
- uint64_t pool_drop : 8;
- uint64_t update_time : 16;
- uint64_t err_int : 8;
- uint64_t err_int_ena : 8;
- uint64_t thresh_int : 1;
- uint64_t thresh_int_ena : 1;
- uint64_t thresh_up : 1;
- uint64_t rsvd_363 : 1;
- uint64_t thresh_qint_idx : 7;
- uint64_t rsvd_371 : 1;
- uint64_t err_qint_idx : 7;
- uint64_t rsvd_383_379 : 5;
- uint64_t thresh : 36;
- uint64_t rsvd_447_420 : 28;
- uint64_t rsvd_511_448 : 64;/* W7 */
-};
-
-/* NPA pool context structure */
-struct npa_pool_s {
- uint64_t stack_base : 64;/* W0 */
- uint64_t ena : 1;
- uint64_t nat_align : 1;
- uint64_t rsvd_67_66 : 2;
- uint64_t stack_caching : 1;
- uint64_t rsvd_71_69 : 3;
- uint64_t stack_way_mask : 16;
- uint64_t buf_offset : 12;
- uint64_t rsvd_103_100 : 4;
- uint64_t buf_size : 11;
- uint64_t rsvd_127_115 : 13;
- uint64_t stack_max_pages : 32;
- uint64_t stack_pages : 32;
- uint64_t op_pc : 48;
- uint64_t rsvd_255_240 : 16;
- uint64_t stack_offset : 4;
- uint64_t rsvd_263_260 : 4;
- uint64_t shift : 6;
- uint64_t rsvd_271_270 : 2;
- uint64_t avg_level : 8;
- uint64_t avg_con : 9;
- uint64_t fc_ena : 1;
- uint64_t fc_stype : 2;
- uint64_t fc_hyst_bits : 4;
- uint64_t fc_up_crossing : 1;
- uint64_t rsvd_299_297 : 3;
- uint64_t update_time : 16;
- uint64_t rsvd_319_316 : 4;
- uint64_t fc_addr : 64;/* W5 */
- uint64_t ptr_start : 64;/* W6 */
- uint64_t ptr_end : 64;/* W7 */
- uint64_t rsvd_535_512 : 24;
- uint64_t err_int : 8;
- uint64_t err_int_ena : 8;
- uint64_t thresh_int : 1;
- uint64_t thresh_int_ena : 1;
- uint64_t thresh_up : 1;
- uint64_t rsvd_555 : 1;
- uint64_t thresh_qint_idx : 7;
- uint64_t rsvd_563 : 1;
- uint64_t err_qint_idx : 7;
- uint64_t rsvd_575_571 : 5;
- uint64_t thresh : 36;
- uint64_t rsvd_639_612 : 28;
- uint64_t rsvd_703_640 : 64;/* W10 */
- uint64_t rsvd_767_704 : 64;/* W11 */
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NPA queue interrupt context hardware structure */
-struct npa_qint_hw_s {
- uint32_t count : 22;
- uint32_t rsvd_30_22 : 9;
- uint32_t ena : 1;
-};
-
-#endif /* __OTX2_NPA_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_npc.h b/drivers/common/octeontx2/hw/otx2_npc.h
deleted file mode 100644
index b4e3c1eedc..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npc.h
+++ /dev/null
@@ -1,503 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPC_HW_H__
-#define __OTX2_NPC_HW_H__
-
-/* Register offsets */
-
-#define NPC_AF_CFG (0x0ull)
-#define NPC_AF_ACTIVE_PC (0x10ull)
-#define NPC_AF_CONST (0x20ull)
-#define NPC_AF_CONST1 (0x30ull)
-#define NPC_AF_BLK_RST (0x40ull)
-#define NPC_AF_MCAM_SCRUB_CTL (0xa0ull)
-#define NPC_AF_KCAM_SCRUB_CTL (0xb0ull)
-#define NPC_AF_KPUX_CFG(a) \
- (0x500ull | (uint64_t)(a) << 3)
-#define NPC_AF_PCK_CFG (0x600ull)
-#define NPC_AF_PCK_DEF_OL2 (0x610ull)
-#define NPC_AF_PCK_DEF_OIP4 (0x620ull)
-#define NPC_AF_PCK_DEF_OIP6 (0x630ull)
-#define NPC_AF_PCK_DEF_IIP4 (0x640ull)
-#define NPC_AF_KEX_LDATAX_FLAGS_CFG(a) \
- (0x800ull | (uint64_t)(a) << 3)
-#define NPC_AF_INTFX_KEX_CFG(a) \
- (0x1010ull | (uint64_t)(a) << 8)
-#define NPC_AF_PKINDX_ACTION0(a) \
- (0x80000ull | (uint64_t)(a) << 6)
-#define NPC_AF_PKINDX_ACTION1(a) \
- (0x80008ull | (uint64_t)(a) << 6)
-#define NPC_AF_PKINDX_CPI_DEFX(a, b) \
- (0x80020ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
-#define NPC_AF_CHLEN90B_PKIND (0x3bull)
-#define NPC_AF_KPUX_ENTRYX_CAMX(a, b, c) \
- (0x100000ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_KPUX_ENTRYX_ACTION0(a, b) \
- (0x100020ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
-#define NPC_AF_KPUX_ENTRYX_ACTION1(a, b) \
- (0x100028ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
-#define NPC_AF_KPUX_ENTRY_DISX(a, b) \
- (0x180000ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
-#define NPC_AF_CPIX_CFG(a) \
- (0x200000ull | (uint64_t)(a) << 3)
-#define NPC_AF_INTFX_LIDX_LTX_LDX_CFG(a, b, c, d) \
- (0x900000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
- (uint64_t)(c) << 5 | (uint64_t)(d) << 3)
-#define NPC_AF_INTFX_LDATAX_FLAGSX_CFG(a, b, c) \
- (0x980000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_INTF(a, b, c) \
- (0x1000000ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \
- (0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_W1(a, b, c) \
- (0x1000020ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CFG(a, b) \
- (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MCAMEX_BANKX_STAT_ACT(a, b) \
- (0x1880000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MATCH_STATX(a) \
- (0x1880008ull | (uint64_t)(a) << 8)
-#define NPC_AF_INTFX_MISS_STAT_ACT(a) \
- (0x1880040ull + (uint64_t)(a) * 0x8)
-#define NPC_AF_MCAMEX_BANKX_ACTION(a, b) \
- (0x1900000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MCAMEX_BANKX_TAG_ACT(a, b) \
- (0x1900008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_INTFX_MISS_ACT(a) \
- (0x1a00000ull | (uint64_t)(a) << 4)
-#define NPC_AF_INTFX_MISS_TAG_ACT(a) \
- (0x1b00008ull | (uint64_t)(a) << 4)
-#define NPC_AF_MCAM_BANKX_HITX(a, b) \
- (0x1c80000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_LKUP_CTL (0x2000000ull)
-#define NPC_AF_LKUP_DATAX(a) \
- (0x2000200ull | (uint64_t)(a) << 4)
-#define NPC_AF_LKUP_RESULTX(a) \
- (0x2000400ull | (uint64_t)(a) << 4)
-#define NPC_AF_INTFX_STAT(a) \
- (0x2000800ull | (uint64_t)(a) << 4)
-#define NPC_AF_DBG_CTL (0x3000000ull)
-#define NPC_AF_DBG_STATUS (0x3000010ull)
-#define NPC_AF_KPUX_DBG(a) \
- (0x3000020ull | (uint64_t)(a) << 8)
-#define NPC_AF_IKPU_ERR_CTL (0x3000080ull)
-#define NPC_AF_KPUX_ERR_CTL(a) \
- (0x30000a0ull | (uint64_t)(a) << 8)
-#define NPC_AF_MCAM_DBG (0x3001000ull)
-#define NPC_AF_DBG_DATAX(a) \
- (0x3001400ull | (uint64_t)(a) << 4)
-#define NPC_AF_DBG_RESULTX(a) \
- (0x3001800ull | (uint64_t)(a) << 4)
-
-
-/* Enum offsets */
-
-#define NPC_INTF_NIX0_RX (0x0ull)
-#define NPC_INTF_NIX0_TX (0x1ull)
-
-#define NPC_LKUPOP_PKT (0x0ull)
-#define NPC_LKUPOP_KEY (0x1ull)
-
-#define NPC_MCAM_KEY_X1 (0x0ull)
-#define NPC_MCAM_KEY_X2 (0x1ull)
-#define NPC_MCAM_KEY_X4 (0x2ull)
-
-enum NPC_ERRLEV_E {
- NPC_ERRLEV_RE = 0,
- NPC_ERRLEV_LA = 1,
- NPC_ERRLEV_LB = 2,
- NPC_ERRLEV_LC = 3,
- NPC_ERRLEV_LD = 4,
- NPC_ERRLEV_LE = 5,
- NPC_ERRLEV_LF = 6,
- NPC_ERRLEV_LG = 7,
- NPC_ERRLEV_LH = 8,
- NPC_ERRLEV_R9 = 9,
- NPC_ERRLEV_R10 = 10,
- NPC_ERRLEV_R11 = 11,
- NPC_ERRLEV_R12 = 12,
- NPC_ERRLEV_R13 = 13,
- NPC_ERRLEV_R14 = 14,
- NPC_ERRLEV_NIX = 15,
- NPC_ERRLEV_ENUM_LAST = 16,
-};
-
-enum npc_kpu_err_code {
- NPC_EC_NOERR = 0, /* has to be zero */
- NPC_EC_UNK,
- NPC_EC_IH_LENGTH,
- NPC_EC_EDSA_UNK,
- NPC_EC_L2_K1,
- NPC_EC_L2_K2,
- NPC_EC_L2_K3,
- NPC_EC_L2_K3_ETYPE_UNK,
- NPC_EC_L2_K4,
- NPC_EC_MPLS_2MANY,
- NPC_EC_MPLS_UNK,
- NPC_EC_NSH_UNK,
- NPC_EC_IP_TTL_0,
- NPC_EC_IP_FRAG_OFFSET_1,
- NPC_EC_IP_VER,
- NPC_EC_IP6_HOP_0,
- NPC_EC_IP6_VER,
- NPC_EC_TCP_FLAGS_FIN_ONLY,
- NPC_EC_TCP_FLAGS_ZERO,
- NPC_EC_TCP_FLAGS_RST_FIN,
- NPC_EC_TCP_FLAGS_URG_SYN,
- NPC_EC_TCP_FLAGS_RST_SYN,
- NPC_EC_TCP_FLAGS_SYN_FIN,
- NPC_EC_VXLAN,
- NPC_EC_NVGRE,
- NPC_EC_GRE,
- NPC_EC_GRE_VER1,
- NPC_EC_L4,
- NPC_EC_OIP4_CSUM,
- NPC_EC_IIP4_CSUM,
- NPC_EC_LAST /* has to be the last item */
-};
-
-enum NPC_LID_E {
- NPC_LID_LA = 0,
- NPC_LID_LB,
- NPC_LID_LC,
- NPC_LID_LD,
- NPC_LID_LE,
- NPC_LID_LF,
- NPC_LID_LG,
- NPC_LID_LH,
-};
-
-#define NPC_LT_NA 0
-
-enum npc_kpu_la_ltype {
- NPC_LT_LA_8023 = 1,
- NPC_LT_LA_ETHER,
- NPC_LT_LA_IH_NIX_ETHER,
- NPC_LT_LA_IH_8_ETHER,
- NPC_LT_LA_IH_4_ETHER,
- NPC_LT_LA_IH_2_ETHER,
- NPC_LT_LA_HIGIG2_ETHER,
- NPC_LT_LA_IH_NIX_HIGIG2_ETHER,
- NPC_LT_LA_CUSTOM_L2_90B_ETHER,
- NPC_LT_LA_CPT_HDR,
- NPC_LT_LA_CUSTOM_L2_24B_ETHER,
- NPC_LT_LA_CUSTOM0 = 0xE,
- NPC_LT_LA_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lb_ltype {
- NPC_LT_LB_ETAG = 1,
- NPC_LT_LB_CTAG,
- NPC_LT_LB_STAG_QINQ,
- NPC_LT_LB_BTAG,
- NPC_LT_LB_ITAG,
- NPC_LT_LB_DSA,
- NPC_LT_LB_DSA_VLAN,
- NPC_LT_LB_EDSA,
- NPC_LT_LB_EDSA_VLAN,
- NPC_LT_LB_EXDSA,
- NPC_LT_LB_EXDSA_VLAN,
- NPC_LT_LB_FDSA,
- NPC_LT_LB_VLAN_EXDSA,
- NPC_LT_LB_CUSTOM0 = 0xE,
- NPC_LT_LB_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lc_ltype {
- NPC_LT_LC_PTP = 1,
- NPC_LT_LC_IP,
- NPC_LT_LC_IP_OPT,
- NPC_LT_LC_IP6,
- NPC_LT_LC_IP6_EXT,
- NPC_LT_LC_ARP,
- NPC_LT_LC_RARP,
- NPC_LT_LC_MPLS,
- NPC_LT_LC_NSH,
- NPC_LT_LC_FCOE,
- NPC_LT_LC_NGIO,
- NPC_LT_LC_CUSTOM0 = 0xE,
- NPC_LT_LC_CUSTOM1 = 0xF,
-};
-
-/* Don't modify Ltypes up to SCTP, otherwise it will
- * affect flow tag calculation and thus RSS.
- */
-enum npc_kpu_ld_ltype {
- NPC_LT_LD_TCP = 1,
- NPC_LT_LD_UDP,
- NPC_LT_LD_ICMP,
- NPC_LT_LD_SCTP,
- NPC_LT_LD_ICMP6,
- NPC_LT_LD_CUSTOM0,
- NPC_LT_LD_CUSTOM1,
- NPC_LT_LD_IGMP = 8,
- NPC_LT_LD_AH,
- NPC_LT_LD_GRE,
- NPC_LT_LD_NVGRE,
- NPC_LT_LD_NSH,
- NPC_LT_LD_TU_MPLS_IN_NSH,
- NPC_LT_LD_TU_MPLS_IN_IP,
-};
-
-enum npc_kpu_le_ltype {
- NPC_LT_LE_VXLAN = 1,
- NPC_LT_LE_GENEVE,
- NPC_LT_LE_ESP,
- NPC_LT_LE_GTPU = 4,
- NPC_LT_LE_VXLANGPE,
- NPC_LT_LE_GTPC,
- NPC_LT_LE_NSH,
- NPC_LT_LE_TU_MPLS_IN_GRE,
- NPC_LT_LE_TU_NSH_IN_GRE,
- NPC_LT_LE_TU_MPLS_IN_UDP,
- NPC_LT_LE_CUSTOM0 = 0xE,
- NPC_LT_LE_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lf_ltype {
- NPC_LT_LF_TU_ETHER = 1,
- NPC_LT_LF_TU_PPP,
- NPC_LT_LF_TU_MPLS_IN_VXLANGPE,
- NPC_LT_LF_TU_NSH_IN_VXLANGPE,
- NPC_LT_LF_TU_MPLS_IN_NSH,
- NPC_LT_LF_TU_3RD_NSH,
- NPC_LT_LF_CUSTOM0 = 0xE,
- NPC_LT_LF_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lg_ltype {
- NPC_LT_LG_TU_IP = 1,
- NPC_LT_LG_TU_IP6,
- NPC_LT_LG_TU_ARP,
- NPC_LT_LG_TU_ETHER_IN_NSH,
- NPC_LT_LG_CUSTOM0 = 0xE,
- NPC_LT_LG_CUSTOM1 = 0xF,
-};
-
-/* Don't modify Ltypes up to SCTP, otherwise it will
- * affect flow tag calculation and thus RSS.
- */
-enum npc_kpu_lh_ltype {
- NPC_LT_LH_TU_TCP = 1,
- NPC_LT_LH_TU_UDP,
- NPC_LT_LH_TU_ICMP,
- NPC_LT_LH_TU_SCTP,
- NPC_LT_LH_TU_ICMP6,
- NPC_LT_LH_TU_IGMP = 8,
- NPC_LT_LH_TU_ESP,
- NPC_LT_LH_TU_AH,
- NPC_LT_LH_CUSTOM0 = 0xE,
- NPC_LT_LH_CUSTOM1 = 0xF,
-};
-
-/* Structures definitions */
-struct npc_kpu_profile_cam {
- uint8_t state;
- uint8_t state_mask;
- uint16_t dp0;
- uint16_t dp0_mask;
- uint16_t dp1;
- uint16_t dp1_mask;
- uint16_t dp2;
- uint16_t dp2_mask;
-};
-
-struct npc_kpu_profile_action {
- uint8_t errlev;
- uint8_t errcode;
- uint8_t dp0_offset;
- uint8_t dp1_offset;
- uint8_t dp2_offset;
- uint8_t bypass_count;
- uint8_t parse_done;
- uint8_t next_state;
- uint8_t ptr_advance;
- uint8_t cap_ena;
- uint8_t lid;
- uint8_t ltype;
- uint8_t flags;
- uint8_t offset;
- uint8_t mask;
- uint8_t right;
- uint8_t shift;
-};
-
-struct npc_kpu_profile {
- int cam_entries;
- int action_entries;
- struct npc_kpu_profile_cam *cam;
- struct npc_kpu_profile_action *action;
-};
-
-/* NPC KPU register formats */
-struct npc_kpu_cam {
- uint64_t dp0_data : 16;
- uint64_t dp1_data : 16;
- uint64_t dp2_data : 16;
- uint64_t state : 8;
- uint64_t rsvd_63_56 : 8;
-};
-
-struct npc_kpu_action0 {
- uint64_t var_len_shift : 3;
- uint64_t var_len_right : 1;
- uint64_t var_len_mask : 8;
- uint64_t var_len_offset : 8;
- uint64_t ptr_advance : 8;
- uint64_t capture_flags : 8;
- uint64_t capture_ltype : 4;
- uint64_t capture_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t next_state : 8;
- uint64_t parse_done : 1;
- uint64_t capture_ena : 1;
- uint64_t byp_count : 3;
- uint64_t rsvd_63_57 : 7;
-};
-
-struct npc_kpu_action1 {
- uint64_t dp0_offset : 8;
- uint64_t dp1_offset : 8;
- uint64_t dp2_offset : 8;
- uint64_t errcode : 8;
- uint64_t errlev : 4;
- uint64_t rsvd_63_36 : 28;
-};
-
-struct npc_kpu_pkind_cpi_def {
- uint64_t cpi_base : 10;
- uint64_t rsvd_11_10 : 2;
- uint64_t add_shift : 3;
- uint64_t rsvd_15 : 1;
- uint64_t add_mask : 8;
- uint64_t add_offset : 8;
- uint64_t flags_mask : 8;
- uint64_t flags_match : 8;
- uint64_t ltype_mask : 4;
- uint64_t ltype_match : 4;
- uint64_t lid : 3;
- uint64_t rsvd_62_59 : 4;
- uint64_t ena : 1;
-};
-
-struct nix_rx_action {
- uint64_t op :4;
- uint64_t pf_func :16;
- uint64_t index :20;
- uint64_t match_id :16;
- uint64_t flow_key_alg :5;
- uint64_t rsvd_63_61 :3;
-};
-
-struct nix_tx_action {
- uint64_t op :4;
- uint64_t rsvd_11_4 :8;
- uint64_t index :20;
- uint64_t match_id :16;
- uint64_t rsvd_63_48 :16;
-};
-
-/* NPC layer parse information structure */
-struct npc_layer_info_s {
- uint32_t lptr : 8;
- uint32_t flags : 8;
- uint32_t ltype : 4;
- uint32_t rsvd_31_20 : 12;
-};
-
-/* NPC layer mcam search key extract structure */
-struct npc_layer_kex_s {
- uint16_t flags : 8;
- uint16_t ltype : 4;
- uint16_t rsvd_15_12 : 4;
-};
-
-/* NPC mcam search key x1 structure */
-struct npc_mcam_key_x1_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 48;
- uint64_t rsvd_191_176 : 16;
-};
-
-/* NPC mcam search key x2 structure */
-struct npc_mcam_key_x2_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 64; /* W2 */
- uint64_t kw2 : 64; /* W3 */
- uint64_t kw3 : 32;
- uint64_t rsvd_319_288 : 32;
-};
-
-/* NPC mcam search key x4 structure */
-struct npc_mcam_key_x4_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 64; /* W2 */
- uint64_t kw2 : 64; /* W3 */
- uint64_t kw3 : 64; /* W4 */
- uint64_t kw4 : 64; /* W5 */
- uint64_t kw5 : 64; /* W6 */
- uint64_t kw6 : 64; /* W7 */
-};
-
-/* NPC parse key extract structure */
-struct npc_parse_kex_s {
- uint64_t chan : 12;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t la : 12;
- uint64_t lb : 12;
- uint64_t lc : 12;
- uint64_t ld : 12;
- uint64_t le : 12;
- uint64_t lf : 12;
- uint64_t lg : 12;
- uint64_t lh : 12;
- uint64_t rsvd_127_124 : 4;
-};
-
-/* NPC result structure */
-struct npc_result_s {
- uint64_t intf : 2;
- uint64_t pkind : 6;
- uint64_t chan : 12;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t eoh_ptr : 8;
- uint64_t rsvd_63_44 : 20;
- uint64_t action : 64; /* W1 */
- uint64_t vtag_action : 64; /* W2 */
- uint64_t la : 20;
- uint64_t lb : 20;
- uint64_t lc : 20;
- uint64_t rsvd_255_252 : 4;
- uint64_t ld : 20;
- uint64_t le : 20;
- uint64_t lf : 20;
- uint64_t rsvd_319_316 : 4;
- uint64_t lg : 20;
- uint64_t lh : 20;
- uint64_t rsvd_383_360 : 24;
-};
-
-#endif /* __OTX2_NPC_HW_H__ */
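The NPC register macros removed above all follow one addressing scheme: a fixed block base OR'ed with shifted instance indices. A standalone sketch (hypothetical MCAMEX_BANKX_CFG with the same arithmetic as the macro above) makes a computed address concrete::

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    #define MCAMEX_BANKX_CFG(a, b) \
        (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)

    int main(void)
    {
        /* MCAM entry 5, bank 1: 0x1800000 | 0x500 | 0x10 = 0x1800510 */
        printf("0x%" PRIx64 "\n", MCAMEX_BANKX_CFG(5, 1));
        return 0;
    }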
diff --git a/drivers/common/octeontx2/hw/otx2_ree.h b/drivers/common/octeontx2/hw/otx2_ree.h
deleted file mode 100644
index b7481f125f..0000000000
--- a/drivers/common/octeontx2/hw/otx2_ree.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_REE_HW_H__
-#define __OTX2_REE_HW_H__
-
-/* REE BAR0 */
-#define REE_AF_REEXM_MAX_MATCH (0x80c8)
-
-/* REE BAR02 */
-#define REE_LF_MISC_INT (0x300)
-#define REE_LF_DONE_INT (0x120)
-
-#define REE_AF_QUEX_GMCTL(a) (0x800 | (a) << 3)
-
-#define REE_AF_INT_VEC_RAS (0x0ull)
-#define REE_AF_INT_VEC_RVU (0x1ull)
-#define REE_AF_INT_VEC_QUE_DONE (0x2ull)
-#define REE_AF_INT_VEC_AQ (0x3ull)
-
-/* ENUMS */
-
-#define REE_LF_INT_VEC_QUE_DONE (0x0ull)
-#define REE_LF_INT_VEC_MISC (0x1ull)
-
-#endif /* __OTX2_REE_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_rvu.h b/drivers/common/octeontx2/hw/otx2_rvu.h
deleted file mode 100644
index b98dbcb1cd..0000000000
--- a/drivers/common/octeontx2/hw/otx2_rvu.h
+++ /dev/null
@@ -1,219 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_RVU_HW_H__
-#define __OTX2_RVU_HW_H__
-
-/* Register offsets */
-
-#define RVU_AF_MSIXTR_BASE (0x10ull)
-#define RVU_AF_BLK_RST (0x30ull)
-#define RVU_AF_PF_BAR4_ADDR (0x40ull)
-#define RVU_AF_RAS (0x100ull)
-#define RVU_AF_RAS_W1S (0x108ull)
-#define RVU_AF_RAS_ENA_W1S (0x110ull)
-#define RVU_AF_RAS_ENA_W1C (0x118ull)
-#define RVU_AF_GEN_INT (0x120ull)
-#define RVU_AF_GEN_INT_W1S (0x128ull)
-#define RVU_AF_GEN_INT_ENA_W1S (0x130ull)
-#define RVU_AF_GEN_INT_ENA_W1C (0x138ull)
-#define RVU_AF_AFPFX_MBOXX(a, b) \
- (0x2000ull | (uint64_t)(a) << 4 | (uint64_t)(b) << 3)
-#define RVU_AF_PFME_STATUS (0x2800ull)
-#define RVU_AF_PFTRPEND (0x2810ull)
-#define RVU_AF_PFTRPEND_W1S (0x2820ull)
-#define RVU_AF_PF_RST (0x2840ull)
-#define RVU_AF_HWVF_RST (0x2850ull)
-#define RVU_AF_PFAF_MBOX_INT (0x2880ull)
-#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888ull)
-#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890ull)
-#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898ull)
-#define RVU_AF_PFFLR_INT (0x28a0ull)
-#define RVU_AF_PFFLR_INT_W1S (0x28a8ull)
-#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0ull)
-#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8ull)
-#define RVU_AF_PFME_INT (0x28c0ull)
-#define RVU_AF_PFME_INT_W1S (0x28c8ull)
-#define RVU_AF_PFME_INT_ENA_W1S (0x28d0ull)
-#define RVU_AF_PFME_INT_ENA_W1C (0x28d8ull)
-#define RVU_PRIV_CONST (0x8000000ull)
-#define RVU_PRIV_GEN_CFG (0x8000010ull)
-#define RVU_PRIV_CLK_CFG (0x8000020ull)
-#define RVU_PRIV_ACTIVE_PC (0x8000030ull)
-#define RVU_PRIV_PFX_CFG(a) (0x8000100ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_NIXX_CFG(a, b) \
- (0x8000300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_PFX_NPA_CFG(a) (0x8000310ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_SSO_CFG(a) (0x8000320ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_SSOW_CFG(a) (0x8000330ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_TIM_CFG(a) (0x8000340ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_CPTX_CFG(a, b) \
- (0x8000350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400ull | (uint64_t)(a) << 3)
-#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_NIXX_CFG(a, b) \
- (0x8001300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_HWVFX_NPA_CFG(a) (0x8001310ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_SSO_CFG(a) (0x8001320ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_SSOW_CFG(a) (0x8001330ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_TIM_CFG(a) (0x8001340ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_CPTX_CFG(a, b) \
- (0x8001350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-
-#define RVU_PF_VFX_PFVF_MBOXX(a, b) \
- (0x0ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 3)
-#define RVU_PF_VF_BAR4_ADDR (0x10ull)
-#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_STATUSX(a) (0x800ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFTRPENDX(a) (0x820ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFTRPEND_W1SX(a) (0x840ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INTX(a) (0x880ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8a0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8c0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8e0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INTX(a) (0x900ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_W1SX(a) (0x920ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INTX(a) (0x980ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_W1SX(a) (0x9a0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9c0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9e0ull | (uint64_t)(a) << 3)
-#define RVU_PF_PFAF_MBOXX(a) (0xc00ull | (uint64_t)(a) << 3)
-#define RVU_PF_INT (0xc20ull)
-#define RVU_PF_INT_W1S (0xc28ull)
-#define RVU_PF_INT_ENA_W1S (0xc30ull)
-#define RVU_PF_INT_ENA_W1C (0xc38ull)
-#define RVU_PF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
-#define RVU_PF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
-#define RVU_PF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
-#define RVU_VF_VFPF_MBOXX(a) (0x0ull | (uint64_t)(a) << 3)
-#define RVU_VF_INT (0x20ull)
-#define RVU_VF_INT_W1S (0x28ull)
-#define RVU_VF_INT_ENA_W1S (0x30ull)
-#define RVU_VF_INT_ENA_W1C (0x38ull)
-#define RVU_VF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
-#define RVU_VF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
-#define RVU_VF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
-#define RVU_VF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
-
-
-/* Enum offsets */
-
-#define RVU_BAR_RVU_PF_END_BAR0 (0x84f000000000ull)
-#define RVU_BAR_RVU_PF_START_BAR0 (0x840000000000ull)
-#define RVU_BAR_RVU_PFX_FUNCX_BAR2(a, b) \
- (0x840200000000ull | ((uint64_t)(a) << 36) | ((uint64_t)(b) << 25))
-
-#define RVU_AF_INT_VEC_POISON (0x0ull)
-#define RVU_AF_INT_VEC_PFFLR (0x1ull)
-#define RVU_AF_INT_VEC_PFME (0x2ull)
-#define RVU_AF_INT_VEC_GEN (0x3ull)
-#define RVU_AF_INT_VEC_MBOX (0x4ull)
-
-#define RVU_BLOCK_TYPE_RVUM (0x0ull)
-#define RVU_BLOCK_TYPE_LMT (0x2ull)
-#define RVU_BLOCK_TYPE_NIX (0x3ull)
-#define RVU_BLOCK_TYPE_NPA (0x4ull)
-#define RVU_BLOCK_TYPE_NPC (0x5ull)
-#define RVU_BLOCK_TYPE_SSO (0x6ull)
-#define RVU_BLOCK_TYPE_SSOW (0x7ull)
-#define RVU_BLOCK_TYPE_TIM (0x8ull)
-#define RVU_BLOCK_TYPE_CPT (0x9ull)
-#define RVU_BLOCK_TYPE_NDC (0xaull)
-#define RVU_BLOCK_TYPE_DDF (0xbull)
-#define RVU_BLOCK_TYPE_ZIP (0xcull)
-#define RVU_BLOCK_TYPE_RAD (0xdull)
-#define RVU_BLOCK_TYPE_DFA (0xeull)
-#define RVU_BLOCK_TYPE_HNA (0xfull)
-#define RVU_BLOCK_TYPE_REE (0xeull)
-
-#define RVU_BLOCK_ADDR_RVUM (0x0ull)
-#define RVU_BLOCK_ADDR_LMT (0x1ull)
-#define RVU_BLOCK_ADDR_NPA (0x3ull)
-#define RVU_BLOCK_ADDR_NIX0 (0x4ull)
-#define RVU_BLOCK_ADDR_NIX1 (0x5ull)
-#define RVU_BLOCK_ADDR_NPC (0x6ull)
-#define RVU_BLOCK_ADDR_SSO (0x7ull)
-#define RVU_BLOCK_ADDR_SSOW (0x8ull)
-#define RVU_BLOCK_ADDR_TIM (0x9ull)
-#define RVU_BLOCK_ADDR_CPT0 (0xaull)
-#define RVU_BLOCK_ADDR_CPT1 (0xbull)
-#define RVU_BLOCK_ADDR_NDC0 (0xcull)
-#define RVU_BLOCK_ADDR_NDC1 (0xdull)
-#define RVU_BLOCK_ADDR_NDC2 (0xeull)
-#define RVU_BLOCK_ADDR_R_END (0x1full)
-#define RVU_BLOCK_ADDR_R_START (0x14ull)
-#define RVU_BLOCK_ADDR_REE0 (0x14ull)
-#define RVU_BLOCK_ADDR_REE1 (0x15ull)
-
-#define RVU_VF_INT_VEC_MBOX (0x0ull)
-
-#define RVU_PF_INT_VEC_AFPF_MBOX (0x6ull)
-#define RVU_PF_INT_VEC_VFFLR0 (0x0ull)
-#define RVU_PF_INT_VEC_VFFLR1 (0x1ull)
-#define RVU_PF_INT_VEC_VFME0 (0x2ull)
-#define RVU_PF_INT_VEC_VFME1 (0x3ull)
-#define RVU_PF_INT_VEC_VFPF_MBOX0 (0x4ull)
-#define RVU_PF_INT_VEC_VFPF_MBOX1 (0x5ull)
-
-
-#define AF_BAR2_ALIASX_SIZE (0x100000ull)
-
-#define TIM_AF_BAR2_SEL (0x9000000ull)
-#define SSO_AF_BAR2_SEL (0x9000000ull)
-#define NIX_AF_BAR2_SEL (0x9000000ull)
-#define SSOW_AF_BAR2_SEL (0x9000000ull)
-#define NPA_AF_BAR2_SEL (0x9000000ull)
-#define CPT_AF_BAR2_SEL (0x9000000ull)
-#define RVU_AF_BAR2_SEL (0x9000000ull)
-#define REE_AF_BAR2_SEL (0x9000000ull)
-
-#define AF_BAR2_ALIASX(a, b) \
- (0x9100000ull | (uint64_t)(a) << 12 | (uint64_t)(b))
-#define TIM_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define SSO_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define NIX_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
-#define SSOW_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define NPA_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
-#define CPT_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define RVU_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define REE_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-
-/* Structures definitions */
-
-/* RVU admin function register address structure */
-struct rvu_af_addr_s {
- uint64_t addr : 28;
- uint64_t block : 5;
- uint64_t rsvd_63_33 : 31;
-};
-
-/* RVU function-unique address structure */
-struct rvu_func_addr_s {
- uint32_t addr : 12;
- uint32_t lf_slot : 8;
- uint32_t block : 5;
- uint32_t rsvd_31_25 : 7;
-};
-
-/* RVU msi-x vector structure */
-struct rvu_msix_vec_s {
- uint64_t addr : 64; /* W0 */
- uint64_t data : 32;
- uint64_t mask : 1;
- uint64_t pend : 1;
- uint64_t rsvd_127_98 : 30;
-};
-
-/* RVU pf function identification structure */
-struct rvu_pf_func_s {
- uint16_t func : 10;
- uint16_t pf : 6;
-};
-
-#endif /* __OTX2_RVU_HW_H__ */
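The rvu_pf_func_s layout above implies how a 16-bit RVU PF_FUNC identifier is packed: function number in bits 0..9 and PF number in bits 10..15. A sketch of the pack/unpack helpers, analogous to the otx2_pfvf_func()/otx2_get_pf() calls used later in this patch::

    #include <stdint.h>

    /* Pack/unpack per rvu_pf_func_s: func in bits 0..9, pf in bits 10..15. */
    static inline uint16_t pfvf_func(uint16_t pf, uint16_t func)
    {
        return (uint16_t)(((pf & 0x3f) << 10) | (func & 0x3ff));
    }

    static inline uint16_t pf_of(uint16_t pf_func)
    {
        return (pf_func >> 10) & 0x3f;
    }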
diff --git a/drivers/common/octeontx2/hw/otx2_sdp.h b/drivers/common/octeontx2/hw/otx2_sdp.h
deleted file mode 100644
index 1e690f8b32..0000000000
--- a/drivers/common/octeontx2/hw/otx2_sdp.h
+++ /dev/null
@@ -1,184 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SDP_HW_H_
-#define __OTX2_SDP_HW_H_
-
-/* SDP VF IOQs */
-#define SDP_MIN_RINGS_PER_VF (1)
-#define SDP_MAX_RINGS_PER_VF (8)
-
-/* SDP VF IQ configuration */
-#define SDP_VF_MAX_IQ_DESCRIPTORS (512)
-#define SDP_VF_MIN_IQ_DESCRIPTORS (128)
-
-#define SDP_VF_DB_MIN (1)
-#define SDP_VF_DB_TIMEOUT (1)
-#define SDP_VF_INTR_THRESHOLD (0xFFFFFFFF)
-
-#define SDP_VF_64BYTE_INSTR (64)
-#define SDP_VF_32BYTE_INSTR (32)
-
-/* SDP VF OQ configuration */
-#define SDP_VF_MAX_OQ_DESCRIPTORS (512)
-#define SDP_VF_MIN_OQ_DESCRIPTORS (128)
-#define SDP_VF_OQ_BUF_SIZE (2048)
-#define SDP_VF_OQ_REFIL_THRESHOLD (16)
-
-#define SDP_VF_OQ_INFOPTR_MODE (1)
-#define SDP_VF_OQ_BUFPTR_MODE (0)
-
-#define SDP_VF_OQ_INTR_PKT (1)
-#define SDP_VF_OQ_INTR_TIME (10)
-#define SDP_VF_CFG_IO_QUEUES SDP_MAX_RINGS_PER_VF
-
-/* Wait time in milliseconds for FLR */
-#define SDP_VF_PCI_FLR_WAIT (100)
-#define SDP_VF_BUSY_LOOP_COUNT (10000)
-
-#define SDP_VF_MAX_IO_QUEUES SDP_MAX_RINGS_PER_VF
-#define SDP_VF_MIN_IO_QUEUES SDP_MIN_RINGS_PER_VF
-
-/* SDP VF IOQs per rawdev */
-#define SDP_VF_MAX_IOQS_PER_RAWDEV SDP_VF_MAX_IO_QUEUES
-#define SDP_VF_DEFAULT_IOQS_PER_RAWDEV SDP_VF_MIN_IO_QUEUES
-
-/* SDP VF Register definitions */
-#define SDP_VF_RING_OFFSET (0x1ull << 17)
-
-/* SDP VF IQ Registers */
-#define SDP_VF_R_IN_CONTROL_START (0x10000)
-#define SDP_VF_R_IN_ENABLE_START (0x10010)
-#define SDP_VF_R_IN_INSTR_BADDR_START (0x10020)
-#define SDP_VF_R_IN_INSTR_RSIZE_START (0x10030)
-#define SDP_VF_R_IN_INSTR_DBELL_START (0x10040)
-#define SDP_VF_R_IN_CNTS_START (0x10050)
-#define SDP_VF_R_IN_INT_LEVELS_START (0x10060)
-#define SDP_VF_R_IN_PKT_CNT_START (0x10080)
-#define SDP_VF_R_IN_BYTE_CNT_START (0x10090)
-
-#define SDP_VF_R_IN_CONTROL(ring) \
- (SDP_VF_R_IN_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_ENABLE(ring) \
- (SDP_VF_R_IN_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_BADDR(ring) \
- (SDP_VF_R_IN_INSTR_BADDR_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_RSIZE(ring) \
- (SDP_VF_R_IN_INSTR_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_DBELL(ring) \
- (SDP_VF_R_IN_INSTR_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_CNTS(ring) \
- (SDP_VF_R_IN_CNTS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INT_LEVELS(ring) \
- (SDP_VF_R_IN_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_PKT_CNT(ring) \
- (SDP_VF_R_IN_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_BYTE_CNT(ring) \
- (SDP_VF_R_IN_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-/* SDP VF IQ Masks */
-#define SDP_VF_R_IN_CTL_RPVF_MASK (0xF)
-#define SDP_VF_R_IN_CTL_RPVF_POS (48)
-
-#define SDP_VF_R_IN_CTL_IDLE (0x1ull << 28)
-#define SDP_VF_R_IN_CTL_RDSIZE (0x3ull << 25) /* Setting to max(4) */
-#define SDP_VF_R_IN_CTL_IS_64B (0x1ull << 24)
-#define SDP_VF_R_IN_CTL_D_NSR (0x1ull << 8)
-#define SDP_VF_R_IN_CTL_D_ESR (0x1ull << 6)
-#define SDP_VF_R_IN_CTL_D_ROR (0x1ull << 5)
-#define SDP_VF_R_IN_CTL_NSR (0x1ull << 3)
-#define SDP_VF_R_IN_CTL_ESR (0x1ull << 1)
-#define SDP_VF_R_IN_CTL_ROR (0x1ull << 0)
-
-#define SDP_VF_R_IN_CTL_MASK \
- (SDP_VF_R_IN_CTL_RDSIZE | SDP_VF_R_IN_CTL_IS_64B)
-
-/* SDP VF OQ Registers */
-#define SDP_VF_R_OUT_CNTS_START (0x10100)
-#define SDP_VF_R_OUT_INT_LEVELS_START (0x10110)
-#define SDP_VF_R_OUT_SLIST_BADDR_START (0x10120)
-#define SDP_VF_R_OUT_SLIST_RSIZE_START (0x10130)
-#define SDP_VF_R_OUT_SLIST_DBELL_START (0x10140)
-#define SDP_VF_R_OUT_CONTROL_START (0x10150)
-#define SDP_VF_R_OUT_ENABLE_START (0x10160)
-#define SDP_VF_R_OUT_PKT_CNT_START (0x10180)
-#define SDP_VF_R_OUT_BYTE_CNT_START (0x10190)
-
-#define SDP_VF_R_OUT_CONTROL(ring) \
- (SDP_VF_R_OUT_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_ENABLE(ring) \
- (SDP_VF_R_OUT_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_BADDR(ring) \
- (SDP_VF_R_OUT_SLIST_BADDR_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_RSIZE(ring) \
- (SDP_VF_R_OUT_SLIST_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_DBELL(ring) \
- (SDP_VF_R_OUT_SLIST_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_CNTS(ring) \
- (SDP_VF_R_OUT_CNTS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_INT_LEVELS(ring) \
- (SDP_VF_R_OUT_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_PKT_CNT(ring) \
- (SDP_VF_R_OUT_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_BYTE_CNT(ring) \
- (SDP_VF_R_OUT_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-/* SDP VF OQ Masks */
-#define SDP_VF_R_OUT_CTL_IDLE (1ull << 40)
-#define SDP_VF_R_OUT_CTL_ES_I (1ull << 34)
-#define SDP_VF_R_OUT_CTL_NSR_I (1ull << 33)
-#define SDP_VF_R_OUT_CTL_ROR_I (1ull << 32)
-#define SDP_VF_R_OUT_CTL_ES_D (1ull << 30)
-#define SDP_VF_R_OUT_CTL_NSR_D (1ull << 29)
-#define SDP_VF_R_OUT_CTL_ROR_D (1ull << 28)
-#define SDP_VF_R_OUT_CTL_ES_P (1ull << 26)
-#define SDP_VF_R_OUT_CTL_NSR_P (1ull << 25)
-#define SDP_VF_R_OUT_CTL_ROR_P (1ull << 24)
-#define SDP_VF_R_OUT_CTL_IMODE (1ull << 23)
-
-#define SDP_VF_R_OUT_INT_LEVELS_BMODE (1ull << 63)
-#define SDP_VF_R_OUT_INT_LEVELS_TIMET (32)
-
-/* SDP Instruction Header */
-struct sdp_instr_ih {
- /* Data Len */
- uint64_t tlen:16;
-
- /* Reserved1 */
- uint64_t rsvd1:20;
-
- /* PKIND for SDP */
- uint64_t pkind:6;
-
- /* Front Data size */
- uint64_t fsz:6;
-
- /* No. of entries in gather list */
- uint64_t gsz:14;
-
- /* Gather indicator */
- uint64_t gather:1;
-
- /* Reserved2 */
- uint64_t rsvd2:1;
-} __rte_packed;
-
-#endif /* __OTX2_SDP_HW_H_ */
-
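Each SDP VF ring owns a 128 KB register window (SDP_VF_RING_OFFSET = 1 << 17), so every per-ring register above is its base offset plus a ring-sized stride; for example, the ring-2 input doorbell is 0x10040 + 2 * 0x20000 = 0x50040. A hypothetical helper capturing the stride arithmetic::

    #include <stdint.h>

    #define SDP_RING_WINDOW (0x1ull << 17) /* 128 KB of registers per ring */

    /* Per-ring register address: fixed base offset plus ring-sized stride. */
    static inline uint64_t sdp_ring_reg(uint64_t base_off, unsigned int ring)
    {
        return base_off + (uint64_t)ring * SDP_RING_WINDOW;
    }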
diff --git a/drivers/common/octeontx2/hw/otx2_sso.h b/drivers/common/octeontx2/hw/otx2_sso.h
deleted file mode 100644
index 98a8130b16..0000000000
--- a/drivers/common/octeontx2/hw/otx2_sso.h
+++ /dev/null
@@ -1,209 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SSO_HW_H__
-#define __OTX2_SSO_HW_H__
-
-/* Register offsets */
-
-#define SSO_AF_CONST (0x1000ull)
-#define SSO_AF_CONST1 (0x1008ull)
-#define SSO_AF_WQ_INT_PC (0x1020ull)
-#define SSO_AF_NOS_CNT (0x1050ull)
-#define SSO_AF_AW_WE (0x1080ull)
-#define SSO_AF_WS_CFG (0x1088ull)
-#define SSO_AF_GWE_CFG (0x1098ull)
-#define SSO_AF_GWE_RANDOM (0x10b0ull)
-#define SSO_AF_LF_HWGRP_RST (0x10e0ull)
-#define SSO_AF_AW_CFG (0x10f0ull)
-#define SSO_AF_BLK_RST (0x10f8ull)
-#define SSO_AF_ACTIVE_CYCLES0 (0x1100ull)
-#define SSO_AF_ACTIVE_CYCLES1 (0x1108ull)
-#define SSO_AF_ACTIVE_CYCLES2 (0x1110ull)
-#define SSO_AF_ERR0 (0x1220ull)
-#define SSO_AF_ERR0_W1S (0x1228ull)
-#define SSO_AF_ERR0_ENA_W1C (0x1230ull)
-#define SSO_AF_ERR0_ENA_W1S (0x1238ull)
-#define SSO_AF_ERR2 (0x1260ull)
-#define SSO_AF_ERR2_W1S (0x1268ull)
-#define SSO_AF_ERR2_ENA_W1C (0x1270ull)
-#define SSO_AF_ERR2_ENA_W1S (0x1278ull)
-#define SSO_AF_UNMAP_INFO (0x12f0ull)
-#define SSO_AF_UNMAP_INFO2 (0x1300ull)
-#define SSO_AF_UNMAP_INFO3 (0x1310ull)
-#define SSO_AF_RAS (0x1420ull)
-#define SSO_AF_RAS_W1S (0x1430ull)
-#define SSO_AF_RAS_ENA_W1C (0x1460ull)
-#define SSO_AF_RAS_ENA_W1S (0x1470ull)
-#define SSO_AF_AW_INP_CTL (0x2070ull)
-#define SSO_AF_AW_ADD (0x2080ull)
-#define SSO_AF_AW_READ_ARB (0x2090ull)
-#define SSO_AF_XAQ_REQ_PC (0x20b0ull)
-#define SSO_AF_XAQ_LATENCY_PC (0x20b8ull)
-#define SSO_AF_TAQ_CNT (0x20c0ull)
-#define SSO_AF_TAQ_ADD (0x20e0ull)
-#define SSO_AF_POISONX(a) (0x2100ull | (uint64_t)(a) << 3)
-#define SSO_AF_POISONX_W1S(a) (0x2200ull | (uint64_t)(a) << 3)
-#define SSO_PRIV_AF_INT_CFG (0x3000ull)
-#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800ull)
-#define SSO_PRIV_LFX_HWGRP_CFG(a) (0x10000ull | (uint64_t)(a) << 3)
-#define SSO_PRIV_LFX_HWGRP_INT_CFG(a) (0x20000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IU_ACCNTX_CFG(a) (0x50000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IU_ACCNTX_RST(a) (0x60000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_HEAD_PTR(a) (0x80000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_TAIL_PTR(a) (0x90000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_HEAD_NEXT(a) (0xa0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_TAIL_NEXT(a) (0xb0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TIAQX_STATUS(a) (0xc0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TOAQX_STATUS(a) (0xd0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_GMCTL(a) (0xe0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_HWGRPX_IAQ_THR(a) (0x200000ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_TAQ_THR(a) (0x200010ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_PRI(a) (0x200020ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_WS_PC(a) (0x200050ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_EXT_PC(a) (0x200060ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_WA_PC(a) (0x200070ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_TS_PC(a) (0x200080ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_DS_PC(a) (0x200090ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_DQ_PC(a) (0x2000A0ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_PAGE_CNT(a) (0x200100ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_STATUS(a) (0x200110ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_CFG(a) (0x200120ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_TAGSPACE(a) (0x200130ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_XAQ_AURA(a) (0x200140ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_XAQ_LIMIT(a) (0x200220ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_IU_ACCNT(a) (0x200230ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_ARB(a) (0x400100ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_INV(a) (0x400180ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_GMCTL(a) (0x400200ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_SX_GRPMSKX(a, b, c) \
- (0x400400ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 5 | \
- (uint64_t)(c) << 3)
-#define SSO_AF_IPL_FREEX(a) (0x800000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_IAQX(a) (0x840000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_DESCHEDX(a) (0x860000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_CONFX(a) (0x880000ull | (uint64_t)(a) << 3)
-#define SSO_AF_NPA_DIGESTX(a) (0x900000ull | (uint64_t)(a) << 3)
-#define SSO_AF_NPA_DIGESTX_W1S(a) (0x900100ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFP_DIGESTX(a) (0x900200ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFP_DIGESTX_W1S(a) (0x900300ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFPN_DIGESTX(a) (0x900400ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFPN_DIGESTX_W1S(a) (0x900500ull | (uint64_t)(a) << 3)
-#define SSO_AF_GRPDIS_DIGESTX(a) (0x900600ull | (uint64_t)(a) << 3)
-#define SSO_AF_GRPDIS_DIGESTX_W1S(a) (0x900700ull | (uint64_t)(a) << 3)
-#define SSO_AF_AWEMPTY_DIGESTX(a) (0x900800ull | (uint64_t)(a) << 3)
-#define SSO_AF_AWEMPTY_DIGESTX_W1S(a) (0x900900ull | (uint64_t)(a) << 3)
-#define SSO_AF_WQP0_DIGESTX(a) (0x900a00ull | (uint64_t)(a) << 3)
-#define SSO_AF_WQP0_DIGESTX_W1S(a) (0x900b00ull | (uint64_t)(a) << 3)
-#define SSO_AF_AW_DROPPED_DIGESTX(a) (0x900c00ull | (uint64_t)(a) << 3)
-#define SSO_AF_AW_DROPPED_DIGESTX_W1S(a) (0x900d00ull | (uint64_t)(a) << 3)
-#define SSO_AF_QCTLDIS_DIGESTX(a) (0x900e00ull | (uint64_t)(a) << 3)
-#define SSO_AF_QCTLDIS_DIGESTX_W1S(a) (0x900f00ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQDIS_DIGESTX(a) (0x901000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQDIS_DIGESTX_W1S(a) (0x901100ull | (uint64_t)(a) << 3)
-#define SSO_AF_FLR_AQ_DIGESTX(a) (0x901200ull | (uint64_t)(a) << 3)
-#define SSO_AF_FLR_AQ_DIGESTX_W1S(a) (0x901300ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GMULTI_DIGESTX(a) (0x902000ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GMULTI_DIGESTX_W1S(a) (0x902100ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GUNMAP_DIGESTX(a) (0x902200ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GUNMAP_DIGESTX_W1S(a) (0x902300ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_AWE_DIGESTX(a) (0x902400ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_AWE_DIGESTX_W1S(a) (0x902500ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GWI_DIGESTX(a) (0x902600ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GWI_DIGESTX_W1S(a) (0x902700ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_NE_DIGESTX(a) (0x902800ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_NE_DIGESTX_W1S(a) (0x902900ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_TAG(a) (0xa00000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_GRP(a) (0xa20000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_PENDTAG(a) (0xa40000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_LINKS(a) (0xa60000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_QLINKS(a) (0xa80000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_WQP(a) (0xaa0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TAQX_LINK(a) (0xc00000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TAQX_WAEX_TAG(a, b) \
- (0xe00000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define SSO_AF_TAQX_WAEX_WQP(a, b) \
- (0xe00008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-
-#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
-#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
-#define SSO_LF_GGRP_QCTL (0x20ull)
-#define SSO_LF_GGRP_EXE_DIS (0x80ull)
-#define SSO_LF_GGRP_INT (0x100ull)
-#define SSO_LF_GGRP_INT_W1S (0x108ull)
-#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
-#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
-#define SSO_LF_GGRP_INT_THR (0x140ull)
-#define SSO_LF_GGRP_INT_CNT (0x180ull)
-#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
-#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
-#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
-#define SSO_LF_GGRP_MISC_CNT (0x200ull)
-
-#define SSO_AF_IAQ_FREE_CNT_MASK 0x3FFFull
-#define SSO_AF_IAQ_RSVD_FREE_MASK 0x3FFFull
-#define SSO_AF_IAQ_RSVD_FREE_SHIFT 16
-#define SSO_AF_IAQ_FREE_CNT_MAX SSO_AF_IAQ_FREE_CNT_MASK
-#define SSO_AF_AW_ADD_RSVD_FREE_MASK 0x3FFFull
-#define SSO_AF_AW_ADD_RSVD_FREE_SHIFT 16
-#define SSO_HWGRP_IAQ_MAX_THR_MASK 0x3FFFull
-#define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull
-#define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32
-#define SSO_HWGRP_IAQ_RSVD_THR 0x2
-
-#define SSO_AF_TAQ_FREE_CNT_MASK 0x7FFull
-#define SSO_AF_TAQ_RSVD_FREE_MASK 0x7FFull
-#define SSO_AF_TAQ_RSVD_FREE_SHIFT 16
-#define SSO_AF_TAQ_FREE_CNT_MAX SSO_AF_TAQ_FREE_CNT_MASK
-#define SSO_AF_TAQ_ADD_RSVD_FREE_MASK 0x1FFFull
-#define SSO_AF_TAQ_ADD_RSVD_FREE_SHIFT 16
-#define SSO_HWGRP_TAQ_MAX_THR_MASK 0x7FFull
-#define SSO_HWGRP_TAQ_RSVD_THR_MASK 0x7FFull
-#define SSO_HWGRP_TAQ_MAX_THR_SHIFT 32
-#define SSO_HWGRP_TAQ_RSVD_THR 0x3
-
-#define SSO_HWGRP_PRI_AFF_MASK 0xFull
-#define SSO_HWGRP_PRI_AFF_SHIFT 8
-#define SSO_HWGRP_PRI_WGT_MASK 0x3Full
-#define SSO_HWGRP_PRI_WGT_SHIFT 16
-#define SSO_HWGRP_PRI_WGT_LEFT_MASK 0x3Full
-#define SSO_HWGRP_PRI_WGT_LEFT_SHIFT 24
-
-#define SSO_HWGRP_AW_CFG_RWEN BIT_ULL(0)
-#define SSO_HWGRP_AW_CFG_LDWB BIT_ULL(1)
-#define SSO_HWGRP_AW_CFG_LDT BIT_ULL(2)
-#define SSO_HWGRP_AW_CFG_STT BIT_ULL(3)
-#define SSO_HWGRP_AW_CFG_XAQ_BYP_DIS BIT_ULL(4)
-
-#define SSO_HWGRP_AW_STS_TPTR_VLD BIT_ULL(8)
-#define SSO_HWGRP_AW_STS_NPA_FETCH BIT_ULL(9)
-#define SSO_HWGRP_AW_STS_XAQ_BUFSC_MASK 0x7ull
-#define SSO_HWGRP_AW_STS_INIT_STS 0x18ull
-
-/* Enum offsets */
-
-#define SSO_LF_INT_VEC_GRP (0x0ull)
-
-#define SSO_AF_INT_VEC_ERR0 (0x0ull)
-#define SSO_AF_INT_VEC_ERR2 (0x1ull)
-#define SSO_AF_INT_VEC_RAS (0x2ull)
-
-#define SSO_WA_IOBN (0x0ull)
-#define SSO_WA_NIXRX (0x1ull)
-#define SSO_WA_CPT (0x2ull)
-#define SSO_WA_ADDWQ (0x3ull)
-#define SSO_WA_DPI (0x4ull)
-#define SSO_WA_NIXTX (0x5ull)
-#define SSO_WA_TIM (0x6ull)
-#define SSO_WA_ZIP (0x7ull)
-
-#define SSO_TT_ORDERED (0x0ull)
-#define SSO_TT_ATOMIC (0x1ull)
-#define SSO_TT_UNTAGGED (0x2ull)
-#define SSO_TT_EMPTY (0x3ull)
-
-
-/* Structures definitions */
-
-#endif /* __OTX2_SSO_HW_H__ */
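The SSO_HWGRP_PRI_* masks and shifts above describe where the group arbitration fields sit in the priority register. A sketch of the usual (value & MASK) << SHIFT composition, covering only the affinity and weight fields documented here (the real driver also programs the remaining-weight and priority fields)::

    #include <stdint.h>

    #define PRI_AFF_MASK  0xfull
    #define PRI_AFF_SHIFT 8
    #define PRI_WGT_MASK  0x3full
    #define PRI_WGT_SHIFT 16

    /* Place each field with (value & MASK) << SHIFT; other bits stay 0. */
    static inline uint64_t hwgrp_pri_word(uint64_t aff, uint64_t wgt)
    {
        return ((aff & PRI_AFF_MASK) << PRI_AFF_SHIFT) |
               ((wgt & PRI_WGT_MASK) << PRI_WGT_SHIFT);
    }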
diff --git a/drivers/common/octeontx2/hw/otx2_ssow.h b/drivers/common/octeontx2/hw/otx2_ssow.h
deleted file mode 100644
index 8a44578036..0000000000
--- a/drivers/common/octeontx2/hw/otx2_ssow.h
+++ /dev/null
@@ -1,56 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SSOW_HW_H__
-#define __OTX2_SSOW_HW_H__
-
-/* Register offsets */
-
-#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x10ull)
-#define SSOW_AF_LF_HWS_RST (0x30ull)
-#define SSOW_PRIV_LFX_HWS_CFG(a) (0x1000ull | (uint64_t)(a) << 3)
-#define SSOW_PRIV_LFX_HWS_INT_CFG(a) (0x2000ull | (uint64_t)(a) << 3)
-#define SSOW_AF_SCRATCH_WS (0x100000ull)
-#define SSOW_AF_SCRATCH_GW (0x200000ull)
-#define SSOW_AF_SCRATCH_AW (0x300000ull)
-
-#define SSOW_LF_GWS_LINKS (0x10ull)
-#define SSOW_LF_GWS_PENDWQP (0x40ull)
-#define SSOW_LF_GWS_PENDSTATE (0x50ull)
-#define SSOW_LF_GWS_NW_TIM (0x70ull)
-#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
-#define SSOW_LF_GWS_INT (0x100ull)
-#define SSOW_LF_GWS_INT_W1S (0x108ull)
-#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
-#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
-#define SSOW_LF_GWS_TAG (0x200ull)
-#define SSOW_LF_GWS_WQP (0x210ull)
-#define SSOW_LF_GWS_SWTP (0x220ull)
-#define SSOW_LF_GWS_PENDTAG (0x230ull)
-#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
-#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
-#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
-#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
-#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
-#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
-#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
-#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
-#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
-#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
-#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
-#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
-
-
-/* Enum offsets */
-
-#define SSOW_LF_INT_VEC_IOP (0x0ull)
-
-
-#endif /* __OTX2_SSOW_HW_H__ */
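The paired *_W1S/*_W1C registers above follow the write-1-to-set / write-1-to-clear convention: enabling an interrupt writes its bit to INT_ENA_W1S, disabling writes the same bit to INT_ENA_W1C. A sketch (hypothetical gws_irq_enable(), with a plain volatile store standing in for the driver's otx2_write64())::

    #include <stdint.h>

    #define GWS_INT_ENA_W1S (0x110ull)
    #define GWS_INT_ENA_W1C (0x118ull)

    /* Enable or disable one GWS interrupt bit via the W1S/W1C pair. */
    static void gws_irq_enable(uintptr_t lf_base, unsigned int bit, int enable)
    {
        uint64_t off = enable ? GWS_INT_ENA_W1S : GWS_INT_ENA_W1C;

        *(volatile uint64_t *)(lf_base + off) = 1ull << bit;
    }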
diff --git a/drivers/common/octeontx2/hw/otx2_tim.h b/drivers/common/octeontx2/hw/otx2_tim.h
deleted file mode 100644
index 41442ad0a8..0000000000
--- a/drivers/common/octeontx2/hw/otx2_tim.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_HW_H__
-#define __OTX2_TIM_HW_H__
-
-/* TIM */
-#define TIM_AF_CONST (0x90)
-#define TIM_PRIV_LFX_CFG(a) (0x20000 | (a) << 3)
-#define TIM_PRIV_LFX_INT_CFG(a) (0x24000 | (a) << 3)
-#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000)
-#define TIM_AF_BLK_RST (0x10)
-#define TIM_AF_LF_RST (0x20)
-#define TIM_AF_BLK_RST (0x10)
-#define TIM_AF_RINGX_GMCTL(a) (0x2000 | (a) << 3)
-#define TIM_AF_RINGX_CTL0(a) (0x4000 | (a) << 3)
-#define TIM_AF_RINGX_CTL1(a) (0x6000 | (a) << 3)
-#define TIM_AF_RINGX_CTL2(a) (0x8000 | (a) << 3)
-#define TIM_AF_FLAGS_REG (0x80)
-#define TIM_AF_FLAGS_REG_ENA_TIM BIT_ULL(0)
-#define TIM_AF_RINGX_CTL1_ENA BIT_ULL(47)
-#define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50)
-#define TIM_AF_RINGX_CLT1_CLK_10NS (0)
-#define TIM_AF_RINGX_CLT1_CLK_GPIO (1)
-#define TIM_AF_RINGX_CLT1_CLK_GTI (2)
-#define TIM_AF_RINGX_CLT1_CLK_PTP (3)
-
-/* ENUMS */
-
-#define TIM_LF_INT_VEC_NRSPERR_INT (0x0ull)
-#define TIM_LF_INT_VEC_RAS_INT (0x1ull)
-
-#endif /* __OTX2_TIM_HW_H__ */
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
deleted file mode 100644
index 223ba5ef51..0000000000
--- a/drivers/common/octeontx2/meson.build
+++ /dev/null
@@ -1,24 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_common.c',
- 'otx2_dev.c',
- 'otx2_irq.c',
- 'otx2_mbox.c',
- 'otx2_sec_idev.c',
-)
-
-deps = ['eal', 'pci', 'ethdev', 'kvargs']
-includes += include_directories(
- '../../common/octeontx2',
- '../../mempool/octeontx2',
- '../../bus/pci',
-)
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
deleted file mode 100644
index d23c50242e..0000000000
--- a/drivers/common/octeontx2/otx2_common.c
+++ /dev/null
@@ -1,216 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_malloc.h>
-#include <rte_log.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_mbox.h"
-
-/**
- * @internal
- * Set default NPA configuration.
- */
-void
-otx2_npa_set_defaults(struct otx2_idev_cfg *idev)
-{
- idev->npa_pf_func = 0;
- rte_atomic16_set(&idev->npa_refcnt, 0);
-}
-
-/**
- * @internal
- * Get intra device config structure.
- */
-struct otx2_idev_cfg *
-otx2_intra_dev_get_cfg(void)
-{
- const char name[] = "octeontx2_intra_device_conf";
- const struct rte_memzone *mz;
- struct otx2_idev_cfg *idev;
-
- mz = rte_memzone_lookup(name);
- if (mz != NULL)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_cfg),
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz != NULL) {
- idev = mz->addr;
- idev->sso_pf_func = 0;
- idev->npa_lf = NULL;
- otx2_npa_set_defaults(idev);
- return idev;
- }
- return NULL;
-}
-
-/**
- * @internal
- * Get SSO PF_FUNC.
- */
-uint16_t
-otx2_sso_pf_func_get(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t sso_pf_func;
-
- sso_pf_func = 0;
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL)
- sso_pf_func = idev->sso_pf_func;
-
- return sso_pf_func;
-}
-
-/**
- * @internal
- * Set SSO PF_FUNC.
- */
-void
-otx2_sso_pf_func_set(uint16_t sso_pf_func)
-{
- struct otx2_idev_cfg *idev;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL) {
- idev->sso_pf_func = sso_pf_func;
- rte_smp_wmb();
- }
-}
-
-/**
- * @internal
- * Get NPA PF_FUNC.
- */
-uint16_t
-otx2_npa_pf_func_get(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t npa_pf_func;
-
- npa_pf_func = 0;
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL)
- npa_pf_func = idev->npa_pf_func;
-
- return npa_pf_func;
-}
-
-/**
- * @internal
- * Get NPA lf object.
- */
-struct otx2_npa_lf *
-otx2_npa_lf_obj_get(void)
-{
- struct otx2_idev_cfg *idev;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL && rte_atomic16_read(&idev->npa_refcnt))
- return idev->npa_lf;
-
- return NULL;
-}
-
-/**
- * @internal
- * Is NPA lf active for the given device?
- */
-int
-otx2_npa_lf_active(void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
-
- /* Check if npalf is actively used on this dev */
- idev = otx2_intra_dev_get_cfg();
- if (!idev || !idev->npa_lf || idev->npa_lf->mbox != dev->mbox)
- return 0;
-
- return rte_atomic16_read(&idev->npa_refcnt);
-}
-
-/**
- * @internal
- * Gets reference only to existing NPA LF object.
- */
-int otx2_npa_lf_obj_ref(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t cnt;
- int rc;
-
- idev = otx2_intra_dev_get_cfg();
-
- /* Check if ref not possible */
- if (idev == NULL)
- return -EINVAL;
-
-
- /* Get ref only if > 0 */
- cnt = rte_atomic16_read(&idev->npa_refcnt);
- while (cnt != 0) {
- rc = rte_atomic16_cmpset(&idev->npa_refcnt_u16, cnt, cnt + 1);
- if (rc)
- break;
-
- cnt = rte_atomic16_read(&idev->npa_refcnt);
- }
-
- return cnt ? 0 : -EINVAL;
-}
-
-static int
-parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint64_t val;
-
- val = strtoull(value, NULL, 16);
-
- *(uint64_t *)extra_args = val;
-
- return 0;
-}
-
-/**
- * @internal
- * Parse common device arguments.
- */
-void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
-{
- struct otx2_idev_cfg *idev;
- uint64_t npa_lock_mask = 0;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
- &parse_npa_lock_mask, &npa_lock_mask);
-
- idev->npa_lock_mask = npa_lock_mask;
-}
-
-RTE_LOG_REGISTER(otx2_logtype_base, pmd.octeontx2.base, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_mbox, pmd.octeontx2.mbox, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_npa, pmd.mempool.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_nix, pmd.net.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_npc, pmd.net.octeontx2.flow, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_tm, pmd.net.octeontx2.tm, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_sso, pmd.event.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_tim, pmd.event.octeontx2.timer, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_dpi, pmd.raw.octeontx2.dpi, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_ep, pmd.raw.octeontx2.ep, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_ree, pmd.regex.octeontx2, NOTICE);
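otx2_npa_lf_obj_ref() above takes a reference only when one already exists, retrying rte_atomic16_cmpset() until it either wins or the count drops to zero. The same pattern in portable C11 atomics, as a sketch::

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Increment the refcount only if it is currently non-zero. */
    static bool ref_get_nonzero(_Atomic uint16_t *refcnt)
    {
        uint16_t cnt = atomic_load(refcnt);

        while (cnt != 0) {
            /* On failure, cnt is reloaded with the current value. */
            if (atomic_compare_exchange_weak(refcnt, &cnt, cnt + 1))
                return true;
        }
        return false;
    }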
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
deleted file mode 100644
index cd52e098e6..0000000000
--- a/drivers/common/octeontx2/otx2_common.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_COMMON_H_
-#define _OTX2_COMMON_H_
-
-#include <rte_atomic.h>
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_kvargs.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_io.h>
-
-#include "hw/otx2_rvu.h"
-#include "hw/otx2_nix.h"
-#include "hw/otx2_npc.h"
-#include "hw/otx2_npa.h"
-#include "hw/otx2_sdp.h"
-#include "hw/otx2_sso.h"
-#include "hw/otx2_ssow.h"
-#include "hw/otx2_tim.h"
-#include "hw/otx2_ree.h"
-
-/* Alignment */
-#define OTX2_ALIGN 128
-
-/* Bits manipulation */
-#ifndef BIT_ULL
-#define BIT_ULL(nr) (1ULL << (nr))
-#endif
-#ifndef BIT
-#define BIT(nr) (1UL << (nr))
-#endif
-
-#ifndef BITS_PER_LONG
-#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
-#endif
-#ifndef BITS_PER_LONG_LONG
-#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8)
-#endif
-
-#ifndef GENMASK
-#define GENMASK(h, l) \
- (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
-#endif
-#ifndef GENMASK_ULL
-#define GENMASK_ULL(h, l) \
- (((~0ULL) - (1ULL << (l)) + 1) & \
- (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
-#endif
-
-#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
-
-/* Intra device related functions */
-struct otx2_npa_lf;
-struct otx2_idev_cfg {
- uint16_t sso_pf_func;
- uint16_t npa_pf_func;
- struct otx2_npa_lf *npa_lf;
- RTE_STD_C11
- union {
- rte_atomic16_t npa_refcnt;
- uint16_t npa_refcnt_u16;
- };
- uint64_t npa_lock_mask;
-};
-
-__rte_internal
-struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
-__rte_internal
-void otx2_sso_pf_func_set(uint16_t sso_pf_func);
-__rte_internal
-uint16_t otx2_sso_pf_func_get(void);
-__rte_internal
-uint16_t otx2_npa_pf_func_get(void);
-__rte_internal
-struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
-__rte_internal
-void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
-__rte_internal
-int otx2_npa_lf_active(void *dev);
-__rte_internal
-int otx2_npa_lf_obj_ref(void);
-__rte_internal
-void otx2_parse_common_devargs(struct rte_kvargs *kvlist);
-
-/* Log */
-extern int otx2_logtype_base;
-extern int otx2_logtype_mbox;
-extern int otx2_logtype_npa;
-extern int otx2_logtype_nix;
-extern int otx2_logtype_sso;
-extern int otx2_logtype_npc;
-extern int otx2_logtype_tm;
-extern int otx2_logtype_tim;
-extern int otx2_logtype_dpi;
-extern int otx2_logtype_ep;
-extern int otx2_logtype_ree;
-
-#define otx2_err(fmt, args...) \
- RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", \
- __func__, __LINE__, ## args)
-
-#define otx2_info(fmt, args...) \
- RTE_LOG(INFO, PMD, fmt"\n", ## args)
-
-#define otx2_dbg(subsystem, fmt, args...) \
- rte_log(RTE_LOG_DEBUG, otx2_logtype_ ## subsystem, \
- "[%s] %s():%u " fmt "\n", \
- #subsystem, __func__, __LINE__, ##args)
-
-#define otx2_base_dbg(fmt, ...) otx2_dbg(base, fmt, ##__VA_ARGS__)
-#define otx2_mbox_dbg(fmt, ...) otx2_dbg(mbox, fmt, ##__VA_ARGS__)
-#define otx2_npa_dbg(fmt, ...) otx2_dbg(npa, fmt, ##__VA_ARGS__)
-#define otx2_nix_dbg(fmt, ...) otx2_dbg(nix, fmt, ##__VA_ARGS__)
-#define otx2_sso_dbg(fmt, ...) otx2_dbg(sso, fmt, ##__VA_ARGS__)
-#define otx2_npc_dbg(fmt, ...) otx2_dbg(npc, fmt, ##__VA_ARGS__)
-#define otx2_tm_dbg(fmt, ...) otx2_dbg(tm, fmt, ##__VA_ARGS__)
-#define otx2_tim_dbg(fmt, ...) otx2_dbg(tim, fmt, ##__VA_ARGS__)
-#define otx2_dpi_dbg(fmt, ...) otx2_dbg(dpi, fmt, ##__VA_ARGS__)
-#define otx2_sdp_dbg(fmt, ...) otx2_dbg(ep, fmt, ##__VA_ARGS__)
-#define otx2_ree_dbg(fmt, ...) otx2_dbg(ree, fmt, ##__VA_ARGS__)
-
-/* PCI IDs */
-#define PCI_VENDOR_ID_CAVIUM 0x177D
-#define PCI_DEVID_OCTEONTX2_RVU_PF 0xA063
-#define PCI_DEVID_OCTEONTX2_RVU_VF 0xA064
-#define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065
-#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF 0xA0F9
-#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF 0xA0FA
-#define PCI_DEVID_OCTEONTX2_RVU_NPA_PF 0xA0FB
-#define PCI_DEVID_OCTEONTX2_RVU_NPA_VF 0xA0FC
-#define PCI_DEVID_OCTEONTX2_RVU_CPT_PF 0xA0FD
-#define PCI_DEVID_OCTEONTX2_RVU_CPT_VF 0xA0FE
-#define PCI_DEVID_OCTEONTX2_RVU_AF_VF 0xA0f8
-#define PCI_DEVID_OCTEONTX2_DPI_VF 0xA081
-#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */
-/* OCTEON TX2 98xx EP mode */
-#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103
-#define PCI_DEVID_OCTEONTX2_EP_RAW_VF 0xB204 /* OCTEON TX2 EP mode */
-#define PCI_DEVID_OCTEONTX2_RVU_SDP_PF 0xA0f6
-#define PCI_DEVID_OCTEONTX2_RVU_SDP_VF 0xA0f7
-#define PCI_DEVID_OCTEONTX2_RVU_REE_PF 0xA0f4
-#define PCI_DEVID_OCTEONTX2_RVU_REE_VF 0xA0f5
-
-/*
- * REVID for RVU PCIe devices.
- * Bits 0..1: minor pass
- * Bits 2..3: major pass
- * Bits 4..7: midr id (0: cn96xx, 1: cnf95xx, 2: loki, f: unknown)
- */
-
-#define RVU_PCI_REV_MIDR_ID(rev_id) (rev_id >> 4)
-#define RVU_PCI_REV_MAJOR(rev_id) ((rev_id >> 2) & 0x3)
-#define RVU_PCI_REV_MINOR(rev_id) (rev_id & 0x3)
-
-#define RVU_PCI_CN96XX_MIDR_ID 0x0
-#define RVU_PCI_CNF95XX_MIDR_ID 0x1
-
-/* PCI Config offsets */
-#define RVU_PCI_REVISION_ID 0x08
-
-/* IO Access */
-#define otx2_read64(addr) rte_read64_relaxed((void *)(addr))
-#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr))
-
-#if defined(RTE_ARCH_ARM64)
-#include "otx2_io_arm64.h"
-#else
-#include "otx2_io_generic.h"
-#endif
-
-/* Fastpath lookup */
-#define OTX2_NIX_FASTPATH_LOOKUP_MEM "otx2_nix_fastpath_lookup_mem"
-#define OTX2_NIX_SA_TBL_START (4096*4 + 69632*2)
-
-#endif /* _OTX2_COMMON_H_ */
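The GENMASK helper defined above builds a contiguous bit mask from high and low bit positions; for instance GENMASK(7, 4) sets bits 4..7, i.e. 0xf0. A self-contained check::

    #include <stdio.h>

    #define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
    #define GENMASK(h, l) \
        (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))

    int main(void)
    {
        /* Bits 4..7 set: prints 0xf0 */
        printf("0x%lx\n", GENMASK(7, 4));
        return 0;
    }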
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
deleted file mode 100644
index 08dca87848..0000000000
--- a/drivers/common/octeontx2/otx2_dev.c
+++ /dev/null
@@ -1,1074 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <fcntl.h>
-#include <inttypes.h>
-#include <sys/mman.h>
-#include <unistd.h>
-
-#include <rte_alarm.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_memcpy.h>
-#include <rte_eal_paging.h>
-
-#include "otx2_dev.h"
-#include "otx2_mbox.h"
-
-#define RVU_MAX_VF 64 /* RVU_PF_VFPF_MBOX_INT(0..1) */
-#define RVU_MAX_INT_RETRY 3
-
-/* PF/VF message handling timer */
-#define VF_PF_MBOX_TIMER_MS (20 * 1000)
-
-static void *
-mbox_mem_map(off_t off, size_t size)
-{
- void *va = MAP_FAILED;
- int mem_fd;
-
- if (size <= 0)
- goto error;
-
- mem_fd = open("/dev/mem", O_RDWR);
- if (mem_fd < 0)
- goto error;
-
- va = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE,
- RTE_MAP_SHARED, mem_fd, off);
- close(mem_fd);
-
- if (va == NULL)
- otx2_err("Failed to mmap sz=0x%zx, fd=%d, off=%jd",
- size, mem_fd, (intmax_t)off);
-error:
- return va;
-}
-
-static void
-mbox_mem_unmap(void *va, size_t size)
-{
- if (va)
- rte_mem_unmap(va, size);
-}
-
-static int
-pf_af_sync_msg(struct otx2_dev *dev, struct mbox_msghdr **rsp)
-{
- uint32_t timeout = 0, sleep = 1;
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- volatile uint64_t int_status;
- struct mbox_msghdr *msghdr;
- uint64_t off;
- int rc = 0;
-
- /* We need to disable PF interrupts. We are in timer interrupt */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- /* Send message */
- otx2_mbox_msg_send(mbox, 0);
-
- do {
- rte_delay_ms(sleep);
- timeout += sleep;
- if (timeout >= MBOX_RSP_TIMEOUT) {
- otx2_err("Message timeout: %dms", MBOX_RSP_TIMEOUT);
- rc = -EIO;
- break;
- }
- int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
- } while ((int_status & 0x1) != 0x1);
-
- /* Clear */
- otx2_write64(int_status, dev->bar2 + RVU_PF_INT);
-
- /* Enable interrupts */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- if (rc == 0) {
- /* Get message */
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + off);
- if (rsp)
- *rsp = msghdr;
- rc = msghdr->rc;
- }
-
- return rc;
-}
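pf_af_sync_msg() above uses the common bounded-poll idiom: sleep in small steps, accumulate the elapsed time, and give up once the mailbox timeout is reached. A generic sketch of the idiom (hypothetical poll_until(), POSIX usleep() standing in for rte_delay_ms())::

    #include <stdbool.h>
    #include <stdint.h>
    #include <unistd.h>

    /* Poll a condition in step_ms increments, giving up after timeout_ms. */
    static bool poll_until(bool (*done)(void *), void *arg,
                           uint32_t step_ms, uint32_t timeout_ms)
    {
        uint32_t elapsed = 0;

        while (!done(arg)) {
            if (elapsed >= timeout_ms)
                return false;
            usleep(step_ms * 1000);
            elapsed += step_ms;
        }
        return true;
    }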
-
-static int
-af_pf_wait_msg(struct otx2_dev *dev, uint16_t vf, int num_msg)
-{
- uint32_t timeout = 0, sleep = 1;
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- volatile uint64_t int_status;
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- struct mbox_msghdr *rsp;
- uint64_t offset;
- size_t size;
- int i;
-
- /* We need to disable PF interrupts. We are in timer interrupt */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- /* Send message */
- otx2_mbox_msg_send(mbox, 0);
-
- do {
- rte_delay_ms(sleep);
- timeout++;
- if (timeout >= MBOX_RSP_TIMEOUT) {
- otx2_err("Routed messages %d timeout: %dms",
- num_msg, MBOX_RSP_TIMEOUT);
- break;
- }
- int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
- } while ((int_status & 0x1) != 0x1);
-
- /* Clear */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
-
- /* Enable interrupts */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- rte_spinlock_lock(&mdev->mbox_lock);
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs != num_msg)
- otx2_err("Routed messages: %d received: %d", num_msg,
- req_hdr->num_msgs);
-
- /* Get messages from mbox */
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- size = mbox->rx_start + msg->next_msgoff - offset;
-
- /* Reserve PF/VF mbox message */
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- rsp = otx2_mbox_alloc_msg(&dev->mbox_vfpf, vf, size);
- otx2_mbox_rsp_init(msg->id, rsp);
-
- /* Copy message from AF<->PF mbox to PF<->VF mbox */
- otx2_mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr),
- (uint8_t *)msg + sizeof(struct mbox_msghdr),
- size - sizeof(struct mbox_msghdr));
-
- /* Set status and sender pf_func data */
- rsp->rc = msg->rc;
- rsp->pcifunc = msg->pcifunc;
-
- /* Whenever a PF comes up, AF sends the link status to it but
- * when VF comes up no such event is sent to respective VF.
- * Using MBOX_MSG_NIX_LF_START_RX response from AF for the
- * purpose and send the link status of PF to VF.
- */
- if (msg->id == MBOX_MSG_NIX_LF_START_RX) {
- /* Send link status to VF */
- struct cgx_link_user_info linfo;
- struct mbox_msghdr *vf_msg;
- size_t sz;
-
- /* Get the link status */
- if (dev->ops && dev->ops->link_status_get)
- dev->ops->link_status_get(dev, &linfo);
-
- sz = RTE_ALIGN(otx2_mbox_id2size(
- MBOX_MSG_CGX_LINK_EVENT), MBOX_MSG_ALIGN);
- /* Prepare the message to be sent */
- vf_msg = otx2_mbox_alloc_msg(&dev->mbox_vfpf_up, vf,
- sz);
- otx2_mbox_req_init(MBOX_MSG_CGX_LINK_EVENT, vf_msg);
- memcpy((uint8_t *)vf_msg + sizeof(struct mbox_msghdr),
- &linfo, sizeof(struct cgx_link_user_info));
-
- vf_msg->rc = msg->rc;
- vf_msg->pcifunc = msg->pcifunc;
- /* Send to VF */
- otx2_mbox_msg_send(&dev->mbox_vfpf_up, vf);
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return req_hdr->num_msgs;
-}
-
-static int
-vf_pf_process_msgs(struct otx2_dev *dev, uint16_t vf)
-{
- int offset, routed = 0;
- struct otx2_mbox *mbox = &dev->mbox_vfpf;
- struct otx2_mbox_dev *mdev = &mbox->dev[vf];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- size_t size;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (!req_hdr->num_msgs)
- return 0;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < req_hdr->num_msgs; i++) {
-
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- size = mbox->rx_start + msg->next_msgoff - offset;
-
- /* RVU_PF_FUNC_S */
- msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- if (msg->id == MBOX_MSG_READY) {
- struct ready_msg_rsp *rsp;
- uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
-
- /* Handle READY message in PF */
- dev->active_vfs[vf / max_bits] |=
- BIT_ULL(vf % max_bits);
- rsp = (struct ready_msg_rsp *)
- otx2_mbox_alloc_msg(mbox, vf, sizeof(*rsp));
- otx2_mbox_rsp_init(msg->id, rsp);
-
- /* PF/VF function ID */
- rsp->hdr.pcifunc = msg->pcifunc;
- rsp->hdr.rc = 0;
- } else {
- struct mbox_msghdr *af_req;
- /* Reserve AF/PF mbox message */
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- af_req = otx2_mbox_alloc_msg(dev->mbox, 0, size);
- otx2_mbox_req_init(msg->id, af_req);
-
- /* Copy message from VF<->PF mbox to PF<->AF mbox */
- otx2_mbox_memcpy((uint8_t *)af_req +
- sizeof(struct mbox_msghdr),
- (uint8_t *)msg + sizeof(struct mbox_msghdr),
- size - sizeof(struct mbox_msghdr));
- af_req->pcifunc = msg->pcifunc;
- routed++;
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
-
- if (routed > 0) {
- otx2_base_dbg("pf:%d routed %d messages from vf:%d to AF",
- dev->pf, routed, vf);
- af_pf_wait_msg(dev, vf, routed);
- otx2_mbox_reset(dev->mbox, 0);
- }
-
- /* Send mbox responses to VF */
- if (mdev->num_msgs) {
- otx2_base_dbg("pf:%d reply %d messages to vf:%d",
- dev->pf, mdev->num_msgs, vf);
- otx2_mbox_msg_send(mbox, vf);
- }
-
- return i;
-}
-
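
Every processing loop in this file walks the shared region the same way:
skip the aligned mbox_hdr at rx_start, then hop from message to message via
next_msgoff, which is an offset relative to the region start rather than to
the previous message. A reduced, self-contained model of that walk (the
struct layouts here are illustrative, not the real wire format):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>

	struct hdr    { uint16_t num_msgs; };
	struct msghdr { uint16_t id; uint16_t next_msgoff; };

	int main(void)
	{
		uint8_t region[256];
		uint64_t rx_start = 0, off = rx_start + sizeof(struct hdr);
		struct hdr *h = (struct hdr *)region;
		struct msghdr m0 = { 0x1, 0 }, m1 = { 0x2, 0 };

		/* Build two packed messages; next_msgoff is region-relative */
		m0.next_msgoff = (uint16_t)(off + sizeof(struct msghdr));
		m1.next_msgoff = (uint16_t)(off + 2 * sizeof(struct msghdr));
		memcpy(region + off, &m0, sizeof(m0));
		memcpy(region + off + sizeof(m0), &m1, sizeof(m1));
		h->num_msgs = 2;

		/* Walk exactly as the driver loops above do */
		for (uint16_t i = 0; i < h->num_msgs; i++) {
			struct msghdr *m = (struct msghdr *)(region + off);
			printf("msg 0x%x\n", m->id);
			off = rx_start + m->next_msgoff;
		}
		return 0;
	}
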
-static int
-vf_pf_process_up_msgs(struct otx2_dev *dev, uint16_t vf)
-{
- struct otx2_mbox *mbox = &dev->mbox_vfpf_up;
- struct otx2_mbox_dev *mdev = &mbox->dev[vf];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int msgs_acked = 0;
- int offset;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return 0;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- msgs_acked++;
- /* RVU_PF_FUNC_S */
- msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- switch (msg->id) {
- case MBOX_MSG_CGX_LINK_EVENT:
- otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc, otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- break;
- case MBOX_MSG_CGX_PTP_RX_INFO:
- otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc, otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- break;
- default:
- otx2_err("Not handled UP msg 0x%x (%s) func:0x%x",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc);
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
- otx2_mbox_reset(mbox, vf);
- mdev->msgs_acked = msgs_acked;
- rte_wmb();
-
- return i;
-}
-
-static void
-otx2_vf_pf_mbox_handle_msg(void *param)
-{
- uint16_t vf, max_vf, max_bits;
- struct otx2_dev *dev = param;
-
- max_bits = sizeof(dev->intr.bits[0]) * sizeof(uint64_t);
- max_vf = max_bits * MAX_VFPF_DWORD_BITS;
-
- for (vf = 0; vf < max_vf; vf++) {
- if (dev->intr.bits[vf/max_bits] & BIT_ULL(vf%max_bits)) {
- otx2_base_dbg("Process vf:%d request (pf:%d, vf:%d)",
- vf, dev->pf, dev->vf);
- vf_pf_process_msgs(dev, vf);
- /* UP messages */
- vf_pf_process_up_msgs(dev, vf);
- dev->intr.bits[vf/max_bits] &= ~(BIT_ULL(vf%max_bits));
- }
- }
- dev->timer_set = 0;
-}
-
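
A note on the bitmap arithmetic above: each pending VF is one bit in a small
array of 64-bit dwords, so VF n lives at bits[n / 64], bit n % 64; the IRQ
handler sets the bit and the deferred alarm callback walks and clears it. A
minimal standalone sketch of that save/drain split:

	#include <stdint.h>
	#include <stdio.h>

	#define DWORDS         2   /* matches MAX_VFPF_DWORD_BITS */
	#define BITS_PER_DWORD 64

	int main(void)
	{
		uint64_t bits[DWORDS] = { 0 };
		unsigned int vf = 70;  /* lands in bits[1], bit 6 */

		/* Save: as done in the IRQ handler */
		bits[vf / BITS_PER_DWORD] |= 1ULL << (vf % BITS_PER_DWORD);

		/* Drain: as done in the alarm callback */
		for (unsigned int v = 0; v < DWORDS * BITS_PER_DWORD; v++) {
			if (!(bits[v / BITS_PER_DWORD] &
			      (1ULL << (v % BITS_PER_DWORD))))
				continue;
			printf("process vf %u\n", v);
			bits[v / BITS_PER_DWORD] &= ~(1ULL << (v % BITS_PER_DWORD));
		}
		return 0;
	}
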
-static void
-otx2_vf_pf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- bool alarm_set = false;
- uint64_t intr;
- int vfpf;
-
- for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) {
- intr = otx2_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
- if (!intr)
- continue;
-
- otx2_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)",
- vfpf, intr, dev->pf, dev->vf);
-
- /* Save and clear intr bits */
- dev->intr.bits[vfpf] |= intr;
- otx2_write64(intr, dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
- alarm_set = true;
- }
-
- if (!dev->timer_set && alarm_set) {
- dev->timer_set = 1;
- /* Start timer to handle messages */
- rte_eal_alarm_set(VF_PF_MBOX_TIMER_MS,
- otx2_vf_pf_mbox_handle_msg, dev);
- }
-}
-
-static void
-otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int msgs_acked = 0;
- int offset;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- msgs_acked++;
- otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d",
- msg->id, otx2_mbox_id2name(msg->id),
- otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
-
- switch (msg->id) {
- /* Add message id's that are handled here */
- case MBOX_MSG_READY:
- /* Get our identity */
- dev->pf_func = msg->pcifunc;
- break;
-
- default:
- if (msg->rc)
- otx2_err("Message (%s) response has err=%d",
- otx2_mbox_id2name(msg->id), msg->rc);
- break;
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
-
- otx2_mbox_reset(mbox, 0);
-	/* Update acked if someone is waiting for a message */
- mdev->msgs_acked = msgs_acked;
- rte_wmb();
-}
-
-/* Copies the message received from AF and sends it to VF */
-static void
-pf_vf_mbox_send_up_msg(struct otx2_dev *dev, void *rec_msg)
-{
- uint16_t max_bits = sizeof(dev->active_vfs[0]) * sizeof(uint64_t);
- struct otx2_mbox *vf_mbox = &dev->mbox_vfpf_up;
- struct msg_req *msg = rec_msg;
- struct mbox_msghdr *vf_msg;
- uint16_t vf;
- size_t size;
-
- size = RTE_ALIGN(otx2_mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN);
-	/* Send UP message to all VFs */
- for (vf = 0; vf < vf_mbox->ndevs; vf++) {
- /* VF active */
- if (!(dev->active_vfs[vf / max_bits] & (BIT_ULL(vf))))
- continue;
-
- otx2_base_dbg("(%s) size: %zx to VF: %d",
- otx2_mbox_id2name(msg->hdr.id), size, vf);
-
- /* Reserve PF/VF mbox message */
- vf_msg = otx2_mbox_alloc_msg(vf_mbox, vf, size);
- if (!vf_msg) {
- otx2_err("Failed to alloc VF%d UP message", vf);
- continue;
- }
- otx2_mbox_req_init(msg->hdr.id, vf_msg);
-
- /*
- * Copy message from AF<->PF UP mbox
- * to PF<->VF UP mbox
- */
- otx2_mbox_memcpy((uint8_t *)vf_msg +
- sizeof(struct mbox_msghdr), (uint8_t *)msg
- + sizeof(struct mbox_msghdr), size -
- sizeof(struct mbox_msghdr));
-
- vf_msg->rc = msg->hdr.rc;
- /* Set PF to be a sender */
- vf_msg->pcifunc = dev->pf_func;
-
- /* Send to VF */
- otx2_mbox_msg_send(vf_mbox, vf);
- }
-}
-
-static int
-otx2_mbox_up_handler_cgx_link_event(struct otx2_dev *dev,
- struct cgx_link_info_msg *msg,
- struct msg_rsp *rsp)
-{
- struct cgx_link_user_info *linfo = &msg->link_info;
-
- otx2_base_dbg("pf:%d/vf:%d NIC Link %s --> 0x%x (%s) from: pf:%d/vf:%d",
- otx2_get_pf(dev->pf_func), otx2_get_vf(dev->pf_func),
- linfo->link_up ? "UP" : "DOWN", msg->hdr.id,
- otx2_mbox_id2name(msg->hdr.id),
- otx2_get_pf(msg->hdr.pcifunc),
- otx2_get_vf(msg->hdr.pcifunc));
-
- /* PF gets link notification from AF */
- if (otx2_get_pf(msg->hdr.pcifunc) == 0) {
- if (dev->ops && dev->ops->link_status_update)
- dev->ops->link_status_update(dev, linfo);
-
- /* Forward the same message as received from AF to VF */
- pf_vf_mbox_send_up_msg(dev, msg);
- } else {
- /* VF gets link up notification */
- if (dev->ops && dev->ops->link_status_update)
- dev->ops->link_status_update(dev, linfo);
- }
-
- rsp->hdr.rc = 0;
- return 0;
-}
-
-static int
-otx2_mbox_up_handler_cgx_ptp_rx_info(struct otx2_dev *dev,
- struct cgx_ptp_rx_info_msg *msg,
- struct msg_rsp *rsp)
-{
- otx2_nix_dbg("pf:%d/vf:%d PTP mode %s --> 0x%x (%s) from: pf:%d/vf:%d",
- otx2_get_pf(dev->pf_func),
- otx2_get_vf(dev->pf_func),
- msg->ptp_en ? "ENABLED" : "DISABLED",
- msg->hdr.id, otx2_mbox_id2name(msg->hdr.id),
- otx2_get_pf(msg->hdr.pcifunc),
- otx2_get_vf(msg->hdr.pcifunc));
-
- /* PF gets PTP notification from AF */
- if (otx2_get_pf(msg->hdr.pcifunc) == 0) {
- if (dev->ops && dev->ops->ptp_info_update)
- dev->ops->ptp_info_update(dev, msg->ptp_en);
-
- /* Forward the same message as received from AF to VF */
- pf_vf_mbox_send_up_msg(dev, msg);
- } else {
- /* VF gets PTP notification */
- if (dev->ops && dev->ops->ptp_info_update)
- dev->ops->ptp_info_update(dev, msg->ptp_en);
- }
-
- rsp->hdr.rc = 0;
- return 0;
-}
-
-static int
-mbox_process_msgs_up(struct otx2_dev *dev, struct mbox_msghdr *req)
-{
-	/* Check if valid; if not, reply with an invalid msg */
- if (req->sig != OTX2_MBOX_REQ_SIG)
- return -EIO;
-
- switch (req->id) {
-#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
- case _id: { \
- struct _rsp_type *rsp; \
- int err; \
- \
- rsp = (struct _rsp_type *)otx2_mbox_alloc_msg( \
- &dev->mbox_up, 0, \
- sizeof(struct _rsp_type)); \
- if (!rsp) \
- return -ENOMEM; \
- \
- rsp->hdr.id = _id; \
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG; \
- rsp->hdr.pcifunc = dev->pf_func; \
- rsp->hdr.rc = 0; \
- \
- err = otx2_mbox_up_handler_ ## _fn_name( \
- dev, (struct _req_type *)req, rsp); \
- return err; \
- }
-MBOX_UP_CGX_MESSAGES
-#undef M
-
- default :
- otx2_reply_invalid_msg(&dev->mbox_up, 0, 0, req->id);
- }
-
- return -ENODEV;
-}
-
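
The M() macro trick in mbox_process_msgs_up() above is a classic X-macro:
MBOX_UP_CGX_MESSAGES expands M(...) once per message type, so the switch
gains one case per ID, each allocating the matching response struct and
calling the matching otx2_mbox_up_handler_*() function. The pattern in
miniature (toy names, not the real message table):

	#include <stdio.h>

	#define MESSAGES   \
	M(PING, 0x1, ping) \
	M(PONG, 0x2, pong)

	static int handle_ping(void) { puts("ping"); return 0; }
	static int handle_pong(void) { puts("pong"); return 0; }

	static int
	dispatch(int id)
	{
		switch (id) {
	#define M(_name, _id, _fn) case _id: return handle_ ## _fn();
		MESSAGES
	#undef M
		default:
			return -1;
		}
	}

	int main(void) { return dispatch(0x1) | dispatch(0x2); }
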
-static void
-otx2_process_msgs_up(struct otx2_dev *dev, struct otx2_mbox *mbox)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int i, err, offset;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d",
- msg->id, otx2_mbox_id2name(msg->id),
- otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- err = mbox_process_msgs_up(dev, msg);
- if (err)
- otx2_err("Error %d handling 0x%x (%s)",
- err, msg->id, otx2_mbox_id2name(msg->id));
- offset = mbox->rx_start + msg->next_msgoff;
- }
- /* Send mbox responses */
- if (mdev->num_msgs) {
- otx2_base_dbg("Reply num_msgs:%d", mdev->num_msgs);
- otx2_mbox_msg_send(mbox, 0);
- }
-}
-
-static void
-otx2_pf_vf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- uint64_t intr;
-
- intr = otx2_read64(dev->bar2 + RVU_VF_INT);
- if (intr == 0)
- otx2_base_dbg("Proceeding to check mbox UP messages if any");
-
- otx2_write64(intr, dev->bar2 + RVU_VF_INT);
- otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
-
- /* First process all configuration messages */
- otx2_process_msgs(dev, dev->mbox);
-
- /* Process Uplink messages */
- otx2_process_msgs_up(dev, &dev->mbox_up);
-}
-
-static void
-otx2_af_pf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- uint64_t intr;
-
- intr = otx2_read64(dev->bar2 + RVU_PF_INT);
- if (intr == 0)
- otx2_base_dbg("Proceeding to check mbox UP messages if any");
-
- otx2_write64(intr, dev->bar2 + RVU_PF_INT);
- otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
-
- /* First process all configuration messages */
- otx2_process_msgs(dev, dev->mbox);
-
- /* Process Uplink messages */
- otx2_process_msgs_up(dev, &dev->mbox_up);
-}
-
-static int
-mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i, rc;
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- dev->timer_set = 0;
-
- /* MBOX interrupt for VF(0...63) <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX0);
-
- if (rc) {
- otx2_err("Fail to register PF(VF0-63) mbox irq");
- return rc;
- }
-	/* MBOX interrupt for VF(64...127) <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX1);
-
- if (rc) {
- otx2_err("Fail to register PF(VF64-128) mbox irq");
- return rc;
- }
- /* MBOX interrupt AF <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_af_pf_mbox_irq,
- dev, RVU_PF_INT_VEC_AFPF_MBOX);
- if (rc) {
- otx2_err("Fail to register AF<->PF mbox irq");
- return rc;
- }
-
- /* HW enable intr */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- return rc;
-}
-
-static int
-mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int rc;
-
- /* Clear irq */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
-
- /* MBOX interrupt PF <-> VF */
- rc = otx2_register_irq(intr_handle, otx2_pf_vf_mbox_irq,
- dev, RVU_VF_INT_VEC_MBOX);
- if (rc) {
- otx2_err("Fail to register PF<->VF mbox irq");
- return rc;
- }
-
- /* HW enable intr */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT);
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S);
-
- return rc;
-}
-
-static int
-mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- return mbox_register_vf_irq(pci_dev, dev);
- else
- return mbox_register_pf_irq(pci_dev, dev);
-}
-
-static void
-mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i;
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- dev->timer_set = 0;
-
- rte_eal_alarm_cancel(otx2_vf_pf_mbox_handle_msg, dev);
-
-	/* Unregister the interrupt handler for each vector */
- /* MBOX interrupt for VF(0...63) <-> PF */
- otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX0);
-
-	/* MBOX interrupt for VF(64...127) <-> PF */
- otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX1);
-
- /* MBOX interrupt AF <-> PF */
- otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_AFPF_MBOX);
-
-}
-
-static void
-mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-
- /* Clear irq */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
-
- /* Unregister the interrupt handler */
- otx2_unregister_irq(intr_handle, otx2_pf_vf_mbox_irq, dev,
- RVU_VF_INT_VEC_MBOX);
-}
-
-static void
-mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- mbox_unregister_vf_irq(pci_dev, dev);
- else
- mbox_unregister_pf_irq(pci_dev, dev);
-}
-
-static int
-vf_flr_send_msg(struct otx2_dev *dev, uint16_t vf)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct msg_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_vf_flr(mbox);
- /* Overwrite pcifunc to indicate VF */
- req->hdr.pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- /* Sync message in interrupt context */
- rc = pf_af_sync_msg(dev, NULL);
- if (rc)
- otx2_err("Failed to send VF FLR mbox msg, rc=%d", rc);
-
- return rc;
-}
-
-static void
-otx2_pf_vf_flr_irq(void *param)
-{
- struct otx2_dev *dev = (struct otx2_dev *)param;
- uint16_t max_vf = 64, vf;
- uintptr_t bar2;
- uint64_t intr;
- int i;
-
- max_vf = (dev->maxvf > 0) ? dev->maxvf : 64;
- bar2 = dev->bar2;
-
- otx2_base_dbg("FLR VF interrupt: max_vf: %d", max_vf);
-
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
- intr = otx2_read64(bar2 + RVU_PF_VFFLR_INTX(i));
- if (!intr)
- continue;
-
- for (vf = 0; vf < max_vf; vf++) {
- if (!(intr & (1ULL << vf)))
- continue;
-
- otx2_base_dbg("FLR: i :%d intr: 0x%" PRIx64 ", vf-%d",
- i, intr, (64 * i + vf));
- /* Clear interrupt */
- otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFFLR_INTX(i));
- /* Disable the interrupt */
- otx2_write64(BIT_ULL(vf),
- bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
- /* Inform AF about VF reset */
- vf_flr_send_msg(dev, vf);
-
- /* Signal FLR finish */
- otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFTRPENDX(i));
- /* Enable interrupt */
- otx2_write64(~0ull,
- bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
- }
- }
-}
-
-static int
-vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i;
-
- otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
-
- otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR0);
-
- otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR1);
-
- return 0;
-}
-
-static int
-vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int i, rc;
-
- otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
-
- rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR0);
- if (rc)
- otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR0 rc=%d", rc);
-
- rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR1);
- if (rc)
- otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR1 rc=%d", rc);
-
- /* Enable HW interrupt */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INTX(i));
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFTRPENDX(i));
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
- }
- return 0;
-}
-
-/**
- * @internal
- * Get number of active VFs for the given PF device.
- */
-int
-otx2_dev_active_vfs(void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- int i, count = 0;
-
- for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
- count += __builtin_popcount(dev->active_vfs[i]);
-
- return count;
-}
-
-static void
-otx2_update_vf_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- switch (pci_dev->id.device_id) {
- case PCI_DEVID_OCTEONTX2_RVU_PF:
- break;
- case PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF:
- case PCI_DEVID_OCTEONTX2_RVU_NPA_VF:
- case PCI_DEVID_OCTEONTX2_RVU_CPT_VF:
- case PCI_DEVID_OCTEONTX2_RVU_AF_VF:
- case PCI_DEVID_OCTEONTX2_RVU_VF:
- case PCI_DEVID_OCTEONTX2_RVU_SDP_VF:
- dev->hwcap |= OTX2_HWCAP_F_VF;
- break;
- }
-}
-
-/**
- * @internal
- * Initialize the otx2 device
- */
-int
-otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- int up_direction = MBOX_DIR_PFAF_UP;
- int rc, direction = MBOX_DIR_PFAF;
- uint64_t intr_offset = RVU_PF_INT;
- struct otx2_dev *dev = otx2_dev;
- uintptr_t bar2, bar4;
- uint64_t bar4_addr;
- void *hwbase;
-
- bar2 = (uintptr_t)pci_dev->mem_resource[2].addr;
- bar4 = (uintptr_t)pci_dev->mem_resource[4].addr;
-
- if (bar2 == 0 || bar4 == 0) {
- otx2_err("Failed to get pci bars");
- rc = -ENODEV;
- goto error;
- }
-
- dev->node = pci_dev->device.numa_node;
- dev->maxvf = pci_dev->max_vfs;
- dev->bar2 = bar2;
- dev->bar4 = bar4;
-
- otx2_update_vf_hwcap(pci_dev, dev);
-
- if (otx2_dev_is_vf(dev)) {
- direction = MBOX_DIR_VFPF;
- up_direction = MBOX_DIR_VFPF_UP;
- intr_offset = RVU_VF_INT;
- }
-
- /* Initialize the local mbox */
- rc = otx2_mbox_init(&dev->mbox_local, bar4, bar2, direction, 1,
- intr_offset);
- if (rc)
- goto error;
- dev->mbox = &dev->mbox_local;
-
- rc = otx2_mbox_init(&dev->mbox_up, bar4, bar2, up_direction, 1,
- intr_offset);
- if (rc)
- goto error;
-
- /* Register mbox interrupts */
- rc = mbox_register_irq(pci_dev, dev);
- if (rc)
- goto mbox_fini;
-
- /* Check the readiness of PF/VF */
- rc = otx2_send_ready_msg(dev->mbox, &dev->pf_func);
- if (rc)
- goto mbox_unregister;
-
- dev->pf = otx2_get_pf(dev->pf_func);
- dev->vf = otx2_get_vf(dev->pf_func);
- memset(&dev->active_vfs, 0, sizeof(dev->active_vfs));
-
-	/* VF devices found under this PF device */
- if (pci_dev->max_vfs > 0) {
-
-		/* Remap mbox area for all VFs */
- bar4_addr = otx2_read64(bar2 + RVU_PF_VF_BAR4_ADDR);
- if (bar4_addr == 0) {
- rc = -ENODEV;
- goto mbox_fini;
- }
-
- hwbase = mbox_mem_map(bar4_addr, MBOX_SIZE * pci_dev->max_vfs);
- if (hwbase == MAP_FAILED) {
- rc = -ENOMEM;
- goto mbox_fini;
- }
- /* Init mbox object */
- rc = otx2_mbox_init(&dev->mbox_vfpf, (uintptr_t)hwbase,
- bar2, MBOX_DIR_PFVF, pci_dev->max_vfs,
- intr_offset);
- if (rc)
- goto iounmap;
-
- /* PF -> VF UP messages */
- rc = otx2_mbox_init(&dev->mbox_vfpf_up, (uintptr_t)hwbase,
- bar2, MBOX_DIR_PFVF_UP, pci_dev->max_vfs,
- intr_offset);
- if (rc)
- goto mbox_fini;
- }
-
- /* Register VF-FLR irq handlers */
- if (otx2_dev_is_pf(dev)) {
- rc = vf_flr_register_irqs(pci_dev, dev);
- if (rc)
- goto iounmap;
- }
- dev->mbox_active = 1;
- return rc;
-
-iounmap:
- mbox_mem_unmap(hwbase, MBOX_SIZE * pci_dev->max_vfs);
-mbox_unregister:
- mbox_unregister_irq(pci_dev, dev);
-mbox_fini:
- otx2_mbox_fini(dev->mbox);
- otx2_mbox_fini(&dev->mbox_up);
-error:
- return rc;
-}
-
-/**
- * @internal
- * Finalize the otx2 device
- */
-void
-otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_mbox *mbox;
-
- /* Clear references to this pci dev */
- idev = otx2_intra_dev_get_cfg();
- if (idev->npa_lf && idev->npa_lf->pci_dev == pci_dev)
- idev->npa_lf = NULL;
-
- mbox_unregister_irq(pci_dev, dev);
-
- if (otx2_dev_is_pf(dev))
- vf_flr_unregister_irqs(pci_dev, dev);
- /* Release PF - VF */
- mbox = &dev->mbox_vfpf;
- if (mbox->hwbase && mbox->dev)
- mbox_mem_unmap((void *)mbox->hwbase,
- MBOX_SIZE * pci_dev->max_vfs);
- otx2_mbox_fini(mbox);
- mbox = &dev->mbox_vfpf_up;
- otx2_mbox_fini(mbox);
-
- /* Release PF - AF */
- mbox = dev->mbox;
- otx2_mbox_fini(mbox);
- mbox = &dev->mbox_up;
- otx2_mbox_fini(mbox);
- dev->mbox_active = 0;
-
- /* Disable MSIX vectors */
- otx2_disable_irqs(intr_handle);
-}
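
otx2_dev_priv_init() above uses the kernel-style goto unwind: each setup
step that can fail jumps to a label that tears down only the steps already
completed, so the error path mirrors the success path in reverse. The
skeleton, reduced to a runnable toy (all helper names here are hypothetical):

	#include <stdio.h>

	static int mbox_setup(void)     { return 0; }  /* hypothetical helpers */
	static int irq_setup(void)      { return 0; }
	static int handshake(void)      { return -1; } /* force the unwind path */
	static void irq_teardown(void)  { puts("irq torn down"); }
	static void mbox_teardown(void) { puts("mbox torn down"); }

	static int
	dev_init(void)
	{
		int rc;

		rc = mbox_setup();
		if (rc)
			goto error;
		rc = irq_setup();
		if (rc)
			goto mbox_fini;      /* undo step 1 only */
		rc = handshake();
		if (rc)
			goto irq_fini;       /* undo steps 2 and 1 */
		return 0;

	irq_fini:
		irq_teardown();
	mbox_fini:
		mbox_teardown();
	error:
		return rc;
	}

	int main(void) { return dev_init() ? 1 : 0; }
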
diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
deleted file mode 100644
index d5b2b0d9af..0000000000
--- a/drivers/common/octeontx2/otx2_dev.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_DEV_H
-#define _OTX2_DEV_H
-
-#include <rte_bus_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-#include "otx2_mbox.h"
-#include "otx2_mempool.h"
-
-/* Common HWCAP flags. Allocate flags starting from the LSB side */
-#define OTX2_HWCAP_F_VF BIT_ULL(8) /* VF device */
-#define otx2_dev_is_vf(dev) (dev->hwcap & OTX2_HWCAP_F_VF)
-#define otx2_dev_is_pf(dev) (!(dev->hwcap & OTX2_HWCAP_F_VF))
-#define otx2_dev_is_lbk(dev) ((dev->hwcap & OTX2_HWCAP_F_VF) && \
- (dev->tx_chan_base < 0x700))
-#define otx2_dev_revid(dev) (dev->hwcap & 0xFF)
-#define otx2_dev_is_sdp(dev) (dev->sdp_link)
-
-#define otx2_dev_is_vf_or_sdp(dev) \
- (otx2_dev_is_vf(dev) || otx2_dev_is_sdp(dev))
-
-#define otx2_dev_is_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0))
-#define otx2_dev_is_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_95xx_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1))
-#define otx2_dev_is_95xx_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1))
-
-#define otx2_dev_is_96xx_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-#define otx2_dev_is_96xx_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_96xx_Cx(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_96xx_C0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_98xx(dev) \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x3)
-
-struct otx2_dev;
-
-/* Link status update callback */
-typedef void (*otx2_link_status_update_t)(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-/* PTP info callback */
-typedef int (*otx2_ptp_info_t)(struct otx2_dev *dev, bool ptp_en);
-/* Link status get callback */
-typedef void (*otx2_link_status_get_t)(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-
-struct otx2_dev_ops {
- otx2_link_status_update_t link_status_update;
- otx2_ptp_info_t ptp_info_update;
- otx2_link_status_get_t link_status_get;
-};
-
-#define OTX2_DEV \
- int node __rte_cache_aligned; \
- uint16_t pf; \
- int16_t vf; \
- uint16_t pf_func; \
- uint8_t mbox_active; \
- bool drv_inited; \
- uint64_t active_vfs[MAX_VFPF_DWORD_BITS]; \
- uintptr_t bar2; \
- uintptr_t bar4; \
- struct otx2_mbox mbox_local; \
- struct otx2_mbox mbox_up; \
- struct otx2_mbox mbox_vfpf; \
- struct otx2_mbox mbox_vfpf_up; \
- otx2_intr_t intr; \
- int timer_set; /* ~0 : no alarm handling */ \
- uint64_t hwcap; \
- struct otx2_npa_lf npalf; \
- struct otx2_mbox *mbox; \
- uint16_t maxvf; \
- const struct otx2_dev_ops *ops
-
-struct otx2_dev {
- OTX2_DEV;
-};
-
-__rte_internal
-int otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev);
-
-/* Common dev init and fini routines */
-
-static __rte_always_inline int
-otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- uint8_t rev_id;
- int rc;
-
- rc = rte_pci_read_config(pci_dev, &rev_id,
- 1, RVU_PCI_REVISION_ID);
- if (rc != 1) {
- otx2_err("Failed to read pci revision id, rc=%d", rc);
- return rc;
- }
-
- dev->hwcap = rev_id;
- return otx2_dev_priv_init(pci_dev, otx2_dev);
-}
-
-__rte_internal
-void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev);
-__rte_internal
-int otx2_dev_active_vfs(void *otx2_dev);
-
-#define RVU_PFVF_PF_SHIFT 10
-#define RVU_PFVF_PF_MASK 0x3F
-#define RVU_PFVF_FUNC_SHIFT 0
-#define RVU_PFVF_FUNC_MASK 0x3FF
-
-static inline int
-otx2_get_vf(uint16_t pf_func)
-{
- return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1);
-}
-
-static inline int
-otx2_get_pf(uint16_t pf_func)
-{
- return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
-}
-
-static inline int
-otx2_pfvf_func(int pf, int vf)
-{
- return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1);
-}
-
-static inline int
-otx2_is_afvf(uint16_t pf_func)
-{
- return !(pf_func & ~RVU_PFVF_FUNC_MASK);
-}
-
-#endif /* _OTX2_DEV_H */
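
For reference, the RVU pcifunc encoding handled by the inline helpers above
packs the PF number into bits [15:10] and the function number into bits
[9:0], where function 0 is the PF itself and function n is VF n-1; that is
why otx2_pfvf_func() adds 1 and otx2_get_vf() subtracts 1. A small
self-check using the same shifts and masks:

	#include <assert.h>
	#include <stdint.h>

	#define RVU_PFVF_PF_SHIFT   10
	#define RVU_PFVF_PF_MASK    0x3F
	#define RVU_PFVF_FUNC_SHIFT 0
	#define RVU_PFVF_FUNC_MASK  0x3FF

	int main(void)
	{
		/* pf 2, vf 5: function field carries vf + 1 */
		uint16_t pf_func = (2 << RVU_PFVF_PF_SHIFT) | (5 + 1);

		assert(((pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK) == 2);
		assert((((pf_func >> RVU_PFVF_FUNC_SHIFT) &
			 RVU_PFVF_FUNC_MASK) - 1) == 5);
		return 0;
	}
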
diff --git a/drivers/common/octeontx2/otx2_io_arm64.h b/drivers/common/octeontx2/otx2_io_arm64.h
deleted file mode 100644
index 34268e3af3..0000000000
--- a/drivers/common/octeontx2/otx2_io_arm64.h
+++ /dev/null
@@ -1,114 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IO_ARM64_H_
-#define _OTX2_IO_ARM64_H_
-
-#define otx2_load_pair(val0, val1, addr) ({ \
- asm volatile( \
- "ldp %x[x0], %x[x1], [%x[p1]]" \
- :[x0]"=r"(val0), [x1]"=r"(val1) \
- :[p1]"r"(addr) \
- ); })
-
-#define otx2_store_pair(val0, val1, addr) ({ \
- asm volatile( \
- "stp %x[x0], %x[x1], [%x[p1],#0]!" \
- ::[x0]"r"(val0), [x1]"r"(val1), [p1]"r"(addr) \
- ); })
-
-#define otx2_prefetch_store_keep(ptr) ({\
- asm volatile("prfm pstl1keep, [%x0]\n" : : "r" (ptr)); })
-
-#if defined(__ARM_FEATURE_SVE)
-#define __LSE_PREAMBLE " .cpu generic+lse+sve\n"
-#else
-#define __LSE_PREAMBLE " .cpu generic+lse\n"
-#endif
-
-static __rte_always_inline uint64_t
-otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr)
-{
- uint64_t result;
-
- /* Atomic add with no ordering */
- asm volatile (
- __LSE_PREAMBLE
- "ldadd %x[i], %x[r], [%[b]]"
- : [r] "=r" (result), "+m" (*ptr)
- : [i] "r" (incr), [b] "r" (ptr)
- : "memory");
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_atomic64_add_sync(int64_t incr, int64_t *ptr)
-{
- uint64_t result;
-
- /* Atomic add with ordering */
- asm volatile (
- __LSE_PREAMBLE
- "ldadda %x[i], %x[r], [%[b]]"
- : [r] "=r" (result), "+m" (*ptr)
- : [i] "r" (incr), [b] "r" (ptr)
- : "memory");
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_lmt_submit(rte_iova_t io_address)
-{
- uint64_t result;
-
- asm volatile (
- __LSE_PREAMBLE
- "ldeor xzr,%x[rf],[%[rs]]" :
- [rf] "=r"(result): [rs] "r"(io_address));
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_lmt_submit_release(rte_iova_t io_address)
-{
- uint64_t result;
-
- asm volatile (
- __LSE_PREAMBLE
- "ldeorl xzr,%x[rf],[%[rs]]" :
- [rf] "=r"(result) : [rs] "r"(io_address));
- return result;
-}
-
-static __rte_always_inline void
-otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext)
-{
- volatile const __uint128_t *src128 = (const __uint128_t *)in;
- volatile __uint128_t *dst128 = (__uint128_t *)out;
- dst128[0] = src128[0];
- dst128[1] = src128[1];
-	/* lmtext receives the following values:
-	 * 1: NIX_SUBDC_EXT needed, i.e. the Tx VLAN case
-	 * 2: NIX_SUBDC_EXT + NIX_SUBDC_MEM, i.e. the timestamp case
-	 */
- if (lmtext) {
- dst128[2] = src128[2];
- if (lmtext > 1)
- dst128[3] = src128[3];
- }
-}
-
-static __rte_always_inline void
-otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
-{
- volatile const __uint128_t *src128 = (const __uint128_t *)in;
- volatile __uint128_t *dst128 = (__uint128_t *)out;
- uint8_t i;
-
- for (i = 0; i < segdw; i++)
- dst128[i] = src128[i];
-}
-
-#undef __LSE_PREAMBLE
-#endif /* _OTX2_IO_ARM64_H_ */
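
The ldadd/ldadda instructions in the deleted header are ARMv8.1 LSE atomic
fetch-and-add, without and with acquire ordering respectively (the
__LSE_PREAMBLE only tells the assembler to accept them). As a mental model
only, not the driver's code, the same semantics in portable C11 atomics:

	#include <stdatomic.h>
	#include <stdint.h>
	#include <stdio.h>

	/* fetch-and-add, no ordering, like otx2_atomic64_add_nosync() */
	static uint64_t
	add_nosync(_Atomic int64_t *ptr, int64_t incr)
	{
		return (uint64_t)atomic_fetch_add_explicit(ptr, incr,
							   memory_order_relaxed);
	}

	/* fetch-and-add with acquire ordering, like otx2_atomic64_add_sync() */
	static uint64_t
	add_sync(_Atomic int64_t *ptr, int64_t incr)
	{
		return (uint64_t)atomic_fetch_add_explicit(ptr, incr,
							   memory_order_acquire);
	}

	int main(void)
	{
		_Atomic int64_t v = 0;

		/* both return the value before the add: prints "0 5" */
		printf("%llu %llu\n", (unsigned long long)add_nosync(&v, 5),
		       (unsigned long long)add_sync(&v, 5));
		return 0;
	}
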
diff --git a/drivers/common/octeontx2/otx2_io_generic.h b/drivers/common/octeontx2/otx2_io_generic.h
deleted file mode 100644
index 3436a6c3d5..0000000000
--- a/drivers/common/octeontx2/otx2_io_generic.h
+++ /dev/null
@@ -1,75 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IO_GENERIC_H_
-#define _OTX2_IO_GENERIC_H_
-
-#include <string.h>
-
-#define otx2_load_pair(val0, val1, addr) \
-do { \
- val0 = rte_read64_relaxed((void *)(addr)); \
- val1 = rte_read64_relaxed((uint8_t *)(addr) + 8); \
-} while (0)
-
-#define otx2_store_pair(val0, val1, addr) \
-do { \
- rte_write64_relaxed(val0, (void *)(addr)); \
- rte_write64_relaxed(val1, (((uint8_t *)(addr)) + 8)); \
-} while (0)
-
-#define otx2_prefetch_store_keep(ptr) do {} while (0)
-
-static inline uint64_t
-otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr)
-{
- RTE_SET_USED(ptr);
- RTE_SET_USED(incr);
-
- return 0;
-}
-
-static inline uint64_t
-otx2_atomic64_add_sync(int64_t incr, int64_t *ptr)
-{
- RTE_SET_USED(ptr);
- RTE_SET_USED(incr);
-
- return 0;
-}
-
-static inline int64_t
-otx2_lmt_submit(uint64_t io_address)
-{
- RTE_SET_USED(io_address);
-
- return 0;
-}
-
-static inline int64_t
-otx2_lmt_submit_release(uint64_t io_address)
-{
- RTE_SET_USED(io_address);
-
- return 0;
-}
-
-static __rte_always_inline void
-otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext)
-{
-	/* Copy four words if lmtext = 0,
-	 * six words if lmtext = 1,
-	 * eight words if lmtext = 2
-	 */
- memcpy(out, in, (4 + (2 * lmtext)) * sizeof(uint64_t));
-}
-
-static __rte_always_inline void
-otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
-{
- RTE_SET_USED(out);
- RTE_SET_USED(in);
- RTE_SET_USED(segdw);
-}
-#endif /* _OTX2_IO_GENERIC_H_ */
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
deleted file mode 100644
index 93fc95c0e1..0000000000
--- a/drivers/common/octeontx2/otx2_irq.c
+++ /dev/null
@@ -1,288 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_alarm.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_interrupts.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-
-#ifdef RTE_EAL_VFIO
-
-#include <inttypes.h>
-#include <linux/vfio.h>
-#include <sys/eventfd.h>
-#include <sys/ioctl.h>
-#include <unistd.h>
-
-#define MAX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID
-#define MSIX_IRQ_SET_BUF_LEN (sizeof(struct vfio_irq_set) + \
- sizeof(int) * (MAX_INTR_VEC_ID))
-
-static int
-irq_get_info(struct rte_intr_handle *intr_handle)
-{
- struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc, vfio_dev_fd;
-
- irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
- if (rc < 0) {
- otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
- return rc;
- }
-
- otx2_base_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x",
- irq.flags, irq.index, irq.count, MAX_INTR_VEC_ID);
-
- if (irq.count > MAX_INTR_VEC_ID) {
- otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- rte_intr_max_intr_get(intr_handle),
- MAX_INTR_VEC_ID);
- if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
- return -1;
- } else {
- if (rte_intr_max_intr_set(intr_handle, irq.count))
- return -1;
- }
-
- return 0;
-}
-
-static int
-irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
-{
- char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- struct vfio_irq_set *irq_set;
- int len, rc, vfio_dev_fd;
- int32_t *fd_ptr;
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("vector=%d greater than max_intr=%d", vec,
- rte_intr_max_intr_get(intr_handle));
- return -EINVAL;
- }
-
- len = sizeof(struct vfio_irq_set) + sizeof(int32_t);
-
- irq_set = (struct vfio_irq_set *)irq_set_buf;
- irq_set->argsz = len;
-
- irq_set->start = vec;
- irq_set->count = 1;
- irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
- VFIO_IRQ_SET_ACTION_TRIGGER;
- irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- /* Use vec fd to set interrupt vectors */
- fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
- if (rc)
- otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
-
- return rc;
-}
-
-static int
-irq_init(struct rte_intr_handle *intr_handle)
-{
- char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- struct vfio_irq_set *irq_set;
- int len, rc, vfio_dev_fd;
- int32_t *fd_ptr;
- uint32_t i;
-
- if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
- otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- rte_intr_max_intr_get(intr_handle),
- MAX_INTR_VEC_ID);
- return -ERANGE;
- }
-
- len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
-
- irq_set = (struct vfio_irq_set *)irq_set_buf;
- irq_set->argsz = len;
- irq_set->start = 0;
- irq_set->count = rte_intr_max_intr_get(intr_handle);
- irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
- VFIO_IRQ_SET_ACTION_TRIGGER;
- irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- fd_ptr = (int32_t *)&irq_set->data[0];
- for (i = 0; i < irq_set->count; i++)
- fd_ptr[i] = -1;
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
- if (rc)
- otx2_err("Failed to set irqs vector rc=%d", rc);
-
- return rc;
-}
-
-/**
- * @internal
- * Disable IRQ
- */
-int
-otx2_disable_irqs(struct rte_intr_handle *intr_handle)
-{
- /* Clear max_intr to indicate re-init next time */
- if (rte_intr_max_intr_set(intr_handle, 0))
- return -1;
- return rte_intr_disable(intr_handle);
-}
-
-/**
- * @internal
- * Register IRQ
- */
-int
-otx2_register_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec)
-{
- struct rte_intr_handle *tmp_handle;
- uint32_t nb_efd, tmp_nb_efd;
- int rc, fd;
-
-	/* If max_intr is not set yet, read it from VFIO */
- if (rte_intr_max_intr_get(intr_handle) == 0) {
- irq_get_info(intr_handle);
- irq_init(intr_handle);
- }
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("Vector=%d greater than max_intr=%d", vec,
- rte_intr_max_intr_get(intr_handle));
- return -EINVAL;
- }
-
- tmp_handle = intr_handle;
- /* Create new eventfd for interrupt vector */
- fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (fd == -1)
- return -ENODEV;
-
- if (rte_intr_fd_set(tmp_handle, fd))
- return errno;
-
- /* Register vector interrupt callback */
- rc = rte_intr_callback_register(tmp_handle, cb, data);
- if (rc) {
- otx2_err("Failed to register vector:0x%x irq callback.", vec);
- return rc;
- }
-
- rte_intr_efds_index_set(intr_handle, vec, fd);
- nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
- vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
- rte_intr_nb_efd_set(intr_handle, nb_efd);
-
- tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
- if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
- rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
-
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- rte_intr_nb_efd_get(intr_handle),
- rte_intr_max_intr_get(intr_handle));
-
- /* Enable MSIX vectors to VFIO */
- return irq_config(intr_handle, vec);
-}
-
-/**
- * @internal
- * Unregister IRQ
- */
-void
-otx2_unregister_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec)
-{
- struct rte_intr_handle *tmp_handle;
- uint8_t retries = 5; /* 5 ms */
- int rc, fd;
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, rte_intr_max_intr_get(intr_handle));
- return;
- }
-
- tmp_handle = intr_handle;
- fd = rte_intr_efds_index_get(intr_handle, vec);
- if (fd == -1)
- return;
-
- if (rte_intr_fd_set(tmp_handle, fd))
- return;
-
- do {
- /* Un-register callback func from platform lib */
- rc = rte_intr_callback_unregister(tmp_handle, cb, data);
- /* Retry only if -EAGAIN */
- if (rc != -EAGAIN)
- break;
- rte_delay_ms(1);
- retries--;
- } while (retries);
-
- if (rc < 0) {
- otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
- return;
- }
-
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- rte_intr_nb_efd_get(intr_handle),
- rte_intr_max_intr_get(intr_handle));
-
- if (rte_intr_efds_index_get(intr_handle, vec) != -1)
- close(rte_intr_efds_index_get(intr_handle, vec));
- /* Disable MSIX vectors from VFIO */
- rte_intr_efds_index_set(intr_handle, vec, -1);
- irq_config(intr_handle, vec);
-}
-
-#else
-
-/**
- * @internal
- * Register IRQ
- */
-int otx2_register_irq(__rte_unused struct rte_intr_handle *intr_handle,
- __rte_unused rte_intr_callback_fn cb,
- __rte_unused void *data, __rte_unused unsigned int vec)
-{
- return -ENOTSUP;
-}
-
-
-/**
- * @internal
- * Unregister IRQ
- */
-void otx2_unregister_irq(__rte_unused struct rte_intr_handle *intr_handle,
- __rte_unused rte_intr_callback_fn cb,
- __rte_unused void *data, __rte_unused unsigned int vec)
-{
-}
-
-/**
- * @internal
- * Disable IRQ
- */
-int otx2_disable_irqs(__rte_unused struct rte_intr_handle *intr_handle)
-{
- return -ENOTSUP;
-}
-
-#endif /* RTE_EAL_VFIO */
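
The VFIO plumbing in irq_config()/otx2_register_irq() above follows the
standard pattern: create an eventfd per MSI-X vector and hand it to the
kernel via VFIO_DEVICE_SET_IRQS, after which the interrupt is delivered as
an eventfd wakeup. Stripped of the DPDK interrupt-handle wrappers, the core
call looks roughly like this (a sketch with error handling trimmed; it needs
a real VFIO device fd to run):

	#include <linux/vfio.h>
	#include <string.h>
	#include <sys/eventfd.h>
	#include <sys/ioctl.h>
	#include <unistd.h>

	/* Bind MSI-X vector 'vec' of a VFIO device to a fresh eventfd;
	 * returns the eventfd on success, -1 on failure.
	 */
	static int
	vfio_bind_vector(int vfio_dev_fd, unsigned int vec)
	{
		char buf[sizeof(struct vfio_irq_set) + sizeof(int)];
		struct vfio_irq_set *set = (struct vfio_irq_set *)buf;
		int efd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);

		if (efd < 0)
			return -1;

		memset(buf, 0, sizeof(buf));
		set->argsz = sizeof(buf);
		set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
			     VFIO_IRQ_SET_ACTION_TRIGGER;
		set->index = VFIO_PCI_MSIX_IRQ_INDEX;
		set->start = vec;    /* one vector, starting at 'vec' */
		set->count = 1;
		memcpy(set->data, &efd, sizeof(int));

		return ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, set) ? -1 : efd;
	}
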
diff --git a/drivers/common/octeontx2/otx2_irq.h b/drivers/common/octeontx2/otx2_irq.h
deleted file mode 100644
index 0683cf5543..0000000000
--- a/drivers/common/octeontx2/otx2_irq.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IRQ_H_
-#define _OTX2_IRQ_H_
-
-#include <rte_pci.h>
-#include <rte_interrupts.h>
-
-#include "otx2_common.h"
-
-typedef struct {
-/* 128 devices translate to two 64-bit dwords */
-#define MAX_VFPF_DWORD_BITS 2
- uint64_t bits[MAX_VFPF_DWORD_BITS];
-} otx2_intr_t;
-
-__rte_internal
-int otx2_register_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec);
-__rte_internal
-void otx2_unregister_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec);
-__rte_internal
-int otx2_disable_irqs(struct rte_intr_handle *intr_handle);
-
-#endif /* _OTX2_IRQ_H_ */
diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c
deleted file mode 100644
index 6df1e8ea63..0000000000
--- a/drivers/common/octeontx2/otx2_mbox.c
+++ /dev/null
@@ -1,465 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <errno.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include <rte_atomic.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "otx2_mbox.h"
-#include "otx2_dev.h"
-
-#define RVU_AF_AFPF_MBOX0 (0x02000)
-#define RVU_AF_AFPF_MBOX1 (0x02008)
-
-#define RVU_PF_PFAF_MBOX0 (0xC00)
-#define RVU_PF_PFAF_MBOX1 (0xC08)
-
-#define RVU_PF_VFX_PFVF_MBOX0 (0x0000)
-#define RVU_PF_VFX_PFVF_MBOX1 (0x0008)
-
-#define RVU_VF_VFPF_MBOX0 (0x0000)
-#define RVU_VF_VFPF_MBOX1 (0x0008)
-
-static inline uint16_t
-msgs_offset(void)
-{
- return RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
-}
-
-void
-otx2_mbox_fini(struct otx2_mbox *mbox)
-{
- mbox->reg_base = 0;
- mbox->hwbase = 0;
- rte_free(mbox->dev);
- mbox->dev = NULL;
-}
-
-void
-otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
-
- rte_spinlock_lock(&mdev->mbox_lock);
- mdev->msg_size = 0;
- mdev->rsp_size = 0;
- tx_hdr->msg_size = 0;
- tx_hdr->num_msgs = 0;
- rx_hdr->msg_size = 0;
- rx_hdr->num_msgs = 0;
- rte_spinlock_unlock(&mdev->mbox_lock);
-}
-
-int
-otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
- int direction, int ndevs, uint64_t intr_offset)
-{
- struct otx2_mbox_dev *mdev;
- int devid;
-
- mbox->intr_offset = intr_offset;
- mbox->reg_base = reg_base;
- mbox->hwbase = hwbase;
-
- switch (direction) {
- case MBOX_DIR_AFPF:
- case MBOX_DIR_PFVF:
- mbox->tx_start = MBOX_DOWN_TX_START;
- mbox->rx_start = MBOX_DOWN_RX_START;
- mbox->tx_size = MBOX_DOWN_TX_SIZE;
- mbox->rx_size = MBOX_DOWN_RX_SIZE;
- break;
- case MBOX_DIR_PFAF:
- case MBOX_DIR_VFPF:
- mbox->tx_start = MBOX_DOWN_RX_START;
- mbox->rx_start = MBOX_DOWN_TX_START;
- mbox->tx_size = MBOX_DOWN_RX_SIZE;
- mbox->rx_size = MBOX_DOWN_TX_SIZE;
- break;
- case MBOX_DIR_AFPF_UP:
- case MBOX_DIR_PFVF_UP:
- mbox->tx_start = MBOX_UP_TX_START;
- mbox->rx_start = MBOX_UP_RX_START;
- mbox->tx_size = MBOX_UP_TX_SIZE;
- mbox->rx_size = MBOX_UP_RX_SIZE;
- break;
- case MBOX_DIR_PFAF_UP:
- case MBOX_DIR_VFPF_UP:
- mbox->tx_start = MBOX_UP_RX_START;
- mbox->rx_start = MBOX_UP_TX_START;
- mbox->tx_size = MBOX_UP_RX_SIZE;
- mbox->rx_size = MBOX_UP_TX_SIZE;
- break;
- default:
- return -ENODEV;
- }
-
- switch (direction) {
- case MBOX_DIR_AFPF:
- case MBOX_DIR_AFPF_UP:
- mbox->trigger = RVU_AF_AFPF_MBOX0;
- mbox->tr_shift = 4;
- break;
- case MBOX_DIR_PFAF:
- case MBOX_DIR_PFAF_UP:
- mbox->trigger = RVU_PF_PFAF_MBOX1;
- mbox->tr_shift = 0;
- break;
- case MBOX_DIR_PFVF:
- case MBOX_DIR_PFVF_UP:
- mbox->trigger = RVU_PF_VFX_PFVF_MBOX0;
- mbox->tr_shift = 12;
- break;
- case MBOX_DIR_VFPF:
- case MBOX_DIR_VFPF_UP:
- mbox->trigger = RVU_VF_VFPF_MBOX1;
- mbox->tr_shift = 0;
- break;
- default:
- return -ENODEV;
- }
-
- mbox->dev = rte_zmalloc("mbox dev",
- ndevs * sizeof(struct otx2_mbox_dev),
- OTX2_ALIGN);
- if (!mbox->dev) {
- otx2_mbox_fini(mbox);
- return -ENOMEM;
- }
- mbox->ndevs = ndevs;
- for (devid = 0; devid < ndevs; devid++) {
- mdev = &mbox->dev[devid];
- mdev->mbase = (void *)(mbox->hwbase + (devid * MBOX_SIZE));
- rte_spinlock_init(&mdev->mbox_lock);
- /* Init header to reset value */
- otx2_mbox_reset(mbox, devid);
- }
-
- return 0;
-}
-
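
The direction switch in otx2_mbox_init() encodes one invariant: both ends
share the same 64 KB region, and the initiator's TX window is the
responder's RX window (AFPF/PFVF use DOWN_TX/DOWN_RX, while PFAF/VFPF use
the same offsets swapped). A tiny check of that mirroring, using the
DOWN-channel constants from otx2_mbox.h:

	#include <assert.h>
	#include <stdint.h>

	#define DOWN_RX_START 0
	#define DOWN_RX_SIZE  (46 * 1024)
	#define DOWN_TX_START (DOWN_RX_START + DOWN_RX_SIZE)

	int main(void)
	{
		/* AF side of the AF<->PF down channel (MBOX_DIR_AFPF) */
		uint64_t af_tx = DOWN_TX_START, af_rx = DOWN_RX_START;
		/* PF side (MBOX_DIR_PFAF): same memory, roles mirrored */
		uint64_t pf_tx = DOWN_RX_START, pf_rx = DOWN_TX_START;

		/* One side's TX region is the other side's RX region */
		assert(af_tx == pf_rx && af_rx == pf_tx);
		return 0;
	}
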
-/**
- * @internal
- * Allocate a message response
- */
-struct mbox_msghdr *
-otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid, int size,
- int size_rsp)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr = NULL;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- size_rsp = RTE_ALIGN(size_rsp, MBOX_MSG_ALIGN);
- /* Check if there is space in mailbox */
- if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset())
- goto exit;
- if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset())
- goto exit;
- if (mdev->msg_size == 0)
- mdev->num_msgs = 0;
- mdev->num_msgs++;
-
- msghdr = (struct mbox_msghdr *)(((uintptr_t)mdev->mbase +
- mbox->tx_start + msgs_offset() + mdev->msg_size));
-
- /* Clear the whole msg region */
- otx2_mbox_memset(msghdr, 0, sizeof(*msghdr) + size);
- /* Init message header with reset values */
- msghdr->ver = OTX2_MBOX_VERSION;
- mdev->msg_size += size;
- mdev->rsp_size += size_rsp;
- msghdr->next_msgoff = mdev->msg_size + msgs_offset();
-exit:
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return msghdr;
-}
-
-/**
- * @internal
- * Send a mailbox message
- */
-void
-otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
-
- /* Reset header for next messages */
- tx_hdr->msg_size = mdev->msg_size;
- mdev->msg_size = 0;
- mdev->rsp_size = 0;
- mdev->msgs_acked = 0;
-
-	/* num_msgs != 0 signals to the peer that the buffer contains
-	 * messages, so it must be written only after the payload is copied.
-	 */
- tx_hdr->num_msgs = mdev->num_msgs;
- rx_hdr->num_msgs = 0;
-
- /* Sync mbox data into memory */
- rte_wmb();
-
- /* The interrupt should be fired after num_msgs is written
- * to the shared memory
- */
- rte_write64(1, (volatile void *)(mbox->reg_base +
- (mbox->trigger | (devid << mbox->tr_shift))));
-}
-
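
The ordering in otx2_mbox_msg_send() is a textbook message-passing protocol:
copy the payload, then publish num_msgs, then ring the doorbell, with
rte_wmb() keeping the stores in that order. In C11 terms the publish is a
release store paired with an acquire load on the receiving side; a minimal
model (names are illustrative):

	#include <stdatomic.h>
	#include <stdint.h>

	struct channel {
		uint64_t payload;
		_Atomic uint16_t num_msgs;
	};

	static void
	producer(struct channel *ch, uint64_t msg)
	{
		ch->payload = msg;                       /* body first */
		atomic_store_explicit(&ch->num_msgs, 1,
				      memory_order_release); /* then publish */
	}

	static int
	consumer(struct channel *ch, uint64_t *out)
	{
		if (atomic_load_explicit(&ch->num_msgs,
					 memory_order_acquire) == 0)
			return 0;        /* nothing published yet */
		*out = ch->payload;      /* safe: ordered after the load */
		return 1;
	}

	int main(void)
	{
		struct channel ch = { 0, 0 };
		uint64_t got = 0;

		producer(&ch, 42);
		return (consumer(&ch, &got) && got == 42) ? 0 : 1;
	}
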
-/**
- * @internal
- * Wait and get mailbox response
- */
-int
-otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr;
- uint64_t offset;
- int rc;
-
- rc = otx2_mbox_wait_for_rsp(mbox, devid);
- if (rc != 1)
- return -EIO;
-
- rte_rmb();
-
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- if (msg != NULL)
- *msg = msghdr;
-
- return msghdr->rc;
-}
-
-/**
- * Poll for the given wait time to get a mailbox response
- */
-static int
-mbox_poll(struct otx2_mbox *mbox, uint32_t wait)
-{
- uint32_t timeout = 0, sleep = 1;
- uint32_t wait_us = wait * 1000;
- uint64_t rsp_reg = 0;
- uintptr_t reg_addr;
-
- reg_addr = mbox->reg_base + mbox->intr_offset;
- do {
- rsp_reg = otx2_read64(reg_addr);
-
- if (timeout >= wait_us)
- return -ETIMEDOUT;
-
- rte_delay_us(sleep);
- timeout += sleep;
- } while (!rsp_reg);
-
- rte_smp_rmb();
-
- /* Clear interrupt */
- otx2_write64(rsp_reg, reg_addr);
-
- /* Reset mbox */
- otx2_mbox_reset(mbox, 0);
-
- return 0;
-}
-
-/**
- * @internal
- * Wait and get mailbox response with timeout
- */
-int
-otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
- uint32_t tmo)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr;
- uint64_t offset;
- int rc;
-
- rc = otx2_mbox_wait_for_rsp_tmo(mbox, devid, tmo);
- if (rc != 1)
- return -EIO;
-
- rte_rmb();
-
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- if (msg != NULL)
- *msg = msghdr;
-
- return msghdr->rc;
-}
-
-static int
-mbox_wait(struct otx2_mbox *mbox, int devid, uint32_t rst_timo)
-{
- volatile struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- uint32_t timeout = 0, sleep = 1;
-
-	rst_timo = rst_timo * 1000; /* Milliseconds to microseconds */
- while (mdev->num_msgs > mdev->msgs_acked) {
- rte_delay_us(sleep);
- timeout += sleep;
- if (timeout >= rst_timo) {
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase +
- mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase +
- mbox->rx_start);
-
- otx2_err("MBOX[devid: %d] message wait timeout %d, "
- "num_msgs: %d, msgs_acked: %d "
- "(tx/rx num_msgs: %d/%d), msg_size: %d, "
- "rsp_size: %d",
- devid, timeout, mdev->num_msgs,
- mdev->msgs_acked, tx_hdr->num_msgs,
- rx_hdr->num_msgs, mdev->msg_size,
- mdev->rsp_size);
-
- return -EIO;
- }
- rte_rmb();
- }
- return 0;
-}
-
-int
-otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int rc = 0;
-
- /* Sync with mbox region */
- rte_rmb();
-
- if (mbox->trigger == RVU_PF_VFX_PFVF_MBOX1 ||
- mbox->trigger == RVU_PF_VFX_PFVF_MBOX0) {
-		/* In case of VF, wait a bit more to account for round-trip delay */
- tmo = tmo * 2;
- }
-
- /* Wait message */
- if (rte_thread_is_intr())
- rc = mbox_poll(mbox, tmo);
- else
- rc = mbox_wait(mbox, devid, tmo);
-
- if (!rc)
- rc = mdev->num_msgs;
-
- return rc;
-}
-
-/**
- * @internal
- * Wait for the mailbox response
- */
-int
-otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
-{
- return otx2_mbox_wait_for_rsp_tmo(mbox, devid, MBOX_RSP_TIMEOUT);
-}
-
-int
-otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int avail;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- avail = mbox->tx_size - mdev->msg_size - msgs_offset();
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return avail;
-}
-
-int
-otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pcifunc)
-{
- struct ready_msg_rsp *rsp;
- int rc;
-
- otx2_mbox_alloc_msg_ready(mbox);
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->hdr.ver != OTX2_MBOX_VERSION) {
-		otx2_err("Incompatible MBox versions (AF: 0x%04x DPDK: 0x%04x)",
- rsp->hdr.ver, OTX2_MBOX_VERSION);
- return -EPIPE;
- }
-
- if (pcifunc)
- *pcifunc = rsp->hdr.pcifunc;
-
- return 0;
-}
-
-int
-otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pcifunc,
- uint16_t id)
-{
- struct msg_rsp *rsp;
-
- rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp));
- if (!rsp)
- return -ENOMEM;
- rsp->hdr.id = id;
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
- rsp->hdr.rc = MBOX_MSG_INVALID;
- rsp->hdr.pcifunc = pcifunc;
-
- return 0;
-}
-
-/**
- * @internal
- * Convert mail box ID to name
- */
-const char *otx2_mbox_id2name(uint16_t id)
-{
- switch (id) {
-#define M(_name, _id, _1, _2, _3) case _id: return # _name;
- MBOX_MESSAGES
- MBOX_UP_CGX_MESSAGES
-#undef M
- default :
- return "INVALID ID";
- }
-}
-
-int otx2_mbox_id2size(uint16_t id)
-{
- switch (id) {
-#define M(_1, _id, _2, _req_type, _3) case _id: return sizeof(struct _req_type);
- MBOX_MESSAGES
- MBOX_UP_CGX_MESSAGES
-#undef M
- default :
- return 0;
- }
-}
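
mbox_wait() and mbox_poll() above share one shape: sleep in 1 us steps,
accumulate the time slept, and give up once a millisecond budget
(MBOX_RSP_TIMEOUT, doubled for VFs) is exhausted. The same loop in isolation
(a sketch; the driver additionally re-reads memory with rte_rmb() on each
iteration):

	#include <stdint.h>
	#include <unistd.h>

	/* Wait until *acked catches up with *sent or 'tmo_ms' expires;
	 * returns 0 on success, -1 on timeout.
	 */
	static int
	wait_acked(const volatile uint16_t *sent,
		   const volatile uint16_t *acked, uint32_t tmo_ms)
	{
		uint64_t waited_us = 0, budget_us = (uint64_t)tmo_ms * 1000;

		while (*sent > *acked) {
			if (waited_us >= budget_us)
				return -1;
			usleep(1);       /* 1 us step, as in mbox_wait() */
			waited_us += 1;
		}
		return 0;
	}

	int main(void)
	{
		volatile uint16_t sent = 1, acked = 1;

		return wait_acked(&sent, &acked, 10); /* already acked: 0 */
	}
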
diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h
deleted file mode 100644
index 25b521a7fa..0000000000
--- a/drivers/common/octeontx2/otx2_mbox.h
+++ /dev/null
@@ -1,1958 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MBOX_H__
-#define __OTX2_MBOX_H__
-
-#include <errno.h>
-#include <stdbool.h>
-
-#include <rte_ether.h>
-#include <rte_spinlock.h>
-
-#include <otx2_common.h>
-
-#define SZ_64K (64ULL * 1024ULL)
-#define SZ_1K (1ULL * 1024ULL)
-#define MBOX_SIZE SZ_64K
-
-/* AF/PF mbox: PF initiated; PF/VF mbox: VF initiated */
-#define MBOX_DOWN_RX_START 0
-#define MBOX_DOWN_RX_SIZE (46 * SZ_1K)
-#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE)
-#define MBOX_DOWN_TX_SIZE (16 * SZ_1K)
-/* AF/PF mbox: AF initiated; PF/VF mbox: PF initiated */
-#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE)
-#define MBOX_UP_RX_SIZE SZ_1K
-#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE)
-#define MBOX_UP_TX_SIZE SZ_1K
-
-#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE
-# error "Incorrect mailbox area sizes"
-#endif
-
-#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull))
-
-#define MBOX_RSP_TIMEOUT 3000 /* Time to wait for mbox response in ms */
-
-#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16 bytes */
-
-/* Mailbox directions */
-#define MBOX_DIR_AFPF 0 /* AF replies to PF */
-#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */
-#define MBOX_DIR_PFVF 2 /* PF replies to VF */
-#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */
-#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */
-#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */
-#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */
-#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */
-
-/* Device memory does not support unaligned access; instruct the compiler
- * not to optimize memory accesses when working with mailbox memory.
- */
-#define __otx2_io volatile
-
-struct otx2_mbox_dev {
- void *mbase; /* This dev's mbox region */
- rte_spinlock_t mbox_lock;
- uint16_t msg_size; /* Total msg size to be sent */
-	uint16_t rsp_size; /* Total rsp size expected, to check the reply fits */
- uint16_t num_msgs; /* No of msgs sent or waiting for response */
- uint16_t msgs_acked; /* No of msgs for which response is received */
-};
-
-struct otx2_mbox {
- uintptr_t hwbase; /* Mbox region advertised by HW */
- uintptr_t reg_base;/* CSR base for this dev */
- uint64_t trigger; /* Trigger mbox notification */
- uint16_t tr_shift; /* Mbox trigger shift */
- uint64_t rx_start; /* Offset of Rx region in mbox memory */
- uint64_t tx_start; /* Offset of Tx region in mbox memory */
- uint16_t rx_size; /* Size of Rx region */
- uint16_t tx_size; /* Size of Tx region */
- uint16_t ndevs; /* The number of peers */
- struct otx2_mbox_dev *dev;
- uint64_t intr_offset; /* Offset to interrupt register */
-};
-
-/* Header which precedes all mbox messages */
-struct mbox_hdr {
- uint64_t __otx2_io msg_size; /* Total msgs size embedded */
- uint16_t __otx2_io num_msgs; /* No of msgs embedded */
-};
-
-/* Header which precedes every msg and is also part of it */
-struct mbox_msghdr {
- uint16_t __otx2_io pcifunc; /* Who's sending this msg */
- uint16_t __otx2_io id; /* Mbox message ID */
-#define OTX2_MBOX_REQ_SIG (0xdead)
-#define OTX2_MBOX_RSP_SIG (0xbeef)
- /* Signature, for validating corrupted msgs */
- uint16_t __otx2_io sig;
-#define OTX2_MBOX_VERSION (0x000b)
- /* Version of msg's structure for this ID */
- uint16_t __otx2_io ver;
- /* Offset of next msg within mailbox region */
- uint16_t __otx2_io next_msgoff;
- int __otx2_io rc; /* Msg processed response code */
-};
-
-/* Mailbox message types */
-#define MBOX_MSG_MASK 0xFFFF
-#define MBOX_MSG_INVALID 0xFFFE
-#define MBOX_MSG_MAX 0xFFFF
-
-#define MBOX_MESSAGES \
-/* Generic mbox IDs (range 0x000 - 0x1FF) */ \
-M(READY, 0x001, ready, msg_req, ready_msg_rsp) \
-M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach_req, msg_rsp)\
-M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach_req, msg_rsp)\
-M(FREE_RSRC_CNT, 0x004, free_rsrc_cnt, msg_req, free_rsrcs_rsp) \
-M(MSIX_OFFSET, 0x005, msix_offset, msg_req, msix_offset_rsp) \
-M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \
-M(PTP_OP, 0x007, ptp_op, ptp_req, ptp_rsp) \
-M(GET_HW_CAP, 0x008, get_hw_cap, msg_req, get_hw_cap_rsp) \
-M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \
-/* CGX mbox IDs (range 0x200 - 0x3FF) */ \
-M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \
-M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \
-M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \
-M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get,\
- cgx_mac_addr_set_or_get) \
-M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get,\
- cgx_mac_addr_set_or_get) \
-M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \
-M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \
-M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \
-M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \
-M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, cgx_link_info_msg)\
-M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \
-M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \
-M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \
-M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \
-M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \
- cgx_pause_frm_cfg) \
-M(CGX_FW_DATA_GET, 0x20F, cgx_get_aux_link_info, msg_req, cgx_fw_data) \
-M(CGX_FEC_SET, 0x210, cgx_set_fec_param, fec_mode, fec_mode) \
-M(CGX_MAC_ADDR_ADD, 0x211, cgx_mac_addr_add, cgx_mac_addr_add_req, \
- cgx_mac_addr_add_rsp) \
-M(CGX_MAC_ADDR_DEL, 0x212, cgx_mac_addr_del, cgx_mac_addr_del_req, \
- msg_rsp) \
-M(CGX_MAC_MAX_ENTRIES_GET, 0x213, cgx_mac_max_entries_get, msg_req, \
- cgx_max_dmac_entries_get_rsp) \
-M(CGX_SET_LINK_STATE, 0x214, cgx_set_link_state, \
- cgx_set_link_state_msg, msg_rsp) \
-M(CGX_GET_PHY_MOD_TYPE, 0x215, cgx_get_phy_mod_type, msg_req, \
- cgx_phy_mod_type) \
-M(CGX_SET_PHY_MOD_TYPE, 0x216, cgx_set_phy_mod_type, cgx_phy_mod_type, \
- msg_rsp) \
-M(CGX_FEC_STATS, 0x217, cgx_fec_stats, msg_req, cgx_fec_stats_rsp) \
-M(CGX_SET_LINK_MODE, 0x218, cgx_set_link_mode, cgx_set_link_mode_req,\
- cgx_set_link_mode_rsp) \
-M(CGX_GET_PHY_FEC_STATS, 0x219, cgx_get_phy_fec_stats, msg_req, msg_rsp) \
-M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \
-/* NPA mbox IDs (range 0x400 - 0x5FF) */ \
-M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \
- npa_lf_alloc_rsp) \
-M(NPA_LF_FREE, 0x401, npa_lf_free, msg_req, msg_rsp) \
-M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp)\
-M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, msg_rsp)\
-/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \
-M(SSO_LF_ALLOC, 0x600, sso_lf_alloc, sso_lf_alloc_req, \
- sso_lf_alloc_rsp) \
-M(SSO_LF_FREE, 0x601, sso_lf_free, sso_lf_free_req, msg_rsp) \
-M(SSOW_LF_ALLOC, 0x602, ssow_lf_alloc, ssow_lf_alloc_req, msg_rsp)\
-M(SSOW_LF_FREE, 0x603, ssow_lf_free, ssow_lf_free_req, msg_rsp) \
-M(SSO_HW_SETCONFIG, 0x604, sso_hw_setconfig, sso_hw_setconfig, \
- msg_rsp) \
-M(SSO_GRP_SET_PRIORITY, 0x605, sso_grp_set_priority, sso_grp_priority, \
- msg_rsp) \
-M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \
- sso_grp_priority) \
-M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \
-M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \
- msg_rsp) \
-M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \
- sso_grp_stats) \
-M(SSO_HWS_GET_STATS, 0x610, sso_hws_get_stats, sso_info_req, \
- sso_hws_stats) \
-M(SSO_HW_RELEASE_XAQ, 0x611, sso_hw_release_xaq_aura, \
- sso_release_xaq, msg_rsp) \
-/* TIM mbox IDs (range 0x800 - 0x9FF) */ \
-M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \
- tim_lf_alloc_rsp) \
-M(TIM_LF_FREE, 0x801, tim_lf_free, tim_ring_req, msg_rsp) \
-M(TIM_CONFIG_RING, 0x802, tim_config_ring, tim_config_req, msg_rsp)\
-M(TIM_ENABLE_RING, 0x803, tim_enable_ring, tim_ring_req, \
- tim_enable_rsp) \
-M(TIM_DISABLE_RING, 0x804, tim_disable_ring, tim_ring_req, msg_rsp) \
-/* CPT mbox IDs (range 0xA00 - 0xBFF) */ \
-M(CPT_LF_ALLOC, 0xA00, cpt_lf_alloc, cpt_lf_alloc_req_msg, \
- cpt_lf_alloc_rsp_msg) \
-M(CPT_LF_FREE, 0xA01, cpt_lf_free, msg_req, msg_rsp) \
-M(CPT_RD_WR_REGISTER, 0xA02, cpt_rd_wr_register, cpt_rd_wr_reg_msg, \
- cpt_rd_wr_reg_msg) \
-M(CPT_SET_CRYPTO_GRP, 0xA03, cpt_set_crypto_grp, \
- cpt_set_crypto_grp_req_msg, \
- msg_rsp) \
-M(CPT_INLINE_IPSEC_CFG, 0xA04, cpt_inline_ipsec_cfg, \
- cpt_inline_ipsec_cfg_msg, msg_rsp) \
-M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, \
- cpt_rx_inline_lf_cfg_msg, msg_rsp) \
-M(CPT_GET_CAPS, 0xBFD, cpt_caps_get, msg_req, cpt_caps_rsp_msg) \
-/* REE mbox IDs (range 0xE00 - 0xFFF) */ \
-M(REE_CONFIG_LF, 0xE01, ree_config_lf, ree_lf_req_msg, \
- msg_rsp) \
-M(REE_RD_WR_REGISTER, 0xE02, ree_rd_wr_register, ree_rd_wr_reg_msg, \
- ree_rd_wr_reg_msg) \
-M(REE_RULE_DB_PROG, 0xE03, ree_rule_db_prog, \
- ree_rule_db_prog_req_msg, \
- msg_rsp) \
-M(REE_RULE_DB_LEN_GET, 0xE04, ree_rule_db_len_get, ree_req_msg, \
- ree_rule_db_len_rsp_msg) \
-M(REE_RULE_DB_GET, 0xE05, ree_rule_db_get, \
- ree_rule_db_get_req_msg, \
- ree_rule_db_get_rsp_msg) \
-/* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \
-M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, \
- npc_mcam_alloc_entry_req, \
- npc_mcam_alloc_entry_rsp) \
-M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \
- npc_mcam_free_entry_req, msg_rsp) \
-M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \
- npc_mcam_write_entry_req, msg_rsp) \
-M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \
- npc_mcam_ena_dis_entry_req, msg_rsp) \
-M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \
- npc_mcam_ena_dis_entry_req, msg_rsp) \
-M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, \
- npc_mcam_shift_entry_req, \
- npc_mcam_shift_entry_rsp) \
-M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \
- npc_mcam_alloc_counter_req, \
- npc_mcam_alloc_counter_rsp) \
-M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \
- npc_mcam_oper_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \
- npc_mcam_unmap_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \
- npc_mcam_oper_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \
- npc_mcam_oper_counter_req, \
- npc_mcam_oper_counter_rsp) \
-M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry,\
- npc_mcam_alloc_and_write_entry_req, \
- npc_mcam_alloc_and_write_entry_rsp) \
-M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, msg_req, \
- npc_get_kex_cfg_rsp) \
-M(NPC_INSTALL_FLOW, 0x600d, npc_install_flow, \
- npc_install_flow_req, \
- npc_install_flow_rsp) \
-M(NPC_DELETE_FLOW, 0x600e, npc_delete_flow, \
- npc_delete_flow_req, msg_rsp) \
-M(NPC_MCAM_READ_ENTRY, 0x600f, npc_mcam_read_entry, \
- npc_mcam_read_entry_req, \
- npc_mcam_read_entry_rsp) \
-M(NPC_SET_PKIND, 0x6010, npc_set_pkind, \
- npc_set_pkind, \
- msg_rsp) \
-M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, msg_req, \
- npc_mcam_read_base_rule_rsp) \
-/* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \
-M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, nix_lf_alloc_req, \
- nix_lf_alloc_rsp) \
-M(NIX_LF_FREE, 0x8001, nix_lf_free, nix_lf_free_req, msg_rsp) \
-M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, \
- nix_aq_enq_rsp) \
-M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, hwctx_disable_req, \
- msg_rsp) \
-M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, nix_txsch_alloc_req, \
- nix_txsch_alloc_rsp) \
-M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, \
- msg_rsp) \
-M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, nix_txschq_config, \
- nix_txschq_config) \
-M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \
-M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \
-M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \
- nix_rss_flowkey_cfg, \
- nix_rss_flowkey_cfg_rsp) \
-M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, \
- msg_rsp) \
-M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \
-M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \
-M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \
-M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \
-M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \
- nix_mark_format_cfg, \
- nix_mark_format_cfg_rsp) \
-M(NIX_SET_RX_CFG, 0x8010, nix_set_rx_cfg, nix_rx_cfg, msg_rsp) \
-M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, nix_lso_format_cfg, \
- nix_lso_format_cfg_rsp) \
-M(NIX_LF_PTP_TX_ENABLE, 0x8013, nix_lf_ptp_tx_enable, msg_req, \
- msg_rsp) \
-M(NIX_LF_PTP_TX_DISABLE, 0x8014, nix_lf_ptp_tx_disable, msg_req, \
- msg_rsp) \
-M(NIX_SET_VLAN_TPID, 0x8015, nix_set_vlan_tpid, nix_set_vlan_tpid, \
- msg_rsp) \
-M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \
- nix_bp_cfg_rsp) \
-M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp)\
-M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, \
- nix_get_mac_addr_rsp) \
-M(NIX_INLINE_IPSEC_CFG, 0x8019, nix_inline_ipsec_cfg, \
- nix_inline_ipsec_cfg, msg_rsp) \
-M(NIX_INLINE_IPSEC_LF_CFG, \
- 0x801a, nix_inline_ipsec_lf_cfg, \
- nix_inline_ipsec_lf_cfg, msg_rsp)
-
-/* Messages initiated by AF (range 0xC00 - 0xDFF) */
-#define MBOX_UP_CGX_MESSAGES \
-M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, \
- msg_rsp) \
-M(CGX_PTP_RX_INFO, 0xC01, cgx_ptp_rx_info, cgx_ptp_rx_info_msg, \
- msg_rsp)
-
-enum {
-#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
-MBOX_MESSAGES
-MBOX_UP_CGX_MESSAGES
-#undef M
-};
-
-/* Mailbox message formats */
-
-#define RVU_DEFAULT_PF_FUNC 0xFFFF
-
-/* Generic request msg used for those mbox messages which
- * don't send any data in the request.
- */
-struct msg_req {
- struct mbox_msghdr hdr;
-};
-
-/* Generic response msg used as an ack or response for those mbox
- * messages which don't have a specific rsp msg format.
- */
-struct msg_rsp {
- struct mbox_msghdr hdr;
-};
-
-/* RVU mailbox error codes
- * Range 256 - 300.
- */
-enum rvu_af_status {
- RVU_INVALID_VF_ID = -256,
-};
-
-struct ready_msg_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sclk_feq; /* SCLK frequency */
- uint16_t __otx2_io rclk_freq; /* RCLK frequency */
-};
-
-enum npc_pkind_type {
- NPC_RX_CUSTOM_PRE_L2_PKIND = 55ULL,
- NPC_RX_VLAN_EXDSA_PKIND = 56ULL,
- NPC_RX_CHLEN24B_PKIND,
- NPC_RX_CPT_HDR_PKIND,
- NPC_RX_CHLEN90B_PKIND,
- NPC_TX_HIGIG_PKIND,
- NPC_RX_HIGIG_PKIND,
- NPC_RX_EXDSA_PKIND,
- NPC_RX_EDSA_PKIND,
- NPC_TX_DEF_PKIND,
-};
-
-#define OTX2_PRIV_FLAGS_CH_LEN_90B 254
-#define OTX2_PRIV_FLAGS_CH_LEN_24B 255
-
-/* Struct to set pkind */
-struct npc_set_pkind {
- struct mbox_msghdr hdr;
-#define OTX2_PRIV_FLAGS_DEFAULT BIT_ULL(0)
-#define OTX2_PRIV_FLAGS_EDSA BIT_ULL(1)
-#define OTX2_PRIV_FLAGS_HIGIG BIT_ULL(2)
-#define OTX2_PRIV_FLAGS_FDSA BIT_ULL(3)
-#define OTX2_PRIV_FLAGS_EXDSA BIT_ULL(4)
-#define OTX2_PRIV_FLAGS_VLAN_EXDSA BIT_ULL(5)
-#define OTX2_PRIV_FLAGS_CUSTOM BIT_ULL(63)
- uint64_t __otx2_io mode;
-#define PKIND_TX BIT_ULL(0)
-#define PKIND_RX BIT_ULL(1)
- uint8_t __otx2_io dir;
- uint8_t __otx2_io pkind; /* valid only if the custom flag is set */
- uint8_t __otx2_io var_len_off;
- /* Offset of custom header length field.
- * Valid only for pkind NPC_RX_CUSTOM_PRE_L2_PKIND
- */
- uint8_t __otx2_io var_len_off_mask; /* Mask for length within offset */
- uint8_t __otx2_io shift_dir;
- /* Shift direction to get length of the
- * header at var_len_off
- */
-};
-
-/* Structure for requesting resource provisioning.
- * The 'modify' flag is to be used when either requesting more
- * resources or detaching part of a certain resource type.
- * The rest of the fields specify how many of which type are to
- * be attached.
- * To request LFs from two blocks of the same type, this mailbox
- * can be sent twice, as below:
- * struct rsrc_attach *attach;
- * .. Allocate memory for message ..
- * attach->cptlfs = 3; <3 LFs from CPT0>
- * .. Send message ..
- * .. Allocate memory for message ..
- * attach->modify = 1;
- * attach->cpt_blkaddr = BLKADDR_CPT1;
- * attach->cptlfs = 2; <2 LFs from CPT1>
- * .. Send message ..
- */
-struct rsrc_attach_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io modify:1;
- uint8_t __otx2_io npalf:1;
- uint8_t __otx2_io nixlf:1;
- uint16_t __otx2_io sso;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io timlfs;
- uint16_t __otx2_io cptlfs;
- uint16_t __otx2_io reelfs;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- int __otx2_io cpt_blkaddr;
- /* BLKADDR_REE0/BLKADDR_REE1 or 0 for BLKADDR_REE0 */
- int __otx2_io ree_blkaddr;
-};
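
Using the otx2_mbox_alloc_msg_* helpers generated by the M() macro near the
end of this header, the two-step attach sketched in the comment above could
look roughly like this (a sketch; `mbox` is assumed to be an already
initialized mailbox and return codes are ignored):

    struct rsrc_attach_req *req;

    req = otx2_mbox_alloc_msg_attach_resources(mbox);
    req->cptlfs = 3;                 /* 3 LFs from CPT0 */
    otx2_mbox_process(mbox);

    req = otx2_mbox_alloc_msg_attach_resources(mbox);
    req->modify = 1;
    req->cpt_blkaddr = BLKADDR_CPT1;
    req->cptlfs = 2;                 /* 2 more LFs from CPT1 */
    otx2_mbox_process(mbox);
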
-
-/* Structure for relinquishing resources.
- * The 'partial' flag is to be used when relinquishing resources
- * of only certain types. If not set, all resources of all
- * types provisioned to the RVU function will be detached.
- */
-struct rsrc_detach_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io partial:1;
- uint8_t __otx2_io npalf:1;
- uint8_t __otx2_io nixlf:1;
- uint8_t __otx2_io sso:1;
- uint8_t __otx2_io ssow:1;
- uint8_t __otx2_io timlfs:1;
- uint8_t __otx2_io cptlfs:1;
- uint8_t __otx2_io reelfs:1;
-};
-
-/* NIX Transmit schedulers */
-#define NIX_TXSCH_LVL_SMQ 0x0
-#define NIX_TXSCH_LVL_MDQ 0x0
-#define NIX_TXSCH_LVL_TL4 0x1
-#define NIX_TXSCH_LVL_TL3 0x2
-#define NIX_TXSCH_LVL_TL2 0x3
-#define NIX_TXSCH_LVL_TL1 0x4
-#define NIX_TXSCH_LVL_CNT 0x5
-
-/*
- * Number of resources available to the caller.
- * In reply to MBOX_MSG_FREE_RSRC_CNT.
- */
-struct free_rsrcs_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT];
- uint16_t __otx2_io sso;
- uint16_t __otx2_io tim;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io cpt;
- uint8_t __otx2_io npa;
- uint8_t __otx2_io nix;
- uint16_t __otx2_io schq_nix1[NIX_TXSCH_LVL_CNT];
- uint8_t __otx2_io nix1;
- uint8_t __otx2_io cpt1;
- uint8_t __otx2_io ree0;
- uint8_t __otx2_io ree1;
-};
-
-#define MSIX_VECTOR_INVALID 0xFFFF
-#define MAX_RVU_BLKLF_CNT 256
-
-struct msix_offset_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io npa_msixoff;
- uint16_t __otx2_io nix_msixoff;
- uint16_t __otx2_io sso;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io timlfs;
- uint16_t __otx2_io cptlfs;
- uint16_t __otx2_io sso_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ssow_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io timlf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io cptlf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io cpt1_lfs;
- uint16_t __otx2_io ree0_lfs;
- uint16_t __otx2_io ree1_lfs;
- uint16_t __otx2_io cpt1_lf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ree0_lf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ree1_lf_msixoff[MAX_RVU_BLKLF_CNT];
-
-};
-
-/* CGX mbox message formats */
-
-struct cgx_stats_rsp {
- struct mbox_msghdr hdr;
-#define CGX_RX_STATS_COUNT 13
-#define CGX_TX_STATS_COUNT 18
- uint64_t __otx2_io rx_stats[CGX_RX_STATS_COUNT];
- uint64_t __otx2_io tx_stats[CGX_TX_STATS_COUNT];
-};
-
-struct cgx_fec_stats_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io fec_corr_blks;
- uint64_t __otx2_io fec_uncorr_blks;
-};
-/* Structure for requesting the operation for
- * setting/getting mac address in the CGX interface
- */
-struct cgx_mac_addr_set_or_get {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-/* Structure for requesting the operation to
- * add DMAC filter entry into CGX interface
- */
-struct cgx_mac_addr_add_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-/* Structure for response against the operation to
- * add DMAC filter entry into CGX interface
- */
-struct cgx_mac_addr_add_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io index;
-};
-
-/* Structure for requesting the operation to
- * delete DMAC filter entry from CGX interface
- */
-struct cgx_mac_addr_del_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io index;
-};
-
-/* Structure for response against the operation to
- * get maximum supported DMAC filter entries
- */
-struct cgx_max_dmac_entries_get_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io max_dmac_filters;
-};
-
-struct cgx_link_user_info {
- uint64_t __otx2_io link_up:1;
- uint64_t __otx2_io full_duplex:1;
- uint64_t __otx2_io lmac_type_id:4;
- uint64_t __otx2_io speed:20; /* speed in Mbps */
- uint64_t __otx2_io an:1; /* AN supported or not */
- uint64_t __otx2_io fec:2; /* FEC type if enabled else 0 */
- uint64_t __otx2_io port:8;
-#define LMACTYPE_STR_LEN 16
- char lmac_type[LMACTYPE_STR_LEN];
-};
-
-struct cgx_link_info_msg {
- struct mbox_msghdr hdr;
- struct cgx_link_user_info link_info;
-};
-
-struct cgx_ptp_rx_info_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io ptp_en;
-};
-
-struct cgx_pause_frm_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io set;
- /* set = 1 if the request is to config pause frames */
- /* set = 0 if the request is to fetch pause frames config */
- uint8_t __otx2_io rx_pause;
- uint8_t __otx2_io tx_pause;
-};
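
As an illustration, enabling RX pause frames on a port might be requested as
below (a sketch built on the generated cgx_cfg_pause_frm allocator; return
codes ignored):

    struct cgx_pause_frm_cfg *req;

    req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
    req->set = 1;      /* configure, don't just query */
    req->rx_pause = 1;
    req->tx_pause = 0;
    otx2_mbox_process(mbox);
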
-
-struct sfp_eeprom_s {
-#define SFP_EEPROM_SIZE 256
- uint16_t __otx2_io sff_id;
- uint8_t __otx2_io buf[SFP_EEPROM_SIZE];
- uint64_t __otx2_io reserved;
-};
-
-enum fec_type {
- OTX2_FEC_NONE,
- OTX2_FEC_BASER,
- OTX2_FEC_RS,
-};
-
-struct phy_s {
- uint64_t __otx2_io can_change_mod_type : 1;
- uint64_t __otx2_io mod_type : 1;
-};
-
-struct cgx_lmac_fwdata_s {
- uint16_t __otx2_io rw_valid;
- uint64_t __otx2_io supported_fec;
- uint64_t __otx2_io supported_an;
- uint64_t __otx2_io supported_link_modes;
- /* Only applicable if AN is supported */
- uint64_t __otx2_io advertised_fec;
- uint64_t __otx2_io advertised_link_modes;
- /* Only applicable if SFP/QSFP slot is present */
- struct sfp_eeprom_s sfp_eeprom;
- struct phy_s phy;
-#define LMAC_FWDATA_RESERVED_MEM 1023
- uint64_t __otx2_io reserved[LMAC_FWDATA_RESERVED_MEM];
-};
-
-struct cgx_fw_data {
- struct mbox_msghdr hdr;
- struct cgx_lmac_fwdata_s fwdata;
-};
-
-struct fec_mode {
- struct mbox_msghdr hdr;
- int __otx2_io fec;
-};
-
-struct cgx_set_link_state_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io enable;
-};
-
-struct cgx_phy_mod_type {
- struct mbox_msghdr hdr;
- int __otx2_io mod;
-};
-
-struct cgx_set_link_mode_args {
- uint32_t __otx2_io speed;
- uint8_t __otx2_io duplex;
- uint8_t __otx2_io an;
- uint8_t __otx2_io ports;
- uint64_t __otx2_io mode;
-};
-
-struct cgx_set_link_mode_req {
- struct mbox_msghdr hdr;
- struct cgx_set_link_mode_args args;
-};
-
-struct cgx_set_link_mode_rsp {
- struct mbox_msghdr hdr;
- int __otx2_io status;
-};
-/* NPA mbox message formats */
-
-/* NPA mailbox error codes
- * Range 301 - 400.
- */
-enum npa_af_status {
- NPA_AF_ERR_PARAM = -301,
- NPA_AF_ERR_AQ_FULL = -302,
- NPA_AF_ERR_AQ_ENQUEUE = -303,
- NPA_AF_ERR_AF_LF_INVALID = -304,
- NPA_AF_ERR_AF_LF_ALLOC = -305,
- NPA_AF_ERR_LF_RESET = -306,
-};
-
-#define NPA_AURA_SZ_0 0
-#define NPA_AURA_SZ_128 1
-#define NPA_AURA_SZ_256 2
-#define NPA_AURA_SZ_512 3
-#define NPA_AURA_SZ_1K 4
-#define NPA_AURA_SZ_2K 5
-#define NPA_AURA_SZ_4K 6
-#define NPA_AURA_SZ_8K 7
-#define NPA_AURA_SZ_16K 8
-#define NPA_AURA_SZ_32K 9
-#define NPA_AURA_SZ_64K 10
-#define NPA_AURA_SZ_128K 11
-#define NPA_AURA_SZ_256K 12
-#define NPA_AURA_SZ_512K 13
-#define NPA_AURA_SZ_1M 14
-#define NPA_AURA_SZ_MAX 15
-
-/* For NPA LF context alloc and init */
-struct npa_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- int __otx2_io aura_sz; /* No of auras. See NPA_AURA_SZ_* */
- uint32_t __otx2_io nr_pools; /* No of pools */
- uint64_t __otx2_io way_mask;
-};
-
-struct npa_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io stack_pg_ptrs; /* No of ptrs per stack page */
- uint32_t __otx2_io stack_pg_bytes; /* Size of stack page */
- uint16_t __otx2_io qints; /* NPA_AF_CONST::QINTS */
-};
-
-/* NPA AQ enqueue msg */
-struct npa_aq_enq_req {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io aura_id;
- uint8_t __otx2_io ctype;
- uint8_t __otx2_io op;
- union {
- /* Valid when op == WRITE/INIT and ctype == AURA.
- * LF fills the pool_id in aura.pool_addr. AF will translate
- * the pool_id to pool context pointer.
- */
- __otx2_io struct npa_aura_s aura;
- /* Valid when op == WRITE/INIT and ctype == POOL */
- __otx2_io struct npa_pool_s pool;
- };
- /* Mask data when op == WRITE (1=write, 0=don't write) */
- union {
- /* Valid when op == WRITE and ctype == AURA */
- __otx2_io struct npa_aura_s aura_mask;
- /* Valid when op == WRITE and ctype == POOL */
- __otx2_io struct npa_pool_s pool_mask;
- };
-};
-
-struct npa_aq_enq_rsp {
- struct mbox_msghdr hdr;
- union {
- /* Valid when op == READ and ctype == AURA */
- __otx2_io struct npa_aura_s aura;
- /* Valid when op == READ and ctype == POOL */
- __otx2_io struct npa_pool_s pool;
- };
-};
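
A hedged sketch of an aura context READ through this interface; the
NPA_AQ_CTYPE_*/NPA_AQ_INSTOP_* constants are assumed to come from the
companion hw/otx2_npa.h definitions:

    struct npa_aq_enq_req *req;
    struct npa_aq_enq_rsp *rsp;
    uint32_t aura_id = 0; /* aura to inspect */

    req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
    req->aura_id = aura_id;
    req->ctype = NPA_AQ_CTYPE_AURA;
    req->op = NPA_AQ_INSTOP_READ;
    if (otx2_mbox_process_msg(mbox, (void **)&rsp) == 0) {
        /* rsp->aura is valid for READ + AURA, per the comments above */
    }
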
-
-/* Disable all contexts of type 'ctype' */
-struct hwctx_disable_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io ctype;
-};
-
-/* NIX mbox message formats */
-
-/* NIX mailbox error codes
- * Range 401 - 500.
- */
-enum nix_af_status {
- NIX_AF_ERR_PARAM = -401,
- NIX_AF_ERR_AQ_FULL = -402,
- NIX_AF_ERR_AQ_ENQUEUE = -403,
- NIX_AF_ERR_AF_LF_INVALID = -404,
- NIX_AF_ERR_AF_LF_ALLOC = -405,
- NIX_AF_ERR_TLX_ALLOC_FAIL = -406,
- NIX_AF_ERR_TLX_INVALID = -407,
- NIX_AF_ERR_RSS_SIZE_INVALID = -408,
- NIX_AF_ERR_RSS_GRPS_INVALID = -409,
- NIX_AF_ERR_FRS_INVALID = -410,
- NIX_AF_ERR_RX_LINK_INVALID = -411,
- NIX_AF_INVAL_TXSCHQ_CFG = -412,
- NIX_AF_SMQ_FLUSH_FAILED = -413,
- NIX_AF_ERR_LF_RESET = -414,
- NIX_AF_ERR_RSS_NOSPC_FIELD = -415,
- NIX_AF_ERR_RSS_NOSPC_ALGO = -416,
- NIX_AF_ERR_MARK_CFG_FAIL = -417,
- NIX_AF_ERR_LSO_CFG_FAIL = -418,
- NIX_AF_INVAL_NPA_PF_FUNC = -419,
- NIX_AF_INVAL_SSO_PF_FUNC = -420,
- NIX_AF_ERR_TX_VTAG_NOSPC = -421,
- NIX_AF_ERR_RX_VTAG_INUSE = -422,
- NIX_AF_ERR_PTP_CONFIG_FAIL = -423,
-};
-
-/* For NIX LF context alloc and init */
-struct nix_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint32_t __otx2_io rq_cnt; /* No of receive queues */
- uint32_t __otx2_io sq_cnt; /* No of send queues */
- uint32_t __otx2_io cq_cnt; /* No of completion queues */
- uint8_t __otx2_io xqe_sz;
- uint16_t __otx2_io rss_sz;
- uint8_t __otx2_io rss_grps;
- uint16_t __otx2_io npa_func;
- /* RVU_DEFAULT_PF_FUNC == default pf_func associated with lf */
- uint16_t __otx2_io sso_func;
- uint64_t __otx2_io rx_cfg; /* See NIX_AF_LF(0..127)_RX_CFG */
- uint64_t __otx2_io way_mask;
-#define NIX_LF_RSS_TAG_LSB_AS_ADDER BIT_ULL(0)
- uint64_t flags;
-};
-
-struct nix_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sqb_size;
- uint16_t __otx2_io rx_chan_base;
- uint16_t __otx2_io tx_chan_base;
- uint8_t __otx2_io rx_chan_cnt; /* Total number of RX channels */
- uint8_t __otx2_io tx_chan_cnt; /* Total number of TX channels */
- uint8_t __otx2_io lso_tsov4_idx;
- uint8_t __otx2_io lso_tsov6_idx;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
- uint8_t __otx2_io lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */
- uint8_t __otx2_io lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */
- uint16_t __otx2_io cints; /* NIX_AF_CONST2::CINTS */
- uint16_t __otx2_io qints; /* NIX_AF_CONST2::QINTS */
- uint8_t __otx2_io hw_rx_tstamp_en; /* set if RX timestamping is enabled */
- uint8_t __otx2_io cgx_links; /* No. of CGX links present in HW */
- uint8_t __otx2_io lbk_links; /* No. of LBK links present in HW */
- uint8_t __otx2_io sdp_links; /* No. of SDP links present in HW */
- uint8_t __otx2_io tx_link; /* Transmit channel link number */
-};
-
-struct nix_lf_free_req {
- struct mbox_msghdr hdr;
-#define NIX_LF_DISABLE_FLOWS BIT_ULL(0)
-#define NIX_LF_DONT_FREE_TX_VTAG BIT_ULL(1)
- uint64_t __otx2_io flags;
-};
-
-/* NIX AQ enqueue msg */
-struct nix_aq_enq_req {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io qidx;
- uint8_t __otx2_io ctype;
- uint8_t __otx2_io op;
- union {
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */
- __otx2_io struct nix_rq_ctx_s rq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */
- __otx2_io struct nix_sq_ctx_s sq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */
- __otx2_io struct nix_cq_ctx_s cq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */
- __otx2_io struct nix_rsse_s rss;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */
- __otx2_io struct nix_rx_mce_s mce;
- };
- /* Mask data when op == WRITE (1=write, 0=don't write) */
- union {
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */
- __otx2_io struct nix_rq_ctx_s rq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */
- __otx2_io struct nix_sq_ctx_s sq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */
- __otx2_io struct nix_cq_ctx_s cq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */
- __otx2_io struct nix_rsse_s rss_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */
- __otx2_io struct nix_rx_mce_s mce_mask;
- };
-};
-
-struct nix_aq_enq_rsp {
- struct mbox_msghdr hdr;
- union {
- __otx2_io struct nix_rq_ctx_s rq;
- __otx2_io struct nix_sq_ctx_s sq;
- __otx2_io struct nix_cq_ctx_s cq;
- __otx2_io struct nix_rsse_s rss;
- __otx2_io struct nix_rx_mce_s mce;
- };
-};
-
-/* Tx scheduler/shaper mailbox messages */
-
-#define MAX_TXSCHQ_PER_FUNC 128
-
-struct nix_txsch_alloc_req {
- struct mbox_msghdr hdr;
- /* Scheduler queue count request at each level */
- uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
-};
-
-struct nix_txsch_alloc_rsp {
- struct mbox_msghdr hdr;
- /* Scheduler queue count allocated at each level */
- uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
- /* Scheduler queue list allocated at each level */
- uint16_t __otx2_io
- schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- uint16_t __otx2_io schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- /* Traffic aggregation scheduler level */
- uint8_t __otx2_io aggr_level;
- /* Aggregation lvl's RR_PRIO config */
- uint8_t __otx2_io aggr_lvl_rr_prio;
- /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
- uint8_t __otx2_io link_cfg_lvl;
-};
-
-struct nix_txsch_free_req {
- struct mbox_msghdr hdr;
-#define TXSCHQ_FREE_ALL BIT_ULL(0)
- uint16_t __otx2_io flags;
- /* Scheduler queue level to be freed */
- uint16_t __otx2_io schq_lvl;
- /* List of scheduler queues to be freed */
- uint16_t __otx2_io schq;
-};
-
-struct nix_txschq_config {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */
- uint8_t __otx2_io read;
-#define TXSCHQ_IDX_SHIFT 16
-#define TXSCHQ_IDX_MASK (BIT_ULL(10) - 1)
-#define TXSCHQ_IDX(reg, shift) (((reg) >> (shift)) & TXSCHQ_IDX_MASK)
- uint8_t __otx2_io num_regs;
-#define MAX_REGS_PER_MBOX_MSG 20
- uint64_t __otx2_io reg[MAX_REGS_PER_MBOX_MSG];
- uint64_t __otx2_io regval[MAX_REGS_PER_MBOX_MSG];
- /* All 0's => overwrite with new value */
- uint64_t __otx2_io regval_mask[MAX_REGS_PER_MBOX_MSG];
-};
-
-struct nix_vtag_config {
- struct mbox_msghdr hdr;
- /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */
- uint8_t __otx2_io vtag_size;
- /* cfg_type is '0' for tx vlan cfg
- * cfg_type is '1' for rx vlan cfg
- */
- uint8_t __otx2_io cfg_type;
- union {
- /* Valid when cfg_type is '0' */
- struct {
- uint64_t __otx2_io vtag0;
- uint64_t __otx2_io vtag1;
-
- /* cfg_vtag0 & cfg_vtag1 fields are valid
- * when free_vtag0 & free_vtag1 are '0's.
- */
- /* cfg_vtag0 = 1 to configure vtag0 */
- uint8_t __otx2_io cfg_vtag0 :1;
- /* cfg_vtag1 = 1 to configure vtag1 */
- uint8_t __otx2_io cfg_vtag1 :1;
-
- /* vtag0_idx & vtag1_idx are only valid when
- * both cfg_vtag0 & cfg_vtag1 are '0's;
- * these fields are used along with free_vtag0
- * & free_vtag1 to free the nix lf's tx_vlan
- * configuration.
- *
- * Denotes the indices of tx_vtag def registers
- * that need to be cleared and freed.
- */
- int __otx2_io vtag0_idx;
- int __otx2_io vtag1_idx;
-
- /* Free_vtag0 & free_vtag1 fields are valid
- * when cfg_vtag0 & cfg_vtag1 are '0's.
- */
- /* Free_vtag0 = 1 clears vtag0 configuration
- * vtag0_idx denotes the index to be cleared.
- */
- uint8_t __otx2_io free_vtag0 :1;
- /* Free_vtag1 = 1 clears vtag1 configuration
- * vtag1_idx denotes the index to be cleared.
- */
- uint8_t __otx2_io free_vtag1 :1;
- } tx;
-
- /* Valid when cfg_type is '1' */
- struct {
- /* Rx vtag type index, valid values are in 0..7 range */
- uint8_t __otx2_io vtag_type;
- /* Rx vtag strip */
- uint8_t __otx2_io strip_vtag :1;
- /* Rx vtag capture */
- uint8_t __otx2_io capture_vtag :1;
- } rx;
- };
-};
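
For example, configuring a single 4-octet TX vtag could be requested roughly
as below (a sketch; the vtag0 value encoding is device-specific and the
vtag0_value here is hypothetical):

    struct nix_vtag_config *req;
    uint64_t vtag0_value = 0; /* hypothetical encoded TPID/TCI */

    req = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
    req->vtag_size = 0;      /* 4-octet VTAG */
    req->cfg_type = 0;       /* tx vlan cfg */
    req->tx.cfg_vtag0 = 1;
    req->tx.vtag0 = vtag0_value;
    otx2_mbox_process(mbox);
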
-
-struct nix_vtag_config_rsp {
- struct mbox_msghdr hdr;
- /* Indices of tx_vtag def registers used to configure
- * tx vtag0 & vtag1 headers; these indices are valid
- * when the nix_vtag_config mbox requested vtag0 and/or
- * vtag1 configuration.
- */
- int __otx2_io vtag0_idx;
- int __otx2_io vtag1_idx;
-};
-
-struct nix_rss_flowkey_cfg {
- struct mbox_msghdr hdr;
- int __otx2_io mcam_index; /* MCAM entry index to modify */
- uint32_t __otx2_io flowkey_cfg; /* Flowkey types selected */
-#define FLOW_KEY_TYPE_PORT BIT(0)
-#define FLOW_KEY_TYPE_IPV4 BIT(1)
-#define FLOW_KEY_TYPE_IPV6 BIT(2)
-#define FLOW_KEY_TYPE_TCP BIT(3)
-#define FLOW_KEY_TYPE_UDP BIT(4)
-#define FLOW_KEY_TYPE_SCTP BIT(5)
-#define FLOW_KEY_TYPE_NVGRE BIT(6)
-#define FLOW_KEY_TYPE_VXLAN BIT(7)
-#define FLOW_KEY_TYPE_GENEVE BIT(8)
-#define FLOW_KEY_TYPE_ETH_DMAC BIT(9)
-#define FLOW_KEY_TYPE_IPV6_EXT BIT(10)
-#define FLOW_KEY_TYPE_GTPU BIT(11)
-#define FLOW_KEY_TYPE_INNR_IPV4 BIT(12)
-#define FLOW_KEY_TYPE_INNR_IPV6 BIT(13)
-#define FLOW_KEY_TYPE_INNR_TCP BIT(14)
-#define FLOW_KEY_TYPE_INNR_UDP BIT(15)
-#define FLOW_KEY_TYPE_INNR_SCTP BIT(16)
-#define FLOW_KEY_TYPE_INNR_ETH_DMAC BIT(17)
-#define FLOW_KEY_TYPE_CH_LEN_90B BIT(18)
-#define FLOW_KEY_TYPE_CUSTOM0 BIT(19)
-#define FLOW_KEY_TYPE_VLAN BIT(20)
-#define FLOW_KEY_TYPE_L4_DST BIT(28)
-#define FLOW_KEY_TYPE_L4_SRC BIT(29)
-#define FLOW_KEY_TYPE_L3_DST BIT(30)
-#define FLOW_KEY_TYPE_L3_SRC BIT(31)
- uint8_t __otx2_io group; /* RSS context or group */
-};
-
-struct nix_rss_flowkey_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io alg_idx; /* Selected algo index */
-};
-
-struct nix_set_mac_addr {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-struct nix_get_mac_addr_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-struct nix_mark_format_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io offset;
- uint8_t __otx2_io y_mask;
- uint8_t __otx2_io y_val;
- uint8_t __otx2_io r_mask;
- uint8_t __otx2_io r_val;
-};
-
-struct nix_mark_format_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mark_format_idx;
-};
-
-struct nix_lso_format_cfg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io field_mask;
- uint64_t __otx2_io fields[NIX_LSO_FIELD_MAX];
-};
-
-struct nix_lso_format_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io lso_format_idx;
-};
-
-struct nix_rx_mode {
- struct mbox_msghdr hdr;
-#define NIX_RX_MODE_UCAST BIT(0)
-#define NIX_RX_MODE_PROMISC BIT(1)
-#define NIX_RX_MODE_ALLMULTI BIT(2)
- uint16_t __otx2_io mode;
-};
-
-struct nix_rx_cfg {
- struct mbox_msghdr hdr;
-#define NIX_RX_OL3_VERIFY BIT(0)
-#define NIX_RX_OL4_VERIFY BIT(1)
- uint8_t __otx2_io len_verify; /* Outer L3/L4 len check */
-#define NIX_RX_CSUM_OL4_VERIFY BIT(0)
- uint8_t __otx2_io csum_verify; /* Outer L4 checksum verification */
-};
-
-struct nix_frs_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io update_smq; /* Update SMQ's min/max lens */
- uint8_t __otx2_io update_minlen; /* Set minlen also */
- uint8_t __otx2_io sdp_link; /* Set SDP RX link */
- uint16_t __otx2_io maxlen;
- uint16_t __otx2_io minlen;
-};
-
-struct nix_set_vlan_tpid {
- struct mbox_msghdr hdr;
-#define NIX_VLAN_TYPE_INNER 0
-#define NIX_VLAN_TYPE_OUTER 1
- uint8_t __otx2_io vlan_type;
- uint16_t __otx2_io tpid;
-};
-
-struct nix_bp_cfg_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io chan_base; /* Starting channel number */
- uint8_t __otx2_io chan_cnt; /* Number of channels */
- uint8_t __otx2_io bpid_per_chan;
- /* bpid_per_chan = 0 assigns single bp id for range of channels */
- /* bpid_per_chan = 1 assigns separate bp id for each channel */
-};
-
-/* PF can be mapped to either CGX or LBK interface,
- * so a maximum of 64 channels is possible.
- */
-#define NIX_MAX_CHAN 64
-struct nix_bp_cfg_rsp {
- struct mbox_msghdr hdr;
- /* Channel and bpid mapping */
- uint16_t __otx2_io chan_bpid[NIX_MAX_CHAN];
- /* Number of channel for which bpids are assigned */
- uint8_t __otx2_io chan_cnt;
-};
-
-/* Global NIX inline IPSec configuration */
-struct nix_inline_ipsec_cfg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io cpt_credit;
- struct {
- uint8_t __otx2_io egrp;
- uint8_t __otx2_io opcode;
- } gen_cfg;
- struct {
- uint16_t __otx2_io cpt_pf_func;
- uint8_t __otx2_io cpt_slot;
- } inst_qsel;
- uint8_t __otx2_io enable;
-};
-
-/* Per NIX LF inline IPSec configuration */
-struct nix_inline_ipsec_lf_cfg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io sa_base_addr;
- struct {
- uint32_t __otx2_io tag_const;
- uint16_t __otx2_io lenm1_max;
- uint8_t __otx2_io sa_pow2_size;
- uint8_t __otx2_io tt;
- } ipsec_cfg0;
- struct {
- uint32_t __otx2_io sa_idx_max;
- uint8_t __otx2_io sa_idx_w;
- } ipsec_cfg1;
- uint8_t __otx2_io enable;
-};
-
-/* SSO mailbox error codes
- * Range 501 - 600.
- */
-enum sso_af_status {
- SSO_AF_ERR_PARAM = -501,
- SSO_AF_ERR_LF_INVALID = -502,
- SSO_AF_ERR_AF_LF_ALLOC = -503,
- SSO_AF_ERR_GRP_EBUSY = -504,
- SSO_AF_INVAL_NPA_PF_FUNC = -505,
-};
-
-struct sso_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io xaq_buf_size;
- uint32_t __otx2_io xaq_wq_entries;
- uint32_t __otx2_io in_unit_entries;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_lf_free_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hwgrps;
-};
-
-/* SSOW mailbox error codes
- * Range 601 - 700.
- */
-enum ssow_af_status {
- SSOW_AF_ERR_PARAM = -601,
- SSOW_AF_ERR_LF_INVALID = -602,
- SSOW_AF_ERR_AF_LF_ALLOC = -603,
-};
-
-struct ssow_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hws;
-};
-
-struct ssow_lf_free_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hws;
-};
-
-struct sso_hw_setconfig {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io npa_aura_id;
- uint16_t __otx2_io npa_pf_func;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_release_xaq {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_info_req {
- struct mbox_msghdr hdr;
- union {
- uint16_t __otx2_io grp;
- uint16_t __otx2_io hws;
- };
-};
-
-struct sso_grp_priority {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint8_t __otx2_io priority;
- uint8_t __otx2_io affinity;
- uint8_t __otx2_io weight;
-};
-
-struct sso_grp_qos_cfg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint32_t __otx2_io xaq_limit;
- uint16_t __otx2_io taq_thr;
- uint16_t __otx2_io iaq_thr;
-};
-
-struct sso_grp_stats {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint64_t __otx2_io ws_pc;
- uint64_t __otx2_io ext_pc;
- uint64_t __otx2_io wa_pc;
- uint64_t __otx2_io ts_pc;
- uint64_t __otx2_io ds_pc;
- uint64_t __otx2_io dq_pc;
- uint64_t __otx2_io aw_status;
- uint64_t __otx2_io page_cnt;
-};
-
-struct sso_hws_stats {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io hws;
- uint64_t __otx2_io arbitration;
-};
-
-/* CPT mailbox error codes
- * Range 901 - 1000.
- */
-enum cpt_af_status {
- CPT_AF_ERR_PARAM = -901,
- CPT_AF_ERR_GRP_INVALID = -902,
- CPT_AF_ERR_LF_INVALID = -903,
- CPT_AF_ERR_ACCESS_DENIED = -904,
- CPT_AF_ERR_SSO_PF_FUNC_INVALID = -905,
- CPT_AF_ERR_NIX_PF_FUNC_INVALID = -906,
- CPT_AF_ERR_INLINE_IPSEC_INB_ENA = -907,
- CPT_AF_ERR_INLINE_IPSEC_OUT_ENA = -908
-};
-
-/* CPT mbox message formats */
-
-struct cpt_rd_wr_reg_msg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io reg_offset;
- uint64_t __otx2_io *ret_val;
- uint64_t __otx2_io val;
- uint8_t __otx2_io is_write;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- uint8_t __otx2_io blkaddr;
-};
-
-struct cpt_set_crypto_grp_req_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io crypto_eng_grp;
-};
-
-struct cpt_lf_alloc_req_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io nix_pf_func;
- uint16_t __otx2_io sso_pf_func;
- uint16_t __otx2_io eng_grpmask;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- uint8_t __otx2_io blkaddr;
-};
-
-struct cpt_lf_alloc_rsp_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io eng_grpmsk;
-};
-
-#define CPT_INLINE_INBOUND 0
-#define CPT_INLINE_OUTBOUND 1
-
-struct cpt_inline_ipsec_cfg_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io enable;
- uint8_t __otx2_io slot;
- uint8_t __otx2_io dir;
- uint16_t __otx2_io sso_pf_func; /* Inbound path SSO_PF_FUNC */
- uint16_t __otx2_io nix_pf_func; /* Outbound path NIX_PF_FUNC */
-};
-
-struct cpt_rx_inline_lf_cfg_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sso_pf_func;
-};
-
-enum cpt_eng_type {
- CPT_ENG_TYPE_AE = 1,
- CPT_ENG_TYPE_SE = 2,
- CPT_ENG_TYPE_IE = 3,
- CPT_MAX_ENG_TYPES,
-};
-
-/* CPT HW capabilities */
-union cpt_eng_caps {
- uint64_t __otx2_io u;
- struct {
- uint64_t __otx2_io reserved_0_4:5;
- uint64_t __otx2_io mul:1;
- uint64_t __otx2_io sha1_sha2:1;
- uint64_t __otx2_io chacha20:1;
- uint64_t __otx2_io zuc_snow3g:1;
- uint64_t __otx2_io sha3:1;
- uint64_t __otx2_io aes:1;
- uint64_t __otx2_io kasumi:1;
- uint64_t __otx2_io des:1;
- uint64_t __otx2_io crc:1;
- uint64_t __otx2_io reserved_14_63:50;
- };
-};
-
-struct cpt_caps_rsp_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cpt_pf_drv_version;
- uint8_t __otx2_io cpt_revision;
- union cpt_eng_caps eng_caps[CPT_MAX_ENG_TYPES];
-};
-
-/* NPC mbox message structs */
-
-#define NPC_MCAM_ENTRY_INVALID 0xFFFF
-#define NPC_MCAM_INVALID_MAP 0xFFFF
-
-/* NPC mailbox error codes
- * Range 701 - 800.
- */
-enum npc_af_status {
- NPC_MCAM_INVALID_REQ = -701,
- NPC_MCAM_ALLOC_DENIED = -702,
- NPC_MCAM_ALLOC_FAILED = -703,
- NPC_MCAM_PERM_DENIED = -704,
- NPC_AF_ERR_HIGIG_CONFIG_FAIL = -705,
-};
-
-struct npc_mcam_alloc_entry_req {
- struct mbox_msghdr hdr;
-#define NPC_MAX_NONCONTIG_ENTRIES 256
- uint8_t __otx2_io contig; /* Contiguous entries ? */
-#define NPC_MCAM_ANY_PRIO 0
-#define NPC_MCAM_LOWER_PRIO 1
-#define NPC_MCAM_HIGHER_PRIO 2
- uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */
- uint16_t __otx2_io ref_entry;
- uint16_t __otx2_io count; /* Number of entries requested */
-};
-
-struct npc_mcam_alloc_entry_rsp {
- struct mbox_msghdr hdr;
- /* Entry alloc'ed or start index if contiguous.
- * Invalid in case of non-contiguous.
- */
- uint16_t __otx2_io entry;
- uint16_t __otx2_io count; /* Number of entries allocated */
- uint16_t __otx2_io free_count; /* Number of entries available */
- uint16_t __otx2_io entry_list[NPC_MAX_NONCONTIG_ENTRIES];
-};
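
A sketch of allocating a small contiguous block of MCAM entries above a
reference entry (the `ref` index is hypothetical; error handling trimmed):

    struct npc_mcam_alloc_entry_req *req;
    struct npc_mcam_alloc_entry_rsp *rsp;
    uint16_t ref = 0; /* existing entry used as the priority anchor */

    req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
    req->contig = 1;
    req->priority = NPC_MCAM_HIGHER_PRIO;
    req->ref_entry = ref;
    req->count = 4;
    if (otx2_mbox_process_msg(mbox, (void **)&rsp) == 0) {
        /* rsp->entry is the start index; rsp->count is what was granted */
    }
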
-
-struct npc_mcam_free_entry_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry; /* Entry index to be freed */
- uint8_t __otx2_io all; /* Free all entries alloc'ed to this PFVF */
-};
-
-struct mcam_entry {
-#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max key width */
- uint64_t __otx2_io kw[NPC_MAX_KWS_IN_KEY];
- uint64_t __otx2_io kw_mask[NPC_MAX_KWS_IN_KEY];
- uint64_t __otx2_io action;
- uint64_t __otx2_io vtag_action;
-};
-
-struct npc_mcam_write_entry_req {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint16_t __otx2_io entry; /* MCAM entry to write this match key */
- uint16_t __otx2_io cntr; /* Counter for this MCAM entry */
- uint8_t __otx2_io intf; /* Rx or Tx interface */
- uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */
- uint8_t __otx2_io set_cntr; /* Set counter for this entry ? */
-};
-
-/* Enable/Disable a given entry */
-struct npc_mcam_ena_dis_entry_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
-};
-
-struct npc_mcam_shift_entry_req {
- struct mbox_msghdr hdr;
-#define NPC_MCAM_MAX_SHIFTS 64
- uint16_t __otx2_io curr_entry[NPC_MCAM_MAX_SHIFTS];
- uint16_t __otx2_io new_entry[NPC_MCAM_MAX_SHIFTS];
- uint16_t __otx2_io shift_count; /* Number of entries to shift */
-};
-
-struct npc_mcam_shift_entry_rsp {
- struct mbox_msghdr hdr;
- /* Index in 'curr_entry', not entry itself */
- uint16_t __otx2_io failed_entry_idx;
-};
-
-struct npc_mcam_alloc_counter_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io contig; /* Contiguous counters ? */
-#define NPC_MAX_NONCONTIG_COUNTERS 64
- uint16_t __otx2_io count; /* Number of counters requested */
-};
-
-struct npc_mcam_alloc_counter_rsp {
- struct mbox_msghdr hdr;
- /* Counter alloc'ed or start idx if contiguous.
- * Invalid in case of non-contiguous.
- */
- uint16_t __otx2_io cntr;
- uint16_t __otx2_io count; /* Number of counters allocated */
- uint16_t __otx2_io cntr_list[NPC_MAX_NONCONTIG_COUNTERS];
-};
-
-struct npc_mcam_oper_counter_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cntr; /* Free a counter or clear/fetch its stats */
-};
-
-struct npc_mcam_oper_counter_rsp {
- struct mbox_msghdr hdr;
- /* valid only while fetching counter's stats */
- uint64_t __otx2_io stat;
-};
-
-struct npc_mcam_unmap_counter_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cntr;
- uint16_t __otx2_io entry; /* Entry and counter to be unmapped */
- uint8_t __otx2_io all; /* Unmap all entries using this counter ? */
-};
-
-struct npc_mcam_alloc_and_write_entry_req {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint16_t __otx2_io ref_entry;
- uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */
- uint8_t __otx2_io intf; /* Rx or Tx interface */
- uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */
- uint8_t __otx2_io alloc_cntr; /* Allocate counter and map ? */
-};
-
-struct npc_mcam_alloc_and_write_entry_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io cntr;
-};
-
-struct npc_get_kex_cfg_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */
- uint64_t __otx2_io tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */
-#define NPC_MAX_INTF 2
-#define NPC_MAX_LID 8
-#define NPC_MAX_LT 16
-#define NPC_MAX_LD 2
-#define NPC_MAX_LFL 16
- /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
- uint64_t __otx2_io kex_ld_flags[NPC_MAX_LD];
- /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */
- uint64_t __otx2_io
- intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD];
- /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */
- uint64_t __otx2_io
- intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
-#define MKEX_NAME_LEN 128
- uint8_t __otx2_io mkex_pfl_name[MKEX_NAME_LEN];
-};
-
-enum header_fields {
- NPC_DMAC,
- NPC_SMAC,
- NPC_ETYPE,
- NPC_OUTER_VID,
- NPC_TOS,
- NPC_SIP_IPV4,
- NPC_DIP_IPV4,
- NPC_SIP_IPV6,
- NPC_DIP_IPV6,
- NPC_SPORT_TCP,
- NPC_DPORT_TCP,
- NPC_SPORT_UDP,
- NPC_DPORT_UDP,
- NPC_FDSA_VAL,
- NPC_HEADER_FIELDS_MAX,
-};
-
-struct flow_msg {
- unsigned char __otx2_io dmac[6];
- unsigned char __otx2_io smac[6];
- uint16_t __otx2_io etype;
- uint16_t __otx2_io vlan_etype;
- uint16_t __otx2_io vlan_tci;
- union {
- uint32_t __otx2_io ip4src;
- uint32_t __otx2_io ip6src[4];
- };
- union {
- uint32_t __otx2_io ip4dst;
- uint32_t __otx2_io ip6dst[4];
- };
- uint8_t __otx2_io tos;
- uint8_t __otx2_io ip_ver;
- uint8_t __otx2_io ip_proto;
- uint8_t __otx2_io tc;
- uint16_t __otx2_io sport;
- uint16_t __otx2_io dport;
-};
-
-struct npc_install_flow_req {
- struct mbox_msghdr hdr;
- struct flow_msg packet;
- struct flow_msg mask;
- uint64_t __otx2_io features;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io channel;
- uint8_t __otx2_io intf;
- uint8_t __otx2_io set_cntr;
- uint8_t __otx2_io default_rule;
- /* Overwrite(0) or append(1) flow to default rule? */
- uint8_t __otx2_io append;
- uint16_t __otx2_io vf;
- /* action */
- uint32_t __otx2_io index;
- uint16_t __otx2_io match_id;
- uint8_t __otx2_io flow_key_alg;
- uint8_t __otx2_io op;
- /* vtag action */
- uint8_t __otx2_io vtag0_type;
- uint8_t __otx2_io vtag0_valid;
- uint8_t __otx2_io vtag1_type;
- uint8_t __otx2_io vtag1_valid;
-
- /* vtag tx action */
- uint16_t __otx2_io vtag0_def;
- uint8_t __otx2_io vtag0_op;
- uint16_t __otx2_io vtag1_def;
- uint8_t __otx2_io vtag1_op;
-};
-
-struct npc_install_flow_rsp {
- struct mbox_msghdr hdr;
- /* Negative if no counter else counter number */
- int __otx2_io counter;
-};
-
-struct npc_delete_flow_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io start; /* Disable range of entries */
- uint16_t __otx2_io end;
- uint8_t __otx2_io all; /* PF + VFs */
-};
-
-struct npc_mcam_read_entry_req {
- struct mbox_msghdr hdr;
- /* MCAM entry to read */
- uint16_t __otx2_io entry;
-};
-
-struct npc_mcam_read_entry_rsp {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint8_t __otx2_io intf;
- uint8_t __otx2_io enable;
-};
-
-struct npc_mcam_read_base_rule_rsp {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
-};
-
-/* TIM mailbox error codes
- * Range 801 - 900.
- */
-enum tim_af_status {
- TIM_AF_NO_RINGS_LEFT = -801,
- TIM_AF_INVALID_NPA_PF_FUNC = -802,
- TIM_AF_INVALID_SSO_PF_FUNC = -803,
- TIM_AF_RING_STILL_RUNNING = -804,
- TIM_AF_LF_INVALID = -805,
- TIM_AF_CSIZE_NOT_ALIGNED = -806,
- TIM_AF_CSIZE_TOO_SMALL = -807,
- TIM_AF_CSIZE_TOO_BIG = -808,
- TIM_AF_INTERVAL_TOO_SMALL = -809,
- TIM_AF_INVALID_BIG_ENDIAN_VALUE = -810,
- TIM_AF_INVALID_CLOCK_SOURCE = -811,
- TIM_AF_GPIO_CLK_SRC_NOT_ENABLED = -812,
- TIM_AF_INVALID_BSIZE = -813,
- TIM_AF_INVALID_ENABLE_PERIODIC = -814,
- TIM_AF_INVALID_ENABLE_DONTFREE = -815,
- TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816,
- TIM_AF_RING_ALREADY_DISABLED = -817,
-};
-
-enum tim_clk_srcs {
- TIM_CLK_SRCS_TENNS = 0,
- TIM_CLK_SRCS_GPIO = 1,
- TIM_CLK_SRCS_GTI = 2,
- TIM_CLK_SRCS_PTP = 3,
- TIM_CLK_SRSC_INVALID,
-};
-
-enum tim_gpio_edge {
- TIM_GPIO_NO_EDGE = 0,
- TIM_GPIO_LTOH_TRANS = 1,
- TIM_GPIO_HTOL_TRANS = 2,
- TIM_GPIO_BOTH_TRANS = 3,
- TIM_GPIO_INVALID,
-};
-
-enum ptp_op {
- PTP_OP_ADJFINE = 0, /* adjfine(req.scaled_ppm); */
- PTP_OP_GET_CLOCK = 1, /* rsp.clk = get_clock() */
-};
-
-struct ptp_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io op;
- int64_t __otx2_io scaled_ppm;
- uint8_t __otx2_io is_pmu;
-};
-
-struct ptp_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io clk;
- uint64_t __otx2_io tsc;
-};
-
-struct get_hw_cap_rsp {
- struct mbox_msghdr hdr;
- /* Schq mapping fixed or flexible */
- uint8_t __otx2_io nix_fixed_txschq_mapping;
- uint8_t __otx2_io nix_shaping; /* Is shaping and coloring supported */
-};
-
-struct ndc_sync_op {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io nix_lf_tx_sync;
- uint8_t __otx2_io nix_lf_rx_sync;
- uint8_t __otx2_io npa_lf_sync;
-};
-
-struct tim_lf_alloc_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
- uint16_t __otx2_io npa_pf_func;
- uint16_t __otx2_io sso_pf_func;
-};
-
-struct tim_ring_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
-};
-
-struct tim_config_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
- uint8_t __otx2_io bigendian;
- uint8_t __otx2_io clocksource;
- uint8_t __otx2_io enableperiodic;
- uint8_t __otx2_io enabledontfreebuffer;
- uint32_t __otx2_io bucketsize;
- uint32_t __otx2_io chunksize;
- uint32_t __otx2_io interval;
-};
-
-struct tim_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io tenns_clk;
-};
-
-struct tim_enable_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io timestarted;
- uint32_t __otx2_io currentbucket;
-};
-
-/* REE mailbox error codes
- * Range 1001 - 1100.
- */
-enum ree_af_status {
- REE_AF_ERR_RULE_UNKNOWN_VALUE = -1001,
- REE_AF_ERR_LF_NO_MORE_RESOURCES = -1002,
- REE_AF_ERR_LF_INVALID = -1003,
- REE_AF_ERR_ACCESS_DENIED = -1004,
- REE_AF_ERR_RULE_DB_PARTIAL = -1005,
- REE_AF_ERR_RULE_DB_EQ_BAD_VALUE = -1006,
- REE_AF_ERR_RULE_DB_BLOCK_ALLOC_FAILED = -1007,
- REE_AF_ERR_BLOCK_NOT_IMPLEMENTED = -1008,
- REE_AF_ERR_RULE_DB_INC_OFFSET_TOO_BIG = -1009,
- REE_AF_ERR_RULE_DB_OFFSET_TOO_BIG = -1010,
- REE_AF_ERR_Q_IS_GRACEFUL_DIS = -1011,
- REE_AF_ERR_Q_NOT_GRACEFUL_DIS = -1012,
- REE_AF_ERR_RULE_DB_ALLOC_FAILED = -1013,
- REE_AF_ERR_RULE_DB_TOO_BIG = -1014,
- REE_AF_ERR_RULE_DB_GEQ_BAD_VALUE = -1015,
- REE_AF_ERR_RULE_DB_LEQ_BAD_VALUE = -1016,
- REE_AF_ERR_RULE_DB_WRONG_LENGTH = -1017,
- REE_AF_ERR_RULE_DB_WRONG_OFFSET = -1018,
- REE_AF_ERR_RULE_DB_BLOCK_TOO_BIG = -1019,
- REE_AF_ERR_RULE_DB_SHOULD_FILL_REQUEST = -1020,
- REE_AF_ERR_RULE_DBI_ALLOC_FAILED = -1021,
- REE_AF_ERR_LF_WRONG_PRIORITY = -1022,
- REE_AF_ERR_LF_SIZE_TOO_BIG = -1023,
-};
-
-/* REE mbox message formats */
-
-struct ree_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
-};
-
-struct ree_lf_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io size;
- uint8_t __otx2_io lf;
- uint8_t __otx2_io pri;
-};
-
-struct ree_rule_db_prog_req_msg {
- struct mbox_msghdr hdr;
-#define REE_RULE_DB_REQ_BLOCK_SIZE (MBOX_SIZE >> 1)
- uint8_t __otx2_io rule_db[REE_RULE_DB_REQ_BLOCK_SIZE];
- uint32_t __otx2_io blkaddr; /* REE0 or REE1 */
- uint32_t __otx2_io total_len; /* total len of rule db */
- uint32_t __otx2_io offset; /* offset of current rule db block */
- uint16_t __otx2_io len; /* length of rule db block */
- uint8_t __otx2_io is_last; /* is this the last block */
- uint8_t __otx2_io is_incremental; /* is incremental flow */
- uint8_t __otx2_io is_dbi; /* is rule db incremental */
-};
-
-struct ree_rule_db_get_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io offset; /* retrieve db from this offset */
- uint8_t __otx2_io is_dbi; /* is request for rule db incremental */
-};
-
-struct ree_rd_wr_reg_msg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io reg_offset;
- uint64_t __otx2_io *ret_val;
- uint64_t __otx2_io val;
- uint32_t __otx2_io blkaddr;
- uint8_t __otx2_io is_write;
-};
-
-struct ree_rule_db_len_rsp_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io len;
- uint32_t __otx2_io inc_len;
-};
-
-struct ree_rule_db_get_rsp_msg {
- struct mbox_msghdr hdr;
-#define REE_RULE_DB_RSP_BLOCK_SIZE (MBOX_DOWN_TX_SIZE - SZ_1K)
- uint8_t __otx2_io rule_db[REE_RULE_DB_RSP_BLOCK_SIZE];
- uint32_t __otx2_io total_len; /* total len of rule db */
- uint32_t __otx2_io offset; /* offset of current rule db block */
- uint16_t __otx2_io len; /* length of rule db block */
- uint8_t __otx2_io is_last; /* is this the last block */
-};
-
-__rte_internal
-const char *otx2_mbox_id2name(uint16_t id);
-int otx2_mbox_id2size(uint16_t id);
-void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
-int otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
- int direction, int ndevsi, uint64_t intr_offset);
-void otx2_mbox_fini(struct otx2_mbox *mbox);
-__rte_internal
-void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
-__rte_internal
-int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
-int otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo);
-__rte_internal
-int otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg);
-__rte_internal
-int otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
- uint32_t tmo);
-int otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid);
-__rte_internal
-struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
- int size, int size_rsp);
-
-static inline struct mbox_msghdr *
-otx2_mbox_alloc_msg(struct otx2_mbox *mbox, int devid, int size)
-{
- return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
-}
-
-static inline void
-otx2_mbox_req_init(uint16_t mbox_id, void *msghdr)
-{
- struct mbox_msghdr *hdr = msghdr;
-
- hdr->sig = OTX2_MBOX_REQ_SIG;
- hdr->ver = OTX2_MBOX_VERSION;
- hdr->id = mbox_id;
- hdr->pcifunc = 0;
-}
-
-static inline void
-otx2_mbox_rsp_init(uint16_t mbox_id, void *msghdr)
-{
- struct mbox_msghdr *hdr = msghdr;
-
- hdr->sig = OTX2_MBOX_RSP_SIG;
- hdr->rc = -ETIMEDOUT;
- hdr->id = mbox_id;
-}
-
-static inline bool
-otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- bool ret;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- ret = mdev->num_msgs != 0;
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return ret;
-}
-
-static inline int
-otx2_mbox_process(struct otx2_mbox *mbox)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp(mbox, 0, NULL);
-}
-
-static inline int
-otx2_mbox_process_msg(struct otx2_mbox *mbox, void **msg)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp(mbox, 0, msg);
-}
-
-static inline int
-otx2_mbox_process_tmo(struct otx2_mbox *mbox, uint32_t tmo)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp_tmo(mbox, 0, NULL, tmo);
-}
-
-static inline int
-otx2_mbox_process_msg_tmo(struct otx2_mbox *mbox, void **msg, uint32_t tmo)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp_tmo(mbox, 0, msg, tmo);
-}
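
Putting the pieces together, a full synchronous exchange with the AF might
look like the sketch below, using the otx2_mbox_alloc_msg_ready() helper
generated by the M() macro further down; otx2_send_ready_msg(), declared
next, presumably wraps this same pattern:

    struct ready_msg_rsp *rsp;
    int rc;

    otx2_mbox_alloc_msg_ready(mbox);  /* queue the READY request */
    rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
    if (rc == 0) {
        /* rsp->hdr.pcifunc identifies this PF/VF; clock
         * frequencies are reported in sclk_feq/rclk_freq.
         */
    }
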
-
-int otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pf_func /* out */);
-int otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pf_func,
- uint16_t id);
-
-#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
-static inline struct _req_type \
-*otx2_mbox_alloc_msg_ ## _fn_name(struct otx2_mbox *mbox) \
-{ \
- struct _req_type *req; \
- \
- req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \
- mbox, 0, sizeof(struct _req_type), \
- sizeof(struct _rsp_type)); \
- if (!req) \
- return NULL; \
- \
- req->hdr.sig = OTX2_MBOX_REQ_SIG; \
- req->hdr.id = _id; \
- otx2_mbox_dbg("id=0x%x (%s)", \
- req->hdr.id, otx2_mbox_id2name(req->hdr.id)); \
- return req; \
-}
-
-MBOX_MESSAGES
-#undef M
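
Hand-expanding the macro for the READY entry (for illustration only; the
debug log line is elided) gives roughly:

    static inline struct msg_req *
    otx2_mbox_alloc_msg_ready(struct otx2_mbox *mbox)
    {
        struct msg_req *req;

        req = (struct msg_req *)otx2_mbox_alloc_msg_rsp(
            mbox, 0, sizeof(struct msg_req),
            sizeof(struct ready_msg_rsp));
        if (!req)
            return NULL;

        req->hdr.sig = OTX2_MBOX_REQ_SIG;
        req->hdr.id = 0x001; /* MBOX_MSG_READY */
        return req;
    }
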
-
-/* This is required for copy operations from device memory, which do not work
- * on addresses that are not aligned to 16B, because of specific
- * optimizations in libc memcpy.
- */
-static inline volatile void *
-otx2_mbox_memcpy(volatile void *d, const volatile void *s, size_t l)
-{
- const volatile uint8_t *sb;
- volatile uint8_t *db;
- size_t i;
-
- if (!d || !s)
- return NULL;
- db = (volatile uint8_t *)d;
- sb = (const volatile uint8_t *)s;
- for (i = 0; i < l; i++)
- db[i] = sb[i];
- return d;
-}
-
-/* This is required for memory operations from device memory, which do not
- * work on addresses that are not aligned to 16B, because of specific
- * optimizations in libc memset.
- */
-static inline void
-otx2_mbox_memset(volatile void *d, uint8_t val, size_t l)
-{
- volatile uint8_t *db;
- size_t i = 0;
-
- if (!d || !l)
- return;
- db = (volatile uint8_t *)d;
- for (i = 0; i < l; i++)
- db[i] = val;
-}
-
-#endif /* __OTX2_MBOX_H__ */
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
deleted file mode 100644
index b561b67174..0000000000
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_bus_pci.h>
-#include <ethdev_driver.h>
-#include <rte_spinlock.h>
-
-#include "otx2_common.h"
-#include "otx2_sec_idev.h"
-
-static struct otx2_sec_idev_cfg sec_cfg[OTX2_MAX_INLINE_PORTS];
-
-/**
- * @internal
- * Check if the rte_eth_dev is a security-offload-capable otx2_eth_dev
- */
-uint8_t
-otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev;
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_PF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_VF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_AF_VF)
- return 1;
-
- return 0;
-}
-
-int
-otx2_sec_idev_cfg_init(int port_id)
-{
- struct otx2_sec_idev_cfg *cfg;
- int i;
-
- cfg = &sec_cfg[port_id];
- cfg->tx_cpt_idx = 0;
- rte_spinlock_init(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- cfg->tx_cpt[i].qp = NULL;
- rte_atomic16_set(&cfg->tx_cpt[i].ref_cnt, 0);
- }
-
- return 0;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- int i, ret;
-
- if (qp == NULL || port_id >= OTX2_MAX_INLINE_PORTS)
- return -EINVAL;
-
- cfg = &sec_cfg[port_id];
-
- /* Find a free slot to save CPT LF */
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp == NULL) {
- cfg->tx_cpt[i].qp = qp;
- ret = 0;
- goto unlock;
- }
- }
-
- ret = -EINVAL;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t port_id;
- int i, ret;
-
- if (qp == NULL)
- return -EINVAL;
-
- for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) {
- cfg = &sec_cfg[port_id];
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp != qp)
- continue;
-
- /* Don't free if the QP is in use by any sec session */
- if (rte_atomic16_read(&cfg->tx_cpt[i].ref_cnt)) {
- ret = -EBUSY;
- } else {
- cfg->tx_cpt[i].qp = NULL;
- ret = 0;
- }
-
- goto unlock;
- }
-
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- }
-
- return -ENOENT;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t index;
- int i, ret;
-
- if (port_id >= OTX2_MAX_INLINE_PORTS || qp == NULL)
- return -EINVAL;
-
- cfg = &sec_cfg[port_id];
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- index = cfg->tx_cpt_idx;
-
- /* Get the next index with valid data */
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[index].qp != NULL)
- break;
- index = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT;
- }
-
- if (i >= OTX2_MAX_CPT_QP_PER_PORT) {
- ret = -EINVAL;
- goto unlock;
- }
-
- *qp = cfg->tx_cpt[index].qp;
- rte_atomic16_inc(&cfg->tx_cpt[index].ref_cnt);
-
- cfg->tx_cpt_idx = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT;
-
- ret = 0;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
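
A typical borrow/return cycle for a TX CPT queue pair, matching the reference
counting above (a sketch; port_id is a placeholder and error paths are
trimmed):

    struct otx2_cpt_qp *qp;
    uint16_t port_id = 0;

    if (otx2_sec_idev_tx_cpt_qp_get(port_id, &qp) == 0) {
        /* ... submit inline IPsec work on qp ... */
        otx2_sec_idev_tx_cpt_qp_put(qp); /* drop the reference */
    }
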
-
-int
-otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t port_id;
- int i;
-
- if (qp == NULL)
- return -EINVAL;
-
- for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) {
- cfg = &sec_cfg[port_id];
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp == qp) {
- rte_atomic16_dec(&cfg->tx_cpt[i].ref_cnt);
- return 0;
- }
- }
- }
-
- return -EINVAL;
-}
diff --git a/drivers/common/octeontx2/otx2_sec_idev.h b/drivers/common/octeontx2/otx2_sec_idev.h
deleted file mode 100644
index 89cdaf66ab..0000000000
--- a/drivers/common/octeontx2/otx2_sec_idev.h
+++ /dev/null
@@ -1,43 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_SEC_IDEV_H_
-#define _OTX2_SEC_IDEV_H_
-
-#include <rte_ethdev.h>
-
-#define OTX2_MAX_CPT_QP_PER_PORT 64
-#define OTX2_MAX_INLINE_PORTS 64
-
-struct otx2_cpt_qp;
-
-struct otx2_sec_idev_cfg {
- struct {
- struct otx2_cpt_qp *qp;
- rte_atomic16_t ref_cnt;
- } tx_cpt[OTX2_MAX_CPT_QP_PER_PORT];
-
- uint16_t tx_cpt_idx;
- rte_spinlock_t tx_cpt_lock;
-};
-
-__rte_internal
-uint8_t otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev);
-
-__rte_internal
-int otx2_sec_idev_cfg_init(int port_id);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp);
-
-#endif /* _OTX2_SEC_IDEV_H_ */
diff --git a/drivers/common/octeontx2/version.map b/drivers/common/octeontx2/version.map
deleted file mode 100644
index b58f19ce32..0000000000
--- a/drivers/common/octeontx2/version.map
+++ /dev/null
@@ -1,44 +0,0 @@
-INTERNAL {
- global:
-
- otx2_dev_active_vfs;
- otx2_dev_fini;
- otx2_dev_priv_init;
- otx2_disable_irqs;
- otx2_eth_dev_is_sec_capable;
- otx2_intra_dev_get_cfg;
- otx2_logtype_base;
- otx2_logtype_dpi;
- otx2_logtype_ep;
- otx2_logtype_mbox;
- otx2_logtype_nix;
- otx2_logtype_npa;
- otx2_logtype_npc;
- otx2_logtype_ree;
- otx2_logtype_sso;
- otx2_logtype_tim;
- otx2_logtype_tm;
- otx2_mbox_alloc_msg_rsp;
- otx2_mbox_get_rsp;
- otx2_mbox_get_rsp_tmo;
- otx2_mbox_id2name;
- otx2_mbox_msg_send;
- otx2_mbox_wait_for_rsp;
- otx2_npa_lf_active;
- otx2_npa_lf_obj_get;
- otx2_npa_lf_obj_ref;
- otx2_npa_pf_func_get;
- otx2_npa_set_defaults;
- otx2_parse_common_devargs;
- otx2_register_irq;
- otx2_sec_idev_cfg_init;
- otx2_sec_idev_tx_cpt_qp_add;
- otx2_sec_idev_tx_cpt_qp_get;
- otx2_sec_idev_tx_cpt_qp_put;
- otx2_sec_idev_tx_cpt_qp_remove;
- otx2_sso_pf_func_get;
- otx2_sso_pf_func_set;
- otx2_unregister_irq;
-
- local: *;
-};
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index 59f02ea47c..147b8cf633 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -16,7 +16,6 @@ drivers = [
'nitrox',
'null',
'octeontx',
- 'octeontx2',
'openssl',
'scheduler',
'virtio',
diff --git a/drivers/crypto/octeontx2/meson.build b/drivers/crypto/octeontx2/meson.build
deleted file mode 100644
index 3b387cc570..0000000000
--- a/drivers/crypto/octeontx2/meson.build
+++ /dev/null
@@ -1,30 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright (C) 2019 Marvell International Ltd.
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-deps += ['bus_pci']
-deps += ['common_cpt']
-deps += ['common_octeontx2']
-deps += ['ethdev']
-deps += ['eventdev']
-deps += ['security']
-
-sources = files(
- 'otx2_cryptodev.c',
- 'otx2_cryptodev_capabilities.c',
- 'otx2_cryptodev_hw_access.c',
- 'otx2_cryptodev_mbox.c',
- 'otx2_cryptodev_ops.c',
- 'otx2_cryptodev_sec.c',
-)
-
-includes += include_directories('../../common/cpt')
-includes += include_directories('../../common/octeontx2')
-includes += include_directories('../../crypto/octeontx2')
-includes += include_directories('../../mempool/octeontx2')
-includes += include_directories('../../net/octeontx2')
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c
deleted file mode 100644
index fc7ad05366..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev.c
+++ /dev/null
@@ -1,188 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_crypto.h>
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_dev.h>
-#include <rte_errno.h>
-#include <rte_mempool.h>
-#include <rte_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_sec.h"
-#include "otx2_dev.h"
-
-/* CPT common headers */
-#include "cpt_common.h"
-#include "cpt_pmd_logs.h"
-
-uint8_t otx2_cryptodev_driver_id;
-
-static struct rte_pci_id pci_id_cpt_table[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_CPT_VF)
- },
- /* sentinel */
- {
- .device_id = 0
- },
-};
-
-uint64_t
-otx2_cpt_default_ff_get(void)
-{
- return RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_HW_ACCELERATED |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- RTE_CRYPTODEV_FF_IN_PLACE_SGL |
- RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
- RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
- RTE_CRYPTODEV_FF_SECURITY |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-}
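
otx2_cpt_default_ff_get() above advertises the device's feature set as a bitmask of RTE_CRYPTODEV_FF_* flags. Applications can test those bits through the generic cryptodev API; a small sketch with a hypothetical helper name::

    #include <rte_cryptodev.h>

    /* Return non-zero when the crypto device advertises the security
     * (lookaside IPsec) offload, i.e. RTE_CRYPTODEV_FF_SECURITY is set.
     */
    static int
    dev_supports_security(uint8_t dev_id)
    {
        struct rte_cryptodev_info info;

        rte_cryptodev_info_get(dev_id, &info);
        return (info.feature_flags & RTE_CRYPTODEV_FF_SECURITY) != 0;
    }
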
-
-static int
-otx2_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
- struct rte_pci_device *pci_dev)
-{
- struct rte_cryptodev_pmd_init_params init_params = {
- .name = "",
- .socket_id = rte_socket_id(),
- .private_data_size = sizeof(struct otx2_cpt_vf)
- };
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- struct rte_cryptodev *dev;
- struct otx2_dev *otx2_dev;
- struct otx2_cpt_vf *vf;
- uint16_t nb_queues;
- int ret;
-
- rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
-
- dev = rte_cryptodev_pmd_create(name, &pci_dev->device, &init_params);
- if (dev == NULL) {
- ret = -ENODEV;
- goto exit;
- }
-
- dev->dev_ops = &otx2_cpt_ops;
-
- dev->driver_id = otx2_cryptodev_driver_id;
-
- /* Get private data space allocated */
- vf = dev->data->dev_private;
-
- otx2_dev = &vf->otx2_dev;
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- /* Initialize the base otx2_dev object */
- ret = otx2_dev_init(pci_dev, otx2_dev);
- if (ret) {
- CPT_LOG_ERR("Could not initialize otx2_dev");
- goto pmd_destroy;
- }
-
- /* Get number of queues available on the device */
- ret = otx2_cpt_available_queues_get(dev, &nb_queues);
- if (ret) {
- CPT_LOG_ERR("Could not determine the number of queues available");
- goto otx2_dev_fini;
- }
-
- /* Don't exceed the limits set per VF */
- nb_queues = RTE_MIN(nb_queues, OTX2_CPT_MAX_QUEUES_PER_VF);
-
- if (nb_queues == 0) {
- CPT_LOG_ERR("No free queues available on the device");
- goto otx2_dev_fini;
- }
-
- vf->max_queues = nb_queues;
-
- CPT_LOG_INFO("Max queues supported by device: %d",
- vf->max_queues);
-
- ret = otx2_cpt_hardware_caps_get(dev, vf->hw_caps);
- if (ret) {
- CPT_LOG_ERR("Could not determine hardware capabilities");
- goto otx2_dev_fini;
- }
- }
-
- otx2_crypto_capabilities_init(vf->hw_caps);
- otx2_crypto_sec_capabilities_init(vf->hw_caps);
-
- /* Create security ctx */
- ret = otx2_crypto_sec_ctx_create(dev);
- if (ret)
- goto otx2_dev_fini;
-
- dev->feature_flags = otx2_cpt_default_ff_get();
-
- if (rte_eal_process_type() == RTE_PROC_SECONDARY)
- otx2_cpt_set_enqdeq_fns(dev);
-
- rte_cryptodev_pmd_probing_finish(dev);
-
- return 0;
-
-otx2_dev_fini:
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- otx2_dev_fini(pci_dev, otx2_dev);
-pmd_destroy:
- rte_cryptodev_pmd_destroy(dev);
-exit:
- CPT_LOG_ERR("Could not create device (vendor_id: 0x%x device_id: 0x%x)",
- pci_dev->id.vendor_id, pci_dev->id.device_id);
- return ret;
-}
-
-static int
-otx2_cpt_pci_remove(struct rte_pci_device *pci_dev)
-{
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- struct rte_cryptodev *dev;
-
- if (pci_dev == NULL)
- return -EINVAL;
-
- rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
-
- dev = rte_cryptodev_pmd_get_named_dev(name);
- if (dev == NULL)
- return -ENODEV;
-
- /* Destroy security ctx */
- otx2_crypto_sec_ctx_destroy(dev);
-
- return rte_cryptodev_pmd_destroy(dev);
-}
-
-static struct rte_pci_driver otx2_cryptodev_pmd = {
- .id_table = pci_id_cpt_table,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = otx2_cpt_pci_probe,
- .remove = otx2_cpt_pci_remove,
-};
-
-static struct cryptodev_driver otx2_cryptodev_drv;
-
-RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_OCTEONTX2_PMD, otx2_cryptodev_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_OCTEONTX2_PMD, pci_id_cpt_table);
-RTE_PMD_REGISTER_KMOD_DEP(CRYPTODEV_NAME_OCTEONTX2_PMD, "vfio-pci");
-RTE_PMD_REGISTER_CRYPTO_DRIVER(otx2_cryptodev_drv, otx2_cryptodev_pmd.driver,
- otx2_cryptodev_driver_id);
-RTE_LOG_REGISTER_DEFAULT(otx2_cpt_logtype, NOTICE);
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.h b/drivers/crypto/octeontx2/otx2_cryptodev.h
deleted file mode 100644
index 15ecfe45b6..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_H_
-#define _OTX2_CRYPTODEV_H_
-
-#include "cpt_common.h"
-#include "cpt_hw_types.h"
-
-#include "otx2_dev.h"
-
-/* Marvell OCTEON TX2 Crypto PMD device name */
-#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
-
-#define OTX2_CPT_MAX_LFS 128
-#define OTX2_CPT_MAX_QUEUES_PER_VF 64
-#define OTX2_CPT_MAX_BLKS 2
-#define OTX2_CPT_PMD_VERSION 3
-#define OTX2_CPT_REVISION_ID_3 3
-
-/**
- * Device private data
- */
-struct otx2_cpt_vf {
- struct otx2_dev otx2_dev;
- /**< Base class */
- uint16_t max_queues;
- /**< Max queues supported */
- uint8_t nb_queues;
- /**< Number of crypto queues attached */
- uint16_t lf_msixoff[OTX2_CPT_MAX_LFS];
- /**< MSI-X offsets */
- uint8_t lf_blkaddr[OTX2_CPT_MAX_LFS];
- /**< CPT0/1 BLKADDR of LFs */
- uint8_t cpt_revision;
- /**< CPT revision */
- uint8_t err_intr_registered:1;
- /**< Are error interrupts registered? */
- union cpt_eng_caps hw_caps[CPT_MAX_ENG_TYPES];
- /**< CPT device capabilities */
-};
-
-struct cpt_meta_info {
- uint64_t deq_op_info[5];
- uint64_t comp_code_sz;
- union cpt_res_s cpt_res __rte_aligned(16);
- struct cpt_request_info cpt_req;
-};
-
-#define CPT_LOGTYPE otx2_cpt_logtype
-
-extern int otx2_cpt_logtype;
-
-/*
- * Crypto device driver ID
- */
-extern uint8_t otx2_cryptodev_driver_id;
-
-uint64_t otx2_cpt_default_ff_get(void);
-void otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
-
-#endif /* _OTX2_CRYPTODEV_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
deleted file mode 100644
index ba3fbbbe22..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
+++ /dev/null
@@ -1,924 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_security.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_mbox.h"
-
-#define CPT_EGRP_GET(hw_caps, name, egrp) do { \
- if ((hw_caps[CPT_ENG_TYPE_SE].name) && \
- (hw_caps[CPT_ENG_TYPE_IE].name)) \
- *egrp = OTX2_CPT_EGRP_SE_IE; \
- else if (hw_caps[CPT_ENG_TYPE_SE].name) \
- *egrp = OTX2_CPT_EGRP_SE; \
- else if (hw_caps[CPT_ENG_TYPE_AE].name) \
- *egrp = OTX2_CPT_EGRP_AE; \
- else \
- *egrp = OTX2_CPT_EGRP_MAX; \
-} while (0)
-
-#define CPT_CAPS_ADD(hw_caps, name) do { \
- enum otx2_cpt_egrp egrp; \
- CPT_EGRP_GET(hw_caps, name, &egrp); \
- if (egrp < OTX2_CPT_EGRP_MAX) \
- cpt_caps_add(caps_##name, RTE_DIM(caps_##name)); \
-} while (0)
-
-#define SEC_CAPS_ADD(hw_caps, name) do { \
- enum otx2_cpt_egrp egrp; \
- CPT_EGRP_GET(hw_caps, name, &egrp); \
- if (egrp < OTX2_CPT_EGRP_MAX) \
- sec_caps_add(sec_caps_##name, RTE_DIM(sec_caps_##name));\
-} while (0)
-
-#define OTX2_CPT_MAX_CAPS 34
-#define OTX2_SEC_MAX_CAPS 4
-
-static struct rte_cryptodev_capabilities otx2_cpt_caps[OTX2_CPT_MAX_CAPS];
-static struct rte_cryptodev_capabilities otx2_cpt_sec_caps[OTX2_SEC_MAX_CAPS];
-
-static const struct rte_cryptodev_capabilities caps_mul[] = {
- { /* RSA */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
- .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
- (1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
- (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
- (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
- {.modlen = {
- .min = 17,
- .max = 1024,
- .increment = 1
- }, }
- }
- }, }
- },
- { /* MOD_EXP */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,
- .op_types = 0,
- {.modlen = {
- .min = 17,
- .max = 1024,
- .increment = 1
- }, }
- }
- }, }
- },
- { /* ECDSA */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA,
- .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
- (1 << RTE_CRYPTO_ASYM_OP_VERIFY)),
- }
- },
- }
- },
- { /* ECPM */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM,
- .op_types = 0
- }
- },
- }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_sha1_sha2[] = {
- { /* SHA1 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 20,
- .max = 20,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 20,
- .increment = 8
- },
- }, }
- }, }
- },
- { /* SHA224 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA224,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 28,
- .max = 28,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA224 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 28,
- .max = 28,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA256 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA256 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 16,
- .max = 32,
- .increment = 16
- },
- }, }
- }, }
- },
- { /* SHA384 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA384,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 48,
- .max = 48,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA384 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 24,
- .max = 48,
- .increment = 24
- },
- }, }
- }, }
- },
- { /* SHA512 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA512,
- .block_size = 128,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 64,
- .max = 64,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA512 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
- .block_size = 128,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 32,
- .max = 64,
- .increment = 32
- },
- }, }
- }, }
- },
- { /* MD5 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_MD5,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* MD5 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 8,
- .max = 64,
- .increment = 8
- },
- .digest_size = {
- .min = 12,
- .max = 16,
- .increment = 4
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_chacha20[] = {
- { /* Chacha20-Poly1305 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
- .block_size = 64,
- .key_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 0,
- .max = 1024,
- .increment = 1
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- },
- }, }
- }, }
- }
-};
-
-static const struct rte_cryptodev_capabilities caps_zuc_snow3g[] = {
- { /* SNOW 3G (UEA2) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* ZUC (EEA3) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* SNOW 3G (UIA2) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* ZUC (EIA3) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_ZUC_EIA3,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_aes[] = {
- { /* AES GMAC (AUTH) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_AES_GMAC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 8,
- .max = 16,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CTR */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CTR,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 12,
- .max = 16,
- .increment = 4
- }
- }, }
- }, }
- },
- { /* AES XTS */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_XTS,
- .block_size = 16,
- .key_size = {
- .min = 32,
- .max = 64,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 4,
- .max = 16,
- .increment = 1
- },
- .aad_size = {
- .min = 0,
- .max = 1024,
- .increment = 1
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_kasumi[] = {
- { /* KASUMI (F8) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_KASUMI_F8,
- .block_size = 8,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* KASUMI (F9) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_KASUMI_F9,
- .block_size = 8,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_des[] = {
- { /* 3DES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
- .block_size = 8,
- .key_size = {
- .min = 24,
- .max = 24,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 16,
- .increment = 8
- }
- }, }
- }, }
- },
- { /* 3DES ECB */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
- .block_size = 8,
- .key_size = {
- .min = 24,
- .max = 24,
- .increment = 0
- },
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* DES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_DES_CBC,
- .block_size = 8,
- .key_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_null[] = {
- { /* NULL (AUTH) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_NULL,
- .block_size = 1,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- }, },
- }, },
- },
- { /* NULL (CIPHER) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_NULL,
- .block_size = 1,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
- }, },
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_end[] = {
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
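
caps_end terminates the capability array with RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST(), which sets .op to RTE_CRYPTO_OP_TYPE_UNDEFINED; consumers walk the array until they hit that sentinel. A minimal sketch with a hypothetical helper name::

    #include <rte_cryptodev.h>

    /* Count the entries in a capability array of the kind built above,
     * stopping at the end-of-list sentinel.
     */
    static int
    caps_count(const struct rte_cryptodev_capabilities *caps)
    {
        int n = 0;

        while (caps[n].op != RTE_CRYPTO_OP_TYPE_UNDEFINED)
            n++;
        return n;
    }
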
-
-static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 8,
- .max = 12,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 20,
- .increment = 8
- },
- }, }
- }, }
- },
- { /* SHA256 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 16,
- .max = 32,
- .increment = 16
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_security_capability
-otx2_crypto_sec_capabilities[] = {
- { /* IPsec Lookaside Protocol ESP Tunnel Ingress */
- .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_cpt_sec_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- { /* IPsec Lookaside Protocol ESP Tunnel Egress */
- .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_cpt_sec_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- {
- .action = RTE_SECURITY_ACTION_TYPE_NONE
- }
-};
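
The array above is what the rte_security API matches against when an application asks for a specific action/protocol combination. A sketch of such a lookup for the ESP tunnel ingress entry, with a hypothetical helper name::

    #include <rte_security.h>

    /* Look up the lookaside-protocol ESP tunnel ingress capability via
     * the generic rte_security capability index.
     */
    static const struct rte_security_capability *
    esp_tunnel_ingress_cap(struct rte_security_ctx *ctx)
    {
        struct rte_security_capability_idx idx = {
            .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
            .ipsec = {
                .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
                .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
                .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
            },
        };

        return rte_security_capability_get(ctx, &idx);
    }
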
-
-static void
-cpt_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
- static int cur_pos;
-
- if (cur_pos + nb_caps > OTX2_CPT_MAX_CAPS)
- return;
-
- memcpy(&otx2_cpt_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
- cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps)
-{
- CPT_CAPS_ADD(hw_caps, mul);
- CPT_CAPS_ADD(hw_caps, sha1_sha2);
- CPT_CAPS_ADD(hw_caps, chacha20);
- CPT_CAPS_ADD(hw_caps, zuc_snow3g);
- CPT_CAPS_ADD(hw_caps, aes);
- CPT_CAPS_ADD(hw_caps, kasumi);
- CPT_CAPS_ADD(hw_caps, des);
-
- cpt_caps_add(caps_null, RTE_DIM(caps_null));
- cpt_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void)
-{
- return otx2_cpt_caps;
-}
-
-static void
-sec_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
- static int cur_pos;
-
- if (cur_pos + nb_caps > OTX2_SEC_MAX_CAPS)
- return;
-
- memcpy(&otx2_cpt_sec_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
- cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps)
-{
- SEC_CAPS_ADD(hw_caps, aes);
- SEC_CAPS_ADD(hw_caps, sha1_sha2);
-
- sec_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused)
-{
- return otx2_crypto_sec_capabilities;
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
deleted file mode 100644
index c1e0001190..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
+++ /dev/null
@@ -1,45 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_CAPABILITIES_H_
-#define _OTX2_CRYPTODEV_CAPABILITIES_H_
-
-#include <rte_cryptodev.h>
-
-#include "otx2_mbox.h"
-
-enum otx2_cpt_egrp {
- OTX2_CPT_EGRP_SE = 0,
- OTX2_CPT_EGRP_SE_IE = 1,
- OTX2_CPT_EGRP_AE = 2,
- OTX2_CPT_EGRP_MAX,
-};
-
-/*
- * Initialize crypto capabilities for the device
- *
- */
-void otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get capabilities list for the device
- *
- */
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void);
-
-/*
- * Initialize security capabilities for the device
- *
- */
-void otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get security capabilities list for the device
- *
- */
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused);
-
-#endif /* _OTX2_CRYPTODEV_CAPABILITIES_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
deleted file mode 100644
index d5d6b5bad7..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ /dev/null
@@ -1,225 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include <rte_cryptodev.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_dev.h"
-
-#include "cpt_pmd_logs.h"
-
-static void
-otx2_cpt_lf_err_intr_handler(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t lf_id;
- uint64_t intr;
-
- lf_id = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + OTX2_CPT_LF_MISC_INT);
- if (intr == 0)
- return;
-
- CPT_LOG_ERR("LF %d MISC_INT: 0x%" PRIx64 "", lf_id, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + OTX2_CPT_LF_MISC_INT);
-}
-
-static void
-otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
- uint16_t msix_off, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
-
- otx2_unregister_irq(handle, otx2_cpt_lf_err_intr_handler, (void *)base,
- msix_off);
-}
-
-void
-otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uintptr_t base;
- uint32_t i;
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i);
- otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[i], base);
- }
-
- vf->err_intr_registered = 0;
-}
-
-static int
-otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
- uint16_t msix_off, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int ret;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
-
- /* Register error interrupt handler */
- ret = otx2_register_irq(handle, otx2_cpt_lf_err_intr_handler,
- (void *)base, msix_off);
- if (ret)
- return ret;
-
- /* Enable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1S);
-
- return 0;
-}
-
-int
-otx2_cpt_err_intr_register(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uint32_t i, j, ret;
- uintptr_t base;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) {
- CPT_LOG_ERR("Invalid CPT LF MSI-X offset: 0x%x",
- vf->lf_msixoff[i]);
- return -EINVAL;
- }
- }
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i);
- ret = otx2_cpt_lf_err_intr_register(dev, vf->lf_msixoff[i],
- base);
- if (ret)
- goto intr_unregister;
- }
-
- vf->err_intr_registered = 1;
- return 0;
-
-intr_unregister:
- /* Unregister the ones already registered */
- for (j = 0; j < i; j++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[j], j);
- otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base);
- }
-
- /*
-	 * Failed to register the error interrupt. Not returning an error, as
-	 * this would prevent the application from enabling a larger number of
-	 * devices.
-	 *
-	 * This failure is a known issue: otx2_dev_init() initializes
-	 * interrupts based on static values from ATF, while the actual number
-	 * of interrupts needed (which depends on the LFs) can be determined
-	 * only after otx2_dev_init() has set up the interrupts, which include
-	 * the mbox interrupts.
-	 */
- return 0;
-}
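
otx2_cpt_err_intr_register() above uses a classic unwind pattern: register the per-queue IRQ handlers in order and, on the first failure, unregister exactly the handlers that already succeeded. The generic shape, with hypothetical reg()/unreg() callbacks::

    /* Register n resources in order; on failure at index i, undo the
     * 0..i-1 registrations that already succeeded.
     */
    static int
    register_all(int n, int (*reg)(int), void (*unreg)(int))
    {
        int i, j;

        for (i = 0; i < n; i++) {
            if (reg(i) != 0) {
                for (j = 0; j < i; j++)
                    unreg(j);
                return -1;
            }
        }
        return 0;
    }
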
-
-int
-otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
- const struct otx2_cpt_qp *qp, uint8_t grp_mask, uint8_t pri,
- uint32_t size_div40)
-{
- union otx2_cpt_af_lf_ctl af_lf_ctl;
- union otx2_cpt_lf_inprog inprog;
- union otx2_cpt_lf_q_base base;
- union otx2_cpt_lf_q_size size;
- union otx2_cpt_lf_ctl lf_ctl;
- int ret;
-
- /* Set engine group mask and priority */
-
- ret = otx2_cpt_af_reg_read(dev, OTX2_CPT_AF_LF_CTL(qp->id),
- qp->blkaddr, &af_lf_ctl.u);
- if (ret)
- return ret;
- af_lf_ctl.s.grp = grp_mask;
- af_lf_ctl.s.pri = pri ? 1 : 0;
- ret = otx2_cpt_af_reg_write(dev, OTX2_CPT_AF_LF_CTL(qp->id),
- qp->blkaddr, af_lf_ctl.u);
- if (ret)
- return ret;
-
- /* Set instruction queue base address */
-
- base.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_BASE);
- base.s.fault = 0;
- base.s.stopped = 0;
- base.s.addr = qp->iq_dma_addr >> 7;
- otx2_write64(base.u, qp->base + OTX2_CPT_LF_Q_BASE);
-
- /* Set instruction queue size */
-
- size.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_SIZE);
- size.s.size_div40 = size_div40;
- otx2_write64(size.u, qp->base + OTX2_CPT_LF_Q_SIZE);
-
- /* Enable instruction queue */
-
- lf_ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL);
- lf_ctl.s.ena = 1;
- otx2_write64(lf_ctl.u, qp->base + OTX2_CPT_LF_CTL);
-
- /* Start instruction execution */
-
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- inprog.s.eena = 1;
- otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG);
-
- return 0;
-}
-
-void
-otx2_cpt_iq_disable(struct otx2_cpt_qp *qp)
-{
- union otx2_cpt_lf_q_grp_ptr grp_ptr;
- union otx2_cpt_lf_inprog inprog;
- union otx2_cpt_lf_ctl ctl;
- int cnt;
-
- /* Stop instruction execution */
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- inprog.s.eena = 0x0;
- otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG);
-
- /* Disable instructions enqueuing */
- ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL);
- ctl.s.ena = 0;
- otx2_write64(ctl.u, qp->base + OTX2_CPT_LF_CTL);
-
- /* Wait for instruction queue to become empty */
- cnt = 0;
- do {
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- if (inprog.s.grb_partial)
- cnt = 0;
- else
- cnt++;
- grp_ptr.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_GRP_PTR);
- } while ((cnt < 10) && (grp_ptr.s.nq_ptr != grp_ptr.s.dq_ptr));
-
- cnt = 0;
- do {
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- if ((inprog.s.inflight == 0) &&
- (inprog.s.gwb_cnt < 40) &&
- ((inprog.s.grb_cnt == 0) || (inprog.s.grb_cnt == 40)))
- cnt++;
- else
- cnt = 0;
- } while (cnt < 10);
-}
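
The drain loops in otx2_cpt_iq_disable() do not trust a single "empty" reading: they require the idle condition to hold for ten consecutive polls before declaring the queue quiesced. The idiom in isolation, with a hypothetical read_is_idle() accessor::

    /* Poll until the idle condition has held for n_stable consecutive
     * reads, so a transient in-flight burst cannot end the wait early.
     */
    static void
    wait_stable_idle(int (*read_is_idle)(void), int n_stable)
    {
        int cnt = 0;

        while (cnt < n_stable) {
            if (read_is_idle())
                cnt++;
            else
                cnt = 0; /* restart the stability window */
        }
    }
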
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
deleted file mode 100644
index 90a338e05a..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_HW_ACCESS_H_
-#define _OTX2_CRYPTODEV_HW_ACCESS_H_
-
-#include <stdint.h>
-
-#include <rte_cryptodev.h>
-#include <rte_memory.h>
-
-#include "cpt_common.h"
-#include "cpt_hw_types.h"
-#include "cpt_mcode_defines.h"
-
-#include "otx2_dev.h"
-#include "otx2_cryptodev_qp.h"
-
-/* CPT instruction queue length.
- * Use a power-of-2 queue size to aid the pending queue calculations.
- */
-#define OTX2_CPT_DEFAULT_CMD_QLEN 8192
-
-/* Mask which selects all engine groups */
-#define OTX2_CPT_ENG_GRPS_MASK 0xFF
-
-/* Register offsets */
-
-/* LMT LF registers */
-#define OTX2_LMT_LF_LMTLINE(a) (0x0ull | (uint64_t)(a) << 3)
-
-/* CPT LF registers */
-#define OTX2_CPT_LF_CTL 0x10ull
-#define OTX2_CPT_LF_INPROG 0x40ull
-#define OTX2_CPT_LF_MISC_INT 0xb0ull
-#define OTX2_CPT_LF_MISC_INT_ENA_W1S 0xd0ull
-#define OTX2_CPT_LF_MISC_INT_ENA_W1C 0xe0ull
-#define OTX2_CPT_LF_Q_BASE 0xf0ull
-#define OTX2_CPT_LF_Q_SIZE 0x100ull
-#define OTX2_CPT_LF_Q_GRP_PTR 0x120ull
-#define OTX2_CPT_LF_NQ(a) (0x400ull | (uint64_t)(a) << 3)
-
-#define OTX2_CPT_AF_LF_CTL(a) (0x27000ull | (uint64_t)(a) << 3)
-#define OTX2_CPT_AF_LF_CTL2(a) (0x29000ull | (uint64_t)(a) << 3)
-
-#define OTX2_CPT_LF_BAR2(vf, blk_addr, q_id) \
- ((vf)->otx2_dev.bar2 + \
- ((blk_addr << 20) | ((q_id) << 12)))
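
OTX2_CPT_LF_BAR2() places the block address at bit 20 and the queue (LF) id at bit 12 of the BAR2 offset; the MISC interrupt handler earlier in this patch recovers the LF id with the inverse shift, (base >> 12) & 0xFF. A worked example (the BAR2 value is hypothetical)::

    #include <inttypes.h>
    #include <stdio.h>

    int
    main(void)
    {
        uint64_t bar2 = 0x840200000000ull; /* hypothetical BAR2 address */
        uint64_t blk_addr = 0xa, q_id = 3;
        uint64_t base = bar2 + ((blk_addr << 20) | (q_id << 12));

        /* Prints 3: the queue id comes back out of bits [12..19]. */
        printf("lf_id = %" PRIu64 "\n", (base >> 12) & 0xFF);
        return 0;
    }
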
-
-#define OTX2_CPT_QUEUE_HI_PRIO 0x1
-
-union otx2_cpt_lf_ctl {
- uint64_t u;
- struct {
- uint64_t ena : 1;
- uint64_t fc_ena : 1;
- uint64_t fc_up_crossing : 1;
- uint64_t reserved_3_3 : 1;
- uint64_t fc_hyst_bits : 4;
- uint64_t reserved_8_63 : 56;
- } s;
-};
-
-union otx2_cpt_lf_inprog {
- uint64_t u;
- struct {
- uint64_t inflight : 9;
- uint64_t reserved_9_15 : 7;
- uint64_t eena : 1;
- uint64_t grp_drp : 1;
- uint64_t reserved_18_30 : 13;
- uint64_t grb_partial : 1;
- uint64_t grb_cnt : 8;
- uint64_t gwb_cnt : 8;
- uint64_t reserved_48_63 : 16;
- } s;
-};
-
-union otx2_cpt_lf_q_base {
- uint64_t u;
- struct {
- uint64_t fault : 1;
- uint64_t stopped : 1;
- uint64_t reserved_2_6 : 5;
- uint64_t addr : 46;
- uint64_t reserved_53_63 : 11;
- } s;
-};
-
-union otx2_cpt_lf_q_size {
- uint64_t u;
- struct {
- uint64_t size_div40 : 15;
- uint64_t reserved_15_63 : 49;
- } s;
-};
-
-union otx2_cpt_af_lf_ctl {
- uint64_t u;
- struct {
- uint64_t pri : 1;
- uint64_t reserved_1_8 : 8;
- uint64_t pf_func_inst : 1;
- uint64_t cont_err : 1;
- uint64_t reserved_11_15 : 5;
- uint64_t nixtx_en : 1;
- uint64_t reserved_17_47 : 31;
- uint64_t grp : 8;
- uint64_t reserved_56_63 : 8;
- } s;
-};
-
-union otx2_cpt_af_lf_ctl2 {
- uint64_t u;
- struct {
- uint64_t exe_no_swap : 1;
- uint64_t exe_ldwb : 1;
- uint64_t reserved_2_31 : 30;
- uint64_t sso_pf_func : 16;
- uint64_t nix_pf_func : 16;
- } s;
-};
-
-union otx2_cpt_lf_q_grp_ptr {
- uint64_t u;
- struct {
- uint64_t dq_ptr : 15;
- uint64_t reserved_31_15 : 17;
- uint64_t nq_ptr : 15;
- uint64_t reserved_47_62 : 16;
- uint64_t xq_xor : 1;
- } s;
-};
-
-/*
- * Enumeration cpt_9x_comp_e
- *
- * CPT 9X Completion Enumeration
- * Enumerates the values of CPT_RES_S[COMPCODE].
- */
-enum cpt_9x_comp_e {
- CPT_9X_COMP_E_NOTDONE = 0x00,
- CPT_9X_COMP_E_GOOD = 0x01,
- CPT_9X_COMP_E_FAULT = 0x02,
- CPT_9X_COMP_E_HWERR = 0x04,
- CPT_9X_COMP_E_INSTERR = 0x05,
- CPT_9X_COMP_E_LAST_ENTRY = 0x06
-};
-
-void otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev);
-
-int otx2_cpt_err_intr_register(const struct rte_cryptodev *dev);
-
-int otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
- const struct otx2_cpt_qp *qp, uint8_t grp_mask,
- uint8_t pri, uint32_t size_div40);
-
-void otx2_cpt_iq_disable(struct otx2_cpt_qp *qp);
-
-#endif /* _OTX2_CRYPTODEV_HW_ACCESS_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
deleted file mode 100644
index f9e7b0b474..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
+++ /dev/null
@@ -1,285 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include <cryptodev_pmd.h>
-#include <rte_ethdev.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_sec_idev.h"
-#include "otx2_mbox.h"
-
-#include "cpt_pmd_logs.h"
-
-int
-otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
- union cpt_eng_caps *hw_caps)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_dev *otx2_dev = &vf->otx2_dev;
- struct cpt_caps_rsp_msg *rsp;
- int ret;
-
- otx2_mbox_alloc_msg_cpt_caps_get(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- if (rsp->cpt_pf_drv_version != OTX2_CPT_PMD_VERSION) {
-		otx2_err("Incompatible CPT PMD version "
-			 "(Kernel: 0x%04x DPDK: 0x%04x)",
- rsp->cpt_pf_drv_version, OTX2_CPT_PMD_VERSION);
- return -EPIPE;
- }
-
- vf->cpt_revision = rsp->cpt_revision;
- otx2_mbox_memcpy(hw_caps, rsp->eng_caps,
- sizeof(union cpt_eng_caps) * CPT_MAX_ENG_TYPES);
-
- return 0;
-}
-
-int
-otx2_cpt_available_queues_get(const struct rte_cryptodev *dev,
- uint16_t *nb_queues)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_dev *otx2_dev = &vf->otx2_dev;
- struct free_rsrcs_rsp *rsp;
- int ret;
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- *nb_queues = rsp->cpt + rsp->cpt1;
- return 0;
-}
-
-int
-otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int blkaddr[OTX2_CPT_MAX_BLKS];
- struct rsrc_attach_req *req;
- int blknum = 0;
- int i, ret;
-
- blkaddr[0] = RVU_BLOCK_ADDR_CPT0;
- blkaddr[1] = RVU_BLOCK_ADDR_CPT1;
-
- /* Ask AF to attach required LFs */
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
-
- if ((vf->cpt_revision == OTX2_CPT_REVISION_ID_3) &&
- (vf->otx2_dev.pf_func & 0x1))
- blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS;
-
- /* 1 LF = 1 queue */
- req->cptlfs = nb_queues;
- req->cpt_blkaddr = blkaddr[blknum];
-
- ret = otx2_mbox_process(mbox);
- if (ret == -ENOSPC) {
- if (vf->cpt_revision == OTX2_CPT_REVISION_ID_3) {
- blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS;
- req->cpt_blkaddr = blkaddr[blknum];
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- } else {
- return -EIO;
- }
- } else if (ret < 0) {
- return -EIO;
- }
-
- /* Update number of attached queues */
- vf->nb_queues = nb_queues;
- for (i = 0; i < nb_queues; i++)
- vf->lf_blkaddr[i] = req->cpt_blkaddr;
-
- return 0;
-}
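
otx2_cpt_queues_attach() tries the preferred CPT block first and retries on the other block only when the AF answers -ENOSPC (no free LFs left there). The control flow in isolation, with a hypothetical try_attach() standing in for the mailbox round trip::

    #include <errno.h>

    /* Attach on the preferred block; fall back to the alternate block
     * only when the first one is out of free LFs.
     */
    static int
    attach_lfs(int (*try_attach)(int blkaddr), int blk_pref, int blk_alt)
    {
        int ret = try_attach(blk_pref);

        if (ret == -ENOSPC)
            ret = try_attach(blk_alt);
        return ret;
    }
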
-
-int
-otx2_cpt_queues_detach(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->cptlfs = true;
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
-
- /* Queues have been detached */
- vf->nb_queues = 0;
-
- return 0;
-}
-
-int
-otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct msix_offset_rsp *rsp;
- uint32_t i, ret;
-
- /* Get CPT MSI-X vector offsets */
-
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
-
- for (i = 0; i < vf->nb_queues; i++)
- vf->lf_msixoff[i] = (vf->lf_blkaddr[i] == RVU_BLOCK_ADDR_CPT1) ?
- rsp->cpt1_lf_msixoff[i] : rsp->cptlf_msixoff[i];
-
- return 0;
-}
-
-static int
-otx2_cpt_send_mbox_msg(struct otx2_cpt_vf *vf)
-{
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int ret;
-
- otx2_mbox_msg_send(mbox, 0);
-
- ret = otx2_mbox_wait_for_rsp(mbox, 0);
- if (ret < 0) {
- CPT_LOG_ERR("Could not get mailbox response");
- return ret;
- }
-
- return 0;
-}
-
-int
-otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t *val)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct cpt_rd_wr_reg_msg *msg;
- int ret, off;
-
- msg = (struct cpt_rd_wr_reg_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg),
- sizeof(*msg));
- if (msg == NULL) {
- CPT_LOG_ERR("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 0;
- msg->reg_offset = reg;
- msg->ret_val = val;
- msg->blkaddr = blkaddr;
-
- ret = otx2_cpt_send_mbox_msg(vf);
- if (ret < 0)
- return ret;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msg = (struct cpt_rd_wr_reg_msg *) ((uintptr_t)mdev->mbase + off);
-
- *val = msg->val;
-
- return 0;
-}
-
-int
-otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t val)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_rd_wr_reg_msg *msg;
-
- msg = (struct cpt_rd_wr_reg_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg),
- sizeof(*msg));
- if (msg == NULL) {
- CPT_LOG_ERR("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 1;
- msg->reg_offset = reg;
- msg->val = val;
- msg->blkaddr = blkaddr;
-
- return otx2_cpt_send_mbox_msg(vf);
-}
-
-int
-otx2_cpt_inline_init(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_rx_inline_lf_cfg_msg *msg;
- int ret;
-
- msg = otx2_mbox_alloc_msg_cpt_rx_inline_lf_cfg(mbox);
- msg->sso_pf_func = otx2_sso_pf_func_get();
-
- otx2_mbox_msg_send(mbox, 0);
- ret = otx2_mbox_process(mbox);
- if (ret < 0)
- return -EIO;
-
- return 0;
-}
-
-int
-otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp,
- uint16_t port_id)
-{
- struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_inline_ipsec_cfg_msg *msg;
- struct otx2_eth_dev *otx2_eth_dev;
- int ret;
-
- if (!otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id]))
- return -EINVAL;
-
- otx2_eth_dev = otx2_eth_pmd_priv(eth_dev);
-
- msg = otx2_mbox_alloc_msg_cpt_inline_ipsec_cfg(mbox);
- msg->dir = CPT_INLINE_OUTBOUND;
- msg->enable = 1;
- msg->slot = qp->id;
-
- msg->nix_pf_func = otx2_eth_dev->pf_func;
-
- otx2_mbox_msg_send(mbox, 0);
- ret = otx2_mbox_process(mbox);
- if (ret < 0)
- return -EIO;
-
- return 0;
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
deleted file mode 100644
index 03323e418c..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_MBOX_H_
-#define _OTX2_CRYPTODEV_MBOX_H_
-
-#include <rte_cryptodev.h>
-
-#include "otx2_cryptodev_hw_access.h"
-
-int otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
- union cpt_eng_caps *hw_caps);
-
-int otx2_cpt_available_queues_get(const struct rte_cryptodev *dev,
- uint16_t *nb_queues);
-
-int otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues);
-
-int otx2_cpt_queues_detach(const struct rte_cryptodev *dev);
-
-int otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev);
-
-__rte_internal
-int otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t *val);
-
-__rte_internal
-int otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t val);
-
-int otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev,
- struct otx2_cpt_qp *qp, uint16_t port_id);
-
-int otx2_cpt_inline_init(const struct rte_cryptodev *dev);
-
-#endif /* _OTX2_CRYPTODEV_MBOX_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
deleted file mode 100644
index 339b82f33e..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ /dev/null
@@ -1,1438 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <unistd.h>
-
-#include <cryptodev_pmd.h>
-#include <rte_errno.h>
-#include <ethdev_driver.h>
-#include <rte_event_crypto_adapter.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_ops_helper.h"
-#include "otx2_ipsec_anti_replay.h"
-#include "otx2_ipsec_po_ops.h"
-#include "otx2_mbox.h"
-#include "otx2_sec_idev.h"
-#include "otx2_security.h"
-
-#include "cpt_hw_types.h"
-#include "cpt_pmd_logs.h"
-#include "cpt_pmd_ops_helper.h"
-#include "cpt_ucode.h"
-#include "cpt_ucode_asym.h"
-
-#define METABUF_POOL_CACHE_SIZE 512
-
-static uint64_t otx2_fpm_iova[CPT_EC_ID_PMAX];
-
-/* Forward declarations */
-
-static int
-otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id);
-
-static void
-qp_memzone_name_get(char *name, int size, int dev_id, int qp_id)
-{
- snprintf(name, size, "otx2_cpt_lf_mem_%u:%u", dev_id, qp_id);
-}
-
-static int
-otx2_cpt_metabuf_mempool_create(const struct rte_cryptodev *dev,
- struct otx2_cpt_qp *qp, uint8_t qp_id,
- unsigned int nb_elements)
-{
- char mempool_name[RTE_MEMPOOL_NAMESIZE];
- struct cpt_qp_meta_info *meta_info;
- int lcore_cnt = rte_lcore_count();
- int ret, max_mlen, mb_pool_sz;
- struct rte_mempool *pool;
- int asym_mlen = 0;
- int lb_mlen = 0;
- int sg_mlen = 0;
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO) {
-
- /* Get meta len for scatter gather mode */
- sg_mlen = cpt_pmd_ops_helper_get_mlen_sg_mode();
-
- /* Extra 32B saved for future considerations */
- sg_mlen += 4 * sizeof(uint64_t);
-
- /* Get meta len for linear buffer (direct) mode */
- lb_mlen = cpt_pmd_ops_helper_get_mlen_direct_mode();
-
- /* Extra 32B saved for future considerations */
- lb_mlen += 4 * sizeof(uint64_t);
- }
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) {
-
- /* Get meta len required for asymmetric operations */
- asym_mlen = cpt_pmd_ops_helper_asym_get_mlen();
- }
-
- /*
-	 * Determine the maximum meta buffer length needed to support a
-	 * crypto op of any type (sym/asym).
- */
- max_mlen = RTE_MAX(RTE_MAX(lb_mlen, sg_mlen), asym_mlen);
-
- /* Allocate mempool */
-
- snprintf(mempool_name, RTE_MEMPOOL_NAMESIZE, "otx2_cpt_mb_%u:%u",
- dev->data->dev_id, qp_id);
-
- mb_pool_sz = nb_elements;
-
-	/* For poll mode, the core that enqueues and the core that dequeues
-	 * can be different. For event mode, all cores are allowed to use the
-	 * same crypto queue pair.
-	 */
- mb_pool_sz += (RTE_MAX(2, lcore_cnt) * METABUF_POOL_CACHE_SIZE);
-
- pool = rte_mempool_create_empty(mempool_name, mb_pool_sz, max_mlen,
- METABUF_POOL_CACHE_SIZE, 0,
- rte_socket_id(), 0);
-
- if (pool == NULL) {
- CPT_LOG_ERR("Could not create mempool for metabuf");
- return rte_errno;
- }
-
- ret = rte_mempool_set_ops_byname(pool, RTE_MBUF_DEFAULT_MEMPOOL_OPS,
- NULL);
- if (ret) {
- CPT_LOG_ERR("Could not set mempool ops");
- goto mempool_free;
- }
-
- ret = rte_mempool_populate_default(pool);
- if (ret <= 0) {
- CPT_LOG_ERR("Could not populate metabuf pool");
- goto mempool_free;
- }
-
- meta_info = &qp->meta_info;
-
- meta_info->pool = pool;
- meta_info->lb_mlen = lb_mlen;
- meta_info->sg_mlen = sg_mlen;
-
- return 0;
-
-mempool_free:
- rte_mempool_free(pool);
- return ret;
-}
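
The pool sizing above accounts for per-lcore caches: each core's cache can pin up to METABUF_POOL_CACHE_SIZE objects, so the pool is grown by cache_size * max(2, lcore_count) to keep nb_elements buffers allocatable even when every cache is full. Worked numbers::

    #include <stdio.h>

    #define METABUF_POOL_CACHE_SIZE 512

    int
    main(void)
    {
        unsigned int nb_elements = 8192; /* one metabuf per IQ entry */
        unsigned int lcore_cnt = 4;
        unsigned int mb_pool_sz = nb_elements +
            (lcore_cnt > 2 ? lcore_cnt : 2) * METABUF_POOL_CACHE_SIZE;

        /* Prints 10240: 8192 + 4 * 512 objects. */
        printf("mb_pool_sz = %u\n", mb_pool_sz);
        return 0;
    }
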
-
-static void
-otx2_cpt_metabuf_mempool_destroy(struct otx2_cpt_qp *qp)
-{
- struct cpt_qp_meta_info *meta_info = &qp->meta_info;
-
- rte_mempool_free(meta_info->pool);
-
- meta_info->pool = NULL;
- meta_info->lb_mlen = 0;
- meta_info->sg_mlen = 0;
-}
-
-static int
-otx2_cpt_qp_inline_cfg(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp)
-{
- static rte_atomic16_t port_offset = RTE_ATOMIC16_INIT(-1);
- uint16_t port_id, nb_ethport = rte_eth_dev_count_avail();
- int i, ret;
-
- for (i = 0; i < nb_ethport; i++) {
- port_id = rte_atomic16_add_return(&port_offset, 1) % nb_ethport;
- if (otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id]))
- break;
- }
-
- if (i >= nb_ethport)
- return 0;
-
- ret = otx2_cpt_qp_ethdev_bind(dev, qp, port_id);
- if (ret)
- return ret;
-
- /* Publish inline Tx QP to eth dev security */
- ret = otx2_sec_idev_tx_cpt_qp_add(port_id, qp);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static struct otx2_cpt_qp *
-otx2_cpt_qp_create(const struct rte_cryptodev *dev, uint16_t qp_id,
- uint8_t group)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uint64_t pg_sz = sysconf(_SC_PAGESIZE);
- const struct rte_memzone *lf_mem;
- uint32_t len, iq_len, size_div40;
- char name[RTE_MEMZONE_NAMESIZE];
- uint64_t used_len, iova;
- struct otx2_cpt_qp *qp;
- uint64_t lmtline;
- uint8_t *va;
- int ret;
-
- /* Allocate queue pair */
- qp = rte_zmalloc_socket("OCTEON TX2 Crypto PMD Queue Pair", sizeof(*qp),
- OTX2_ALIGN, 0);
- if (qp == NULL) {
- CPT_LOG_ERR("Could not allocate queue pair");
- return NULL;
- }
-
- /*
-	 * Pending queue updates assume that the queue size is a power of 2.
- */
- RTE_BUILD_BUG_ON(!RTE_IS_POWER_OF_2(OTX2_CPT_DEFAULT_CMD_QLEN));
-
- iq_len = OTX2_CPT_DEFAULT_CMD_QLEN;
-
- /*
-	 * The queue size must be a multiple of 40, and the effective queue
-	 * size available to software is (size_div40 - 1) * 40.
- */
- size_div40 = (iq_len + 40 - 1) / 40 + 1;
-
- /* For pending queue */
- len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8);
-
- /* Space for instruction group memory */
- len += size_div40 * 16;
-
- /* So that instruction queues start as pg size aligned */
- len = RTE_ALIGN(len, pg_sz);
-
- /* For instruction queues */
- len += OTX2_CPT_DEFAULT_CMD_QLEN * sizeof(union cpt_inst_s);
-
- /* Wastage after instruction queues */
- len = RTE_ALIGN(len, pg_sz);
-
- qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
- qp_id);
-
- lf_mem = rte_memzone_reserve_aligned(name, len, vf->otx2_dev.node,
- RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB,
- RTE_CACHE_LINE_SIZE);
- if (lf_mem == NULL) {
- CPT_LOG_ERR("Could not allocate reserved memzone");
- goto qp_free;
- }
-
- va = lf_mem->addr;
- iova = lf_mem->iova;
-
- memset(va, 0, len);
-
- ret = otx2_cpt_metabuf_mempool_create(dev, qp, qp_id, iq_len);
- if (ret) {
- CPT_LOG_ERR("Could not create mempool for metabuf");
- goto lf_mem_free;
- }
-
- /* Initialize pending queue */
- qp->pend_q.rid_queue = (void **)va;
- qp->pend_q.tail = 0;
- qp->pend_q.head = 0;
-
- used_len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8);
- used_len += size_div40 * 16;
- used_len = RTE_ALIGN(used_len, pg_sz);
- iova += used_len;
-
- qp->iq_dma_addr = iova;
- qp->id = qp_id;
- qp->blkaddr = vf->lf_blkaddr[qp_id];
- qp->base = OTX2_CPT_LF_BAR2(vf, qp->blkaddr, qp_id);
-
- lmtline = vf->otx2_dev.bar2 +
- (RVU_BLOCK_ADDR_LMT << 20 | qp_id << 12) +
- OTX2_LMT_LF_LMTLINE(0);
-
- qp->lmtline = (void *)lmtline;
-
- qp->lf_nq_reg = qp->base + OTX2_CPT_LF_NQ(0);
-
- ret = otx2_sec_idev_tx_cpt_qp_remove(qp);
- if (ret && (ret != -ENOENT)) {
- CPT_LOG_ERR("Could not delete inline configuration");
- goto mempool_destroy;
- }
-
- otx2_cpt_iq_disable(qp);
-
- ret = otx2_cpt_qp_inline_cfg(dev, qp);
- if (ret) {
- CPT_LOG_ERR("Could not configure queue for inline IPsec");
- goto mempool_destroy;
- }
-
- ret = otx2_cpt_iq_enable(dev, qp, group, OTX2_CPT_QUEUE_HI_PRIO,
- size_div40);
- if (ret) {
- CPT_LOG_ERR("Could not enable instruction queue");
- goto mempool_destroy;
- }
-
- return qp;
-
-mempool_destroy:
- otx2_cpt_metabuf_mempool_destroy(qp);
-lf_mem_free:
- rte_memzone_free(lf_mem);
-qp_free:
- rte_free(qp);
- return NULL;
-}
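
The sizing arithmetic in otx2_cpt_qp_create() rounds the requested queue length up to 40-entry chunks and then adds one chunk, since the effective queue size exposed to software is (size_div40 - 1) * 40. Worked numbers for the default length::

    #include <stdio.h>

    int
    main(void)
    {
        unsigned int iq_len = 8192; /* OTX2_CPT_DEFAULT_CMD_QLEN */
        unsigned int size_div40 = (iq_len + 40 - 1) / 40 + 1;
        unsigned int effective = (size_div40 - 1) * 40;

        /* Prints 206 and 8200: 8200 >= 8192 entries are usable. */
        printf("size_div40=%u effective=%u\n", size_div40, effective);
        return 0;
    }
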
-
-static int
-otx2_cpt_qp_destroy(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp)
-{
- const struct rte_memzone *lf_mem;
- char name[RTE_MEMZONE_NAMESIZE];
- int ret;
-
- ret = otx2_sec_idev_tx_cpt_qp_remove(qp);
- if (ret && (ret != -ENOENT)) {
- CPT_LOG_ERR("Could not delete inline configuration");
- return ret;
- }
-
- otx2_cpt_iq_disable(qp);
-
- otx2_cpt_metabuf_mempool_destroy(qp);
-
- qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
- qp->id);
-
- lf_mem = rte_memzone_lookup(name);
-
- ret = rte_memzone_free(lf_mem);
- if (ret)
- return ret;
-
- rte_free(qp);
-
- return 0;
-}
-
-static int
-sym_xform_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->next) {
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->next->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
- (xform->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC ||
- xform->next->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC))
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- (xform->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC ||
- xform->next->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC))
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->next->auth.algo == RTE_CRYPTO_AUTH_SHA1)
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->auth.algo == RTE_CRYPTO_AUTH_SHA1 &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->next->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC)
- return -ENOTSUP;
-
- } else {
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->auth.algo == RTE_CRYPTO_AUTH_NULL &&
- xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY)
- return -ENOTSUP;
- }
- return 0;
-}
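
sym_xform_verify() above accepts, for chained operations, only AES-CBC paired with SHA1-HMAC in the order that matches the cipher direction. A sketch of the decrypt-side chain it lets through, with a hypothetical helper name::

    #include <stddef.h>

    #include <rte_crypto_sym.h>

    /* Build the cipher(AES-CBC, decrypt) -> auth(SHA1-HMAC, verify)
     * chain as two linked xforms.
     */
    static void
    build_cipher_auth_chain(struct rte_crypto_sym_xform *cipher,
                            struct rte_crypto_sym_xform *auth)
    {
        cipher->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
        cipher->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
        cipher->cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
        cipher->next = auth;

        auth->type = RTE_CRYPTO_SYM_XFORM_AUTH;
        auth->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
        auth->auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
        auth->next = NULL;
    }
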
-
-static int
-sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
- struct rte_cryptodev_sym_session *sess,
- struct rte_mempool *pool)
-{
- struct rte_crypto_sym_xform *temp_xform = xform;
- struct cpt_sess_misc *misc;
- vq_cmd_word3_t vq_cmd_w3;
- void *priv;
- int ret;
-
- ret = sym_xform_verify(xform);
- if (unlikely(ret))
- return ret;
-
- if (unlikely(rte_mempool_get(pool, &priv))) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_sess_misc) +
- offsetof(struct cpt_ctx, mc_ctx));
-
- misc = priv;
-
- for ( ; xform != NULL; xform = xform->next) {
- switch (xform->type) {
- case RTE_CRYPTO_SYM_XFORM_AEAD:
- ret = fill_sess_aead(xform, misc);
- break;
- case RTE_CRYPTO_SYM_XFORM_CIPHER:
- ret = fill_sess_cipher(xform, misc);
- break;
- case RTE_CRYPTO_SYM_XFORM_AUTH:
- if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC)
- ret = fill_sess_gmac(xform, misc);
- else
- ret = fill_sess_auth(xform, misc);
- break;
- default:
- ret = -1;
- }
-
- if (ret)
- goto priv_put;
- }
-
- if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
- cpt_mac_len_verify(&temp_xform->auth)) {
- CPT_LOG_ERR("MAC length is not supported");
- struct cpt_ctx *ctx = SESS_PRIV(misc);
- if (ctx->auth_key != NULL) {
- rte_free(ctx->auth_key);
- ctx->auth_key = NULL;
- }
- ret = -ENOTSUP;
- goto priv_put;
- }
-
- set_sym_session_private_data(sess, driver_id, misc);
-
- misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
- sizeof(struct cpt_sess_misc);
-
- vq_cmd_w3.u64 = 0;
- vq_cmd_w3.s.cptr = misc->ctx_dma_addr + offsetof(struct cpt_ctx,
- mc_ctx);
-
- /*
- * IE engines support IPsec operations
- * SE engines support IPsec operations, Chacha-Poly and
- * Air-Crypto operations
- */
- if (misc->zsk_flag || misc->chacha_poly)
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE;
- else
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE_IE;
-
- misc->cpt_inst_w7 = vq_cmd_w3.u64;
-
- return 0;
-
-priv_put:
- rte_mempool_put(pool, priv);
-
- return -ENOTSUP;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
- struct cpt_request_info *req,
- void *lmtline,
- struct rte_crypto_op *op,
- uint64_t cpt_inst_w7)
-{
- union rte_event_crypto_metadata *m_data;
- union cpt_inst_s inst;
- uint64_t lmt_status;
-
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- m_data = rte_cryptodev_sym_session_get_user_data(
- op->sym->session);
- if (m_data == NULL) {
- rte_pktmbuf_free(op->sym->m_src);
- rte_crypto_op_free(op);
- rte_errno = EINVAL;
- return -EINVAL;
- }
- } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
- op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
- ((uint8_t *)op +
- op->private_data_offset);
- } else {
- return -EINVAL;
- }
-
- inst.u[0] = 0;
- inst.s9x.res_addr = req->comp_baddr;
- inst.u[2] = 0;
- inst.u[3] = 0;
-
- inst.s9x.ei0 = req->ist.ei0;
- inst.s9x.ei1 = req->ist.ei1;
- inst.s9x.ei2 = req->ist.ei2;
- inst.s9x.ei3 = cpt_inst_w7;
-
- inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) |
- m_data->response_info.flow_id) |
- ((uint64_t)m_data->response_info.sched_type << 32) |
- ((uint64_t)m_data->response_info.queue_id << 34));
- inst.u[3] = 1 | (((uint64_t)req >> 3) << 3);
- req->qp = qp;
-
- do {
- /* Copy CPT command to LMTLINE */
- memcpy(lmtline, &inst, sizeof(inst));
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- return 0;
-}
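
The lookup above only succeeds if the application stored the response event
metadata beforehand. A sketch of that step, assuming the standard
rte_cryptodev session user-data API (the event queue id is supplied by the
caller)::

    #include <string.h>
    #include <rte_cryptodev.h>
    #include <rte_eventdev.h>
    #include <rte_event_crypto_adapter.h>

    static int
    set_ca_response_info(struct rte_cryptodev_sym_session *sess,
                         uint8_t ev_queue_id)
    {
        union rte_event_crypto_metadata m_data;

        memset(&m_data, 0, sizeof(m_data));
        m_data.response_info.sched_type = RTE_SCHED_TYPE_ATOMIC;
        m_data.response_info.queue_id = ev_queue_id;
        m_data.response_info.flow_id = 0;

        /* Retrieved by otx2_ca_enqueue_req() at enqueue time. */
        return rte_cryptodev_sym_session_set_user_data(sess, &m_data,
                                                       sizeof(m_data));
    }
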
-
-static __rte_always_inline int32_t __rte_hot
-otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp,
- struct pending_queue *pend_q,
- struct cpt_request_info *req,
- struct rte_crypto_op *op,
- uint64_t cpt_inst_w7,
- unsigned int burst_index)
-{
- void *lmtline = qp->lmtline;
- union cpt_inst_s inst;
- uint64_t lmt_status;
-
- if (qp->ca_enable)
- return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7);
-
- inst.u[0] = 0;
- inst.s9x.res_addr = req->comp_baddr;
- inst.u[2] = 0;
- inst.u[3] = 0;
-
- inst.s9x.ei0 = req->ist.ei0;
- inst.s9x.ei1 = req->ist.ei1;
- inst.s9x.ei2 = req->ist.ei2;
- inst.s9x.ei3 = cpt_inst_w7;
-
- req->time_out = rte_get_timer_cycles() +
- DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
-
- do {
- /* Copy CPT command to LMTLINE */
- memcpy(lmtline, &inst, sizeof(inst));
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- pending_queue_push(pend_q, req, burst_index, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- return 0;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp,
- struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- unsigned int burst_index)
-{
- struct cpt_qp_meta_info *minfo = &qp->meta_info;
- struct rte_crypto_asym_op *asym_op = op->asym;
- struct asym_op_params params = {0};
- struct cpt_asym_sess_misc *sess;
- uintptr_t *cop;
- void *mdata;
- int ret;
-
- if (unlikely(rte_mempool_get(minfo->pool, &mdata) < 0)) {
- CPT_LOG_ERR("Could not allocate meta buffer for request");
- return -ENOMEM;
- }
-
- sess = get_asym_session_private_data(asym_op->session,
- otx2_cryptodev_driver_id);
-
- /* Store IO address of the mdata to meta_buf */
- params.meta_buf = rte_mempool_virt2iova(mdata);
-
- cop = mdata;
- cop[0] = (uintptr_t)mdata;
- cop[1] = (uintptr_t)op;
- cop[2] = cop[3] = 0ULL;
-
- params.req = RTE_PTR_ADD(cop, 4 * sizeof(uintptr_t));
- params.req->op = cop;
-
- /* Adjust meta_buf to point to end of cpt_request_info structure */
- params.meta_buf += (4 * sizeof(uintptr_t)) +
- sizeof(struct cpt_request_info);
- switch (sess->xfrm_type) {
- case RTE_CRYPTO_ASYM_XFORM_MODEX:
-		ret = cpt_modex_prep(&params, &sess->mod_ctx);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_RSA:
-		ret = cpt_enqueue_rsa_op(op, &params, sess);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECDSA:
-		ret = cpt_enqueue_ecdsa_op(op, &params, sess, otx2_fpm_iova);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECPM:
-		ret = cpt_ecpm_prep(&asym_op->ecpm, &params,
- sess->ec_ctx.curveid);
- if (unlikely(ret))
- goto req_fail;
- break;
- default:
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- ret = -EINVAL;
- goto req_fail;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op,
- sess->cpt_inst_w7, burst_index);
- if (unlikely(ret)) {
- CPT_LOG_DP_ERR("Could not enqueue crypto req");
- goto req_fail;
- }
-
- return 0;
-
-req_fail:
- free_op_meta(mdata, minfo->pool);
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q, unsigned int burst_index)
-{
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct cpt_request_info *req;
- struct cpt_sess_misc *sess;
- uint64_t cpt_op;
- void *mdata;
- int ret;
-
- sess = get_sym_session_private_data(sym_op->session,
- otx2_cryptodev_driver_id);
-
- cpt_op = sess->cpt_op;
-
- if (cpt_op & CPT_OP_CIPHER_MASK)
- ret = fill_fc_params(op, sess, &qp->meta_info, &mdata,
- (void **)&req);
- else
- ret = fill_digest_params(op, sess, &qp->meta_info, &mdata,
- (void **)&req);
-
- if (unlikely(ret)) {
- CPT_LOG_DP_ERR("Crypto req : op %p, cpt_op 0x%x ret 0x%x",
- op, (unsigned int)cpt_op, ret);
- return ret;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7,
- burst_index);
- if (unlikely(ret)) {
- /* Free buffer allocated by fill params routines */
- free_op_meta(mdata, qp->meta_info.pool);
- }
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- const unsigned int burst_index)
-{
- uint32_t winsz, esn_low = 0, esn_hi = 0, seql = 0, seqh = 0;
- struct rte_mbuf *m_src = op->sym->m_src;
- struct otx2_sec_session_ipsec_lp *sess;
- struct otx2_ipsec_po_sa_ctl *ctl_wrd;
- struct otx2_ipsec_po_in_sa *sa;
- struct otx2_sec_session *priv;
- struct cpt_request_info *req;
- uint64_t seq_in_sa, seq = 0;
- uint8_t esn;
- int ret;
-
- priv = get_sec_session_private_data(op->sym->sec_session);
- sess = &priv->ipsec.lp;
- sa = &sess->in_sa;
-
- ctl_wrd = &sa->ctl;
- esn = ctl_wrd->esn_en;
- winsz = sa->replay_win_sz;
-
- if (ctl_wrd->direction == OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND)
- ret = process_outb_sa(op, sess, &qp->meta_info, (void **)&req);
- else {
- if (winsz) {
- esn_low = rte_be_to_cpu_32(sa->esn_low);
- esn_hi = rte_be_to_cpu_32(sa->esn_hi);
- seql = *rte_pktmbuf_mtod_offset(m_src, uint32_t *,
- sizeof(struct rte_ipv4_hdr) + 4);
- seql = rte_be_to_cpu_32(seql);
-
- if (!esn)
- seq = (uint64_t)seql;
- else {
- seqh = anti_replay_get_seqh(winsz, seql, esn_hi,
- esn_low);
- seq = ((uint64_t)seqh << 32) | seql;
- }
-
- if (unlikely(seq == 0))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- ret = anti_replay_check(sa->replay, seq, winsz);
- if (unlikely(ret)) {
- otx2_err("Anti replay check failed");
- return IPSEC_ANTI_REPLAY_FAILED;
- }
-
- if (esn) {
- seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low;
- if (seq > seq_in_sa) {
- sa->esn_low = rte_cpu_to_be_32(seql);
- sa->esn_hi = rte_cpu_to_be_32(seqh);
- }
- }
- }
-
- ret = process_inb_sa(op, sess, &qp->meta_info, (void **)&req);
- }
-
- if (unlikely(ret)) {
- otx2_err("Crypto req : op %p, ret 0x%x", op, ret);
- return ret;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7,
- burst_index);
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- unsigned int burst_index)
-{
- const int driver_id = otx2_cryptodev_driver_id;
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct rte_cryptodev_sym_session *sess;
- int ret;
-
- /* Create temporary session */
- sess = rte_cryptodev_sym_session_create(qp->sess_mp);
- if (sess == NULL)
- return -ENOMEM;
-
- ret = sym_session_configure(driver_id, sym_op->xform, sess,
- qp->sess_mp_priv);
- if (ret)
- goto sess_put;
-
- sym_op->session = sess;
-
- ret = otx2_cpt_enqueue_sym(qp, op, pend_q, burst_index);
-
- if (unlikely(ret))
- goto priv_put;
-
- return 0;
-
-priv_put:
- sym_session_clear(driver_id, sess);
-sess_put:
- rte_mempool_put(qp->sess_mp, sess);
- return ret;
-}
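
A sessionless op, by contrast, carries its transform chain directly; a
minimal sketch of preparing one (op pool, mbuf and xform are
caller-provided)::

    #include <rte_crypto.h>
    #include <rte_mbuf.h>

    static struct rte_crypto_op *
    make_sessionless_op(struct rte_mempool *op_pool, struct rte_mbuf *m,
                        struct rte_crypto_sym_xform *xform)
    {
        struct rte_crypto_op *op;

        /* Freshly allocated ops default to RTE_CRYPTO_OP_SESSIONLESS. */
        op = rte_crypto_op_alloc(op_pool, RTE_CRYPTO_OP_TYPE_SYMMETRIC);
        if (op == NULL)
            return NULL;

        op->sym->m_src = m;
        op->sym->xform = xform; /* consumed by the sessionless path above */

        return op;
    }
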
-
-static uint16_t
-otx2_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- uint16_t nb_allowed, count = 0;
- struct otx2_cpt_qp *qp = qptr;
- struct pending_queue *pend_q;
- struct rte_crypto_op *op;
- int ret;
-
- pend_q = &qp->pend_q;
-
- nb_allowed = pending_queue_free_slots(pend_q,
- OTX2_CPT_DEFAULT_CMD_QLEN, 0);
- nb_ops = RTE_MIN(nb_ops, nb_allowed);
-
- for (count = 0; count < nb_ops; count++) {
- op = ops[count];
- if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
- ret = otx2_cpt_enqueue_sec(qp, op, pend_q,
- count);
- else if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
- ret = otx2_cpt_enqueue_sym(qp, op, pend_q,
- count);
- else
- ret = otx2_cpt_enqueue_sym_sessless(qp, op,
- pend_q, count);
- } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
- ret = otx2_cpt_enqueue_asym(qp, op, pend_q,
- count);
- else
- break;
- } else
- break;
-
- if (unlikely(ret))
- break;
- }
-
- if (unlikely(!qp->ca_enable))
- pending_queue_commit(pend_q, count, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- return count;
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req,
- struct rte_crypto_rsa_xform *rsa_ctx)
-{
- struct rte_crypto_rsa_op_param *rsa = &cop->asym->rsa;
-
- switch (rsa->op_type) {
- case RTE_CRYPTO_ASYM_OP_ENCRYPT:
- rsa->cipher.length = rsa_ctx->n.length;
- memcpy(rsa->cipher.data, req->rptr, rsa->cipher.length);
- break;
- case RTE_CRYPTO_ASYM_OP_DECRYPT:
- if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) {
- rsa->message.length = rsa_ctx->n.length;
- memcpy(rsa->message.data, req->rptr,
- rsa->message.length);
- } else {
- /* Get length of decrypted output */
- rsa->message.length = rte_cpu_to_be_16
- (*((uint16_t *)req->rptr));
- /*
- * Offset output data pointer by length field
- * (2 bytes) and copy decrypted data.
- */
- memcpy(rsa->message.data, req->rptr + 2,
- rsa->message.length);
- }
- break;
- case RTE_CRYPTO_ASYM_OP_SIGN:
- rsa->sign.length = rsa_ctx->n.length;
- memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
- break;
- case RTE_CRYPTO_ASYM_OP_VERIFY:
- if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) {
- rsa->sign.length = rsa_ctx->n.length;
- memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
- } else {
- /* Get length of signed output */
- rsa->sign.length = rte_cpu_to_be_16
- (*((uint16_t *)req->rptr));
- /*
- * Offset output data pointer by length field
- * (2 bytes) and copy signed data.
- */
- memcpy(rsa->sign.data, req->rptr + 2,
- rsa->sign.length);
- }
- if (memcmp(rsa->sign.data, rsa->message.data,
- rsa->message.length)) {
- CPT_LOG_DP_ERR("RSA verification failed");
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
- break;
- default:
- CPT_LOG_DP_DEBUG("Invalid RSA operation type");
- cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-}
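
The padded-output handling above boils down to a 16-bit length word followed
by the payload. A standalone sketch, treating the prefix as big-endian
(which is what the byte swap above amounts to on a little-endian host)::

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    static size_t
    decode_len_prefixed(const uint8_t *rptr, uint8_t *out, size_t out_max)
    {
        /* 16-bit big-endian length word precedes the data. */
        size_t len = (size_t)((rptr[0] << 8) | rptr[1]);

        if (len > out_max)
            return 0;
        memcpy(out, rptr + 2, len); /* skip the 2-byte length field */
        return len;
    }
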
-
-static __rte_always_inline void
-otx2_cpt_asym_dequeue_ecdsa_op(struct rte_crypto_ecdsa_op_param *ecdsa,
- struct cpt_request_info *req,
- struct cpt_asym_ec_ctx *ec)
-{
- int prime_len = ec_grp[ec->curveid].prime.length;
-
- if (ecdsa->op_type == RTE_CRYPTO_ASYM_OP_VERIFY)
- return;
-
- /* Separate out sign r and s components */
- memcpy(ecdsa->r.data, req->rptr, prime_len);
- memcpy(ecdsa->s.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8),
- prime_len);
- ecdsa->r.length = prime_len;
- ecdsa->s.length = prime_len;
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_dequeue_ecpm_op(struct rte_crypto_ecpm_op_param *ecpm,
- struct cpt_request_info *req,
- struct cpt_asym_ec_ctx *ec)
-{
- int prime_len = ec_grp[ec->curveid].prime.length;
-
- memcpy(ecpm->r.x.data, req->rptr, prime_len);
- memcpy(ecpm->r.y.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8),
- prime_len);
- ecpm->r.x.length = prime_len;
- ecpm->r.y.length = prime_len;
-}
-
-static void
-otx2_cpt_asym_post_process(struct rte_crypto_op *cop,
- struct cpt_request_info *req)
-{
- struct rte_crypto_asym_op *op = cop->asym;
- struct cpt_asym_sess_misc *sess;
-
- sess = get_asym_session_private_data(op->session,
- otx2_cryptodev_driver_id);
-
- switch (sess->xfrm_type) {
- case RTE_CRYPTO_ASYM_XFORM_RSA:
- otx2_cpt_asym_rsa_op(cop, req, &sess->rsa_ctx);
- break;
- case RTE_CRYPTO_ASYM_XFORM_MODEX:
- op->modex.result.length = sess->mod_ctx.modulus.length;
- memcpy(op->modex.result.data, req->rptr,
- op->modex.result.length);
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECDSA:
- otx2_cpt_asym_dequeue_ecdsa_op(&op->ecdsa, req, &sess->ec_ctx);
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECPM:
- otx2_cpt_asym_dequeue_ecpm_op(&op->ecpm, req, &sess->ec_ctx);
- break;
- default:
- CPT_LOG_DP_DEBUG("Invalid crypto xform type");
- cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-}
-
-static void
-otx2_cpt_sec_post_process(struct rte_crypto_op *cop, uintptr_t *rsp)
-{
- struct cpt_request_info *req = (struct cpt_request_info *)rsp[2];
- vq_cmd_word0_t *word0 = (vq_cmd_word0_t *)&req->ist.ei0;
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m = sym_op->m_src;
- struct rte_ipv6_hdr *ip6;
- struct rte_ipv4_hdr *ip;
- uint16_t m_len = 0;
- int mdata_len;
- char *data;
-
- mdata_len = (int)rsp[3];
- rte_pktmbuf_trim(m, mdata_len);
-
- if (word0->s.opcode.major == OTX2_IPSEC_PO_PROCESS_IPSEC_INB) {
- data = rte_pktmbuf_mtod(m, char *);
- ip = (struct rte_ipv4_hdr *)(data +
- OTX2_IPSEC_PO_INB_RPTR_HDR);
-
- if ((ip->version_ihl >> 4) == 4) {
- m_len = rte_be_to_cpu_16(ip->total_length);
- } else {
- ip6 = (struct rte_ipv6_hdr *)(data +
- OTX2_IPSEC_PO_INB_RPTR_HDR);
- m_len = rte_be_to_cpu_16(ip6->payload_len) +
- sizeof(struct rte_ipv6_hdr);
- }
-
- m->data_len = m_len;
- m->pkt_len = m_len;
- m->data_off += OTX2_IPSEC_PO_INB_RPTR_HDR;
- }
-}
-
-static inline void
-otx2_cpt_dequeue_post_process(struct otx2_cpt_qp *qp, struct rte_crypto_op *cop,
- uintptr_t *rsp, uint8_t cc)
-{
- unsigned int sz;
-
- if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
- if (likely(cc == OTX2_IPSEC_PO_CC_SUCCESS)) {
- otx2_cpt_sec_post_process(cop, rsp);
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
-
- return;
- }
-
- if (likely(cc == NO_ERR)) {
- /* Verify authentication data if required */
- if (unlikely(rsp[2]))
- compl_auth_verify(cop, (uint8_t *)rsp[2],
- rsp[3]);
- else
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else {
- if (cc == ERR_GC_ICV_MISCOMPARE)
- cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-
- if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
- sym_session_clear(otx2_cryptodev_driver_id,
- cop->sym->session);
- sz = rte_cryptodev_sym_get_existing_header_session_size(
- cop->sym->session);
- memset(cop->sym->session, 0, sz);
- rte_mempool_put(qp->sess_mp, cop->sym->session);
- cop->sym->session = NULL;
- }
- }
-
- if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
- if (likely(cc == NO_ERR)) {
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /*
- * Pass cpt_req_info stored in metabuf during
- * enqueue.
- */
- rsp = RTE_PTR_ADD(rsp, 4 * sizeof(uintptr_t));
- otx2_cpt_asym_post_process(cop,
- (struct cpt_request_info *)rsp);
- } else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-}
-
-static uint16_t
-otx2_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- int i, nb_pending, nb_completed;
- struct otx2_cpt_qp *qp = qptr;
- struct pending_queue *pend_q;
- struct cpt_request_info *req;
- struct rte_crypto_op *cop;
- uint8_t cc[nb_ops];
- uintptr_t *rsp;
- void *metabuf;
-
- pend_q = &qp->pend_q;
-
- nb_pending = pending_queue_level(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- /* Ensure pcount isn't read before data lands */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
-
- nb_ops = RTE_MIN(nb_ops, nb_pending);
-
- for (i = 0; i < nb_ops; i++) {
- pending_queue_peek(pend_q, (void **)&req,
- OTX2_CPT_DEFAULT_CMD_QLEN, 0);
-
- cc[i] = otx2_cpt_compcode_get(req);
-
- if (unlikely(cc[i] == ERR_REQ_PENDING))
- break;
-
- ops[i] = req->op;
-
- pending_queue_pop(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN);
- }
-
- nb_completed = i;
-
- for (i = 0; i < nb_completed; i++) {
- rsp = (void *)ops[i];
-
- metabuf = (void *)rsp[0];
- cop = (void *)rsp[1];
-
- ops[i] = cop;
-
- otx2_cpt_dequeue_post_process(qp, cop, rsp, cc[i]);
-
- free_op_meta(metabuf, qp->meta_info.pool);
- }
-
- return nb_completed;
-}
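
From the application side these hooks are reached through the generic burst
API; a minimal polling loop (dev_id, qp_id and ops[] supplied by the
caller)::

    #include <rte_cryptodev.h>

    static void
    process_burst(uint8_t dev_id, uint16_t qp_id,
                  struct rte_crypto_op **ops, uint16_t nb_ops)
    {
        uint16_t sent = 0, done = 0;

        /* Retry until the whole batch is accepted... */
        while (sent < nb_ops)
            sent += rte_cryptodev_enqueue_burst(dev_id, qp_id,
                                                ops + sent, nb_ops - sent);

        /* ...then poll until every op has completed. */
        while (done < nb_ops)
            done += rte_cryptodev_dequeue_burst(dev_id, qp_id,
                                                ops + done, nb_ops - done);
    }
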
-
-void
-otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev)
-{
- dev->enqueue_burst = otx2_cpt_enqueue_burst;
- dev->dequeue_burst = otx2_cpt_dequeue_burst;
-
- rte_mb();
-}
-
-/* PMD ops */
-
-static int
-otx2_cpt_dev_config(struct rte_cryptodev *dev,
- struct rte_cryptodev_config *conf)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- int ret;
-
- if (conf->nb_queue_pairs > vf->max_queues) {
- CPT_LOG_ERR("Invalid number of queue pairs requested");
- return -EINVAL;
- }
-
- dev->feature_flags = otx2_cpt_default_ff_get() & ~conf->ff_disable;
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) {
- /* Initialize shared FPM table */
- ret = cpt_fpm_init(otx2_fpm_iova);
- if (ret)
- return ret;
- }
-
- /* Unregister error interrupts */
- if (vf->err_intr_registered)
- otx2_cpt_err_intr_unregister(dev);
-
- /* Detach queues */
- if (vf->nb_queues) {
- ret = otx2_cpt_queues_detach(dev);
- if (ret) {
- CPT_LOG_ERR("Could not detach CPT queues");
- return ret;
- }
- }
-
- /* Attach queues */
- ret = otx2_cpt_queues_attach(dev, conf->nb_queue_pairs);
- if (ret) {
- CPT_LOG_ERR("Could not attach CPT queues");
- return -ENODEV;
- }
-
- ret = otx2_cpt_msix_offsets_get(dev);
- if (ret) {
- CPT_LOG_ERR("Could not get MSI-X offsets");
- goto queues_detach;
- }
-
- /* Register error interrupts */
- ret = otx2_cpt_err_intr_register(dev);
- if (ret) {
- CPT_LOG_ERR("Could not register error interrupts");
- goto queues_detach;
- }
-
- ret = otx2_cpt_inline_init(dev);
- if (ret) {
- CPT_LOG_ERR("Could not enable inline IPsec");
- goto intr_unregister;
- }
-
- otx2_cpt_set_enqdeq_fns(dev);
-
- return 0;
-
-intr_unregister:
- otx2_cpt_err_intr_unregister(dev);
-queues_detach:
- otx2_cpt_queues_detach(dev);
- return ret;
-}
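
A sketch of the application-side sequence that ends up in
otx2_cpt_dev_config(); error unwinding is trimmed and the descriptor count
is an assumed value below the queue depth::

    #include <rte_cryptodev.h>

    static int
    setup_cpt_dev(uint8_t dev_id, uint16_t nb_qps,
                  struct rte_mempool *sess_mp, struct rte_mempool *priv_mp)
    {
        struct rte_cryptodev_config conf = {
            .socket_id = rte_cryptodev_socket_id(dev_id),
            .nb_queue_pairs = nb_qps,
            .ff_disable = 0,
        };
        struct rte_cryptodev_qp_conf qp_conf = {
            .nb_descriptors = 2048,
            .mp_session = sess_mp,
            .mp_session_private = priv_mp,
        };
        uint16_t i;
        int ret;

        ret = rte_cryptodev_configure(dev_id, &conf);
        if (ret)
            return ret;

        for (i = 0; i < nb_qps; i++) {
            ret = rte_cryptodev_queue_pair_setup(dev_id, i, &qp_conf,
                                                 conf.socket_id);
            if (ret)
                return ret;
        }

        return rte_cryptodev_start(dev_id);
    }
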
-
-static int
-otx2_cpt_dev_start(struct rte_cryptodev *dev)
-{
- RTE_SET_USED(dev);
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- return 0;
-}
-
-static void
-otx2_cpt_dev_stop(struct rte_cryptodev *dev)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO)
- cpt_fpm_clear();
-}
-
-static int
-otx2_cpt_dev_close(struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- int i, ret = 0;
-
- for (i = 0; i < dev->data->nb_queue_pairs; i++) {
- ret = otx2_cpt_queue_pair_release(dev, i);
- if (ret)
- return ret;
- }
-
- /* Unregister error interrupts */
- if (vf->err_intr_registered)
- otx2_cpt_err_intr_unregister(dev);
-
- /* Detach queues */
- if (vf->nb_queues) {
- ret = otx2_cpt_queues_detach(dev);
- if (ret)
- CPT_LOG_ERR("Could not detach CPT queues");
- }
-
- return ret;
-}
-
-static void
-otx2_cpt_dev_info_get(struct rte_cryptodev *dev,
- struct rte_cryptodev_info *info)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
-
- if (info != NULL) {
- info->max_nb_queue_pairs = vf->max_queues;
- info->feature_flags = otx2_cpt_default_ff_get();
- info->capabilities = otx2_cpt_capabilities_get();
- info->sym.max_nb_sessions = 0;
- info->driver_id = otx2_cryptodev_driver_id;
- info->min_mbuf_headroom_req = OTX2_CPT_MIN_HEADROOM_REQ;
- info->min_mbuf_tailroom_req = OTX2_CPT_MIN_TAILROOM_REQ;
- }
-}
-
-static int
-otx2_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
- const struct rte_cryptodev_qp_conf *conf,
- int socket_id __rte_unused)
-{
- uint8_t grp_mask = OTX2_CPT_ENG_GRPS_MASK;
- struct rte_pci_device *pci_dev;
- struct otx2_cpt_qp *qp;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (dev->data->queue_pairs[qp_id] != NULL)
- otx2_cpt_queue_pair_release(dev, qp_id);
-
- if (conf->nb_descriptors > OTX2_CPT_DEFAULT_CMD_QLEN) {
- CPT_LOG_ERR("Could not setup queue pair for %u descriptors",
- conf->nb_descriptors);
- return -EINVAL;
- }
-
- pci_dev = RTE_DEV_TO_PCI(dev->device);
-
- if (pci_dev->mem_resource[2].addr == NULL) {
- CPT_LOG_ERR("Invalid PCI mem address");
- return -EIO;
- }
-
- qp = otx2_cpt_qp_create(dev, qp_id, grp_mask);
- if (qp == NULL) {
- CPT_LOG_ERR("Could not create queue pair %d", qp_id);
- return -ENOMEM;
- }
-
- qp->sess_mp = conf->mp_session;
- qp->sess_mp_priv = conf->mp_session_private;
- dev->data->queue_pairs[qp_id] = qp;
-
- return 0;
-}
-
-static int
-otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
-{
- struct otx2_cpt_qp *qp = dev->data->queue_pairs[qp_id];
- int ret;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (qp == NULL)
- return -EINVAL;
-
- CPT_LOG_INFO("Releasing queue pair %d", qp_id);
-
- ret = otx2_cpt_qp_destroy(dev, qp);
- if (ret) {
- CPT_LOG_ERR("Could not destroy queue pair %d", qp_id);
- return ret;
- }
-
- dev->data->queue_pairs[qp_id] = NULL;
-
- return 0;
-}
-
-static unsigned int
-otx2_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
-{
- return cpt_get_session_size();
-}
-
-static int
-otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
- struct rte_crypto_sym_xform *xform,
- struct rte_cryptodev_sym_session *sess,
- struct rte_mempool *pool)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- return sym_session_configure(dev->driver_id, xform, sess, pool);
-}
-
-static void
-otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- return sym_session_clear(dev->driver_id, sess);
-}
-
-static unsigned int
-otx2_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
-{
- return sizeof(struct cpt_asym_sess_misc);
-}
-
-static int
-otx2_cpt_asym_session_cfg(struct rte_cryptodev *dev,
- struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
-{
- struct cpt_asym_sess_misc *priv;
- vq_cmd_word3_t vq_cmd_w3;
- int ret;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session_private_data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
- ret = cpt_fill_asym_session_parameters(priv, xform);
- if (ret) {
- CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
- return ret;
- }
-
- vq_cmd_w3.u64 = 0;
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_AE;
- priv->cpt_inst_w7 = vq_cmd_w3.u64;
-
- set_asym_session_private_data(sess, dev->driver_id, priv);
-
- return 0;
-}
-
-static void
-otx2_cpt_asym_session_clear(struct rte_cryptodev *dev,
- struct rte_cryptodev_asym_session *sess)
-{
- struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- priv = get_asym_session_private_data(sess, dev->driver_id);
- if (priv == NULL)
- return;
-
- /* Free resources allocated in session_cfg */
- cpt_free_asym_session_parameters(priv);
-
- /* Reset and free object back to pool */
- memset(priv, 0, otx2_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
-}
-
-struct rte_cryptodev_ops otx2_cpt_ops = {
- /* Device control ops */
- .dev_configure = otx2_cpt_dev_config,
- .dev_start = otx2_cpt_dev_start,
- .dev_stop = otx2_cpt_dev_stop,
- .dev_close = otx2_cpt_dev_close,
- .dev_infos_get = otx2_cpt_dev_info_get,
-
- .stats_get = NULL,
- .stats_reset = NULL,
- .queue_pair_setup = otx2_cpt_queue_pair_setup,
- .queue_pair_release = otx2_cpt_queue_pair_release,
-
- /* Symmetric crypto ops */
- .sym_session_get_size = otx2_cpt_sym_session_get_size,
- .sym_session_configure = otx2_cpt_sym_session_configure,
- .sym_session_clear = otx2_cpt_sym_session_clear,
-
- /* Asymmetric crypto ops */
- .asym_session_get_size = otx2_cpt_asym_session_size_get,
- .asym_session_configure = otx2_cpt_asym_session_cfg,
- .asym_session_clear = otx2_cpt_asym_session_clear,
-
-};
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
deleted file mode 100644
index 7faf7ad034..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
+++ /dev/null
@@ -1,15 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_OPS_H_
-#define _OTX2_CRYPTODEV_OPS_H_
-
-#include <cryptodev_pmd.h>
-
-#define OTX2_CPT_MIN_HEADROOM_REQ 48
-#define OTX2_CPT_MIN_TAILROOM_REQ 208
-
-extern struct rte_cryptodev_ops otx2_cpt_ops;
-
-#endif /* _OTX2_CRYPTODEV_OPS_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
deleted file mode 100644
index 01c081a216..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
+++ /dev/null
@@ -1,82 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_OPS_HELPER_H_
-#define _OTX2_CRYPTODEV_OPS_HELPER_H_
-
-#include "cpt_pmd_logs.h"
-
-static void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
-{
- void *priv = get_sym_session_private_data(sess, driver_id);
- struct cpt_sess_misc *misc;
- struct rte_mempool *pool;
- struct cpt_ctx *ctx;
-
- if (priv == NULL)
- return;
-
- misc = priv;
- ctx = SESS_PRIV(misc);
-
- if (ctx->auth_key != NULL)
- rte_free(ctx->auth_key);
-
- memset(priv, 0, cpt_get_session_size());
-
- pool = rte_mempool_from_obj(priv);
-
- set_sym_session_private_data(sess, driver_id, NULL);
-
- rte_mempool_put(pool, priv);
-}
-
-static __rte_always_inline uint8_t
-otx2_cpt_compcode_get(struct cpt_request_info *req)
-{
- volatile struct cpt_res_s_9s *res;
- uint8_t ret;
-
- res = (volatile struct cpt_res_s_9s *)req->completion_addr;
-
- if (unlikely(res->compcode == CPT_9X_COMP_E_NOTDONE)) {
- if (rte_get_timer_cycles() < req->time_out)
- return ERR_REQ_PENDING;
-
- CPT_LOG_DP_ERR("Request timed out");
- return ERR_REQ_TIMEOUT;
- }
-
- if (likely(res->compcode == CPT_9X_COMP_E_GOOD)) {
- ret = NO_ERR;
- if (unlikely(res->uc_compcode)) {
- ret = res->uc_compcode;
- CPT_LOG_DP_DEBUG("Request failed with microcode error");
- CPT_LOG_DP_DEBUG("MC completion code 0x%x",
- res->uc_compcode);
- }
- } else {
- CPT_LOG_DP_DEBUG("HW completion code 0x%x", res->compcode);
-
- ret = res->compcode;
- switch (res->compcode) {
- case CPT_9X_COMP_E_INSTERR:
- CPT_LOG_DP_ERR("Request failed with instruction error");
- break;
- case CPT_9X_COMP_E_FAULT:
- CPT_LOG_DP_ERR("Request failed with DMA fault");
- break;
- case CPT_9X_COMP_E_HWERR:
- CPT_LOG_DP_ERR("Request failed with hardware error");
- break;
- default:
- CPT_LOG_DP_ERR("Request failed with unknown completion code");
- }
- }
-
- return ret;
-}
-
-#endif /* _OTX2_CRYPTODEV_OPS_HELPER_H_ */
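
The completion handling above follows a generic poll-with-deadline pattern;
reduced to its essentials (the "not done" code is a placeholder)::

    #include <stdint.h>
    #include <rte_cycles.h>

    #define COMP_NOTDONE 0xff /* placeholder "still running" code */

    /* The device writes the completion word, so read it as volatile. */
    static int
    wait_completion(volatile uint8_t *compcode, uint64_t timeout_cycles)
    {
        uint64_t deadline = rte_get_timer_cycles() + timeout_cycles;

        while (*compcode == COMP_NOTDONE) {
            if (rte_get_timer_cycles() > deadline)
                return -1; /* timed out */
        }
        return *compcode;
    }
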
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h b/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
deleted file mode 100644
index 95bce3621a..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020-2021 Marvell.
- */
-
-#ifndef _OTX2_CRYPTODEV_QP_H_
-#define _OTX2_CRYPTODEV_QP_H_
-
-#include <rte_common.h>
-#include <rte_eventdev.h>
-#include <rte_mempool.h>
-#include <rte_spinlock.h>
-
-#include "cpt_common.h"
-
-struct otx2_cpt_qp {
- uint32_t id;
- /**< Queue pair id */
- uint8_t blkaddr;
- /**< CPT0/1 BLKADDR of LF */
- uintptr_t base;
- /**< Base address where BAR is mapped */
- void *lmtline;
- /**< Address of LMTLINE */
- rte_iova_t lf_nq_reg;
- /**< LF enqueue register address */
- struct pending_queue pend_q;
- /**< Pending queue */
- struct rte_mempool *sess_mp;
- /**< Session mempool */
- struct rte_mempool *sess_mp_priv;
- /**< Session private data mempool */
- struct cpt_qp_meta_info meta_info;
- /**< Metabuf info required to support operations on the queue pair */
- rte_iova_t iq_dma_addr;
- /**< Instruction queue address */
- struct rte_event ev;
- /**< Event information required for binding cryptodev queue to
- * eventdev queue. Used by crypto adapter.
- */
- uint8_t ca_enable;
- /**< Set when queue pair is added to crypto adapter */
- uint8_t qp_ev_bind;
- /**< Set when queue pair is bound to event queue */
-};
-
-#endif /* _OTX2_CRYPTODEV_QP_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
deleted file mode 100644
index 9a4f84f8d8..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
+++ /dev/null
@@ -1,655 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_esp.h>
-#include <rte_ethdev.h>
-#include <rte_ip.h>
-#include <rte_malloc.h>
-#include <rte_security.h>
-#include <rte_security_driver.h>
-#include <rte_udp.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_sec.h"
-#include "otx2_security.h"
-
-static int
-ipsec_lp_len_precalc(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_sec_session_ipsec_lp *lp)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
-
- lp->partial_len = 0;
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- lp->partial_len = sizeof(struct rte_ipv4_hdr);
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- lp->partial_len = sizeof(struct rte_ipv6_hdr);
- else
- return -EINVAL;
- }
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
- lp->partial_len += sizeof(struct rte_esp_hdr);
- lp->roundup_len = sizeof(struct rte_esp_tail);
- } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) {
- lp->partial_len += OTX2_SEC_AH_HDR_LEN;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->options.udp_encap)
- lp->partial_len += sizeof(struct rte_udp_hdr);
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- lp->partial_len += OTX2_SEC_AES_GCM_IV_LEN;
- lp->partial_len += OTX2_SEC_AES_GCM_MAC_LEN;
- lp->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN;
- return 0;
- } else {
- return -EINVAL;
- }
- }
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- lp->partial_len += OTX2_SEC_AES_CBC_IV_LEN;
- lp->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN;
- } else {
- return -EINVAL;
- }
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- lp->partial_len += OTX2_SEC_SHA1_HMAC_LEN;
- else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
- lp->partial_len += OTX2_SEC_SHA2_HMAC_LEN;
- else
- return -EINVAL;
-
- return 0;
-}
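
A worked instance of the pre-calculation above, for an ESP tunnel-mode IPv4
SA with AES-GCM; the 8-byte IV and 16-byte ICV match the driver constants by
assumption::

    #include <stdint.h>

    static uint32_t
    esp_gcm_ipv4_overhead(void)
    {
        uint32_t partial_len = 20; /* outer IPv4 header */

        partial_len += 8;          /* ESP header (SPI + sequence number) */
        partial_len += 8;          /* AES-GCM IV carried in the packet */
        partial_len += 16;         /* AES-GCM ICV */

        return partial_len;        /* 52 bytes, before ESP tail padding */
    }
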
-
-static int
-otx2_cpt_enq_sa_write(struct otx2_sec_session_ipsec_lp *lp,
- struct otx2_cpt_qp *qptr, uint8_t opcode)
-{
- uint64_t lmt_status, time_out;
- void *lmtline = qptr->lmtline;
- struct otx2_cpt_inst_s inst;
- struct otx2_cpt_res *res;
- uint64_t *mdata;
- int ret = 0;
-
- if (unlikely(rte_mempool_get(qptr->meta_info.pool,
- (void **)&mdata) < 0))
- return -ENOMEM;
-
- res = (struct otx2_cpt_res *)RTE_PTR_ALIGN(mdata, 16);
- res->compcode = CPT_9X_COMP_E_NOTDONE;
-
- inst.opcode = opcode | (lp->ctx_len << 8);
- inst.param1 = 0;
- inst.param2 = 0;
- inst.dlen = lp->ctx_len << 3;
- inst.dptr = rte_mempool_virt2iova(lp);
- inst.rptr = 0;
- inst.cptr = rte_mempool_virt2iova(lp);
- inst.egrp = OTX2_CPT_EGRP_SE;
-
- inst.u64[0] = 0;
- inst.u64[2] = 0;
- inst.u64[3] = 0;
- inst.res_addr = rte_mempool_virt2iova(res);
-
- rte_io_wmb();
-
- do {
- /* Copy CPT command to LMTLINE */
- otx2_lmt_mov(lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(qptr->lf_nq_reg);
- } while (lmt_status == 0);
-
- time_out = rte_get_timer_cycles() +
- DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
-
- while (res->compcode == CPT_9X_COMP_E_NOTDONE) {
- if (rte_get_timer_cycles() > time_out) {
- rte_mempool_put(qptr->meta_info.pool, mdata);
- otx2_err("Request timed out");
- return -ETIMEDOUT;
- }
- rte_io_rmb();
- }
-
- if (unlikely(res->compcode != CPT_9X_COMP_E_GOOD)) {
- ret = res->compcode;
- switch (ret) {
- case CPT_9X_COMP_E_INSTERR:
- otx2_err("Request failed with instruction error");
- break;
- case CPT_9X_COMP_E_FAULT:
- otx2_err("Request failed with DMA fault");
- break;
- case CPT_9X_COMP_E_HWERR:
- otx2_err("Request failed with hardware error");
- break;
- default:
- otx2_err("Request failed with unknown hardware "
- "completion code : 0x%x", ret);
- }
- goto mempool_put;
- }
-
- if (unlikely(res->uc_compcode != OTX2_IPSEC_PO_CC_SUCCESS)) {
- ret = res->uc_compcode;
- switch (ret) {
- case OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED:
- otx2_err("Invalid auth type");
- break;
- case OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED:
- otx2_err("Invalid encrypt type");
- break;
- default:
- otx2_err("Request failed with unknown microcode "
- "completion code : 0x%x", ret);
- }
- }
-
-mempool_put:
- rte_mempool_put(qptr->meta_info.pool, mdata);
- return ret;
-}
-
-static void
-set_session_misc_attributes(struct otx2_sec_session_ipsec_lp *sess,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_crypto_sym_xform *auth_xform,
- struct rte_crypto_sym_xform *cipher_xform)
-{
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- sess->iv_offset = crypto_xform->aead.iv.offset;
- sess->iv_length = crypto_xform->aead.iv.length;
- sess->aad_length = crypto_xform->aead.aad_length;
- sess->mac_len = crypto_xform->aead.digest_length;
- } else {
- sess->iv_offset = cipher_xform->cipher.iv.offset;
- sess->iv_length = cipher_xform->cipher.iv.length;
- sess->auth_iv_offset = auth_xform->auth.iv.offset;
- sess->auth_iv_length = auth_xform->auth.iv.length;
- sess->mac_len = auth_xform->auth.digest_length;
- }
-}
-
-static int
-crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_ipsec_po_ip_template *template = NULL;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_sec_session_ipsec_lp *lp;
- struct otx2_ipsec_po_sa_ctl *ctl;
- int cipher_key_len, auth_key_len;
- struct otx2_ipsec_po_out_sa *sa;
- struct otx2_sec_session *sess;
- struct otx2_cpt_inst_s inst;
- struct rte_ipv6_hdr *ip6;
- struct rte_ipv4_hdr *ip;
- int ret, ctx_len;
-
- sess = get_sec_session_private_data(sec_sess);
- sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
- lp = &sess->ipsec.lp;
-
- sa = &lp->out_sa;
- ctl = &sa->ctl;
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_po_out_sa));
-
- /* Initialize lookaside ipsec private data */
- lp->ip_id = 0;
- lp->seq_lo = 1;
- lp->seq_hi = 0;
-
- ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- return ret;
-
- ret = ipsec_lp_len_precalc(ipsec, crypto_xform, lp);
- if (ret)
- return ret;
-
- /* Start ip id from 1 */
- lp->ip_id = 1;
-
- if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) {
- template = &sa->aes_gcm.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- aes_gcm.template) + sizeof(
- sa->aes_gcm.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA1) {
- template = &sa->sha1.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha1.template) + sizeof(
- sa->sha1.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256) {
- template = &sa->sha2.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha2.template) + sizeof(
- sa->sha2.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else {
- return -EINVAL;
- }
- ip = &template->ip4.ipv4_hdr;
- if (ipsec->options.udp_encap) {
- ip->next_proto_id = IPPROTO_UDP;
- template->ip4.udp_src = rte_be_to_cpu_16(4500);
- template->ip4.udp_dst = rte_be_to_cpu_16(4500);
- } else {
- ip->next_proto_id = IPPROTO_ESP;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- ip->version_ihl = RTE_IPV4_VHL_DEF;
- ip->time_to_live = ipsec->tunnel.ipv4.ttl;
- ip->type_of_service |= (ipsec->tunnel.ipv4.dscp << 2);
- if (ipsec->tunnel.ipv4.df)
- ip->fragment_offset = BIT(14);
- memcpy(&ip->src_addr, &ipsec->tunnel.ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&ip->dst_addr, &ipsec->tunnel.ipv4.dst_ip,
- sizeof(struct in_addr));
- } else if (ipsec->tunnel.type ==
- RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
-
- if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) {
- template = &sa->aes_gcm.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- aes_gcm.template) + sizeof(
- sa->aes_gcm.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA1) {
- template = &sa->sha1.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha1.template) + sizeof(
- sa->sha1.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256) {
- template = &sa->sha2.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha2.template) + sizeof(
- sa->sha2.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else {
- return -EINVAL;
- }
-
- ip6 = &template->ip6.ipv6_hdr;
- if (ipsec->options.udp_encap) {
- ip6->proto = IPPROTO_UDP;
- template->ip6.udp_src = rte_be_to_cpu_16(4500);
- template->ip6.udp_dst = rte_be_to_cpu_16(4500);
- } else {
- ip6->proto = (ipsec->proto ==
- RTE_SECURITY_IPSEC_SA_PROTO_ESP) ?
- IPPROTO_ESP : IPPROTO_AH;
- }
- ip6->vtc_flow = rte_cpu_to_be_32(0x60000000 |
- ((ipsec->tunnel.ipv6.dscp <<
- RTE_IPV6_HDR_TC_SHIFT) &
- RTE_IPV6_HDR_TC_MASK) |
- ((ipsec->tunnel.ipv6.flabel <<
- RTE_IPV6_HDR_FL_SHIFT) &
- RTE_IPV6_HDR_FL_MASK));
- ip6->hop_limits = ipsec->tunnel.ipv6.hlimit;
- memcpy(&ip6->src_addr, &ipsec->tunnel.ipv6.src_addr,
- sizeof(struct in6_addr));
- memcpy(&ip6->dst_addr, &ipsec->tunnel.ipv6.dst_addr,
- sizeof(struct in6_addr));
- }
- }
-
- cipher_xform = crypto_xform;
- auth_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- memcpy(sa->sha1.hmac_key, auth_key, auth_key_len);
- else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
- memcpy(sa->sha2.hmac_key, auth_key, auth_key_len);
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_SE;
- inst.cptr = rte_mempool_virt2iova(sa);
-
- lp->cpt_inst_w7 = inst.u64[7];
- lp->ucmd_opcode = (lp->ctx_len << 8) |
- (OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB);
-
- /* Set per packet IV and IKEv2 bits */
- lp->ucmd_param1 = BIT(11) | BIT(9);
- lp->ucmd_param2 = 0;
-
- set_session_misc_attributes(lp, crypto_xform,
- auth_xform, cipher_xform);
-
- return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0],
- OTX2_IPSEC_PO_WRITE_IPSEC_OUTB);
-}
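
A sketch of how an application creates the lookaside session this code
programs, assuming the rte_security API of this era (separate session and
private-data mempools) and a queue pair already set up as required below::

    #include <rte_cryptodev.h>
    #include <rte_security.h>

    static struct rte_security_session *
    create_lksd_session(uint8_t dev_id,
                        struct rte_security_ipsec_xform *ipsec,
                        struct rte_crypto_sym_xform *xform,
                        struct rte_mempool *mp, struct rte_mempool *priv_mp)
    {
        struct rte_security_ctx *ctx = rte_cryptodev_get_sec_ctx(dev_id);
        struct rte_security_session_conf conf = {
            .action_type = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
            .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
            .ipsec = *ipsec,
            .crypto_xform = xform,
        };

        return rte_security_session_create(ctx, &conf, mp, priv_mp);
    }
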
-
-static int
-crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_sec_session_ipsec_lp *lp;
- struct otx2_ipsec_po_sa_ctl *ctl;
- int cipher_key_len, auth_key_len;
- struct otx2_ipsec_po_in_sa *sa;
- struct otx2_sec_session *sess;
- struct otx2_cpt_inst_s inst;
- int ret;
-
- sess = get_sec_session_private_data(sec_sess);
- sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
- lp = &sess->ipsec.lp;
-
- sa = &lp->in_sa;
- ctl = &sa->ctl;
-
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_po_in_sa));
- sa->replay_win_sz = ipsec->replay_win_sz;
-
- ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- return ret;
-
- auth_xform = crypto_xform;
- cipher_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
-
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- aes_gcm.hmac_key[0]) >> 3;
- RTE_ASSERT(lp->ctx_len == OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN);
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- memcpy(sa->aes_gcm.hmac_key, auth_key, auth_key_len);
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- aes_gcm.selector) >> 3;
- } else if (auth_xform->auth.algo ==
- RTE_CRYPTO_AUTH_SHA256_HMAC) {
- memcpy(sa->sha2.hmac_key, auth_key, auth_key_len);
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- sha2.selector) >> 3;
- }
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_SE;
- inst.cptr = rte_mempool_virt2iova(sa);
-
- lp->cpt_inst_w7 = inst.u64[7];
- lp->ucmd_opcode = (lp->ctx_len << 8) |
- (OTX2_IPSEC_PO_PROCESS_IPSEC_INB);
- lp->ucmd_param1 = 0;
-
- /* Set IKEv2 bit */
- lp->ucmd_param2 = BIT(12);
-
- set_session_misc_attributes(lp, crypto_xform,
- auth_xform, cipher_xform);
-
- if (sa->replay_win_sz) {
- if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) {
- otx2_err("Replay window size is not supported");
- return -ENOTSUP;
- }
- sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay),
- 0);
- if (sa->replay == NULL)
- return -ENOMEM;
-
- /* Set window bottom to 1, base and top to size of window */
- sa->replay->winb = 1;
- sa->replay->wint = sa->replay_win_sz;
- sa->replay->base = sa->replay_win_sz;
- sa->esn_low = 0;
- sa->esn_hi = 0;
- }
-
- return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0],
- OTX2_IPSEC_PO_WRITE_IPSEC_INB);
-}
-
-static int
-crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sess)
-{
- int ret;
-
- if (crypto_dev->data->queue_pairs[0] == NULL) {
- otx2_err("Setup cpt queue pair before creating sec session");
- return -EPERM;
- }
-
- ret = ipsec_po_xform_verify(ipsec, crypto_xform);
- if (ret)
- return ret;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
- return crypto_sec_ipsec_inb_session_create(crypto_dev, ipsec,
- crypto_xform, sess);
- else
- return crypto_sec_ipsec_outb_session_create(crypto_dev, ipsec,
- crypto_xform, sess);
-}
-
-static int
-otx2_crypto_sec_session_create(void *device,
- struct rte_security_session_conf *conf,
- struct rte_security_session *sess,
- struct rte_mempool *mempool)
-{
- struct otx2_sec_session *priv;
- int ret;
-
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
- return -ENOTSUP;
-
- if (rte_security_dynfield_register() < 0)
- return -rte_errno;
-
- if (rte_mempool_get(mempool, (void **)&priv)) {
- otx2_err("Could not allocate security session private data");
- return -ENOMEM;
- }
-
- set_sec_session_private_data(sess, priv);
-
- priv->userdata = conf->userdata;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
- ret = crypto_sec_ipsec_session_create(device, &conf->ipsec,
- conf->crypto_xform,
- sess);
- else
- ret = -ENOTSUP;
-
- if (ret)
- goto mempool_put;
-
- return 0;
-
-mempool_put:
- rte_mempool_put(mempool, priv);
- set_sec_session_private_data(sess, NULL);
- return ret;
-}
-
-static int
-otx2_crypto_sec_session_destroy(void *device __rte_unused,
- struct rte_security_session *sess)
-{
- struct otx2_sec_session *priv;
- struct rte_mempool *sess_mp;
-
- priv = get_sec_session_private_data(sess);
-
- if (priv == NULL)
- return 0;
-
- sess_mp = rte_mempool_from_obj(priv);
-
- memset(priv, 0, sizeof(*priv));
-
- set_sec_session_private_data(sess, NULL);
- rte_mempool_put(sess_mp, priv);
-
- return 0;
-}
-
-static unsigned int
-otx2_crypto_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct otx2_sec_session);
-}
-
-static int
-otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused,
- struct rte_security_session *session,
- struct rte_mbuf *m, void *params __rte_unused)
-{
- /* Set security session as the pkt metadata */
- *rte_security_dynfield(m) = (rte_security_dynfield_t)session;
-
- return 0;
-}
-
-static int
-otx2_crypto_sec_get_userdata(void *device __rte_unused, uint64_t md,
- void **userdata)
-{
- /* Retrieve userdata */
- *userdata = (void *)md;
-
- return 0;
-}
-
-static struct rte_security_ops otx2_crypto_sec_ops = {
- .session_create = otx2_crypto_sec_session_create,
- .session_destroy = otx2_crypto_sec_session_destroy,
- .session_get_size = otx2_crypto_sec_session_get_size,
- .set_pkt_metadata = otx2_crypto_sec_set_pkt_mdata,
- .get_userdata = otx2_crypto_sec_get_userdata,
- .capabilities_get = otx2_crypto_sec_capabilities_get
-};
-
-int
-otx2_crypto_sec_ctx_create(struct rte_cryptodev *cdev)
-{
- struct rte_security_ctx *ctx;
-
- ctx = rte_malloc("otx2_cpt_dev_sec_ctx",
- sizeof(struct rte_security_ctx), 0);
-
- if (ctx == NULL)
- return -ENOMEM;
-
- /* Populate ctx */
- ctx->device = cdev;
- ctx->ops = &otx2_crypto_sec_ops;
- ctx->sess_cnt = 0;
-
- cdev->security_ctx = ctx;
-
- return 0;
-}
-
-void
-otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *cdev)
-{
- rte_free(cdev->security_ctx);
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h b/drivers/crypto/octeontx2/otx2_cryptodev_sec.h
deleted file mode 100644
index ff3329c9c1..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h
+++ /dev/null
@@ -1,64 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_CRYPTODEV_SEC_H__
-#define __OTX2_CRYPTODEV_SEC_H__
-
-#include <rte_cryptodev.h>
-
-#include "otx2_ipsec_po.h"
-
-struct otx2_sec_session_ipsec_lp {
- RTE_STD_C11
- union {
- /* Inbound SA */
- struct otx2_ipsec_po_in_sa in_sa;
- /* Outbound SA */
- struct otx2_ipsec_po_out_sa out_sa;
- };
-
- uint64_t cpt_inst_w7;
- union {
- uint64_t ucmd_w0;
- struct {
- uint16_t ucmd_dlen;
- uint16_t ucmd_param2;
- uint16_t ucmd_param1;
- uint16_t ucmd_opcode;
- };
- };
-
- uint8_t partial_len;
- uint8_t roundup_len;
- uint8_t roundup_byte;
- uint16_t ip_id;
- union {
- uint64_t esn;
- struct {
- uint32_t seq_lo;
- uint32_t seq_hi;
- };
- };
-
- /** Context length in 8-byte words */
- size_t ctx_len;
- /** Auth IV offset in bytes */
- uint16_t auth_iv_offset;
- /** IV offset in bytes */
- uint16_t iv_offset;
- /** AAD length */
- uint16_t aad_length;
- /** MAC len in bytes */
- uint8_t mac_len;
- /** IV length in bytes */
- uint8_t iv_length;
- /** Auth IV length in bytes */
- uint8_t auth_iv_length;
-};
-
-int otx2_crypto_sec_ctx_create(struct rte_cryptodev *crypto_dev);
-
-void otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *crypto_dev);
-
-#endif /* __OTX2_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h b/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
deleted file mode 100644
index 089a3d073a..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
+++ /dev/null
@@ -1,227 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_ANTI_REPLAY_H__
-#define __OTX2_IPSEC_ANTI_REPLAY_H__
-
-#include <rte_mbuf.h>
-
-#include "otx2_ipsec_fp.h"
-
-#define WORD_SHIFT 6
-#define WORD_SIZE (1 << WORD_SHIFT)
-#define WORD_MASK (WORD_SIZE - 1)
-
-#define IPSEC_ANTI_REPLAY_FAILED (-1)
-
-static inline int
-anti_replay_check(struct otx2_ipsec_replay *replay, uint64_t seq,
- uint64_t winsz)
-{
- uint64_t *window = &replay->window[0];
- uint64_t ex_winsz = winsz + WORD_SIZE;
- uint64_t winwords = ex_winsz >> WORD_SHIFT;
- uint64_t base = replay->base;
- uint32_t winb = replay->winb;
- uint32_t wint = replay->wint;
- uint64_t seqword, shiftwords;
- uint64_t bit_pos;
- uint64_t shift;
- uint64_t *wptr;
- uint64_t tmp;
-
- if (winsz > 64)
- goto slow_shift;
- /* Check if the seq is the biggest one yet */
- if (likely(seq > base)) {
- shift = seq - base;
- if (shift < winsz) { /* In window */
- /*
- * If more than 64-bit anti-replay window,
- * use slow shift routine
- */
- wptr = window + (shift >> WORD_SHIFT);
- *wptr <<= shift;
- *wptr |= 1ull;
- } else {
- /* No special handling of window size > 64 */
- wptr = window + ((winsz - 1) >> WORD_SHIFT);
- /*
- * Zero out the whole window (especially for
- * bigger than 64b window) till the last 64b word
- * as the incoming sequence number minus
- * base sequence is more than the window size.
- */
- while (window != wptr)
- *window++ = 0ull;
- /*
- * Set the last bit (of the window) to 1
- * as that corresponds to the base sequence number.
- * Now any incoming sequence number which is
- * (base - window size - 1) will pass anti-replay check
- */
- *wptr = 1ull;
- }
- /*
- * Set the base to incoming sequence number as
- * that is the biggest sequence number seen yet
- */
- replay->base = seq;
- return 0;
- }
-
- bit_pos = base - seq;
-
- /* If seq falls behind the window, return failure */
- if (bit_pos >= winsz)
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* seq is within anti-replay window */
- wptr = window + ((winsz - bit_pos - 1) >> WORD_SHIFT);
- bit_pos &= WORD_MASK;
-
- /* Check if this is a replayed packet */
- if (*wptr & ((1ull) << bit_pos))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* mark as seen */
- *wptr |= ((1ull) << bit_pos);
- return 0;
-
-slow_shift:
- if (likely(seq > base)) {
- uint32_t i;
-
- shift = seq - base;
- if (unlikely(shift >= winsz)) {
- /*
- * shift is bigger than the window,
- * so just zero out everything
- */
- for (i = 0; i < winwords; i++)
- window[i] = 0;
-winupdate:
- /* Find out the word */
- seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
-
- /* Find out the bit in the word */
- bit_pos = (seq - 1) & WORD_MASK;
-
- /*
- * Set the bit corresponding to sequence number
- * in window to mark it as received
- */
- window[seqword] |= (1ull << (63 - bit_pos));
-
- /* wint and winb range from 1 to ex_winsz */
- replay->wint = ((wint + shift - 1) % ex_winsz) + 1;
- replay->winb = ((winb + shift - 1) % ex_winsz) + 1;
-
- replay->base = seq;
- return 0;
- }
-
- /*
- * New sequence number is bigger than the base but
- * it's not bigger than base + window size
- */
-
- shiftwords = ((wint + shift - 1) >> WORD_SHIFT) -
- ((wint - 1) >> WORD_SHIFT);
- if (unlikely(shiftwords)) {
- tmp = (wint + WORD_SIZE - 1) / WORD_SIZE;
- for (i = 0; i < shiftwords; i++) {
- tmp %= winwords;
- window[tmp++] = 0;
- }
- }
-
- goto winupdate;
- }
-
- /* Sequence number is before the window */
- if (unlikely((seq + winsz) <= base))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* Sequence number is within the window */
-
- /* Find out the word */
- seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
-
- /* Find out the bit in the word */
- bit_pos = (seq - 1) & WORD_MASK;
-
- /* Check if this is a replayed packet */
- if (window[seqword] & (1ull << (63 - bit_pos)))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /*
- * Set the bit corresponding to sequence number
- * in window to mark it as received
- */
- window[seqword] |= (1ull << (63 - bit_pos));
-
- return 0;
-}
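
Reduced model of the fast path above for windows that fit in a single
64-bit word: bit i of the window records whether sequence number (base - i)
has been seen::

    #include <stdint.h>

    struct ar64 {
        uint64_t window; /* bit 0 corresponds to `base` */
        uint64_t base;   /* highest sequence number seen */
    };

    static int
    ar64_check(struct ar64 *ar, uint64_t seq, uint64_t winsz)
    {
        uint64_t shift;

        if (seq > ar->base) { /* advances the right edge */
            shift = seq - ar->base;
            ar->window = (shift >= 64) ? 0 : ar->window << shift;
            ar->window |= 1ull;
            ar->base = seq;
            return 0;
        }

        shift = ar->base - seq;
        if (shift >= winsz)               /* fell behind the window */
            return -1;
        if (ar->window & (1ull << shift)) /* already seen: replay */
            return -1;
        ar->window |= 1ull << shift;      /* mark as seen */
        return 0;
    }
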
-
-static inline int
-cpt_ipsec_ip_antireplay_check(struct otx2_ipsec_fp_in_sa *sa, void *l3_ptr)
-{
- struct otx2_ipsec_fp_res_hdr *hdr = l3_ptr;
- uint64_t seq_in_sa;
- uint32_t seqh = 0;
- uint32_t seql;
- uint64_t seq;
- uint8_t esn;
- int ret;
-
- esn = sa->ctl.esn_en;
- seql = rte_be_to_cpu_32(hdr->seq_no_lo);
-
- if (!esn)
- seq = (uint64_t)seql;
- else {
- seqh = rte_be_to_cpu_32(hdr->seq_no_hi);
- seq = ((uint64_t)seqh << 32) | seql;
- }
-
- if (unlikely(seq == 0))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- rte_spinlock_lock(&sa->replay->lock);
- ret = anti_replay_check(sa->replay, seq, sa->replay_win_sz);
- if (esn && (ret == 0)) {
- seq_in_sa = ((uint64_t)rte_be_to_cpu_32(sa->esn_hi) << 32) |
- rte_be_to_cpu_32(sa->esn_low);
- if (seq > seq_in_sa) {
- sa->esn_low = rte_cpu_to_be_32(seql);
- sa->esn_hi = rte_cpu_to_be_32(seqh);
- }
- }
- rte_spinlock_unlock(&sa->replay->lock);
-
- return ret;
-}
-
-static inline uint32_t
-anti_replay_get_seqh(uint32_t winsz, uint32_t seql,
- uint32_t esn_hi, uint32_t esn_low)
-{
- uint32_t win_low = esn_low - winsz + 1;
-
- if (esn_low > winsz - 1) {
- /* Window is in one sequence number subspace */
- if (seql > win_low)
- return esn_hi;
- else
- return esn_hi + 1;
- } else {
- /* Window is split across two sequence number subspaces */
- if (seql > win_low)
- return esn_hi - 1;
- else
- return esn_hi;
- }
-}
-#endif /* __OTX2_IPSEC_ANTI_REPLAY_H__ */
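
Two worked cases for anti_replay_get_seqh() with a 64-entry window (values
are illustrative; the helper above must be in scope)::

    #include <assert.h>
    #include <stdint.h>

    static void
    esn_seqh_examples(void)
    {
        /* Window fully inside one 2^32 subspace: top at ESN 5:1000, so a
         * low word of 990 still belongs to epoch 5. */
        assert(anti_replay_get_seqh(64, 990, 5, 1000) == 5);

        /* Window straddling a wrap: top at ESN 5:10, so low words near
         * UINT32_MAX belong to the previous epoch, 4. */
        assert(anti_replay_get_seqh(64, UINT32_MAX - 2, 5, 10) == 4);
    }
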
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_fp.h b/drivers/crypto/octeontx2/otx2_ipsec_fp.h
deleted file mode 100644
index 2461e7462b..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_fp.h
+++ /dev/null
@@ -1,371 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_FP_H__
-#define __OTX2_IPSEC_FP_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_security.h>
-
-/* Macros for anti replay and ESN */
-#define OTX2_IPSEC_MAX_REPLAY_WIN_SZ 1024
-
-struct otx2_ipsec_fp_res_hdr {
- uint32_t spi;
- uint32_t seq_no_lo;
- uint32_t seq_no_hi;
- uint32_t rsvd;
-};
-
-enum {
- OTX2_IPSEC_FP_SA_DIRECTION_INBOUND = 0,
- OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_IP_VERSION_4 = 0,
- OTX2_IPSEC_FP_SA_IP_VERSION_6 = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_MODE_TRANSPORT = 0,
- OTX2_IPSEC_FP_SA_MODE_TUNNEL = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_PROTOCOL_AH = 0,
- OTX2_IPSEC_FP_SA_PROTOCOL_ESP = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_128 = 1,
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_192 = 2,
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_256 = 3,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_ENC_NULL = 0,
- OTX2_IPSEC_FP_SA_ENC_DES_CBC = 1,
- OTX2_IPSEC_FP_SA_ENC_3DES_CBC = 2,
- OTX2_IPSEC_FP_SA_ENC_AES_CBC = 3,
- OTX2_IPSEC_FP_SA_ENC_AES_CTR = 4,
- OTX2_IPSEC_FP_SA_ENC_AES_GCM = 5,
- OTX2_IPSEC_FP_SA_ENC_AES_CCM = 6,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_AUTH_NULL = 0,
- OTX2_IPSEC_FP_SA_AUTH_MD5 = 1,
- OTX2_IPSEC_FP_SA_AUTH_SHA1 = 2,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_224 = 3,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_256 = 4,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_384 = 5,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_512 = 6,
- OTX2_IPSEC_FP_SA_AUTH_AES_GMAC = 7,
- OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128 = 8,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_FRAG_POST = 0,
- OTX2_IPSEC_FP_SA_FRAG_PRE = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_ENCAP_NONE = 0,
- OTX2_IPSEC_FP_SA_ENCAP_UDP = 1,
-};
-
-struct otx2_ipsec_fp_sa_ctl {
- rte_be32_t spi : 32;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_42_40 : 3;
- uint64_t esn_en : 1;
- uint64_t rsvd_45_44 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct otx2_ipsec_fp_out_sa {
- /* w0 */
- struct otx2_ipsec_fp_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4];
- uint16_t udp_src;
- uint16_t udp_dst;
-
- /* w2 */
- uint32_t ip_src;
- uint32_t ip_dst;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-};
-
-struct otx2_ipsec_replay {
- rte_spinlock_t lock;
- uint32_t winb;
- uint32_t wint;
- uint64_t base; /**< base of the anti-replay window */
- uint64_t window[17]; /**< anti-replay window */
-};
-
-struct otx2_ipsec_fp_in_sa {
- /* w0 */
- struct otx2_ipsec_fp_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4]; /* Only for AES-GCM */
- uint32_t unused;
-
- /* w2 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-
- RTE_STD_C11
- union {
- void *userdata;
- uint64_t udata64;
- };
- union {
- struct otx2_ipsec_replay *replay;
- uint64_t replay64;
- };
- uint32_t replay_win_sz;
-
- uint32_t reserved1;
-};
-
-static inline int
-ipsec_fp_xform_cipher_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- switch (xform->cipher.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -ENOTSUP;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_auth_verify(struct rte_crypto_sym_xform *xform)
-{
- uint16_t keylen = xform->auth.key.length;
-
- if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- if (keylen >= 20 && keylen <= 64)
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_aead_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
- return -EINVAL;
-
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- switch (xform->aead.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -EINVAL;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- int ret;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- return ipsec_fp_xform_aead_verify(ipsec, xform);
-
- if (xform->next == NULL)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- /* Ingress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- /* Egress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- cipher_xform = xform;
- auth_xform = xform->next;
- }
-
- ret = ipsec_fp_xform_cipher_verify(cipher_xform);
- if (ret)
- return ret;
-
- ret = ipsec_fp_xform_auth_verify(auth_xform);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static inline int
-ipsec_fp_sa_ctl_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_ipsec_fp_sa_ctl *ctl)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
- int aes_key_len;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND;
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_INBOUND;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4;
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_6;
- else
- return -EINVAL;
- }
-
- ctl->inner_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4;
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT)
- ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TRANSPORT;
- else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TUNNEL;
- else
- return -EINVAL;
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
- ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_AH;
- else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
- ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_ESP;
- else
- return -EINVAL;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_GCM;
- aes_key_len = xform->aead.key.length;
- } else {
- return -ENOTSUP;
- }
- } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_CBC;
- aes_key_len = cipher_xform->cipher.key.length;
- } else {
- return -ENOTSUP;
- }
-
- switch (aes_key_len) {
- case 16:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_128;
- break;
- case 24:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_192;
- break;
- case 32:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) {
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_NULL:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_NULL;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_MD5;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA1;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_224;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_256;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_384;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_512;
- break;
- case RTE_CRYPTO_AUTH_AES_GMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_GMAC;
- break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128;
- break;
- default:
- return -ENOTSUP;
- }
- }
-
- if (ipsec->options.esn == 1)
- ctl->esn_en = 1;
-
- ctl->spi = rte_cpu_to_be_32(ipsec->spi);
-
- return 0;
-}
-
-#endif /* __OTX2_IPSEC_FP_H__ */
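
The verifier above accepts AUTH-then-CIPHER chaining on ingress and the reverse on egress; a minimal sketch of an ingress chain it would pass (key data omitted, variable names illustrative):

    static struct rte_crypto_sym_xform cipher_xform = {
            .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
            .cipher = {
                    .op = RTE_CRYPTO_CIPHER_OP_DECRYPT,
                    .algo = RTE_CRYPTO_CIPHER_AES_CBC,
                    .key.length = 16, /* AES-128 */
            },
    };

    static struct rte_crypto_sym_xform auth_xform = {
            .type = RTE_CRYPTO_SYM_XFORM_AUTH,
            .next = &cipher_xform, /* AUTH first on ingress */
            .auth = {
                    .op = RTE_CRYPTO_AUTH_OP_VERIFY,
                    .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
                    .key.length = 20, /* verifier accepts 20..64 */
            },
    };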
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po.h b/drivers/crypto/octeontx2/otx2_ipsec_po.h
deleted file mode 100644
index 695f552644..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_po.h
+++ /dev/null
@@ -1,447 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_PO_H__
-#define __OTX2_IPSEC_PO_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_ip.h>
-#include <rte_security.h>
-
-#define OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN 0x09
-
-#define OTX2_IPSEC_PO_WRITE_IPSEC_OUTB 0x20
-#define OTX2_IPSEC_PO_WRITE_IPSEC_INB 0x21
-#define OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB 0x23
-#define OTX2_IPSEC_PO_PROCESS_IPSEC_INB 0x24
-
-#define OTX2_IPSEC_PO_INB_RPTR_HDR 0x8
-
-enum otx2_ipsec_po_comp_e {
- OTX2_IPSEC_PO_CC_SUCCESS = 0x00,
- OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED = 0xB0,
- OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED = 0xB1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_DIRECTION_INBOUND = 0,
- OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_IP_VERSION_4 = 0,
- OTX2_IPSEC_PO_SA_IP_VERSION_6 = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_MODE_TRANSPORT = 0,
- OTX2_IPSEC_PO_SA_MODE_TUNNEL = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_PROTOCOL_AH = 0,
- OTX2_IPSEC_PO_SA_PROTOCOL_ESP = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_128 = 1,
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_192 = 2,
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_256 = 3,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_ENC_NULL = 0,
- OTX2_IPSEC_PO_SA_ENC_DES_CBC = 1,
- OTX2_IPSEC_PO_SA_ENC_3DES_CBC = 2,
- OTX2_IPSEC_PO_SA_ENC_AES_CBC = 3,
- OTX2_IPSEC_PO_SA_ENC_AES_CTR = 4,
- OTX2_IPSEC_PO_SA_ENC_AES_GCM = 5,
- OTX2_IPSEC_PO_SA_ENC_AES_CCM = 6,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_AUTH_NULL = 0,
- OTX2_IPSEC_PO_SA_AUTH_MD5 = 1,
- OTX2_IPSEC_PO_SA_AUTH_SHA1 = 2,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_224 = 3,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256 = 4,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_384 = 5,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_512 = 6,
- OTX2_IPSEC_PO_SA_AUTH_AES_GMAC = 7,
- OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128 = 8,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_FRAG_POST = 0,
- OTX2_IPSEC_PO_SA_FRAG_PRE = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_ENCAP_NONE = 0,
- OTX2_IPSEC_PO_SA_ENCAP_UDP = 1,
-};
-
-struct otx2_ipsec_po_out_hdr {
- uint32_t ip_id;
- uint32_t seq;
- uint8_t iv[16];
-};
-
-union otx2_ipsec_po_bit_perfect_iv {
- uint8_t aes_iv[16];
- uint8_t des_iv[8];
- struct {
- uint8_t nonce[4];
- uint8_t iv[8];
- uint8_t counter[4];
- } gcm;
-};
-
-struct otx2_ipsec_po_traffic_selector {
- rte_be16_t src_port[2];
- rte_be16_t dst_port[2];
- RTE_STD_C11
- union {
- struct {
- rte_be32_t src_addr[2];
- rte_be32_t dst_addr[2];
- } ipv4;
- struct {
- uint8_t src_addr[32];
- uint8_t dst_addr[32];
- } ipv6;
- };
-};
-
-struct otx2_ipsec_po_sa_ctl {
- rte_be32_t spi : 32;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_42_40 : 3;
- uint64_t esn_en : 1;
- uint64_t rsvd_45_44 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct otx2_ipsec_po_in_sa {
- /* w0 */
- struct otx2_ipsec_po_sa_ctl ctl;
-
- /* w1-w4 */
- uint8_t cipher_key[32];
-
- /* w5-w6 */
- union otx2_ipsec_po_bit_perfect_iv iv;
-
- /* w7 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w8 */
- uint8_t udp_encap[8];
-
- /* w9-w33 */
- union {
- struct {
- uint8_t hmac_key[48];
- struct otx2_ipsec_po_traffic_selector selector;
- } aes_gcm;
- struct {
- uint8_t hmac_key[64];
- uint8_t hmac_iv[64];
- struct otx2_ipsec_po_traffic_selector selector;
- } sha2;
- };
- union {
- struct otx2_ipsec_replay *replay;
- uint64_t replay64;
- };
- uint32_t replay_win_sz;
-};
-
-struct otx2_ipsec_po_ip_template {
- RTE_STD_C11
- union {
- struct {
- struct rte_ipv4_hdr ipv4_hdr;
- uint16_t udp_src;
- uint16_t udp_dst;
- } ip4;
- struct {
- struct rte_ipv6_hdr ipv6_hdr;
- uint16_t udp_src;
- uint16_t udp_dst;
- } ip6;
- };
-};
-
-struct otx2_ipsec_po_out_sa {
- /* w0 */
- struct otx2_ipsec_po_sa_ctl ctl;
-
- /* w1-w4 */
- uint8_t cipher_key[32];
-
- /* w5-w6 */
- union otx2_ipsec_po_bit_perfect_iv iv;
-
- /* w7 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w8-w55 */
- union {
- struct {
- struct otx2_ipsec_po_ip_template template;
- } aes_gcm;
- struct {
- uint8_t hmac_key[24];
- uint8_t unused[24];
- struct otx2_ipsec_po_ip_template template;
- } sha1;
- struct {
- uint8_t hmac_key[64];
- uint8_t hmac_iv[64];
- struct otx2_ipsec_po_ip_template template;
- } sha2;
- };
-};
-
-static inline int
-ipsec_po_xform_cipher_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- switch (xform->cipher.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -ENOTSUP;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_auth_verify(struct rte_crypto_sym_xform *xform)
-{
- uint16_t keylen = xform->auth.key.length;
-
- if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- if (keylen >= 20 && keylen <= 64)
- return 0;
- } else if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC) {
- if (keylen >= 32 && keylen <= 64)
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_aead_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
- return -EINVAL;
-
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- switch (xform->aead.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -EINVAL;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- int ret;
-
- if (ipsec->life.bytes_hard_limit != 0 ||
- ipsec->life.bytes_soft_limit != 0 ||
- ipsec->life.packets_hard_limit != 0 ||
- ipsec->life.packets_soft_limit != 0)
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- return ipsec_po_xform_aead_verify(ipsec, xform);
-
- if (xform->next == NULL)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- /* Ingress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- /* Egress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- cipher_xform = xform;
- auth_xform = xform->next;
- }
-
- ret = ipsec_po_xform_cipher_verify(cipher_xform);
- if (ret)
- return ret;
-
- ret = ipsec_po_xform_auth_verify(auth_xform);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static inline int
-ipsec_po_sa_ctl_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_ipsec_po_sa_ctl *ctl)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
- int aes_key_len;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND;
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_INBOUND;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_4;
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_6;
- else
- return -EINVAL;
- }
-
- ctl->inner_ip_ver = ctl->outer_ip_ver;
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT)
- ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TRANSPORT;
- else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TUNNEL;
- else
- return -EINVAL;
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
- ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_AH;
- else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
- ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_ESP;
- else
- return -EINVAL;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_GCM;
- aes_key_len = xform->aead.key.length;
- } else {
- return -ENOTSUP;
- }
- } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_CBC;
- aes_key_len = cipher_xform->cipher.key.length;
- } else {
- return -ENOTSUP;
- }
-
-
- switch (aes_key_len) {
- case 16:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_128;
- break;
- case 24:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_192;
- break;
- case 32:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) {
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_NULL:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_NULL;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_MD5;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA1;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_224;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_256;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_384;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_512;
- break;
- case RTE_CRYPTO_AUTH_AES_GMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_GMAC;
- break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128;
- break;
- default:
- return -ENOTSUP;
- }
- }
-
- if (ipsec->options.esn)
- ctl->esn_en = 1;
-
- if (ipsec->options.udp_encap == 1)
- ctl->encap_type = OTX2_IPSEC_PO_SA_ENCAP_UDP;
-
- ctl->spi = rte_cpu_to_be_32(ipsec->spi);
- ctl->valid = 1;
-
- return 0;
-}
-
-#endif /* __OTX2_IPSEC_PO_H__ */
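
A caller would normally run the verifier before filling the hardware control word; a minimal sketch of the pairing (po_sa_init() is illustrative only):

    static inline int
    po_sa_init(struct rte_security_ipsec_xform *ipsec,
               struct rte_crypto_sym_xform *xform,
               struct otx2_ipsec_po_sa_ctl *ctl)
    {
            int ret;

            /* Reject unsupported algo/mode combinations up front */
            ret = ipsec_po_xform_verify(ipsec, xform);
            if (ret)
                    return ret;

            /* Translate the xforms into the SA control word */
            return ipsec_po_sa_ctl_set(ipsec, xform, ctl);
    }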
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h b/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
deleted file mode 100644
index c3abf02187..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
+++ /dev/null
@@ -1,167 +0,0 @@
-
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_PO_OPS_H__
-#define __OTX2_IPSEC_PO_OPS_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_security.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_security.h"
-
-static __rte_always_inline int32_t
-otx2_ipsec_po_out_rlen_get(struct otx2_sec_session_ipsec_lp *sess,
- uint32_t plen)
-{
- uint32_t enc_payload_len;
-
- enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len,
- sess->roundup_byte);
-
- return sess->partial_len + enc_payload_len;
-}
-
-static __rte_always_inline struct cpt_request_info *
-alloc_request_struct(char *maddr, void *cop, int mdata_len)
-{
- struct cpt_request_info *req;
- struct cpt_meta_info *meta;
- uint8_t *resp_addr;
- uintptr_t *op;
-
- meta = (void *)RTE_PTR_ALIGN((uint8_t *)maddr, 16);
-
- op = (uintptr_t *)meta->deq_op_info;
- req = &meta->cpt_req;
- resp_addr = (uint8_t *)&meta->cpt_res;
-
- req->completion_addr = (uint64_t *)((uint8_t *)resp_addr);
- *req->completion_addr = COMPLETION_CODE_INIT;
- req->comp_baddr = rte_mem_virt2iova(resp_addr);
- req->op = op;
-
- op[0] = (uintptr_t)((uint64_t)meta | 1ull);
- op[1] = (uintptr_t)cop;
- op[2] = (uintptr_t)req;
- op[3] = mdata_len;
-
- return req;
-}
-
-static __rte_always_inline int
-process_outb_sa(struct rte_crypto_op *cop,
- struct otx2_sec_session_ipsec_lp *sess,
- struct cpt_qp_meta_info *m_info, void **prep_req)
-{
- uint32_t dlen, rlen, extend_head, extend_tail;
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- struct cpt_request_info *req = NULL;
- struct otx2_ipsec_po_out_hdr *hdr;
- struct otx2_ipsec_po_out_sa *sa;
- int hdr_len, mdata_len, ret = 0;
- vq_cmd_word0_t word0;
- char *mdata, *data;
-
- sa = &sess->out_sa;
- hdr_len = sizeof(*hdr);
-
- dlen = rte_pktmbuf_pkt_len(m_src) + hdr_len;
- rlen = otx2_ipsec_po_out_rlen_get(sess, dlen - hdr_len);
-
- extend_head = hdr_len + RTE_ETHER_HDR_LEN;
- extend_tail = rlen - dlen;
- mdata_len = m_info->lb_mlen + 8;
-
- mdata = rte_pktmbuf_append(m_src, extend_tail + mdata_len);
- if (unlikely(mdata == NULL)) {
- otx2_err("Not enough tail room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
- mdata += extend_tail; /* mdata follows encrypted data */
- req = alloc_request_struct(mdata, (void *)cop, mdata_len);
-
- data = rte_pktmbuf_prepend(m_src, extend_head);
- if (unlikely(data == NULL)) {
- otx2_err("Not enough head room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
-	/*
-	 * Move the Ethernet header to make room for otx2_ipsec_po_out_hdr
-	 * before the IP header.
-	 */
- memcpy(data, data + hdr_len, RTE_ETHER_HDR_LEN);
-
- hdr = (struct otx2_ipsec_po_out_hdr *)rte_pktmbuf_adj(m_src,
- RTE_ETHER_HDR_LEN);
-
- memcpy(&hdr->iv[0], rte_crypto_op_ctod_offset(cop, uint8_t *,
- sess->iv_offset), sess->iv_length);
-
- /* Prepare CPT instruction */
- word0.u64 = sess->ucmd_w0;
- word0.s.dlen = dlen;
-
- req->ist.ei0 = word0.u64;
- req->ist.ei1 = rte_pktmbuf_iova(m_src);
- req->ist.ei2 = req->ist.ei1;
-
- sa->esn_hi = sess->seq_hi;
-
- hdr->seq = rte_cpu_to_be_32(sess->seq_lo);
- hdr->ip_id = rte_cpu_to_be_32(sess->ip_id);
-
- sess->ip_id++;
- sess->esn++;
-
-exit:
- *prep_req = req;
-
- return ret;
-}
-
-static __rte_always_inline int
-process_inb_sa(struct rte_crypto_op *cop,
- struct otx2_sec_session_ipsec_lp *sess,
- struct cpt_qp_meta_info *m_info, void **prep_req)
-{
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- struct cpt_request_info *req = NULL;
- int mdata_len, ret = 0;
- vq_cmd_word0_t word0;
- uint32_t dlen;
- char *mdata;
-
- dlen = rte_pktmbuf_pkt_len(m_src);
- mdata_len = m_info->lb_mlen + 8;
-
- mdata = rte_pktmbuf_append(m_src, mdata_len);
- if (unlikely(mdata == NULL)) {
- otx2_err("Not enough tail room\n");
- ret = -ENOMEM;
- goto exit;
- }
-
- req = alloc_request_struct(mdata, (void *)cop, mdata_len);
-
- /* Prepare CPT instruction */
- word0.u64 = sess->ucmd_w0;
- word0.s.dlen = dlen;
-
- req->ist.ei0 = word0.u64;
- req->ist.ei1 = rte_pktmbuf_iova(m_src);
- req->ist.ei2 = req->ist.ei1;
-
-exit:
- *prep_req = req;
- return ret;
-}
-#endif /* __OTX2_IPSEC_PO_OPS_H__ */
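
For reference, a worked example of otx2_ipsec_po_out_rlen_get() above, with illustrative session values:

    /*
     * Illustrative AES-CBC tunnel SA:
     *   roundup_byte = 16  (cipher block size)
     *   roundup_len  = 2   (ESP pad-length + next-header bytes)
     *   partial_len  = 44  (outer IP + ESP header + IV + ICV)
     *
     * For a 100-byte payload:
     *   enc_payload_len = RTE_ALIGN_CEIL(100 + 2, 16) = 112
     *   rlen            = 44 + 112                    = 156
     */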
diff --git a/drivers/crypto/octeontx2/otx2_security.h b/drivers/crypto/octeontx2/otx2_security.h
deleted file mode 100644
index 29c8fc351b..0000000000
--- a/drivers/crypto/octeontx2/otx2_security.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SECURITY_H__
-#define __OTX2_SECURITY_H__
-
-#include <rte_security.h>
-
-#include "otx2_cryptodev_sec.h"
-#include "otx2_ethdev_sec.h"
-
-#define OTX2_SEC_AH_HDR_LEN 12
-#define OTX2_SEC_AES_GCM_IV_LEN 8
-#define OTX2_SEC_AES_GCM_MAC_LEN 16
-#define OTX2_SEC_AES_CBC_IV_LEN 16
-#define OTX2_SEC_SHA1_HMAC_LEN 12
-#define OTX2_SEC_SHA2_HMAC_LEN 16
-
-#define OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN 4
-#define OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN 16
-
-struct otx2_sec_session_ipsec {
- union {
- struct otx2_sec_session_ipsec_ip ip;
- struct otx2_sec_session_ipsec_lp lp;
- };
- enum rte_security_ipsec_sa_direction dir;
-};
-
-struct otx2_sec_session {
- struct otx2_sec_session_ipsec ipsec;
- void *userdata;
- /**< Userdata registered by the application */
-} __rte_cache_aligned;
-
-#endif /* __OTX2_SECURITY_H__ */
diff --git a/drivers/crypto/octeontx2/version.map b/drivers/crypto/octeontx2/version.map
deleted file mode 100644
index d36663132a..0000000000
--- a/drivers/crypto/octeontx2/version.map
+++ /dev/null
@@ -1,13 +0,0 @@
-DPDK_22 {
- local: *;
-};
-
-INTERNAL {
- global:
-
- otx2_cryptodev_driver_id;
- otx2_cpt_af_reg_read;
- otx2_cpt_af_reg_write;
-
- local: *;
-};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index b68ce6c0a4..8db9775d7b 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1127,6 +1127,16 @@ cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index 63d6b410b2..d6706b57f7 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -11,7 +11,6 @@ drivers = [
'dpaa',
'dpaa2',
'dsw',
- 'octeontx2',
'opdl',
'skeleton',
'sw',
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
deleted file mode 100644
index ce360af5f8..0000000000
--- a/drivers/event/octeontx2/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_worker.c',
- 'otx2_worker_dual.c',
- 'otx2_evdev.c',
- 'otx2_evdev_adptr.c',
- 'otx2_evdev_crypto_adptr.c',
- 'otx2_evdev_irq.c',
- 'otx2_evdev_selftest.c',
- 'otx2_tim_evdev.c',
- 'otx2_tim_worker.c',
-)
-
-deps += ['bus_pci', 'common_octeontx2', 'crypto_octeontx2', 'mempool_octeontx2', 'net_octeontx2']
-
-includes += include_directories('../../crypto/octeontx2')
-includes += include_directories('../../common/cpt')
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
deleted file mode 100644
index ccf28b678b..0000000000
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ /dev/null
@@ -1,1900 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <eventdev_pmd_pci.h>
-#include <rte_kvargs.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_pci.h>
-
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_tx.h"
-#include "otx2_evdev_stats.h"
-#include "otx2_irq.h"
-#include "otx2_tim_evdev.h"
-
-static inline int
-sso_get_msix_offsets(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int i, rc;
-
- /* Get SSO and SSOW MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- for (i = 0; i < nb_ports; i++)
- dev->ssow_msixoff[i] = msix_rsp->ssow_msixoff[i];
-
- for (i = 0; i < dev->nb_event_queues; i++)
- dev->sso_msixoff[i] = msix_rsp->sso_msixoff[i];
-
- return rc;
-}
-
-void
-sso_fastpath_fns_set(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- /* Single WS modes */
- const event_dequeue_t ssogws_deq[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t ssogws_deq_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_seg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_seg_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_seg_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_seg_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
-
- /* Dual WS modes */
- const event_dequeue_t ssogws_dual_deq[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_dual_deq_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_dual_deq_seg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_seg_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t
- ssogws_dual_deq_seg_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_seg_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- /* Tx modes */
- const event_tx_adapter_enqueue_t
- ssogws_tx_adptr_enq[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_tx_adptr_enq_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_tx_adptr_enq_seg_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_tx_adptr_enq_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_tx_adptr_enq_seg_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- event_dev->enqueue = otx2_ssogws_enq;
- event_dev->enqueue_burst = otx2_ssogws_enq_burst;
- event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst;
- event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst;
- if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
- event_dev->dequeue = ssogws_deq_seg
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_deq_seg_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue = ssogws_deq_seg_timeout
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_deq_seg_timeout_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- }
- } else {
- event_dev->dequeue = ssogws_deq
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_deq_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue = ssogws_deq_timeout
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_deq_timeout_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- }
- }
-
- if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
- /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
- event_dev->txa_enqueue = ssogws_tx_adptr_enq_seg
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- } else {
- event_dev->txa_enqueue = ssogws_tx_adptr_enq
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- }
- event_dev->ca_enqueue = otx2_ssogws_ca_enq;
-
- if (dev->dual_ws) {
- event_dev->enqueue = otx2_ssogws_dual_enq;
- event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst;
- event_dev->enqueue_new_burst =
- otx2_ssogws_dual_enq_new_burst;
- event_dev->enqueue_forward_burst =
- otx2_ssogws_dual_enq_fwd_burst;
-
- if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
- event_dev->dequeue = ssogws_dual_deq_seg
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_dual_deq_seg_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue =
- ssogws_dual_deq_seg_timeout
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_dual_deq_seg_timeout_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- }
- } else {
- event_dev->dequeue = ssogws_dual_deq
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_dual_deq_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue =
- ssogws_dual_deq_timeout
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_dual_deq_timeout_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- }
- }
-
- if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
- /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */
- event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq_seg
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- } else {
- event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- }
- event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq;
- }
-
- event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
- rte_mb();
-}
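
Condensed, the lookup above derives one index bit per Rx offload flag, so each enabled combination selects a dequeue variant specialised at compile time; a sketch of the pattern (flags abbreviated):

    /*
     * One !! term per array dimension, e.g. for the single-workslot,
     * non-segmented case:
     *
     *   dequeue = ssogws_deq[sec][tstamp][mark][vlan][csum][ptype][rss];
     *
     * where each index is !!(rx_offloads & NIX_RX_OFFLOAD_*_F), so a
     * new offload adds a table dimension rather than a fastpath branch.
     */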
-
-static void
-otx2_sso_info_get(struct rte_eventdev *event_dev,
- struct rte_event_dev_info *dev_info)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
- dev_info->driver_name = RTE_STR(EVENTDEV_NAME_OCTEONTX2_PMD);
- dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
- dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
- dev_info->max_event_queues = dev->max_event_queues;
- dev_info->max_event_queue_flows = (1ULL << 20);
- dev_info->max_event_queue_priority_levels = 8;
- dev_info->max_event_priority_levels = 1;
- dev_info->max_event_ports = dev->max_event_ports;
- dev_info->max_event_port_dequeue_depth = 1;
- dev_info->max_event_port_enqueue_depth = 1;
- dev_info->max_num_events = dev->max_num_events;
- dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
- RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
- RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
- RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
- RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE |
- RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
- RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-}
-
-static void
-sso_port_link_modify(struct otx2_ssogws *ws, uint8_t queue, uint8_t enable)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
- uint64_t val;
-
- val = queue;
- val |= 0ULL << 12; /* SET 0 */
-	val |= 0x8000800080000000; /* Don't modify the rest of the masks */
- val |= (uint64_t)enable << 14; /* Enable/Disable Membership. */
-
- otx2_write64(val, base + SSOW_LF_GWS_GRPMSK_CHG);
-}
-
-static int
-otx2_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t port_id = 0;
- uint16_t link;
-
- RTE_SET_USED(priorities);
- for (link = 0; link < nb_links; link++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], queues[link], true);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], queues[link], true);
- } else {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[link], true);
- }
- }
- sso_func_trace("Port=%d nb_links=%d", port_id, nb_links);
-
- return (int)nb_links;
-}
-
-static int
-otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t port_id = 0;
- uint16_t unlink;
-
- for (unlink = 0; unlink < nb_unlinks; unlink++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], queues[unlink],
- false);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], queues[unlink],
- false);
- } else {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[unlink], false);
- }
- }
- sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks);
-
- return (int)nb_unlinks;
-}
-
-static int
-sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type,
- uint16_t nb_lf, uint8_t attach)
-{
- if (attach) {
- struct rsrc_attach_req *req;
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- switch (type) {
- case SSO_LF_GGRP:
- req->sso = nb_lf;
- break;
- case SSO_LF_GWS:
- req->ssow = nb_lf;
- break;
- default:
- return -EINVAL;
- }
- req->modify = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- } else {
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- switch (type) {
- case SSO_LF_GGRP:
- req->sso = true;
- break;
- case SSO_LF_GWS:
- req->ssow = true;
- break;
- default:
- return -EINVAL;
- }
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- }
-
- return 0;
-}
-
-static int
-sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
- enum otx2_sso_lf_type type, uint16_t nb_lf, uint8_t alloc)
-{
- void *rsp;
- int rc;
-
- if (alloc) {
- switch (type) {
- case SSO_LF_GGRP:
- {
- struct sso_lf_alloc_req *req_ggrp;
- req_ggrp = otx2_mbox_alloc_msg_sso_lf_alloc(mbox);
- req_ggrp->hwgrps = nb_lf;
- }
- break;
- case SSO_LF_GWS:
- {
- struct ssow_lf_alloc_req *req_hws;
- req_hws = otx2_mbox_alloc_msg_ssow_lf_alloc(mbox);
- req_hws->hws = nb_lf;
- }
- break;
- default:
- return -EINVAL;
- }
- } else {
- switch (type) {
- case SSO_LF_GGRP:
- {
- struct sso_lf_free_req *req_ggrp;
- req_ggrp = otx2_mbox_alloc_msg_sso_lf_free(mbox);
- req_ggrp->hwgrps = nb_lf;
- }
- break;
- case SSO_LF_GWS:
- {
- struct ssow_lf_free_req *req_hws;
- req_hws = otx2_mbox_alloc_msg_ssow_lf_free(mbox);
- req_hws->hws = nb_lf;
- }
- break;
- default:
- return -EINVAL;
- }
- }
-
- rc = otx2_mbox_process_msg_tmo(mbox, (void **)&rsp, ~0);
- if (rc < 0)
- return rc;
-
- if (alloc && type == SSO_LF_GGRP) {
- struct sso_lf_alloc_rsp *rsp_ggrp = rsp;
-
- dev->xaq_buf_size = rsp_ggrp->xaq_buf_size;
- dev->xae_waes = rsp_ggrp->xaq_wq_entries;
- dev->iue = rsp_ggrp->in_unit_entries;
- }
-
- return 0;
-}
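
The two helpers above are always used in attach-then-init order, with a detach on failure; a condensed sketch of the pairing used by the configure paths below:

    /* Ask the AF to attach the LFs, then initialise them; on an init
     * failure the attach is rolled back (mirrors sso_configure_ports()).
     */
    rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
    if (rc == 0 && sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
            sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
            rc = -ENODEV;
    }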
-
-static void
-otx2_sso_port_release(void *port)
-{
- struct otx2_ssogws_cookie *gws_cookie = ssogws_get_cookie(port);
- struct otx2_sso_evdev *dev;
- int i;
-
- if (!gws_cookie->configured)
- goto free;
-
- dev = sso_pmd_priv(gws_cookie->event_dev);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], i, false);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], i, false);
- }
- memset(ws, 0, sizeof(*ws));
- } else {
- struct otx2_ssogws *ws = port;
-
- for (i = 0; i < dev->nb_event_queues; i++)
- sso_port_link_modify(ws, i, false);
- memset(ws, 0, sizeof(*ws));
- }
-
- memset(gws_cookie, 0, sizeof(*gws_cookie));
-
-free:
- rte_free(gws_cookie);
-}
-
-static void
-otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(queue_id);
-}
-
-static void
-sso_restore_links(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t *links_map;
- int i, j;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- links_map = event_dev->data->links_map;
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws;
-
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], j, true);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], j, true);
- sso_func_trace("Restoring port %d queue %d "
- "link", i, j);
- }
- } else {
- struct otx2_ssogws *ws;
-
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- sso_port_link_modify(ws, j, true);
- sso_func_trace("Restoring port %d queue %d "
- "link", i, j);
- }
- }
- }
-}
-
-static void
-sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
-{
- ws->tag_op = base + SSOW_LF_GWS_TAG;
- ws->wqp_op = base + SSOW_LF_GWS_WQP;
- ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK;
- ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
- ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
- ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
-}
-
-static int
-sso_configure_dual_ports(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t vws = 0;
- uint8_t nb_lf;
- int i, rc;
-
- otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
- nb_lf = dev->nb_event_ports * 2;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GWS LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- otx2_err("Failed to init SSO GWS LF");
- return -ENODEV;
- }
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- struct otx2_ssogws_cookie *gws_cookie;
- struct otx2_ssogws_dual *ws;
- uintptr_t base;
-
- if (event_dev->data->ports[i] != NULL) {
- ws = event_dev->data->ports[i];
- } else {
- /* Allocate event port memory */
- ws = rte_zmalloc_socket("otx2_sso_ws",
- sizeof(struct otx2_ssogws_dual) +
- RTE_CACHE_LINE_SIZE,
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL) {
- otx2_err("Failed to alloc memory for port=%d",
- i);
- rc = -ENOMEM;
- break;
- }
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws_dual *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
- }
-
- ws->port = i;
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
- sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[0], base);
- ws->base[0] = base;
- vws++;
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
- sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[1], base);
- ws->base[1] = base;
- vws++;
-
- gws_cookie = ssogws_get_cookie(ws);
- gws_cookie->event_dev = event_dev;
- gws_cookie->configured = 1;
-
- event_dev->data->ports[i] = ws;
- }
-
- if (rc < 0) {
- sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- }
-
- return rc;
-}
-
-static int
-sso_configure_ports(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t nb_lf;
- int i, rc;
-
- otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
- nb_lf = dev->nb_event_ports;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GWS LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- otx2_err("Failed to init SSO GWS LF");
- return -ENODEV;
- }
-
- for (i = 0; i < nb_lf; i++) {
- struct otx2_ssogws_cookie *gws_cookie;
- struct otx2_ssogws *ws;
- uintptr_t base;
-
- if (event_dev->data->ports[i] != NULL) {
- ws = event_dev->data->ports[i];
- } else {
- /* Allocate event port memory */
- ws = rte_zmalloc_socket("otx2_sso_ws",
- sizeof(struct otx2_ssogws) +
- RTE_CACHE_LINE_SIZE,
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL) {
- otx2_err("Failed to alloc memory for port=%d",
- i);
- rc = -ENOMEM;
- break;
- }
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
- }
-
- ws->port = i;
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i << 12);
- sso_set_port_ops(ws, base);
- ws->base = base;
-
- gws_cookie = ssogws_get_cookie(ws);
- gws_cookie->event_dev = event_dev;
- gws_cookie->configured = 1;
-
- event_dev->data->ports[i] = ws;
- }
-
- if (rc < 0) {
- sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- }
-
- return rc;
-}
-
-static int
-sso_configure_queues(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t nb_lf;
- int rc;
-
- otx2_sso_dbg("Configuring event queues %d", dev->nb_event_queues);
-
- nb_lf = dev->nb_event_queues;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GGRP LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GGRP, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, false);
- otx2_err("Failed to init SSO GGRP LF");
- return -ENODEV;
- }
-
- return rc;
-}
-
-static int
-sso_xaq_allocate(struct otx2_sso_evdev *dev)
-{
- const struct rte_memzone *mz;
- struct npa_aura_s *aura;
- static int reconfig_cnt;
- char pool_name[RTE_MEMZONE_NAMESIZE];
- uint32_t xaq_cnt;
- int rc;
-
- if (dev->xaq_pool)
- rte_mempool_free(dev->xaq_pool);
-
- /*
- * Allocate memory for Add work backpressure.
- */
- mz = rte_memzone_lookup(OTX2_SSO_FC_NAME);
- if (mz == NULL)
- mz = rte_memzone_reserve_aligned(OTX2_SSO_FC_NAME,
- OTX2_ALIGN +
- sizeof(struct npa_aura_s),
- rte_socket_id(),
- RTE_MEMZONE_IOVA_CONTIG,
- OTX2_ALIGN);
- if (mz == NULL) {
- otx2_err("Failed to allocate mem for fcmem");
- return -ENOMEM;
- }
-
- dev->fc_iova = mz->iova;
- dev->fc_mem = mz->addr;
- *dev->fc_mem = 0;
- aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem + OTX2_ALIGN);
- memset(aura, 0, sizeof(struct npa_aura_s));
-
- aura->fc_ena = 1;
- aura->fc_addr = dev->fc_iova;
- aura->fc_hyst_bits = 0; /* Store count on all updates */
-
- /* Taken from HRM 14.3.3(4) */
- xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
- if (dev->xae_cnt)
- xaq_cnt += dev->xae_cnt / dev->xae_waes;
- else if (dev->adptr_xae_cnt)
- xaq_cnt += (dev->adptr_xae_cnt / dev->xae_waes) +
- (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
- else
- xaq_cnt += (dev->iue / dev->xae_waes) +
- (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
-
- otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
- /* Setup XAQ based on number of nb queues. */
- snprintf(pool_name, 30, "otx2_xaq_buf_pool_%d", reconfig_cnt);
- dev->xaq_pool = (void *)rte_mempool_create_empty(pool_name,
- xaq_cnt, dev->xaq_buf_size, 0, 0,
- rte_socket_id(), 0);
-
- if (dev->xaq_pool == NULL) {
- otx2_err("Unable to create empty mempool.");
- rte_memzone_free(mz);
- return -ENOMEM;
- }
-
- rc = rte_mempool_set_ops_byname(dev->xaq_pool,
- rte_mbuf_platform_mempool_ops(), aura);
- if (rc != 0) {
- otx2_err("Unable to set xaqpool ops.");
- goto alloc_fail;
- }
-
- rc = rte_mempool_populate_default(dev->xaq_pool);
- if (rc < 0) {
- otx2_err("Unable to set populate xaqpool.");
- goto alloc_fail;
- }
- reconfig_cnt++;
- /* When SW does addwork (enqueue), it checks for space in the XAQ by
- * comparing fc_addr above against the xaq_lmt calculated below.
- * A minimum headroom of (OTX2_SSO_XAQ_SLACK / 2) per queue is kept so
- * that SSO can prefetch XAQs into its cache even before enqueue is
- * called.
- */
- dev->xaq_lmt = xaq_cnt - (OTX2_SSO_XAQ_SLACK / 2 *
- dev->nb_event_queues);
- dev->nb_xaq_cfg = xaq_cnt;
-
- return 0;
-alloc_fail:
- rte_mempool_free(dev->xaq_pool);
- rte_memzone_free(mz);
- return rc;
-}
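
For illustration only (the queue count and the xae_cnt / xae_waes ratio below are assumed values, not taken from this patch), the XAQ limit computed above works out as follows:

    /* Sketch of the xaq_cnt / xaq_lmt math in sso_xaq_allocate().
     * OTX2_SSO_XAQ_CACHE_CNT (0x7) and OTX2_SSO_XAQ_SLACK (8) come from
     * otx2_evdev.h further down in this diff; the rest is assumed.
     */
    uint32_t nb_event_queues = 2;
    uint32_t xaq_cnt = nb_event_queues * 0x7;  /* XAQ cache cnt: 14 */

    xaq_cnt += 100;                            /* assumed xae_cnt / xae_waes */
    /* Keep (OTX2_SSO_XAQ_SLACK / 2) buffers per queue as headroom so SSO
     * can cache XAQs before enqueue: xaq_lmt = 114 - (8 / 2) * 2 = 106.
     */
    uint64_t xaq_lmt = xaq_cnt - (8 / 2) * nb_event_queues;
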
-
-static int
-sso_ggrp_alloc_xaq(struct otx2_sso_evdev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_hw_setconfig *req;
-
- otx2_sso_dbg("Configuring XAQ for GGRPs");
- req = otx2_mbox_alloc_msg_sso_hw_setconfig(mbox);
- req->npa_pf_func = otx2_npa_pf_func_get();
- req->npa_aura_id = npa_lf_aura_handle_to_aura(dev->xaq_pool->pool_id);
- req->hwgrps = dev->nb_event_queues;
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-sso_ggrp_free_xaq(struct otx2_sso_evdev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_release_xaq *req;
-
- otx2_sso_dbg("Freeing XAQ for GGRPs");
- req = otx2_mbox_alloc_msg_sso_hw_release_xaq_aura(mbox);
- req->hwgrps = dev->nb_event_queues;
-
- return otx2_mbox_process(mbox);
-}
-
-static void
-sso_lf_teardown(struct otx2_sso_evdev *dev,
- enum otx2_sso_lf_type lf_type)
-{
- uint8_t nb_lf;
-
- switch (lf_type) {
- case SSO_LF_GGRP:
- nb_lf = dev->nb_event_queues;
- break;
- case SSO_LF_GWS:
- nb_lf = dev->nb_event_ports;
- nb_lf *= dev->dual_ws ? 2 : 1;
- break;
- default:
- return;
- }
-
- sso_lf_cfg(dev, dev->mbox, lf_type, nb_lf, false);
- sso_hw_lf_cfg(dev->mbox, lf_type, nb_lf, false);
-}
-
-static int
-otx2_sso_configure(const struct rte_eventdev *event_dev)
-{
- struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint32_t deq_tmo_ns;
- int rc;
-
- sso_func_trace();
- deq_tmo_ns = conf->dequeue_timeout_ns;
-
- if (deq_tmo_ns == 0)
- deq_tmo_ns = dev->min_dequeue_timeout_ns;
-
- if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
- deq_tmo_ns > dev->max_dequeue_timeout_ns) {
- otx2_err("Unsupported dequeue timeout requested");
- return -EINVAL;
- }
-
- if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
- dev->is_timeout_deq = 1;
-
- dev->deq_tmo_ns = deq_tmo_ns;
-
- if (conf->nb_event_ports > dev->max_event_ports ||
- conf->nb_event_queues > dev->max_event_queues) {
- otx2_err("Unsupported event queues/ports requested");
- return -EINVAL;
- }
-
- if (conf->nb_event_port_dequeue_depth > 1) {
- otx2_err("Unsupported event port deq depth requested");
- return -EINVAL;
- }
-
- if (conf->nb_event_port_enqueue_depth > 1) {
- otx2_err("Unsupported event port enq depth requested");
- return -EINVAL;
- }
-
- if (dev->configured)
- sso_unregister_irqs(event_dev);
-
- if (dev->nb_event_queues) {
- /* Tear down any previously configured queues. */
- sso_lf_teardown(dev, SSO_LF_GGRP);
- }
- if (dev->nb_event_ports) {
- /* Tear down any previously configured ports. */
- sso_lf_teardown(dev, SSO_LF_GWS);
- }
-
- dev->nb_event_queues = conf->nb_event_queues;
- dev->nb_event_ports = conf->nb_event_ports;
-
- if (dev->dual_ws)
- rc = sso_configure_dual_ports(event_dev);
- else
- rc = sso_configure_ports(event_dev);
-
- if (rc < 0) {
- otx2_err("Failed to configure event ports");
- return -ENODEV;
- }
-
- if (sso_configure_queues(event_dev) < 0) {
- otx2_err("Failed to configure event queues");
- rc = -ENODEV;
- goto teardown_hws;
- }
-
- if (sso_xaq_allocate(dev) < 0) {
- rc = -ENOMEM;
- goto teardown_hwggrp;
- }
-
- /* Restore any prior port-queue mapping. */
- sso_restore_links(event_dev);
- rc = sso_ggrp_alloc_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq to ggrp %d", rc);
- goto teardown_hwggrp;
- }
-
- rc = sso_get_msix_offsets(event_dev);
- if (rc < 0) {
- otx2_err("Failed to get msix offsets %d", rc);
- goto teardown_hwggrp;
- }
-
- rc = sso_register_irqs(event_dev);
- if (rc < 0) {
- otx2_err("Failed to register irq %d", rc);
- goto teardown_hwggrp;
- }
-
- dev->configured = 1;
- rte_mb();
-
- return 0;
-teardown_hwggrp:
- sso_lf_teardown(dev, SSO_LF_GGRP);
-teardown_hws:
- sso_lf_teardown(dev, SSO_LF_GWS);
- dev->nb_event_queues = 0;
- dev->nb_event_ports = 0;
- dev->configured = 0;
- return rc;
-}
-
-static void
-otx2_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
- struct rte_event_queue_conf *queue_conf)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(queue_id);
-
- queue_conf->nb_atomic_flows = (1ULL << 20);
- queue_conf->nb_atomic_order_sequences = (1ULL << 20);
- queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
- queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
-}
-
-static int
-otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
- const struct rte_event_queue_conf *queue_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_grp_priority *req;
- int rc;
-
- sso_func_trace("Queue=%d prio=%d", queue_id, queue_conf->priority);
-
- req = otx2_mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
- req->grp = queue_id;
- req->weight = 0xFF;
- req->affinity = 0xFF;
- /* Normalize <0-255> to <0-7> */
- req->priority = queue_conf->priority / 32;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to set priority queue=%d", queue_id);
- return rc;
- }
-
- return 0;
-}
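
For reference (a standalone illustration, not driver code), the divide-by-32 above collapses the eventdev <0-255> priority range onto the eight 3-bit SSO group priority levels:

    /* Mapping of rte_eventdev priorities to SSO group priority <0-7>. */
    uint8_t p;

    p = RTE_EVENT_DEV_PRIORITY_HIGHEST / 32;  /*   0 / 32 -> 0, highest */
    p = RTE_EVENT_DEV_PRIORITY_NORMAL / 32;   /* 128 / 32 -> 4          */
    p = RTE_EVENT_DEV_PRIORITY_LOWEST / 32;   /* 255 / 32 -> 7, lowest  */
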
-
-static void
-otx2_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
- struct rte_event_port_conf *port_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
- RTE_SET_USED(port_id);
- port_conf->new_event_threshold = dev->max_num_events;
- port_conf->dequeue_depth = 1;
- port_conf->enqueue_depth = 1;
-}
-
-static int
-otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
- const struct rte_event_port_conf *port_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP] = {0};
- uint64_t val;
- uint16_t q;
-
- sso_func_trace("Port=%d", port_id);
- RTE_SET_USED(port_conf);
-
- if (event_dev->data->ports[port_id] == NULL) {
- otx2_err("Invalid port Id %d", port_id);
- return -EINVAL;
- }
-
- for (q = 0; q < dev->nb_event_queues; q++) {
- grps_base[q] = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | q << 12);
- if (grps_base[q] == 0) {
- otx2_err("Failed to get grp[%d] base addr", q);
- return -EINVAL;
- }
- }
-
- /* Set get_work timeout for HWS */
- val = NSEC2USEC(dev->deq_tmo_ns) - 1;
-
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[port_id];
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- ws->tstamp = dev->tstamp;
- otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
- ws->ws_state[0].getwrk_op) + SSOW_LF_GWS_NW_TIM);
- otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
- ws->ws_state[1].getwrk_op) + SSOW_LF_GWS_NW_TIM);
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[port_id];
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- ws->tstamp = dev->tstamp;
- otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
- }
-
- otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
-
- return 0;
-}
-
-static int
-otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
- uint64_t *tmo_ticks)
-{
- RTE_SET_USED(event_dev);
- *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
-
- return 0;
-}
-
-static void
-ssogws_dump(struct otx2_ssogws *ws, FILE *f)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- fprintf(f, "SSOW_LF_GWS Base addr 0x%" PRIx64 "\n", (uint64_t)base);
- fprintf(f, "SSOW_LF_GWS_LINKS 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_LINKS));
- fprintf(f, "SSOW_LF_GWS_PENDWQP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDWQP));
- fprintf(f, "SSOW_LF_GWS_PENDSTATE 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDSTATE));
- fprintf(f, "SSOW_LF_GWS_NW_TIM 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_NW_TIM));
- fprintf(f, "SSOW_LF_GWS_TAG 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_TAG));
- fprintf(f, "SSOW_LF_GWS_WQP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_WQP));
- fprintf(f, "SSOW_LF_GWS_SWTP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_SWTP));
- fprintf(f, "SSOW_LF_GWS_PENDTAG 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDTAG));
-}
-
-static void
-ssoggrp_dump(uintptr_t base, FILE *f)
-{
- fprintf(f, "SSO_LF_GGRP Base addr 0x%" PRIx64 "\n", (uint64_t)base);
- fprintf(f, "SSO_LF_GGRP_QCTL 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_QCTL));
- fprintf(f, "SSO_LF_GGRP_XAQ_CNT 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_XAQ_CNT));
- fprintf(f, "SSO_LF_GGRP_INT_THR 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_INT_THR));
- fprintf(f, "SSO_LF_GGRP_INT_CNT 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_INT_CNT));
- fprintf(f, "SSO_LF_GGRP_AQ_CNT 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_AQ_CNT));
- fprintf(f, "SSO_LF_GGRP_AQ_THR 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_AQ_THR));
- fprintf(f, "SSO_LF_GGRP_MISC_CNT 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_MISC_CNT));
-}
-
-static void
-otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t queue;
- uint8_t port;
-
- fprintf(f, "[%s] SSO running in [%s] mode\n", __func__, dev->dual_ws ?
- "dual_ws" : "single_ws");
- /* Dump SSOW registers */
- for (port = 0; port < dev->nb_event_ports; port++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws =
- event_dev->data->ports[port];
-
- fprintf(f, "[%s] SSO dual workslot[%d] vws[%d] dump\n",
- __func__, port, 0);
- ssogws_dump((struct otx2_ssogws *)&ws->ws_state[0], f);
- fprintf(f, "[%s]SSO dual workslot[%d] vws[%d] dump\n",
- __func__, port, 1);
- ssogws_dump((struct otx2_ssogws *)&ws->ws_state[1], f);
- } else {
- fprintf(f, "[%s]SSO single workslot[%d] dump\n",
- __func__, port);
- ssogws_dump(event_dev->data->ports[port], f);
- }
- }
-
- /* Dump SSO registers */
- for (queue = 0; queue < dev->nb_event_queues; queue++) {
- fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
- }
- }
-}
-
-static void
-otx2_handle_event(void *arg, struct rte_event event)
-{
- struct rte_eventdev *event_dev = arg;
-
- if (event_dev->dev_ops->dev_stop_flush != NULL)
- event_dev->dev_ops->dev_stop_flush(event_dev->data->dev_id,
- event, event_dev->data->dev_stop_flush_arg);
-}
-
-static void
-sso_qos_cfg(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct sso_grp_qos_cfg *req;
- uint16_t i;
-
- for (i = 0; i < dev->qos_queue_cnt; i++) {
- uint8_t xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
- uint8_t iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
- uint8_t taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
-
- if (dev->qos_parse_data[i].queue >= dev->nb_event_queues)
- continue;
-
- req = otx2_mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
- req->xaq_limit = (dev->nb_xaq_cfg *
- (xaq_prcnt ? xaq_prcnt : 100)) / 100;
- req->taq_thr = (SSO_HWGRP_TAQ_MAX_THR_MASK *
- (taq_prcnt ? taq_prcnt : 100)) / 100;
- req->iaq_thr = (SSO_HWGRP_IAQ_MAX_THR_MASK *
- (iaq_prcnt ? iaq_prcnt : 100)) / 100;
- }
-
- if (dev->qos_queue_cnt)
- otx2_mbox_process(dev->mbox);
-}
-
-static void
-sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t i;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws;
-
- ws = event_dev->data->ports[i];
- ssogws_reset((struct otx2_ssogws *)&ws->ws_state[0]);
- ssogws_reset((struct otx2_ssogws *)&ws->ws_state[1]);
- ws->swtag_req = 0;
- ws->vws = 0;
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- } else {
- struct otx2_ssogws *ws;
-
- ws = event_dev->data->ports[i];
- ssogws_reset(ws);
- ws->swtag_req = 0;
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- }
- }
-
- rte_mb();
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
- struct otx2_ssogws temp_ws;
-
- memcpy(&temp_ws, &ws->ws_state[0],
- sizeof(struct otx2_ssogws_state));
- for (i = 0; i < dev->nb_event_queues; i++) {
- /* Consume all the events through HWS0 */
- ssogws_flush_events(&temp_ws, i, ws->grps_base[i],
- otx2_handle_event, event_dev);
- /* Enable/Disable SSO GGRP */
- otx2_write64(enable, ws->grps_base[i] +
- SSO_LF_GGRP_QCTL);
- }
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[0];
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- /* Consume all the events through HWS0 */
- ssogws_flush_events(ws, i, ws->grps_base[i],
- otx2_handle_event, event_dev);
- /* Enable/Disable SSO GGRP */
- otx2_write64(enable, ws->grps_base[i] +
- SSO_LF_GGRP_QCTL);
- }
- }
-
- /* reset SSO GWS cache */
- otx2_mbox_alloc_msg_sso_ws_cache_inv(dev->mbox);
- otx2_mbox_process(dev->mbox);
-}
-
-int
-sso_xae_reconfigure(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int rc = 0;
-
- if (event_dev->data->dev_started)
- sso_cleanup(event_dev, 0);
-
- rc = sso_ggrp_free_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to free XAQ\n");
- return rc;
- }
-
- rte_mempool_free(dev->xaq_pool);
- dev->xaq_pool = NULL;
- rc = sso_xaq_allocate(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq pool %d", rc);
- return rc;
- }
- rc = sso_ggrp_alloc_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq to ggrp %d", rc);
- return rc;
- }
-
- rte_mb();
- if (event_dev->data->dev_started)
- sso_cleanup(event_dev, 1);
-
- return 0;
-}
-
-static int
-otx2_sso_start(struct rte_eventdev *event_dev)
-{
- sso_func_trace();
- sso_qos_cfg(event_dev);
- sso_cleanup(event_dev, 1);
- sso_fastpath_fns_set(event_dev);
-
- return 0;
-}
-
-static void
-otx2_sso_stop(struct rte_eventdev *event_dev)
-{
- sso_func_trace();
- sso_cleanup(event_dev, 0);
- rte_mb();
-}
-
-static int
-otx2_sso_close(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- uint16_t i;
-
- if (!dev->configured)
- return 0;
-
- sso_unregister_irqs(event_dev);
-
- for (i = 0; i < dev->nb_event_queues; i++)
- all_queues[i] = i;
-
- for (i = 0; i < dev->nb_event_ports; i++)
- otx2_sso_port_unlink(event_dev, event_dev->data->ports[i],
- all_queues, dev->nb_event_queues);
-
- sso_lf_teardown(dev, SSO_LF_GGRP);
- sso_lf_teardown(dev, SSO_LF_GWS);
- dev->nb_event_ports = 0;
- dev->nb_event_queues = 0;
- rte_mempool_free(dev->xaq_pool);
- rte_memzone_free(rte_memzone_lookup(OTX2_SSO_FC_NAME));
-
- return 0;
-}
-
-/* Initialize and register event driver with DPDK Application */
-static struct eventdev_ops otx2_sso_ops = {
- .dev_infos_get = otx2_sso_info_get,
- .dev_configure = otx2_sso_configure,
- .queue_def_conf = otx2_sso_queue_def_conf,
- .queue_setup = otx2_sso_queue_setup,
- .queue_release = otx2_sso_queue_release,
- .port_def_conf = otx2_sso_port_def_conf,
- .port_setup = otx2_sso_port_setup,
- .port_release = otx2_sso_port_release,
- .port_link = otx2_sso_port_link,
- .port_unlink = otx2_sso_port_unlink,
- .timeout_ticks = otx2_sso_timeout_ticks,
-
- .eth_rx_adapter_caps_get = otx2_sso_rx_adapter_caps_get,
- .eth_rx_adapter_queue_add = otx2_sso_rx_adapter_queue_add,
- .eth_rx_adapter_queue_del = otx2_sso_rx_adapter_queue_del,
- .eth_rx_adapter_start = otx2_sso_rx_adapter_start,
- .eth_rx_adapter_stop = otx2_sso_rx_adapter_stop,
-
- .eth_tx_adapter_caps_get = otx2_sso_tx_adapter_caps_get,
- .eth_tx_adapter_queue_add = otx2_sso_tx_adapter_queue_add,
- .eth_tx_adapter_queue_del = otx2_sso_tx_adapter_queue_del,
-
- .timer_adapter_caps_get = otx2_tim_caps_get,
-
- .crypto_adapter_caps_get = otx2_ca_caps_get,
- .crypto_adapter_queue_pair_add = otx2_ca_qp_add,
- .crypto_adapter_queue_pair_del = otx2_ca_qp_del,
-
- .xstats_get = otx2_sso_xstats_get,
- .xstats_reset = otx2_sso_xstats_reset,
- .xstats_get_names = otx2_sso_xstats_get_names,
-
- .dump = otx2_sso_dump,
- .dev_start = otx2_sso_start,
- .dev_stop = otx2_sso_stop,
- .dev_close = otx2_sso_close,
- .dev_selftest = otx2_sso_selftest,
-};
-
-#define OTX2_SSO_XAE_CNT "xae_cnt"
-#define OTX2_SSO_SINGLE_WS "single_ws"
-#define OTX2_SSO_GGRP_QOS "qos"
-#define OTX2_SSO_FORCE_BP "force_rx_bp"
-
-static void
-parse_queue_param(char *value, void *opaque)
-{
- struct otx2_sso_qos queue_qos = {0};
- uint8_t *val = (uint8_t *)&queue_qos;
- struct otx2_sso_evdev *dev = opaque;
- char *tok = strtok(value, "-");
- struct otx2_sso_qos *old_ptr;
-
- if (!strlen(value))
- return;
-
- while (tok != NULL) {
- *val = atoi(tok);
- tok = strtok(NULL, "-");
- val++;
- }
-
- if (val != (&queue_qos.iaq_prcnt + 1)) {
- otx2_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
- return;
- }
-
- dev->qos_queue_cnt++;
- old_ptr = dev->qos_parse_data;
- dev->qos_parse_data = rte_realloc(dev->qos_parse_data,
- sizeof(struct otx2_sso_qos) *
- dev->qos_queue_cnt, 0);
- if (dev->qos_parse_data == NULL) {
- dev->qos_parse_data = old_ptr;
- dev->qos_queue_cnt--;
- return;
- }
- dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
-}
-
-static void
-parse_qos_list(const char *value, void *opaque)
-{
- char *s = strdup(value);
- char *start = NULL;
- char *end = NULL;
- char *f = s;
-
- while (*s) {
- if (*s == '[')
- start = s;
- else if (*s == ']')
- end = s;
-
- if (start && start < end) {
- *end = 0;
- parse_queue_param(start + 1, opaque);
- s = end;
- start = end;
- }
- s++;
- }
-
- free(f);
-}
-
-static int
-parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- /* Dict format: [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ]. '-' is used as the
- * separator because ',' isn't allowed in devargs. Everything is
- * expressed in percentages; 0 selects the default.
- */
- parse_qos_list(value, opaque);
-
- return 0;
-}
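
As a usage illustration, a hypothetical devargs group such as qos=[1-50-50-50] reaches parse_queue_param() as "1-50-50-50". The standalone sketch below (needs <string.h> and <stdlib.h>; struct otx2_sso_qos is defined in otx2_evdev.h) mirrors its '-' tokenization:

    /* Tokenize one [Qx-XAQ-TAQ-IAQ] group, field by field. */
    char buf[] = "1-50-50-50";        /* hypothetical input */
    struct otx2_sso_qos q = {0};
    uint8_t *val = (uint8_t *)&q;
    char *tok;

    for (tok = strtok(buf, "-"); tok != NULL; tok = strtok(NULL, "-"))
        *val++ = (uint8_t)atoi(tok);
    /* Result: q.queue == 1, q.xaq_prcnt == q.taq_prcnt == q.iaq_prcnt == 50 */
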
-
-static void
-sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
-{
- struct rte_kvargs *kvlist;
- uint8_t single_ws = 0;
-
- if (devargs == NULL)
- return;
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value,
- &dev->xae_cnt);
- rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag,
- &single_ws);
- rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
- dev);
- rte_kvargs_process(kvlist, OTX2_SSO_FORCE_BP, &parse_kvargs_flag,
- &dev->force_rx_bp);
- otx2_parse_common_devargs(kvlist);
- dev->dual_ws = !single_ws;
- rte_kvargs_free(kvlist);
-}
-
-static int
-otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- return rte_event_pmd_pci_probe(pci_drv, pci_dev,
- sizeof(struct otx2_sso_evdev),
- otx2_sso_init);
-}
-
-static int
-otx2_sso_remove(struct rte_pci_device *pci_dev)
-{
- return rte_event_pmd_pci_remove(pci_dev, otx2_sso_fini);
-}
-
-static const struct rte_pci_id pci_sso_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_sso = {
- .id_table = pci_sso_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
- .probe = otx2_sso_probe,
- .remove = otx2_sso_remove,
-};
-
-int
-otx2_sso_init(struct rte_eventdev *event_dev)
-{
- struct free_rsrcs_rsp *rsrc_cnt;
- struct rte_pci_device *pci_dev;
- struct otx2_sso_evdev *dev;
- int rc;
-
- event_dev->dev_ops = &otx2_sso_ops;
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- sso_fastpath_fns_set(event_dev);
- return 0;
- }
-
- dev = sso_pmd_priv(event_dev);
-
- pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
-
- /* Initialize the base otx2_dev object */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc < 0) {
- otx2_err("Failed to initialize otx2_dev rc=%d", rc);
- goto error;
- }
-
- /* Get SSO and SSOW MSIX rsrc cnt */
- otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
- rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
- if (rc < 0) {
- otx2_err("Unable to get free rsrc count");
- goto otx2_dev_uninit;
- }
- otx2_sso_dbg("SSO %d SSOW %d NPA %d provisioned", rsrc_cnt->sso,
- rsrc_cnt->ssow, rsrc_cnt->npa);
-
- dev->max_event_ports = RTE_MIN(rsrc_cnt->ssow, OTX2_SSO_MAX_VHWS);
- dev->max_event_queues = RTE_MIN(rsrc_cnt->sso, OTX2_SSO_MAX_VHGRP);
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc < 0) {
- otx2_err("Unable to init NPA lf. It might not be provisioned");
- goto otx2_dev_uninit;
- }
-
- dev->drv_inited = true;
- dev->is_timeout_deq = 0;
- dev->min_dequeue_timeout_ns = USEC2NSEC(1);
- dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
- dev->max_num_events = -1;
- dev->nb_event_queues = 0;
- dev->nb_event_ports = 0;
-
- if (!dev->max_event_ports || !dev->max_event_queues) {
- otx2_err("Not enough eventdev resource queues=%d ports=%d",
- dev->max_event_queues, dev->max_event_ports);
- rc = -ENODEV;
- goto otx2_npa_lf_uninit;
- }
-
- dev->dual_ws = 1;
- sso_parse_devargs(dev, pci_dev->device.devargs);
- if (dev->dual_ws) {
- otx2_sso_dbg("Using dual workslot mode");
- dev->max_event_ports = dev->max_event_ports / 2;
- } else {
- otx2_sso_dbg("Using single workslot mode");
- }
-
- otx2_sso_pf_func_set(dev->pf_func);
- otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
- event_dev->data->name, dev->max_event_queues,
- dev->max_event_ports);
-
- otx2_tim_init(pci_dev, (struct otx2_dev *)dev);
-
- return 0;
-
-otx2_npa_lf_uninit:
- otx2_npa_lf_fini();
-otx2_dev_uninit:
- otx2_dev_fini(pci_dev, dev);
-error:
- return rc;
-}
-
-int
-otx2_sso_fini(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct rte_pci_device *pci_dev;
-
- /* For secondary processes, nothing to be done */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
-
- if (!dev->drv_inited)
- goto dev_fini;
-
- dev->drv_inited = false;
- otx2_npa_lf_fini();
-
-dev_fini:
- if (otx2_npa_lf_active(dev)) {
- otx2_info("Common resource in use by other devices");
- return -EAGAIN;
- }
-
- otx2_tim_fini();
- otx2_dev_fini(pci_dev, dev);
-
- return 0;
-}
-
-RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso);
-RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
-RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
- OTX2_SSO_SINGLE_WS "=1"
- OTX2_SSO_GGRP_QOS "=<string>"
- OTX2_SSO_FORCE_BP "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
deleted file mode 100644
index a5d34b7df7..0000000000
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ /dev/null
@@ -1,430 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_H__
-#define __OTX2_EVDEV_H__
-
-#include <rte_eventdev.h>
-#include <eventdev_pmd.h>
-#include <rte_event_eth_rx_adapter.h>
-#include <rte_event_eth_tx_adapter.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_mempool.h"
-#include "otx2_tim_evdev.h"
-
-#define EVENTDEV_NAME_OCTEONTX2_PMD event_octeontx2
-
-#define sso_func_trace otx2_sso_dbg
-
-#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV
-#define OTX2_SSO_MAX_VHWS (UINT8_MAX)
-#define OTX2_SSO_FC_NAME "otx2_evdev_xaq_fc"
-#define OTX2_SSO_SQB_LIMIT (0x180)
-#define OTX2_SSO_XAQ_SLACK (8)
-#define OTX2_SSO_XAQ_CACHE_CNT (0x7)
-#define OTX2_SSO_WQE_SG_PTR (9)
-
-/* SSO LF register offsets (BAR2) */
-#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
-#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
-
-#define SSO_LF_GGRP_QCTL (0x20ull)
-#define SSO_LF_GGRP_EXE_DIS (0x80ull)
-#define SSO_LF_GGRP_INT (0x100ull)
-#define SSO_LF_GGRP_INT_W1S (0x108ull)
-#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
-#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
-#define SSO_LF_GGRP_INT_THR (0x140ull)
-#define SSO_LF_GGRP_INT_CNT (0x180ull)
-#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
-#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
-#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
-#define SSO_LF_GGRP_MISC_CNT (0x200ull)
-
-/* SSOW LF register offsets (BAR2) */
-#define SSOW_LF_GWS_LINKS (0x10ull)
-#define SSOW_LF_GWS_PENDWQP (0x40ull)
-#define SSOW_LF_GWS_PENDSTATE (0x50ull)
-#define SSOW_LF_GWS_NW_TIM (0x70ull)
-#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
-#define SSOW_LF_GWS_INT (0x100ull)
-#define SSOW_LF_GWS_INT_W1S (0x108ull)
-#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
-#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
-#define SSOW_LF_GWS_TAG (0x200ull)
-#define SSOW_LF_GWS_WQP (0x210ull)
-#define SSOW_LF_GWS_SWTP (0x220ull)
-#define SSOW_LF_GWS_PENDTAG (0x230ull)
-#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
-#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
-#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
-#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
-#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
-#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
-#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
-#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
-#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
-#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
-#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
-#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
-
-#define OTX2_SSOW_GET_BASE_ADDR(_GW) ((_GW) - SSOW_LF_GWS_OP_GET_WORK)
-#define OTX2_SSOW_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
-#define OTX2_SSOW_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
-
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us) * 1E3)
-#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
-#define TICK2NSEC(__tck, __freq) (((__tck) * 1E9) / (__freq))
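
These helpers are plain ratio macros; as an illustration (the 1 GHz frequency is an assumed stand-in for rte_get_timer_hz()):

    uint64_t freq = 1000000000ULL;           /* assumed timer frequency */
    uint64_t ticks = NSEC2TICK(5000, freq);  /* 5000 ns -> 5000 ticks   */
    uint64_t us = NSEC2USEC(5000);           /* 5000 ns -> 5 us         */
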
-
-enum otx2_sso_lf_type {
- SSO_LF_GGRP,
- SSO_LF_GWS
-};
-
-union otx2_sso_event {
- uint64_t get_work0;
- struct {
- uint32_t flow_id:20;
- uint32_t sub_event_type:8;
- uint32_t event_type:4;
- uint8_t op:2;
- uint8_t rsvd:4;
- uint8_t sched_type:2;
- uint8_t queue_id;
- uint8_t priority;
- uint8_t impl_opaque;
- };
-} __rte_aligned(64);
-
-enum {
- SSO_SYNC_ORDERED,
- SSO_SYNC_ATOMIC,
- SSO_SYNC_UNTAGGED,
- SSO_SYNC_EMPTY
-};
-
-struct otx2_sso_qos {
- uint8_t queue;
- uint8_t xaq_prcnt;
- uint8_t taq_prcnt;
- uint8_t iaq_prcnt;
-};
-
-struct otx2_sso_evdev {
- OTX2_DEV; /* Base class */
- uint8_t max_event_queues;
- uint8_t max_event_ports;
- uint8_t is_timeout_deq;
- uint8_t nb_event_queues;
- uint8_t nb_event_ports;
- uint8_t configured;
- uint32_t deq_tmo_ns;
- uint32_t min_dequeue_timeout_ns;
- uint32_t max_dequeue_timeout_ns;
- int32_t max_num_events;
- uint64_t *fc_mem;
- uint64_t xaq_lmt;
- uint64_t nb_xaq_cfg;
- rte_iova_t fc_iova;
- struct rte_mempool *xaq_pool;
- uint64_t rx_offloads;
- uint64_t tx_offloads;
- uint64_t adptr_xae_cnt;
- uint16_t rx_adptr_pool_cnt;
- uint64_t *rx_adptr_pools;
- uint16_t max_port_id;
- uint16_t tim_adptr_ring_cnt;
- uint16_t *timer_adptr_rings;
- uint64_t *timer_adptr_sz;
- /* Dev args */
- uint8_t dual_ws;
- uint32_t xae_cnt;
- uint8_t qos_queue_cnt;
- uint8_t force_rx_bp;
- struct otx2_sso_qos *qos_parse_data;
- /* HW const */
- uint32_t xae_waes;
- uint32_t xaq_buf_size;
- uint32_t iue;
- /* MSIX offsets */
- uint16_t sso_msixoff[OTX2_SSO_MAX_VHGRP];
- uint16_t ssow_msixoff[OTX2_SSO_MAX_VHWS];
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
-} __rte_cache_aligned;
-
-#define OTX2_SSOGWS_OPS \
- /* WS ops */ \
- uintptr_t getwrk_op; \
- uintptr_t tag_op; \
- uintptr_t wqp_op; \
- uintptr_t swtag_flush_op; \
- uintptr_t swtag_norm_op; \
- uintptr_t swtag_desched_op;
-
-/* Event port aka GWS */
-struct otx2_ssogws {
- /* Get Work Fastpath data */
- OTX2_SSOGWS_OPS;
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
- void *lookup_mem;
- uint8_t swtag_req;
- uint8_t port;
- /* Add Work Fastpath data */
- uint64_t xaq_lmt __rte_cache_aligned;
- uint64_t *fc_mem;
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
- /* Tx Fastpath data */
- uint64_t base __rte_cache_aligned;
- uint8_t tx_adptr_data[];
-} __rte_cache_aligned;
-
-struct otx2_ssogws_state {
- OTX2_SSOGWS_OPS;
-};
-
-struct otx2_ssogws_dual {
- /* Get Work Fastpath data */
- struct otx2_ssogws_state ws_state[2]; /* Ping and Pong */
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
- void *lookup_mem;
- uint8_t swtag_req;
- uint8_t vws; /* Ping pong bit */
- uint8_t port;
- /* Add Work Fastpath data */
- uint64_t xaq_lmt __rte_cache_aligned;
- uint64_t *fc_mem;
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
- /* Tx Fastpath data */
- uint64_t base[2] __rte_cache_aligned;
- uint8_t tx_adptr_data[];
-} __rte_cache_aligned;
-
-static inline struct otx2_sso_evdev *
-sso_pmd_priv(const struct rte_eventdev *event_dev)
-{
- return event_dev->data->dev_private;
-}
-
-struct otx2_ssogws_cookie {
- const struct rte_eventdev *event_dev;
- bool configured;
-};
-
-static inline struct otx2_ssogws_cookie *
-ssogws_get_cookie(void *ws)
-{
- return (struct otx2_ssogws_cookie *)
- ((uint8_t *)ws - RTE_CACHE_LINE_SIZE);
-}
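
The cookie helper relies on the allocation layout set up in sso_configure_ports(): each port is over-allocated by one cache line and the pointer published in event_dev->data->ports[] is advanced past it. A sketch of that layout (addresses illustrative):

    /*
     * alloc base                   alloc base + RTE_CACHE_LINE_SIZE
     * |                            |
     * [ struct otx2_ssogws_cookie ][ struct otx2_ssogws ... ]
     *                              ^-- stored in event_dev->data->ports[]
     *
     * ssogws_get_cookie() steps back one cache line from the published
     * pointer to recover the cookie.
     */
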
-
-static const union mbuf_initializer mbuf_init = {
- .fields = {
- .data_off = RTE_PKTMBUF_HEADROOM,
- .refcnt = 1,
- .nb_segs = 1,
- .port = 0
- }
-};
-
-static __rte_always_inline void
-otx2_wqe_to_mbuf(uint64_t get_work1, const uint64_t mbuf, uint8_t port_id,
- const uint32_t tag, const uint32_t flags,
- const void * const lookup_mem)
-{
- struct nix_wqe_hdr_s *wqe = (struct nix_wqe_hdr_s *)get_work1;
- uint64_t val = mbuf_init.value | (uint64_t)port_id << 48;
-
- if (flags & NIX_RX_OFFLOAD_TSTAMP_F)
- val |= NIX_TIMESYNC_RX_OFFSET;
-
- otx2_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag,
- (struct rte_mbuf *)mbuf, lookup_mem,
- val, flags);
-}
-
-static inline int
-parse_kvargs_flag(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- *(uint8_t *)opaque = !!atoi(value);
- return 0;
-}
-
-static inline int
-parse_kvargs_value(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- *(uint32_t *)opaque = (uint32_t)atoi(value);
- return 0;
-}
-
-#define SSO_RX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_FASTPATH_MODES
-#define SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_TX_FASTPATH_MODES
-
-/* Single WS APIs */
-uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev);
-uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-
-/* Dual WS APIs */
-uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev);
-uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-
-/* Auto-generated APIs */
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
- \
-uint16_t otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks);\
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[],\
- uint16_t nb_events); \
-uint16_t otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-uint16_t otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-uint16_t otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data,
- uint32_t event_type);
-int sso_xae_reconfigure(struct rte_eventdev *event_dev);
-void sso_fastpath_fns_set(struct rte_eventdev *event_dev);
-
-int otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- uint32_t *caps);
-int otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id,
- const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
-int otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id);
-int otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev);
-int otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev);
-int otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
- const struct rte_eth_dev *eth_dev,
- uint32_t *caps);
-int otx2_sso_tx_adapter_queue_add(uint8_t id,
- const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id);
-
-int otx2_sso_tx_adapter_queue_del(uint8_t id,
- const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id);
-
-/* Event crypto adapter APIs */
-int otx2_ca_caps_get(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, uint32_t *caps);
-
-int otx2_ca_qp_add(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, int32_t queue_pair_id,
- const struct rte_event *event);
-
-int otx2_ca_qp_del(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, int32_t queue_pair_id);
-
-/* Cleanup APIs */
-typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
-void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id,
- uintptr_t base, otx2_handle_event_t fn, void *arg);
-void ssogws_reset(struct otx2_ssogws *ws);
-/* Selftest */
-int otx2_sso_selftest(void);
-/* Init and Fini APIs */
-int otx2_sso_init(struct rte_eventdev *event_dev);
-int otx2_sso_fini(struct rte_eventdev *event_dev);
-/* IRQ handlers */
-int sso_register_irqs(const struct rte_eventdev *event_dev);
-void sso_unregister_irqs(const struct rte_eventdev *event_dev);
-
-#endif /* __OTX2_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
deleted file mode 100644
index a91f784b1e..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ /dev/null
@@ -1,656 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019-2021 Marvell.
- */
-
-#include "otx2_evdev.h"
-
-#define NIX_RQ_AURA_THRESH(x) (((x)*95) / 100)
-
-int
-otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev, uint32_t *caps)
-{
- int rc;
-
- RTE_SET_USED(event_dev);
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
- else
- *caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT |
- RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ;
-
- return 0;
-}
-
-static inline int
-sso_rxq_enable(struct otx2_eth_dev *dev, uint16_t qid, uint8_t tt, uint8_t ggrp,
- uint16_t eth_port_id)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 0;
- aq->cq.caching = 0;
-
- otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
- aq->cq_mask.caching = ~(aq->cq_mask.caching);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to disable cq context");
- goto fail;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.sso_ena = 1;
- aq->rq.sso_tt = tt;
- aq->rq.sso_grp = ggrp;
- aq->rq.ena_wqwd = 1;
- /* Mbuf header generation:
- * > FIRST_SKIP is a superset of WQE_SKIP; don't modify first skip as
- * it already encodes the mbuf size, headroom and private area.
- * > Using WQE_SKIP we can directly assign
- * mbuf = wqe - sizeof(struct mbuf);
- * so that the mbuf header will not hold unpredictable values while
- * headroom and private data start at the beginning of wqe_data.
- */
- aq->rq.wqe_skip = 1;
- aq->rq.wqe_caching = 1;
- aq->rq.spb_ena = 0;
- aq->rq.flow_tagw = 20; /* 20-bits */
-
- /* Flow Tag calculation:
- *
- * rq_tag <31:24> = good/bad_tag<8:0>;
- * rq_tag <23:0> = [ltag]
- *
- * flow_tag_mask<31:0> = (1 << flow_tagw) - 1; <31:20>
- * tag<31:0> = (~flow_tag_mask & rq_tag) | (flow_tag_mask & flow_tag);
- *
- * Setup:
- * ltag<23:0> = (eth_port_id & 0xF) << 20;
- * good/bad_tag<8:0> =
- * ((eth_port_id >> 4) & 0xF) | (RTE_EVENT_TYPE_ETHDEV << 4);
- *
- * TAG<31:0> on getwork = <31:28>(RTE_EVENT_TYPE_ETHDEV) |
- * <27:20> (eth_port_id) | <20:0> [TAG]
- */
-
- aq->rq.ltag = (eth_port_id & 0xF) << 20;
- aq->rq.good_utag = ((eth_port_id >> 4) & 0xF) |
- (RTE_EVENT_TYPE_ETHDEV << 4);
- aq->rq.bad_utag = aq->rq.good_utag;
-
- aq->rq.ena = 0; /* Don't enable RQ yet */
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
-
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
- /* mask the bits to write. */
- aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
- aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
- aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
- aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
- aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
- aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
- aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
- aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
- aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
- aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
- aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
- aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
- aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to init rx adapter context");
- goto fail;
- }
-
- return 0;
-fail:
- return rc;
-}
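
To make the tag composition above concrete, here is an illustrative computation for an assumed eth_port_id of 0x25 (RTE_EVENT_TYPE_ETHDEV, from rte_eventdev.h, is 0):

    uint16_t eth_port_id = 0x25;                   /* assumed */
    uint32_t ltag = (eth_port_id & 0xF) << 20;     /* 0x00500000 */
    uint32_t utag = ((eth_port_id >> 4) & 0xF) |
                    (RTE_EVENT_TYPE_ETHDEV << 4);  /* 0x02 */
    /* rq_tag<31:24> = utag and rq_tag<23:0> = ltag, so on getwork:
     * tag = 0x02500000 | flow bits<19:0>,
     * i.e. <31:28> = event type (0) and <27:20> = eth_port_id (0x25).
     */
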
-
-static inline int
-sso_rxq_disable(struct otx2_eth_dev *dev, uint16_t qid)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 1;
- aq->cq.caching = 1;
-
- otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
- aq->cq_mask.caching = ~(aq->cq_mask.caching);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to enable cq context");
- goto fail;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.sso_ena = 0;
- aq->rq.sso_tt = SSO_TT_UNTAGGED;
- aq->rq.sso_grp = 0;
- aq->rq.ena_wqwd = 0;
- aq->rq.wqe_caching = 0;
- aq->rq.wqe_skip = 0;
- aq->rq.spb_ena = 0;
- aq->rq.flow_tagw = 0x20;
- aq->rq.ltag = 0;
- aq->rq.good_utag = 0;
- aq->rq.bad_utag = 0;
- aq->rq.ena = 1;
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
-
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
- /* mask the bits to write. */
- aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
- aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
- aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
- aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
- aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
- aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
- aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
- aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
- aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
- aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
- aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
- aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
- aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to clear rx adapter context");
- goto fail;
- }
-
- return 0;
-fail:
- return rc;
-}
-
-void
-sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type)
-{
- int i;
-
- switch (event_type) {
- case RTE_EVENT_TYPE_ETHDEV:
- {
- struct otx2_eth_rxq *rxq = data;
- uint64_t *old_ptr;
-
- for (i = 0; i < dev->rx_adptr_pool_cnt; i++) {
- if ((uint64_t)rxq->pool == dev->rx_adptr_pools[i])
- return;
- }
-
- dev->rx_adptr_pool_cnt++;
- old_ptr = dev->rx_adptr_pools;
- dev->rx_adptr_pools = rte_realloc(dev->rx_adptr_pools,
- sizeof(uint64_t) *
- dev->rx_adptr_pool_cnt, 0);
- if (dev->rx_adptr_pools == NULL) {
- dev->adptr_xae_cnt += rxq->pool->size;
- dev->rx_adptr_pools = old_ptr;
- dev->rx_adptr_pool_cnt--;
- return;
- }
- dev->rx_adptr_pools[dev->rx_adptr_pool_cnt - 1] =
- (uint64_t)rxq->pool;
-
- dev->adptr_xae_cnt += rxq->pool->size;
- break;
- }
- case RTE_EVENT_TYPE_TIMER:
- {
- struct otx2_tim_ring *timr = data;
- uint16_t *old_ring_ptr;
- uint64_t *old_sz_ptr;
-
- for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
- if (timr->ring_id != dev->timer_adptr_rings[i])
- continue;
- if (timr->nb_timers == dev->timer_adptr_sz[i])
- return;
- dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_sz[i] = timr->nb_timers;
-
- return;
- }
-
- dev->tim_adptr_ring_cnt++;
- old_ring_ptr = dev->timer_adptr_rings;
- old_sz_ptr = dev->timer_adptr_sz;
-
- dev->timer_adptr_rings = rte_realloc(dev->timer_adptr_rings,
- sizeof(uint16_t) *
- dev->tim_adptr_ring_cnt,
- 0);
- if (dev->timer_adptr_rings == NULL) {
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_rings = old_ring_ptr;
- dev->tim_adptr_ring_cnt--;
- return;
- }
-
- dev->timer_adptr_sz = rte_realloc(dev->timer_adptr_sz,
- sizeof(uint64_t) *
- dev->tim_adptr_ring_cnt,
- 0);
-
- if (dev->timer_adptr_sz == NULL) {
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_sz = old_sz_ptr;
- dev->tim_adptr_ring_cnt--;
- return;
- }
-
- dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
- timr->ring_id;
- dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
- timr->nb_timers;
-
- dev->adptr_xae_cnt += timr->nb_timers;
- break;
- }
- default:
- break;
- }
-}
-
-static inline void
-sso_updt_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[i];
-
- ws->lookup_mem = lookup_mem;
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[i];
-
- ws->lookup_mem = lookup_mem;
- }
- }
-}
-
-static inline void
-sso_cfg_nix_mp_bpid(struct otx2_sso_evdev *dev,
- struct otx2_eth_dev *otx2_eth_dev, struct otx2_eth_rxq *rxq,
- uint8_t ena)
-{
- struct otx2_fc_info *fc = &otx2_eth_dev->fc_info;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- struct otx2_npa_lf *lf;
- struct otx2_mbox *mbox;
- uint32_t limit;
- int rc;
-
- if (otx2_dev_is_sdp(otx2_eth_dev))
- return;
-
- lf = otx2_npa_lf_obj_get();
- if (!lf)
- return;
- mbox = lf->mbox;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return;
-
- limit = rsp->aura.limit;
- /* BP is already enabled. */
- if (rsp->aura.bp_ena) {
- /* If BP ids don't match, disable BP. */
- if ((rsp->aura.nix0_bpid != fc->bpid[0]) && !dev->force_rx_bp) {
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id =
- npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-
- req->aura.bp_ena = 0;
- req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
-
- otx2_mbox_process(mbox);
- }
- return;
- }
-
- /* BP was previously enabled but is now disabled; skip. */
- if (rsp->aura.bp)
- return;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-
- if (ena) {
- req->aura.nix0_bpid = fc->bpid[0];
- req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
- req->aura.bp = NIX_RQ_AURA_THRESH(
- limit > 128 ? 256 : limit); /* 95% of size */
- req->aura_mask.bp = ~(req->aura_mask.bp);
- }
-
- req->aura.bp_ena = !!ena;
- req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
-
- otx2_mbox_process(mbox);
-}
-
-int
-otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id,
- const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t port = eth_dev->data->port_id;
- struct otx2_eth_rxq *rxq;
- int i, rc;
-
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- return -EINVAL;
-
- if (rx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
- sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
- rc = sso_xae_reconfigure(
- (struct rte_eventdev *)(uintptr_t)event_dev);
- rc |= sso_rxq_enable(otx2_eth_dev, i,
- queue_conf->ev.sched_type,
- queue_conf->ev.queue_id, port);
- }
- rxq = eth_dev->data->rx_queues[0];
- sso_updt_lookup_mem(event_dev, rxq->lookup_mem);
- } else {
- rxq = eth_dev->data->rx_queues[rx_queue_id];
- sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
- rc = sso_xae_reconfigure((struct rte_eventdev *)
- (uintptr_t)event_dev);
- rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id,
- queue_conf->ev.sched_type,
- queue_conf->ev.queue_id, port);
- sso_updt_lookup_mem(event_dev, rxq->lookup_mem);
- }
-
- if (rc < 0) {
- otx2_err("Failed to configure Rx adapter port=%d, q=%d", port,
- queue_conf->ev.queue_id);
- return rc;
- }
-
- dev->rx_offloads |= otx2_eth_dev->rx_offload_flags;
- dev->tstamp = &otx2_eth_dev->tstamp;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
-
- return 0;
-}
-
-int
-otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i, rc;
-
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- return -EINVAL;
-
- if (rx_queue_id < 0) {
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = sso_rxq_disable(otx2_eth_dev, i);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
- eth_dev->data->rx_queues[i], false);
- }
- } else {
- rc = sso_rxq_disable(otx2_eth_dev, (uint16_t)rx_queue_id);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
- eth_dev->data->rx_queues[rx_queue_id],
- false);
- }
-
- if (rc < 0)
- otx2_err("Failed to clear Rx adapter config port=%d, q=%d",
- eth_dev->data->port_id, rx_queue_id);
-
- return rc;
-}
-
-int
-otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(eth_dev);
-
- return 0;
-}
-
-int
-otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(eth_dev);
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
- const struct rte_eth_dev *eth_dev, uint32_t *caps)
-{
- int ret;
-
- RTE_SET_USED(dev);
- ret = strncmp(eth_dev->device->driver->name, "net_octeontx2,", 13);
- if (ret)
- *caps = 0;
- else
- *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
-
- return 0;
-}
-
-static int
-sso_sqb_aura_limit_edit(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *aura_req;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
-
- aura_req->aura.limit = nb_sqb_bufs;
- aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
-
- return otx2_mbox_process(npa_lf->mbox);
-}
-
-static int
-sso_add_tx_queue_data(const struct rte_eventdev *event_dev,
- uint16_t eth_port_id, uint16_t tx_queue_id,
- struct otx2_eth_txq *txq)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i;
-
- for (i = 0; i < event_dev->data->nb_ports; i++) {
- dev->max_port_id = RTE_MAX(dev->max_port_id, eth_port_id);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *old_dws;
- struct otx2_ssogws_dual *dws;
-
- old_dws = event_dev->data->ports[i];
- dws = rte_realloc_socket(ssogws_get_cookie(old_dws),
- sizeof(struct otx2_ssogws_dual)
- + RTE_CACHE_LINE_SIZE +
- (sizeof(uint64_t) *
- (dev->max_port_id + 1) *
- RTE_MAX_QUEUES_PER_PORT),
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (dws == NULL)
- return -ENOMEM;
-
- /* First cache line is reserved for cookie */
- dws = (struct otx2_ssogws_dual *)
- ((uint8_t *)dws + RTE_CACHE_LINE_SIZE);
-
- ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT]
- )&dws->tx_adptr_data)[eth_port_id][tx_queue_id] =
- (uint64_t)txq;
- event_dev->data->ports[i] = dws;
- } else {
- struct otx2_ssogws *old_ws;
- struct otx2_ssogws *ws;
-
- old_ws = event_dev->data->ports[i];
- ws = rte_realloc_socket(ssogws_get_cookie(old_ws),
- sizeof(struct otx2_ssogws) +
- RTE_CACHE_LINE_SIZE +
- (sizeof(uint64_t) *
- (dev->max_port_id + 1) *
- RTE_MAX_QUEUES_PER_PORT),
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL)
- return -ENOMEM;
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
-
- ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT]
- )&ws->tx_adptr_data)[eth_port_id][tx_queue_id] =
- (uint64_t)txq;
- event_dev->data->ports[i] = ws;
- }
- }
-
- return 0;
-}
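
The cast in sso_add_tx_queue_data() views the flexible tx_adptr_data array as a [port][queue] matrix; the standalone sketch below (array sizes assumed, RTE_MAX_QUEUES_PER_PORT from rte_config.h) shows the same indexing:

    #define MAX_PORT_ID 2  /* assumed */
    uint64_t flat[(MAX_PORT_ID + 1) * RTE_MAX_QUEUES_PER_PORT];
    uint64_t (*matrix)[RTE_MAX_QUEUES_PER_PORT] =
            (uint64_t (*)[RTE_MAX_QUEUES_PER_PORT])flat;

    /* matrix[p][q] is flat[p * RTE_MAX_QUEUES_PER_PORT + q], the same
     * cell the cast in sso_add_tx_queue_data() writes the txq into.
     */
    matrix[1][3] = (uint64_t)0xdeadbeef;
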
-
-int
-otx2_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_eth_txq *txq;
- int i, ret;
-
- RTE_SET_USED(id);
- if (tx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- sso_sqb_aura_limit_edit(txq->sqb_pool,
- OTX2_SSO_SQB_LIMIT);
- ret = sso_add_tx_queue_data(event_dev,
- eth_dev->data->port_id, i,
- txq);
- if (ret < 0)
- return ret;
- }
- } else {
- txq = eth_dev->data->tx_queues[tx_queue_id];
- sso_sqb_aura_limit_edit(txq->sqb_pool, OTX2_SSO_SQB_LIMIT);
- ret = sso_add_tx_queue_data(event_dev, eth_dev->data->port_id,
- tx_queue_id, txq);
- if (ret < 0)
- return ret;
- }
-
- dev->tx_offloads |= otx2_eth_dev->tx_offload_flags;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id)
-{
- struct otx2_eth_txq *txq;
- int i;
-
- RTE_SET_USED(id);
- RTE_SET_USED(event_dev);
- if (tx_queue_id < 0) {
- for (i = 0 ; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- sso_sqb_aura_limit_edit(txq->sqb_pool,
- txq->nb_sqb_bufs);
- }
- } else {
- txq = eth_dev->data->tx_queues[tx_queue_id];
- sso_sqb_aura_limit_edit(txq->sqb_pool, txq->nb_sqb_bufs);
- }
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
deleted file mode 100644
index d59d6c53f6..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
+++ /dev/null
@@ -1,132 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020-2021 Marvell.
- */
-
-#include <cryptodev_pmd.h>
-#include <rte_eventdev.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_qp.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_evdev.h"
-
-int
-otx2_ca_caps_get(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, uint32_t *caps)
-{
- RTE_SET_USED(dev);
- RTE_SET_USED(cdev);
-
- *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND |
- RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW |
- RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
-
- return 0;
-}
-
-static int
-otx2_ca_qp_sso_link(const struct rte_cryptodev *cdev, struct otx2_cpt_qp *qp,
- uint16_t sso_pf_func)
-{
- union otx2_cpt_af_lf_ctl2 af_lf_ctl2;
- int ret;
-
- ret = otx2_cpt_af_reg_read(cdev, OTX2_CPT_AF_LF_CTL2(qp->id),
- qp->blkaddr, &af_lf_ctl2.u);
- if (ret)
- return ret;
-
- af_lf_ctl2.s.sso_pf_func = sso_pf_func;
- ret = otx2_cpt_af_reg_write(cdev, OTX2_CPT_AF_LF_CTL2(qp->id),
- qp->blkaddr, af_lf_ctl2.u);
- return ret;
-}
-
-static void
-otx2_ca_qp_init(struct otx2_cpt_qp *qp, const struct rte_event *event)
-{
- if (event) {
- qp->qp_ev_bind = 1;
- rte_memcpy(&qp->ev, event, sizeof(struct rte_event));
- } else {
- qp->qp_ev_bind = 0;
- }
- qp->ca_enable = 1;
-}
-
-int
-otx2_ca_qp_add(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev,
- int32_t queue_pair_id, const struct rte_event *event)
-{
- struct otx2_sso_evdev *sso_evdev = sso_pmd_priv(dev);
- struct otx2_cpt_vf *vf = cdev->data->dev_private;
- uint16_t sso_pf_func = otx2_sso_pf_func_get();
- struct otx2_cpt_qp *qp;
- uint8_t qp_id;
- int ret;
-
- if (queue_pair_id == -1) {
- for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) {
- qp = cdev->data->queue_pairs[qp_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func);
- if (ret) {
- uint8_t qp_tmp;
- for (qp_tmp = 0; qp_tmp < qp_id; qp_tmp++)
- otx2_ca_qp_del(dev, cdev, qp_tmp);
- return ret;
- }
- otx2_ca_qp_init(qp, event);
- }
- } else {
- qp = cdev->data->queue_pairs[queue_pair_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func);
- if (ret)
- return ret;
- otx2_ca_qp_init(qp, event);
- }
-
- sso_evdev->rx_offloads |= NIX_RX_OFFLOAD_SECURITY_F;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)dev);
-
- /* Update crypto adapter xae count */
- if (queue_pair_id == -1)
- sso_evdev->adptr_xae_cnt +=
- vf->nb_queues * OTX2_CPT_DEFAULT_CMD_QLEN;
- else
- sso_evdev->adptr_xae_cnt += OTX2_CPT_DEFAULT_CMD_QLEN;
- sso_xae_reconfigure((struct rte_eventdev *)(uintptr_t)dev);
-
- return 0;
-}
-
-int
-otx2_ca_qp_del(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev,
- int32_t queue_pair_id)
-{
- struct otx2_cpt_vf *vf = cdev->data->dev_private;
- struct otx2_cpt_qp *qp;
- uint8_t qp_id;
- int ret;
-
- RTE_SET_USED(dev);
-
- ret = 0;
- if (queue_pair_id == -1) {
- for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) {
- qp = cdev->data->queue_pairs[qp_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, 0);
- if (ret)
- return ret;
- qp->ca_enable = 0;
- }
- } else {
- qp = cdev->data->queue_pairs[queue_pair_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, 0);
- if (ret)
- return ret;
- qp->ca_enable = 0;
- }
-
- return 0;
-}
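
The queue-pair add path above links each CPT queue pair to the SSO by rewriting CPT_AF_LF_CTL2, and unwinds the pairs already linked if a later link fails. A hedged sketch of that all-or-nothing idiom, where link() and unlink() are hypothetical stand-ins for otx2_ca_qp_sso_link() and otx2_ca_qp_del():

static int
link_all_or_none(uint8_t nb_qps, int (*link)(uint8_t qp),
		 void (*unlink)(uint8_t qp))
{
	uint8_t qp;
	int ret;

	for (qp = 0; qp < nb_qps; qp++) {
		ret = link(qp);
		if (ret) {
			/* Roll back queue pairs 0..qp-1 before erroring out */
			while (qp-- > 0)
				unlink(qp);
			return ret;
		}
	}
	return 0;
}
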
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
deleted file mode 100644
index b33cb7e139..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
+++ /dev/null
@@ -1,77 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
-
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_eventdev.h>
-
-#include "cpt_pmd_logs.h"
-#include "cpt_ucode.h"
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_ops_helper.h"
-#include "otx2_cryptodev_qp.h"
-
-static inline void
-otx2_ca_deq_post_process(const struct otx2_cpt_qp *qp,
- struct rte_crypto_op *cop, uintptr_t *rsp,
- uint8_t cc)
-{
- if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (likely(cc == NO_ERR)) {
- /* Verify authentication data if required */
- if (unlikely(rsp[2]))
- compl_auth_verify(cop, (uint8_t *)rsp[2],
- rsp[3]);
- else
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else {
- if (cc == ERR_GC_ICV_MISCOMPARE)
- cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-
- if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
- sym_session_clear(otx2_cryptodev_driver_id,
- cop->sym->session);
- memset(cop->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- cop->sym->session));
- rte_mempool_put(qp->sess_mp, cop->sym->session);
- cop->sym->session = NULL;
- }
- }
-}
-
-static inline uint64_t
-otx2_handle_crypto_event(uint64_t get_work1)
-{
- struct cpt_request_info *req;
- const struct otx2_cpt_qp *qp;
- struct rte_crypto_op *cop;
- uintptr_t *rsp;
- void *metabuf;
- uint8_t cc;
-
- req = (struct cpt_request_info *)(get_work1);
- cc = otx2_cpt_compcode_get(req);
- qp = req->qp;
-
- rsp = req->op;
- metabuf = (void *)rsp[0];
- cop = (void *)rsp[1];
-
- otx2_ca_deq_post_process(qp, cop, rsp, cc);
-
- rte_mempool_put(qp->meta_info.pool, metabuf);
-
- return (uint64_t)(cop);
-}
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */
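
The Rx post-processing above reduces to translating the CPT completion code into an rte_crypto_op status; the authentication-data verify step and the sessionless cleanup are omitted from this illustrative sketch. NO_ERR and ERR_GC_ICV_MISCOMPARE are the microcode completion codes the header pulls in via cpt_ucode.h:

static inline void
cc_to_op_status(uint8_t cc, struct rte_crypto_op *cop)
{
	if (cc == NO_ERR)
		cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
	else if (cc == ERR_GC_ICV_MISCOMPARE)
		/* Authentication tag mismatch */
		cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
	else
		cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
}
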
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
deleted file mode 100644
index 1fc56f903b..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
+++ /dev/null
@@ -1,83 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2021 Marvell International Ltd.
- */
-
-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
-
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_event_crypto_adapter.h>
-#include <rte_eventdev.h>
-
-#include <otx2_cryptodev_qp.h>
-#include <otx2_worker.h>
-
-static inline uint16_t
-otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
-{
- union rte_event_crypto_metadata *m_data;
- struct rte_crypto_op *crypto_op;
- struct rte_cryptodev *cdev;
- struct otx2_cpt_qp *qp;
- uint8_t cdev_id;
- uint16_t qp_id;
-
- crypto_op = ev->event_ptr;
- if (crypto_op == NULL)
- return 0;
-
- if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- m_data = rte_cryptodev_sym_session_get_user_data(
- crypto_op->sym->session);
- if (m_data == NULL)
- goto free_op;
-
- cdev_id = m_data->request_info.cdev_id;
- qp_id = m_data->request_info.queue_pair_id;
- } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
- crypto_op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
- ((uint8_t *)crypto_op +
- crypto_op->private_data_offset);
- cdev_id = m_data->request_info.cdev_id;
- qp_id = m_data->request_info.queue_pair_id;
- } else {
- goto free_op;
- }
-
- cdev = &rte_cryptodevs[cdev_id];
- qp = cdev->data->queue_pairs[qp_id];
-
- if (!ev->sched_type)
- otx2_ssogws_head_wait(tag_op);
- if (qp->ca_enable)
- return cdev->enqueue_burst(qp, &crypto_op, 1);
-
-free_op:
- rte_pktmbuf_free(crypto_op->sym->m_src);
- rte_crypto_op_free(crypto_op);
- rte_errno = EINVAL;
- return 0;
-}
-
-static uint16_t __rte_hot
-otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
-
- RTE_SET_USED(nb_events);
-
- return otx2_ca_enq(ws->tag_op, ev);
-}
-
-static uint16_t __rte_hot
-otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
-
- RTE_SET_USED(nb_events);
-
- return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev);
-}
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */
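
The enqueue path above routes each crypto op by reading the adapter metadata: session user data for session-based ops, or the region at private_data_offset for sessionless ops. A hedged sketch of just that lookup; crypto_op_metadata() is a hypothetical helper, while the real code does this inline and frees the op when no metadata is found:

#include <rte_cryptodev.h>
#include <rte_event_crypto_adapter.h>

static inline union rte_event_crypto_metadata *
crypto_op_metadata(struct rte_crypto_op *op)
{
	if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
		/* Metadata was stored as session user data */
		return rte_cryptodev_sym_session_get_user_data(
				op->sym->session);
	if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
	    op->private_data_offset)
		/* Metadata lives inside the op's private data area */
		return (union rte_event_crypto_metadata *)
			((uint8_t *)op + op->private_data_offset);
	return NULL;	/* Cannot route this op */
}
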
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
deleted file mode 100644
index 9b7ad27b04..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ /dev/null
@@ -1,272 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_evdev.h"
-#include "otx2_tim_evdev.h"
-
-static void
-sso_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint64_t intr;
- uint8_t ggrp;
-
- ggrp = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + SSO_LF_GGRP_INT);
- if (intr == 0)
- return;
-
- otx2_err("GGRP %d GGRP_INT=0x%" PRIx64 "", ggrp, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + SSO_LF_GGRP_INT);
-}
-
-static int
-sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, sso_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-ssow_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t gws = (base >> 12) & 0xFF;
- uint64_t intr;
-
- intr = otx2_read64(base + SSOW_LF_GWS_INT);
- if (intr == 0)
- return;
-
- otx2_err("GWS %d GWS_INT=0x%" PRIx64 "", gws, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + SSOW_LF_GWS_INT);
-}
-
-static int
-ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, ssow_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
- uint16_t ggrp_msixoff, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
- otx2_unregister_irq(handle, sso_lf_irq, (void *)base, vec);
-}
-
-static void
-ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
- uint16_t gws_msixoff, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
- otx2_unregister_irq(handle, ssow_lf_irq, (void *)base, vec);
-}
-
-int
-sso_register_irqs(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i, rc = -EINVAL;
- uint8_t nb_ports;
-
- nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid SSOLF MSIX offset[%d] vector: 0x%x",
- i, dev->sso_msixoff[i]);
- goto fail;
- }
- }
-
- for (i = 0; i < nb_ports; i++) {
- if (dev->ssow_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid SSOWLF MSIX offset[%d] vector: 0x%x",
- i, dev->ssow_msixoff[i]);
- goto fail;
- }
- }
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
- i << 12);
- rc = sso_lf_register_irq(event_dev, dev->sso_msixoff[i], base);
- }
-
- for (i = 0; i < nb_ports; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
- i << 12);
- rc = ssow_lf_register_irq(event_dev, dev->ssow_msixoff[i],
- base);
- }
-
-fail:
- return rc;
-}
-
-void
-sso_unregister_irqs(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports;
- int i;
-
- nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
- i << 12);
- sso_lf_unregister_irq(event_dev, dev->sso_msixoff[i], base);
- }
-
- for (i = 0; i < nb_ports; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
- i << 12);
- ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base);
- }
-}
-
-static void
-tim_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint64_t intr;
- uint8_t ring;
-
- ring = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + TIM_LF_NRSPERR_INT);
- otx2_err("TIM RING %d TIM_LF_NRSPERR_INT=0x%" PRIx64 "", ring, intr);
- intr = otx2_read64(base + TIM_LF_RAS_INT);
- otx2_err("TIM RING %d TIM_LF_RAS_INT=0x%" PRIx64 "", ring, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + TIM_LF_NRSPERR_INT);
- otx2_write64(intr, base + TIM_LF_RAS_INT);
-}
-
-static int
-tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
- uintptr_t base)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1S);
-
- vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
- uintptr_t base)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1C);
- otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
-
- vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1C);
- otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
-}
-
-int
-tim_register_irq(uint16_t ring_id)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- int rc = -EINVAL;
- uintptr_t base;
-
- if (dev->tim_msixoff[ring_id] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid TIMLF MSIX offset[%d] vector: 0x%x",
- ring_id, dev->tim_msixoff[ring_id]);
- goto fail;
- }
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
- rc = tim_lf_register_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
-fail:
- return rc;
-}
-
-void
-tim_unregister_irq(uint16_t ring_id)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- uintptr_t base;
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
- tim_lf_unregister_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
-}
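
All of the IRQ code above relies on the same RVU BAR2 layout: an LF's register window lives at bar2 | (block << 20) | (lf << 12), which is why the handlers can recover the LF index from bits 19:12 of the base address they were registered with. A standalone sketch of that arithmetic:

#include <stdint.h>

static inline uintptr_t
rvu_lf_base(uintptr_t bar2, unsigned int block, unsigned int lf)
{
	/* Same encoding as the register-irq loops above */
	return bar2 + ((uintptr_t)block << 20 | (uintptr_t)lf << 12);
}

static inline unsigned int
rvu_lf_from_base(uintptr_t base)
{
	/* Inverse used by sso_lf_irq(), ssow_lf_irq() and tim_lf_irq() */
	return (base >> 12) & 0xFF;
}
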
diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c
deleted file mode 100644
index 48bfaf893d..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_selftest.c
+++ /dev/null
@@ -1,1517 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_debug.h>
-#include <rte_eal.h>
-#include <rte_ethdev.h>
-#include <rte_eventdev.h>
-#include <rte_hexdump.h>
-#include <rte_launch.h>
-#include <rte_lcore.h>
-#include <rte_mbuf.h>
-#include <rte_malloc.h>
-#include <rte_memcpy.h>
-#include <rte_per_lcore.h>
-#include <rte_random.h>
-#include <rte_test.h>
-
-#include "otx2_evdev.h"
-
-#define NUM_PACKETS (1024)
-#define MAX_EVENTS (1024)
-
-#define OCTEONTX2_TEST_RUN(setup, teardown, test) \
- octeontx_test_run(setup, teardown, test, #test)
-
-static int total;
-static int passed;
-static int failed;
-static int unsupported;
-
-static int evdev;
-static struct rte_mempool *eventdev_test_mempool;
-
-struct event_attr {
- uint32_t flow_id;
- uint8_t event_type;
- uint8_t sub_event_type;
- uint8_t sched_type;
- uint8_t queue;
- uint8_t port;
-};
-
-static uint32_t seqn_list_index;
-static int seqn_list[NUM_PACKETS];
-
-static inline void
-seqn_list_init(void)
-{
- RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
- memset(seqn_list, 0, sizeof(seqn_list));
- seqn_list_index = 0;
-}
-
-static inline int
-seqn_list_update(int val)
-{
- if (seqn_list_index >= NUM_PACKETS)
- return -1;
-
- seqn_list[seqn_list_index++] = val;
- rte_smp_wmb();
- return 0;
-}
-
-static inline int
-seqn_list_check(int limit)
-{
- int i;
-
- for (i = 0; i < limit; i++) {
- if (seqn_list[i] != i) {
- otx2_err("Seqn mismatch %d %d", seqn_list[i], i);
- return -1;
- }
- }
- return 0;
-}
-
-struct test_core_param {
- rte_atomic32_t *total_events;
- uint64_t dequeue_tmo_ticks;
- uint8_t port;
- uint8_t sched_type;
-};
-
-static int
-testsuite_setup(void)
-{
- const char *eventdev_name = "event_octeontx2";
-
- evdev = rte_event_dev_get_dev_id(eventdev_name);
- if (evdev < 0) {
- otx2_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
- return -1;
- }
- return 0;
-}
-
-static void
-testsuite_teardown(void)
-{
- rte_event_dev_close(evdev);
-}
-
-static inline void
-devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
- struct rte_event_dev_info *info)
-{
- memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
- dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
- dev_conf->nb_event_ports = info->max_event_ports;
- dev_conf->nb_event_queues = info->max_event_queues;
- dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
- dev_conf->nb_event_port_dequeue_depth =
- info->max_event_port_dequeue_depth;
- dev_conf->nb_event_port_enqueue_depth =
- info->max_event_port_enqueue_depth;
- dev_conf->nb_events_limit =
- info->max_num_events;
-}
-
-enum {
- TEST_EVENTDEV_SETUP_DEFAULT,
- TEST_EVENTDEV_SETUP_PRIORITY,
- TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
-};
-
-static inline int
-_eventdev_setup(int mode)
-{
- const char *pool_name = "evdev_octeontx_test_pool";
- struct rte_event_dev_config dev_conf;
- struct rte_event_dev_info info;
- int i, ret;
-
-	/* Create and destroy pool for each test case to make it standalone */
- eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS,
- 0, 0, 512,
- rte_socket_id());
- if (!eventdev_test_mempool) {
- otx2_err("ERROR creating mempool");
- return -1;
- }
-
- ret = rte_event_dev_info_get(evdev, &info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-
- devconf_set_default_sane_values(&dev_conf, &info);
- if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
- dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
-
- ret = rte_event_dev_configure(evdev, &dev_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
-
- uint32_t queue_count;
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
- if (queue_count > 8)
- queue_count = 8;
-
- /* Configure event queues(0 to n) with
- * RTE_EVENT_DEV_PRIORITY_HIGHEST to
- * RTE_EVENT_DEV_PRIORITY_LOWEST
- */
- uint8_t step = (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) /
- queue_count;
- for (i = 0; i < (int)queue_count; i++) {
- struct rte_event_queue_conf queue_conf;
-
- ret = rte_event_queue_default_conf_get(evdev, i,
- &queue_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
- i);
- queue_conf.priority = i * step;
- ret = rte_event_queue_setup(evdev, i, &queue_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
- i);
- }
-
- } else {
- /* Configure event queues with default priority */
- for (i = 0; i < (int)queue_count; i++) {
- ret = rte_event_queue_setup(evdev, i, NULL);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
- i);
- }
- }
- /* Configure event ports */
- uint32_t port_count;
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
- "Port count get failed");
- for (i = 0; i < (int)port_count; i++) {
- ret = rte_event_port_setup(evdev, i, NULL);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
- ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
- i);
- }
-
- ret = rte_event_dev_start(evdev);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
-
- return 0;
-}
-
-static inline int
-eventdev_setup(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
-}
-
-static inline int
-eventdev_setup_priority(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
-}
-
-static inline int
-eventdev_setup_dequeue_timeout(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
-}
-
-static inline void
-eventdev_teardown(void)
-{
- rte_event_dev_stop(evdev);
- rte_mempool_free(eventdev_test_mempool);
-}
-
-static inline void
-update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
- uint32_t flow_id, uint8_t event_type,
- uint8_t sub_event_type, uint8_t sched_type,
- uint8_t queue, uint8_t port)
-{
- struct event_attr *attr;
-
- /* Store the event attributes in mbuf for future reference */
- attr = rte_pktmbuf_mtod(m, struct event_attr *);
- attr->flow_id = flow_id;
- attr->event_type = event_type;
- attr->sub_event_type = sub_event_type;
- attr->sched_type = sched_type;
- attr->queue = queue;
- attr->port = port;
-
- ev->flow_id = flow_id;
- ev->sub_event_type = sub_event_type;
- ev->event_type = event_type;
- /* Inject the new event */
- ev->op = RTE_EVENT_OP_NEW;
- ev->sched_type = sched_type;
- ev->queue_id = queue;
- ev->mbuf = m;
-}
-
-static inline int
-inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
- uint8_t sched_type, uint8_t queue, uint8_t port,
- unsigned int events)
-{
- struct rte_mbuf *m;
- unsigned int i;
-
- for (i = 0; i < events; i++) {
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
- *rte_event_pmd_selftest_seqn(m) = i;
- update_event_and_validation_attr(m, &ev, flow_id, event_type,
- sub_event_type, sched_type,
- queue, port);
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- return 0;
-}
-
-static inline int
-check_excess_events(uint8_t port)
-{
- uint16_t valid_event;
- struct rte_event ev;
- int i;
-
-	/* Check for excess events; try a few times and exit */
- for (i = 0; i < 32; i++) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-
- RTE_TEST_ASSERT_SUCCESS(valid_event,
- "Unexpected valid event=%d",
- *rte_event_pmd_selftest_seqn(ev.mbuf));
- }
- return 0;
-}
-
-static inline int
-generate_random_events(const unsigned int total_events)
-{
- struct rte_event_dev_info info;
- uint32_t queue_count;
- unsigned int i;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- ret = rte_event_dev_info_get(evdev, &info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
- for (i = 0; i < total_events; i++) {
- ret = inject_events(
- rte_rand() % info.max_event_queue_flows /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- rte_rand() % queue_count /* queue */,
- 0 /* port */,
- 1 /* events */);
- if (ret)
- return -1;
- }
- return ret;
-}
-
-static inline int
-validate_event(struct rte_event *ev)
-{
- struct event_attr *attr;
-
- attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
- RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
- "flow_id mismatch enq=%d deq =%d",
- attr->flow_id, ev->flow_id);
- RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
- "event_type mismatch enq=%d deq =%d",
- attr->event_type, ev->event_type);
- RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
- "sub_event_type mismatch enq=%d deq =%d",
- attr->sub_event_type, ev->sub_event_type);
- RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
- "sched_type mismatch enq=%d deq =%d",
- attr->sched_type, ev->sched_type);
- RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
- "queue mismatch enq=%d deq =%d",
- attr->queue, ev->queue_id);
- return 0;
-}
-
-typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
- struct rte_event *ev);
-
-static inline int
-consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
-{
- uint32_t events = 0, forward_progress_cnt = 0, index = 0;
- uint16_t valid_event;
- struct rte_event ev;
- int ret;
-
- while (1) {
- if (++forward_progress_cnt > UINT16_MAX) {
- otx2_err("Detected deadlock");
- return -1;
- }
-
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- forward_progress_cnt = 0;
- ret = validate_event(&ev);
- if (ret)
- return -1;
-
- if (fn != NULL) {
- ret = fn(index, port, &ev);
- RTE_TEST_ASSERT_SUCCESS(ret,
- "Failed to validate test specific event");
- }
-
- ++index;
-
- rte_pktmbuf_free(ev.mbuf);
- if (++events >= total_events)
- break;
- }
-
- return check_excess_events(port);
-}
-
-static int
-validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
-{
- RTE_SET_USED(port);
- RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
- "index=%d != seqn=%d",
- index, *rte_event_pmd_selftest_seqn(ev->mbuf));
- return 0;
-}
-
-static inline int
-test_simple_enqdeq(uint8_t sched_type)
-{
- int ret;
-
- ret = inject_events(0 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type */,
- sched_type,
- 0 /* queue */,
- 0 /* port */,
- MAX_EVENTS);
- if (ret)
- return -1;
-
- return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
-}
-
-static int
-test_simple_enqdeq_ordered(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_simple_enqdeq_atomic(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_simple_enqdeq_parallel(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
-}
-
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. On dequeue, verify the enqueued event attributes using a single
- * event port (port 0).
- */
-static int
-test_multi_queue_enq_single_port_deq(void)
-{
- int ret;
-
- ret = generate_random_events(MAX_EVENTS);
- if (ret)
- return -1;
-
- return consume_events(0 /* port */, MAX_EVENTS, NULL);
-}
-
-/*
- * Inject MAX_EVENTS events across queues 0..queue_count-1 using a modulus
- * operation
- *
- * For example, Inject 32 events over 0..7 queues
- * enqueue events 0, 8, 16, 24 in queue 0
- * enqueue events 1, 9, 17, 25 in queue 1
- * ..
- * ..
- * enqueue events 7, 15, 23, 31 in queue 7
- *
- * On dequeue, validate that the events come in the order
- * 0,8,16,24,1,9,17,25..,7,15,23,31 from queue 0 (highest priority) to
- * queue 7 (lowest priority)
- */
-static int
-validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
-{
- uint32_t queue_count;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count > 8)
- queue_count = 8;
- uint32_t range = MAX_EVENTS / queue_count;
- uint32_t expected_val = (index % range) * queue_count;
-
- expected_val += ev->queue_id;
- RTE_SET_USED(port);
- RTE_TEST_ASSERT_EQUAL(
- *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
- "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
- *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
- range, queue_count, MAX_EVENTS);
- return 0;
-}
-
-static int
-test_multi_queue_priority(void)
-{
- int i, max_evts_roundoff;
- /* See validate_queue_priority() comments for priority validate logic */
- uint32_t queue_count;
- struct rte_mbuf *m;
- uint8_t queue;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count > 8)
- queue_count = 8;
- max_evts_roundoff = MAX_EVENTS / queue_count;
- max_evts_roundoff *= queue_count;
-
- for (i = 0; i < max_evts_roundoff; i++) {
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
- *rte_event_pmd_selftest_seqn(m) = i;
- queue = i % queue_count;
- update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
- 0, RTE_SCHED_TYPE_PARALLEL,
- queue, 0);
- rte_event_enqueue_burst(evdev, 0, &ev, 1);
- }
-
- return consume_events(0, max_evts_roundoff, validate_queue_priority);
-}
-
-static int
-worker_multi_port_fn(void *arg)
-{
- struct test_core_param *param = arg;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
- int ret;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- ret = validate_event(&ev);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- }
-
- return 0;
-}
-
-static inline int
-wait_workers_to_join(const rte_atomic32_t *count)
-{
- uint64_t cycles, print_cycles;
-
- cycles = rte_get_timer_cycles();
- print_cycles = cycles;
- while (rte_atomic32_read(count)) {
- uint64_t new_cycles = rte_get_timer_cycles();
-
- if (new_cycles - print_cycles > rte_get_timer_hz()) {
- otx2_err("Events %d", rte_atomic32_read(count));
- print_cycles = new_cycles;
- }
-		if (new_cycles - cycles > rte_get_timer_hz() * 10) {
-			otx2_err("No schedules for 10 seconds, deadlock (%d)",
- rte_atomic32_read(count));
- rte_event_dev_dump(evdev, stdout);
- cycles = new_cycles;
- return -1;
- }
- }
- rte_eal_mp_wait_lcore();
-
- return 0;
-}
-
-static inline int
-launch_workers_and_wait(int (*main_thread)(void *),
- int (*worker_thread)(void *), uint32_t total_events,
- uint8_t nb_workers, uint8_t sched_type)
-{
- rte_atomic32_t atomic_total_events;
- struct test_core_param *param;
- uint64_t dequeue_tmo_ticks;
- uint8_t port = 0;
- int w_lcore;
- int ret;
-
- if (!nb_workers)
- return 0;
-
- rte_atomic32_set(&atomic_total_events, total_events);
- seqn_list_init();
-
- param = malloc(sizeof(struct test_core_param) * nb_workers);
- if (!param)
- return -1;
-
- ret = rte_event_dequeue_timeout_ticks(evdev,
- rte_rand() % 10000000/* 10ms */,
- &dequeue_tmo_ticks);
- if (ret) {
- free(param);
- return -1;
- }
-
- param[0].total_events = &atomic_total_events;
- param[0].sched_type = sched_type;
- param[0].port = 0;
- param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
- rte_wmb();
-
- w_lcore = rte_get_next_lcore(
- /* start core */ -1,
- /* skip main */ 1,
- /* wrap */ 0);
-	rte_eal_remote_launch(main_thread, &param[0], w_lcore);
-
- for (port = 1; port < nb_workers; port++) {
- param[port].total_events = &atomic_total_events;
- param[port].sched_type = sched_type;
- param[port].port = port;
- param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
- rte_smp_wmb();
- w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
-		rte_eal_remote_launch(worker_thread, &param[port], w_lcore);
- }
-
- rte_smp_wmb();
- ret = wait_workers_to_join(&atomic_total_events);
- free(param);
-
- return ret;
-}
-
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. Dequeue the events through multiple ports and verify the enqueued
- * event attributes
- */
-static int
-test_multi_queue_enq_multi_port_deq(void)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t nr_ports;
- int ret;
-
- ret = generate_random_events(total_events);
- if (ret)
- return -1;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d", nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- return launch_workers_and_wait(worker_multi_port_fn,
- worker_multi_port_fn, total_events,
- nr_ports, 0xff /* invalid */);
-}
-
-static void
-flush(uint8_t dev_id, struct rte_event event, void *arg)
-{
- unsigned int *count = arg;
-
- RTE_SET_USED(dev_id);
- if (event.event_type == RTE_EVENT_TYPE_CPU)
- *count = *count + 1;
-}
-
-static int
-test_dev_stop_flush(void)
-{
- unsigned int total_events = MAX_EVENTS, count = 0;
- int ret;
-
- ret = generate_random_events(total_events);
- if (ret)
- return -1;
-
- ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
- if (ret)
- return -2;
- rte_event_dev_stop(evdev);
- ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
- if (ret)
- return -3;
- RTE_TEST_ASSERT_EQUAL(total_events, count,
- "count mismatch total_events=%d count=%d",
- total_events, count);
-
- return 0;
-}
-
-static int
-validate_queue_to_port_single_link(uint32_t index, uint8_t port,
- struct rte_event *ev)
-{
- RTE_SET_USED(index);
- RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
- "queue mismatch enq=%d deq =%d",
- port, ev->queue_id);
-
- return 0;
-}
-
-/*
- * Link queue x to port x and check the correctness of the link by verifying
- * queue_id == x on dequeue from that specific port x
- */
-static int
-test_queue_to_port_single_link(void)
-{
- int i, nr_links, ret;
- uint32_t queue_count;
- uint32_t port_count;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
- "Port count get failed");
-
-	/* Unlink all connections created in eventdev_setup */
- for (i = 0; i < (int)port_count; i++) {
- ret = rte_event_port_unlink(evdev, i, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0,
- "Failed to unlink all queues port=%d", i);
- }
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- nr_links = RTE_MIN(port_count, queue_count);
- const unsigned int total_events = MAX_EVENTS / nr_links;
-
- /* Link queue x to port x and inject events to queue x through port x */
- for (i = 0; i < nr_links; i++) {
- uint8_t queue = (uint8_t)i;
-
- ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
- RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
-
- ret = inject_events(0x100 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- queue /* queue */, i /* port */,
- total_events /* events */);
- if (ret)
- return -1;
- }
-
- /* Verify the events generated from correct queue */
- for (i = 0; i < nr_links; i++) {
- ret = consume_events(i /* port */, total_events,
- validate_queue_to_port_single_link);
- if (ret)
- return -1;
- }
-
- return 0;
-}
-
-static int
-validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
- struct rte_event *ev)
-{
- RTE_SET_USED(index);
- RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
- "queue mismatch enq=%d deq =%d",
- port, ev->queue_id);
-
- return 0;
-}
-
-/*
- * Link all even-numbered queues to port 0 and all odd-numbered queues to
- * port 1, then verify the link connections on dequeue
- */
-static int
-test_queue_to_port_multi_link(void)
-{
- int ret, port0_events = 0, port1_events = 0;
- uint32_t nr_queues = 0;
- uint32_t nr_ports = 0;
- uint8_t queue, port;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues),
- "Queue count get failed");
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
-
- if (nr_ports < 2) {
- otx2_err("Not enough ports to test ports=%d", nr_ports);
- return 0;
- }
-
-	/* Unlink all connections created in eventdev_setup */
- for (port = 0; port < nr_ports; port++) {
- ret = rte_event_port_unlink(evdev, port, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
- port);
- }
-
- const unsigned int total_events = MAX_EVENTS / nr_queues;
-
-	/* Link all even-numbered queues to port 0 and odd-numbered ones to port 1 */
- for (queue = 0; queue < nr_queues; queue++) {
- port = queue & 0x1;
- ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
- RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
- queue, port);
-
- ret = inject_events(0x100 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- queue /* queue */, port /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- if (port == 0)
- port0_events += total_events;
- else
- port1_events += total_events;
- }
-
- ret = consume_events(0 /* port */, port0_events,
- validate_queue_to_port_multi_link);
- if (ret)
- return -1;
- ret = consume_events(1 /* port */, port1_events,
- validate_queue_to_port_multi_link);
- if (ret)
- return -1;
-
- return 0;
-}
-
-static int
-worker_flow_based_pipeline(void *arg)
-{
- struct test_core_param *param = arg;
- uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t new_sched_type = param->sched_type;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
- dequeue_tmo_ticks);
- if (!valid_event)
- continue;
-
- /* Events from stage 0 */
- if (ev.sub_event_type == 0) {
- /* Move to atomic flow to maintain the ordering */
- ev.flow_id = 0x2;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sub_event_type = 1; /* stage 1 */
- ev.sched_type = new_sched_type;
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
- uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
- if (seqn_list_update(seqn) == 0) {
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- otx2_err("Failed to update seqn_list");
- return -1;
- }
- } else {
- otx2_err("Invalid ev.sub_event_type = %d",
- ev.sub_event_type);
- return -1;
- }
- }
- return 0;
-}
-
-static int
-test_multiport_flow_sched_type_test(uint8_t in_sched_type,
- uint8_t out_sched_type)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d", nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
-	/* Inject events with sequence numbers 0 .. total_events - 1 */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- in_sched_type,
- 0 /* queue */,
- 0 /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- rte_mb();
- ret = launch_workers_and_wait(worker_flow_based_pipeline,
- worker_flow_based_pipeline, total_events,
- nr_ports, out_sched_type);
- if (ret)
- return -1;
-
- if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
- out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
-		/* Check whether the event order was maintained */
- return seqn_list_check(total_events);
- }
-
- return 0;
-}
-
-/* Multi port ordered to atomic transaction */
-static int
-test_multi_port_flow_ordered_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_ordered_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_ordered_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_atomic_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_atomic_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_atomic_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_parallel_to_atomic(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_parallel_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_parallel_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_group_based_pipeline(void *arg)
-{
- struct test_core_param *param = arg;
- uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t new_sched_type = param->sched_type;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
- dequeue_tmo_ticks);
- if (!valid_event)
- continue;
-
- /* Events from stage 0(group 0) */
- if (ev.queue_id == 0) {
- /* Move to atomic flow to maintain the ordering */
- ev.flow_id = 0x2;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sched_type = new_sched_type;
- ev.queue_id = 1; /* Stage 1*/
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- } else if (ev.queue_id == 1) { /* Events from stage 1(group 1)*/
- uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
- if (seqn_list_update(seqn) == 0) {
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- otx2_err("Failed to update seqn_list");
- return -1;
- }
- } else {
- otx2_err("Invalid ev.queue_id = %d", ev.queue_id);
- return -1;
- }
- }
-
- return 0;
-}
-
-static int
-test_multiport_queue_sched_type_test(uint8_t in_sched_type,
- uint8_t out_sched_type)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t queue_count;
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
-
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count < 2 || !nr_ports) {
- otx2_err("Not enough queues=%d ports=%d or workers=%d",
- queue_count, nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
-	/* Inject events with sequence numbers 0 .. total_events - 1 */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- in_sched_type,
- 0 /* queue */,
- 0 /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- ret = launch_workers_and_wait(worker_group_based_pipeline,
- worker_group_based_pipeline, total_events,
- nr_ports, out_sched_type);
- if (ret)
- return -1;
-
- if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
- out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
-		/* Check whether the event order was maintained */
- return seqn_list_check(total_events);
- }
-
- return 0;
-}
-
-static int
-test_multi_port_queue_ordered_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_ordered_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_ordered_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_atomic_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_atomic_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_atomic_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_parallel_to_atomic(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_parallel_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_parallel_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.sub_event_type == 255) { /* last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sub_event_type++;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-static int
-launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
-{
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d",
- nr_ports, rte_lcore_count() - 1);
- return 0;
- }
-
-	/* Inject events with sequence numbers 0 .. MAX_EVENTS - 1 */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- rte_rand() %
- (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
- 0 /* queue */,
- 0 /* port */,
- MAX_EVENTS /* events */);
- if (ret)
- return -1;
-
- return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
- 0xff /* invalid */);
-}
-
-/* Flow based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_flow_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_flow_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- uint32_t queue_count;
- uint16_t valid_event;
- struct rte_event ev;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- uint8_t nr_queues = queue_count;
- rte_atomic32_t *total_events = param->total_events;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.queue_id == nr_queues - 1) { /* last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.queue_id++;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-/* Queue based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_queue_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_queue_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- uint32_t queue_count;
- uint16_t valid_event;
- struct rte_event ev;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- uint8_t nr_queues = queue_count;
- rte_atomic32_t *total_events = param->total_events;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.queue_id == nr_queues - 1) { /* Last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.queue_id++;
- ev.sub_event_type = rte_rand() % 256;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-/* Queue and flow based pipeline with maximum stages with random sched type */
-static int
-test_multi_port_mixed_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_mixed_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_ordered_flow_producer(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- struct rte_mbuf *m;
- int counter = 0;
-
- while (counter < NUM_PACKETS) {
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- if (m == NULL)
- continue;
-
- *rte_event_pmd_selftest_seqn(m) = counter++;
-
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- ev.flow_id = 0x1; /* Generate a fat flow */
- ev.sub_event_type = 0;
- /* Inject the new event */
- ev.op = RTE_EVENT_OP_NEW;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sched_type = RTE_SCHED_TYPE_ORDERED;
- ev.queue_id = 0;
- ev.mbuf = m;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
-
- return 0;
-}
-
-static inline int
-test_producer_consumer_ingress_order_test(int (*fn)(void *))
-{
- uint32_t nr_ports;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (rte_lcore_count() < 3 || nr_ports < 2) {
- otx2_err("### Not enough cores for test.");
- return 0;
- }
-
- launch_workers_and_wait(worker_ordered_flow_producer, fn,
- NUM_PACKETS, nr_ports, RTE_SCHED_TYPE_ATOMIC);
-	/* Check whether the event order was maintained */
- return seqn_list_check(NUM_PACKETS);
-}
-
-/* Flow based producer consumer ingress order test */
-static int
-test_flow_producer_consumer_ingress_order_test(void)
-{
- return test_producer_consumer_ingress_order_test(
- worker_flow_based_pipeline);
-}
-
-/* Queue based producer consumer ingress order test */
-static int
-test_queue_producer_consumer_ingress_order_test(void)
-{
- return test_producer_consumer_ingress_order_test(
- worker_group_based_pipeline);
-}
-
-static void
-octeontx_test_run(int (*setup)(void), void (*tdown)(void),
-		  int (*test)(void), const char *name)
-{
- if (setup() < 0) {
-		printf("Error setting up test %s\n", name);
- unsupported++;
- } else {
- if (test() < 0) {
- failed++;
- printf("+ TestCase [%2d] : %s failed\n", total, name);
- } else {
- passed++;
- printf("+ TestCase [%2d] : %s succeeded\n", total,
- name);
- }
- }
-
- total++;
- tdown();
-}
-
-int
-otx2_sso_selftest(void)
-{
- testsuite_setup();
-
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_queue_enq_single_port_deq);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_dev_stop_flush);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_queue_enq_multi_port_deq);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_to_port_single_link);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_to_port_multi_link);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_mixed_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_flow_producer_consumer_ingress_order_test);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_producer_consumer_ingress_order_test);
- OCTEONTX2_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
- test_multi_queue_priority);
- OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
- test_multi_port_flow_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
- test_multi_port_queue_ordered_to_atomic);
- printf("Total tests : %d\n", total);
- printf("Passed : %d\n", passed);
- printf("Failed : %d\n", failed);
- printf("Not supported : %d\n", unsupported);
-
- testsuite_teardown();
-
- if (failed)
- return -1;
-
- return 0;
-}
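
The harness above leans on preprocessor stringizing: OCTEONTX2_TEST_RUN hands octeontx_test_run() both the test function and its name (via #test), so per-case results can be printed without duplicating names by hand. A minimal, self-contained sketch of the same idiom; RUN_CASE and demo_case are hypothetical names:

#include <stdio.h>

static void
run_case(int (*test)(void), const char *name)
{
	/* Same pass/fail reporting shape as octeontx_test_run() */
	printf("+ TestCase : %s %s\n", name,
	       test() < 0 ? "failed" : "succeeded");
}

/* #test turns the function identifier into its printable name */
#define RUN_CASE(test) run_case(test, #test)

static int demo_case(void) { return 0; }

int main(void)
{
	RUN_CASE(demo_case);	/* prints "+ TestCase : demo_case succeeded" */
	return 0;
}
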
diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h
deleted file mode 100644
index 74fcec8a07..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_stats.h
+++ /dev/null
@@ -1,286 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_STATS_H__
-#define __OTX2_EVDEV_STATS_H__
-
-#include "otx2_evdev.h"
-
-struct otx2_sso_xstats_name {
- const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
- const size_t offset;
- const uint64_t mask;
- const uint8_t shift;
- uint64_t reset_snap[OTX2_SSO_MAX_VHGRP];
-};
-
-static struct otx2_sso_xstats_name sso_hws_xstats[] = {
- {"last_grp_serviced", offsetof(struct sso_hws_stats, arbitration),
- 0x3FF, 0, {0} },
- {"affinity_arbitration_credits",
- offsetof(struct sso_hws_stats, arbitration),
- 0xF, 16, {0} },
-};
-
-static struct otx2_sso_xstats_name sso_grp_xstats[] = {
- {"wrk_sched", offsetof(struct sso_grp_stats, ws_pc), ~0x0, 0,
- {0} },
- {"xaq_dram", offsetof(struct sso_grp_stats, ext_pc), ~0x0,
- 0, {0} },
- {"add_wrk", offsetof(struct sso_grp_stats, wa_pc), ~0x0, 0,
- {0} },
- {"tag_switch_req", offsetof(struct sso_grp_stats, ts_pc), ~0x0, 0,
- {0} },
- {"desched_req", offsetof(struct sso_grp_stats, ds_pc), ~0x0, 0,
- {0} },
- {"desched_wrk", offsetof(struct sso_grp_stats, dq_pc), ~0x0, 0,
- {0} },
- {"xaq_cached", offsetof(struct sso_grp_stats, aw_status), 0x3,
- 0, {0} },
- {"work_inflight", offsetof(struct sso_grp_stats, aw_status), 0x3F,
- 16, {0} },
- {"inuse_pages", offsetof(struct sso_grp_stats, page_cnt),
- 0xFFFFFFFF, 0, {0} },
-};
-
-#define OTX2_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
-#define OTX2_SSO_NUM_GRP_XSTATS RTE_DIM(sso_grp_xstats)
-
-#define OTX2_SSO_NUM_XSTATS (OTX2_SSO_NUM_HWS_XSTATS + OTX2_SSO_NUM_GRP_XSTATS)
-
-static int
-otx2_sso_xstats_get(const struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
- const unsigned int ids[], uint64_t values[], unsigned int n)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_sso_xstats_name *xstats;
- struct otx2_sso_xstats_name *xstat;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int i;
- uint64_t value;
- void *req_rsp;
- int rc;
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- return 0;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_hws_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
- 2 * queue_port_id : queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- if (dev->dual_ws) {
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- values[i] = *(uint64_t *)
- ((char *)req_rsp + xstat->offset);
- values[i] = (values[i] >> xstat->shift) &
- xstat->mask;
- }
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws =
- (2 * queue_port_id) + 1;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
- }
-
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_grp_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- break;
- default:
- otx2_err("Invalid mode received");
- goto invalid_value;
-	}
-
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- value = *(uint64_t *)((char *)req_rsp + xstat->offset);
- value = (value >> xstat->shift) & xstat->mask;
-
- if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
- values[i] += value;
- else
- values[i] = value;
-
- values[i] -= xstat->reset_snap[queue_port_id];
- }
-
- return i;
-invalid_value:
- return -EINVAL;
-}
-
-static int
-otx2_sso_xstats_reset(struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode,
- int16_t queue_port_id, const uint32_t ids[], uint32_t n)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_sso_xstats_name *xstats;
- struct otx2_sso_xstats_name *xstat;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int i;
- uint64_t value;
- void *req_rsp;
- int rc;
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- return 0;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_hws_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
- 2 * queue_port_id : queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- if (dev->dual_ws) {
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- xstat->reset_snap[queue_port_id] = *(uint64_t *)
- ((char *)req_rsp + xstat->offset);
- xstat->reset_snap[queue_port_id] =
- (xstat->reset_snap[queue_port_id] >>
- xstat->shift) & xstat->mask;
- }
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws =
- (2 * queue_port_id) + 1;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
- }
-
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_grp_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
-		rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- break;
- default:
- otx2_err("Invalid mode received");
- goto invalid_value;
-	}
-
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- value = *(uint64_t *)((char *)req_rsp + xstat->offset);
- value = (value >> xstat->shift) & xstat->mask;
-
- if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
- xstat->reset_snap[queue_port_id] += value;
- else
- xstat->reset_snap[queue_port_id] = value;
- }
- return i;
-invalid_value:
- return -EINVAL;
-}
-
-static int
-otx2_sso_xstats_get_names(const struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode,
- uint8_t queue_port_id,
- struct rte_event_dev_xstats_name *xstats_names,
- unsigned int *ids, unsigned int size)
-{
- struct rte_event_dev_xstats_name xstats_names_copy[OTX2_SSO_NUM_XSTATS];
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int xidx = 0;
- unsigned int i;
-
- for (i = 0; i < OTX2_SSO_NUM_HWS_XSTATS; i++) {
- snprintf(xstats_names_copy[i].name,
- sizeof(xstats_names_copy[i].name), "%s",
- sso_hws_xstats[i].name);
- }
-
- for (; i < OTX2_SSO_NUM_XSTATS; i++) {
- snprintf(xstats_names_copy[i].name,
- sizeof(xstats_names_copy[i].name), "%s",
- sso_grp_xstats[i - OTX2_SSO_NUM_HWS_XSTATS].name);
- }
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- break;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- break;
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- break;
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- break;
- default:
- otx2_err("Invalid mode received");
- return -EINVAL;
-	}
-
- if (xstats_mode_count > size || !ids || !xstats_names)
- return xstats_mode_count;
-
- for (i = 0; i < xstats_mode_count; i++) {
- xidx = i + start_offset;
- strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
- sizeof(xstats_names[i].name));
- ids[i] = xidx;
- }
-
- return i;
-}
-
-#endif
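
For context on the table layout in the file removed above: each xstats entry
pairs a counter name with a byte offset into the mailbox response plus a shift
and mask, so one generic routine can extract any counter. Below is a minimal
standalone sketch of that scheme; struct and field names are illustrative, not
taken from the driver:

  #include <inttypes.h>
  #include <stddef.h>
  #include <stdint.h>
  #include <stdio.h>

  struct hw_stats {              /* stand-in for struct sso_hws_stats */
          uint64_t arbitration;
  };

  struct xstat_desc {
          const char *name;
          size_t offset;         /* byte offset into the response */
          uint8_t shift;         /* bit position of the field */
          uint64_t mask;         /* field width after shifting */
  };

  /* Generic extraction: one routine serves every table entry. */
  static uint64_t
  xstat_extract(const void *rsp, const struct xstat_desc *d)
  {
          uint64_t v = *(const uint64_t *)((const char *)rsp + d->offset);

          return (v >> d->shift) & d->mask;
  }

  int
  main(void)
  {
          struct hw_stats rsp = { .arbitration = (7ULL << 16) | 0x123 };
          const struct xstat_desc grp = {
                  "last_grp_serviced",
                  offsetof(struct hw_stats, arbitration), 0, 0x3FF,
          };
          const struct xstat_desc credits = {
                  "affinity_arbitration_credits",
                  offsetof(struct hw_stats, arbitration), 16, 0xF,
          };

          printf("%s = %" PRIu64 "\n", grp.name, xstat_extract(&rsp, &grp));
          printf("%s = %" PRIu64 "\n", credits.name,
                 xstat_extract(&rsp, &credits));
          return 0;
  }
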
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
deleted file mode 100644
index 6da8b14b78..0000000000
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ /dev/null
@@ -1,735 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <rte_mbuf_pool_ops.h>
-
-#include "otx2_evdev.h"
-#include "otx2_tim_evdev.h"
-
-static struct event_timer_adapter_ops otx2_tim_ops;
-
-static inline int
-tim_get_msix_offsets(void)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int i, rc;
-
- /* Get TIM MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- for (i = 0; i < dev->nb_rings; i++)
- dev->tim_msixoff[i] = msix_rsp->timlf_msixoff[i];
-
- return rc;
-}
-
-static void
-tim_set_fp_ops(struct otx2_tim_ring *tim_ring)
-{
- uint8_t prod_flag = !tim_ring->prod_type_sp;
-
-	/* [STATS] [DFB/FB] [SP/MP] */
- const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
-#define FP(_name, _f3, _f2, _f1, flags) \
- [_f3][_f2][_f1] = otx2_tim_arm_burst_##_name,
- TIM_ARM_FASTPATH_MODES
-#undef FP
- };
-
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) \
- [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_##_name,
- TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
- };
-
- otx2_tim_ops.arm_burst =
- arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
- otx2_tim_ops.arm_tmo_tick_burst =
- arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
- otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst;
-}
-
-static void
-otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer_adapter_info *adptr_info)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
-
- adptr_info->max_tmo_ns = tim_ring->max_tout;
- adptr_info->min_resolution_ns = tim_ring->ena_periodic ?
- tim_ring->max_tout : tim_ring->tck_nsec;
- rte_memcpy(&adptr_info->conf, &adptr->data->conf,
- sizeof(struct rte_event_timer_adapter_conf));
-}
-
-static int
-tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
- struct rte_event_timer_adapter_conf *rcfg)
-{
- unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
- unsigned int mp_flags = 0;
- char pool_name[25];
- int rc;
-
- cache_sz /= rte_lcore_count();
- /* Create chunk pool. */
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
- mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
- otx2_tim_dbg("Using single producer mode");
- tim_ring->prod_type_sp = true;
- }
-
- snprintf(pool_name, sizeof(pool_name), "otx2_tim_chunk_pool%d",
- tim_ring->ring_id);
-
- if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
- cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
-
- cache_sz = cache_sz != 0 ? cache_sz : 2;
- tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- if (!tim_ring->disable_npa) {
- tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
- tim_ring->nb_chunks, tim_ring->chunk_sz,
- cache_sz, 0, rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
-
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(),
- NULL);
- if (rc < 0) {
- otx2_err("Unable to set chunkpool ops");
- goto free;
- }
-
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
-			otx2_err("Unable to populate chunkpool.");
- goto free;
- }
- tim_ring->aura = npa_lf_aura_handle_to_aura(
- tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = tim_ring->ena_periodic ? 1 : 0;
- } else {
- tim_ring->chunk_pool = rte_mempool_create(pool_name,
- tim_ring->nb_chunks, tim_ring->chunk_sz,
- cache_sz, 0, NULL, NULL, NULL, NULL,
- rte_socket_id(),
- mp_flags);
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
- tim_ring->ena_dfb = 1;
- }
-
- return 0;
-
-free:
- rte_mempool_free(tim_ring->chunk_pool);
- return rc;
-}
-
-static void
-tim_err_desc(int rc)
-{
- switch (rc) {
- case TIM_AF_NO_RINGS_LEFT:
-		otx2_err("Unable to allocate new TIM ring.");
- break;
- case TIM_AF_INVALID_NPA_PF_FUNC:
- otx2_err("Invalid NPA pf func.");
- break;
- case TIM_AF_INVALID_SSO_PF_FUNC:
- otx2_err("Invalid SSO pf func.");
- break;
- case TIM_AF_RING_STILL_RUNNING:
- otx2_tim_dbg("Ring busy.");
- break;
- case TIM_AF_LF_INVALID:
- otx2_err("Invalid Ring id.");
- break;
- case TIM_AF_CSIZE_NOT_ALIGNED:
-		otx2_err("Chunk size specified needs to be a multiple of 16.");
- break;
- case TIM_AF_CSIZE_TOO_SMALL:
- otx2_err("Chunk size too small.");
- break;
- case TIM_AF_CSIZE_TOO_BIG:
- otx2_err("Chunk size too big.");
- break;
- case TIM_AF_INTERVAL_TOO_SMALL:
- otx2_err("Bucket traversal interval too small.");
- break;
- case TIM_AF_INVALID_BIG_ENDIAN_VALUE:
- otx2_err("Invalid Big endian value.");
- break;
- case TIM_AF_INVALID_CLOCK_SOURCE:
- otx2_err("Invalid Clock source specified.");
- break;
- case TIM_AF_GPIO_CLK_SRC_NOT_ENABLED:
- otx2_err("GPIO clock source not enabled.");
- break;
- case TIM_AF_INVALID_BSIZE:
- otx2_err("Invalid bucket size.");
- break;
- case TIM_AF_INVALID_ENABLE_PERIODIC:
-		otx2_err("Invalid enable periodic value.");
- break;
- case TIM_AF_INVALID_ENABLE_DONTFREE:
- otx2_err("Invalid Don't free value.");
- break;
- case TIM_AF_ENA_DONTFRE_NSET_PERIODIC:
- otx2_err("Don't free bit not set when periodic is enabled.");
- break;
- case TIM_AF_RING_ALREADY_DISABLED:
-		otx2_err("Ring already stopped.");
- break;
- default:
- otx2_err("Unknown Error.");
- }
-}
-
-static int
-otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
-{
- struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct otx2_tim_ring *tim_ring;
- struct tim_config_req *cfg_req;
- struct tim_ring_req *free_req;
- struct tim_lf_alloc_req *req;
- struct tim_lf_alloc_rsp *rsp;
- uint8_t is_periodic;
- int i, rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- if (adptr->data->id >= dev->nb_rings)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_lf_alloc(dev->mbox);
- req->npa_pf_func = otx2_npa_pf_func_get();
- req->sso_pf_func = otx2_sso_pf_func_get();
- req->ring = adptr->data->id;
-
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (rc < 0) {
- tim_err_desc(rc);
- return -ENODEV;
- }
-
- if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10),
- rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) {
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
- rcfg->timer_tick_ns = TICK2NSEC(OTX2_TIM_MIN_TMO_TKS,
- rsp->tenns_clk);
- else {
- rc = -ERANGE;
- goto rng_mem_err;
- }
- }
-
- is_periodic = 0;
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_PERIODIC) {
- if (rcfg->max_tmo_ns &&
- rcfg->max_tmo_ns != rcfg->timer_tick_ns) {
- rc = -ERANGE;
- goto rng_mem_err;
- }
-
- /* Use 2 buckets to avoid contention */
- rcfg->max_tmo_ns = rcfg->timer_tick_ns;
- rcfg->timer_tick_ns /= 2;
- is_periodic = 1;
- }
-
- tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0);
- if (tim_ring == NULL) {
- rc = -ENOMEM;
- goto rng_mem_err;
- }
-
- adptr->data->adapter_priv = tim_ring;
-
- tim_ring->tenns_clk_freq = rsp->tenns_clk;
- tim_ring->clk_src = (int)rcfg->clk_src;
- tim_ring->ring_id = adptr->data->id;
- tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10);
- tim_ring->max_tout = is_periodic ?
- rcfg->timer_tick_ns * 2 : rcfg->max_tmo_ns;
- tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
- tim_ring->chunk_sz = dev->chunk_sz;
- tim_ring->nb_timers = rcfg->nb_timers;
- tim_ring->disable_npa = dev->disable_npa;
- tim_ring->ena_periodic = is_periodic;
- tim_ring->enable_stats = dev->enable_stats;
-
- for (i = 0; i < dev->ring_ctl_cnt ; i++) {
- struct otx2_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
-
- if (ring_ctl->ring == tim_ring->ring_id) {
- tim_ring->chunk_sz = ring_ctl->chunk_slots ?
- ((uint32_t)(ring_ctl->chunk_slots + 1) *
- OTX2_TIM_CHUNK_ALIGNMENT) : tim_ring->chunk_sz;
- tim_ring->enable_stats = ring_ctl->enable_stats;
- tim_ring->disable_npa = ring_ctl->disable_npa;
- }
- }
-
- if (tim_ring->disable_npa) {
- tim_ring->nb_chunks =
- tim_ring->nb_timers /
- OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
- tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
- } else {
- tim_ring->nb_chunks = tim_ring->nb_timers;
- }
- tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
- tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) *
- sizeof(struct otx2_tim_bkt),
- RTE_CACHE_LINE_SIZE);
-	if (tim_ring->bkt == NULL) {
-		rc = -ENOMEM;
-		goto bkt_mem_err;
-	}
-
- rc = tim_chnk_pool_create(tim_ring, rcfg);
- if (rc < 0)
- goto chnk_mem_err;
-
- cfg_req = otx2_mbox_alloc_msg_tim_config_ring(dev->mbox);
-
- cfg_req->ring = tim_ring->ring_id;
- cfg_req->bigendian = false;
- cfg_req->clocksource = tim_ring->clk_src;
- cfg_req->enableperiodic = tim_ring->ena_periodic;
- cfg_req->enabledontfreebuffer = tim_ring->ena_dfb;
- cfg_req->bucketsize = tim_ring->nb_bkts;
- cfg_req->chunksize = tim_ring->chunk_sz;
- cfg_req->interval = NSEC2TICK(tim_ring->tck_nsec,
- tim_ring->tenns_clk_freq);
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- goto chnk_mem_err;
- }
-
- tim_ring->base = dev->bar2 +
- (RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12);
-
- rc = tim_register_irq(tim_ring->ring_id);
- if (rc < 0)
- goto chnk_mem_err;
-
- otx2_write64((uint64_t)tim_ring->bkt,
- tim_ring->base + TIM_LF_RING_BASE);
- otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
-
- /* Set fastpath ops. */
- tim_set_fp_ops(tim_ring);
-
- /* Update SSO xae count. */
- sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)tim_ring,
- RTE_EVENT_TYPE_TIMER);
- sso_xae_reconfigure(dev->event_dev);
-
- otx2_tim_dbg("Total memory used %"PRIu64"MB\n",
- (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz)
- + (tim_ring->nb_bkts * sizeof(struct otx2_tim_bkt))) /
- BIT_ULL(20)));
-
- return rc;
-
-chnk_mem_err:
- rte_free(tim_ring->bkt);
-bkt_mem_err:
- rte_free(tim_ring);
-rng_mem_err:
- free_req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
- free_req->ring = adptr->data->id;
- otx2_mbox_process(dev->mbox);
- return rc;
-}
-
-static void
-otx2_tim_calibrate_start_tsc(struct otx2_tim_ring *tim_ring)
-{
-#define OTX2_TIM_CALIB_ITER 1E6
- uint32_t real_bkt, bucket;
- int icount, ecount = 0;
- uint64_t bkt_cyc;
-
- for (icount = 0; icount < OTX2_TIM_CALIB_ITER; icount++) {
- real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
- bkt_cyc = tim_cntvct();
- bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
- tim_ring->tck_int;
- bucket = bucket % (tim_ring->nb_bkts);
- tim_ring->ring_start_cyc = bkt_cyc - (real_bkt *
- tim_ring->tck_int);
- if (bucket != real_bkt)
- ecount++;
- }
- tim_ring->last_updt_cyc = bkt_cyc;
- otx2_tim_dbg("Bucket mispredict %3.2f distance %d\n",
- 100 - (((double)(icount - ecount) / (double)icount) * 100),
- bucket - real_bkt);
-}
-
-static int
-otx2_tim_ring_start(const struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_enable_rsp *rsp;
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_enable_ring(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (rc < 0) {
- tim_err_desc(rc);
- goto fail;
- }
- tim_ring->ring_start_cyc = rsp->timestarted;
- tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, tim_cntfrq());
- tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
- tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
- tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
-
- otx2_tim_calibrate_start_tsc(tim_ring);
-
-fail:
- return rc;
-}
-
-static int
-otx2_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_disable_ring(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- rc = -EBUSY;
- }
-
- return rc;
-}
-
-static int
-otx2_tim_ring_free(struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- tim_unregister_irq(tim_ring->ring_id);
-
- req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- return -EBUSY;
- }
-
- rte_free(tim_ring->bkt);
- rte_mempool_free(tim_ring->chunk_pool);
- rte_free(adptr->data->adapter_priv);
-
- return 0;
-}
-
-static int
-otx2_tim_stats_get(const struct rte_event_timer_adapter *adapter,
- struct rte_event_timer_adapter_stats *stats)
-{
- struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
- uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc;
-
- stats->evtim_exp_count = __atomic_load_n(&tim_ring->arm_cnt,
- __ATOMIC_RELAXED);
- stats->ev_enq_count = stats->evtim_exp_count;
- stats->adapter_tick_count = rte_reciprocal_divide_u64(bkt_cyc,
- &tim_ring->fast_div);
- return 0;
-}
-
-static int
-otx2_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
-{
- struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
-
- __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
- return 0;
-}
-
-int
-otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
- uint32_t *caps, const struct event_timer_adapter_ops **ops)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
-
- RTE_SET_USED(flags);
-
- if (dev == NULL)
- return -ENODEV;
-
- otx2_tim_ops.init = otx2_tim_ring_create;
- otx2_tim_ops.uninit = otx2_tim_ring_free;
- otx2_tim_ops.start = otx2_tim_ring_start;
- otx2_tim_ops.stop = otx2_tim_ring_stop;
- otx2_tim_ops.get_info = otx2_tim_ring_info_get;
-
- if (dev->enable_stats) {
- otx2_tim_ops.stats_get = otx2_tim_stats_get;
- otx2_tim_ops.stats_reset = otx2_tim_stats_reset;
- }
-
- /* Store evdev pointer for later use. */
- dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
- *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT |
- RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC;
- *ops = &otx2_tim_ops;
-
- return 0;
-}
-
-#define OTX2_TIM_DISABLE_NPA "tim_disable_npa"
-#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots"
-#define OTX2_TIM_STATS_ENA "tim_stats_ena"
-#define OTX2_TIM_RINGS_LMT "tim_rings_lmt"
-#define OTX2_TIM_RING_CTL "tim_ring_ctl"
-
-static void
-tim_parse_ring_param(char *value, void *opaque)
-{
- struct otx2_tim_evdev *dev = opaque;
- struct otx2_tim_ctl ring_ctl = {0};
- char *tok = strtok(value, "-");
- struct otx2_tim_ctl *old_ptr;
- uint16_t *val;
-
- val = (uint16_t *)&ring_ctl;
-
- if (!strlen(value))
- return;
-
- while (tok != NULL) {
- *val = atoi(tok);
- tok = strtok(NULL, "-");
- val++;
- }
-
- if (val != (&ring_ctl.enable_stats + 1)) {
-		otx2_err(
-			"Invalid ring param, expected [ring-chunk_sz-disable_npa-enable_stats]");
- return;
- }
-
- dev->ring_ctl_cnt++;
- old_ptr = dev->ring_ctl_data;
- dev->ring_ctl_data = rte_realloc(dev->ring_ctl_data,
- sizeof(struct otx2_tim_ctl) *
- dev->ring_ctl_cnt, 0);
- if (dev->ring_ctl_data == NULL) {
- dev->ring_ctl_data = old_ptr;
- dev->ring_ctl_cnt--;
- return;
- }
-
- dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
-}
-
-static void
-tim_parse_ring_ctl_list(const char *value, void *opaque)
-{
- char *s = strdup(value);
- char *start = NULL;
- char *end = NULL;
- char *f = s;
-
- while (*s) {
- if (*s == '[')
- start = s;
- else if (*s == ']')
- end = s;
-
- if (start && start < end) {
- *end = 0;
- tim_parse_ring_param(start + 1, opaque);
- start = end;
- s = end;
- }
- s++;
- }
-
- free(f);
-}
-
-static int
-tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
-	/* Dict format is [ring-chunk_sz-disable_npa-enable_stats]; '-' is used
-	 * as the separator since ',' isn't allowed in devargs. 0 selects the
-	 * default.
-	 */
- tim_parse_ring_ctl_list(value, opaque);
-
- return 0;
-}
-
-static void
-tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
-{
- struct rte_kvargs *kvlist;
-
- if (devargs == NULL)
- return;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA,
- &parse_kvargs_flag, &dev->disable_npa);
- rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS,
- &parse_kvargs_value, &dev->chunk_slots);
- rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag,
- &dev->enable_stats);
- rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value,
- &dev->min_ring_cnt);
- rte_kvargs_process(kvlist, OTX2_TIM_RING_CTL,
-			   &tim_parse_kvargs_dict, dev);
-
- rte_kvargs_free(kvlist);
-}
-
-void
-otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
-{
- struct rsrc_attach_req *atch_req;
- struct rsrc_detach_req *dtch_req;
- struct free_rsrcs_rsp *rsrc_cnt;
- const struct rte_memzone *mz;
- struct otx2_tim_evdev *dev;
- int rc;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return;
-
- mz = rte_memzone_reserve(RTE_STR(OTX2_TIM_EVDEV_NAME),
- sizeof(struct otx2_tim_evdev),
- rte_socket_id(), 0);
- if (mz == NULL) {
- otx2_tim_dbg("Unable to allocate memory for TIM Event device");
- return;
- }
-
- dev = mz->addr;
- dev->pci_dev = pci_dev;
- dev->mbox = cmn_dev->mbox;
- dev->bar2 = cmn_dev->bar2;
-
- tim_parse_devargs(pci_dev->device.devargs, dev);
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
- rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
- if (rc < 0) {
- otx2_err("Unable to get free rsrc count.");
- goto mz_free;
- }
-
- dev->nb_rings = dev->min_ring_cnt ?
- RTE_MIN(dev->min_ring_cnt, rsrc_cnt->tim) : rsrc_cnt->tim;
-
- if (!dev->nb_rings) {
- otx2_tim_dbg("No TIM Logical functions provisioned.");
- goto mz_free;
- }
-
- atch_req = otx2_mbox_alloc_msg_attach_resources(dev->mbox);
- atch_req->modify = true;
- atch_req->timlfs = dev->nb_rings;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- otx2_err("Unable to attach TIM rings.");
- goto mz_free;
- }
-
- rc = tim_get_msix_offsets();
- if (rc < 0) {
- otx2_err("Unable to get MSIX offsets for TIM.");
- goto detach;
- }
-
- if (dev->chunk_slots &&
- dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS &&
- dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) {
- dev->chunk_sz = (dev->chunk_slots + 1) *
- OTX2_TIM_CHUNK_ALIGNMENT;
- } else {
- dev->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
- }
-
- return;
-
-detach:
- dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
- dtch_req->partial = true;
- dtch_req->timlfs = true;
-
- otx2_mbox_process(dev->mbox);
-mz_free:
- rte_memzone_free(mz);
-}
-
-void
-otx2_tim_fini(void)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct rsrc_detach_req *dtch_req;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return;
-
- dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
- dtch_req->partial = true;
- dtch_req->timlfs = true;
-
- otx2_mbox_process(dev->mbox);
- rte_memzone_free(rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME)));
-}
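
As an aside on the devargs handling removed above: the tim_ring_ctl dict
treats each bracketed value as four '-'-separated numbers written into
consecutive uint16_t fields of the control struct. A small self-contained
sketch of that parsing approach, with an explicit bounds check added; names
and the sample string are illustrative:

  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Mirrors otx2_tim_ctl: four consecutive uint16_t fields, filled in
   * declaration order by the '-'-separated tokens.
   */
  struct ring_ctl {
          uint16_t ring;
          uint16_t chunk_slots;
          uint16_t disable_npa;
          uint16_t enable_stats;
  };

  static int
  parse_ring_ctl(char *value, struct ring_ctl *rc)
  {
          uint16_t *field = (uint16_t *)rc;
          uint16_t *const end = &rc->enable_stats + 1;
          char *tok = strtok(value, "-");

          while (tok != NULL && field < end) {
                  *field++ = (uint16_t)atoi(tok);
                  tok = strtok(NULL, "-");
          }

          /* Exactly four tokens: all fields filled, nothing left over. */
          return (field == end && tok == NULL) ? 0 : -1;
  }

  int
  main(void)
  {
          char arg[] = "2-1023-1-1"; /* ring 2, 1023 slots, NPA off, stats on */
          struct ring_ctl rc = {0};

          if (parse_ring_ctl(arg, &rc) == 0)
                  printf("ring %u: %u chunk slots\n", rc.ring, rc.chunk_slots);
          return 0;
  }
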
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
deleted file mode 100644
index dac642e0e1..0000000000
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ /dev/null
@@ -1,256 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_EVDEV_H__
-#define __OTX2_TIM_EVDEV_H__
-
-#include <event_timer_adapter_pmd.h>
-#include <rte_event_timer_adapter.h>
-#include <rte_reciprocal.h>
-
-#include "otx2_dev.h"
-
-#define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev
-
-#define otx2_tim_func_trace otx2_tim_dbg
-
-#define TIM_LF_RING_AURA (0x0)
-#define TIM_LF_RING_BASE (0x130)
-#define TIM_LF_NRSPERR_INT (0x200)
-#define TIM_LF_NRSPERR_INT_W1S (0x208)
-#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210)
-#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218)
-#define TIM_LF_RAS_INT (0x300)
-#define TIM_LF_RAS_INT_W1S (0x308)
-#define TIM_LF_RAS_INT_ENA_W1S (0x310)
-#define TIM_LF_RAS_INT_ENA_W1C (0x318)
-#define TIM_LF_RING_REL (0x400)
-
-#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
-#define TIM_BUCKET_W1_M_CHUNK_REMAINDER ((1ULL << (64 - \
- TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
-#define TIM_BUCKET_W1_S_LOCK (40)
-#define TIM_BUCKET_W1_M_LOCK ((1ULL << \
- (TIM_BUCKET_W1_S_CHUNK_REMAINDER - \
- TIM_BUCKET_W1_S_LOCK)) - 1)
-#define TIM_BUCKET_W1_S_RSVD (35)
-#define TIM_BUCKET_W1_S_BSK (34)
-#define TIM_BUCKET_W1_M_BSK ((1ULL << \
- (TIM_BUCKET_W1_S_RSVD - \
- TIM_BUCKET_W1_S_BSK)) - 1)
-#define TIM_BUCKET_W1_S_HBT (33)
-#define TIM_BUCKET_W1_M_HBT ((1ULL << \
- (TIM_BUCKET_W1_S_BSK - \
- TIM_BUCKET_W1_S_HBT)) - 1)
-#define TIM_BUCKET_W1_S_SBT (32)
-#define TIM_BUCKET_W1_M_SBT ((1ULL << \
- (TIM_BUCKET_W1_S_HBT - \
- TIM_BUCKET_W1_S_SBT)) - 1)
-#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
-#define TIM_BUCKET_W1_M_NUM_ENTRIES ((1ULL << \
- (TIM_BUCKET_W1_S_SBT - \
- TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
-
-#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
-
-#define TIM_BUCKET_CHUNK_REMAIN \
- (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
-
-#define TIM_BUCKET_LOCK \
- (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
-
-#define TIM_BUCKET_SEMA_WLOCK \
- (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
-
-#define OTX2_MAX_TIM_RINGS (256)
-#define OTX2_TIM_MAX_BUCKETS (0xFFFFF)
-#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
-#define OTX2_TIM_CHUNK_ALIGNMENT (16)
-#define OTX2_TIM_MAX_BURST (RTE_CACHE_LINE_SIZE / \
- OTX2_TIM_CHUNK_ALIGNMENT)
-#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1)
-#define OTX2_TIM_MIN_CHUNK_SLOTS (0x8)
-#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE)
-#define OTX2_TIM_MIN_TMO_TKS (256)
-
-#define OTX2_TIM_SP 0x1
-#define OTX2_TIM_MP 0x2
-#define OTX2_TIM_ENA_FB 0x10
-#define OTX2_TIM_ENA_DFB 0x20
-#define OTX2_TIM_ENA_STATS 0x40
-
-enum otx2_tim_clk_src {
- OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
- OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
- OTX2_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
- OTX2_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
-};
-
-struct otx2_tim_bkt {
- uint64_t first_chunk;
- union {
- uint64_t w1;
- struct {
- uint32_t nb_entry;
- uint8_t sbt:1;
- uint8_t hbt:1;
- uint8_t bsk:1;
- uint8_t rsvd:5;
- uint8_t lock;
- int16_t chunk_remainder;
- };
- };
- uint64_t current_chunk;
- uint64_t pad;
-} __rte_packed __rte_aligned(32);
-
-struct otx2_tim_ent {
- uint64_t w0;
- uint64_t wqe;
-} __rte_packed;
-
-struct otx2_tim_ctl {
- uint16_t ring;
- uint16_t chunk_slots;
- uint16_t disable_npa;
- uint16_t enable_stats;
-};
-
-struct otx2_tim_evdev {
- struct rte_pci_device *pci_dev;
- struct rte_eventdev *event_dev;
- struct otx2_mbox *mbox;
- uint16_t nb_rings;
- uint32_t chunk_sz;
- uintptr_t bar2;
- /* Dev args */
- uint8_t disable_npa;
- uint16_t chunk_slots;
- uint16_t min_ring_cnt;
- uint8_t enable_stats;
- uint16_t ring_ctl_cnt;
- struct otx2_tim_ctl *ring_ctl_data;
- /* HW const */
- /* MSIX offsets */
- uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
-};
-
-struct otx2_tim_ring {
- uintptr_t base;
- uint16_t nb_chunk_slots;
- uint32_t nb_bkts;
- uint64_t last_updt_cyc;
- uint64_t ring_start_cyc;
- uint64_t tck_int;
- uint64_t tot_int;
- struct otx2_tim_bkt *bkt;
- struct rte_mempool *chunk_pool;
- struct rte_reciprocal_u64 fast_div;
- struct rte_reciprocal_u64 fast_bkt;
- uint64_t arm_cnt;
- uint8_t prod_type_sp;
- uint8_t enable_stats;
- uint8_t disable_npa;
- uint8_t ena_dfb;
- uint8_t ena_periodic;
- uint16_t ring_id;
- uint32_t aura;
- uint64_t nb_timers;
- uint64_t tck_nsec;
- uint64_t max_tout;
- uint64_t nb_chunks;
- uint64_t chunk_sz;
- uint64_t tenns_clk_freq;
- enum otx2_tim_clk_src clk_src;
-} __rte_cache_aligned;
-
-static inline struct otx2_tim_evdev *
-tim_priv_get(void)
-{
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME));
- if (mz == NULL)
- return NULL;
-
- return mz->addr;
-}
-
-#ifdef RTE_ARCH_ARM64
-static inline uint64_t
-tim_cntvct(void)
-{
- return __rte_arm64_cntvct();
-}
-
-static inline uint64_t
-tim_cntfrq(void)
-{
- return __rte_arm64_cntfrq();
-}
-#else
-static inline uint64_t
-tim_cntvct(void)
-{
- return 0;
-}
-
-static inline uint64_t
-tim_cntfrq(void)
-{
- return 0;
-}
-#endif
-
-#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, 0, OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
- FP(mp, 0, 0, 1, OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
- FP(fb_sp, 0, 1, 0, OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
- FP(fb_mp, 0, 1, 1, OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
- FP(stats_mod_sp, 1, 0, 0, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
- FP(stats_mod_mp, 1, 0, 1, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
- FP(stats_mod_fb_sp, 1, 1, 0, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
- FP(stats_mod_fb_mp, 1, 1, 1, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_MP)
-
-#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, 0, OTX2_TIM_ENA_DFB) \
- FP(fb, 0, 1, OTX2_TIM_ENA_FB) \
- FP(stats_dfb, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB) \
- FP(stats_fb, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB)
-
-#define FP(_name, _f3, _f2, _f1, flags) \
- uint16_t otx2_tim_arm_burst_##_name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, const uint16_t nb_timers);
-TIM_ARM_FASTPATH_MODES
-#undef FP
-
-#define FP(_name, _f2, _f1, flags) \
- uint16_t otx2_tim_arm_tmo_tick_burst_##_name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, const uint64_t timeout_tick, \
- const uint16_t nb_timers);
-TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
-
-uint16_t otx2_tim_timer_cancel_burst(
- const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim, const uint16_t nb_timers);
-
-int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
- uint32_t *caps,
- const struct event_timer_adapter_ops **ops);
-
-void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
-void otx2_tim_fini(void);
-
-/* TIM IRQ */
-int tim_register_irq(uint16_t ring_id);
-void tim_unregister_irq(uint16_t ring_id);
-
-#endif /* __OTX2_TIM_EVDEV_H__ */
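
A note on the header removed above: the TIM_BUCKET_W1_* shift/mask macros and
the bit-field struct otx2_tim_bkt describe the same 64-bit control word, which
is what lets a single 64-bit fetch-add (TIM_BUCKET_SEMA_WLOCK) adjust the
chunk remainder and increment the lock byte in one atomic operation. A minimal
sketch of the word layout, assuming the bit positions given by those macros:

  #include <assert.h>
  #include <stdint.h>

  /* Layout per the TIM_BUCKET_W1_* macros: num_entries[31:0], sbt[32],
   * hbt[33], bsk[34], rsvd[39:35], lock[47:40], chunk_remainder[63:48].
   */
  #define W1_S_CHUNK_REM   48
  #define W1_M_CHUNK_REM   ((1ULL << (64 - W1_S_CHUNK_REM)) - 1)
  #define W1_S_LOCK        40
  #define W1_M_LOCK        ((1ULL << (W1_S_CHUNK_REM - W1_S_LOCK)) - 1)
  #define W1_M_NUM_ENTRIES ((1ULL << 32) - 1)

  static uint64_t
  w1_pack(uint16_t chunk_rem, uint8_t lock, uint32_t nb_entry)
  {
          return ((uint64_t)chunk_rem << W1_S_CHUNK_REM) |
                 ((uint64_t)lock << W1_S_LOCK) | nb_entry;
  }

  int
  main(void)
  {
          uint64_t w1 = w1_pack(15, 1, 42);

          /* Each field comes back out with its own shift and mask. */
          assert(((w1 >> W1_S_CHUNK_REM) & W1_M_CHUNK_REM) == 15);
          assert(((w1 >> W1_S_LOCK) & W1_M_LOCK) == 1);
          assert((w1 & W1_M_NUM_ENTRIES) == 42);
          return 0;
  }
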
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
deleted file mode 100644
index 9ee07958fd..0000000000
--- a/drivers/event/octeontx2/otx2_tim_worker.c
+++ /dev/null
@@ -1,192 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_tim_evdev.h"
-#include "otx2_tim_worker.h"
-
-static inline int
-tim_arm_checks(const struct otx2_tim_ring * const tim_ring,
- struct rte_event_timer * const tim)
-{
- if (unlikely(tim->state)) {
- tim->state = RTE_EVENT_TIMER_ERROR;
- rte_errno = EALREADY;
- goto fail;
- }
-
- if (unlikely(!tim->timeout_ticks ||
- tim->timeout_ticks >= tim_ring->nb_bkts)) {
- tim->state = tim->timeout_ticks ? RTE_EVENT_TIMER_ERROR_TOOLATE
- : RTE_EVENT_TIMER_ERROR_TOOEARLY;
- rte_errno = EINVAL;
- goto fail;
- }
-
- return 0;
-
-fail:
- return -EINVAL;
-}
-
-static inline void
-tim_format_event(const struct rte_event_timer * const tim,
- struct otx2_tim_ent * const entry)
-{
- entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
- (tim->ev.event & 0xFFFFFFFFF);
- entry->wqe = tim->ev.u64;
-}
-
-static inline void
-tim_sync_start_cyc(struct otx2_tim_ring *tim_ring)
-{
- uint64_t cur_cyc = tim_cntvct();
- uint32_t real_bkt;
-
- if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
- real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
- cur_cyc = tim_cntvct();
-
- tim_ring->ring_start_cyc = cur_cyc -
- (real_bkt * tim_ring->tck_int);
- tim_ring->last_updt_cyc = cur_cyc;
- }
-}
-
-static __rte_always_inline uint16_t
-tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint16_t nb_timers,
- const uint8_t flags)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_ent entry;
- uint16_t index;
- int ret;
-
- tim_sync_start_cyc(tim_ring);
- for (index = 0; index < nb_timers; index++) {
- if (tim_arm_checks(tim_ring, tim[index]))
- break;
-
- tim_format_event(tim[index], &entry);
- if (flags & OTX2_TIM_SP)
- ret = tim_add_entry_sp(tim_ring,
- tim[index]->timeout_ticks,
- tim[index], &entry, flags);
- if (flags & OTX2_TIM_MP)
- ret = tim_add_entry_mp(tim_ring,
- tim[index]->timeout_ticks,
- tim[index], &entry, flags);
-
- if (unlikely(ret)) {
- rte_errno = -ret;
- break;
- }
- }
-
- if (flags & OTX2_TIM_ENA_STATS)
- __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
-
- return index;
-}
-
-static __rte_always_inline uint16_t
-tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint64_t timeout_tick,
- const uint16_t nb_timers, const uint8_t flags)
-{
- struct otx2_tim_ent entry[OTX2_TIM_MAX_BURST] __rte_cache_aligned;
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- uint16_t set_timers = 0;
- uint16_t arr_idx = 0;
- uint16_t idx;
- int ret;
-
- if (unlikely(!timeout_tick || timeout_tick >= tim_ring->nb_bkts)) {
- const enum rte_event_timer_state state = timeout_tick ?
- RTE_EVENT_TIMER_ERROR_TOOLATE :
- RTE_EVENT_TIMER_ERROR_TOOEARLY;
- for (idx = 0; idx < nb_timers; idx++)
- tim[idx]->state = state;
-
- rte_errno = EINVAL;
- return 0;
- }
-
- tim_sync_start_cyc(tim_ring);
- while (arr_idx < nb_timers) {
- for (idx = 0; idx < OTX2_TIM_MAX_BURST && (arr_idx < nb_timers);
- idx++, arr_idx++) {
- tim_format_event(tim[arr_idx], &entry[idx]);
- }
- ret = tim_add_entry_brst(tim_ring, timeout_tick,
- &tim[set_timers], entry, idx, flags);
- set_timers += ret;
- if (ret != idx)
- break;
- }
- if (flags & OTX2_TIM_ENA_STATS)
- __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
- __ATOMIC_RELAXED);
-
- return set_timers;
-}
-
-#define FP(_name, _f3, _f2, _f1, _flags) \
-uint16_t __rte_noinline \
-otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint16_t nb_timers) \
-{ \
- return tim_timer_arm_burst(adptr, tim, nb_timers, _flags); \
-}
-TIM_ARM_FASTPATH_MODES
-#undef FP
-
-#define FP(_name, _f2, _f1, _flags) \
-uint16_t __rte_noinline \
-otx2_tim_arm_tmo_tick_burst_ ## _name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint64_t timeout_tick, \
- const uint16_t nb_timers) \
-{ \
- return tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
- nb_timers, _flags); \
-}
-TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
-
-uint16_t
-otx2_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint16_t nb_timers)
-{
- uint16_t index;
- int ret;
-
- RTE_SET_USED(adptr);
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
- for (index = 0; index < nb_timers; index++) {
- if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
- rte_errno = EALREADY;
- break;
- }
-
- if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
- rte_errno = EINVAL;
- break;
- }
- ret = tim_rm_entry(tim[index]);
- if (ret) {
- rte_errno = -ret;
- break;
- }
- }
-
- return index;
-}
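
On the FP() macro usage in the file removed above: the same mode list
(TIM_ARM_FASTPATH_MODES) is expanded twice, once to emit one specialized
function per flag combination and once to build the lookup table indexed by
those flags, as in tim_set_fp_ops(). A compilable sketch of the idiom with an
invented, smaller mode list:

  #include <stdint.h>
  #include <stdio.h>

  /* Invented mode list: [stats][sp/mp]; the driver's real lists carry
   * more flags but expand the same way.
   */
  #define MODES \
          FP(sp, 0, 0) \
          FP(mp, 0, 1) \
          FP(stats_sp, 1, 0) \
          FP(stats_mp, 1, 1)

  /* First expansion: one function definition per mode. */
  #define FP(_name, _f2, _f1) \
  static uint16_t arm_burst_##_name(uint16_t n) \
  { \
          printf("arm_burst_" #_name "(%u)\n", (unsigned int)n); \
          return n; \
  }
  MODES
  #undef FP

  int
  main(void)
  {
          /* Second expansion: lookup table indexed by the flag bits. */
          uint16_t (*const arm_burst[2][2])(uint16_t) = {
  #define FP(_name, _f2, _f1) [_f2][_f1] = arm_burst_##_name,
          MODES
  #undef FP
          };

          return arm_burst[1][0](8) == 8 ? 0 : 1; /* stats on, single prod. */
  }
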
diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
deleted file mode 100644
index efe88a8692..0000000000
--- a/drivers/event/octeontx2/otx2_tim_worker.h
+++ /dev/null
@@ -1,598 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_WORKER_H__
-#define __OTX2_TIM_WORKER_H__
-
-#include "otx2_tim_evdev.h"
-
-static inline uint8_t
-tim_bkt_fetch_lock(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_LOCK) &
- TIM_BUCKET_W1_M_LOCK;
-}
-
-static inline int16_t
-tim_bkt_fetch_rem(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
- TIM_BUCKET_W1_M_CHUNK_REMAINDER;
-}
-
-static inline int16_t
-tim_bkt_get_rem(struct otx2_tim_bkt *bktp)
-{
- return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
-}
-
-static inline void
-tim_bkt_set_rem(struct otx2_tim_bkt *bktp, uint16_t v)
-{
- __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
-}
-
-static inline void
-tim_bkt_sub_rem(struct otx2_tim_bkt *bktp, uint16_t v)
-{
- __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
-}
-
-static inline uint8_t
-tim_bkt_get_hbt(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
-}
-
-static inline uint8_t
-tim_bkt_get_bsk(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
-}
-
-static inline uint64_t
-tim_bkt_clr_bsk(struct otx2_tim_bkt *bktp)
-{
- /* Clear everything except lock. */
- const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
-
- return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
-}
-
-static inline uint64_t
-tim_bkt_fetch_sema_lock(struct otx2_tim_bkt *bktp)
-{
- return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
- __ATOMIC_ACQUIRE);
-}
-
-static inline uint64_t
-tim_bkt_fetch_sema(struct otx2_tim_bkt *bktp)
-{
- return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
-}
-
-static inline uint64_t
-tim_bkt_inc_lock(struct otx2_tim_bkt *bktp)
-{
- const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
-
- return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
-}
-
-static inline void
-tim_bkt_dec_lock(struct otx2_tim_bkt *bktp)
-{
- __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
-}
-
-static inline void
-tim_bkt_dec_lock_relaxed(struct otx2_tim_bkt *bktp)
-{
- __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
-}
-
-static inline uint32_t
-tim_bkt_get_nent(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
- TIM_BUCKET_W1_M_NUM_ENTRIES;
-}
-
-static inline void
-tim_bkt_inc_nent(struct otx2_tim_bkt *bktp)
-{
- __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
-}
-
-static inline void
-tim_bkt_add_nent(struct otx2_tim_bkt *bktp, uint32_t v)
-{
- __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
-}
-
-static inline uint64_t
-tim_bkt_clr_nent(struct otx2_tim_bkt *bktp)
-{
- const uint64_t v = ~(TIM_BUCKET_W1_M_NUM_ENTRIES <<
- TIM_BUCKET_W1_S_NUM_ENTRIES);
-
- return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
-}
-
-static inline uint64_t
-tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
-{
- return (n - (d * rte_reciprocal_divide_u64(n, &R)));
-}
-
-static __rte_always_inline void
-tim_get_target_bucket(struct otx2_tim_ring *const tim_ring,
- const uint32_t rel_bkt, struct otx2_tim_bkt **bkt,
- struct otx2_tim_bkt **mirr_bkt)
-{
- const uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc;
- uint64_t bucket =
- rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
- rel_bkt;
- uint64_t mirr_bucket = 0;
-
- bucket =
- tim_bkt_fast_mod(bucket, tim_ring->nb_bkts, tim_ring->fast_bkt);
- mirr_bucket = tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
- tim_ring->nb_bkts, tim_ring->fast_bkt);
- *bkt = &tim_ring->bkt[bucket];
- *mirr_bkt = &tim_ring->bkt[mirr_bucket];
-}
-
-static struct otx2_tim_ent *
-tim_clr_bkt(struct otx2_tim_ring * const tim_ring,
- struct otx2_tim_bkt * const bkt)
-{
-#define TIM_MAX_OUTSTANDING_OBJ 64
- void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
- struct otx2_tim_ent *chunk;
- struct otx2_tim_ent *pnext;
- uint8_t objs = 0;
-
- chunk = ((struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk);
- chunk = (struct otx2_tim_ent *)(uintptr_t)(chunk +
- tim_ring->nb_chunk_slots)->w0;
- while (chunk) {
- pnext = (struct otx2_tim_ent *)(uintptr_t)
- ((chunk + tim_ring->nb_chunk_slots)->w0);
- if (objs == TIM_MAX_OUTSTANDING_OBJ) {
- rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
- objs);
- objs = 0;
- }
- pend_chunks[objs++] = chunk;
- chunk = pnext;
- }
-
- if (objs)
- rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
- objs);
-
- return (struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk;
-}
-
-static struct otx2_tim_ent *
-tim_refill_chunk(struct otx2_tim_bkt * const bkt,
- struct otx2_tim_bkt * const mirr_bkt,
- struct otx2_tim_ring * const tim_ring)
-{
- struct otx2_tim_ent *chunk;
-
- if (bkt->nb_entry || !bkt->first_chunk) {
- if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
- (void **)&chunk)))
- return NULL;
- if (bkt->nb_entry) {
- *(uint64_t *)(((struct otx2_tim_ent *)
- mirr_bkt->current_chunk) +
- tim_ring->nb_chunk_slots) =
- (uintptr_t)chunk;
- } else {
- bkt->first_chunk = (uintptr_t)chunk;
- }
- } else {
- chunk = tim_clr_bkt(tim_ring, bkt);
- bkt->first_chunk = (uintptr_t)chunk;
- }
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
-
- return chunk;
-}
-
-static struct otx2_tim_ent *
-tim_insert_chunk(struct otx2_tim_bkt * const bkt,
- struct otx2_tim_bkt * const mirr_bkt,
- struct otx2_tim_ring * const tim_ring)
-{
- struct otx2_tim_ent *chunk;
-
- if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
- return NULL;
-
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
- if (bkt->nb_entry) {
- *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t)
- mirr_bkt->current_chunk) +
- tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
- } else {
- bkt->first_chunk = (uintptr_t)chunk;
- }
- return chunk;
-}
-
-static __rte_always_inline int
-tim_add_entry_sp(struct otx2_tim_ring * const tim_ring,
- const uint32_t rel_bkt,
- struct rte_event_timer * const tim,
- const struct otx2_tim_ent * const pent,
- const uint8_t flags)
-{
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_ent *chunk;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
- int16_t rem;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-
-	/* Get bucket sema. */
- lock_sema = tim_bkt_fetch_sema_lock(bkt);
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
- /* Insert the work. */
- rem = tim_bkt_fetch_rem(lock_sema);
-
- if (!rem) {
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- bkt->chunk_remainder = 0;
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim->state = RTE_EVENT_TIMER_ERROR;
- tim_bkt_dec_lock(bkt);
- return -ENOMEM;
- }
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += tim_ring->nb_chunk_slots - rem;
- }
-
- /* Copy work entry. */
- *chunk = *pent;
-
- tim->impl_opaque[0] = (uintptr_t)chunk;
- tim->impl_opaque[1] = (uintptr_t)bkt;
- __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
- tim_bkt_inc_nent(bkt);
- tim_bkt_dec_lock_relaxed(bkt);
-
- return 0;
-}
-
-static __rte_always_inline int
-tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
- const uint32_t rel_bkt,
- struct rte_event_timer * const tim,
- const struct otx2_tim_ent * const pent,
- const uint8_t flags)
-{
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_ent *chunk;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
- int16_t rem;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-	/* Get bucket sema. */
- lock_sema = tim_bkt_fetch_sema_lock(bkt);
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
-
- rem = tim_bkt_fetch_rem(lock_sema);
- if (rem < 0) {
- tim_bkt_dec_lock(bkt);
-#ifdef RTE_ARCH_ARM64
- uint64_t w1;
- asm volatile(" ldxr %[w1], [%[crem]] \n"
- " tbz %[w1], 63, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[w1], [%[crem]] \n"
- " tbnz %[w1], 63, rty%= \n"
- "dne%=: \n"
- : [w1] "=&r"(w1)
- : [crem] "r"(&bkt->w1)
- : "memory");
-#else
- while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
- 0)
- ;
-#endif
- goto __retry;
- } else if (!rem) {
-		/* Only one thread can be here. */
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim->state = RTE_EVENT_TIMER_ERROR;
- tim_bkt_set_rem(bkt, 0);
- tim_bkt_dec_lock(bkt);
- return -ENOMEM;
- }
- *chunk = *pent;
- if (tim_bkt_fetch_lock(lock_sema)) {
- do {
- lock_sema = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (tim_bkt_fetch_lock(lock_sema) - 1);
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
- }
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- __atomic_store_n(&bkt->chunk_remainder,
- tim_ring->nb_chunk_slots - 1, __ATOMIC_RELEASE);
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += tim_ring->nb_chunk_slots - rem;
- *chunk = *pent;
- }
-
- tim->impl_opaque[0] = (uintptr_t)chunk;
- tim->impl_opaque[1] = (uintptr_t)bkt;
- __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
- tim_bkt_inc_nent(bkt);
- tim_bkt_dec_lock_relaxed(bkt);
-
- return 0;
-}
-
-static inline uint16_t
-tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt,
- struct otx2_tim_ent *chunk,
- struct rte_event_timer ** const tim,
- const struct otx2_tim_ent * const ents,
- const struct otx2_tim_bkt * const bkt)
-{
- for (; index < cpy_lmt; index++) {
- *chunk = *(ents + index);
- tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
- tim[index]->impl_opaque[1] = (uintptr_t)bkt;
- tim[index]->state = RTE_EVENT_TIMER_ARMED;
- }
-
- return index;
-}
-
-/* Burst mode functions */
-static inline int
-tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
- const uint16_t rel_bkt,
- struct rte_event_timer ** const tim,
- const struct otx2_tim_ent *ents,
- const uint16_t nb_timers, const uint8_t flags)
-{
- struct otx2_tim_ent *chunk = NULL;
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_bkt *bkt;
- uint16_t chunk_remainder;
- uint16_t index = 0;
- uint64_t lock_sema;
- int16_t rem, crem;
- uint8_t lock_cnt;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-
- /* Only one thread beyond this. */
- lock_sema = tim_bkt_inc_lock(bkt);
- lock_cnt = (uint8_t)
- ((lock_sema >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK);
-
- if (lock_cnt) {
- tim_bkt_dec_lock(bkt);
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxrb %w[lock_cnt], [%[lock]] \n"
- " tst %w[lock_cnt], 255 \n"
- " beq dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxrb %w[lock_cnt], [%[lock]] \n"
- " tst %w[lock_cnt], 255 \n"
- " bne rty%= \n"
- "dne%=: \n"
- : [lock_cnt] "=&r"(lock_cnt)
- : [lock] "r"(&bkt->lock)
- : "memory");
-#else
- while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
- ;
-#endif
- goto __retry;
- }
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
-
- chunk_remainder = tim_bkt_fetch_rem(lock_sema);
- rem = chunk_remainder - nb_timers;
- if (rem < 0) {
- crem = tim_ring->nb_chunk_slots - chunk_remainder;
- if (chunk_remainder && crem) {
- chunk = ((struct otx2_tim_ent *)
- mirr_bkt->current_chunk) + crem;
-
- index = tim_cpy_wrk(index, chunk_remainder, chunk, tim,
- ents, bkt);
- tim_bkt_sub_rem(bkt, chunk_remainder);
- tim_bkt_add_nent(bkt, chunk_remainder);
- }
-
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- tim_bkt_dec_lock(bkt);
- rte_errno = ENOMEM;
- tim[index]->state = RTE_EVENT_TIMER_ERROR;
- return crem;
- }
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
-
- rem = nb_timers - chunk_remainder;
- tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
- tim_bkt_add_nent(bkt, rem);
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
-
- tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
- tim_bkt_sub_rem(bkt, nb_timers);
- tim_bkt_add_nent(bkt, nb_timers);
- }
-
- tim_bkt_dec_lock(bkt);
-
- return nb_timers;
-}
-
-static int
-tim_rm_entry(struct rte_event_timer *tim)
-{
- struct otx2_tim_ent *entry;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
-
- if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
- return -ENOENT;
-
- entry = (struct otx2_tim_ent *)(uintptr_t)tim->impl_opaque[0];
- if (entry->wqe != tim->ev.u64) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- return -ENOENT;
- }
-
- bkt = (struct otx2_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
- lock_sema = tim_bkt_inc_lock(bkt);
- if (tim_bkt_get_hbt(lock_sema) || !tim_bkt_get_nent(lock_sema)) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim_bkt_dec_lock(bkt);
- return -ENOENT;
- }
-
- entry->w0 = 0;
- entry->wqe = 0;
- tim->state = RTE_EVENT_TIMER_CANCELED;
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim_bkt_dec_lock(bkt);
-
- return 0;
-}
-
-#endif /* __OTX2_TIM_WORKER_H__ */
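
For reference on tim_bkt_fast_mod() and tim_get_target_bucket() above: both
'/' and '%' are kept off the hot path by precomputed reciprocals, using the
identity n % d == n - d * (n / d). A minimal sketch against the rte_reciprocal
API the driver used (the concrete numbers here are made up):

  #include <rte_reciprocal.h>
  #include <stdint.h>

  /* n % d without a divide, given a precomputed reciprocal of d. */
  static uint64_t
  fast_mod(uint64_t n, uint64_t d, const struct rte_reciprocal_u64 *r)
  {
          return n - d * rte_reciprocal_divide_u64(n, r);
  }

  int
  main(void)
  {
          const uint64_t tck_int = 1000, nb_bkts = 256;
          /* Set up once, e.g. at ring start, as the driver did. */
          struct rte_reciprocal_u64 fast_div =
                  rte_reciprocal_value_u64(tck_int);
          struct rte_reciprocal_u64 fast_bkt =
                  rte_reciprocal_value_u64(nb_bkts);
          uint64_t cycles = 1234567;

          /* bucket = (cycles / tck_int) % nb_bkts, divide-free. */
          uint64_t bucket =
                  fast_mod(rte_reciprocal_divide_u64(cycles, &fast_div),
                           nb_bkts, &fast_bkt);

          return bucket < nb_bkts ? 0 : 1;
  }
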
diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
deleted file mode 100644
index 95139d27a3..0000000000
--- a/drivers/event/octeontx2/otx2_worker.c
+++ /dev/null
@@ -1,372 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_worker.h"
-
-static __rte_noinline uint8_t
-otx2_ssogws_new_event(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint64_t event_ptr = ev->u64;
- const uint16_t grp = ev->queue_id;
-
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- otx2_ssogws_add_work(ws, event_ptr, tag, new_tt, grp);
-
- return 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_fwd_swtag(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op));
-
- /* 96XX model
- * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
- *
- * SSO_SYNC_ORDERED norm norm untag
- * SSO_SYNC_ATOMIC norm norm untag
- * SSO_SYNC_UNTAGGED norm norm NOOP
- */
-
- if (new_tt == SSO_SYNC_UNTAGGED) {
- if (cur_tt != SSO_SYNC_UNTAGGED)
- otx2_ssogws_swtag_untag(ws);
- } else {
- otx2_ssogws_swtag_norm(ws, tag, new_tt);
- }
-
- ws->swtag_req = 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_fwd_group(struct otx2_ssogws *ws, const struct rte_event *ev,
- const uint16_t grp)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_UPD_WQP_GRP1);
- rte_smp_wmb();
- otx2_ssogws_swtag_desched(ws, tag, new_tt, grp);
-}
-
-static __rte_always_inline void
-otx2_ssogws_forward_event(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint8_t grp = ev->queue_id;
-
-	/* Group hasn't changed, use SWTAG to forward the event */
- if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(ws->tag_op)) == grp)
- otx2_ssogws_fwd_swtag(ws, ev);
- else
- /*
- * Group has been changed for group based work pipelining,
- * Use deschedule/add_work operation to transfer the event to
- * new group/core
- */
- otx2_ssogws_fwd_group(ws, ev, grp);
-}
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(timeout_ticks); \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return 1; \
- } \
- \
- return otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint16_t ret = 1; \
- uint64_t iter; \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return ret; \
- } \
- \
- ret = otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
- ret = otx2_ssogws_get_work(ws, ev, flags, \
- ws->lookup_mem); \
- \
- return ret; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_timeout_burst_ ##name(void *port, struct rte_event ev[],\
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_timeout_ ##name(port, ev, timeout_ticks);\
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(timeout_ticks); \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return 1; \
- } \
- \
- return otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_seg_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint16_t ret = 1; \
- uint64_t iter; \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return ret; \
- } \
- \
- ret = otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
- ret = otx2_ssogws_get_work(ws, ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
- \
- return ret; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_seg_timeout_ ##name(port, ev, \
- timeout_ticks); \
-}
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
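
The R() macro above stamps out the whole family of dequeue entry points (plain, burst, timeout, seg and their combinations) once per Rx offload flag set listed in SSO_RX_ADPTR_ENQ_FASTPATH_FUNC, so every variant compiles with its flags as compile-time constants. A minimal standalone sketch of the pattern; the names (poll_once, the flag values) are illustrative stand-ins, not driver symbols:

    #include <stdint.h>
    #include <stdio.h>

    /* Simulated single-shot poll; the driver reads hardware here. */
    static uint16_t poll_once(uint32_t flags) { return flags & 1 ? 1 : 0; }

    /* Stamp one single-event and one burst wrapper per variant. */
    #define R(name, flags) \
    static uint16_t deq_##name(uint64_t timeout) \
    { \
        (void)timeout; \
        return poll_once(flags); \
    } \
    static uint16_t deq_burst_##name(uint16_t nb, uint64_t timeout) \
    { \
        (void)nb; /* burst collapses to a single dequeue */ \
        return deq_##name(timeout); \
    }

    R(plain, 0x0)
    R(ptype, 0x1)
    #undef R

    int main(void)
    {
        printf("%u %u %u\n", deq_plain(0), deq_burst_plain(1, 0),
               deq_burst_ptype(1, 0));
        return 0;
    }

Baking the flags in as constants lets the compiler delete dead offload branches inside each variant, which is the point of generating so many near-identical functions.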
-
-uint16_t __rte_hot
-otx2_ssogws_enq(void *port, const struct rte_event *ev)
-{
- struct otx2_ssogws *ws = port;
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- rte_smp_mb();
- return otx2_ssogws_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- otx2_ssogws_forward_event(ws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return otx2_ssogws_enq(port, ev);
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
- uint16_t i, rc = 1;
-
- rte_smp_mb();
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- for (i = 0; i < nb_events && rc; i++)
- rc = otx2_ssogws_new_event(ws, &ev[i]);
-
- return nb_events;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
-
- RTE_SET_USED(nb_events);
- otx2_ssogws_forward_event(ws, ev);
-
- return 1;
-}
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint64_t cmd[sz]; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \
- (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- flags); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, struct rte_event ev[],\
- uint16_t nb_events) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \
- (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- (flags) | NIX_TX_MULTI_SEG_F); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-void
-ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base,
- otx2_handle_event_t fn, void *arg)
-{
- uint64_t cq_ds_cnt = 1;
- uint64_t aq_cnt = 1;
- uint64_t ds_cnt = 1;
- struct rte_event ev;
- uint64_t enable;
- uint64_t val;
-
- enable = otx2_read64(base + SSO_LF_GGRP_QCTL);
- if (!enable)
- return;
-
- val = queue_id; /* GGRP ID */
- val |= BIT_ULL(18); /* Grouped */
- val |= BIT_ULL(16); /* WAIT */
-
- aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
- ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
- cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
- cq_ds_cnt &= 0x3FFF3FFF0000;
-
- while (aq_cnt || cq_ds_cnt || ds_cnt) {
- otx2_write64(val, ws->getwrk_op);
- otx2_ssogws_get_work_empty(ws, &ev, 0);
- if (fn != NULL && ev.u64 != 0)
- fn(arg, ev);
- if (ev.sched_type != SSO_TT_EMPTY)
- otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
- rte_mb();
- aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
- ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
- cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
- /* Extract cq and ds count */
- cq_ds_cnt &= 0x3FFF3FFF0000;
- }
-
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_GWC_INVAL);
- rte_mb();
-}
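
The drain loop above keeps pulling work until the AQ_CNT, MISC_CNT and INT_CNT counters all reach zero; the 0x3FFF3FFF0000 mask keeps only two 14-bit fields of SSO_LF_GGRP_INT_CNT, which per the "Extract cq and ds count" comment are the CQ and DS counts. A tiny standalone decode of that mask; which field is CQ and which is DS is an assumption from the comment, not from a datasheet:

    #include <stdint.h>
    #include <inttypes.h>
    #include <stdio.h>

    /* Two 14-bit counters at bits [29:16] and [45:32], inferred from
     * the 0x3FFF3FFF0000 mask used in ssogws_flush_events().
     */
    static uint64_t int_cnt_pending(uint64_t int_cnt)
    {
        return int_cnt & 0x3FFF3FFF0000ULL;
    }

    int main(void)
    {
        uint64_t reg = (5ULL << 32) | (3ULL << 16) | 0xFFFF; /* low bits masked off */

        printf("pending=0x%" PRIx64 " cq=%" PRIu64 " ds=%" PRIu64 "\n",
               int_cnt_pending(reg),
               (reg >> 32) & 0x3FFF, (reg >> 16) & 0x3FFF);
        return 0;
    }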
-
-void
-ssogws_reset(struct otx2_ssogws *ws)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
- uint64_t pend_state;
- uint8_t pend_tt;
- uint64_t tag;
-
- /* Wait till getwork/swtp/waitw/desched completes. */
- do {
- pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
- rte_mb();
- } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58)));
-
- tag = otx2_read64(base + SSOW_LF_GWS_TAG);
- pend_tt = (tag >> 32) & 0x3;
- if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
- if (pend_tt == SSO_SYNC_ATOMIC || pend_tt == SSO_SYNC_ORDERED)
- otx2_ssogws_swtag_untag(ws);
- otx2_ssogws_desched(ws);
- }
- rte_mb();
-
- /* Wait for desched to complete. */
- do {
- pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
- rte_mb();
- } while (pend_state & BIT_ULL(58));
-}
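
ssogws_reset() is a poll-until-idle sequence: spin while any of the GETWORK/SWTAG/DESCHED pending bits (63, 62, 58) of PENDSTATE is set, release any tag still held, then wait again for the deschedule to drain. A runnable sketch of that polling shape with a simulated register; fake_pendstate() stands in for otx2_read64() on PENDSTATE:

    #include <stdint.h>
    #include <stdio.h>

    #define PEND_GETWORK (1ULL << 63)
    #define PEND_SWTAG   (1ULL << 62)
    #define PEND_DESCHED (1ULL << 58)

    /* Simulated PENDSTATE: busy bits drop after a few polls. */
    static uint64_t fake_pendstate(void)
    {
        static int polls;
        return (polls++ < 3) ? (PEND_GETWORK | PEND_DESCHED) : 0;
    }

    int main(void)
    {
        uint64_t s;
        int iters = 0;

        do {
            s = fake_pendstate();
            iters++;
        } while (s & (PEND_GETWORK | PEND_SWTAG | PEND_DESCHED));

        printf("workslot idle after %d polls\n", iters);
        return 0;
    }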
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
deleted file mode 100644
index aa766c6602..0000000000
--- a/drivers/event/octeontx2/otx2_worker.h
+++ /dev/null
@@ -1,339 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_WORKER_H__
-#define __OTX2_WORKER_H__
-
-#include <rte_common.h>
-#include <rte_branch_prediction.h>
-
-#include <otx2_common.h>
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_rx.h"
-#include "otx2_ethdev_sec_tx.h"
-
-/* SSO Operations */
-
-static __rte_always_inline uint16_t
-otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev,
- const uint32_t flags, const void * const lookup_mem)
-{
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
- otx2_write64(BIT_ULL(16) | /* wait for work. */
- 1, /* Use Mask set 0. */
- ws->getwrk_op);
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F)
- rte_prefetch_non_temporal(lookup_mem);
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbz %[tag], 63, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8] \n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
-
- get_work1 = otx2_read64(ws->wqp_op);
- rte_prefetch0((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch0((const void *)mbuf);
-#endif
-
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY) {
- if ((flags & NIX_RX_OFFLOAD_SECURITY_F) &&
- (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
- get_work1 = otx2_handle_crypto_event(get_work1);
- } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
- (uint32_t) event.get_work0, flags,
- lookup_mem);
-			/* Extract the timestamp if PTP is enabled */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)
- get_work1) +
- OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf,
- ws->tstamp, flags,
- (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
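
The three-way mask-and-shift above converts the raw GETWORK tag word into rte_event layout: the tag type moves from bits [33:32] to the sched_type field at [39:38], the group moves from [45:36] up to bit 40 where the queue_id field begins, and the low 32 tag bits stay in place. A self-checking standalone version of the same transform:

    #include <stdint.h>
    #include <assert.h>
    #include <stdio.h>

    /* Remap the SSO GETWORK tag word into rte_event layout, as in
     * otx2_ssogws_get_work() above.
     */
    static uint64_t sso_to_event(uint64_t w0)
    {
        return (w0 & (0x3ULL << 32)) << 6 |
               (w0 & (0x3FFULL << 36)) << 4 |
               (w0 & 0xFFFFFFFFULL);
    }

    int main(void)
    {
        uint64_t tt = 1, grp = 7, tag = 0xdeadbeef;
        uint64_t w0 = (grp << 36) | (tt << 32) | tag;
        uint64_t ev = sso_to_event(w0);

        assert(((ev >> 38) & 0x3) == tt);    /* sched_type */
        assert(((ev >> 40) & 0x3FF) == grp); /* queue_id */
        assert((ev & 0xFFFFFFFF) == tag);    /* flow/tag bits */
        printf("event word = 0x%llx\n", (unsigned long long)ev);
        return 0;
    }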
-
-/* Used in cleaning up workslot. */
-static __rte_always_inline uint16_t
-otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev,
- const uint32_t flags)
-{
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbz %[tag], 63, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8] \n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
-
- get_work1 = otx2_read64(ws->wqp_op);
- rte_prefetch_non_temporal((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch_non_temporal((const void *)mbuf);
-#endif
-
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY &&
- event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
- (uint32_t) event.get_work0, flags, NULL);
-		/* Extract the timestamp if PTP is enabled */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)get_work1)
- + OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, ws->tstamp,
- flags, (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_add_work(struct otx2_ssogws *ws, const uint64_t event_ptr,
- const uint32_t tag, const uint8_t new_tt,
- const uint16_t grp)
-{
- uint64_t add_work0;
-
- add_work0 = tag | ((uint64_t)(new_tt) << 32);
- otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_desched(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt,
- uint16_t grp)
-{
- uint64_t val;
-
- val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
- otx2_write64(val, ws->swtag_desched_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_norm(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt)
-{
- uint64_t val;
-
- val = tag | ((uint64_t)(new_tt & 0x3) << 32);
- otx2_write64(val, ws->swtag_norm_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_untag(struct otx2_ssogws *ws)
-{
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_SWTAG_UNTAG);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
-{
- if (OTX2_SSOW_TT_FROM_TAG(otx2_read64(tag_op)) == SSO_TT_EMPTY)
- return;
- otx2_write64(0, flush_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_desched(struct otx2_ssogws *ws)
-{
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_DESCHED);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_wait(struct otx2_ssogws *ws)
-{
-#ifdef RTE_ARCH_ARM64
- uint64_t swtp;
-
- asm volatile(" ldr %[swtb], [%[swtp_loc]] \n"
- " tbz %[swtb], 62, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[swtb], [%[swtp_loc]] \n"
- " tbnz %[swtb], 62, rty%= \n"
- "done%=: \n"
- : [swtb] "=&r" (swtp)
- : [swtp_loc] "r" (ws->tag_op));
-#else
- /* Wait for the SWTAG/SWTAG_FULL operation */
- while (otx2_read64(ws->tag_op) & BIT_ULL(62))
- ;
-#endif
-}
-
-static __rte_always_inline void
-otx2_ssogws_head_wait(uint64_t tag_op)
-{
-#ifdef RTE_ARCH_ARM64
- uint64_t tag;
-
- asm volatile (
- " ldr %[tag], [%[tag_op]] \n"
- " tbnz %[tag], 35, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_op]] \n"
- " tbz %[tag], 35, rty%= \n"
- "done%=: \n"
- : [tag] "=&r" (tag)
- : [tag_op] "r" (tag_op)
- );
-#else
- /* Wait for the HEAD to be set */
- while (!(otx2_read64(tag_op) & BIT_ULL(35)))
- ;
-#endif
-}
-
-static __rte_always_inline const struct otx2_eth_txq *
-otx2_ssogws_xtract_meta(struct rte_mbuf *m,
- const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])
-{
- return (const struct otx2_eth_txq *)txq_data[m->port][
- rte_event_eth_tx_adapter_txq_get(m)];
-}
-
-static __rte_always_inline void
-otx2_ssogws_prepare_pkt(const struct otx2_eth_txq *txq, struct rte_mbuf *m,
- uint64_t *cmd, const uint32_t flags)
-{
- otx2_lmt_mov(cmd, txq->cmd, otx2_nix_tx_ext_subs(flags));
- otx2_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt);
-}
-
-static __rte_always_inline uint16_t
-otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
- const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
- const uint32_t flags)
-{
- struct rte_mbuf *m = ev->mbuf;
- const struct otx2_eth_txq *txq;
- uint16_t ref_cnt = m->refcnt;
-
- if ((flags & NIX_TX_OFFLOAD_SECURITY_F) &&
- (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
- txq = otx2_ssogws_xtract_meta(m, txq_data);
- return otx2_sec_event_tx(base, ev, m, txq, flags);
- }
-
- /* Perform header writes before barrier for TSO */
- otx2_nix_xmit_prepare_tso(m, flags);
-	/* Commit any changes to the packet here when fast free is set,
-	 * as no further changes will be made to the mbuf. When fast
-	 * free is not set, both otx2_nix_prepare_mseg() and
-	 * otx2_nix_xmit_prepare() have a barrier after the refcnt update.
-	 */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
- txq = otx2_ssogws_xtract_meta(m, txq_data);
- otx2_ssogws_prepare_pkt(txq, m, cmd, flags);
-
- if (flags & NIX_TX_MULTI_SEG_F) {
- const uint16_t segdw = otx2_nix_prepare_mseg(m, cmd, flags);
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- m->ol_flags, segdw, flags);
- if (!ev->sched_type) {
- otx2_nix_xmit_mseg_prep_lmt(cmd, txq->lmt_addr, segdw);
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
- if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0)
- otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr,
- txq->io_addr, segdw);
- } else {
- otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr,
- txq->io_addr, segdw);
- }
- } else {
-		/* Pass the number of segdw as 4: HDR + EXT + SG + SMEM */
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- m->ol_flags, 4, flags);
-
- if (!ev->sched_type) {
- otx2_nix_xmit_prep_lmt(cmd, txq->lmt_addr, flags);
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
- if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0)
- otx2_nix_xmit_one(cmd, txq->lmt_addr,
- txq->io_addr, flags);
- } else {
- otx2_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr,
- flags);
- }
- }
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- if (ref_cnt > 1)
- return 1;
- }
-
- otx2_ssogws_swtag_flush(base + SSOW_LF_GWS_TAG,
- base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
-
- return 1;
-}
-
-#endif
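
One detail of otx2_ssogws_event_tx() worth calling out: an ORDERED event (ev->sched_type == 0, since RTE_SCHED_TYPE_ORDERED is 0) must reach the head of its flow (GWS_TAG bit 35, polled by otx2_ssogws_head_wait()) before the LMTST is submitted, so packets hit the wire in event order; ATOMIC and untagged events transmit immediately. A toy model of that gate, where head_is_set() simulates the bit-35 poll:

    #include <stdbool.h>
    #include <stdio.h>

    static bool head_is_set(void) { static int n; return ++n > 2; }

    static void transmit(const char *what)
    {
        printf("tx: %s\n", what);
    }

    static void event_tx(int sched_type)
    {
        if (sched_type == 0) {        /* ORDERED */
            while (!head_is_set())
                ;                     /* spin until we are the flow head */
            transmit("ordered pkt (at flow head)");
        } else {
            transmit("atomic pkt (no head wait)");
        }
    }

    int main(void)
    {
        event_tx(0);
        event_tx(1);
        return 0;
    }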
diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
deleted file mode 100644
index 81af4ca904..0000000000
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ /dev/null
@@ -1,345 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_worker_dual.h"
-#include "otx2_worker.h"
-
-static __rte_noinline uint8_t
-otx2_ssogws_dual_new_event(struct otx2_ssogws_dual *ws,
- const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint64_t event_ptr = ev->u64;
- const uint16_t grp = ev->queue_id;
-
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- otx2_ssogws_dual_add_work(ws, event_ptr, tag, new_tt, grp);
-
- return 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_fwd_swtag(struct otx2_ssogws_state *ws,
- const struct rte_event *ev)
-{
- const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op));
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- /* 96XX model
- * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
- *
- * SSO_SYNC_ORDERED norm norm untag
- * SSO_SYNC_ATOMIC norm norm untag
- * SSO_SYNC_UNTAGGED norm norm NOOP
- */
- if (new_tt == SSO_SYNC_UNTAGGED) {
- if (cur_tt != SSO_SYNC_UNTAGGED)
- otx2_ssogws_swtag_untag((struct otx2_ssogws *)ws);
- } else {
- otx2_ssogws_swtag_norm((struct otx2_ssogws *)ws, tag, new_tt);
- }
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_fwd_group(struct otx2_ssogws_state *ws,
- const struct rte_event *ev, const uint16_t grp)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_UPD_WQP_GRP1);
- rte_smp_wmb();
- otx2_ssogws_swtag_desched((struct otx2_ssogws *)ws, tag, new_tt, grp);
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws,
- struct otx2_ssogws_state *vws,
- const struct rte_event *ev)
-{
- const uint8_t grp = ev->queue_id;
-
-	/* Group hasn't changed; use SWTAG to forward the event */
- if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(vws->tag_op)) == grp) {
- otx2_ssogws_dual_fwd_swtag(vws, ev);
- ws->swtag_req = 1;
- } else {
-		/*
-		 * The group has changed for group-based work pipelining;
-		 * use the deschedule/add_work operation to transfer the
-		 * event to the new group/core.
-		 */
- otx2_ssogws_dual_fwd_group(vws, ev, grp);
- }
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq(void *port, const struct rte_event *ev)
-{
- struct otx2_ssogws_dual *ws = port;
- struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- rte_smp_mb();
- return otx2_ssogws_dual_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- otx2_ssogws_dual_forward_event(ws, vws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- otx2_ssogws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return otx2_ssogws_dual_enq(port, ev);
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
- uint16_t i, rc = 1;
-
- rte_smp_mb();
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- for (i = 0; i < nb_events && rc; i++)
- rc = otx2_ssogws_dual_new_event(ws, &ev[i]);
-
- return nb_events;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
- struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
-
- RTE_SET_USED(nb_events);
- otx2_ssogws_dual_forward_event(ws, vws, ev);
-
- return 1;
-}
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint8_t gw; \
- \
- rte_prefetch_non_temporal(ws); \
- RTE_SET_USED(timeout_ticks); \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags, ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t iter; \
- uint8_t gw; \
- \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags, ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], \
- ev, flags, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- } \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_timeout_ ##name(port, ev, \
- timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint8_t gw; \
- \
- RTE_SET_USED(timeout_ticks); \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_seg_ ##name(port, ev, \
- timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t iter; \
- uint8_t gw; \
- \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], \
- ev, flags | \
- NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- } \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_seg_timeout_ ##name(port, ev, \
- timeout_ticks); \
-}
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t cmd[sz]; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \
- cmd, (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, flags); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- struct otx2_ssogws_dual *ws = port; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \
- cmd, (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- (flags) | NIX_TX_MULTI_SEG_F);\
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h
deleted file mode 100644
index 36ae4dd88f..0000000000
--- a/drivers/event/octeontx2/otx2_worker_dual.h
+++ /dev/null
@@ -1,110 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_WORKER_DUAL_H__
-#define __OTX2_WORKER_DUAL_H__
-
-#include <rte_branch_prediction.h>
-#include <rte_common.h>
-
-#include <otx2_common.h>
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_rx.h"
-
-/* SSO Operations */
-static __rte_always_inline uint16_t
-otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws,
- struct otx2_ssogws_state *ws_pair,
- struct rte_event *ev, const uint32_t flags,
- const void * const lookup_mem,
- struct otx2_timesync_info * const tstamp)
-{
- const uint64_t set_gw = BIT_ULL(16) | 1;
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F)
- rte_prefetch_non_temporal(lookup_mem);
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- "rty%=: \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: str %[gw], [%[pong]] \n"
- " dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8]\n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op),
- [gw] "r" (set_gw),
- [pong] "r" (ws_pair->getwrk_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
- get_work1 = otx2_read64(ws->wqp_op);
- otx2_write64(set_gw, ws_pair->getwrk_op);
-
- rte_prefetch0((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch0((const void *)mbuf);
-#endif
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY) {
- if ((flags & NIX_RX_OFFLOAD_SECURITY_F) &&
- (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
- get_work1 = otx2_handle_crypto_event(get_work1);
- } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- uint8_t port = event.sub_event_type;
-
- event.sub_event_type = 0;
- otx2_wqe_to_mbuf(get_work1, mbuf, port,
- event.flow_id, flags, lookup_mem);
-			/* Extract the timestamp if PTP is enabled. CGX
-			 * prepends the timestamp at the start of the packet
-			 * data, and it can be derived from WQE dword 9,
-			 * which corresponds to the SG iova.
-			 * rte_pktmbuf_mtod_offset() could be used for this,
-			 * but it hurts performance because it reads
-			 * mbuf->buf_addr, which is not in cache on the
-			 * general fast path.
-			 */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)
- get_work1) +
- OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, tstamp,
- flags, (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
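
The dual workslot gets its throughput from ping-ponging: while the result of slot vws is being consumed, a fresh GETWORK (set_gw) has already been posted to the paired slot, hiding the schedule latency behind the current event's processing. A toy model of the alternation; arm() and poll() simulate the getwrk_op write and the tag/wqp read:

    #include <stdbool.h>
    #include <stdio.h>

    struct slot { bool armed; int seq; };

    static int pending = 5; /* simulated events left in the queue */

    static void arm(struct slot *s) { s->armed = true; }

    static int poll(struct slot *s)
    {
        if (!s->armed || pending == 0)
            return 0;
        s->armed = false;
        pending--;
        return ++s->seq;
    }

    int main(void)
    {
        struct slot ws[2] = { {0}, {0} };
        int vws = 0, ev;

        arm(&ws[0]); /* prime the first slot */
        for (;;) {
            arm(&ws[!vws]);      /* post GETWORK to the pair slot */
            ev = poll(&ws[vws]); /* consume the armed slot */
            if (ev == 0)
                break;
            printf("slot %d delivered event #%d\n", vws, ev);
            vws = !vws;          /* mirror ws->vws = !ws->vws */
        }
        return 0;
    }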
-
-static __rte_always_inline void
-otx2_ssogws_dual_add_work(struct otx2_ssogws_dual *ws, const uint64_t event_ptr,
- const uint32_t tag, const uint8_t new_tt,
- const uint16_t grp)
-{
- uint64_t add_work0;
-
- add_work0 = tag | ((uint64_t)(new_tt) << 32);
- otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
-}
-
-#endif
diff --git a/drivers/event/octeontx2/version.map b/drivers/event/octeontx2/version.map
deleted file mode 100644
index c2e0723b4c..0000000000
--- a/drivers/event/octeontx2/version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_22 {
- local: *;
-};
diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
index 57be33b862..ea473552dd 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.c
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -161,48 +161,20 @@ npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id npa_pci_map[] = {
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA,
- },
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_VF),
{
.vendor_id = 0,
},
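
The rewritten table relies on a CNXK_PCI_ID() helper rather than spelling out each rte_pci_id initializer. Assuming it is the usual cnxk-style helper, each entry expands to roughly the struct the old table wrote out by hand. The sketch below is self-contained, with stand-in type and constant names; the real vendor/device/subsystem IDs live in the cnxk headers:

    #include <stdint.h>
    #include <stdio.h>

    /* Field names mirror struct rte_pci_id; values are stand-ins. */
    struct pci_id {
        uint32_t class_id;
        uint16_t vendor_id, device_id;
        uint16_t subsystem_vendor_id, subsystem_device_id;
    };

    #define CLASS_ANY     0xffffffu
    #define VENDOR_CAVIUM 0x177d

    #define CNXK_PCI_ID(subsys, dev) \
        { CLASS_ANY, VENDOR_CAVIUM, (dev), VENDOR_CAVIUM, (subsys) }

    static const struct pci_id map[] = {
        CNXK_PCI_ID(0xB900 /* CN10KA, stand-in */, 0xA0FB /* NPA PF, stand-in */),
        { 0 },
    };

    int main(void)
    {
        printf("dev=0x%x subsys=0x%x\n",
               map[0].device_id, map[0].subsystem_device_id);
        return 0;
    }

Collapsing six hand-written initializers into one macro per (subsystem, device) pair is what lets the new table cover the CN9K variants as well without growing the diff.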
diff --git a/drivers/mempool/meson.build b/drivers/mempool/meson.build
index d295263b87..dc88812585 100644
--- a/drivers/mempool/meson.build
+++ b/drivers/mempool/meson.build
@@ -7,7 +7,6 @@ drivers = [
'dpaa',
'dpaa2',
'octeontx',
- 'octeontx2',
'ring',
'stack',
]
diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build
deleted file mode 100644
index a4bea6d364..0000000000
--- a/drivers/mempool/octeontx2/meson.build
+++ /dev/null
@@ -1,18 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_mempool.c',
- 'otx2_mempool_debug.c',
- 'otx2_mempool_irq.c',
- 'otx2_mempool_ops.c',
-)
-
-deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool']
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
deleted file mode 100644
index f63dc06ef2..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ /dev/null
@@ -1,457 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_io.h>
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_mempool.h"
-
-#define OTX2_NPA_DEV_NAME RTE_STR(otx2_npa_dev_)
-#define OTX2_NPA_DEV_NAME_LEN (sizeof(OTX2_NPA_DEV_NAME) + PCI_PRI_STR_SIZE)
-
-static inline int
-npa_lf_alloc(struct otx2_npa_lf *lf)
-{
- struct otx2_mbox *mbox = lf->mbox;
- struct npa_lf_alloc_req *req;
- struct npa_lf_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_lf_alloc(mbox);
- req->aura_sz = lf->aura_sz;
- req->nr_pools = lf->nr_pools;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return NPA_LF_ERR_ALLOC;
-
- lf->stack_pg_ptrs = rsp->stack_pg_ptrs;
- lf->stack_pg_bytes = rsp->stack_pg_bytes;
- lf->qints = rsp->qints;
-
- return 0;
-}
-
-static int
-npa_lf_free(struct otx2_mbox *mbox)
-{
- otx2_mbox_alloc_msg_npa_lf_free(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npa_lf_init(struct otx2_npa_lf *lf, uintptr_t base, uint8_t aura_sz,
- uint32_t nr_pools, struct otx2_mbox *mbox)
-{
- uint32_t i, bmp_sz;
- int rc;
-
- /* Sanity checks */
- if (!lf || !base || !mbox || !nr_pools)
- return NPA_LF_ERR_PARAM;
-
- if (base & AURA_ID_MASK)
- return NPA_LF_ERR_BASE_INVALID;
-
- if (aura_sz == NPA_AURA_SZ_0 || aura_sz >= NPA_AURA_SZ_MAX)
- return NPA_LF_ERR_PARAM;
-
- memset(lf, 0x0, sizeof(*lf));
- lf->base = base;
- lf->aura_sz = aura_sz;
- lf->nr_pools = nr_pools;
- lf->mbox = mbox;
-
- rc = npa_lf_alloc(lf);
- if (rc)
- goto exit;
-
- bmp_sz = rte_bitmap_get_memory_footprint(nr_pools);
-
- /* Allocate memory for bitmap */
- lf->npa_bmp_mem = rte_zmalloc("npa_bmp_mem", bmp_sz,
- RTE_CACHE_LINE_SIZE);
- if (lf->npa_bmp_mem == NULL) {
- rc = -ENOMEM;
- goto lf_free;
- }
-
- /* Initialize pool resource bitmap array */
- lf->npa_bmp = rte_bitmap_init(nr_pools, lf->npa_bmp_mem, bmp_sz);
- if (lf->npa_bmp == NULL) {
- rc = -EINVAL;
- goto bmap_mem_free;
- }
-
- /* Mark all pools available */
- for (i = 0; i < nr_pools; i++)
- rte_bitmap_set(lf->npa_bmp, i);
-
- /* Allocate memory for qint context */
- lf->npa_qint_mem = rte_zmalloc("npa_qint_mem",
- sizeof(struct otx2_npa_qint) * nr_pools, 0);
- if (lf->npa_qint_mem == NULL) {
- rc = -ENOMEM;
- goto bmap_free;
- }
-
-	/* Allocate memory for the npa_aura_lim array */
- lf->aura_lim = rte_zmalloc("npa_aura_lim_mem",
- sizeof(struct npa_aura_lim) * nr_pools, 0);
- if (lf->aura_lim == NULL) {
- rc = -ENOMEM;
- goto qint_free;
- }
-
- /* Init aura start & end limits */
- for (i = 0; i < nr_pools; i++) {
- lf->aura_lim[i].ptr_start = UINT64_MAX;
- lf->aura_lim[i].ptr_end = 0x0ull;
- }
-
- return 0;
-
-qint_free:
- rte_free(lf->npa_qint_mem);
-bmap_free:
- rte_bitmap_free(lf->npa_bmp);
-bmap_mem_free:
- rte_free(lf->npa_bmp_mem);
-lf_free:
- npa_lf_free(lf->mbox);
-exit:
- return rc;
-}
-
-static int
-npa_lf_fini(struct otx2_npa_lf *lf)
-{
- if (!lf)
- return NPA_LF_ERR_PARAM;
-
- rte_free(lf->aura_lim);
- rte_free(lf->npa_qint_mem);
- rte_bitmap_free(lf->npa_bmp);
- rte_free(lf->npa_bmp_mem);
-
- return npa_lf_free(lf->mbox);
-
-}
-
-static inline uint32_t
-otx2_aura_size_to_u32(uint8_t val)
-{
- if (val == NPA_AURA_SZ_0)
- return 128;
- if (val >= NPA_AURA_SZ_MAX)
- return BIT_ULL(20);
-
- return 1 << (val + 6);
-}
-
-static int
-parse_max_pools(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
- if (val < otx2_aura_size_to_u32(NPA_AURA_SZ_128))
- val = 128;
- if (val > otx2_aura_size_to_u32(NPA_AURA_SZ_1M))
- val = BIT_ULL(20);
-
- *(uint8_t *)extra_args = rte_log2_u32(val) - 6;
- return 0;
-}
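
parse_max_pools() folds the user's max_pools devarg into the hardware aura-size encoding: clamp to [128, 1M], then store log2(val) - 6, so a size code of N means 1 << (N + 6) pools (otx2_aura_size_to_u32() above is the inverse). A standalone mirror of that mapping; log2_ceil_u32() is a local stand-in for rte_log2_u32(), which rounds up to the next power of two:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t log2_ceil_u32(uint32_t v)
    {
        uint32_t r = 0;

        while ((1u << r) < v)
            r++;
        return r;
    }

    static uint8_t pools_to_aura_sz(uint32_t val)
    {
        if (val < 128)
            val = 128;
        if (val > (1u << 20))
            val = 1u << 20;
        return (uint8_t)(log2_ceil_u32(val) - 6);
    }

    int main(void)
    {
        printf("128 -> %u, 4096 -> %u, 1M -> %u\n",
               pools_to_aura_sz(128), pools_to_aura_sz(4096),
               pools_to_aura_sz(1u << 20));
        return 0;
    }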
-
-#define OTX2_MAX_POOLS "max_pools"
-
-static uint8_t
-otx2_parse_aura_size(struct rte_devargs *devargs)
-{
- uint8_t aura_sz = NPA_AURA_SZ_128;
- struct rte_kvargs *kvlist;
-
- if (devargs == NULL)
- goto exit;
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- goto exit;
-
- rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
- otx2_parse_common_devargs(kvlist);
- rte_kvargs_free(kvlist);
-exit:
- return aura_sz;
-}
-
-static inline int
-npa_lf_attach(struct otx2_mbox *mbox)
-{
- struct rsrc_attach_req *req;
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- req->npalf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-npa_lf_detach(struct otx2_mbox *mbox)
-{
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->npalf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-npa_lf_get_msix_offset(struct otx2_mbox *mbox, uint16_t *npa_msixoff)
-{
- struct msix_offset_rsp *msix_rsp;
- int rc;
-
- /* Get NPA and NIX MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- *npa_msixoff = msix_rsp->npa_msixoff;
-
- return rc;
-}
-
-/**
- * @internal
- * Finalize NPA LF.
- */
-int
-otx2_npa_lf_fini(void)
-{
- struct otx2_idev_cfg *idev;
- int rc = 0;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- if (rte_atomic16_add_return(&idev->npa_refcnt, -1) == 0) {
- otx2_npa_unregister_irqs(idev->npa_lf);
- rc |= npa_lf_fini(idev->npa_lf);
- rc |= npa_lf_detach(idev->npa_lf->mbox);
- otx2_npa_set_defaults(idev);
- }
-
- return rc;
-}
-
-/**
- * @internal
- * Initialize NPA LF.
- */
-int
-otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_npa_lf *lf;
- uint16_t npa_msixoff;
- uint32_t nr_pools;
- uint8_t aura_sz;
- int rc;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
-	/* Is the NPA LF already initialized by another driver? */
- if (rte_atomic16_add_return(&idev->npa_refcnt, 1) == 1) {
-
- rc = npa_lf_attach(dev->mbox);
- if (rc)
- goto fail;
-
- rc = npa_lf_get_msix_offset(dev->mbox, &npa_msixoff);
- if (rc)
- goto npa_detach;
-
- aura_sz = otx2_parse_aura_size(pci_dev->device.devargs);
- nr_pools = otx2_aura_size_to_u32(aura_sz);
-
- lf = &dev->npalf;
- rc = npa_lf_init(lf, dev->bar2 + (RVU_BLOCK_ADDR_NPA << 20),
- aura_sz, nr_pools, dev->mbox);
-
- if (rc)
- goto npa_detach;
-
- lf->pf_func = dev->pf_func;
- lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = pci_dev->intr_handle;
- lf->pci_dev = pci_dev;
-
- idev->npa_pf_func = dev->pf_func;
- idev->npa_lf = lf;
- rte_smp_wmb();
- rc = otx2_npa_register_irqs(lf);
- if (rc)
- goto npa_fini;
-
- rte_mbuf_set_platform_mempool_ops("octeontx2_npa");
- otx2_npa_dbg("npa_lf=%p pools=%d sz=%d pf_func=0x%x msix=0x%x",
- lf, nr_pools, aura_sz, lf->pf_func, npa_msixoff);
- }
-
- return 0;
-
-npa_fini:
- npa_lf_fini(idev->npa_lf);
-npa_detach:
- npa_lf_detach(dev->mbox);
-fail:
- rte_atomic16_dec(&idev->npa_refcnt);
- return rc;
-}
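
otx2_npa_lf_init()/fini() hang everything off npa_refcnt so that several PMDs (mempool, ethdev, eventdev) can share the single NPA LF: the caller that takes the count to 1 performs attach + init, and the caller that drops it to 0 tears everything down. A minimal sketch of the scheme using C11 atomics; rte_atomic16_add_return() above returns the post-op value, which fetch_add/fetch_sub mimic here:

    #include <stdatomic.h>
    #include <stdio.h>

    static atomic_int npa_refcnt;

    static int shared_init(void)
    {
        if (atomic_fetch_add(&npa_refcnt, 1) + 1 == 1)
            printf("first user: attach + init NPA LF\n");
        return 0;
    }

    static void shared_fini(void)
    {
        if (atomic_fetch_sub(&npa_refcnt, 1) - 1 == 0)
            printf("last user: fini + detach NPA LF\n");
    }

    int main(void)
    {
        shared_init(); /* e.g. mempool PMD */
        shared_init(); /* e.g. ethdev PMD reusing the LF */
        shared_fini();
        shared_fini(); /* teardown happens here */
        return 0;
    }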
-
-static inline char*
-otx2_npa_dev_to_name(struct rte_pci_device *pci_dev, char *name)
-{
- snprintf(name, OTX2_NPA_DEV_NAME_LEN,
- OTX2_NPA_DEV_NAME PCI_PRI_FMT,
- pci_dev->addr.domain, pci_dev->addr.bus,
- pci_dev->addr.devid, pci_dev->addr.function);
-
- return name;
-}
-
-static int
-otx2_npa_init(struct rte_pci_device *pci_dev)
-{
- char name[OTX2_NPA_DEV_NAME_LEN];
- const struct rte_memzone *mz;
- struct otx2_dev *dev;
- int rc = -ENOMEM;
-
- mz = rte_memzone_reserve_aligned(otx2_npa_dev_to_name(pci_dev, name),
- sizeof(*dev), SOCKET_ID_ANY,
- 0, OTX2_ALIGN);
- if (mz == NULL)
- goto error;
-
- dev = mz->addr;
-
- /* Initialize the base otx2_dev object */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc)
- goto malloc_fail;
-
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc)
- goto dev_uninit;
-
- dev->drv_inited = true;
- return 0;
-
-dev_uninit:
- otx2_npa_lf_fini();
- otx2_dev_fini(pci_dev, dev);
-malloc_fail:
- rte_memzone_free(mz);
-error:
- otx2_err("Failed to initialize npa device rc=%d", rc);
- return rc;
-}
-
-static int
-otx2_npa_fini(struct rte_pci_device *pci_dev)
-{
- char name[OTX2_NPA_DEV_NAME_LEN];
- const struct rte_memzone *mz;
- struct otx2_dev *dev;
-
- mz = rte_memzone_lookup(otx2_npa_dev_to_name(pci_dev, name));
- if (mz == NULL)
- return -EINVAL;
-
- dev = mz->addr;
- if (!dev->drv_inited)
- goto dev_fini;
-
- dev->drv_inited = false;
- otx2_npa_lf_fini();
-
-dev_fini:
- if (otx2_npa_lf_active(dev)) {
- otx2_info("%s: common resource in use by other devices",
- pci_dev->name);
- return -EAGAIN;
- }
-
- otx2_dev_fini(pci_dev, dev);
- rte_memzone_free(mz);
-
- return 0;
-}
-
-static int
-npa_remove(struct rte_pci_device *pci_dev)
-{
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- return otx2_npa_fini(pci_dev);
-}
-
-static int
-npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- RTE_SET_USED(pci_drv);
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- return otx2_npa_init(pci_dev);
-}
-
-static const struct rte_pci_id pci_npa_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_NPA_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_NPA_VF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_npa = {
- .id_table = pci_npa_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
- .probe = npa_probe,
- .remove = npa_remove,
-};
-
-RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
-RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
-RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
- OTX2_MAX_POOLS "=<128-1048576>"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h
deleted file mode 100644
index 8aa548248d..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.h
+++ /dev/null
@@ -1,221 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MEMPOOL_H__
-#define __OTX2_MEMPOOL_H__
-
-#include <rte_bitmap.h>
-#include <rte_bus_pci.h>
-#include <rte_devargs.h>
-#include <rte_mempool.h>
-
-#include "otx2_common.h"
-#include "otx2_mbox.h"
-
-enum npa_lf_status {
- NPA_LF_ERR_PARAM = -512,
- NPA_LF_ERR_ALLOC = -513,
- NPA_LF_ERR_INVALID_BLOCK_SZ = -514,
- NPA_LF_ERR_AURA_ID_ALLOC = -515,
- NPA_LF_ERR_AURA_POOL_INIT = -516,
- NPA_LF_ERR_AURA_POOL_FINI = -517,
- NPA_LF_ERR_BASE_INVALID = -518,
-};
-
-struct otx2_npa_lf;
-struct otx2_npa_qint {
- struct otx2_npa_lf *lf;
- uint8_t qintx;
-};
-
-struct npa_aura_lim {
- uint64_t ptr_start;
- uint64_t ptr_end;
-};
-
-struct otx2_npa_lf {
- uint16_t qints;
- uintptr_t base;
- uint8_t aura_sz;
- uint16_t pf_func;
- uint32_t nr_pools;
- void *npa_bmp_mem;
- void *npa_qint_mem;
- uint16_t npa_msixoff;
- struct otx2_mbox *mbox;
- uint32_t stack_pg_ptrs;
- uint32_t stack_pg_bytes;
- struct rte_bitmap *npa_bmp;
- struct npa_aura_lim *aura_lim;
- struct rte_pci_device *pci_dev;
- struct rte_intr_handle *intr_handle;
-};
-
-#define AURA_ID_MASK (BIT_ULL(16) - 1)
-
-/*
- * Generate a 64-bit handle for optimized aura alloc and free operations.
- * The range 0 - AURA_ID_MASK stores the aura_id.
- * The range AURA_ID_MASK+1 - (2^64 - 1) stores the LF base address.
- * This scheme is valid as long as the OS provides an AURA_ID_MASK-
- * aligned address for the LF base.
- */
-static inline uint64_t
-npa_lf_aura_handle_gen(uint32_t aura_id, uintptr_t addr)
-{
- uint64_t val;
-
- val = aura_id & AURA_ID_MASK;
- return (uint64_t)addr | val;
-}
-
-static inline uint64_t
-npa_lf_aura_handle_to_aura(uint64_t aura_handle)
-{
- return aura_handle & AURA_ID_MASK;
-}
-
-static inline uintptr_t
-npa_lf_aura_handle_to_base(uint64_t aura_handle)
-{
- return (uintptr_t)(aura_handle & ~AURA_ID_MASK);
-}
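
The handle round-trips losslessly because the LF base is 64 KB aligned, leaving the low 16 bits free to carry the aura id. A self-checking demo of the encode/decode above; the sample base address is arbitrary:

    #include <stdint.h>
    #include <assert.h>
    #include <stdio.h>

    #define AURA_ID_MASK ((1ULL << 16) - 1)

    static uint64_t handle_gen(uint32_t aura_id, uint64_t base)
    {
        return base | (aura_id & AURA_ID_MASK);
    }

    int main(void)
    {
        uint64_t base = 0x840000000000ULL; /* sample aligned base */
        uint64_t h = handle_gen(42, base);

        assert((h & AURA_ID_MASK) == 42);    /* ..._handle_to_aura() */
        assert((h & ~AURA_ID_MASK) == base); /* ..._handle_to_base() */
        printf("handle=0x%llx\n", (unsigned long long)h);
        return 0;
    }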
-
-static inline uint64_t
-npa_lf_aura_op_alloc(uint64_t aura_handle, const int drop)
-{
- uint64_t wdata = npa_lf_aura_handle_to_aura(aura_handle);
-
- if (drop)
- wdata |= BIT_ULL(63); /* DROP */
-
- return otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_ALLOCX(0)));
-}
-
-static inline void
-npa_lf_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova)
-{
- uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
-
- if (fabs)
- reg |= BIT_ULL(63); /* FABS */
-
- otx2_store_pair(iova, reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0);
-}
-
-static inline uint64_t
-npa_lf_aura_op_cnt_get(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_CNT));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
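
All of these aura op helpers share one result convention: the atomic add returns a register whose bit 42 is OP_ERR and whose low 36 bits (mask 0xFFFFFFFFF) carry the counter. A standalone decode of that convention:

    #include <stdint.h>
    #include <stdio.h>

    static uint64_t op_result_to_count(uint64_t reg)
    {
        if (reg & (1ULL << 42)) /* OP_ERR */
            return 0;
        return reg & 0xFFFFFFFFFULL;
    }

    int main(void)
    {
        uint64_t ok  = 123456;
        uint64_t err = (1ULL << 42) | 7;

        printf("ok=%llu err=%llu\n",
               (unsigned long long)op_result_to_count(ok),
               (unsigned long long)op_result_to_count(err));
        return 0;
    }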
-
-static inline void
-npa_lf_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count)
-{
- uint64_t reg = count & (BIT_ULL(36) - 1);
-
- if (sign)
- reg |= BIT_ULL(43); /* CNT_ADD */
-
- reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
-
- otx2_write64(reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_CNT);
-}
-
-static inline uint64_t
-npa_lf_aura_op_limit_get(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_LIMIT));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_limit_set(uint64_t aura_handle, uint64_t limit)
-{
- uint64_t reg = limit & (BIT_ULL(36) - 1);
-
- reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
-
- otx2_write64(reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_LIMIT);
-}
-
-static inline uint64_t
-npa_lf_aura_op_available(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(
- aura_handle) + NPA_LF_POOL_OP_AVAILABLE));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
- uint64_t end_iova)
-{
- uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- struct npa_aura_lim *lim = lf->aura_lim;
-
- lim[reg].ptr_start = RTE_MIN(lim[reg].ptr_start, start_iova);
- lim[reg].ptr_end = RTE_MAX(lim[reg].ptr_end, end_iova);
-
- otx2_store_pair(lim[reg].ptr_start, reg,
- npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_POOL_OP_PTR_START0);
- otx2_store_pair(lim[reg].ptr_end, reg,
- npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_POOL_OP_PTR_END0);
-}
-
-/* NPA LF */
-__rte_internal
-int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev);
-__rte_internal
-int otx2_npa_lf_fini(void);
-
-/* IRQ */
-int otx2_npa_register_irqs(struct otx2_npa_lf *lf);
-void otx2_npa_unregister_irqs(struct otx2_npa_lf *lf);
-
-/* Debug */
-int otx2_mempool_ctx_dump(struct otx2_npa_lf *lf);
-
-#endif /* __OTX2_MEMPOOL_H__ */
diff --git a/drivers/mempool/octeontx2/otx2_mempool_debug.c b/drivers/mempool/octeontx2/otx2_mempool_debug.c
deleted file mode 100644
index 279ea2e25f..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_debug.c
+++ /dev/null
@@ -1,135 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_mempool.h"
-
-#define npa_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
-
-static inline void
-npa_lf_pool_dump(__otx2_io struct npa_pool_s *pool)
-{
- npa_dump("W0: Stack base\t\t0x%"PRIx64"", pool->stack_base);
- npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d",
- pool->ena, pool->nat_align, pool->stack_caching);
- npa_dump("W1: stack_way_mask\t%d\nW1: buf_offset\t\t%d",
- pool->stack_way_mask, pool->buf_offset);
- npa_dump("W1: buf_size \t\t%d", pool->buf_size);
-
- npa_dump("W2: stack_max_pages \t%d\nW2: stack_pages\t\t%d",
- pool->stack_max_pages, pool->stack_pages);
-
- npa_dump("W3: op_pc \t\t0x%"PRIx64"", (uint64_t)pool->op_pc);
-
- npa_dump("W4: stack_offset\t%d\nW4: shift\t\t%d\nW4: avg_level\t\t%d",
- pool->stack_offset, pool->shift, pool->avg_level);
- npa_dump("W4: avg_con \t\t%d\nW4: fc_ena\t\t%d\nW4: fc_stype\t\t%d",
- pool->avg_con, pool->fc_ena, pool->fc_stype);
- npa_dump("W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d",
- pool->fc_hyst_bits, pool->fc_up_crossing);
- npa_dump("W4: update_time\t\t%d\n", pool->update_time);
-
- npa_dump("W5: fc_addr\t\t0x%"PRIx64"\n", pool->fc_addr);
-
- npa_dump("W6: ptr_start\t\t0x%"PRIx64"\n", pool->ptr_start);
-
- npa_dump("W7: ptr_end\t\t0x%"PRIx64"\n", pool->ptr_end);
- npa_dump("W8: err_int\t\t%d\nW8: err_int_ena\t\t%d",
- pool->err_int, pool->err_int_ena);
- npa_dump("W8: thresh_int\t\t%d", pool->thresh_int);
-
- npa_dump("W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d",
- pool->thresh_int_ena, pool->thresh_up);
- npa_dump("W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d",
- pool->thresh_qint_idx, pool->err_qint_idx);
-}
-
-static inline void
-npa_lf_aura_dump(__otx2_io struct npa_aura_s *aura)
-{
- npa_dump("W0: Pool addr\t\t0x%"PRIx64"\n", aura->pool_addr);
-
- npa_dump("W1: ena\t\t\t%d\nW1: pool caching\t%d\nW1: pool way mask\t%d",
- aura->ena, aura->pool_caching, aura->pool_way_mask);
- npa_dump("W1: avg con\t\t%d\nW1: pool drop ena\t%d",
- aura->avg_con, aura->pool_drop_ena);
- npa_dump("W1: aura drop ena\t%d", aura->aura_drop_ena);
- npa_dump("W1: bp_ena\t\t%d\nW1: aura drop\t\t%d\nW1: aura shift\t\t%d",
- aura->bp_ena, aura->aura_drop, aura->shift);
- npa_dump("W1: avg_level\t\t%d\n", aura->avg_level);
-
- npa_dump("W2: count\t\t%"PRIx64"\nW2: nix0_bpid\t\t%d",
- (uint64_t)aura->count, aura->nix0_bpid);
- npa_dump("W2: nix1_bpid\t\t%d", aura->nix1_bpid);
-
- npa_dump("W3: limit\t\t%"PRIx64"\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n",
- (uint64_t)aura->limit, aura->bp, aura->fc_ena);
- npa_dump("W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d",
- aura->fc_up_crossing, aura->fc_stype);
-
- npa_dump("W3: fc_hyst_bits\t%d", aura->fc_hyst_bits);
-
- npa_dump("W4: fc_addr\t\t0x%"PRIx64"\n", aura->fc_addr);
-
- npa_dump("W5: pool_drop\t\t%d\nW5: update_time\t\t%d",
- aura->pool_drop, aura->update_time);
- npa_dump("W5: err_int\t\t%d", aura->err_int);
- npa_dump("W5: err_int_ena\t\t%d\nW5: thresh_int\t\t%d",
- aura->err_int_ena, aura->thresh_int);
- npa_dump("W5: thresh_int_ena\t%d", aura->thresh_int_ena);
-
- npa_dump("W5: thresh_up\t\t%d\nW5: thresh_qint_idx\t%d",
- aura->thresh_up, aura->thresh_qint_idx);
- npa_dump("W5: err_qint_idx\t%d", aura->err_qint_idx);
-
- npa_dump("W6: thresh\t\t%"PRIx64"\n", (uint64_t)aura->thresh);
-}
-
-int
-otx2_mempool_ctx_dump(struct otx2_npa_lf *lf)
-{
- struct npa_aq_enq_req *aq;
- struct npa_aq_enq_rsp *rsp;
- uint32_t q;
- int rc = 0;
-
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled POOL */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
- aq->aura_id = q;
- aq->ctype = NPA_AQ_CTYPE_POOL;
- aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get pool(%d) context", q);
- return rc;
- }
- npa_dump("============== pool=%d ===============\n", q);
- npa_lf_pool_dump(&rsp->pool);
- }
-
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled AURA */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
- aq->aura_id = q;
- aq->ctype = NPA_AQ_CTYPE_AURA;
- aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get aura(%d) context", q);
- return rc;
- }
- npa_dump("============== aura=%d ===============\n", q);
- npa_lf_aura_dump(&rsp->aura);
- }
-
- return rc;
-}
diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c
deleted file mode 100644
index 5fa22b9612..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_irq.c
+++ /dev/null
@@ -1,303 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_common.h>
-#include <rte_bus_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-#include "otx2_mempool.h"
-
-static void
-npa_lf_err_irq(void *param)
-{
- struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_ERR_INT);
- if (intr == 0)
- return;
-
- otx2_err("Err_intr=0x%" PRIx64 "", intr);
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_ERR_INT);
-}
-
-static int
-npa_lf_register_err_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int rc, vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
- /* Register err interrupt vector */
- rc = otx2_register_irq(handle, npa_lf_err_irq, lf, vec);
-
- /* Enable hw interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-npa_lf_unregister_err_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
- otx2_unregister_irq(handle, npa_lf_err_irq, lf, vec);
-}
-
-static void
-npa_lf_ras_irq(void *param)
-{
- struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_RAS);
- if (intr == 0)
- return;
-
- otx2_err("Ras_intr=0x%" PRIx64 "", intr);
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_RAS);
-}
-
-static int
-npa_lf_register_ras_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int rc, vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, npa_lf_ras_irq, lf, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S);
-
- return rc;
-}
-
-static void
-npa_lf_unregister_ras_irq(struct otx2_npa_lf *lf)
-{
- int vec;
- struct rte_intr_handle *handle = lf->intr_handle;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
- otx2_unregister_irq(handle, npa_lf_ras_irq, lf, vec);
-}
-
-static inline uint8_t
-npa_lf_q_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t q,
- uint32_t off, uint64_t mask)
-{
- uint64_t reg, wdata;
- uint8_t qint;
-
- wdata = (uint64_t)q << 44;
- reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off));
-
- if (reg & BIT_ULL(42) /* OP_ERR */) {
-		otx2_err("Failed to execute irq get off=0x%x", off);
- return 0;
- }
-
- qint = reg & 0xff;
- wdata &= mask;
- otx2_write64(wdata | qint, lf->base + off);
-
- return qint;
-}
-
-static inline uint8_t
-npa_lf_pool_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t p)
-{
- return npa_lf_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-npa_lf_aura_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t a)
-{
- return npa_lf_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00);
-}
-
-static void
-npa_lf_q_irq(void *param)
-{
- struct otx2_npa_qint *qint = (struct otx2_npa_qint *)param;
- struct otx2_npa_lf *lf = qint->lf;
- uint8_t irq, qintx = qint->qintx;
- uint32_t q, pool, aura;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_QINTX_INT(qintx));
- if (intr == 0)
- return;
-
- otx2_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx);
-
- /* Handle pool queue interrupts */
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled POOL */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- pool = q % lf->qints;
- irq = npa_lf_pool_irq_get_and_clear(lf, pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool);
- }
-
- /* Handle aura queue interrupts */
- for (q = 0; q < lf->nr_pools; q++) {
-
- /* Skip disabled AURA */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aura = q % lf->qints;
- irq = npa_lf_aura_irq_get_and_clear(lf, aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS))
- otx2_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura);
- }
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx));
- otx2_mempool_ctx_dump(lf);
-}
-
-static int
-npa_lf_register_queue_irqs(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec, q, qs, rc = 0;
-
- /* Figure out max qintx required */
- qs = RTE_MIN(lf->qints, lf->nr_pools);
-
- for (q = 0; q < qs; q++) {
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
-
- struct otx2_npa_qint *qintmem = lf->npa_qint_mem;
- qintmem += q;
-
- qintmem->lf = lf;
- qintmem->qintx = q;
-
- /* Sync qints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, npa_lf_q_irq, qintmem, vec);
- if (rc)
- break;
-
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
- otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q));
- /* Enable QINT interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q));
- }
-
- return rc;
-}
-
-static void
-npa_lf_unregister_queue_irqs(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec, q, qs;
-
- /* Figure out max qintx required */
- qs = RTE_MIN(lf->qints, lf->nr_pools);
-
- for (q = 0; q < qs; q++) {
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
- otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
-
- struct otx2_npa_qint *qintmem = lf->npa_qint_mem;
- qintmem += q;
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, npa_lf_q_irq, qintmem, vec);
-
- qintmem->lf = NULL;
- qintmem->qintx = 0;
- }
-}
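
Note the ordering in the register/unregister pair above: quiesce first (zero the QINT count and mask via the W1C register), publish the handler context with a write barrier before attaching the vector, and unmask via W1S only at the end; teardown mirrors it, so no vector can ever fire against stale context. A minimal sketch of that sequence with illustrative stubs (clear_count, disable_w1c, enable_w1s and register_vec are stand-ins for the MMIO writes and otx2_register_irq(), not driver symbols):

    /* Illustrative stubs for the MMIO writes and vector registration. */
    static void clear_count(int q) { (void)q; }
    static void disable_w1c(int q) { (void)q; }
    static void enable_w1s(int q)  { (void)q; }
    static int  register_vec(int vec, void *ctx) { (void)vec; (void)ctx; return 0; }

    static int
    setup_queue_irq(int q, int vec, void *ctx)
    {
    	clear_count(q);			/* 1. reset stale counter state */
    	disable_w1c(q);			/* 2. mask while configuring */
    	/* 3. a write barrier here publishes ctx before the vector can fire */
    	if (register_vec(vec, ctx))	/* 4. attach the handler */
    		return -1;
    	enable_w1s(q);			/* 5. unmask only once all is in place */
    	return 0;
    }

    int main(void) { return setup_queue_irq(0, 0, (void *)0); }
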
-
-int
-otx2_npa_register_irqs(struct otx2_npa_lf *lf)
-{
- int rc;
-
- if (lf->npa_msixoff == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid NPALF MSIX vector offset vector: 0x%x",
- lf->npa_msixoff);
- return -EINVAL;
- }
-
- /* Register lf err interrupt */
- rc = npa_lf_register_err_irq(lf);
- /* Register RAS interrupt */
- rc |= npa_lf_register_ras_irq(lf);
- /* Register queue interrupts */
- rc |= npa_lf_register_queue_irqs(lf);
-
- return rc;
-}
-
-void
-otx2_npa_unregister_irqs(struct otx2_npa_lf *lf)
-{
- npa_lf_unregister_err_irq(lf);
- npa_lf_unregister_ras_irq(lf);
- npa_lf_unregister_queue_irqs(lf);
-}
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
deleted file mode 100644
index 332e4f1cb2..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ /dev/null
@@ -1,901 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_mempool.h>
-#include <rte_vect.h>
-
-#include "otx2_mempool.h"
-
-static int __rte_hot
-otx2_npa_enq(struct rte_mempool *mp, void * const *obj_table, unsigned int n)
-{
- unsigned int index;
- const uint64_t aura_handle = mp->pool_id;
- const uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
- const uint64_t addr = npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_FREE0;
-
- /* Ensure mbuf init changes are written before the free pointers
- * are enqueued to the stack.
- */
- rte_io_wmb();
- for (index = 0; index < n; index++)
- otx2_store_pair((uint64_t)obj_table[index], reg, addr);
-
- return 0;
-}
-
-static __rte_noinline int
-npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr,
- void **obj_table, uint8_t i)
-{
- uint8_t retry = 4;
-
- do {
- obj_table[i] = (void *)otx2_atomic64_add_nosync(wdata, addr);
- if (obj_table[i] != NULL)
- return 0;
-
- } while (retry--);
-
- return -ENOENT;
-}
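
The do/while above runs the atomic pop five times in total (the initial attempt plus four retries) before giving up with -ENOENT; the atomic add returns the allocated pointer, or NULL when the aura is momentarily empty. A standalone check of the loop count:

    #include <stdio.h>

    int main(void)
    {
    	unsigned char retry = 4;
    	int passes = 0;

    	do {
    		passes++;		/* body runs before retry is tested */
    	} while (retry--);

    	printf("%d\n", passes);		/* prints 5: one try plus four retries */
    	return 0;
    }
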
-
-#if defined(RTE_ARCH_ARM64)
-static __rte_noinline int
-npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr,
- void **obj_table, unsigned int n)
-{
- uint8_t i;
-
- for (i = 0; i < n; i++) {
- if (obj_table[i] != NULL)
- continue;
- if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i))
- return -ENOENT;
- }
-
- return 0;
-}
-
-static __rte_noinline int
-npa_lf_aura_op_alloc_bulk(const int64_t wdata, int64_t * const addr,
- unsigned int n, void **obj_table)
-{
- register const uint64_t wdata64 __asm("x26") = wdata;
- register const uint64_t wdata128 __asm("x27") = wdata;
- uint64x2_t failed = vdupq_n_u64(~0);
-
- switch (n) {
- case 32:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x16, x17, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x18, x19, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x20, x21, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x22, x23, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x8\n"
- "fmov v20.D[1], x9\n"
- "fmov d21, x10\n"
- "fmov v21.D[1], x11\n"
- "fmov d22, x12\n"
- "fmov v22.D[1], x13\n"
- "fmov d23, x14\n"
- "fmov v23.D[1], x15\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- "fmov d16, x16\n"
- "fmov v16.D[1], x17\n"
- "fmov d17, x18\n"
- "fmov v17.D[1], x19\n"
- "fmov d18, x20\n"
- "fmov v18.D[1], x21\n"
- "fmov d19, x22\n"
- "fmov v19.D[1], x23\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x0\n"
- "fmov v20.D[1], x1\n"
- "fmov d21, x2\n"
- "fmov v21.D[1], x3\n"
- "fmov d22, x4\n"
- "fmov v22.D[1], x5\n"
- "fmov d23, x6\n"
- "fmov v23.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "x16",
- "x17", "x18", "x19", "x20", "x21", "x22", "x23", "v16", "v17",
- "v18", "v19", "v20", "v21", "v22", "v23"
- );
- break;
- }
- case 16:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x8\n"
- "fmov v20.D[1], x9\n"
- "fmov d21, x10\n"
- "fmov v21.D[1], x11\n"
- "fmov d22, x12\n"
- "fmov v22.D[1], x13\n"
- "fmov d23, x14\n"
- "fmov v23.D[1], x15\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "v16",
- "v17", "v18", "v19", "v20", "v21", "v22", "v23"
- );
- break;
- }
- case 8:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "v16", "v17", "v18", "v19"
- );
- break;
- }
- case 4:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "st1 { v16.2d, v17.2d}, [%[dst]], 32\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "v16", "v17"
- );
- break;
- }
- case 2:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "st1 { v16.2d}, [%[dst]], 16\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "v16"
- );
- break;
- }
- case 1:
- return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0);
- }
-
- if (unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1))))
- return npa_lf_aura_op_search_alloc(wdata, addr, (void **)
- ((char *)obj_table - (sizeof(uint64_t) * n)), n);
-
- return 0;
-}
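
Each casp in the bulk path pops two pointers, and the lanes are AND-ed into the all-ones "failed" accumulator. Since pointers popped from one pool share their high address bits, a non-zero final AND means no pop returned NULL; a zero AND sends the slow path to rescan the table for holes, which is harmless if nothing actually failed. A scalar C sketch of the same detect-and-backfill logic (pop() and bulk_pop() are illustrative stand-ins, not driver symbols):

    #include <stdint.h>

    static void *pop(void)	/* illustrative stub; the driver pops via atomic add */
    {
    	static uintptr_t next = 0x1000;
    	return (void *)(next += 64);	/* never NULL in this toy version */
    }

    static int
    bulk_pop(void **tbl, unsigned int n)
    {
    	uint64_t failed = ~0ULL;	/* all-ones, like vdupq_n_u64(~0) */
    	unsigned int i;

    	for (i = 0; i < n; i++) {
    		tbl[i] = pop();
    		failed &= (uint64_t)(uintptr_t)tbl[i]; /* NULL zeroes it */
    	}

    	if (failed)
    		return 0;	/* shared high bits: no pop returned NULL */

    	/* Slow path: rescan for holes; harmless if nothing actually failed */
    	for (i = 0; i < n; i++)
    		if (tbl[i] == NULL && (tbl[i] = pop()) == NULL)
    			return -1;
    	return 0;
    }

    int main(void)
    {
    	void *tbl[8];
    	return bulk_pop(tbl, 8);
    }
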
-
-static __rte_noinline void
-otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- unsigned int i;
-
- for (i = 0; i < n; i++) {
- if (obj_table[i] != NULL) {
- otx2_npa_enq(mp, &obj_table[i], 1);
- obj_table[i] = NULL;
- }
- }
-}
-
-static __rte_noinline int __rte_hot
-otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
- void **obj_table_bak = obj_table;
- const unsigned int nfree = n;
- unsigned int parts;
-
- int64_t * const addr = (int64_t * const)
- (npa_lf_aura_handle_to_base(mp->pool_id) +
- NPA_LF_AURA_OP_ALLOCX(0));
- while (n) {
- parts = n > 31 ? 32 : rte_align32prevpow2(n);
- n -= parts;
- if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr,
- parts, obj_table))) {
- otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n);
- return -ENOENT;
- }
- obj_table += parts;
- }
-
- return 0;
-}
-
-#else
-
-static inline int __rte_hot
-otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
- unsigned int index;
-
- int64_t * const addr = (int64_t *)
- (npa_lf_aura_handle_to_base(mp->pool_id) +
- NPA_LF_AURA_OP_ALLOCX(0));
- for (index = 0; index < n; index++, obj_table++) {
- /* On success the object pointer is already stored in
- * obj_table[0]; a non-zero return means the aura is empty.
- */
- if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0)) {
- /* Roll back the objects allocated so far */
- for (; index > 0; index--) {
- obj_table--;
- otx2_npa_enq(mp, obj_table, 1);
- }
- return -ENOENT;
- }
- }
-
- return 0;
-}
-
-#endif
-
-static unsigned int
-otx2_npa_get_count(const struct rte_mempool *mp)
-{
- return (unsigned int)npa_lf_aura_op_available(mp->pool_id);
-}
-
-static int
-npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
- struct npa_aura_s *aura, struct npa_pool_s *pool)
-{
- struct npa_aq_enq_req *aura_init_req, *pool_init_req;
- struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct otx2_idev_cfg *idev;
- int rc, off;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- aura_init_req->aura_id = aura_id;
- aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_init_req->op = NPA_AQ_INSTOP_INIT;
- otx2_mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura));
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- pool_init_req->aura_id = aura_id;
- pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_init_req->op = NPA_AQ_INSTOP_INIT;
- otx2_mbox_memcpy(&pool_init_req->pool, pool, sizeof(*pool));
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- aura_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
- off = mbox->rx_start + aura_init_rsp->hdr.next_msgoff;
- pool_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- if (rc != 2 || aura_init_rsp->hdr.rc != 0 || pool_init_rsp->hdr.rc != 0)
- return NPA_LF_ERR_AURA_POOL_INIT;
-
- if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
- return 0;
-
- aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_init_req->aura_id = aura_id;
- aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_init_req->op = NPA_AQ_INSTOP_LOCK;
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (!pool_init_req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK AURA context");
- return -ENOMEM;
- }
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (!pool_init_req) {
- otx2_err("Failed to LOCK POOL context");
- return -ENOMEM;
- }
- }
- pool_init_req->aura_id = aura_id;
- pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_init_req->op = NPA_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to lock POOL ctx to NDC");
- return -ENOMEM;
- }
-
- return 0;
-}
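
The init path above batches the aura and pool AQ requests into one mailbox send; the responses come back in the same shared-memory region and are walked via hdr.next_msgoff, with rc == 2 confirming that exactly two responses arrived. A standalone sketch of that chained-response walk over a flat buffer (struct hdr and the offsets are illustrative, not the driver's actual mailbox layout):

    #include <stdint.h>
    #include <stddef.h>

    struct hdr {
    	uint16_t next_msgoff;	/* offset of the next response from rx_start */
    	int16_t rc;		/* per-message return code */
    };

    /* Walk two chained responses and report success only if both passed. */
    static int
    check_two_rsps(const uint8_t *mbase, size_t rx_start, size_t first_off)
    {
    	const struct hdr *r1 = (const struct hdr *)(mbase + rx_start + first_off);
    	const struct hdr *r2 =
    		(const struct hdr *)(mbase + rx_start + r1->next_msgoff);

    	return (r1->rc == 0 && r2->rc == 0) ? 0 : -1;
    }

    int main(void)
    {
    	uint8_t buf[64] = { 0 };
    	struct hdr *r1 = (struct hdr *)buf;

    	r1->next_msgoff = sizeof(struct hdr);	/* second response follows */
    	return check_two_rsps(buf, 0, 0);
    }
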
-
-static int
-npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
- uint32_t aura_id,
- uint64_t aura_handle)
-{
- struct npa_aq_enq_req *aura_req, *pool_req;
- struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct ndc_sync_op *ndc_req;
- struct otx2_idev_cfg *idev;
- int rc, off;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -EINVAL;
-
- /* Procedure for disabling an aura/pool */
- rte_delay_us(10);
- npa_lf_aura_op_alloc(aura_handle, 0);
-
- pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- pool_req->aura_id = aura_id;
- pool_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_req->op = NPA_AQ_INSTOP_WRITE;
- pool_req->pool.ena = 0;
- pool_req->pool_mask.ena = ~pool_req->pool_mask.ena;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_req->aura_id = aura_id;
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
- aura_req->aura.ena = 0;
- aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- off = mbox->rx_start + pool_rsp->hdr.next_msgoff;
- aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0)
- return NPA_LF_ERR_AURA_POOL_FINI;
-
- /* Sync NDC-NPA for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->npa_lf_sync = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
- return NPA_LF_ERR_AURA_POOL_FINI;
- }
-
- if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
- return 0;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_req->aura_id = aura_id;
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to unlock AURA ctx to NDC");
- return -EINVAL;
- }
-
- pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- pool_req->aura_id = aura_id;
- pool_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to unlock POOL ctx to NDC");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static inline char*
-npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name)
-{
- snprintf(name, RTE_MEMZONE_NAMESIZE, "otx2_npa_stack_%x_%d",
- lf->pf_func, pool_id);
-
- return name;
-}
-
-static inline const struct rte_memzone *
-npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name,
- int pool_id, size_t size)
-{
- return rte_memzone_reserve_aligned(
- npa_lf_stack_memzone_name(lf, pool_id, name), size, 0,
- RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
-}
-
-static inline int
-npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id)
-{
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name));
- if (mz == NULL)
- return -EINVAL;
-
- return rte_memzone_free(mz);
-}
-
-static inline int
-bitmap_ctzll(uint64_t slab)
-{
- if (slab == 0)
- return 0;
-
- return __builtin_ctzll(slab);
-}
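
Aura ids are handed out from a resource bitmap: rte_bitmap_scan() returns a 64-bit slab plus its starting bit position, and the count of trailing zeros picks the first free id inside the slab (the helper above guards the undefined __builtin_ctzll(0) case). A worked standalone example:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
    	uint32_t pos = 64;		/* slab starts at bit 64 of the bitmap */
    	uint64_t slab = 1ULL << 5;	/* bit 5 is the first free entry */
    	int id = pos + __builtin_ctzll(slab);

    	printf("aura_id = %d\n", id);	/* prints 69 */
    	return 0;
    }
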
-
-static int
-npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
- const uint32_t block_count, struct npa_aura_s *aura,
- struct npa_pool_s *pool, uint64_t *aura_handle)
-{
- int rc, aura_id, pool_id, stack_size, alloc_size;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- uint64_t slab;
- uint32_t pos;
-
- /* Sanity check */
- if (!lf || !block_size || !block_count ||
- !pool || !aura || !aura_handle)
- return NPA_LF_ERR_PARAM;
-
- /* Block size should be cache line aligned and in range of 128B-128KB */
- if (block_size % OTX2_ALIGN || block_size < 128 ||
- block_size > 128 * 1024)
- return NPA_LF_ERR_INVALID_BLOCK_SZ;
-
- pos = slab = 0;
- /* Scan from the beginning */
- __rte_bitmap_scan_init(lf->npa_bmp);
- /* Scan bitmap to get the free pool */
- rc = rte_bitmap_scan(lf->npa_bmp, &pos, &slab);
- /* Empty bitmap */
- if (rc == 0) {
- otx2_err("Mempools exhausted, 'max_pools' devargs to increase");
- return -ERANGE;
- }
-
- /* Get aura_id from resource bitmap */
- aura_id = pos + bitmap_ctzll(slab);
- /* Mark pool as reserved */
- rte_bitmap_clear(lf->npa_bmp, aura_id);
-
- /* Configuration based on each aura has separate pool(aura-pool pair) */
- pool_id = aura_id;
- rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools || aura_id >=
- (int)BIT_ULL(6 + lf->aura_sz)) ? NPA_LF_ERR_AURA_ID_ALLOC : 0;
- if (rc)
- goto exit;
-
- /* Allocate stack memory */
- stack_size = (block_count + lf->stack_pg_ptrs - 1) / lf->stack_pg_ptrs;
- alloc_size = stack_size * lf->stack_pg_bytes;
-
- mz = npa_lf_stack_dma_alloc(lf, name, pool_id, alloc_size);
- if (mz == NULL) {
- rc = -ENOMEM;
- goto aura_res_put;
- }
-
- /* Update aura fields */
- aura->pool_addr = pool_id;/* AF will translate to associated poolctx */
- aura->ena = 1;
- aura->shift = rte_log2_u32(block_count);
- aura->shift = aura->shift < 8 ? 0 : aura->shift - 8;
- aura->limit = block_count;
- aura->pool_caching = 1;
- aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS);
- /* Many to one reduction */
- aura->err_qint_idx = aura_id % lf->qints;
-
- /* Update pool fields */
- pool->stack_base = mz->iova;
- pool->ena = 1;
- pool->buf_size = block_size / OTX2_ALIGN;
- pool->stack_max_pages = stack_size;
- pool->shift = rte_log2_u32(block_count);
- pool->shift = pool->shift < 8 ? 0 : pool->shift - 8;
- pool->ptr_start = 0;
- pool->ptr_end = ~0;
- pool->stack_caching = 1;
- pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS);
- pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE);
- pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR);
-
- /* Many to one reduction */
- pool->err_qint_idx = pool_id % lf->qints;
-
- /* Issue AURA_INIT and POOL_INIT op */
- rc = npa_lf_aura_pool_init(lf->mbox, aura_id, aura, pool);
- if (rc)
- goto stack_mem_free;
-
- *aura_handle = npa_lf_aura_handle_gen(aura_id, lf->base);
-
- /* Update aura count */
- npa_lf_aura_op_cnt_set(*aura_handle, 0, block_count);
- /* Read it back to make sure aura count is updated */
- npa_lf_aura_op_cnt_get(*aura_handle);
-
- return 0;
-
-stack_mem_free:
- rte_memzone_free(mz);
-aura_res_put:
- rte_bitmap_set(lf->npa_bmp, aura_id);
-exit:
- return rc;
-}
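
Two pieces of arithmetic above are easiest to see with numbers. The shift programmed into the aura and pool contexts is log2(block_count) minus 8, floored at zero, so an 8192-object pool gets shift 13 - 8 = 5; the stack sizing is a plain ceiling division of block_count by pointers per stack page. A short standalone version of both computations (stack_pg_ptrs = 510 is only an assumed value for illustration, and log2_u32 stands in for rte_log2_u32()):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t log2_u32(uint32_t v)	/* stand-in for rte_log2_u32() */
    {
    	return 31 - __builtin_clz(v);
    }

    int main(void)
    {
    	uint32_t block_count = 8192;
    	uint32_t stack_pg_ptrs = 510;	/* assumed pointers per stack page */
    	uint32_t shift, stack_size;

    	shift = log2_u32(block_count);
    	shift = shift < 8 ? 0 : shift - 8;	/* 13 - 8 = 5 */
    	/* Ceiling division: 17 pages for 8192 pointers at 510 per page */
    	stack_size = (block_count + stack_pg_ptrs - 1) / stack_pg_ptrs;

    	printf("shift=%u stack_pages=%u\n", shift, stack_size);
    	return 0;
    }
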
-
-static int
-npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
-{
- char name[RTE_MEMZONE_NAMESIZE];
- int aura_id, pool_id, rc;
-
- if (!lf || !aura_handle)
- return NPA_LF_ERR_PARAM;
-
- aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle);
- rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle);
- rc |= npa_lf_stack_dma_free(lf, name, pool_id);
-
- rte_bitmap_set(lf->npa_bmp, aura_id);
-
- return rc;
-}
-
-static int
-npa_lf_aura_range_update_check(uint64_t aura_handle)
-{
- uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- struct npa_aura_lim *lim = lf->aura_lim;
- __otx2_io struct npa_pool_s *pool;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
-
- req->aura_id = aura_id;
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
- return rc;
- }
-
- pool = &rsp->pool;
-
- if (lim[aura_id].ptr_start != pool->ptr_start ||
- lim[aura_id].ptr_end != pool->ptr_end) {
- otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
- return -ERANGE;
- }
-
- return 0;
-}
-
-static int
-otx2_npa_alloc(struct rte_mempool *mp)
-{
- uint32_t block_size, block_count;
- uint64_t aura_handle = 0;
- struct otx2_npa_lf *lf;
- struct npa_aura_s aura;
- struct npa_pool_s pool;
- size_t padding;
- int rc;
-
- lf = otx2_npa_lf_obj_get();
- if (lf == NULL) {
- rc = -EINVAL;
- goto error;
- }
-
- block_size = mp->elt_size + mp->header_size + mp->trailer_size;
- /*
- * OCTEON TX2 has an 8-set, 41-way L1D cache; VA<9:7> selects the set.
- * Add padding so that the element size always occupies an odd number
- * of cachelines, giving an even distribution of elements across the
- * L1D cache sets.
- */
- padding = ((block_size / RTE_CACHE_LINE_SIZE) % 2) ? 0 :
- RTE_CACHE_LINE_SIZE;
- mp->trailer_size += padding;
- block_size += padding;
-
- block_count = mp->size;
-
- if (block_size % OTX2_ALIGN != 0) {
- otx2_err("Block size should be multiple of 128B");
- rc = -ERANGE;
- goto error;
- }
-
- memset(&aura, 0, sizeof(struct npa_aura_s));
- memset(&pool, 0, sizeof(struct npa_pool_s));
- pool.nat_align = 1;
- pool.buf_offset = 1;
-
- if ((uint32_t)pool.buf_offset * OTX2_ALIGN != mp->header_size) {
- otx2_err("Unsupported mp->header_size=%d", mp->header_size);
- rc = -EINVAL;
- goto error;
- }
-
- /* Use driver specific mp->pool_config to override aura config */
- if (mp->pool_config != NULL)
- memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
-
- rc = npa_lf_aura_pool_pair_alloc(lf, block_size, block_count,
- &aura, &pool, &aura_handle);
- if (rc) {
- otx2_err("Failed to alloc pool or aura rc=%d", rc);
- goto error;
- }
-
- /* Store aura_handle for future queue operations */
- mp->pool_id = aura_handle;
- otx2_npa_dbg("lf=%p block_sz=%d block_count=%d aura_handle=0x%"PRIx64,
- lf, block_size, block_count, aura_handle);
-
- /* Just hold the reference of the object */
- otx2_npa_lf_obj_ref();
- return 0;
-error:
- return rc;
-}
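
The padding rule above is clearest with numbers: a 256 B element spans four cachelines (even), so 64 B of trailer padding bumps it to five (odd), which makes consecutive elements start in different L1D sets; a 320 B element already spans five lines and gets no padding. A standalone check of the rule (pad_to_odd_cachelines is an illustrative name):

    #include <stdio.h>
    #include <stdint.h>

    #define CACHE_LINE 64

    static uint32_t pad_to_odd_cachelines(uint32_t block_size)
    {
    	/* Pad only when the element spans an even number of cachelines */
    	return ((block_size / CACHE_LINE) % 2) ? 0 : CACHE_LINE;
    }

    int main(void)
    {
    	printf("%u\n", pad_to_odd_cachelines(256));	/* 64: 4 lines -> 5 */
    	printf("%u\n", pad_to_odd_cachelines(320));	/* 0: already 5 lines */
    	return 0;
    }
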
-
-static void
-otx2_npa_free(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- int rc = 0;
-
- otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id);
- if (lf != NULL)
- rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id);
-
- if (rc)
- otx2_err("Failed to free pool or aura rc=%d", rc);
-
- /* Release the reference of npalf */
- otx2_npa_lf_fini();
-}
-
-static ssize_t
-otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
- uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
-{
- size_t total_elt_sz;
-
- /* Need space for one more obj on each chunk to fulfill
- * alignment requirements.
- */
- total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
- return rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
- total_elt_sz, min_chunk_size,
- align);
-}
-
-static uint8_t
-otx2_npa_l1d_way_set_get(uint64_t iova)
-{
- return (iova >> rte_log2_u32(RTE_CACHE_LINE_SIZE)) & 0x7;
-}
-
-static int
-otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
- rte_iova_t iova, size_t len,
- rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
-{
-#define OTX2_L1D_NB_SETS 8
- uint64_t distribution[OTX2_L1D_NB_SETS];
- rte_iova_t start_iova;
- size_t total_elt_sz;
- uint8_t set;
- size_t off;
- int i;
-
- if (iova == RTE_BAD_IOVA)
- return -EINVAL;
-
- total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
- /* Align object start address to a multiple of total_elt_sz */
- off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);
-
- if (len < off)
- return -EINVAL;
-
- vaddr = (char *)vaddr + off;
- iova += off;
- len -= off;
-
- memset(distribution, 0, sizeof(uint64_t) * OTX2_L1D_NB_SETS);
- start_iova = iova;
- while (start_iova < iova + len) {
- set = otx2_npa_l1d_way_set_get(start_iova + mp->header_size);
- distribution[set]++;
- start_iova += total_elt_sz;
- }
-
- otx2_npa_dbg("iova %"PRIx64", aligned iova %"PRIx64"", iova - off,
- iova);
- otx2_npa_dbg("length %"PRIu64", aligned length %"PRIu64"",
- (uint64_t)(len + off), (uint64_t)len);
- otx2_npa_dbg("element size %"PRIu64"", (uint64_t)total_elt_sz);
- otx2_npa_dbg("requested objects %"PRIu64", possible objects %"PRIu64"",
- (uint64_t)max_objs, (uint64_t)(len / total_elt_sz));
- otx2_npa_dbg("L1D set distribution :");
- for (i = 0; i < OTX2_L1D_NB_SETS; i++)
- otx2_npa_dbg("set[%d] : objects : %"PRIu64"", i,
- distribution[i]);
-
- npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
-
- if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
- return -EBUSY;
-
- return rte_mempool_op_populate_helper(mp,
- RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ,
- max_objs, vaddr, iova, len,
- obj_cb, obj_cb_arg);
-}
-
-static struct rte_mempool_ops otx2_npa_ops = {
- .name = "octeontx2_npa",
- .alloc = otx2_npa_alloc,
- .free = otx2_npa_free,
- .enqueue = otx2_npa_enq,
- .get_count = otx2_npa_get_count,
- .calc_mem_size = otx2_npa_calc_mem_size,
- .populate = otx2_npa_populate,
-#if defined(RTE_ARCH_ARM64)
- .dequeue = otx2_npa_deq_arm64,
-#else
- .dequeue = otx2_npa_deq,
-#endif
-};
-
-RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops);
diff --git a/drivers/mempool/octeontx2/version.map b/drivers/mempool/octeontx2/version.map
deleted file mode 100644
index e6887ceb8f..0000000000
--- a/drivers/mempool/octeontx2/version.map
+++ /dev/null
@@ -1,8 +0,0 @@
-INTERNAL {
- global:
-
- otx2_npa_lf_fini;
- otx2_npa_lf_init;
-
- local: *;
-};
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index f8f3d3895e..d34bc6898f 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -579,6 +579,21 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id cn9k_pci_nix_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_AF_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 2355d1cde8..e35652fe63 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -45,7 +45,6 @@ drivers = [
'ngbe',
'null',
'octeontx',
- 'octeontx2',
'octeontx_ep',
'pcap',
'pfe',
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
deleted file mode 100644
index ab15844cbc..0000000000
--- a/drivers/net/octeontx2/meson.build
+++ /dev/null
@@ -1,47 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_rx.c',
- 'otx2_tx.c',
- 'otx2_tm.c',
- 'otx2_rss.c',
- 'otx2_mac.c',
- 'otx2_ptp.c',
- 'otx2_flow.c',
- 'otx2_link.c',
- 'otx2_vlan.c',
- 'otx2_stats.c',
- 'otx2_mcast.c',
- 'otx2_lookup.c',
- 'otx2_ethdev.c',
- 'otx2_flow_ctrl.c',
- 'otx2_flow_dump.c',
- 'otx2_flow_parse.c',
- 'otx2_flow_utils.c',
- 'otx2_ethdev_irq.c',
- 'otx2_ethdev_ops.c',
- 'otx2_ethdev_sec.c',
- 'otx2_ethdev_debug.c',
- 'otx2_ethdev_devargs.c',
-)
-
-deps += ['bus_pci', 'cryptodev', 'eventdev', 'security']
-deps += ['common_octeontx2', 'mempool_octeontx2']
-
-extra_flags = ['-flax-vector-conversions']
-foreach flag: extra_flags
- if cc.has_argument(flag)
- cflags += flag
- endif
-endforeach
-
-includes += include_directories('../../common/cpt')
-includes += include_directories('../../crypto/octeontx2')
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
deleted file mode 100644
index 4f1c0b98de..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ /dev/null
@@ -1,2814 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <ethdev_pci.h>
-#include <rte_io.h>
-#include <rte_malloc.h>
-#include <rte_mbuf.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_mempool.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-
-static inline uint64_t
-nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
-{
- uint64_t capa = NIX_RX_OFFLOAD_CAPA;
-
- if (otx2_dev_is_vf(dev) ||
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
-
- return capa;
-}
-
-static inline uint64_t
-nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
-{
- uint64_t capa = NIX_TX_OFFLOAD_CAPA;
-
- /* TSO not supported for earlier chip revisions */
- if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
- RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
- return capa;
-}
-
-static const struct otx2_dev_ops otx2_dev_ops = {
- .link_status_update = otx2_eth_dev_link_status_update,
- .ptp_info_update = otx2_eth_dev_ptp_info_update,
- .link_status_get = otx2_eth_dev_link_status_get,
-};
-
-static int
-nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lf_alloc_req *req;
- struct nix_lf_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox);
- req->rq_cnt = nb_rxq;
- req->sq_cnt = nb_txq;
- req->cq_cnt = nb_rxq;
- /* XQE_SZ should be in Sync with NIX_CQ_ENTRY_SZ */
- RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128);
- req->xqe_sz = NIX_XQESZ_W16;
- req->rss_sz = dev->rss_info.rss_size;
- req->rss_grps = NIX_RSS_GRPS;
- req->npa_func = otx2_npa_pf_func_get();
- req->sso_func = otx2_sso_pf_func_get();
- req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
- req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
- req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
- }
- req->rx_cfg |= (BIT_ULL(32 /* DROP_RE */) |
- BIT_ULL(33 /* Outer L2 Length */) |
- BIT_ULL(38 /* Inner L4 UDP Length */) |
- BIT_ULL(39 /* Inner L3 Length */) |
- BIT_ULL(40 /* Outer L4 UDP Length */) |
- BIT_ULL(41 /* Outer L3 Length */));
-
- if (dev->rss_tag_as_xor == 0)
- req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->sqb_size = rsp->sqb_size;
- dev->tx_chan_base = rsp->tx_chan_base;
- dev->rx_chan_base = rsp->rx_chan_base;
- dev->rx_chan_cnt = rsp->rx_chan_cnt;
- dev->tx_chan_cnt = rsp->tx_chan_cnt;
- dev->lso_tsov4_idx = rsp->lso_tsov4_idx;
- dev->lso_tsov6_idx = rsp->lso_tsov6_idx;
- dev->lf_tx_stats = rsp->lf_tx_stats;
- dev->lf_rx_stats = rsp->lf_rx_stats;
- dev->cints = rsp->cints;
- dev->qints = rsp->qints;
- dev->npc_flow.channel = dev->rx_chan_base;
- dev->ptp_en = rsp->hw_rx_tstamp_en;
-
- return 0;
-}
-
-static int
-nix_lf_switch_header_type_enable(struct otx2_eth_dev *dev, bool enable)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct npc_set_pkind *req;
- struct msg_resp *rsp;
- int rc;
-
- if (dev->npc_flow.switch_header_type == 0)
- return 0;
-
- /* Notify AF about higig2 config */
- req = otx2_mbox_alloc_msg_npc_set_pkind(mbox);
- req->mode = dev->npc_flow.switch_header_type;
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_CHLEN90B_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_CH_LEN_24B) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_CHLEN24B_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_EXDSA) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_EXDSA_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_VLAN_EXDSA) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_VLAN_EXDSA_PKIND;
- }
-
- if (enable == 0)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
- req->dir = PKIND_RX;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
- req = otx2_mbox_alloc_msg_npc_set_pkind(mbox);
- req->mode = dev->npc_flow.switch_header_type;
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B ||
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_24B)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
-
- if (enable == 0)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
- req->dir = PKIND_TX;
- return otx2_mbox_process_msg(mbox, (void *)&rsp);
-}
-
-static int
-nix_lf_free(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lf_free_req *req;
- struct ndc_sync_op *ndc_req;
- int rc;
-
- /* Sync NDC-NIX for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->nix_lf_tx_sync = 1;
- ndc_req->nix_lf_rx_sync = 1;
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
-
- req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
- /* Let AF driver free all this nix lf's
- * NPC entries allocated using NPC MBOX.
- */
- req->flags = 0;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npc_rx_enable(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- otx2_mbox_alloc_msg_nix_lf_start_rx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npc_rx_disable(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cgx_start_link_event(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_linkevents(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (en && otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (en)
- otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox);
- else
- otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cgx_stop_link_event(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static inline void
-nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
-{
- rxq->head = 0;
- rxq->available = 0;
-}
-
-static inline uint32_t
-nix_qsize_to_val(enum nix_q_size_e qsize)
-{
- return (16UL << (qsize * 2));
-}
-
-static inline enum nix_q_size_e
-nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val)
-{
- int i;
-
- if (otx2_ethdev_fixup_is_min_4k_q(dev))
- i = nix_q_size_4K;
- else
- i = nix_q_size_16;
-
- for (; i < nix_q_size_max; i++)
- if (val <= nix_qsize_to_val(i))
- break;
-
- if (i >= nix_q_size_max)
- i = nix_q_size_max - 1;
-
- return i;
-}
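
Queue sizes are quantized: nix_qsize_to_val(i) yields 16 << 2i, i.e. 16, 64, 256, 1024, 4096 and so on, and the clamp-up picks the smallest enum whose value covers the requested descriptor count, so 1000 descriptors land on the 1024-entry size. A standalone sketch of the same lookup (Q_SIZE_MAX is an illustrative bound):

    #include <stdio.h>
    #include <stdint.h>

    #define Q_SIZE_MAX 7	/* illustrative enum bound */

    static uint32_t qsize_to_val(int i) { return 16U << (i * 2); }

    static int qsize_clampup(uint32_t val)
    {
    	int i;

    	for (i = 0; i < Q_SIZE_MAX; i++)
    		if (val <= qsize_to_val(i))
    			return i;
    	return Q_SIZE_MAX - 1;	/* clamp oversized requests */
    }

    int main(void)
    {
    	printf("%d\n", qsize_clampup(1000));	/* 3 -> 1024 entries */
    	return 0;
    }
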
-
-static int
-nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
- uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp)
-{
- struct otx2_mbox *mbox = dev->mbox;
- const struct rte_memzone *rz;
- uint32_t ring_size, cq_size;
- struct nix_aq_enq_req *aq;
- uint16_t first_skip;
- int rc;
-
- cq_size = rxq->qlen;
- ring_size = cq_size * NIX_CQ_ENTRY_SZ;
- rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size,
- NIX_CQ_ALIGN, dev->node);
- if (rz == NULL) {
- otx2_err("Failed to allocate mem for cq hw ring");
- return -ENOMEM;
- }
- memset(rz->addr, 0, rz->len);
- rxq->desc = (uintptr_t)rz->addr;
- rxq->qmask = cq_size - 1;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_INIT;
-
- aq->cq.ena = 1;
- aq->cq.caching = 1;
- aq->cq.qsize = rxq->qsize;
- aq->cq.base = rz->iova;
- aq->cq.avg_level = 0xff;
- aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
- aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
-
- /* Many to one reduction */
- aq->cq.qint_idx = qid % dev->qints;
- /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */
- aq->cq.cint_idx = qid;
-
- if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
- const float rx_cq_skid = NIX_CQ_FULL_ERRATA_SKID;
- uint16_t min_rx_drop;
-
- min_rx_drop = ceil(rx_cq_skid / (float)cq_size);
- aq->cq.drop = min_rx_drop;
- aq->cq.drop_ena = 1;
- rxq->cq_drop = min_rx_drop;
- } else {
- rxq->cq_drop = NIX_CQ_THRESH_LEVEL;
- aq->cq.drop = rxq->cq_drop;
- aq->cq.drop_ena = 1;
- }
-
- /* TX pause frames enable flowctrl on RX side */
- if (dev->fc_info.tx_pause) {
- /* Single bpid is allocated for all rx channels for now */
- aq->cq.bpid = dev->fc_info.bpid[0];
- aq->cq.bp = rxq->cq_drop;
- aq->cq.bp_ena = 1;
- }
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to init cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_INIT;
-
- aq->rq.sso_ena = 0;
-
- if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
- aq->rq.ipsech_ena = 1;
-
- aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
- aq->rq.spb_ena = 0;
- aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id);
- first_skip = (sizeof(struct rte_mbuf));
- first_skip += RTE_PKTMBUF_HEADROOM;
- first_skip += rte_pktmbuf_priv_size(mp);
- rxq->data_off = first_skip;
-
- first_skip /= 8; /* Expressed in number of dwords */
- aq->rq.first_skip = first_skip;
- aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8);
- aq->rq.flow_tagw = 32; /* 32-bits */
- aq->rq.lpb_sizem1 = mp->elt_size / 8;
- aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
- aq->rq.ena = 1;
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
- aq->rq.rq_int_ena = 0;
- /* Many to one reduction */
- aq->rq.qint_idx = qid % dev->qints;
-
- aq->rq.xqe_drop_ena = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to init rq context");
- return rc;
- }
-
- if (dev->lock_rx_ctx) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_LOCK;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- otx2_err("Failed to LOCK rq context");
- return -ENOMEM;
- }
- }
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_LOCK;
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to LOCK rq context");
- return rc;
- }
- }
-
- return 0;
-}
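
The RQ context expresses buffer offsets in 8-byte doublewords: with a 128 B struct rte_mbuf, 128 B of headroom and no private area, the first packet byte lands at offset 256, so first_skip is 32 dwords and later_skip (just the mbuf header) is 16. A standalone version of the arithmetic (the 128 B values are the usual ones, assumed here for illustration):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
    	const uint32_t mbuf_sz = 128;	/* assumed sizeof(struct rte_mbuf) */
    	const uint32_t headroom = 128;	/* assumed RTE_PKTMBUF_HEADROOM */
    	const uint32_t priv_sz = 0;	/* assumed pktmbuf private area size */

    	uint32_t data_off = mbuf_sz + headroom + priv_sz;

    	printf("data_off=%u first_skip=%u later_skip=%u\n",
    	       data_off, data_off / 8, mbuf_sz / 8);	/* 256, 32, 16 */
    	return 0;
    }
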
-
-static int
-nix_rq_enb_dis(struct rte_eth_dev *eth_dev,
- struct otx2_eth_rxq *rxq, const bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
-
- /* Pkts will be dropped silently if RQ is disabled */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.ena = enb;
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- /* RQ is already disabled */
- /* Disable CQ */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 0;
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to disable cq context");
- return rc;
- }
-
- if (dev->lock_rx_ctx) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- otx2_err("Failed to UNLOCK rq context");
- return -ENOMEM;
- }
- }
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK rq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-static inline int
-nix_get_data_off(struct otx2_eth_dev *dev)
-{
- return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0;
-}
-
-uint64_t
-otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id)
-{
- struct rte_mbuf mb_def;
- uint64_t *tmp;
-
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
- offsetof(struct rte_mbuf, data_off) != 2);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) -
- offsetof(struct rte_mbuf, data_off) != 4);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
- offsetof(struct rte_mbuf, data_off) != 6);
- mb_def.nb_segs = 1;
- mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev);
- mb_def.port = port_id;
- rte_mbuf_refcnt_set(&mb_def, 1);
-
- /* Prevent compiler reordering: rearm_data covers previous fields */
- rte_compiler_barrier();
- tmp = (uint64_t *)&mb_def.rearm_data;
-
- return *tmp;
-}
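
otx2_nix_rxq_mbuf_setup() relies on data_off, refcnt, nb_segs and port being packed into one 8-byte window (rearm_data) - exactly what the build-time assertions above check - so the receive path can re-initialize a fresh mbuf with a single 64-bit store of the precomputed template. A standalone sketch of the same trick on a toy struct (toy_mbuf is illustrative, mirroring only the checked offsets):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    struct toy_mbuf {		/* illustrative layout mirroring the offsets */
    	uint16_t data_off;	/* +0 */
    	uint16_t refcnt;	/* +2 */
    	uint16_t nb_segs;	/* +4 */
    	uint16_t port;		/* +6 */
    };

    int main(void)
    {
    	struct toy_mbuf tmpl = { .data_off = 128, .refcnt = 1,
    				 .nb_segs = 1, .port = 3 };
    	uint64_t rearm;
    	struct toy_mbuf m;

    	memcpy(&rearm, &tmpl, sizeof(rearm));	/* build template once */
    	memcpy(&m, &rearm, sizeof(rearm));	/* one 8-byte store per mbuf */

    	printf("port=%u refcnt=%u\n", m.port, m.refcnt);
    	return 0;
    }
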
-
-static void
-otx2_nix_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- struct otx2_eth_rxq *rxq = dev->data->rx_queues[qid];
-
- if (!rxq)
- return;
-
- otx2_nix_dbg("Releasing rxq %u", rxq->rq);
- nix_cq_rq_uninit(rxq->eth_dev, rxq);
- rte_free(rxq);
- dev->data->rx_queues[qid] = NULL;
-}
-
-static int
-otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
- uint16_t nb_desc, unsigned int socket,
- const struct rte_eth_rxconf *rx_conf,
- struct rte_mempool *mp)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_mempool_ops *ops;
- struct otx2_eth_rxq *rxq;
- const char *platform_ops;
- enum nix_q_size_e qsize;
- uint64_t offloads;
- int rc;
-
- rc = -EINVAL;
-
- /* Compile time check to make sure all fast path elements in a CL */
- RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128);
-
- /* Sanity checks */
- if (rx_conf->rx_deferred_start == 1) {
- otx2_err("Deferred Rx start is not supported");
- goto fail;
- }
-
- platform_ops = rte_mbuf_platform_mempool_ops();
- /* This driver needs octeontx2_npa mempool ops to work */
- ops = rte_mempool_get_ops(mp->ops_index);
- if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
- otx2_err("mempool ops should be of octeontx2_npa type");
- goto fail;
- }
-
- if (mp->pool_id == 0) {
- otx2_err("Invalid pool_id");
- goto fail;
- }
-
- /* Free memory prior to re-allocation if needed */
- if (eth_dev->data->rx_queues[rq] != NULL) {
- otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq);
- otx2_nix_rx_queue_release(eth_dev, rq);
- rte_eth_dma_zone_free(eth_dev, "cq", rq);
- }
-
- offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads;
- dev->rx_offloads |= offloads;
-
- /* Find the CQ queue size */
- qsize = nix_qsize_clampup_get(dev, nb_desc);
- /* Allocate rxq memory */
- rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket);
- if (rxq == NULL) {
- otx2_err("Failed to allocate rq=%d", rq);
- rc = -ENOMEM;
- goto fail;
- }
-
- rxq->eth_dev = eth_dev;
- rxq->rq = rq;
- rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR;
- rxq->cq_status = (int64_t *)(dev->base + NIX_LF_CQ_OP_STATUS);
- rxq->wdata = (uint64_t)rq << 32;
- rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id);
- rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev,
- eth_dev->data->port_id);
- rxq->offloads = offloads;
- rxq->pool = mp;
- rxq->qlen = nix_qsize_to_val(qsize);
- rxq->qsize = qsize;
- rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
- rxq->tstamp = &dev->tstamp;
-
- eth_dev->data->rx_queues[rq] = rxq;
-
- /* Alloc completion queue */
- rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
- if (rc) {
- otx2_err("Failed to allocate rxq=%u", rq);
- goto free_rxq;
- }
-
- rxq->qconf.socket_id = socket;
- rxq->qconf.nb_desc = nb_desc;
- rxq->qconf.mempool = mp;
- memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf));
-
- nix_rx_queue_reset(rxq);
- otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d",
- rq, mp->name, qsize, nb_desc, rxq->qlen);
-
- eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED;
-
- /* Calculating delta and freq mult between PTP HI clock and tsc.
- * These are needed in deriving raw clock value from tsc counter.
- * read_clock eth op returns raw clock value.
- */
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
- otx2_ethdev_is_ptp_en(dev)) {
- rc = otx2_nix_raw_clock_tsc_conv(dev);
- if (rc) {
- otx2_err("Failed to calculate delta and freq mult");
- goto fail;
- }
- }
-
- /* Setup scatter mode if needed by jumbo */
- otx2_nix_enable_mseg_on_jumbo(rxq);
-
- return 0;
-
-free_rxq:
- otx2_nix_rx_queue_release(eth_dev, rq);
-fail:
- return rc;
-}
-
-static inline uint8_t
-nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
-{
- /*
- * A maximum of three segments is supported with W8; choose
- * NIX_MAXSQESZ_W16 for multi-segment offload.
- */
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- return NIX_MAXSQESZ_W16;
- else
- return NIX_MAXSQESZ_W8;
-}
-
-static uint16_t
-nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct rte_eth_conf *conf = &data->dev_conf;
- struct rte_eth_rxmode *rxmode = &conf->rxmode;
- uint16_t flags = 0;
-
- if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
- (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
- flags |= NIX_RX_OFFLOAD_RSS_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
- flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
- flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
- flags |= NIX_RX_MULTI_SEG_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
- flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
-
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
- flags |= NIX_RX_OFFLOAD_TSTAMP_F;
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
- flags |= NIX_RX_OFFLOAD_SECURITY_F;
-
- if (!dev->ptype_disable)
- flags |= NIX_RX_OFFLOAD_PTYPE_F;
-
- return flags;
-}
-
-static uint16_t
-nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t conf = dev->tx_offloads;
- uint16_t flags = 0;
-
- /* Fastpath is dependent on these enums */
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
- RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
- RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
- RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
- RTE_BUILD_BUG_ON(RTE_MBUF_OUTL3_LEN_BITS != 9);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
- offsetof(struct rte_mbuf, buf_iova) + 8);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
- offsetof(struct rte_mbuf, buf_iova) + 16);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
- offsetof(struct rte_mbuf, ol_flags) + 12);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
- offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
-
- if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
- conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
- flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
- flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
- flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
-
- if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
- flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- flags |= NIX_TX_MULTI_SEG_F;
-
- /* Enable Inner checksum for TSO */
- if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
- flags |= (NIX_TX_OFFLOAD_TSO_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F);
-
- /* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
- flags |= (NIX_TX_OFFLOAD_TSO_F |
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F);
-
- if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
- flags |= NIX_TX_OFFLOAD_SECURITY_F;
-
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
- flags |= NIX_TX_OFFLOAD_TSTAMP_F;
-
- return flags;
-}
-
-static int
-nix_sqb_lock(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_LOCK;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(npa_lf->mbox, 0);
- rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK AURA context");
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- otx2_err("Failed to LOCK POOL context");
- return -ENOMEM;
- }
- }
-
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(npa_lf->mbox);
- if (rc < 0) {
- otx2_err("Unable to lock POOL in NDC");
- return rc;
- }
-
- return 0;
-}
-
-static int
-nix_sqb_unlock(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_UNLOCK;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(npa_lf->mbox, 0);
- rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK AURA context");
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- otx2_err("Failed to UNLOCK POOL context");
- return -ENOMEM;
- }
- }
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(npa_lf->mbox);
- if (rc < 0) {
- otx2_err("Unable to UNLOCK AURA in NDC");
- return rc;
- }
-
- return 0;
-}
-
-void
-otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
-{
- struct rte_pktmbuf_pool_private *mbp_priv;
- struct rte_eth_dev *eth_dev;
- struct otx2_eth_dev *dev;
- uint32_t buffsz;
-
- eth_dev = rxq->eth_dev;
- dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Get rx buffer size */
- mbp_priv = rte_mempool_get_priv(rxq->pool);
- buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
-
- if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
-
- /* Setting up the rx[tx]_offload_flags due to change
- * in rx[tx]_offloads.
- */
- dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
- }
-}
-
-static int
-nix_sq_init(struct otx2_eth_txq *txq)
-{
- struct otx2_eth_dev *dev = txq->dev;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *sq;
- uint32_t rr_quantum;
- uint16_t smq;
- int rc;
-
- if (txq->sqb_pool->pool_id == 0)
- return -EINVAL;
-
- rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq);
- if (rc) {
- otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc);
- return rc;
- }
-
- sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- sq->qidx = txq->sq;
- sq->ctype = NIX_AQ_CTYPE_SQ;
- sq->op = NIX_AQ_INSTOP_INIT;
- sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
-
- sq->sq.smq = smq;
- sq->sq.smq_rr_quantum = rr_quantum;
- sq->sq.default_chan = dev->tx_chan_base;
- sq->sq.sqe_stype = NIX_STYPE_STF;
- sq->sq.ena = 1;
- if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
- sq->sq.sqe_stype = NIX_STYPE_STP;
- sq->sq.sqb_aura =
- npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id);
- sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
-
- /* Many to one reduction */
- sq->sq.qint_idx = txq->sq % dev->qints;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0)
- return rc;
-
- if (dev->lock_tx_ctx) {
- sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- sq->qidx = txq->sq;
- sq->ctype = NIX_AQ_CTYPE_SQ;
- sq->op = NIX_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(mbox);
- }
-
- return rc;
-}
-
-static int
-nix_sq_uninit(struct otx2_eth_txq *txq)
-{
- struct otx2_eth_dev *dev = txq->dev;
- struct otx2_mbox *mbox = dev->mbox;
- struct ndc_sync_op *ndc_req;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- uint16_t sqes_per_sqb;
- void *sqb_buf;
- int rc, count;
-
- otx2_nix_dbg("Cleaning up sq %u", txq->sq);
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Check if sq is already cleaned up */
- if (!rsp->sq.ena)
- return 0;
-
- /* Disable sq */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->sq_mask.ena = ~aq->sq_mask.ena;
- aq->sq.ena = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- if (dev->lock_tx_ctx) {
- /* Unlock sq */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0)
- return rc;
-
- nix_sqb_unlock(txq->sqb_pool);
- }
-
- /* Read SQ and free sqb's */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (aq->sq.smq_pend)
- otx2_err("SQ has pending sqe's");
-
- count = aq->sq.sqb_count;
- sqes_per_sqb = 1 << txq->sqes_per_sqb_log2;
- /* Free SQB's that are used */
- sqb_buf = (void *)rsp->sq.head_sqb;
- while (count) {
- void *next_sqb;
-
- next_sqb = *(void **)((uintptr_t)sqb_buf + (uint32_t)
- ((sqes_per_sqb - 1) *
- nix_sq_max_sqe_sz(txq)));
- npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
- (uint64_t)sqb_buf);
- sqb_buf = next_sqb;
- count--;
- }
-
- /* Free next to use sqb */
- if (rsp->sq.next_sqb)
- npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
- rsp->sq.next_sqb);
-
- /* Sync NDC-NIX-TX for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->nix_lf_tx_sync = 1;
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc);
-
- return rc;
-}
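The SQB free loop in nix_sq_uninit() walks the buffers as a singly linked
list: each SQB stores the address of the next one in its last SQE slot,
i.e. at offset (sqes_per_sqb - 1) * sqe_size. A self-contained model of that
walk, with malloc/free standing in for the NPA aura ops and illustrative
sizes (1 KB SQB, 8 SQEs of 128 B):

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define SQB_SZ   1024
    #define SQE_SZ   128
    #define NEXT_OFF ((8 - 1) * SQE_SZ)  /* last SQE slot in the buffer */

    int main(void)
    {
            void *head = NULL, *buf;
            int i;

            for (i = 0; i < 3; i++) {  /* build a 3-buffer chain */
                    buf = malloc(SQB_SZ);
                    *(void **)((uintptr_t)buf + NEXT_OFF) = head;
                    head = buf;
            }
            while (head != NULL) {     /* walk and release, as in the PMD */
                    void *next = *(void **)((uintptr_t)head + NEXT_OFF);
                    free(head);        /* stands in for npa_lf_aura_op_free */
                    head = next;
            }
            return 0;
    }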
-
-static int
-nix_sqb_aura_limit_cfg(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *aura_req;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
-
- aura_req->aura.limit = nb_sqb_bufs;
- aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
-
- return otx2_mbox_process(npa_lf->mbox);
-}
-
-static int
-nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
-{
- struct otx2_eth_dev *dev = txq->dev;
- uint16_t sqes_per_sqb, nb_sqb_bufs;
- char name[RTE_MEMPOOL_NAMESIZE];
- struct rte_mempool_objsz sz;
- struct npa_aura_s *aura;
- uint32_t tmp, blk_sz;
-
- aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN);
- snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq);
- blk_sz = dev->sqb_size;
-
- if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16)
- sqes_per_sqb = (dev->sqb_size / 8) / 16;
- else
- sqes_per_sqb = (dev->sqb_size / 8) / 8;
-
- nb_sqb_bufs = nb_desc / sqes_per_sqb;
- /* Clamp up to devarg passed SQB count */
- nb_sqb_bufs = RTE_MIN(dev->max_sqb_count, RTE_MAX(NIX_DEF_SQB,
- nb_sqb_bufs + NIX_SQB_LIST_SPACE));
-
- txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
- 0, 0, dev->node,
- RTE_MEMPOOL_F_NO_SPREAD);
- txq->nb_sqb_bufs = nb_sqb_bufs;
- txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
- txq->nb_sqb_bufs_adj = nb_sqb_bufs -
- RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb;
- txq->nb_sqb_bufs_adj =
- (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
-
- if (txq->sqb_pool == NULL) {
- otx2_err("Failed to allocate sqe mempool");
- goto fail;
- }
-
- memset(aura, 0, sizeof(*aura));
- aura->fc_ena = 1;
- aura->fc_addr = txq->fc_iova;
- aura->fc_hyst_bits = 0; /* Store count on all updates */
- if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) {
- otx2_err("Failed to set ops for sqe mempool");
- goto fail;
- }
- if (rte_mempool_populate_default(txq->sqb_pool) < 0) {
- otx2_err("Failed to populate sqe mempool");
- goto fail;
- }
-
- tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz);
- if (dev->sqb_size != sz.elt_size) {
- otx2_err("sqe pool block size is not expected %d != %d",
- dev->sqb_size, tmp);
- goto fail;
- }
-
- nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
- if (dev->lock_tx_ctx)
- nix_sqb_lock(txq->sqb_pool);
-
- return 0;
-fail:
- return -ENOMEM;
-}
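A worked example of the SQB sizing math above, assuming a 4 KB SQB
(dev->sqb_size) and W16 SQEs (16 dwords of 8 B each); the descriptor count is
illustrative:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint32_t sqb_size = 4096, nb_desc = 1024;
            uint16_t sqes_per_sqb = (sqb_size / 8) / 16; /* 32 SQEs/SQB */
            uint16_t nb_sqb_bufs = nb_desc / sqes_per_sqb;

            /* prints "sqes_per_sqb=32 nb_sqb_bufs=32" */
            printf("sqes_per_sqb=%u nb_sqb_bufs=%u\n",
                   sqes_per_sqb, nb_sqb_bufs);
            return 0;
    }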
-
-void
-otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
-{
- struct nix_send_ext_s *send_hdr_ext;
- struct nix_send_hdr_s *send_hdr;
- struct nix_send_mem_s *send_mem;
- union nix_send_sg_s *sg;
-
- /* Initialize the fields based on basic single segment packet */
- memset(&txq->cmd, 0, sizeof(txq->cmd));
-
- if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) {
- send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
- /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */
- send_hdr->w0.sizem1 = 2;
-
- send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2];
- send_hdr_ext->w0.subdc = NIX_SUBDC_EXT;
- if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) {
- /* Default: one seg packet would have:
- * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM)
- * => 8/2 - 1 = 3
- */
- send_hdr->w0.sizem1 = 3;
- send_hdr_ext->w0.tstmp = 1;
-
-			/* The send_mem descriptor starts at word offset
-			 * send_hdr->w0.sizem1 * 2 in the command
-			 */
- send_mem = (struct nix_send_mem_s *)(txq->cmd +
- (send_hdr->w0.sizem1 << 1));
- send_mem->subdc = NIX_SUBDC_MEM;
- send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
- send_mem->addr = txq->dev->tstamp.tx_tstamp_iova;
- }
- sg = (union nix_send_sg_s *)&txq->cmd[4];
- } else {
- send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
- /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */
- send_hdr->w0.sizem1 = 1;
- sg = (union nix_send_sg_s *)&txq->cmd[2];
- }
-
- send_hdr->w0.sq = txq->sq;
- sg->subdc = NIX_SUBDC_SG;
- sg->segs = 1;
- sg->ld_type = NIX_SENDLDTYPE_LDD;
-
- rte_smp_wmb();
-}
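The sizem1 values programmed above all follow one rule: count the
sub-descriptor words (two each for HDR, EXT and MEM, one each for SG and
IOVA), divide by two, subtract one. A quick check of the three layouts:

    #include <stdio.h>

    int main(void)
    {
            /* HDR+EXT+SG+IOVA     -> 2+2+1+1   = 6 words */
            /* HDR+EXT+SG+IOVA+MEM -> 2+2+1+1+2 = 8 words */
            /* HDR+SG+IOVA         -> 2+1+1     = 4 words */
            int words[3] = {6, 8, 4};
            int i;

            for (i = 0; i < 3; i++)
                    printf("sizem1=%d\n", words[i] / 2 - 1); /* 2, 3, 1 */
            return 0;
    }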
-
-static void
-otx2_nix_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
-{
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[qid];
-
- if (!txq)
- return;
-
- otx2_nix_dbg("Releasing txq %u", txq->sq);
-
- /* Flush and disable tm */
- otx2_nix_sq_flush_pre(txq, eth_dev->data->dev_started);
-
- /* Free sqb's and disable sq */
- nix_sq_uninit(txq);
-
- if (txq->sqb_pool) {
- rte_mempool_free(txq->sqb_pool);
- txq->sqb_pool = NULL;
- }
- otx2_nix_sq_flush_post(txq);
- rte_free(txq);
- eth_dev->data->tx_queues[qid] = NULL;
-}
-
-
-static int
-otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
- uint16_t nb_desc, unsigned int socket_id,
- const struct rte_eth_txconf *tx_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct rte_memzone *fc;
- struct otx2_eth_txq *txq;
- uint64_t offloads;
- int rc;
-
- rc = -EINVAL;
-
- /* Compile time check to make sure all fast path elements in a CL */
- RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128);
-
- if (tx_conf->tx_deferred_start) {
- otx2_err("Tx deferred start is not supported");
- goto fail;
- }
-
- /* Free memory prior to re-allocation if needed. */
- if (eth_dev->data->tx_queues[sq] != NULL) {
- otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq);
- otx2_nix_tx_queue_release(eth_dev, sq);
- }
-
- /* Find the expected offloads for this queue */
- offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads;
-
- /* Allocating tx queue data structure */
- txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq),
- OTX2_ALIGN, socket_id);
- if (txq == NULL) {
- otx2_err("Failed to alloc txq=%d", sq);
- rc = -ENOMEM;
- goto fail;
- }
- txq->sq = sq;
- txq->dev = dev;
- txq->sqb_pool = NULL;
- txq->offloads = offloads;
- dev->tx_offloads |= offloads;
- eth_dev->data->tx_queues[sq] = txq;
-
-	/*
-	 * Allocate memory for flow control updates from HW.
-	 * Alloc one cache line, so that it fits all FC_STYPE modes.
-	 */
- fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq,
- OTX2_ALIGN + sizeof(struct npa_aura_s),
- OTX2_ALIGN, dev->node);
- if (fc == NULL) {
- otx2_err("Failed to allocate mem for fcmem");
- rc = -ENOMEM;
- goto free_txq;
- }
- txq->fc_iova = fc->iova;
- txq->fc_mem = fc->addr;
-
- /* Initialize the aura sqb pool */
- rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc);
- if (rc) {
- otx2_err("Failed to alloc sqe pool rc=%d", rc);
- goto free_txq;
- }
-
- /* Initialize the SQ */
- rc = nix_sq_init(txq);
- if (rc) {
- otx2_err("Failed to init sq=%d context", sq);
- goto free_txq;
- }
-
- txq->fc_cache_pkts = 0;
- txq->io_addr = dev->base + NIX_LF_OP_SENDX(0);
- /* Evenly distribute LMT slot for each sq */
- txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12));
-
- txq->qconf.socket_id = socket_id;
- txq->qconf.nb_desc = nb_desc;
- memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf));
-
- txq->lso_tun_fmt = dev->lso_tun_fmt;
- otx2_nix_form_default_desc(txq);
-
- otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 ""
- " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq,
- fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr,
- txq->nb_sqb_bufs, txq->sqes_per_sqb_log2);
- eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED;
- return 0;
-
-free_txq:
- otx2_nix_tx_queue_release(eth_dev, sq);
-fail:
- return rc;
-}
-
-static int
-nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_eth_qconf *tx_qconf = NULL;
- struct otx2_eth_qconf *rx_qconf = NULL;
- struct otx2_eth_txq **txq;
- struct otx2_eth_rxq **rxq;
- int i, nb_rxq, nb_txq;
-
- nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
- nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
-
- tx_qconf = malloc(nb_txq * sizeof(*tx_qconf));
- if (tx_qconf == NULL) {
- otx2_err("Failed to allocate memory for tx_qconf");
- goto fail;
- }
-
- rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf));
- if (rx_qconf == NULL) {
- otx2_err("Failed to allocate memory for rx_qconf");
- goto fail;
- }
-
- txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
- for (i = 0; i < nb_txq; i++) {
- if (txq[i] == NULL) {
- tx_qconf[i].valid = false;
- otx2_info("txq[%d] is already released", i);
- continue;
- }
- memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf));
- tx_qconf[i].valid = true;
- otx2_nix_tx_queue_release(eth_dev, i);
- }
-
- rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
- for (i = 0; i < nb_rxq; i++) {
- if (rxq[i] == NULL) {
- rx_qconf[i].valid = false;
- otx2_info("rxq[%d] is already released", i);
- continue;
- }
- memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf));
- rx_qconf[i].valid = true;
- otx2_nix_rx_queue_release(eth_dev, i);
- }
-
- dev->tx_qconf = tx_qconf;
- dev->rx_qconf = rx_qconf;
- return 0;
-
-fail:
- free(tx_qconf);
- free(rx_qconf);
-
- return -ENOMEM;
-}
-
-static int
-nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_eth_qconf *tx_qconf = dev->tx_qconf;
- struct otx2_eth_qconf *rx_qconf = dev->rx_qconf;
- int rc, i, nb_rxq, nb_txq;
-
- nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
- nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
-
- rc = -ENOMEM;
-	/* Setup Tx & Rx queues with the previous configuration so
-	 * that the queues can remain functional in cases where ports
-	 * are started without reconfiguring queues.
-	 *
-	 * The usual reconfiguration sequence looks like:
-	 * port_configure() {
-	 *      if(reconfigure) {
-	 *              queue_release()
-	 *              queue_setup()
-	 *      }
-	 *      queue_configure() {
-	 *              queue_release()
-	 *              queue_setup()
-	 *      }
-	 * }
-	 * port_start()
-	 *
-	 * In some applications' control path, queue_configure() would
-	 * NOT be invoked for TXQs/RXQs in port_configure().
-	 * In such cases, queues can still be functional after start as
-	 * they were already set up in port_configure().
-	 */
- for (i = 0; i < nb_txq; i++) {
- if (!tx_qconf[i].valid)
- continue;
- rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc,
- tx_qconf[i].socket_id,
- &tx_qconf[i].conf.tx);
- if (rc) {
- otx2_err("Failed to setup tx queue rc=%d", rc);
- for (i -= 1; i >= 0; i--)
- otx2_nix_tx_queue_release(eth_dev, i);
- goto fail;
- }
- }
-
- free(tx_qconf); tx_qconf = NULL;
-
- for (i = 0; i < nb_rxq; i++) {
- if (!rx_qconf[i].valid)
- continue;
- rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc,
- rx_qconf[i].socket_id,
- &rx_qconf[i].conf.rx,
- rx_qconf[i].mempool);
- if (rc) {
- otx2_err("Failed to setup rx queue rc=%d", rc);
- for (i -= 1; i >= 0; i--)
- otx2_nix_rx_queue_release(eth_dev, i);
- goto release_tx_queues;
- }
- }
-
- free(rx_qconf); rx_qconf = NULL;
-
- return 0;
-
-release_tx_queues:
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_release(eth_dev, i);
-fail:
- if (tx_qconf)
- free(tx_qconf);
- if (rx_qconf)
- free(rx_qconf);
-
- return rc;
-}
-
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
-static void
-nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
-{
-	/* These dummy functions are required to support applications
-	 * which reconfigure queues without stopping the Tx/Rx burst
-	 * threads (e.g. the KNI app). When the queue context is saved,
-	 * the txqs/rxqs are released, which would crash the app if
-	 * Rx/Tx burst were still running on other lcores.
-	 */
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
- rte_mb();
-}
-
-static void
-nix_lso_tcp(struct nix_lso_format_cfg *req, bool v4)
-{
- volatile struct nix_lso_format *field;
-
- /* Format works only with TCP packet marked by OL3/OL4 */
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
- /* TCP flags field */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static void
-nix_lso_udp_tun_tcp(struct nix_lso_format_cfg *req,
- bool outer_v4, bool inner_v4)
-{
- volatile struct nix_lso_format *field;
-
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 len */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = outer_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (outer_v4) {
- /* IPID */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* Outer UDP length */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 4;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
-
- /* Inner IPv4/IPv6 */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = inner_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (inner_v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
-
- /* TCP flags field */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static void
-nix_lso_tun_tcp(struct nix_lso_format_cfg *req,
- bool outer_v4, bool inner_v4)
-{
- volatile struct nix_lso_format *field;
-
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 len */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = outer_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (outer_v4) {
- /* IPID */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* Inner IPv4/IPv6 */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = inner_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (inner_v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
-
- /* TCP flags field */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static int
-nix_setup_lso_formats(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lso_format_cfg_rsp *rsp;
- struct nix_lso_format_cfg *req;
- uint8_t *fmt;
- int rc;
-
- /* Skip if TSO was not requested */
- if (!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F))
- return 0;
- /*
- * IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tcp(req, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV4)
- return -EFAULT;
- otx2_nix_dbg("tcpv4 lso fmt=%u", rsp->lso_format_idx);
-
-
- /*
- * IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tcp(req, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV6)
- return -EFAULT;
- otx2_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/UDP/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, true, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/UDP/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, true, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/UDP/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, false, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/UDP/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, false, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, true, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, true, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, false, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, false, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx);
-
- /* Save all tun formats into u64 for fast path.
- * Lower 32bit has non-udp tunnel formats.
- * Upper 32bit has udp tunnel formats.
- */
- fmt = dev->lso_tun_idx;
- dev->lso_tun_fmt = ((uint64_t)fmt[NIX_LSO_TUN_V4V4] |
- (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 8 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 16 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 24);
-
- fmt = dev->lso_udp_tun_idx;
- dev->lso_tun_fmt |= ((uint64_t)fmt[NIX_LSO_TUN_V4V4] << 32 |
- (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 40 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 48 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 56);
-
- return 0;
-}
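The packing at the end of nix_setup_lso_formats() lets the Tx fast path pull
any tunnel LSO format index out of a single u64 with one shift. A standalone
sketch of the pack/unpack, using hypothetical format indices:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint8_t tun[4] = {4, 5, 6, 7};    /* non-UDP tunnel formats */
            uint8_t udp[4] = {8, 9, 10, 11};  /* UDP tunnel formats */
            uint64_t fmt = 0;
            int i;

            for (i = 0; i < 4; i++) {
                    fmt |= (uint64_t)tun[i] << (8 * i);      /* low 32 bits */
                    fmt |= (uint64_t)udp[i] << (32 + 8 * i); /* high 32 bits */
            }
            /* Fast path: fetch the V6V4 UDP-tunnel format (prints 10) */
            printf("%u\n", (uint8_t)(fmt >> (32 + 8 * 2)));
            return 0;
    }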
-
-static int
-otx2_nix_configure(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct rte_eth_conf *conf = &data->dev_conf;
- struct rte_eth_rxmode *rxmode = &conf->rxmode;
- struct rte_eth_txmode *txmode = &conf->txmode;
- char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE];
- struct rte_ether_addr *ea;
- uint8_t nb_rxq, nb_txq;
- int rc;
-
- rc = -EINVAL;
-
- /* Sanity checks */
- if (rte_eal_has_hugepages() == 0) {
- otx2_err("Huge page is not configured");
- goto fail_configure;
- }
-
- if (conf->dcb_capability_en == 1) {
- otx2_err("dcb enable is not supported");
- goto fail_configure;
- }
-
- if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
- otx2_err("Flow director is not supported");
- goto fail_configure;
- }
-
- if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
- rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
- otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
- goto fail_configure;
- }
-
- if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
- otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
- goto fail_configure;
- }
-
- if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
- otx2_err("Outer IP and SCTP checksum unsupported");
- goto fail_configure;
- }
-
- /* Free the resources allocated from the previous configure */
- if (dev->configured == 1) {
- otx2_eth_sec_fini(eth_dev);
- otx2_nix_rxchan_bpid_cfg(eth_dev, false);
- otx2_nix_vlan_fini(eth_dev);
- otx2_nix_mc_addr_list_uninstall(eth_dev);
- otx2_flow_free_all_resources(dev);
- oxt2_nix_unregister_queue_irqs(eth_dev);
- if (eth_dev->data->dev_conf.intr_conf.rxq)
- oxt2_nix_unregister_cq_irqs(eth_dev);
- nix_set_nop_rxtx_function(eth_dev);
- rc = nix_store_queue_cfg_and_then_release(eth_dev);
- if (rc)
- goto fail_configure;
- otx2_nix_tm_fini(eth_dev);
- nix_lf_free(dev);
- }
-
- dev->rx_offloads = rxmode->offloads;
- dev->tx_offloads = txmode->offloads;
- dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
- dev->rss_info.rss_grps = NIX_RSS_GRPS;
-
- nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
- nb_txq = RTE_MAX(data->nb_tx_queues, 1);
-
- /* Alloc a nix lf */
- rc = nix_lf_alloc(dev, nb_rxq, nb_txq);
- if (rc) {
- otx2_err("Failed to init nix_lf rc=%d", rc);
- goto fail_offloads;
- }
-
- otx2_nix_err_intr_enb_dis(eth_dev, true);
- otx2_nix_ras_intr_enb_dis(eth_dev, true);
-
- if (dev->ptp_en &&
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- otx2_err("Both PTP and switch header enabled");
- goto free_nix_lf;
- }
-
- rc = nix_lf_switch_header_type_enable(dev, true);
- if (rc) {
- otx2_err("Failed to enable switch type nix_lf rc=%d", rc);
- goto free_nix_lf;
- }
-
- rc = nix_setup_lso_formats(dev);
- if (rc) {
- otx2_err("failed to setup nix lso format fields, rc=%d", rc);
- goto free_nix_lf;
- }
-
- /* Configure RSS */
- rc = otx2_nix_rss_config(eth_dev);
- if (rc) {
- otx2_err("Failed to configure rss rc=%d", rc);
- goto free_nix_lf;
- }
-
- /* Init the default TM scheduler hierarchy */
- rc = otx2_nix_tm_init_default(eth_dev);
- if (rc) {
- otx2_err("Failed to init traffic manager rc=%d", rc);
- goto free_nix_lf;
- }
-
- rc = otx2_nix_vlan_offload_init(eth_dev);
- if (rc) {
- otx2_err("Failed to init vlan offload rc=%d", rc);
- goto tm_fini;
- }
-
- /* Register queue IRQs */
- rc = oxt2_nix_register_queue_irqs(eth_dev);
- if (rc) {
- otx2_err("Failed to register queue interrupts rc=%d", rc);
- goto vlan_fini;
- }
-
- /* Register cq IRQs */
- if (eth_dev->data->dev_conf.intr_conf.rxq) {
- if (eth_dev->data->nb_rx_queues > dev->cints) {
- otx2_err("Rx interrupt cannot be enabled, rxq > %d",
- dev->cints);
- goto q_irq_fini;
- }
-		/* The Rx interrupt feature cannot work with vector mode
-		 * because vector mode does not process packets until at
-		 * least 4 are received, while CQ interrupts are generated
-		 * even for a single packet in the CQ.
-		 */
- dev->scalar_ena = true;
-
- rc = oxt2_nix_register_cq_irqs(eth_dev);
- if (rc) {
- otx2_err("Failed to register CQ interrupts rc=%d", rc);
- goto q_irq_fini;
- }
- }
-
- /* Configure loop back mode */
- rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
- if (rc) {
- otx2_err("Failed to configure cgx loop back mode rc=%d", rc);
- goto cq_fini;
- }
-
- rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
- if (rc) {
- otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
- goto cq_fini;
- }
-
- /* Enable security */
- rc = otx2_eth_sec_init(eth_dev);
- if (rc)
- goto cq_fini;
-
- rc = otx2_nix_flow_ctrl_init(eth_dev);
- if (rc) {
- otx2_err("Failed to init flow ctrl mode %d", rc);
- goto cq_fini;
- }
-
- rc = otx2_nix_mc_addr_list_install(eth_dev);
- if (rc < 0) {
- otx2_err("Failed to install mc address list rc=%d", rc);
- goto sec_fini;
- }
-
-	/*
-	 * Restore queue config when one reconfigure follows another and
-	 * the application did not invoke queue configure in between.
-	 */
- if (dev->configured == 1) {
- rc = nix_restore_queue_cfg(eth_dev);
- if (rc)
- goto uninstall_mc_list;
- }
-
- /* Update the mac address */
- ea = eth_dev->data->mac_addrs;
- memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
- if (rte_is_zero_ether_addr(ea))
- rte_eth_random_addr((uint8_t *)ea);
-
- rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea);
-
- /* Apply new link configurations if changed */
- rc = otx2_apply_link_speed(eth_dev);
- if (rc) {
- otx2_err("Failed to set link configuration");
- goto uninstall_mc_list;
- }
-
- otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d"
- " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 ""
- " rx_flags=0x%x tx_flags=0x%x",
- eth_dev->data->port_id, ea_fmt, nb_rxq,
- nb_txq, dev->rx_offloads, dev->tx_offloads,
- dev->rx_offload_flags, dev->tx_offload_flags);
-
- /* All good */
- dev->configured = 1;
- dev->configured_nb_rx_qs = data->nb_rx_queues;
- dev->configured_nb_tx_qs = data->nb_tx_queues;
- return 0;
-
-uninstall_mc_list:
- otx2_nix_mc_addr_list_uninstall(eth_dev);
-sec_fini:
- otx2_eth_sec_fini(eth_dev);
-cq_fini:
- oxt2_nix_unregister_cq_irqs(eth_dev);
-q_irq_fini:
- oxt2_nix_unregister_queue_irqs(eth_dev);
-vlan_fini:
- otx2_nix_vlan_fini(eth_dev);
-tm_fini:
- otx2_nix_tm_fini(eth_dev);
-free_nix_lf:
- nix_lf_free(dev);
-fail_offloads:
- dev->rx_offload_flags &= ~nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags &= ~nix_tx_offload_flags(eth_dev);
-fail_configure:
- dev->configured = 0;
- return rc;
-}
-
-int
-otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_txq *txq;
- int rc = -EINVAL;
-
- txq = eth_dev->data->tx_queues[qidx];
-
- if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
- return 0;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d",
- qidx, rc);
- goto done;
- }
-
- data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
-
-done:
- return rc;
-}
-
-int
-otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_txq *txq;
- int rc;
-
- txq = eth_dev->data->tx_queues[qidx];
-
- if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
- return 0;
-
- txq->fc_cache_pkts = 0;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d",
- qidx, rc);
- goto done;
- }
-
- data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
- struct rte_eth_dev_data *data = eth_dev->data;
- int rc;
-
- if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
- return 0;
-
- rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true);
- if (rc) {
- otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc);
- goto done;
- }
-
- data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
- struct rte_eth_dev_data *data = eth_dev->data;
- int rc;
-
- if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
- return 0;
-
- rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false);
- if (rc) {
- otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc);
- goto done;
- }
-
- data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_dev_stop(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_mbuf *rx_pkts[32];
- struct otx2_eth_rxq *rxq;
- struct rte_eth_link link;
- int count, i, j, rc;
-
- nix_lf_switch_header_type_enable(dev, false);
- nix_cgx_stop_link_event(dev);
- npc_rx_disable(dev);
-
- /* Stop rx queues and free up pkts pending */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = otx2_nix_rx_queue_stop(eth_dev, i);
- if (rc)
- continue;
-
- rxq = eth_dev->data->rx_queues[i];
- count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
- while (count) {
- for (j = 0; j < count; j++)
- rte_pktmbuf_free(rx_pkts[j]);
- count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
- }
- }
-
- /* Stop tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_stop(eth_dev, i);
-
- /* Bring down link status internally */
- memset(&link, 0, sizeof(link));
- rte_eth_linkstatus_set(eth_dev, &link);
-
- return 0;
-}
-
-static int
-otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, i;
-
- /* MTU recalculate should be avoided here if PTP is enabled by PF, as
- * otx2_nix_recalc_mtu would be invoked during otx2_nix_ptp_enable_vf
- * call below.
- */
- if (eth_dev->data->nb_rx_queues != 0 && !otx2_ethdev_is_ptp_en(dev)) {
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- return rc;
- }
-
- /* Start rx queues */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = otx2_nix_rx_queue_start(eth_dev, i);
- if (rc)
- return rc;
- }
-
- /* Start tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = otx2_nix_tx_queue_start(eth_dev, i);
- if (rc)
- return rc;
- }
-
- rc = otx2_nix_update_flow_ctrl_mode(eth_dev);
- if (rc) {
- otx2_err("Failed to update flow ctrl mode %d", rc);
- return rc;
- }
-
- /* Enable PTP if it was requested by the app or if it is already
- * enabled in PF owning this VF
- */
- memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
- otx2_ethdev_is_ptp_en(dev))
- otx2_nix_timesync_enable(eth_dev);
- else
- otx2_nix_timesync_disable(eth_dev);
-
-	/* Tell the VF that the data offset is shifted by 8 bytes if
-	 * PTP is already enabled in the PF owning this VF
-	 */
- if (otx2_ethdev_is_ptp_en(dev) && otx2_dev_is_vf(dev))
- otx2_nix_ptp_enable_vf(eth_dev);
-
- if (dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F) {
- rc = rte_mbuf_dyn_rx_timestamp_register(
- &dev->tstamp.tstamp_dynfield_offset,
- &dev->tstamp.rx_tstamp_dynflag);
- if (rc != 0) {
- otx2_err("Failed to register Rx timestamp field/flag");
- return -rte_errno;
- }
- }
-
- rc = npc_rx_enable(dev);
- if (rc) {
- otx2_err("Failed to enable NPC rx %d", rc);
- return rc;
- }
-
- otx2_nix_toggle_flag_link_cfg(dev, true);
-
- rc = nix_cgx_start_link_event(dev);
- if (rc) {
- otx2_err("Failed to start cgx link event %d", rc);
- goto rx_disable;
- }
-
- otx2_nix_toggle_flag_link_cfg(dev, false);
- otx2_eth_set_tx_function(eth_dev);
- otx2_eth_set_rx_function(eth_dev);
-
- return 0;
-
-rx_disable:
- npc_rx_disable(dev);
- otx2_nix_toggle_flag_link_cfg(dev, false);
- return rc;
-}
-
-static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev);
-static int otx2_nix_dev_close(struct rte_eth_dev *eth_dev);
-
-/* Initialize and register driver with DPDK Application */
-static const struct eth_dev_ops otx2_eth_dev_ops = {
- .dev_infos_get = otx2_nix_info_get,
- .dev_configure = otx2_nix_configure,
- .link_update = otx2_nix_link_update,
- .tx_queue_setup = otx2_nix_tx_queue_setup,
- .tx_queue_release = otx2_nix_tx_queue_release,
- .tm_ops_get = otx2_nix_tm_ops_get,
- .rx_queue_setup = otx2_nix_rx_queue_setup,
- .rx_queue_release = otx2_nix_rx_queue_release,
- .dev_start = otx2_nix_dev_start,
- .dev_stop = otx2_nix_dev_stop,
- .dev_close = otx2_nix_dev_close,
- .tx_queue_start = otx2_nix_tx_queue_start,
- .tx_queue_stop = otx2_nix_tx_queue_stop,
- .rx_queue_start = otx2_nix_rx_queue_start,
- .rx_queue_stop = otx2_nix_rx_queue_stop,
- .dev_set_link_up = otx2_nix_dev_set_link_up,
- .dev_set_link_down = otx2_nix_dev_set_link_down,
- .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
- .dev_ptypes_set = otx2_nix_ptypes_set,
- .dev_reset = otx2_nix_dev_reset,
- .stats_get = otx2_nix_dev_stats_get,
- .stats_reset = otx2_nix_dev_stats_reset,
- .get_reg = otx2_nix_dev_get_reg,
- .mtu_set = otx2_nix_mtu_set,
- .mac_addr_add = otx2_nix_mac_addr_add,
- .mac_addr_remove = otx2_nix_mac_addr_del,
- .mac_addr_set = otx2_nix_mac_addr_set,
- .set_mc_addr_list = otx2_nix_set_mc_addr_list,
- .promiscuous_enable = otx2_nix_promisc_enable,
- .promiscuous_disable = otx2_nix_promisc_disable,
- .allmulticast_enable = otx2_nix_allmulticast_enable,
- .allmulticast_disable = otx2_nix_allmulticast_disable,
- .queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
- .reta_update = otx2_nix_dev_reta_update,
- .reta_query = otx2_nix_dev_reta_query,
- .rss_hash_update = otx2_nix_rss_hash_update,
- .rss_hash_conf_get = otx2_nix_rss_hash_conf_get,
- .xstats_get = otx2_nix_xstats_get,
- .xstats_get_names = otx2_nix_xstats_get_names,
- .xstats_reset = otx2_nix_xstats_reset,
- .xstats_get_by_id = otx2_nix_xstats_get_by_id,
- .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
- .rxq_info_get = otx2_nix_rxq_info_get,
- .txq_info_get = otx2_nix_txq_info_get,
- .rx_burst_mode_get = otx2_rx_burst_mode_get,
- .tx_burst_mode_get = otx2_tx_burst_mode_get,
- .tx_done_cleanup = otx2_nix_tx_done_cleanup,
- .set_queue_rate_limit = otx2_nix_tm_set_queue_rate_limit,
- .pool_ops_supported = otx2_nix_pool_ops_supported,
- .flow_ops_get = otx2_nix_dev_flow_ops_get,
- .get_module_info = otx2_nix_get_module_info,
- .get_module_eeprom = otx2_nix_get_module_eeprom,
- .fw_version_get = otx2_nix_fw_version_get,
- .flow_ctrl_get = otx2_nix_flow_ctrl_get,
- .flow_ctrl_set = otx2_nix_flow_ctrl_set,
- .timesync_enable = otx2_nix_timesync_enable,
- .timesync_disable = otx2_nix_timesync_disable,
- .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp,
- .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp,
- .timesync_adjust_time = otx2_nix_timesync_adjust_time,
- .timesync_read_time = otx2_nix_timesync_read_time,
- .timesync_write_time = otx2_nix_timesync_write_time,
- .vlan_offload_set = otx2_nix_vlan_offload_set,
- .vlan_filter_set = otx2_nix_vlan_filter_set,
- .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
- .vlan_tpid_set = otx2_nix_vlan_tpid_set,
- .vlan_pvid_set = otx2_nix_vlan_pvid_set,
- .rx_queue_intr_enable = otx2_nix_rx_queue_intr_enable,
- .rx_queue_intr_disable = otx2_nix_rx_queue_intr_disable,
- .read_clock = otx2_nix_read_clock,
-};
-
-static inline int
-nix_lf_attach(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct rsrc_attach_req *req;
-
- /* Attach NIX(lf) */
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- req->modify = true;
- req->nixlf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-nix_lf_get_msix_offset(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int rc;
-
- /* Get NPA and NIX MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- dev->nix_msixoff = msix_rsp->nix_msixoff;
-
- return rc;
-}
-
-static inline int
-otx2_eth_dev_lf_detach(struct otx2_mbox *mbox)
-{
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
-
- /* Detach all except npa lf */
- req->partial = true;
- req->nixlf = true;
- req->sso = true;
- req->ssow = true;
- req->timlfs = true;
- req->cptlfs = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static bool
-otx2_eth_dev_is_sdp(struct rte_pci_device *pci_dev)
-{
- if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_SDP_PF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_SDP_VF)
- return true;
- return false;
-}
-
-static inline uint64_t
-nix_get_blkaddr(struct otx2_eth_dev *dev)
-{
- uint64_t reg;
-
-	/* Read the discovery register to find out which NIX block the
-	 * LF is attached to.
-	 */
- reg = otx2_read64(dev->bar2 +
- RVU_PF_BLOCK_ADDRX_DISC(RVU_BLOCK_ADDR_NIX0));
-
- return reg & 0x1FFULL ? RVU_BLOCK_ADDR_NIX0 : RVU_BLOCK_ADDR_NIX1;
-}
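A trivial standalone decode of the block-discovery readout above: a nonzero
value in the low 9 bits means the LF sits on NIX0, otherwise it is on NIX1.
The register value is illustrative:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            uint64_t reg = 0x1; /* as read from RVU_PF_BLOCK_ADDRX_DISC */

            printf("LF is on NIX%c\n", (reg & 0x1FFULL) ? '0' : '1');
            return 0;
    }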
-
-static int
-otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_pci_device *pci_dev;
- int rc, max_entries;
-
- eth_dev->dev_ops = &otx2_eth_dev_ops;
- eth_dev->rx_queue_count = otx2_nix_rx_queue_count;
- eth_dev->rx_descriptor_status = otx2_nix_rx_descriptor_status;
- eth_dev->tx_descriptor_status = otx2_nix_tx_descriptor_status;
-
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- /* Setup callbacks for secondary process */
- otx2_eth_set_tx_function(eth_dev);
- otx2_eth_set_rx_function(eth_dev);
- return 0;
- }
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- rte_eth_copy_pci_info(eth_dev, pci_dev);
- eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
-
- /* Zero out everything after OTX2_DEV to allow proper dev_reset() */
- memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
- offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
-
- /* Parse devargs string */
- rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
- if (rc) {
- otx2_err("Failed to parse devargs rc=%d", rc);
- goto error;
- }
-
- if (!dev->mbox_active) {
-		/* Initialize the base otx2_dev object
-		 * only if it is not already initialized
-		 */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc) {
- otx2_err("Failed to initialize otx2_dev rc=%d", rc);
- goto error;
- }
- }
- if (otx2_eth_dev_is_sdp(pci_dev))
- dev->sdp_link = true;
- else
- dev->sdp_link = false;
- /* Device generic callbacks */
- dev->ops = &otx2_dev_ops;
- dev->eth_dev = eth_dev;
-
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc)
- goto otx2_dev_uninit;
-
- dev->configured = 0;
- dev->drv_inited = true;
- dev->ptype_disable = 0;
- dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
-
- /* Attach NIX LF */
- rc = nix_lf_attach(dev);
- if (rc)
- goto otx2_npa_uninit;
-
- dev->base = dev->bar2 + (nix_get_blkaddr(dev) << 20);
-
- /* Get NIX MSIX offset */
- rc = nix_lf_get_msix_offset(dev);
- if (rc)
- goto otx2_npa_uninit;
-
- /* Register LF irq handlers */
- rc = otx2_nix_register_irqs(eth_dev);
- if (rc)
- goto mbox_detach;
-
- /* Get maximum number of supported MAC entries */
- max_entries = otx2_cgx_mac_max_entries_get(dev);
- if (max_entries < 0) {
- otx2_err("Failed to get max entries for mac addr");
- rc = -ENOTSUP;
- goto unregister_irq;
- }
-
-	/* For VFs, the returned max_entries will be 0. But to keep the
-	 * default MAC address, one entry must be allocated, so bump it
-	 * up to 1.
-	 */
- if (max_entries == 0)
- max_entries = 1;
-
- eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries *
- RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL) {
- otx2_err("Failed to allocate memory for mac addr");
- rc = -ENOMEM;
- goto unregister_irq;
- }
-
- dev->max_mac_entries = max_entries;
-
- rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr);
- if (rc)
- goto free_mac_addrs;
-
- /* Update the mac address */
- memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN);
-
- /* Also sync same MAC address to CGX table */
-	otx2_cgx_mac_addr_set(eth_dev, &eth_dev->data->mac_addrs[0]);
-
- /* Initialize the tm data structures */
- otx2_nix_tm_conf_init(eth_dev);
-
- dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
- dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
-
- if (otx2_dev_is_96xx_A0(dev) ||
- otx2_dev_is_95xx_Ax(dev)) {
- dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q;
- dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
- }
-
- /* Create security ctx */
- rc = otx2_eth_sec_ctx_create(eth_dev);
- if (rc)
- goto free_mac_addrs;
- dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
-
- /* Initialize rte-flow */
- rc = otx2_flow_init(dev);
- if (rc)
- goto sec_ctx_destroy;
-
- otx2_nix_mc_filter_init(dev);
-
- otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
- " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
- eth_dev->data->port_id, dev->pf, dev->vf,
- OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap,
- dev->rx_offload_capa, dev->tx_offload_capa);
- return 0;
-
-sec_ctx_destroy:
- otx2_eth_sec_ctx_destroy(eth_dev);
-free_mac_addrs:
- rte_free(eth_dev->data->mac_addrs);
-unregister_irq:
- otx2_nix_unregister_irqs(eth_dev);
-mbox_detach:
- otx2_eth_dev_lf_detach(dev->mbox);
-otx2_npa_uninit:
- otx2_npa_lf_fini();
-otx2_dev_uninit:
- otx2_dev_fini(pci_dev, dev);
-error:
- otx2_err("Failed to init nix eth_dev rc=%d", rc);
- return rc;
-}
-
-static int
-otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_pci_device *pci_dev;
- int rc, i;
-
- /* Nothing to be done for secondary processes */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* Clear the flag since we are closing down */
- dev->configured = 0;
-
- /* Disable nix bpid config */
- otx2_nix_rxchan_bpid_cfg(eth_dev, false);
-
- npc_rx_disable(dev);
-
- /* Disable vlan offloads */
- otx2_nix_vlan_fini(eth_dev);
-
- /* Disable other rte_flow entries */
- otx2_flow_fini(dev);
-
- /* Free multicast filter list */
- otx2_nix_mc_filter_fini(dev);
-
- /* Disable PTP if already enabled */
- if (otx2_ethdev_is_ptp_en(dev))
- otx2_nix_timesync_disable(eth_dev);
-
- nix_cgx_stop_link_event(dev);
-
-	/* Unregister the dev ops; this is required to stop VFs from
-	 * receiving link status updates on the exit path.
-	 */
- dev->ops = NULL;
-
- /* Free up SQs */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_release(eth_dev, i);
- eth_dev->data->nb_tx_queues = 0;
-
- /* Free up RQ's and CQ's */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
- otx2_nix_rx_queue_release(eth_dev, i);
- eth_dev->data->nb_rx_queues = 0;
-
- /* Free tm resources */
- rc = otx2_nix_tm_fini(eth_dev);
- if (rc)
- otx2_err("Failed to cleanup tm, rc=%d", rc);
-
- /* Unregister queue irqs */
- oxt2_nix_unregister_queue_irqs(eth_dev);
-
- /* Unregister cq irqs */
- if (eth_dev->data->dev_conf.intr_conf.rxq)
- oxt2_nix_unregister_cq_irqs(eth_dev);
-
- rc = nix_lf_free(dev);
- if (rc)
- otx2_err("Failed to free nix lf, rc=%d", rc);
-
- rc = otx2_npa_lf_fini();
- if (rc)
- otx2_err("Failed to cleanup npa lf, rc=%d", rc);
-
- /* Disable security */
- otx2_eth_sec_fini(eth_dev);
-
- /* Destroy security ctx */
- otx2_eth_sec_ctx_destroy(eth_dev);
-
- rte_free(eth_dev->data->mac_addrs);
- eth_dev->data->mac_addrs = NULL;
- dev->drv_inited = false;
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- otx2_nix_unregister_irqs(eth_dev);
-
- rc = otx2_eth_dev_lf_detach(dev->mbox);
- if (rc)
- otx2_err("Failed to detach resources, rc=%d", rc);
-
- /* Check if mbox close is needed */
- if (!mbox_close)
- return 0;
-
- if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) {
- /* Will be freed later by PMD */
- eth_dev->data->dev_private = NULL;
- return 0;
- }
-
- otx2_dev_fini(pci_dev, dev);
- return 0;
-}
-
-static int
-otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
-{
- otx2_eth_dev_uninit(eth_dev, true);
- return 0;
-}
-
-static int
-otx2_nix_dev_reset(struct rte_eth_dev *eth_dev)
-{
- int rc;
-
- rc = otx2_eth_dev_uninit(eth_dev, false);
- if (rc)
- return rc;
-
- return otx2_eth_dev_init(eth_dev);
-}
-
-static int
-nix_remove(struct rte_pci_device *pci_dev)
-{
- struct rte_eth_dev *eth_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_dev *otx2_dev;
- int rc;
-
- eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
- if (eth_dev) {
- /* Cleanup eth dev */
- rc = otx2_eth_dev_uninit(eth_dev, true);
- if (rc)
- return rc;
-
- rte_eth_dev_release_port(eth_dev);
- }
-
- /* Nothing to be done for secondary processes */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* Check for common resources */
- idev = otx2_intra_dev_get_cfg();
- if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev)
- return 0;
-
- otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf);
-
- if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev))
- goto exit;
-
- /* Safe to cleanup mbox as no more users */
- otx2_dev_fini(pci_dev, otx2_dev);
- rte_free(otx2_dev);
- return 0;
-
-exit:
- otx2_info("%s: common resource in use by other devices", pci_dev->name);
- return -EAGAIN;
-}
-
-static int
-nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- int rc;
-
- RTE_SET_USED(pci_drv);
-
- rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev),
- otx2_eth_dev_init);
-
-	/* On error in a secondary process, recheck whether the port
-	 * exists in the primary or is in the middle of detaching.
-	 */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
- if (!rte_eth_dev_allocated(pci_dev->device.name))
- return 0;
- return rc;
-}
-
-static const struct rte_pci_id pci_nix_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_AF_VF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SDP_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SDP_VF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_nix = {
- .id_table = pci_nix_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA |
- RTE_PCI_DRV_INTR_LSC,
- .probe = nix_probe,
- .remove = nix_remove,
-};
-
-RTE_PMD_REGISTER_PCI(OCTEONTX2_PMD, pci_nix);
-RTE_PMD_REGISTER_PCI_TABLE(OCTEONTX2_PMD, pci_nix_map);
-RTE_PMD_REGISTER_KMOD_DEP(OCTEONTX2_PMD, "vfio-pci");
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
deleted file mode 100644
index a5282c6c12..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ /dev/null
@@ -1,619 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_H__
-#define __OTX2_ETHDEV_H__
-
-#include <math.h>
-#include <stdint.h>
-
-#include <rte_common.h>
-#include <rte_ethdev.h>
-#include <rte_kvargs.h>
-#include <rte_mbuf.h>
-#include <rte_mempool.h>
-#include <rte_security_driver.h>
-#include <rte_spinlock.h>
-#include <rte_string_fns.h>
-#include <rte_time.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_flow.h"
-#include "otx2_irq.h"
-#include "otx2_mempool.h"
-#include "otx2_rx.h"
-#include "otx2_tm.h"
-#include "otx2_tx.h"
-
-#define OTX2_ETH_DEV_PMD_VERSION "1.0"
-
-/* Ethdev HWCAP and Fixup flags. Use from MSB bits to avoid conflict with dev */
-
-/* Minimum CQ size should be 4K */
-#define OTX2_FIXUP_F_MIN_4K_Q BIT_ULL(63)
-#define otx2_ethdev_fixup_is_min_4k_q(dev) \
- ((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q)
-/* Limit CQ being full */
-#define OTX2_FIXUP_F_LIMIT_CQ_FULL BIT_ULL(62)
-#define otx2_ethdev_fixup_is_limit_cq_full(dev) \
- ((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL)
-
-/* Used for struct otx2_eth_dev::flags */
-#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
-
-/* VLAN tag inserted by NIX_TX_VTAG_ACTION.
- * In Tx, space is always reserved for this in FRS.
- */
-#define NIX_MAX_VTAG_INS 2
-#define NIX_MAX_VTAG_ACT_SIZE (4 * NIX_MAX_VTAG_INS)
-
-/* ETH_HLEN+ETH_FCS+2*VLAN_HLEN */
-#define NIX_L2_OVERHEAD \
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8)
-#define NIX_L2_MAX_LEN \
- (RTE_ETHER_MTU + NIX_L2_OVERHEAD)
-
-/* HW config of frame size doesn't include FCS */
-#define NIX_MAX_HW_FRS 9212
-#define NIX_MIN_HW_FRS 60
-
-/* Since HW FRS includes NPC VTAG insertion space, user has reduced FRS */
-#define NIX_MAX_FRS \
- (NIX_MAX_HW_FRS + RTE_ETHER_CRC_LEN - NIX_MAX_VTAG_ACT_SIZE)
-
-#define NIX_MIN_FRS \
- (NIX_MIN_HW_FRS + RTE_ETHER_CRC_LEN)
-
-#define NIX_MAX_MTU \
- (NIX_MAX_FRS - NIX_L2_OVERHEAD)
-
-#define NIX_MAX_SQB 512
-#define NIX_DEF_SQB 16
-#define NIX_MIN_SQB 8
-#define NIX_SQB_LIST_SPACE 2
-#define NIX_RSS_RETA_SIZE_MAX 256
-/* Group 0 will be used for RSS, groups 1-7 will be used for rte_flow RSS action */
-#define NIX_RSS_GRPS 8
-#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
-#define NIX_RSS_RETA_SIZE 64
-#define NIX_RX_MIN_DESC 16
-#define NIX_RX_MIN_DESC_ALIGN 16
-#define NIX_RX_NB_SEG_MAX 6
-#define NIX_CQ_ENTRY_SZ 128
-#define NIX_CQ_ALIGN 512
-#define NIX_SQB_LOWER_THRESH 70
-#define LMT_SLOT_MASK 0x7f
-#define NIX_RX_DEFAULT_RING_SZ 4096
-
-/* If PTP is enabled, an additional SEND MEM DESC is required, which
- * takes 2 words; hence a max of 7 iova addresses is possible
- */
-#if defined(RTE_LIBRTE_IEEE1588)
-#define NIX_TX_NB_SEG_MAX 7
-#else
-#define NIX_TX_NB_SEG_MAX 9
-#endif
-
-#define NIX_TX_MSEG_SG_DWORDS \
- ((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3) \
- + NIX_TX_NB_SEG_MAX)
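A quick check of the NIX_TX_MSEG_SG_DWORDS arithmetic above: one SG header
dword covers up to three segments, plus one IOVA dword per segment:

    #include <stdio.h>

    int main(void)
    {
            int nb_seg_max = 9;                 /* non-IEEE1588 build */
            int sg_hdrs = (nb_seg_max + 2) / 3; /* ALIGN_MUL_CEIL(n,3)/3 */

            printf("dwords = %d\n", sg_hdrs + nb_seg_max); /* prints 12 */
            return 0;
    }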
-
-/* Apply BP/DROP when CQ is 95% full */
-#define NIX_CQ_THRESH_LEVEL (5 * 256 / 100)
-#define NIX_CQ_FULL_ERRATA_SKID (1024ull * 256)
-
-#define CQ_OP_STAT_OP_ERR 63
-#define CQ_OP_STAT_CQ_ERR 46
-
-#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
-#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
-
-#define CQ_CQE_THRESH_DEFAULT 0x1ULL /* IRQ triggered when
- * NIX_LF_CINTX_CNT[QCOUNT]
- * crosses this value
- */
-#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
-#define CQ_TIMER_THRESH_MAX 255
-
-#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
- | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-
-#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
- RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
- RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
- RTE_ETH_RSS_C_VLAN)
-
-#define NIX_TX_OFFLOAD_CAPA ( \
- RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
- RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
- RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
- RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_TCP_TSO | \
- RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
- RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
-
-#define NIX_RX_OFFLOAD_CAPA ( \
- RTE_ETH_RX_OFFLOAD_CHECKSUM | \
- RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
- RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- RTE_ETH_RX_OFFLOAD_SCATTER | \
- RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
- RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
- RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
- RTE_ETH_RX_OFFLOAD_RSS_HASH)
-
-#define NIX_DEFAULT_RSS_CTX_GROUP 0
-#define NIX_DEFAULT_RSS_MCAM_IDX -1
-
-#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en)
-
-#define NIX_TIMESYNC_TX_CMD_LEN 8
-/* Additional timesync values. */
-#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL
-
-#define OCTEONTX2_PMD net_octeontx2
-
-#define otx2_ethdev_is_same_driver(dev) \
- (strcmp((dev)->device->driver->name, RTE_STR(OCTEONTX2_PMD)) == 0)
-
-enum nix_q_size_e {
- nix_q_size_16, /* 16 entries */
- nix_q_size_64, /* 64 entries */
- nix_q_size_256,
- nix_q_size_1K,
- nix_q_size_4K,
- nix_q_size_16K,
- nix_q_size_64K,
- nix_q_size_256K,
- nix_q_size_1M, /* Million entries */
- nix_q_size_max
-};
-
-enum nix_lso_tun_type {
- NIX_LSO_TUN_V4V4,
- NIX_LSO_TUN_V4V6,
- NIX_LSO_TUN_V6V4,
- NIX_LSO_TUN_V6V6,
- NIX_LSO_TUN_MAX,
-};
-
-struct otx2_qint {
- struct rte_eth_dev *eth_dev;
- uint8_t qintx;
-};
-
-struct otx2_rss_info {
- uint64_t nix_rss;
- uint32_t flowkey_cfg;
- uint16_t rss_size;
- uint8_t rss_grps;
- uint8_t alg_idx; /* Selected algo index */
- uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX];
- uint8_t key[NIX_HASH_KEY_SIZE];
-};
-
-struct otx2_eth_qconf {
- union {
- struct rte_eth_txconf tx;
- struct rte_eth_rxconf rx;
- } conf;
- void *mempool;
- uint32_t socket_id;
- uint16_t nb_desc;
- uint8_t valid;
-};
-
-struct otx2_fc_info {
- enum rte_eth_fc_mode mode; /**< Link flow control mode */
- uint8_t rx_pause;
- uint8_t tx_pause;
- uint8_t chan_cnt;
- uint16_t bpid[NIX_MAX_CHAN];
-};
-
-struct vlan_mkex_info {
- struct npc_xtract_info la_xtract;
- struct npc_xtract_info lb_xtract;
- uint64_t lb_lt_offset;
-};
-
-struct mcast_entry {
- struct rte_ether_addr mcast_mac;
- uint16_t mcam_index;
- TAILQ_ENTRY(mcast_entry) next;
-};
-
-TAILQ_HEAD(otx2_nix_mc_filter_tbl, mcast_entry);
-
-struct vlan_entry {
- uint32_t mcam_idx;
- uint16_t vlan_id;
- TAILQ_ENTRY(vlan_entry) next;
-};
-
-TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry);
-
-struct otx2_vlan_info {
- struct otx2_vlan_filter_tbl fltr_tbl;
- /* MKEX layer info */
- struct mcam_entry def_tx_mcam_ent;
- struct mcam_entry def_rx_mcam_ent;
- struct vlan_mkex_info mkex;
- /* Default mcam entry that matches vlan packets */
- uint32_t def_rx_mcam_idx;
- uint32_t def_tx_mcam_idx;
- /* MCAM entry that matches double vlan packets */
- uint32_t qinq_mcam_idx;
- /* Indices of tx_vtag def registers */
- uint32_t outer_vlan_idx;
- uint32_t inner_vlan_idx;
- uint16_t outer_vlan_tpid;
- uint16_t inner_vlan_tpid;
- uint16_t pvid;
- /* QinQ entry allocated before default one */
- uint8_t qinq_before_def;
- uint8_t pvid_insert_on;
- /* Rx vtag action type */
- uint8_t vtag_type_idx;
- uint8_t filter_on;
- uint8_t strip_on;
- uint8_t qinq_on;
- uint8_t promisc_on;
-};
-
-struct otx2_eth_dev {
- OTX2_DEV; /* Base class */
- RTE_MARKER otx2_eth_dev_data_start;
- uint16_t sqb_size;
- uint16_t rx_chan_base;
- uint16_t tx_chan_base;
- uint8_t rx_chan_cnt;
- uint8_t tx_chan_cnt;
- uint8_t lso_tsov4_idx;
- uint8_t lso_tsov6_idx;
- uint8_t lso_udp_tun_idx[NIX_LSO_TUN_MAX];
- uint8_t lso_tun_idx[NIX_LSO_TUN_MAX];
- uint64_t lso_tun_fmt;
- uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
- uint8_t mkex_pfl_name[MKEX_NAME_LEN];
- uint8_t max_mac_entries;
- bool dmac_filter_enable;
- uint8_t lf_tx_stats;
- uint8_t lf_rx_stats;
- uint8_t lock_rx_ctx;
- uint8_t lock_tx_ctx;
- uint16_t flags;
- uint16_t cints;
- uint16_t qints;
- uint8_t configured;
- uint8_t configured_qints;
- uint8_t configured_cints;
- uint8_t configured_nb_rx_qs;
- uint8_t configured_nb_tx_qs;
- uint8_t ptype_disable;
- uint16_t nix_msixoff;
- uintptr_t base;
- uintptr_t lmt_addr;
- uint16_t scalar_ena;
- uint16_t rss_tag_as_xor;
- uint16_t max_sqb_count;
- uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
- uint64_t rx_offloads;
- uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
- uint64_t tx_offloads;
- uint64_t rx_offload_capa;
- uint64_t tx_offload_capa;
- struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
- struct otx2_qint cints_mem[RTE_MAX_QUEUES_PER_PORT];
- uint16_t txschq[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
- /* Discontiguous queues */
- uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- /* Contiguous queues */
- uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- uint16_t otx2_tm_root_lvl;
- uint16_t link_cfg_lvl;
- uint16_t tm_flags;
- uint16_t tm_leaf_cnt;
- uint64_t tm_rate_min;
- struct otx2_nix_tm_node_list node_list;
- struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
- struct otx2_rss_info rss_info;
- struct otx2_fc_info fc_info;
- uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
- uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
- struct otx2_npc_flow_info npc_flow;
- struct otx2_vlan_info vlan_info;
- struct otx2_eth_qconf *tx_qconf;
- struct otx2_eth_qconf *rx_qconf;
- struct rte_eth_dev *eth_dev;
- eth_rx_burst_t rx_pkt_burst_no_offload;
- /* PTP counters */
- bool ptp_en;
- struct otx2_timesync_info tstamp;
- struct rte_timecounter systime_tc;
- struct rte_timecounter rx_tstamp_tc;
- struct rte_timecounter tx_tstamp_tc;
- double clk_freq_mult;
- uint64_t clk_delta;
- bool mc_tbl_set;
- struct otx2_nix_mc_filter_tbl mc_fltr_tbl;
- bool sdp_link; /* SDP flag */
- /* Inline IPsec params */
- uint16_t ipsec_in_max_spi;
- rte_spinlock_t ipsec_tbl_lock;
- uint8_t duplex;
- uint32_t speed;
-} __rte_cache_aligned;
-
-struct otx2_eth_txq {
- uint64_t cmd[8];
- int64_t fc_cache_pkts;
- uint64_t *fc_mem;
- void *lmt_addr;
- rte_iova_t io_addr;
- rte_iova_t fc_iova;
- uint16_t sqes_per_sqb_log2;
- int16_t nb_sqb_bufs_adj;
- uint64_t lso_tun_fmt;
- RTE_MARKER slow_path_start;
- uint16_t nb_sqb_bufs;
- uint16_t sq;
- uint64_t offloads;
- struct otx2_eth_dev *dev;
- struct rte_mempool *sqb_pool;
- struct otx2_eth_qconf qconf;
-} __rte_cache_aligned;
-
-struct otx2_eth_rxq {
- uint64_t mbuf_initializer;
- uint64_t data_off;
- uintptr_t desc;
- void *lookup_mem;
- uintptr_t cq_door;
- uint64_t wdata;
- int64_t *cq_status;
- uint32_t head;
- uint32_t qmask;
- uint32_t available;
- uint16_t rq;
- struct otx2_timesync_info *tstamp;
- RTE_MARKER slow_path_start;
- uint64_t aura;
- uint64_t offloads;
- uint32_t qlen;
- struct rte_mempool *pool;
- enum nix_q_size_e qsize;
- struct rte_eth_dev *eth_dev;
- struct otx2_eth_qconf qconf;
- uint16_t cq_drop;
-} __rte_cache_aligned;
-
-static inline struct otx2_eth_dev *
-otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
-{
- return eth_dev->data->dev_private;
-}
-
-/* Ops */
-int otx2_nix_info_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_info *dev_info);
-int otx2_nix_dev_flow_ops_get(struct rte_eth_dev *eth_dev,
- const struct rte_flow_ops **ops);
-int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
- size_t fw_size);
-int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_module_info *modinfo);
-int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
- struct rte_dev_eeprom_info *info);
-int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
-void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_rxq_info *qinfo);
-void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_txq_info *qinfo);
-int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
- struct rte_eth_burst_mode *mode);
-int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
- struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(void *rx_queue);
-int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
-int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
-
-void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
-int otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
-int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
-uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
-
-/* Multicast filter APIs */
-void otx2_nix_mc_filter_init(struct otx2_eth_dev *dev);
-void otx2_nix_mc_filter_fini(struct otx2_eth_dev *dev);
-int otx2_nix_mc_addr_list_install(struct rte_eth_dev *eth_dev);
-int otx2_nix_mc_addr_list_uninstall(struct rte_eth_dev *eth_dev);
-int otx2_nix_set_mc_addr_list(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *mc_addr_set,
- uint32_t nb_mc_addr);
-
-/* MTU */
-int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
-int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev);
-void otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq);
-
-
-/* Link */
-void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
-int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
-void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-void otx2_eth_dev_link_status_get(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-int otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev);
-int otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev);
-int otx2_apply_link_speed(struct rte_eth_dev *eth_dev);
-
-/* IRQ */
-int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
-int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
-int oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev);
-void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
-void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
-void oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev);
-void otx2_nix_err_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb);
-void otx2_nix_ras_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb);
-
-int otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id);
-int otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id);
-
-/* Debug */
-int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
-int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
- struct rte_dev_reg_info *regs);
-int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
-void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
-void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
-
-/* Stats */
-int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats);
-int otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
- uint16_t queue_id, uint8_t stat_idx,
- uint8_t is_rx);
-int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat *xstats, unsigned int n);
-int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit);
-int otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- uint64_t *values, unsigned int n);
-int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit);
-
-/* RSS */
-void otx2_nix_rss_set_key(struct otx2_eth_dev *dev,
- uint8_t *key, uint32_t key_len);
-uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev,
- uint64_t ethdev_rss, uint8_t rss_level);
-int otx2_rss_set_hf(struct otx2_eth_dev *dev,
- uint32_t flowkey_cfg, uint8_t *alg_idx,
- uint8_t group, int mcam_index);
-int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group,
- uint16_t *ind_tbl);
-int otx2_nix_rss_config(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size);
-int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size);
-int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf);
-
-int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf);
-
-/* CGX */
-int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
-int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
-int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr);
-
-/* Flow Control */
-int otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf);
-
-int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf);
-
-int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
-
-int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
-
-/* VLAN */
-int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
-int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
-int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
-void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
-int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
- int on);
-void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
- uint16_t queue, int on);
-int otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, uint16_t tpid);
-int otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
-
-/* Lookup configuration */
-void *otx2_nix_fastpath_lookup_mem_get(void);
-
-/* PTYPES */
-const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev);
-int otx2_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask);
-
-/* Mac address handling */
-int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr);
-int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
-int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr,
- uint32_t index, uint32_t pool);
-void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index);
-int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
-
-/* Devargs */
-int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
- struct otx2_eth_dev *dev);
-
-/* Rx and Tx routines */
-void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
-void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev);
-void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
-
-/* Timesync - PTP routines */
-int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp,
- uint32_t flags);
-int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp);
-int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta);
-int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
- const struct timespec *ts);
-int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev,
- struct timespec *ts);
-int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
-int otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *time);
-int otx2_nix_raw_clock_tsc_conv(struct otx2_eth_dev *dev);
-void otx2_nix_ptp_enable_vf(struct rte_eth_dev *eth_dev);
-
-#endif /* __OTX2_ETHDEV_H__ */
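For readers skimming this removal, otx2_eth_pmd_priv() above is the standard ethdev private-data accessor: each port's driver state lives behind eth_dev->data->dev_private. A minimal sketch of the same pattern, with illustrative names (not driver code):

    #include <rte_ethdev.h>

    /* Hypothetical per-port private state; field names illustrative. */
    struct example_priv {
            uint64_t rx_offloads;
            uint64_t tx_offloads;
    };

    static inline struct example_priv *
    example_pmd_priv(struct rte_eth_dev *eth_dev)
    {
            /* dev_private is allocated per port at probe time and holds
             * the driver's whole per-device state. */
            return eth_dev->data->dev_private;
    }
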
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
deleted file mode 100644
index 6d951bc7e2..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ /dev/null
@@ -1,811 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
-#define NIX_REG_INFO(reg) {reg, #reg}
-#define NIX_REG_NAME_SZ 48
-
-struct nix_lf_reg_info {
- uint32_t offset;
- const char *name;
-};
-
-static const struct
-nix_lf_reg_info nix_lf_reg[] = {
- NIX_REG_INFO(NIX_LF_RX_SECRETX(0)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(1)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(2)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(3)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(4)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(5)),
- NIX_REG_INFO(NIX_LF_CFG),
- NIX_REG_INFO(NIX_LF_GINT),
- NIX_REG_INFO(NIX_LF_GINT_W1S),
- NIX_REG_INFO(NIX_LF_GINT_ENA_W1C),
- NIX_REG_INFO(NIX_LF_GINT_ENA_W1S),
- NIX_REG_INFO(NIX_LF_ERR_INT),
- NIX_REG_INFO(NIX_LF_ERR_INT_W1S),
- NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C),
- NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S),
- NIX_REG_INFO(NIX_LF_RAS),
- NIX_REG_INFO(NIX_LF_RAS_W1S),
- NIX_REG_INFO(NIX_LF_RAS_ENA_W1C),
- NIX_REG_INFO(NIX_LF_RAS_ENA_W1S),
- NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG),
- NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG),
- NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
-};
-
-static int
-nix_lf_get_reg_count(struct otx2_eth_dev *dev)
-{
- int reg_count = 0;
-
- reg_count = RTE_DIM(nix_lf_reg);
- /* NIX_LF_TX_STATX */
- reg_count += dev->lf_tx_stats;
- /* NIX_LF_RX_STATX */
- reg_count += dev->lf_rx_stats;
- /* NIX_LF_QINTX_CNT*/
- reg_count += dev->qints;
- /* NIX_LF_QINTX_INT */
- reg_count += dev->qints;
- /* NIX_LF_QINTX_ENA_W1S */
- reg_count += dev->qints;
- /* NIX_LF_QINTX_ENA_W1C */
- reg_count += dev->qints;
- /* NIX_LF_CINTX_CNT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_WAIT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_INT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_INT_W1S */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_ENA_W1S */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_ENA_W1C */
- reg_count += dev->cints;
-
- return reg_count;
-}
-
-int
-otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data)
-{
- uintptr_t nix_lf_base = dev->base;
- bool dump_stdout;
- uint64_t reg;
- uint32_t i;
-
- dump_stdout = data ? 0 : 1;
-
- for (i = 0; i < RTE_DIM(nix_lf_reg); i++) {
- reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset);
- if (dump_stdout && reg)
- nix_dump("%32s = 0x%" PRIx64,
- nix_lf_reg[i].name, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_TX_STATX */
- for (i = 0; i < dev->lf_tx_stats; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_TX_STATX", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_RX_STATX */
- for (i = 0; i < dev->lf_rx_stats; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_RX_STATX", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_CNT*/
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_CNT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_INT */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_INT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_ENA_W1S */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_ENA_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_ENA_W1C */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_ENA_W1C", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_CNT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_CNT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_WAIT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_WAIT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_INT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_INT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_INT_W1S */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_INT_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_ENA_W1S */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_ENA_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_ENA_W1C */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_ENA_W1C", i, reg);
- if (data)
- *data++ = reg;
- }
- return 0;
-}
-
-int
-otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t *data = regs->data;
-
- if (data == NULL) {
- regs->length = nix_lf_get_reg_count(dev);
- regs->width = 8;
- return 0;
- }
-
- if (!regs->length ||
- regs->length == (uint32_t)nix_lf_get_reg_count(dev)) {
- otx2_nix_reg_dump(dev, data);
- return 0;
- }
-
- return -ENOTSUP;
-}
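otx2_nix_dev_get_reg() above implements the usual two-phase contract behind rte_eth_dev_get_reg_info(): a first call with data == NULL reports the register count and width, and a second call fills the buffer. A hedged caller sketch (error handling trimmed):

    #include <errno.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>
    #include <rte_dev_info.h>

    static int
    dump_port_regs(uint16_t port_id)
    {
            struct rte_dev_reg_info info = { 0 };
            int rc;

            /* Phase 1: data == NULL, learn length and width only. */
            rc = rte_eth_dev_get_reg_info(port_id, &info);
            if (rc != 0)
                    return rc;

            info.data = calloc(info.length, info.width);
            if (info.data == NULL)
                    return -ENOMEM;

            /* Phase 2: same call with a buffer, fills info.data[]. */
            rc = rte_eth_dev_get_reg_info(port_id, &info);
            free(info.data);
            return rc;
    }
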
-
-static inline void
-nix_lf_sq_dump(__otx2_io struct nix_sq_ctx_s *ctx)
-{
- nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
- ctx->sqe_way_mask, ctx->cq);
- nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
- ctx->sdp_mcast, ctx->substream);
- nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n",
- ctx->qint_idx, ctx->ena);
-
- nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
- ctx->sqb_count, ctx->default_chan);
- nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d",
- ctx->smq_rr_quantum, ctx->sso_ena);
- nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
- ctx->xoff, ctx->cq_ena, ctx->smq);
-
- nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
- ctx->sqe_stype, ctx->sq_int_ena);
- nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d",
- ctx->sq_int, ctx->sqb_aura);
- nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count);
-
- nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
- ctx->smq_next_sq_vld, ctx->smq_pend);
- nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
- ctx->smenq_next_sqb_vld, ctx->head_offset);
- nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
- ctx->smenq_offset, ctx->tail_offset);
- nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
- ctx->smq_lso_segnum, ctx->smq_next_sq);
- nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d",
- ctx->mnq_dis, ctx->lmt_dis);
- nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
- ctx->cq_limit, ctx->max_sqe_size);
-
- nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
- nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
- nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
- nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
- nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
-
- nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
- ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
- nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
- ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
- nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
- ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
- nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
-
- nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
- (uint64_t)ctx->scm_lso_rem);
- nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
- nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
- nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
- (uint64_t)ctx->drop_octs);
- nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
- (uint64_t)ctx->drop_pkts);
-}
-
-static inline void
-nix_lf_rq_dump(__otx2_io struct nix_rq_ctx_s *ctx)
-{
- nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
- ctx->wqe_aura, ctx->substream);
- nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d",
- ctx->cq, ctx->ena_wqwd);
- nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
- ctx->ipsech_ena, ctx->sso_ena);
- nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
-
- nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
- ctx->lpb_drop_ena, ctx->spb_drop_ena);
- nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
- ctx->xqe_drop_ena, ctx->wqe_caching);
- nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
- ctx->pb_caching, ctx->sso_tt);
- nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d",
- ctx->sso_grp, ctx->lpb_aura);
- nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
-
- nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
- ctx->xqe_hdr_split, ctx->xqe_imm_copy);
- nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
- ctx->xqe_imm_size, ctx->later_skip);
- nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
- ctx->first_skip, ctx->lpb_sizem1);
- nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d",
- ctx->spb_ena, ctx->wqe_skip);
- nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1);
-
- nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
- ctx->spb_pool_pass, ctx->spb_pool_drop);
- nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
- ctx->spb_aura_pass, ctx->spb_aura_drop);
- nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
- ctx->wqe_pool_pass, ctx->wqe_pool_drop);
- nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
- ctx->xqe_pass, ctx->xqe_drop);
-
- nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
- ctx->qint_idx, ctx->rq_int_ena);
- nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d",
- ctx->rq_int, ctx->lpb_pool_pass);
- nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
- ctx->lpb_pool_drop, ctx->lpb_aura_pass);
- nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
-
- nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
- ctx->flow_tagw, ctx->bad_utag);
- nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n",
- ctx->good_utag, ctx->ltag);
-
- nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
- nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
- nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
- nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
- nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
-}
-
-static inline void
-nix_lf_cq_dump(__otx2_io struct nix_cq_ctx_s *ctx)
-{
- nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
-
- nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr);
- nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d",
- ctx->avg_con, ctx->cint_idx);
- nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d",
- ctx->cq_err, ctx->qint_idx);
- nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n",
- ctx->bpid, ctx->bp_ena);
-
- nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
- ctx->update_time, ctx->avg_level);
- nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n",
- ctx->head, ctx->tail);
-
- nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d",
- ctx->cq_err_int_ena, ctx->cq_err_int);
- nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d",
- ctx->qsize, ctx->caching);
- nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d",
- ctx->substream, ctx->ena);
- nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d",
- ctx->drop_ena, ctx->drop);
- nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp);
-}
-
-int
-otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, q, rq = eth_dev->data->nb_rx_queues;
- int sq = eth_dev->data->nb_tx_queues;
- struct otx2_mbox *mbox = dev->mbox;
- struct npa_aq_enq_rsp *npa_rsp;
- struct npa_aq_enq_req *npa_aq;
- struct otx2_npa_lf *npa_lf;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
-
- npa_lf = otx2_npa_lf_obj_get();
-
- for (q = 0; q < rq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get cq context");
- goto fail;
- }
- nix_dump("============== port=%d cq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_cq_dump(&rsp->cq);
- }
-
- for (q = 0; q < rq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
- if (rc) {
- otx2_err("Failed to get rq context");
- goto fail;
- }
- nix_dump("============== port=%d rq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_rq_dump(&rsp->rq);
- }
- for (q = 0; q < sq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get sq context");
- goto fail;
- }
- nix_dump("============== port=%d sq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_sq_dump(&rsp->sq);
-
- if (!npa_lf) {
- otx2_err("NPA LF doesn't exist");
- continue;
- }
-
- /* Dump SQB Aura minimal info */
- npa_aq = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- npa_aq->aura_id = rsp->sq.sqb_aura;
- npa_aq->ctype = NPA_AQ_CTYPE_AURA;
- npa_aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(npa_lf->mbox, (void *)&npa_rsp);
- if (rc) {
- otx2_err("Failed to get sq's sqb_aura context");
- continue;
- }
-
- nix_dump("\nSQB Aura W0: Pool addr\t\t0x%"PRIx64"",
- npa_rsp->aura.pool_addr);
- nix_dump("SQB Aura W1: ena\t\t\t%d",
- npa_rsp->aura.ena);
- nix_dump("SQB Aura W2: count\t\t%"PRIx64"",
- (uint64_t)npa_rsp->aura.count);
- nix_dump("SQB Aura W3: limit\t\t%"PRIx64"",
- (uint64_t)npa_rsp->aura.limit);
- nix_dump("SQB Aura W3: fc_ena\t\t%d",
- npa_rsp->aura.fc_ena);
- nix_dump("SQB Aura W4: fc_addr\t\t0x%"PRIx64"\n",
- npa_rsp->aura.fc_addr);
- }
-
-fail:
- return rc;
-}
-
-/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
-void
-otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
-
- nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
- cq->tag, cq->q, cq->node, cq->cqe_type);
-
- nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
- rx->chan, rx->desc_sizem1);
- nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
- rx->imm_copy, rx->express);
- nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
- rx->wqwd, rx->errlev, rx->errcode);
- nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
- rx->latype, rx->lbtype, rx->lctype);
- nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
- rx->ldtype, rx->letype, rx->lftype);
- nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
- rx->lgtype, rx->lhtype);
-
- nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
- nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
- rx->l2m, rx->l2b, rx->l3m, rx->l3b);
- nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
- rx->vtag0_valid, rx->vtag0_gone);
- nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
- rx->vtag1_valid, rx->vtag1_gone);
- nix_dump("W1: pkind \t%d", rx->pkind);
- nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
- rx->vtag0_tci, rx->vtag1_tci);
-
- nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
- rx->laflags, rx->lbflags, rx->lcflags);
- nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
- rx->ldflags, rx->leflags, rx->lfflags);
- nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
- rx->lgflags, rx->lhflags);
-
- nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
- rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
- nix_dump("W3: match_id \t%d", rx->match_id);
-
- nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
- rx->laptr, rx->lbptr, rx->lcptr);
- nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
- rx->ldptr, rx->leptr, rx->lfptr);
- nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
-
- nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
- rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
-}
-
-static uint8_t
-prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
- uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
-{
- uint8_t k = 0;
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- reg[k] = NIX_AF_SMQX_CFG(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_SMQ[%u]_CFG", schq);
-
- reg[k] = NIX_AF_MDQX_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_MDQX_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_PIR", schq);
-
- reg[k] = NIX_AF_MDQX_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_CIR", schq);
-
- reg[k] = NIX_AF_MDQX_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq);
-
- reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL4X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL4X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL4X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
-
- reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL3X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL3X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL3X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
-
- reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL2X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL2X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL2X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL1:
-
- reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL1X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_SW_XOFF", schq);
-
- reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq);
- break;
- default:
- break;
- }
-
- if (k > MAX_REGS_PER_MBOX_MSG) {
- nix_dump("\t!!!NIX TM Registers request overflow!!!");
- return 0;
- }
- return k;
-}
-
-/* Dump TM hierarchy and registers */
-void
-otx2_nix_tm_dump(struct otx2_eth_dev *dev)
-{
- char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ];
- struct otx2_nix_tm_node *tm_node, *root_node, *parent;
- uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2];
- struct nix_txschq_config *req;
- const char *lvlstr, *parent_lvlstr;
- struct nix_txschq_config *rsp;
- uint32_t schq, parent_schq;
- int hw_lvl, j, k, rc;
-
- nix_dump("===TM hierarchy and registers dump of %s===",
- dev->eth_dev->data->name);
-
- root_node = NULL;
-
- for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++) {
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != hw_lvl)
- continue;
-
- parent = tm_node->parent;
- if (hw_lvl == NIX_TXSCH_LVL_CNT) {
- lvlstr = "SQ";
- schq = tm_node->id;
- } else {
- lvlstr = nix_hwlvl2str(tm_node->hw_lvl);
- schq = tm_node->hw_id;
- }
-
- if (parent) {
- parent_schq = parent->hw_id;
- parent_lvlstr =
- nix_hwlvl2str(parent->hw_lvl);
- } else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
- parent_schq = otx2_nix_get_link(dev);
- parent_lvlstr = "LINK";
- } else {
- parent_schq = tm_node->parent_hw_id;
- parent_lvlstr =
- nix_hwlvl2str(tm_node->hw_lvl + 1);
- }
-
- nix_dump("%s_%d->%s_%d", lvlstr, schq,
- parent_lvlstr, parent_schq);
-
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- /* Need to dump TL1 when root is TL2 */
- if (tm_node->hw_lvl == dev->otx2_tm_root_lvl)
- root_node = tm_node;
-
- /* Dump registers only when HWRES is present */
- k = prepare_nix_tm_reg_dump(tm_node->hw_lvl, schq,
- otx2_nix_get_link(dev), reg,
- regstr);
- if (!k)
- continue;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->read = 1;
- req->lvl = tm_node->hw_lvl;
- req->num_regs = k;
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (!rc) {
- for (j = 0; j < k; j++)
- nix_dump("\t%s=0x%016"PRIx64,
- regstr[j], rsp->regval[j]);
- } else {
- nix_dump("\t!!!Failed to dump registers!!!");
- }
- }
- nix_dump("\n");
- }
-
- /* Dump TL1 node data when root level is TL2 */
- if (root_node && root_node->hw_lvl == NIX_TXSCH_LVL_TL2) {
- k = prepare_nix_tm_reg_dump(NIX_TXSCH_LVL_TL1,
- root_node->parent_hw_id,
- otx2_nix_get_link(dev),
- reg, regstr);
- if (!k)
- return;
-
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->read = 1;
- req->lvl = NIX_TXSCH_LVL_TL1;
- req->num_regs = k;
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (!rc) {
- for (j = 0; j < k; j++)
- nix_dump("\t%s=0x%016"PRIx64,
- regstr[j], rsp->regval[j]);
- } else {
- nix_dump("\t!!!Failed to dump registers!!!");
- }
- }
-
- otx2_nix_queues_ctx_dump(dev->eth_dev);
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
deleted file mode 100644
index 60bf6c3f5f..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ /dev/null
@@ -1,215 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-#include <math.h>
-
-#include "otx2_ethdev.h"
-
-static int
-parse_flow_max_priority(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint16_t val;
-
- val = atoi(value);
-
- /* Limit the max priority to 32 */
- if (val < 1 || val > 32)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_flow_prealloc_size(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint16_t val;
-
- val = atoi(value);
-
- /* Limit the prealloc size to 32 */
- if (val < 1 || val > 32)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_reta_size(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- if (val <= RTE_ETH_RSS_RETA_SIZE_64)
- val = RTE_ETH_RSS_RETA_SIZE_64;
- else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
- val = RTE_ETH_RSS_RETA_SIZE_128;
- else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
- val = RTE_ETH_RSS_RETA_SIZE_256;
- else
- val = NIX_RSS_RETA_SIZE;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_flag(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
-
- *(uint16_t *)extra_args = atoi(value);
-
- return 0;
-}
-
-static int
-parse_sqb_count(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- if (val < NIX_MIN_SQB || val > NIX_MAX_SQB)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_switch_header_type(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
-
- if (strcmp(value, "higig2") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_HIGIG;
-
- if (strcmp(value, "dsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_EDSA;
-
- if (strcmp(value, "chlen90b") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_CH_LEN_90B;
-
- if (strcmp(value, "chlen24b") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_CH_LEN_24B;
-
- if (strcmp(value, "exdsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_EXDSA;
-
- if (strcmp(value, "vlan_exdsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_VLAN_EXDSA;
-
- return 0;
-}
-
-#define OTX2_RSS_RETA_SIZE "reta_size"
-#define OTX2_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
-#define OTX2_SCL_ENABLE "scalar_enable"
-#define OTX2_MAX_SQB_COUNT "max_sqb_count"
-#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size"
-#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
-#define OTX2_SWITCH_HEADER_TYPE "switch_header"
-#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
-#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
-#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
-
-int
-otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
-{
- uint16_t rss_size = NIX_RSS_RETA_SIZE;
- uint16_t sqb_count = NIX_MAX_SQB;
- uint16_t flow_prealloc_size = 8;
- uint16_t switch_header_type = 0;
- uint16_t flow_max_priority = 3;
- uint16_t ipsec_in_max_spi = 1;
- uint16_t rss_tag_as_xor = 0;
- uint16_t scalar_enable = 0;
- struct rte_kvargs *kvlist;
- uint16_t lock_rx_ctx = 0;
- uint16_t lock_tx_ctx = 0;
-
- if (devargs == NULL)
- goto null_devargs;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- goto exit;
-
- rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE,
- &parse_reta_size, &rss_size);
- rte_kvargs_process(kvlist, OTX2_IPSEC_IN_MAX_SPI,
- &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
- rte_kvargs_process(kvlist, OTX2_SCL_ENABLE,
- &parse_flag, &scalar_enable);
- rte_kvargs_process(kvlist, OTX2_MAX_SQB_COUNT,
- &parse_sqb_count, &sqb_count);
- rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE,
- &parse_flow_prealloc_size, &flow_prealloc_size);
- rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY,
- &parse_flow_max_priority, &flow_max_priority);
- rte_kvargs_process(kvlist, OTX2_SWITCH_HEADER_TYPE,
- &parse_switch_header_type, &switch_header_type);
- rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
- &parse_flag, &rss_tag_as_xor);
- rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
- &parse_flag, &lock_rx_ctx);
- rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
- &parse_flag, &lock_tx_ctx);
- otx2_parse_common_devargs(kvlist);
- rte_kvargs_free(kvlist);
-
-null_devargs:
- dev->ipsec_in_max_spi = ipsec_in_max_spi;
- dev->scalar_ena = scalar_enable;
- dev->rss_tag_as_xor = rss_tag_as_xor;
- dev->max_sqb_count = sqb_count;
- dev->lock_rx_ctx = lock_rx_ctx;
- dev->lock_tx_ctx = lock_tx_ctx;
- dev->rss_info.rss_size = rss_size;
- dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
- dev->npc_flow.flow_max_priority = flow_max_priority;
- dev->npc_flow.switch_header_type = switch_header_type;
- return 0;
-
-exit:
- return -EINVAL;
-}
-
-RTE_PMD_REGISTER_PARAM_STRING(OCTEONTX2_PMD,
- OTX2_RSS_RETA_SIZE "=<64|128|256>"
- OTX2_IPSEC_IN_MAX_SPI "=<1-65535>"
- OTX2_SCL_ENABLE "=1"
- OTX2_MAX_SQB_COUNT "=<8-512>"
- OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
- OTX2_FLOW_MAX_PRIORITY "=<1-32>"
- OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b|chlen24b>"
- OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>"
- OTX2_LOCK_RX_CTX "=1"
- OTX2_LOCK_TX_CTX "=1");
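The parameter string above is how these keys were supplied at probe time; a typical invocation passes them as devargs on the allow list, e.g. (PCI address illustrative):

    dpdk-testpmd -a 0002:02:00.0,reta_size=256,max_sqb_count=64,lock_rx_ctx=1 -- -i
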
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
deleted file mode 100644
index cc573bb2e8..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ /dev/null
@@ -1,493 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_bus_pci.h>
-#include <rte_malloc.h>
-
-#include "otx2_ethdev.h"
-
-static void
-nix_lf_err_irq(void *param)
-{
- struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_ERR_INT);
- if (intr == 0)
- return;
-
- otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-static int
-nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_nix_err_intr_enb_dis(eth_dev, false);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec);
- /* Enable all dev error interrupts except RQ_DISABLED and CQ_DISABLED */
- otx2_nix_err_intr_enb_dis(eth_dev, true);
-
- return rc;
-}
-
-static void
-nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_nix_err_intr_enb_dis(eth_dev, false);
- otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec);
-}
-
-static void
-nix_lf_ras_irq(void *param)
-{
- struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_RAS);
- if (intr == 0)
- return;
-
- otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_RAS);
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-static int
-nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, false);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec);
- /* Enable dev interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, true);
-
- return rc;
-}
-
-static void
-nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, false);
- otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
-}
-
-static inline uint8_t
-nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q,
- uint32_t off, uint64_t mask)
-{
- uint64_t reg, wdata;
- uint8_t qint;
-
- wdata = (uint64_t)q << 44;
- reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off));
-
- if (reg & BIT_ULL(42) /* OP_ERR */) {
- otx2_err("Failed execute irq get off=0x%x", off);
- return 0;
- }
-
- qint = reg & 0xff;
- wdata &= mask;
- otx2_write64(wdata | qint, dev->base + off);
-
- return qint;
-}
-
-static inline uint8_t
-nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq)
-{
- return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq)
-{
- return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq)
-{
- return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
-}
-
-static inline void
-nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
-{
- uint64_t reg;
-
- reg = otx2_read64(dev->base + off);
- if (reg & BIT_ULL(44))
- otx2_err("SQ=%d err_code=0x%x",
- (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
-}
-
-static void
-nix_lf_cq_irq(void *param)
-{
- struct otx2_qint *cint = (struct otx2_qint *)param;
- struct rte_eth_dev *eth_dev = cint->eth_dev;
- struct otx2_eth_dev *dev;
-
- dev = otx2_eth_pmd_priv(eth_dev);
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_INT(cint->qintx));
-}
-
-static void
-nix_lf_q_irq(void *param)
-{
- struct otx2_qint *qint = (struct otx2_qint *)param;
- struct rte_eth_dev *eth_dev = qint->eth_dev;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t irq, qintx = qint->qintx;
- int q, cq, rq, sq;
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx));
- if (intr == 0)
- return;
-
- otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d",
- intr, qintx, dev->pf, dev->vf);
-
- /* Handle RQ interrupts */
- for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
- rq = q % dev->qints;
- irq = nix_lf_rq_irq_get_and_clear(dev, rq);
-
- if (irq & BIT_ULL(NIX_RQINT_DROP))
- otx2_err("RQ=%d NIX_RQINT_DROP", rq);
-
- if (irq & BIT_ULL(NIX_RQINT_RED))
- otx2_err("RQ=%d NIX_RQINT_RED", rq);
- }
-
- /* Handle CQ interrupts */
- for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
- cq = q % dev->qints;
- irq = nix_lf_cq_irq_get_and_clear(dev, cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
- otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
- otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
- otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
- }
-
- /* Handle SQ interrupts */
- for (q = 0; q < eth_dev->data->nb_tx_queues; q++) {
- sq = q % dev->qints;
- irq = nix_lf_sq_irq_get_and_clear(dev, sq);
-
- if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
- otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
- }
- }
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-int
-oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q, sqs, rqs, qs, rc = 0;
-
- /* Figure out max qintx required */
- rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues);
- sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues);
- qs = RTE_MAX(rqs, sqs);
-
- dev->configured_qints = qs;
-
- for (q = 0; q < qs; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
-
- dev->qints_mem[q].eth_dev = eth_dev;
- dev->qints_mem[q].qintx = q;
-
- /* Sync qints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, nix_lf_q_irq,
- &dev->qints_mem[q], vec);
- if (rc)
- break;
-
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
- otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
- /* Enable QINT interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q));
- }
-
- return rc;
-}
-
-void
-oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q;
-
- for (q = 0; q < dev->configured_qints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
- otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, nix_lf_q_irq,
- &dev->qints_mem[q], vec);
- }
-}
-
-int
-oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t rc = 0, vec, q;
-
- dev->configured_cints = RTE_MIN(dev->cints,
- eth_dev->data->nb_rx_queues);
-
- for (q = 0; q < dev->configured_cints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
-
- /* Clear CINT CNT */
- otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
-
- dev->cints_mem[q].eth_dev = eth_dev;
- dev->cints_mem[q].qintx = q;
-
- /* Sync cints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, nix_lf_cq_irq,
- &dev->cints_mem[q], vec);
- if (rc) {
- otx2_err("Fail to register CQ irq, rc=%d", rc);
- return rc;
- }
-
- rc = rte_intr_vec_list_alloc(handle, "intr_vec",
- dev->configured_cints);
- if (rc) {
- otx2_err("Fail to allocate intr vec list, "
- "rc=%d", rc);
- return rc;
- }
- /* VFIO vector zero is reserved for the misc interrupt, so
- * adjust the vector index accordingly. (b13bfab4cd)
- */
- if (rte_intr_vec_list_index_set(handle, q,
- RTE_INTR_VEC_RXTX_OFFSET + vec))
- return -1;
-
- /* Configure CQE interrupt coalescing parameters */
- otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
- (CQ_CQE_THRESH_DEFAULT << 32) |
- (CQ_TIMER_THRESH_DEFAULT << 48)),
- dev->base + NIX_LF_CINTX_WAIT((q)));
-
- /* Keeping the CQ interrupt disabled as the rx interrupt
- * feature needs to be enabled/disabled on demand.
- */
- }
-
- return rc;
-}
-
-void
-oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q;
-
- for (q = 0; q < dev->configured_cints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
-
- /* Clear CINT CNT */
- otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, nix_lf_cq_irq,
- &dev->cints_mem[q], vec);
- }
-}
-
-int
-otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
-
- if (dev->nix_msixoff == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
- dev->nix_msixoff);
- return -EINVAL;
- }
-
- /* Register lf err interrupt */
- rc = nix_lf_register_err_irq(eth_dev);
- /* Register RAS interrupt */
- rc |= nix_lf_register_ras_irq(eth_dev);
-
- return rc;
-}
-
-void
-otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
-{
- nix_lf_unregister_err_irq(eth_dev);
- nix_lf_unregister_ras_irq(eth_dev);
-}
-
-int
-otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Enable CINT interrupt */
- otx2_write64(BIT_ULL(0), dev->base +
- NIX_LF_CINTX_ENA_W1S(rx_queue_id));
-
- return 0;
-}
-
-int
-otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Clear and disable CINT interrupt */
- otx2_write64(BIT_ULL(0), dev->base +
- NIX_LF_CINTX_ENA_W1C(rx_queue_id));
-
- return 0;
-}
-
-void
-otx2_nix_err_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Enable all nix lf error interrupts except
- * RQ_DISABLED and CQ_DISABLED.
- */
- if (enb)
- otx2_write64(~(BIT_ULL(11) | BIT_ULL(24)),
- dev->base + NIX_LF_ERR_INT_ENA_W1S);
- else
- otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
-}
-
-void
-otx2_nix_ras_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (enb)
- otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S);
- else
- otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
deleted file mode 100644
index 48781514c3..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ /dev/null
@@ -1,589 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_ethdev.h>
-#include <rte_mbuf_pool_ops.h>
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
-{
- uint32_t buffsz, frame_size = mtu + NIX_L2_OVERHEAD;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_frs_cfg *req;
- int rc;
-
- if (dev->configured && otx2_ethdev_is_ptp_en(dev))
- frame_size += NIX_TIMESYNC_RX_OFFSET;
-
- buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
-
- /* Refuse MTU that requires the support of scattered packets
- * when this feature has not been enabled before.
- */
- if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
- return -EINVAL;
-
- /* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
- (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
- return -EINVAL;
-
- req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
- req->update_smq = true;
- if (otx2_dev_is_sdp(dev))
- req->sdp_link = true;
- /* FRS HW config should exclude FCS but include NPC VTAG insert size */
- req->maxlen = frame_size - RTE_ETHER_CRC_LEN + NIX_MAX_VTAG_ACT_SIZE;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- /* Now just update Rx MAXLEN */
- req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
- req->maxlen = frame_size - RTE_ETHER_CRC_LEN;
- if (otx2_dev_is_sdp(dev))
- req->sdp_link = true;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- return rc;
-}
-
-int
-otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_rxq *rxq;
- int rc;
-
- rxq = data->rx_queues[0];
-
- /* Setup scatter mode if needed by jumbo */
- otx2_nix_enable_mseg_on_jumbo(rxq);
-
- rc = otx2_nix_mtu_set(eth_dev, data->mtu);
- if (rc)
- otx2_err("Failed to set default MTU size %d", rc);
-
- return rc;
-}
-
-static void
-nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return;
-
- if (en)
- otx2_mbox_alloc_msg_cgx_promisc_enable(mbox);
- else
- otx2_mbox_alloc_msg_cgx_promisc_disable(mbox);
-
- otx2_mbox_process(mbox);
-}
-
-void
-otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rx_mode *req;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
-
- if (en)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
-
- otx2_mbox_process(mbox);
- eth_dev->data->promiscuous = en;
- otx2_nix_vlan_update_promisc(eth_dev, en);
-}
-
-int
-otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev)
-{
- otx2_nix_promisc_config(eth_dev, 1);
- nix_cgx_promisc_config(eth_dev, 1);
-
- return 0;
-}
-
-int
-otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- otx2_nix_promisc_config(eth_dev, dev->dmac_filter_enable);
- nix_cgx_promisc_config(eth_dev, 0);
- dev->dmac_filter_enable = false;
-
- return 0;
-}
-
-static void
-nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rx_mode *req;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
-
- if (en)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI;
- else if (eth_dev->data->promiscuous)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
-
- otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev)
-{
- nix_allmulticast_config(eth_dev, 1);
-
- return 0;
-}
-
-int
-otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
-{
- nix_allmulticast_config(eth_dev, 0);
-
- return 0;
-}
-
-void
-otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_rxq_info *qinfo)
-{
- struct otx2_eth_rxq *rxq;
-
- rxq = eth_dev->data->rx_queues[queue_id];
-
- qinfo->mp = rxq->pool;
- qinfo->scattered_rx = eth_dev->data->scattered_rx;
- qinfo->nb_desc = rxq->qconf.nb_desc;
-
- qinfo->conf.rx_free_thresh = 0;
- qinfo->conf.rx_drop_en = 0;
- qinfo->conf.rx_deferred_start = 0;
- qinfo->conf.offloads = rxq->offloads;
-}
-
-void
-otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_txq_info *qinfo)
-{
- struct otx2_eth_txq *txq;
-
- txq = eth_dev->data->tx_queues[queue_id];
-
- qinfo->nb_desc = txq->qconf.nb_desc;
-
- qinfo->conf.tx_thresh.pthresh = 0;
- qinfo->conf.tx_thresh.hthresh = 0;
- qinfo->conf.tx_thresh.wthresh = 0;
-
- qinfo->conf.tx_free_thresh = 0;
- qinfo->conf.tx_rs_thresh = 0;
- qinfo->conf.offloads = txq->offloads;
- qinfo->conf.tx_deferred_start = 0;
-}
-
-int
-otx2_rx_burst_mode_get(struct rte_eth_dev *eth_dev,
- __rte_unused uint16_t queue_id,
- struct rte_eth_burst_mode *mode)
-{
- ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct burst_info {
- uint16_t flags;
- const char *output;
- } rx_offload_map[] = {
- {NIX_RX_OFFLOAD_RSS_F, " RSS,"},
- {NIX_RX_OFFLOAD_PTYPE_F, " Ptype,"},
- {NIX_RX_OFFLOAD_CHECKSUM_F, " Checksum,"},
- {NIX_RX_OFFLOAD_VLAN_STRIP_F, " VLAN Strip,"},
- {NIX_RX_OFFLOAD_MARK_UPDATE_F, " Mark Update,"},
- {NIX_RX_OFFLOAD_TSTAMP_F, " Timestamp,"},
- {NIX_RX_MULTI_SEG_F, " Scattered,"}
- };
- static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
- "Scalar, Rx Offloads:"
- };
- uint32_t i;
-
- /* Update burst mode info */
- rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
-
- /* Update Rx offload info */
- for (i = 0; i < RTE_DIM(rx_offload_map); i++) {
- if (dev->rx_offload_flags & rx_offload_map[i].flags) {
- rc = rte_strscpy(mode->info + bytes,
- rx_offload_map[i].output,
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
- }
- }
-
-done:
- return 0;
-}
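
The accumulation above depends on rte_strscpy() returning the number of bytes copied on success and a negative errno (-E2BIG) on truncation, so each bounded copy either advances the write cursor or abandons the rest of the string. A standalone plain-libc stand-in for that contract, with a hypothetical buffer and fragments:

    #include <stdio.h>
    #include <string.h>

    /* Stand-in for the rte_strscpy() append loop above. */
    static long bounded_copy(char *dst, const char *src, size_t space)
    {
            size_t len = strlen(src);

            if (len + 1 > space)
                    return -1; /* rte_strscpy() would return -E2BIG here */
            memcpy(dst, src, len + 1);
            return (long)len;
    }

    int main(void)
    {
            char info[64];
            const char *parts[] = {"Scalar, Rx Offloads:", " RSS,", " Checksum,"};
            size_t bytes = 0;
            unsigned int i;
            long rc;

            for (i = 0; i < 3; i++) {
                    rc = bounded_copy(info + bytes, parts[i], sizeof(info) - bytes);
                    if (rc < 0)
                            break; /* keep whatever fit, as the driver does */
                    bytes += (size_t)rc;
            }
            printf("%s\n", info);
            return 0;
    }
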
-
-int
-otx2_tx_burst_mode_get(struct rte_eth_dev *eth_dev,
- __rte_unused uint16_t queue_id,
- struct rte_eth_burst_mode *mode)
-{
- ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct burst_info {
- uint16_t flags;
- const char *output;
- } tx_offload_map[] = {
- {NIX_TX_OFFLOAD_L3_L4_CSUM_F, " Inner L3/L4 csum,"},
- {NIX_TX_OFFLOAD_OL3_OL4_CSUM_F, " Outer L3/L4 csum,"},
- {NIX_TX_OFFLOAD_VLAN_QINQ_F, " VLAN Insertion,"},
- {NIX_TX_OFFLOAD_MBUF_NOFF_F, " MBUF free disable,"},
- {NIX_TX_OFFLOAD_TSTAMP_F, " Timestamp,"},
- {NIX_TX_OFFLOAD_TSO_F, " TSO,"},
- {NIX_TX_MULTI_SEG_F, " Scattered,"}
- };
- static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
- "Scalar, Tx Offloads:"
- };
- uint32_t i;
-
- /* Update burst mode info */
- rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
-
- /* Update Tx offload info */
- for (i = 0; i < RTE_DIM(tx_offload_map); i++) {
- if (dev->tx_offload_flags & tx_offload_map[i].flags) {
- rc = rte_strscpy(mode->info + bytes,
- tx_offload_map[i].output,
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
- }
- }
-
-done:
- return 0;
-}
-
-static void
-nix_rx_head_tail_get(struct otx2_eth_dev *dev,
- uint32_t *head, uint32_t *tail, uint16_t queue_idx)
-{
- uint64_t reg, val;
-
- if (head == NULL || tail == NULL)
- return;
-
- reg = (((uint64_t)queue_idx) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)
- (dev->base + NIX_LF_CQ_OP_STATUS));
- if (val & (OP_ERR | CQ_ERR))
- val = 0;
-
- *tail = (uint32_t)(val & 0xFFFFF);
- *head = (uint32_t)((val >> 20) & 0xFFFFF);
-}
-
-uint32_t
-otx2_nix_rx_queue_count(void *rx_queue)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
- uint32_t head, tail;
-
- nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
- return (tail - head) % rxq->qlen;
-}
-
-static inline int
-nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
-{
- /* Check whether the given offset (queue index) holds a packet filled by HW */
- if (tail > head && offset <= tail && offset >= head)
- return 1;
- /* Wrap around case */
- if (head > tail && (offset >= head || offset <= tail))
- return 1;
-
- return 0;
-}
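
Together with the (tail - head) % qlen count above, this predicate defines exactly which CQ offsets hold completed packets, in both the linear case (tail ahead of head) and the wrapped case. A self-contained restatement with made-up head/tail values for a ring of depth 1024:

    #include <assert.h>

    /* Restates nix_offset_has_packet() outside the driver. */
    static int offset_has_packet(unsigned int head, unsigned int tail,
                                 unsigned int offset)
    {
            if (tail > head && offset <= tail && offset >= head)
                    return 1; /* linear window */
            if (head > tail && (offset >= head || offset <= tail))
                    return 1; /* wrapped window */
            return 0;
    }

    int main(void)
    {
            assert(offset_has_packet(10, 50, 30));    /* inside linear window */
            assert(!offset_has_packet(10, 50, 60));   /* past tail */
            assert(offset_has_packet(900, 20, 1000)); /* wrapped, high side */
            assert(offset_has_packet(900, 20, 5));    /* wrapped, low side */
            assert(!offset_has_packet(900, 20, 500)); /* in the hole */
            return 0;
    }
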
-
-int
-otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- uint32_t head, tail;
-
- if (rxq->qlen <= offset)
- return -EINVAL;
-
- nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
- &head, &tail, rxq->rq);
-
- if (nix_offset_has_packet(head, tail, offset))
- return RTE_ETH_RX_DESC_DONE;
- else
- return RTE_ETH_RX_DESC_AVAIL;
-}
-
-static void
-nix_tx_head_tail_get(struct otx2_eth_dev *dev,
- uint32_t *head, uint32_t *tail, uint16_t queue_idx)
-{
- uint64_t reg, val;
-
- if (head == NULL || tail == NULL)
- return;
-
- reg = (((uint64_t)queue_idx) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)
- (dev->base + NIX_LF_SQ_OP_STATUS));
- if (val & OP_ERR)
- val = 0;
-
- *tail = (uint32_t)((val >> 28) & 0x3F);
- *head = (uint32_t)((val >> 20) & 0x3F);
-}
-
-int
-otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset)
-{
- struct otx2_eth_txq *txq = tx_queue;
- uint32_t head, tail;
-
- if (txq->qconf.nb_desc <= offset)
- return -EINVAL;
-
- nix_tx_head_tail_get(txq->dev, &head, &tail, txq->sq);
-
- if (nix_offset_has_packet(head, tail, offset))
- return RTE_ETH_TX_DESC_DONE;
- else
- return RTE_ETH_TX_DESC_FULL;
-}
-
-/* It is a NOP for octeontx2 as HW frees the buffer on xmit */
-int
-otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
-{
- RTE_SET_USED(txq);
- RTE_SET_USED(free_cnt);
-
- return 0;
-}
-
-int
-otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
- size_t fw_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc = (int)fw_size;
-
- if (fw_size > sizeof(dev->mkex_pfl_name))
- rc = sizeof(dev->mkex_pfl_name);
-
- rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
-
- rc += 1; /* Add the size of '\0' */
- if (fw_size < (size_t)rc)
- return rc;
-
- return 0;
-}
-
-int
-otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
-{
- RTE_SET_USED(eth_dev);
-
- if (!strcmp(pool, rte_mbuf_platform_mempool_ops()))
- return 0;
-
- return -ENOTSUP;
-}
-
-int
-otx2_nix_dev_flow_ops_get(struct rte_eth_dev *eth_dev __rte_unused,
- const struct rte_flow_ops **ops)
-{
- *ops = &otx2_flow_ops;
- return 0;
-}
-
-static struct cgx_fw_data *
-nix_get_fwdata(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_fw_data *rsp = NULL;
- int rc;
-
- otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get fw data: %d", rc);
- return NULL;
- }
-
- return rsp;
-}
-
-int
-otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_module_info *modinfo)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_fw_data *rsp;
-
- rsp = nix_get_fwdata(dev);
- if (rsp == NULL)
- return -EIO;
-
- modinfo->type = rsp->fwdata.sfp_eeprom.sff_id;
- modinfo->eeprom_len = SFP_EEPROM_SIZE;
-
- return 0;
-}
-
-int
-otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
- struct rte_dev_eeprom_info *info)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_fw_data *rsp;
-
- if (info->offset + info->length > SFP_EEPROM_SIZE)
- return -EINVAL;
-
- rsp = nix_get_fwdata(dev);
- if (rsp == NULL)
- return -EIO;
-
- otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset,
- info->length);
-
- return 0;
-}
-
-int
-otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- devinfo->min_rx_bufsize = NIX_MIN_FRS;
- devinfo->max_rx_pktlen = NIX_MAX_FRS;
- devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT;
- devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
- devinfo->max_mac_addrs = dev->max_mac_entries;
- devinfo->max_vfs = pci_dev->max_vfs;
- devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_L2_OVERHEAD;
- devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_L2_OVERHEAD;
- if (dev->configured && otx2_ethdev_is_ptp_en(dev)) {
- devinfo->max_mtu -= NIX_TIMESYNC_RX_OFFSET;
- devinfo->min_mtu -= NIX_TIMESYNC_RX_OFFSET;
- devinfo->max_rx_pktlen -= NIX_TIMESYNC_RX_OFFSET;
- }
-
- devinfo->rx_offload_capa = dev->rx_offload_capa;
- devinfo->tx_offload_capa = dev->tx_offload_capa;
- devinfo->rx_queue_offload_capa = 0;
- devinfo->tx_queue_offload_capa = 0;
-
- devinfo->reta_size = dev->rss_info.rss_size;
- devinfo->hash_key_size = NIX_HASH_KEY_SIZE;
- devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD;
-
- devinfo->default_rxconf = (struct rte_eth_rxconf) {
- .rx_drop_en = 0,
- .offloads = 0,
- };
-
- devinfo->default_txconf = (struct rte_eth_txconf) {
- .offloads = 0,
- };
-
- devinfo->default_rxportconf = (struct rte_eth_dev_portconf) {
- .ring_size = NIX_RX_DEFAULT_RING_SZ,
- };
-
- devinfo->rx_desc_lim = (struct rte_eth_desc_lim) {
- .nb_max = UINT16_MAX,
- .nb_min = NIX_RX_MIN_DESC,
- .nb_align = NIX_RX_MIN_DESC_ALIGN,
- .nb_seg_max = NIX_RX_NB_SEG_MAX,
- .nb_mtu_seg_max = NIX_RX_NB_SEG_MAX,
- };
- devinfo->rx_desc_lim.nb_max =
- RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max,
- NIX_RX_MIN_DESC_ALIGN);
-
- devinfo->tx_desc_lim = (struct rte_eth_desc_lim) {
- .nb_max = UINT16_MAX,
- .nb_min = 1,
- .nb_align = 1,
- .nb_seg_max = NIX_TX_NB_SEG_MAX,
- .nb_mtu_seg_max = NIX_TX_NB_SEG_MAX,
- };
-
- /* Auto negotiation disabled */
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
- if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
- RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
-
- /* 50G and 100G are supported on board version C0
- * and above.
- */
- if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
- RTE_ETH_LINK_SPEED_100G;
- }
-
- devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
- RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
- devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
deleted file mode 100644
index 4d40184de4..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ /dev/null
@@ -1,923 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_esp.h>
-#include <rte_ethdev.h>
-#include <rte_eventdev.h>
-#include <rte_ip.h>
-#include <rte_malloc.h>
-#include <rte_memzone.h>
-#include <rte_security.h>
-#include <rte_security_driver.h>
-#include <rte_udp.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev_qp.h"
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_ipsec_fp.h"
-#include "otx2_sec_idev.h"
-#include "otx2_security.h"
-
-#define ERR_STR_SZ 256
-
-struct eth_sec_tag_const {
- RTE_STD_C11
- union {
- struct {
- uint32_t rsvd_11_0 : 12;
- uint32_t port : 8;
- uint32_t event_type : 4;
- uint32_t rsvd_31_24 : 8;
- };
- uint32_t u32;
- };
-};
-
-static struct rte_cryptodev_capabilities otx2_eth_sec_crypto_caps[] = {
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 8,
- .max = 12,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 20,
- .max = 64,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- },
- }, }
- }, }
- },
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability otx2_eth_sec_capabilities[] = {
- { /* IPsec Inline Protocol ESP Tunnel Ingress */
- .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_eth_sec_crypto_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- { /* IPsec Inline Protocol ESP Tunnel Egress */
- .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_eth_sec_crypto_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- {
- .action = RTE_SECURITY_ACTION_TYPE_NONE
- }
-};
-
-static void
-lookup_mem_sa_tbl_clear(struct rte_eth_dev *eth_dev)
-{
- static const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- uint16_t port = eth_dev->data->port_id;
- const struct rte_memzone *mz;
- uint64_t **sa_tbl;
- uint8_t *mem;
-
- mz = rte_memzone_lookup(name);
- if (mz == NULL)
- return;
-
- mem = mz->addr;
-
- sa_tbl = (uint64_t **)RTE_PTR_ADD(mem, OTX2_NIX_SA_TBL_START);
- if (sa_tbl[port] == NULL)
- return;
-
- rte_free(sa_tbl[port]);
- sa_tbl[port] = NULL;
-}
-
-static int
-lookup_mem_sa_index_update(struct rte_eth_dev *eth_dev, int spi, void *sa,
- char *err_str)
-{
- static const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- const struct rte_memzone *mz;
- uint64_t **sa_tbl;
- uint8_t *mem;
-
- mz = rte_memzone_lookup(name);
- if (mz == NULL) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not find fastpath lookup table");
- return -EINVAL;
- }
-
- mem = mz->addr;
-
- sa_tbl = (uint64_t **)RTE_PTR_ADD(mem, OTX2_NIX_SA_TBL_START);
-
- if (sa_tbl[port] == NULL) {
- sa_tbl[port] = rte_malloc(NULL, dev->ipsec_in_max_spi *
- sizeof(uint64_t), 0);
- }
-
- sa_tbl[port][spi] = (uint64_t)sa;
-
- return 0;
-}
-
-static inline void
-in_sa_mz_name_get(char *name, int size, uint16_t port)
-{
- snprintf(name, size, "otx2_ipsec_in_sadb_%u", port);
-}
-
-static struct otx2_ipsec_fp_in_sa *
-in_sa_get(uint16_t port, int sa_index)
-{
- char name[RTE_MEMZONE_NAMESIZE];
- struct otx2_ipsec_fp_in_sa *sa;
- const struct rte_memzone *mz;
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_lookup(name);
- if (mz == NULL) {
- otx2_err("Could not get the memzone reserved for IN SA DB");
- return NULL;
- }
-
- sa = mz->addr;
-
- return sa + sa_index;
-}
-
-static int
-ipsec_sa_const_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_sec_session_ipsec_ip *sess)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
-
- sess->partial_len = sizeof(struct rte_ipv4_hdr);
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
- sess->partial_len += sizeof(struct rte_esp_hdr);
- sess->roundup_len = sizeof(struct rte_esp_tail);
- } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) {
- sess->partial_len += OTX2_SEC_AH_HDR_LEN;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->options.udp_encap)
- sess->partial_len += sizeof(struct rte_udp_hdr);
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- sess->partial_len += OTX2_SEC_AES_GCM_IV_LEN;
- sess->partial_len += OTX2_SEC_AES_GCM_MAC_LEN;
- sess->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN;
- }
- return 0;
- }
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
- if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- sess->partial_len += OTX2_SEC_AES_CBC_IV_LEN;
- sess->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN;
- } else {
- return -EINVAL;
- }
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- sess->partial_len += OTX2_SEC_SHA1_HMAC_LEN;
- else
- return -EINVAL;
-
- return 0;
-}
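
Here partial_len is the fixed per-packet overhead (outer IP, ESP header, IV, ICV) and roundup_len/roundup_byte describe the ESP trailer padding; the Tx path later turns a payload length into a wire length from these three constants (see otx2_ipsec_fp_out_rlen_get() in otx2_ethdev_sec_tx.h). A worked example, assuming conventional AES-GCM sizes (8-byte IV, 16-byte ICV, 4-byte pad granularity) rather than quoting the driver's macros:

    #include <assert.h>

    int main(void)
    {
            /* Hypothetical IPv4 ESP tunnel session with AES-GCM. */
            unsigned int partial_len = 20 /* outer IPv4 */ + 8 /* ESP hdr */
                                     + 8 /* IV */ + 16 /* ICV */;
            unsigned int roundup_len = 2;  /* sizeof(struct rte_esp_tail) */
            unsigned int roundup_byte = 4; /* GCM pad granularity */
            unsigned int plen = 100;       /* inner packet length */
            unsigned int enc = ((plen + roundup_len + roundup_byte - 1)
                                / roundup_byte) * roundup_byte;

            assert(partial_len == 52);
            assert(partial_len + enc == 156); /* resulting wire length */
            return 0;
    }
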
-
-static int
-hmac_init(struct otx2_ipsec_fp_sa_ctl *ctl, struct otx2_cpt_qp *qp,
- const uint8_t *auth_key, int len, uint8_t *hmac_key)
-{
- struct inst_data {
- struct otx2_cpt_res cpt_res;
- uint8_t buffer[64];
- } *md;
-
- volatile struct otx2_cpt_res *res;
- uint64_t timeout, lmt_status;
- struct otx2_cpt_inst_s inst;
- rte_iova_t md_iova;
- int ret;
-
- memset(&inst, 0, sizeof(struct otx2_cpt_inst_s));
-
- md = rte_zmalloc(NULL, sizeof(struct inst_data), OTX2_CPT_RES_ALIGN);
- if (md == NULL)
- return -ENOMEM;
-
- memcpy(md->buffer, auth_key, len);
-
- md_iova = rte_malloc_virt2iova(md);
- if (md_iova == RTE_BAD_IOVA) {
- ret = -EINVAL;
- goto free_md;
- }
-
- inst.res_addr = md_iova + offsetof(struct inst_data, cpt_res);
- inst.opcode = OTX2_CPT_OP_WRITE_HMAC_IPAD_OPAD;
- inst.param2 = ctl->auth_type;
- inst.dlen = len;
- inst.dptr = md_iova + offsetof(struct inst_data, buffer);
- inst.rptr = inst.dptr;
- inst.egrp = OTX2_CPT_EGRP_INLINE_IPSEC;
-
- md->cpt_res.compcode = 0;
- md->cpt_res.uc_compcode = 0xff;
-
- timeout = rte_get_timer_cycles() + 5 * rte_get_timer_hz();
-
- rte_io_wmb();
-
- do {
- otx2_lmt_mov(qp->lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- res = (volatile struct otx2_cpt_res *)&md->cpt_res;
-
- /* Wait until instruction completes or times out */
- while (res->uc_compcode == 0xff) {
- if (rte_get_timer_cycles() > timeout)
- break;
- }
-
- if (res->u16[0] != OTX2_SEC_COMP_GOOD) {
- ret = -EIO;
- goto free_md;
- }
-
- /* Retrieve the ipad and opad from rptr */
- memcpy(hmac_key, md->buffer, 48);
-
- ret = 0;
-
-free_md:
- rte_free(md);
- return ret;
-}
-
-static int
-eth_sec_ipsec_out_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_sec_session_ipsec_ip *sess;
- uint16_t port = eth_dev->data->port_id;
- int cipher_key_len, auth_key_len, ret;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_ipsec_fp_sa_ctl *ctl;
- struct otx2_ipsec_fp_out_sa *sa;
- struct otx2_sec_session *priv;
- struct otx2_cpt_inst_s inst;
- struct otx2_cpt_qp *qp;
-
- priv = get_sec_session_private_data(sec_sess);
- priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
- sess = &priv->ipsec.ip;
-
- sa = &sess->out_sa;
- ctl = &sa->ctl;
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sess, 0, sizeof(struct otx2_sec_session_ipsec_ip));
-
- sess->seq = 1;
-
- ret = ipsec_sa_const_set(ipsec, crypto_xform, sess);
- if (ret < 0)
- return ret;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- memcpy(sa->nonce, &ipsec->salt, 4);
-
- if (ipsec->options.udp_encap == 1) {
- sa->udp_src = 4500;
- sa->udp_dst = 4500;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- /* Start ip id from 1 */
- sess->ip_id = 1;
-
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- memcpy(&sa->ip_src, &ipsec->tunnel.ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&sa->ip_dst, &ipsec->tunnel.ipv4.dst_ip,
- sizeof(struct in_addr));
- } else {
- return -EINVAL;
- }
- } else {
- return -EINVAL;
- }
-
- cipher_xform = crypto_xform;
- auth_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
- auth_key = NULL;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- /* Determine word 7 of CPT instruction */
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_INLINE_IPSEC;
- inst.cptr = rte_mempool_virt2iova(sa);
- sess->inst_w7 = inst.u64[7];
-
- /* Get CPT QP to be used for this SA */
- ret = otx2_sec_idev_tx_cpt_qp_get(port, &qp);
- if (ret)
- return ret;
-
- sess->qp = qp;
-
- sess->cpt_lmtline = qp->lmtline;
- sess->cpt_nq_reg = qp->lf_nq_reg;
-
- /* Populate control word */
- ret = ipsec_fp_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- goto cpt_put;
-
- if (auth_key_len && auth_key) {
- ret = hmac_init(ctl, qp, auth_key, auth_key_len, sa->hmac_key);
- if (ret)
- goto cpt_put;
- }
-
- rte_io_wmb();
- ctl->valid = 1;
-
- return 0;
-cpt_put:
- otx2_sec_idev_tx_cpt_qp_put(sess->qp);
- return ret;
-}
-
-static int
-eth_sec_ipsec_in_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_sec_session_ipsec_ip *sess;
- uint16_t port = eth_dev->data->port_id;
- int cipher_key_len, auth_key_len, ret;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_ipsec_fp_sa_ctl *ctl;
- struct otx2_ipsec_fp_in_sa *sa;
- struct otx2_sec_session *priv;
- char err_str[ERR_STR_SZ];
- struct otx2_cpt_qp *qp;
-
- memset(err_str, 0, ERR_STR_SZ);
-
- if (ipsec->spi >= dev->ipsec_in_max_spi) {
- otx2_err("SPI exceeds max supported");
- return -EINVAL;
- }
-
- sa = in_sa_get(port, ipsec->spi);
- if (sa == NULL)
- return -ENOMEM;
-
- ctl = &sa->ctl;
-
- priv = get_sec_session_private_data(sec_sess);
- priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
- sess = &priv->ipsec.ip;
-
- rte_spinlock_lock(&dev->ipsec_tbl_lock);
-
- if (ctl->valid) {
- snprintf(err_str, ERR_STR_SZ, "SA already registered");
- ret = -EEXIST;
- goto tbl_unlock;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_fp_in_sa));
-
- auth_xform = crypto_xform;
- cipher_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
- auth_key = NULL;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
- }
-
- if (cipher_key_len != 0) {
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- } else {
- snprintf(err_str, ERR_STR_SZ, "Invalid cipher key len");
- ret = -EINVAL;
- goto sa_clear;
- }
-
- sess->in_sa = sa;
-
- sa->userdata = priv->userdata;
-
- sa->replay_win_sz = ipsec->replay_win_sz;
-
- if (lookup_mem_sa_index_update(eth_dev, ipsec->spi, sa, err_str)) {
- ret = -EINVAL;
- goto sa_clear;
- }
-
- ret = ipsec_fp_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not set SA CTL word (err: %d)", ret);
- goto sa_clear;
- }
-
- if (auth_key_len && auth_key) {
- /* Get a queue pair for HMAC init */
- ret = otx2_sec_idev_tx_cpt_qp_get(port, &qp);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ, "Could not get CPT QP");
- goto sa_clear;
- }
-
- ret = hmac_init(ctl, qp, auth_key, auth_key_len, sa->hmac_key);
- otx2_sec_idev_tx_cpt_qp_put(qp);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ, "Could not put CPT QP");
- goto sa_clear;
- }
- }
-
- if (sa->replay_win_sz) {
- if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) {
- snprintf(err_str, ERR_STR_SZ,
- "Replay window size is not supported");
- ret = -ENOTSUP;
- goto sa_clear;
- }
- sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay),
- 0);
- if (sa->replay == NULL) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not allocate memory");
- ret = -ENOMEM;
- goto sa_clear;
- }
-
- rte_spinlock_init(&sa->replay->lock);
- /*
- * Set the window bottom to 1, and the base and top to
- * the window size.
- */
- sa->replay->winb = 1;
- sa->replay->wint = sa->replay_win_sz;
- sa->replay->base = sa->replay_win_sz;
- sa->esn_low = 0;
- sa->esn_hi = 0;
- }
-
- rte_io_wmb();
- ctl->valid = 1;
-
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
- return 0;
-
-sa_clear:
- memset(sa, 0, sizeof(struct otx2_ipsec_fp_in_sa));
-
-tbl_unlock:
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
-
- otx2_err("%s", err_str);
-
- return ret;
-}
-
-static int
-eth_sec_ipsec_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sess)
-{
- int ret;
-
- ret = ipsec_fp_xform_verify(ipsec, crypto_xform);
- if (ret)
- return ret;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
- return eth_sec_ipsec_in_sess_create(eth_dev, ipsec,
- crypto_xform, sess);
- else
- return eth_sec_ipsec_out_sess_create(eth_dev, ipsec,
- crypto_xform, sess);
-}
-
-static int
-otx2_eth_sec_session_create(void *device,
- struct rte_security_session_conf *conf,
- struct rte_security_session *sess,
- struct rte_mempool *mempool)
-{
- struct otx2_sec_session *priv;
- int ret;
-
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
- return -ENOTSUP;
-
- if (rte_mempool_get(mempool, (void **)&priv)) {
- otx2_err("Could not allocate security session private data");
- return -ENOMEM;
- }
-
- set_sec_session_private_data(sess, priv);
-
- /*
- * Save userdata provided by the application. For ingress packets, this
- * could be used to identify the SA.
- */
- priv->userdata = conf->userdata;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
- ret = eth_sec_ipsec_sess_create(device, &conf->ipsec,
- conf->crypto_xform,
- sess);
- else
- ret = -ENOTSUP;
-
- if (ret)
- goto mempool_put;
-
- return 0;
-
-mempool_put:
- rte_mempool_put(mempool, priv);
- set_sec_session_private_data(sess, NULL);
- return ret;
-}
-
-static void
-otx2_eth_sec_free_anti_replay(struct otx2_ipsec_fp_in_sa *sa)
-{
- if (sa != NULL) {
- if (sa->replay_win_sz && sa->replay)
- rte_free(sa->replay);
- }
-}
-
-static int
-otx2_eth_sec_session_destroy(void *device,
- struct rte_security_session *sess)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(device);
- struct otx2_sec_session_ipsec_ip *sess_ip;
- struct otx2_ipsec_fp_in_sa *sa;
- struct otx2_sec_session *priv;
- struct rte_mempool *sess_mp;
- int ret;
-
- priv = get_sec_session_private_data(sess);
- if (priv == NULL)
- return -EINVAL;
-
- sess_ip = &priv->ipsec.ip;
-
- if (priv->ipsec.dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- rte_spinlock_lock(&dev->ipsec_tbl_lock);
- sa = sess_ip->in_sa;
-
- /* Release the anti replay window */
- otx2_eth_sec_free_anti_replay(sa);
-
- /* Clear SA table entry */
- if (sa != NULL) {
- sa->ctl.valid = 0;
- rte_io_wmb();
- }
-
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
- }
-
- /* Release CPT LF used for this session */
- if (sess_ip->qp != NULL) {
- ret = otx2_sec_idev_tx_cpt_qp_put(sess_ip->qp);
- if (ret)
- return ret;
- }
-
- sess_mp = rte_mempool_from_obj(priv);
-
- set_sec_session_private_data(sess, NULL);
- rte_mempool_put(sess_mp, priv);
-
- return 0;
-}
-
-static unsigned int
-otx2_eth_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct otx2_sec_session);
-}
-
-static const struct rte_security_capability *
-otx2_eth_sec_capabilities_get(void *device __rte_unused)
-{
- return otx2_eth_sec_capabilities;
-}
-
-static struct rte_security_ops otx2_eth_sec_ops = {
- .session_create = otx2_eth_sec_session_create,
- .session_destroy = otx2_eth_sec_session_destroy,
- .session_get_size = otx2_eth_sec_session_get_size,
- .capabilities_get = otx2_eth_sec_capabilities_get
-};
-
-int
-otx2_eth_sec_ctx_create(struct rte_eth_dev *eth_dev)
-{
- struct rte_security_ctx *ctx;
- int ret;
-
- ctx = rte_malloc("otx2_eth_sec_ctx",
- sizeof(struct rte_security_ctx), 0);
- if (ctx == NULL)
- return -ENOMEM;
-
- ret = otx2_sec_idev_cfg_init(eth_dev->data->port_id);
- if (ret) {
- rte_free(ctx);
- return ret;
- }
-
- /* Populate ctx */
-
- ctx->device = eth_dev;
- ctx->ops = &otx2_eth_sec_ops;
- ctx->sess_cnt = 0;
- ctx->flags =
- (RTE_SEC_CTX_F_FAST_SET_MDATA | RTE_SEC_CTX_F_FAST_GET_UDATA);
-
- eth_dev->security_ctx = ctx;
-
- return 0;
-}
-
-void
-otx2_eth_sec_ctx_destroy(struct rte_eth_dev *eth_dev)
-{
- rte_free(eth_dev->security_ctx);
-}
-
-static int
-eth_sec_ipsec_cfg(struct rte_eth_dev *eth_dev, uint8_t tt)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- struct nix_inline_ipsec_lf_cfg *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct eth_sec_tag_const tag_const;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_lookup(name);
- if (mz == NULL)
- return -EINVAL;
-
- req = otx2_mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox);
- req->enable = 1;
- req->sa_base_addr = mz->iova;
-
- req->ipsec_cfg0.tt = tt;
-
- tag_const.u32 = 0;
- tag_const.event_type = RTE_EVENT_TYPE_ETHDEV;
- tag_const.port = port;
- req->ipsec_cfg0.tag_const = tag_const.u32;
-
- req->ipsec_cfg0.sa_pow2_size =
- rte_log2_u32(sizeof(struct otx2_ipsec_fp_in_sa));
- req->ipsec_cfg0.lenm1_max = NIX_MAX_FRS - 1;
-
- req->ipsec_cfg1.sa_idx_w = rte_log2_u32(dev->ipsec_in_max_spi);
- req->ipsec_cfg1.sa_idx_max = dev->ipsec_in_max_spi - 1;
-
- return otx2_mbox_process(mbox);
-}
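
Both sa_idx_w and sa_pow2_size are log2-encoded, which is why ipsec_in_max_spi and the SA entry size must be powers of two (the RTE_BUILD_BUG_ON in otx2_eth_sec_init() below enforces the latter). A small illustration with a hypothetical SPI capacity, using a floor-log2 loop in place of rte_log2_u32():

    #include <assert.h>
    #include <stdint.h>

    /* Floor log2; exact for the power-of-two capacities used here. */
    static uint32_t ilog2(uint32_t v)
    {
            uint32_t r = 0;

            while (v >>= 1)
                    r++;
            return r;
    }

    int main(void)
    {
            uint32_t ipsec_in_max_spi = 128; /* hypothetical capacity */

            assert(ilog2(ipsec_in_max_spi) == 7); /* sa_idx_w */
            assert(ipsec_in_max_spi - 1 == 127);  /* sa_idx_max */
            return 0;
    }
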
-
-int
-otx2_eth_sec_update_tag_type(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- int ret;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = 0; /* Read RQ:0 context */
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret < 0) {
- otx2_err("Could not read RQ context");
- return ret;
- }
-
- /* Update tag type */
- ret = eth_sec_ipsec_cfg(eth_dev, rsp->rq.sso_tt);
- if (ret < 0)
- otx2_err("Could not update sec eth tag type");
-
- return ret;
-}
-
-int
-otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
-{
- const size_t sa_width = sizeof(struct otx2_ipsec_fp_in_sa);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int mz_sz, ret;
- uint16_t nb_sa;
-
- RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
- !RTE_IS_POWER_OF_2(sa_width));
-
- if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
- return 0;
-
- if (rte_security_dynfield_register() < 0)
- return -rte_errno;
-
- nb_sa = dev->ipsec_in_max_spi;
- mz_sz = nb_sa * sa_width;
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_reserve_aligned(name, mz_sz, rte_socket_id(),
- RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
-
- if (mz == NULL) {
- otx2_err("Could not allocate inbound SA DB");
- return -ENOMEM;
- }
-
- memset(mz->addr, 0, mz_sz);
-
- ret = eth_sec_ipsec_cfg(eth_dev, SSO_TT_ORDERED);
- if (ret < 0) {
- otx2_err("Could not configure inline IPsec");
- goto sec_fini;
- }
-
- rte_spinlock_init(&dev->ipsec_tbl_lock);
-
- return 0;
-
-sec_fini:
- otx2_err("Could not configure device for security");
- otx2_eth_sec_fini(eth_dev);
- return ret;
-}
-
-void
-otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- char name[RTE_MEMZONE_NAMESIZE];
-
- if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
- return;
-
- lookup_mem_sa_tbl_clear(eth_dev);
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- rte_memzone_free(rte_memzone_lookup(name));
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.h b/drivers/net/octeontx2/otx2_ethdev_sec.h
deleted file mode 100644
index 298b00bf89..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec.h
+++ /dev/null
@@ -1,130 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_SEC_H__
-#define __OTX2_ETHDEV_SEC_H__
-
-#include <rte_ethdev.h>
-
-#include "otx2_ipsec_fp.h"
-#include "otx2_ipsec_po.h"
-
-#define OTX2_CPT_RES_ALIGN 16
-#define OTX2_NIX_SEND_DESC_ALIGN 16
-#define OTX2_CPT_INST_SIZE 64
-
-#define OTX2_CPT_EGRP_INLINE_IPSEC 1
-
-#define OTX2_CPT_OP_INLINE_IPSEC_OUTB (0x40 | 0x25)
-#define OTX2_CPT_OP_INLINE_IPSEC_INB (0x40 | 0x26)
-#define OTX2_CPT_OP_WRITE_HMAC_IPAD_OPAD (0x40 | 0x27)
-
-#define OTX2_SEC_CPT_COMP_GOOD 0x1
-#define OTX2_SEC_UC_COMP_GOOD 0x0
-#define OTX2_SEC_COMP_GOOD (OTX2_SEC_UC_COMP_GOOD << 8 | \
- OTX2_SEC_CPT_COMP_GOOD)
-
-/* CPT Result */
-struct otx2_cpt_res {
- union {
- struct {
- uint64_t compcode:8;
- uint64_t uc_compcode:8;
- uint64_t doneint:1;
- uint64_t reserved_17_63:47;
- uint64_t reserved_64_127;
- };
- uint16_t u16[8];
- };
-};
-
-struct otx2_cpt_inst_s {
- union {
- struct {
- /* W0 */
- uint64_t nixtxl : 3;
- uint64_t doneint : 1;
- uint64_t nixtx_addr : 60;
- /* W1 */
- uint64_t res_addr : 64;
- /* W2 */
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t rsvd_175_172 : 4;
- uint64_t rvu_pf_func : 16;
- /* W3 */
- uint64_t qord : 1;
- uint64_t rsvd_194_193 : 2;
- uint64_t wqe_ptr : 61;
- /* W4 */
- uint64_t dlen : 16;
- uint64_t param2 : 16;
- uint64_t param1 : 16;
- uint64_t opcode : 16;
- /* W5 */
- uint64_t dptr : 64;
- /* W6 */
- uint64_t rptr : 64;
- /* W7 */
- uint64_t cptr : 61;
- uint64_t egrp : 3;
- };
- uint64_t u64[8];
- };
-};
-
-/*
- * Security session for inline IPsec protocol offload. This is the
- * private data of an inline-capable PMD.
- */
-struct otx2_sec_session_ipsec_ip {
- RTE_STD_C11
- union {
- /*
- * The inbound SA is accessed by the crypto block, so its memory
- * is allocated separately and shared with the h/w. Only a
- * pointer to this memory is held in the session private
- * space.
- */
- void *in_sa;
- /* Outbound SA */
- struct otx2_ipsec_fp_out_sa out_sa;
- };
-
- /* Address of CPT LMTLINE */
- void *cpt_lmtline;
- /* CPT LF enqueue register address */
- rte_iova_t cpt_nq_reg;
-
- /* Pre-calculated lengths and data for a session */
- uint8_t partial_len;
- uint8_t roundup_len;
- uint8_t roundup_byte;
- uint16_t ip_id;
- union {
- uint64_t esn;
- struct {
- uint32_t seq;
- uint32_t esn_hi;
- };
- };
-
- uint64_t inst_w7;
-
- /* CPT QP used by SA */
- struct otx2_cpt_qp *qp;
-};
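
The ESN union at the end overlays the 32-bit seq and esn_hi words on one 64-bit counter, so the Tx path can bump sess->esn once per packet and hand the halves of esn_hi to the CPT as param1/param2. A little-endian sketch of the overlay (the layout this driver assumes):

    #include <assert.h>
    #include <stdint.h>

    /* Copy of the ESN overlay; anonymous structs need C11. */
    union esn_overlay {
            uint64_t esn;
            struct {
                    uint32_t seq;
                    uint32_t esn_hi;
            };
    };

    int main(void)
    {
            union esn_overlay e = { .esn = 0xFFFFFFFFull }; /* seq about to wrap */

            e.esn++;
            assert(e.seq == 0);    /* low word wrapped... */
            assert(e.esn_hi == 1); /* ...and carried into the high word */
            return 0;
    }
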
-
-int otx2_eth_sec_ctx_create(struct rte_eth_dev *eth_dev);
-
-void otx2_eth_sec_ctx_destroy(struct rte_eth_dev *eth_dev);
-
-int otx2_eth_sec_update_tag_type(struct rte_eth_dev *eth_dev);
-
-int otx2_eth_sec_init(struct rte_eth_dev *eth_dev);
-
-void otx2_eth_sec_fini(struct rte_eth_dev *eth_dev);
-
-#endif /* __OTX2_ETHDEV_SEC_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
deleted file mode 100644
index 021782009f..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ /dev/null
@@ -1,182 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_SEC_TX_H__
-#define __OTX2_ETHDEV_SEC_TX_H__
-
-#include <rte_security.h>
-#include <rte_mbuf.h>
-
-#include "otx2_ethdev_sec.h"
-#include "otx2_security.h"
-
-struct otx2_ipsec_fp_out_hdr {
- uint32_t ip_id;
- uint32_t seq;
- uint8_t iv[16];
-};
-
-static __rte_always_inline int32_t
-otx2_ipsec_fp_out_rlen_get(struct otx2_sec_session_ipsec_ip *sess,
- uint32_t plen)
-{
- uint32_t enc_payload_len;
-
- enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len,
- sess->roundup_byte);
-
- return sess->partial_len + enc_payload_len;
-}
-
-static __rte_always_inline void
-otx2_ssogws_head_wait(uint64_t base);
-
-static __rte_always_inline int
-otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
- const struct otx2_eth_txq *txq, const uint32_t offload_flags)
-{
- uint32_t dlen, rlen, desc_headroom, extend_head, extend_tail;
- struct otx2_sec_session_ipsec_ip *sess;
- struct otx2_ipsec_fp_out_hdr *hdr;
- struct otx2_ipsec_fp_out_sa *sa;
- uint64_t data_addr, desc_addr;
- struct otx2_sec_session *priv;
- struct otx2_cpt_inst_s inst;
- uint64_t lmt_status;
- char *data;
-
- struct desc {
- struct otx2_cpt_res cpt_res __rte_aligned(OTX2_CPT_RES_ALIGN);
- struct nix_send_hdr_s nix_hdr
- __rte_aligned(OTX2_NIX_SEND_DESC_ALIGN);
- union nix_send_sg_s nix_sg;
- struct nix_iova_s nix_iova;
- } *sd;
-
- priv = (struct otx2_sec_session *)(*rte_security_dynfield(m));
- sess = &priv->ipsec.ip;
- sa = &sess->out_sa;
-
- RTE_ASSERT(sess->cpt_lmtline != NULL);
- RTE_ASSERT(!(offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F));
-
- dlen = rte_pktmbuf_pkt_len(m) + sizeof(*hdr) - RTE_ETHER_HDR_LEN;
- rlen = otx2_ipsec_fp_out_rlen_get(sess, dlen - sizeof(*hdr));
-
- RTE_BUILD_BUG_ON(OTX2_CPT_RES_ALIGN % OTX2_NIX_SEND_DESC_ALIGN);
- RTE_BUILD_BUG_ON(sizeof(sd->cpt_res) % OTX2_NIX_SEND_DESC_ALIGN);
-
- extend_head = sizeof(*hdr);
- extend_tail = rlen - dlen;
-
- desc_headroom = (OTX2_CPT_RES_ALIGN - 1) + sizeof(*sd);
-
- if (unlikely(!rte_pktmbuf_is_contiguous(m)) ||
- unlikely(rte_pktmbuf_headroom(m) < extend_head + desc_headroom) ||
- unlikely(rte_pktmbuf_tailroom(m) < extend_tail)) {
- goto drop;
- }
-
- /*
- * Extend mbuf data to point to the expected packet buffer for NIX.
- * This includes the Ethernet header followed by the encrypted IPsec
- * payload.
- */
- rte_pktmbuf_append(m, extend_tail);
- data = rte_pktmbuf_prepend(m, extend_head);
- data_addr = rte_pktmbuf_iova(m);
-
- /*
- * Move the Ethernet header to insert otx2_ipsec_fp_out_hdr
- * before the IP header.
- */
- memcpy(data, data + sizeof(*hdr), RTE_ETHER_HDR_LEN);
-
- hdr = (struct otx2_ipsec_fp_out_hdr *)(data + RTE_ETHER_HDR_LEN);
-
- if (sa->ctl.enc_type == OTX2_IPSEC_FP_SA_ENC_AES_GCM) {
- /* AES-128-GCM */
- memcpy(hdr->iv, &sa->nonce, 4);
- memset(hdr->iv + 4, 0, 12); //TODO: make it random
- } else {
- /* AES-128-[CBC] + [SHA1] */
- memset(hdr->iv, 0, 16); //TODO: make it random
- }
-
- /* Keep CPT result and NIX send descriptors in headroom */
- sd = (void *)RTE_PTR_ALIGN(data - desc_headroom, OTX2_CPT_RES_ALIGN);
- desc_addr = data_addr - RTE_PTR_DIFF(data, sd);
-
- /* Prepare CPT instruction */
-
- inst.nixtx_addr = (desc_addr + offsetof(struct desc, nix_hdr)) >> 4;
- inst.doneint = 0;
- inst.nixtxl = 1;
- inst.res_addr = desc_addr + offsetof(struct desc, cpt_res);
- inst.u64[2] = 0;
- inst.u64[3] = 0;
- inst.wqe_ptr = desc_addr >> 3; /* FIXME: Handle errors */
- inst.qord = 1;
- inst.opcode = OTX2_CPT_OP_INLINE_IPSEC_OUTB;
- inst.dlen = dlen;
- inst.dptr = data_addr + RTE_ETHER_HDR_LEN;
- inst.u64[7] = sess->inst_w7;
-
- /* First word contains 8 bit completion code & 8 bit uc comp code */
- sd->cpt_res.u16[0] = 0;
-
- /* Prepare NIX send descriptors for output expected from CPT */
-
- sd->nix_hdr.w0.u = 0;
- sd->nix_hdr.w1.u = 0;
- sd->nix_hdr.w0.sq = txq->sq;
- sd->nix_hdr.w0.sizem1 = 1;
- sd->nix_hdr.w0.total = rte_pktmbuf_data_len(m);
- sd->nix_hdr.w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
- if (offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
- sd->nix_hdr.w0.df = otx2_nix_prefree_seg(m);
-
- sd->nix_sg.u = 0;
- sd->nix_sg.subdc = NIX_SUBDC_SG;
- sd->nix_sg.ld_type = NIX_SENDLDTYPE_LDD;
- sd->nix_sg.segs = 1;
- sd->nix_sg.seg1_size = rte_pktmbuf_data_len(m);
-
- sd->nix_iova.addr = rte_mbuf_data_iova(m);
-
- /* Mark mempool object as "put" since it is freed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-
- if (!ev->sched_type)
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
-
- inst.param1 = sess->esn_hi >> 16;
- inst.param2 = sess->esn_hi & 0xffff;
-
- hdr->seq = rte_cpu_to_be_32(sess->seq);
- hdr->ip_id = rte_cpu_to_be_32(sess->ip_id);
-
- sess->ip_id++;
- sess->esn++;
-
- rte_io_wmb();
-
- do {
- otx2_lmt_mov(sess->cpt_lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(sess->cpt_nq_reg);
- } while (lmt_status == 0);
-
- return 1;
-
-drop:
- if (offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- /* Don't free if reference count > 1 */
- if (rte_pktmbuf_prefree_seg(m) == NULL)
- return 0;
- }
- rte_pktmbuf_free(m);
- return 0;
-}
-
-#endif /* __OTX2_ETHDEV_SEC_TX_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
deleted file mode 100644
index 1d0fe4e950..0000000000
--- a/drivers/net/octeontx2/otx2_flow.c
+++ /dev/null
@@ -1,1189 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_flow.h"
-
-enum flow_vtag_cfg_dir { VTAG_TX, VTAG_RX };
-
-int
-otx2_flow_free_all_resources(struct otx2_eth_dev *hw)
-{
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- struct otx2_mbox *mbox = hw->mbox;
- struct otx2_mcam_ents_info *info;
- struct rte_bitmap *bmap;
- struct rte_flow *flow;
- int entry_count = 0;
- int rc, idx;
-
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- info = &npc->flow_entry_info[idx];
- entry_count += info->live_ent;
- }
-
- if (entry_count == 0)
- return 0;
-
- /* Free all MCAM entries allocated */
- rc = otx2_flow_mcam_free_all_entries(mbox);
-
- /* Free any MCAM counters and delete flow list */
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) {
- if (flow->ctr_id != NPC_COUNTER_NONE)
- rc |= otx2_flow_mcam_free_counter(mbox,
- flow->ctr_id);
-
- TAILQ_REMOVE(&npc->flow_list[idx], flow, next);
- rte_free(flow);
- bmap = npc->live_entries[flow->priority];
- rte_bitmap_clear(bmap, flow->mcam_id);
- }
- info = &npc->flow_entry_info[idx];
- info->free_ent = 0;
- info->live_ent = 0;
- }
- return rc;
-}
-
-
-static int
-flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
- struct otx2_npc_flow_info *flow_info)
-{
- /* This is the non-LDATA part of the search key */
- uint64_t key_data[2] = {0ULL, 0ULL};
- uint64_t key_mask[2] = {0ULL, 0ULL};
- int intf = pst->flow->nix_intf;
- int key_len, bit = 0, index;
- int off, idx, data_off = 0;
- uint8_t lid, mask, data;
- uint16_t layer_info;
- uint64_t lt, flags;
-
-
- /* Skip till Layer A data start */
- while (bit < NPC_PARSE_KEX_S_LA_OFFSET) {
- if (flow_info->keyx_supp_nmask[intf] & (1 << bit))
- data_off++;
- bit++;
- }
-
- /* Each bit represents 1 nibble */
- data_off *= 4;
-
- index = 0;
- for (lid = 0; lid < NPC_MAX_LID; lid++) {
- /* Offset in key */
- off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
- lt = pst->lt[lid] & 0xf;
- flags = pst->flags[lid] & 0xff;
-
- /* NPC_LAYER_KEX_S */
- layer_info = ((flow_info->keyx_supp_nmask[intf] >> off) & 0x7);
-
- if (layer_info) {
- for (idx = 0; idx <= 2 ; idx++) {
- if (layer_info & (1 << idx)) {
- if (idx == 2)
- data = lt;
- else if (idx == 1)
- data = ((flags >> 4) & 0xf);
- else
- data = (flags & 0xf);
-
- if (data_off >= 64) {
- data_off = 0;
- index++;
- }
- key_data[index] |= ((uint64_t)data <<
- data_off);
- mask = 0xf;
- if (lt == 0)
- mask = 0;
- key_mask[index] |= ((uint64_t)mask <<
- data_off);
- data_off += 4;
- }
- }
- }
- }
-
- otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64,
- key_data[0], key_data[1]);
-
- /* Copy this into mcam string */
- key_len = (pst->npc->keyx_len[intf] + 7) / 8;
- otx2_npc_dbg("Key_len = %d", key_len);
- memcpy(pst->flow->mcam_data, key_data, key_len);
- memcpy(pst->flow->mcam_mask, key_mask, key_len);
-
- otx2_npc_dbg("Final flow data");
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64,
- idx, pst->flow->mcam_data[idx],
- idx, pst->flow->mcam_mask[idx]);
- }
-
- /*
- * Now we have mcam data and mask formatted as
- * [Key_len/4 nibbles][0 or 1 nibble hole][data]
- * a hole is present if key_len is an odd number of nibbles.
- * mcam data must be split into 64-bit + 48-bit segments
- * for each bank's W0, W1.
- */
-
- return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info);
-}
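
The loop packs one 4-bit nibble per enabled NPC_LAYER_KEX_S bit into the search key, spilling from key_data[0] into key_data[1] at the 64-bit boundary. A minimal sketch of that accumulation with made-up nibble values:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            uint64_t key_data[2] = {0, 0};
            int data_off = 0, index = 0;
            uint8_t nibbles[] = {0x3, 0xA, 0x7}; /* e.g. flags lo, flags hi, lt */
            unsigned int i;

            for (i = 0; i < 3; i++) {
                    if (data_off >= 64) { /* spill into the second word */
                            data_off = 0;
                            index++;
                    }
                    key_data[index] |= (uint64_t)nibbles[i] << data_off;
                    data_off += 4;
            }
            assert(key_data[0] == 0x7A3);
            return 0;
    }
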
-
-static int
-flow_parse_attr(struct rte_eth_dev *eth_dev,
- const struct rte_flow_attr *attr,
- struct rte_flow_error *error,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- const char *errmsg = NULL;
-
- if (attr == NULL)
- errmsg = "Attribute can't be empty";
- else if (attr->group)
- errmsg = "Groups are not supported";
- else if (attr->priority >= dev->npc_flow.flow_max_priority)
- errmsg = "Priority should be with in specified range";
- else if ((!attr->egress && !attr->ingress) ||
- (attr->egress && attr->ingress))
- errmsg = "Exactly one of ingress or egress must be set";
-
- if (errmsg != NULL) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR,
- attr, errmsg);
- return -ENOTSUP;
- }
-
- if (attr->ingress)
- flow->nix_intf = OTX2_INTF_RX;
- else
- flow->nix_intf = OTX2_INTF_TX;
-
- flow->priority = attr->priority;
- return 0;
-}
-
-static inline int
-flow_get_free_rss_grp(struct rte_bitmap *bmap,
- uint32_t size, uint32_t *pos)
-{
- for (*pos = 0; *pos < size; ++*pos) {
- if (!rte_bitmap_get(bmap, *pos))
- break;
- }
-
- return *pos < size ? 0 : -1;
-}
-
-static int
-flow_configure_rss_action(struct otx2_eth_dev *dev,
- const struct rte_flow_action_rss *rss,
- uint8_t *alg_idx, uint32_t *rss_grp,
- int mcam_index)
-{
- struct otx2_npc_flow_info *flow_info = &dev->npc_flow;
- uint16_t reta[NIX_RSS_RETA_SIZE_MAX];
- uint32_t flowkey_cfg, grp_aval, i;
- uint16_t *ind_tbl = NULL;
- uint8_t flowkey_algx;
- int rc;
-
- rc = flow_get_free_rss_grp(flow_info->rss_grp_entries,
- flow_info->rss_grps, &grp_aval);
- /* RSS group 0 is not usable for flow RSS action */
- if (rc < 0 || grp_aval == 0)
- return -ENOSPC;
-
- *rss_grp = grp_aval;
-
- otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key,
- rss->key_len);
-
- /* If the queue count passed in the RSS action is less than
- * the HW-configured RETA size, replicate the RSS action's
- * queue list across the HW RETA table.
- */
- if (dev->rss_info.rss_size > rss->queue_num) {
- ind_tbl = reta;
-
- for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++)
- memcpy(reta + i * rss->queue_num, rss->queue,
- sizeof(uint16_t) * rss->queue_num);
-
- i = dev->rss_info.rss_size % rss->queue_num;
- if (i)
- memcpy(&reta[dev->rss_info.rss_size] - i,
- rss->queue, i * sizeof(uint16_t));
- } else {
- ind_tbl = (uint16_t *)(uintptr_t)rss->queue;
- }
-
- rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl);
- if (rc) {
- otx2_err("Failed to init rss table rc = %d", rc);
- return rc;
- }
-
- flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx,
- *rss_grp, mcam_index);
- if (rc) {
- otx2_err("Failed to set rss hash function rc = %d", rc);
- return rc;
- }
-
- *alg_idx = flowkey_algx;
-
- rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp);
-
- return 0;
-}
-
-
-static int
-flow_program_rss_action(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[],
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- const struct rte_flow_action_rss *rss;
- uint32_t rss_grp;
- uint8_t alg_idx;
- int rc;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
- rss = (const struct rte_flow_action_rss *)actions->conf;
-
- rc = flow_configure_rss_action(dev,
- rss, &alg_idx, &rss_grp,
- flow->mcam_id);
- if (rc)
- return rc;
-
- flow->npc_action &= (~(0xfULL));
- flow->npc_action |= NIX_RX_ACTIONOP_RSS;
- flow->npc_action |=
- ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) <<
- NIX_RSS_ACT_ALG_OFFSET) |
- ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) <<
- NIX_RSS_ACT_GRP_OFFSET);
- }
- }
- return 0;
-}
-
-static int
-flow_free_rss_action(struct rte_eth_dev *eth_dev,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- uint32_t rss_grp;
-
- if (flow->npc_action & NIX_RX_ACTIONOP_RSS) {
- rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) &
- NIX_RSS_ACT_GRP_MASK;
- if (rss_grp == 0 || rss_grp >= npc->rss_grps)
- return -EINVAL;
-
- rte_bitmap_clear(npc->rss_grp_entries, rss_grp);
- }
-
- return 0;
-}
-
-static int
-flow_update_sec_tt(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[])
-{
- int rc = 0;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- rc = otx2_eth_sec_update_tag_type(eth_dev);
- break;
- }
- }
-
- return rc;
-}
-
-static int
-flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
-{
- otx2_npc_dbg("Meta Item");
- return 0;
-}
-
-/*
- * Parse function of each layer:
- * - Consume one or more patterns that are relevant.
- * - Update parse_state
- * - Set parse_state.pattern = last item consumed
- * - Set appropriate error code/message when returning error.
- */
-typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst);
-
-static int
-flow_parse_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- struct rte_flow_error *error,
- struct rte_flow *flow,
- struct otx2_parse_state *pst)
-{
- flow_parse_stage_func_t parse_stage_funcs[] = {
- flow_parse_meta_items,
- otx2_flow_parse_higig2_hdr,
- otx2_flow_parse_la,
- otx2_flow_parse_lb,
- otx2_flow_parse_lc,
- otx2_flow_parse_ld,
- otx2_flow_parse_le,
- otx2_flow_parse_lf,
- otx2_flow_parse_lg,
- otx2_flow_parse_lh,
- };
- struct otx2_eth_dev *hw = dev->data->dev_private;
- uint8_t layer = 0;
- int key_offset;
- int rc;
-
- if (pattern == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
- "pattern is NULL");
- return -EINVAL;
- }
-
- memset(pst, 0, sizeof(*pst));
- pst->npc = &hw->npc_flow;
- pst->error = error;
- pst->flow = flow;
-
- /* Use integral byte offset */
- key_offset = pst->npc->keyx_len[flow->nix_intf];
- key_offset = (key_offset + 7) / 8;
-
- /* Location where LDATA would begin */
- pst->mcam_data = (uint8_t *)flow->mcam_data;
- pst->mcam_mask = (uint8_t *)flow->mcam_mask;
-
- while (pattern->type != RTE_FLOW_ITEM_TYPE_END &&
- layer < RTE_DIM(parse_stage_funcs)) {
- otx2_npc_dbg("Pattern type = %d", pattern->type);
-
- /* Skip place-holders */
- pattern = otx2_flow_skip_void_and_any_items(pattern);
-
- pst->pattern = pattern;
- otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer);
- rc = parse_stage_funcs[layer](pst);
- if (rc != 0)
- return -rte_errno;
-
- layer++;
-
- /*
- * Parse stage function sets pst->pattern to
- * 1 past the last item it consumed.
- */
- pattern = pst->pattern;
-
- if (pst->terminate)
- break;
- }
-
- /* Skip trailing place-holders */
- pattern = otx2_flow_skip_void_and_any_items(pattern);
-
- /* Are there more items than what we can handle? */
- if (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM, pattern,
- "unsupported item in the sequence");
- return -ENOTSUP;
- }
-
- return 0;
-}
-
-static int
-flow_parse_rule(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow,
- struct otx2_parse_state *pst)
-{
- int err;
-
- /* Check attributes */
- err = flow_parse_attr(dev, attr, error, flow);
- if (err)
- return err;
-
- /* Check actions */
- err = otx2_flow_parse_actions(dev, attr, actions, error, flow);
- if (err)
- return err;
-
- /* Check pattern */
- err = flow_parse_pattern(dev, pattern, error, flow, pst);
- if (err)
- return err;
-
- /* Check for overlaps? */
- return 0;
-}
-
-static int
-otx2_flow_validate(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct otx2_parse_state parse_state;
- struct rte_flow flow;
-
- memset(&flow, 0, sizeof(flow));
- return flow_parse_rule(dev, attr, pattern, actions, error, &flow,
- &parse_state);
-}
-
-static int
-flow_program_vtag_action(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[],
- struct rte_flow *flow)
-{
- uint16_t vlan_id = 0, vlan_ethtype = RTE_ETHER_TYPE_VLAN;
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- union {
- uint64_t reg;
- struct nix_tx_vtag_action_s act;
- } tx_vtag_action;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- struct nix_vtag_config_rsp *rsp;
- bool vlan_insert_action = false;
- uint64_t rx_vtag_action = 0;
- uint8_t vlan_pcp = 0;
- int rc;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_OF_POP_VLAN) {
- if (dev->npc_flow.vtag_actions == 1) {
- vtag_cfg =
- otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- vtag_cfg->cfg_type = VTAG_RX;
- vtag_cfg->rx.strip_vtag = 1;
- /* Always capture */
- vtag_cfg->rx.capture_vtag = 1;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- vtag_cfg->rx.vtag_type = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
- }
-
- rx_vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- rx_vtag_action |= (NPC_LID_LB << 8);
- rx_vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
- flow->vtag_action = rx_vtag_action;
- } else if (actions->type ==
- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) {
- const struct rte_flow_action_of_set_vlan_vid *vtag =
- (const struct rte_flow_action_of_set_vlan_vid *)
- actions->conf;
- vlan_id = rte_be_to_cpu_16(vtag->vlan_vid);
- if (vlan_id > 0xfff) {
- otx2_err("Invalid vlan_id for set vlan action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- } else if (actions->type == RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN) {
- const struct rte_flow_action_of_push_vlan *ethtype =
- (const struct rte_flow_action_of_push_vlan *)
- actions->conf;
- vlan_ethtype = rte_be_to_cpu_16(ethtype->ethertype);
- if (vlan_ethtype != RTE_ETHER_TYPE_VLAN &&
- vlan_ethtype != RTE_ETHER_TYPE_QINQ) {
- otx2_err("Invalid ethtype specified for push"
- " vlan action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- } else if (actions->type ==
- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP) {
- const struct rte_flow_action_of_set_vlan_pcp *pcp =
- (const struct rte_flow_action_of_set_vlan_pcp *)
- actions->conf;
- vlan_pcp = pcp->vlan_pcp;
- if (vlan_pcp > 0x7) {
- otx2_err("Invalid PCP value for pcp action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- }
- }
-
- if (vlan_insert_action) {
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- vtag_cfg->tx.vtag0 =
- ((vlan_ethtype << 16) | (vlan_pcp << 13) | vlan_id);
- vtag_cfg->tx.cfg_vtag0 = 1;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- tx_vtag_action.reg = 0;
- tx_vtag_action.act.vtag0_def = rsp->vtag0_idx;
- if (tx_vtag_action.act.vtag0_def < 0) {
- otx2_err("Failed to config TX VTAG action");
- return -EINVAL;
- }
- tx_vtag_action.act.vtag0_lid = NPC_LID_LA;
- tx_vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
- tx_vtag_action.act.vtag0_relptr =
- NIX_TX_VTAGACTION_VTAG0_RELPTR;
- flow->vtag_action = tx_vtag_action.reg;
- }
- return 0;
-}
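
The 32-bit vtag0 value programmed above packs the tag fields in wire order: ethertype in bits 31:16, PCP in bits 15:13 and the VLAN ID in bits 11:0 (the DEI bit, bit 12, is left clear). A minimal sketch of the same packing, with a hypothetical helper name:

    #include <stdint.h>

    /* Hypothetical helper mirroring the packing used above. */
    static inline uint32_t
    vtag0_word(uint16_t ethertype, uint8_t pcp, uint16_t vid)
    {
            return ((uint32_t)ethertype << 16) |
                   ((uint32_t)(pcp & 0x7) << 13) |
                   (vid & 0xfff);
    }

    /* Example: 0x8100 tag, PCP 5, VLAN 100 -> 0x8100a064 */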
-
-static struct rte_flow *
-otx2_flow_create(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_parse_state parse_state;
- struct otx2_mbox *mbox = hw->mbox;
- struct rte_flow *flow, *flow_iter;
- struct otx2_flow_list *list;
- int rc;
-
- flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0);
- if (flow == NULL) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Memory allocation failed");
- return NULL;
- }
- memset(flow, 0, sizeof(*flow));
-
- rc = flow_parse_rule(dev, attr, pattern, actions, error, flow,
- &parse_state);
- if (rc != 0)
- goto err_exit;
-
- rc = flow_program_vtag_action(dev, actions, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to program vlan action");
- goto err_exit;
- }
-
- parse_state.is_vf = otx2_dev_is_vf(hw);
-
- rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to insert filter");
- goto err_exit;
- }
-
- rc = flow_program_rss_action(dev, actions, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to program rss action");
- goto err_exit;
- }
-
- if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
- rc = flow_update_sec_tt(dev, actions);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to update tt with sec act");
- goto err_exit;
- }
- }
-
- list = &hw->npc_flow.flow_list[flow->priority];
- /* List in ascending order of mcam entries */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id > flow->mcam_id) {
- TAILQ_INSERT_BEFORE(flow_iter, flow, next);
- return flow;
- }
- }
-
- TAILQ_INSERT_TAIL(list, flow, next);
- return flow;
-
-err_exit:
- rte_free(flow);
- return NULL;
-}
-
-static int
-otx2_flow_destroy(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- struct otx2_mbox *mbox = hw->mbox;
- struct rte_bitmap *bmap;
- uint16_t match_id;
- int rc;
-
- match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) &
- NIX_RX_ACT_MATCH_MASK;
-
- if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) {
- if (rte_atomic32_read(&npc->mark_actions) == 0)
- return -EINVAL;
-
- /* Clear mark offload flag if there are no more mark actions */
- if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) {
- hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
- otx2_eth_set_rx_function(dev);
- }
- }
-
- if (flow->nix_intf == OTX2_INTF_RX && flow->vtag_action) {
- npc->vtag_actions--;
- if (npc->vtag_actions == 0) {
- if (hw->vlan_info.strip_on == 0) {
- hw->rx_offload_flags &=
- ~NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(dev);
- }
- }
- }
-
- rc = flow_free_rss_action(dev, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to free rss action");
- }
-
- rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to destroy filter");
- }
-
- TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next);
-
- bmap = npc->live_entries[flow->priority];
- rte_bitmap_clear(bmap, flow->mcam_id);
-
- rte_free(flow);
- return 0;
-}
-
-static int
-otx2_flow_flush(struct rte_eth_dev *dev,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- int rc;
-
- rc = otx2_flow_free_all_resources(hw);
- if (rc) {
-		otx2_err("Error when deleting NPC MCAM entries, counters");
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to flush filter");
- return -rte_errno;
- }
-
- return 0;
-}
-
-static int
-otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused,
- int enable __rte_unused,
- struct rte_flow_error *error)
-{
-	/*
-	 * If isolation were supported, the default MCAM entry for
-	 * this port would need to be uninstalled here.
-	 */
-
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Flow isolation not supported");
-
- return -rte_errno;
-}
-
-static int
-otx2_flow_query(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- const struct rte_flow_action *action,
- void *data,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct rte_flow_query_count *query = data;
- struct otx2_mbox *mbox = hw->mbox;
- const char *errmsg = NULL;
- int errcode = ENOTSUP;
- int rc;
-
- if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) {
- errmsg = "Only COUNT is supported in query";
- goto err_exit;
- }
-
- if (flow->ctr_id == NPC_COUNTER_NONE) {
- errmsg = "Counter is not available";
- goto err_exit;
- }
-
- rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits);
- if (rc != 0) {
- errcode = EIO;
- errmsg = "Error reading flow counter";
- goto err_exit;
- }
- query->hits_set = 1;
- query->bytes_set = 0;
-
-	if (query->reset) {
-		rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id);
-		if (rc != 0) {
-			errcode = EIO;
-			errmsg = "Error clearing flow counter";
-			goto err_exit;
-		}
-	}
-
- return 0;
-
-err_exit:
- rte_flow_error_set(error, errcode,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- errmsg);
- return -rte_errno;
-}
-
-static int
-otx2_flow_dev_dump(struct rte_eth_dev *dev,
- struct rte_flow *flow, FILE *file,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_flow_list *list;
- struct rte_flow *flow_iter;
- uint32_t max_prio, i;
-
- if (file == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Invalid file");
- return -EINVAL;
- }
- if (flow != NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_HANDLE,
- NULL,
- "Invalid argument");
- return -EINVAL;
- }
-
- max_prio = hw->npc_flow.flow_max_priority;
-
- for (i = 0; i < max_prio; i++) {
- list = &hw->npc_flow.flow_list[i];
-
- /* List in ascending order of mcam entries */
- TAILQ_FOREACH(flow_iter, list, next) {
- otx2_flow_dump(file, hw, flow_iter);
- }
- }
-
- return 0;
-}
-
-const struct rte_flow_ops otx2_flow_ops = {
- .validate = otx2_flow_validate,
- .create = otx2_flow_create,
- .destroy = otx2_flow_destroy,
- .flush = otx2_flow_flush,
- .query = otx2_flow_query,
- .isolate = otx2_flow_isolate,
- .dev_dump = otx2_flow_dev_dump,
-};
-
-static int
-flow_supp_key_len(uint32_t supp_mask)
-{
- int nib_count = 0;
- while (supp_mask) {
- nib_count++;
- supp_mask &= (supp_mask - 1);
- }
- return nib_count * 4;
-}
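
flow_supp_key_len() is Kernighan's population count: supp_mask &= (supp_mask - 1) clears the lowest set bit, so the loop iterates once per enabled nibble, and each nibble contributes 4 bits of key. A quick illustration with an arbitrary mask:

    /* Three nibbles enabled -> three loop iterations -> 12-bit key */
    uint32_t supp = (1u << 0) | (1u << 4) | (1u << 9);
    int key_bits = flow_supp_key_len(supp);    /* == 12 */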
-
-/* Refer to the HRM registers:
- * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG
- * and
- * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG
- */
-#define BYTESM1_SHIFT 16
-#define HDR_OFF_SHIFT 8
-static void
-flow_update_kex_info(struct npc_xtract_info *xtract_info,
- uint64_t val)
-{
- xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1;
- xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff;
- xtract_info->key_off = val & 0x3f;
- xtract_info->enable = ((val >> 7) & 0x1);
- xtract_info->flags_enable = ((val >> 6) & 0x1);
-}
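
A worked decode of the field extraction above, using an illustrative (not hardware-sourced) register value:

    /* val = 0x000301c5:
     *   len          = ((val >> 16) & 0xf) + 1 = 4 bytes extracted
     *   hdr_off      = (val >> 8) & 0xff      = 1
     *   enable       = (val >> 7) & 0x1       = 1
     *   flags_enable = (val >> 6) & 0x1       = 1
     *   key_off      = val & 0x3f             = 5
     */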
-
-static void
-flow_process_mkex_cfg(struct otx2_npc_flow_info *npc,
- struct npc_get_kex_cfg_rsp *kex_rsp)
-{
- volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]
- [NPC_MAX_LD];
- struct npc_xtract_info *x_info = NULL;
- int lid, lt, ld, fl, ix;
- otx2_dxcfg_t *p;
- uint64_t keyw;
- uint64_t val;
-
- npc->keyx_supp_nmask[NPC_MCAM_RX] =
- kex_rsp->rx_keyx_cfg & 0x7fffffffULL;
- npc->keyx_supp_nmask[NPC_MCAM_TX] =
- kex_rsp->tx_keyx_cfg & 0x7fffffffULL;
- npc->keyx_len[NPC_MCAM_RX] =
- flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]);
- npc->keyx_len[NPC_MCAM_TX] =
- flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]);
-
- keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL;
- npc->keyw[NPC_MCAM_RX] = keyw;
- keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL;
- npc->keyw[NPC_MCAM_TX] = keyw;
-
- /* Update KEX_LD_FLAG */
- for (ix = 0; ix < NPC_MAX_INTF; ix++) {
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- for (fl = 0; fl < NPC_MAX_LFL; fl++) {
- x_info =
- &npc->prx_fxcfg[ix][ld][fl].xtract[0];
- val = kex_rsp->intf_ld_flags[ix][ld][fl];
- flow_update_kex_info(x_info, val);
- }
- }
- }
-
- /* Update LID, LT and LDATA cfg */
- p = &npc->prx_dxcfg;
- q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])
- (&kex_rsp->intf_lid_lt_ld);
- for (ix = 0; ix < NPC_MAX_INTF; ix++) {
- for (lid = 0; lid < NPC_MAX_LID; lid++) {
- for (lt = 0; lt < NPC_MAX_LT; lt++) {
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- x_info = &(*p)[ix][lid][lt].xtract[ld];
- val = (*q)[ix][lid][lt][ld];
- flow_update_kex_info(x_info, val);
- }
- }
- }
- }
- /* Update LDATA Flags cfg */
- npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0];
- npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1];
-}
-
-static struct otx2_idev_kex_cfg *
-flow_intra_dev_kex_cfg(void)
-{
- static const char name[] = "octeontx2_intra_device_kex_conf";
- struct otx2_idev_kex_cfg *idev;
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(name);
- if (mz)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg),
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz) {
- idev = mz->addr;
- rte_atomic16_set(&idev->kex_refcnt, 0);
- return idev;
- }
- return NULL;
-}
-
-static int
-flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
-{
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- struct npc_get_kex_cfg_rsp *kex_rsp;
- struct otx2_mbox *mbox = dev->mbox;
- char mkex_pfl_name[MKEX_NAME_LEN];
- struct otx2_idev_kex_cfg *idev;
- int rc = 0;
-
- idev = flow_intra_dev_kex_cfg();
- if (!idev)
- return -ENOMEM;
-
-	/* Has kex_cfg already been read by another driver? */
- if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) {
- /* Call mailbox to get key & data size */
- (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox);
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp);
- if (rc) {
- otx2_err("Failed to fetch NPC keyx config");
- goto done;
- }
- memcpy(&idev->kex_cfg, kex_rsp,
- sizeof(struct npc_get_kex_cfg_rsp));
- }
-
- otx2_mbox_memcpy(mkex_pfl_name,
- idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN);
-
- strlcpy((char *)dev->mkex_pfl_name,
- mkex_pfl_name, sizeof(dev->mkex_pfl_name));
-
- flow_process_mkex_cfg(npc, &idev->kex_cfg);
-
-done:
- return rc;
-}
-
-#define OTX2_MCAM_TOT_ENTRIES_96XX (4096)
-#define OTX2_MCAM_TOT_ENTRIES_98XX (16384)
-
-static int otx2_mcam_tot_entries(struct otx2_eth_dev *dev)
-{
- if (otx2_dev_is_98xx(dev))
- return OTX2_MCAM_TOT_ENTRIES_98XX;
- else
- return OTX2_MCAM_TOT_ENTRIES_96XX;
-}
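
otx2_flow_init() below divides this total by the key width: a wider MCAM key consumes multiple banks, so the usable entry count is tot_mcam_entries >> keyw. For example, assuming a keyw of 2 on 96xx silicon (value illustrative):

    int tot = otx2_mcam_tot_entries(dev);              /* 4096 on 96xx */
    uint32_t usable = tot >> npc->keyw[NPC_MCAM_RX];   /* 1024 if keyw == 2 */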
-
-int
-otx2_flow_init(struct otx2_eth_dev *hw)
-{
- uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- uint32_t bmap_sz, tot_mcam_entries = 0;
- int rc = 0, idx;
-
- rc = flow_fetch_kex_cfg(hw);
- if (rc) {
- otx2_err("Failed to fetch NPC keyx config from idev");
- return rc;
- }
-
- rte_atomic32_init(&npc->mark_actions);
- npc->vtag_actions = 0;
-
- tot_mcam_entries = otx2_mcam_tot_entries(hw);
- npc->mcam_entries = tot_mcam_entries >> npc->keyw[NPC_MCAM_RX];
- /* Free, free_rev, live and live_rev entries */
- bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries);
- mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority,
- RTE_CACHE_LINE_SIZE);
- if (mem == NULL) {
- otx2_err("Bmap alloc failed");
- rc = -ENOMEM;
- return rc;
- }
-
- npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct otx2_mcam_ents_info),
- 0);
- if (npc->flow_entry_info == NULL) {
- otx2_err("flow_entry_info alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->free_entries == NULL) {
- otx2_err("free_entries alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->free_entries_rev == NULL) {
- otx2_err("free_entries_rev alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->live_entries == NULL) {
- otx2_err("live_entries alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->live_entries_rev == NULL) {
- otx2_err("live_entries_rev alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct otx2_flow_list),
- 0);
- if (npc->flow_list == NULL) {
- otx2_err("flow_list alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc_mem = mem;
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- TAILQ_INIT(&npc->flow_list[idx]);
-
- npc->free_entries[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->free_entries_rev[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->live_entries[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->live_entries_rev[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->flow_entry_info[idx].free_ent = 0;
- npc->flow_entry_info[idx].live_ent = 0;
- npc->flow_entry_info[idx].max_id = 0;
- npc->flow_entry_info[idx].min_id = ~(0);
- }
-
- npc->rss_grps = NIX_RSS_GRPS;
-
- bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps);
- nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
- if (nix_mem == NULL) {
- otx2_err("Bmap alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz);
-
-	/* Group 0 will be used for RSS;
-	 * groups 1-7 will be used for the rte_flow RSS action
-	 */
- rte_bitmap_set(npc->rss_grp_entries, 0);
-
- return 0;
-
-err:
- if (npc->flow_list)
- rte_free(npc->flow_list);
- if (npc->live_entries_rev)
- rte_free(npc->live_entries_rev);
- if (npc->live_entries)
- rte_free(npc->live_entries);
- if (npc->free_entries_rev)
- rte_free(npc->free_entries_rev);
- if (npc->free_entries)
- rte_free(npc->free_entries);
- if (npc->flow_entry_info)
- rte_free(npc->flow_entry_info);
- if (npc_mem)
- rte_free(npc_mem);
- return rc;
-}
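
For reference, the single rte_zmalloc() region above is carved into four equally sized bitmaps per priority level; the layout implied by the init loop is:

    /* npc_mem layout, per priority level idx (each region bmap_sz bytes):
     *
     *   base + idx * 4 * bmap_sz -> [ free | free_rev | live | live_rev ]
     *
     * This is why the error path frees only npc_mem: rte_bitmap_init()
     * does not allocate, so all bitmap handles point into this one
     * allocation.
     */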
-
-int
-otx2_flow_fini(struct otx2_eth_dev *hw)
-{
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- int rc;
-
- rc = otx2_flow_free_all_resources(hw);
- if (rc) {
- otx2_err("Error when deleting NPC MCAM entries, counters");
- return rc;
- }
-
- if (npc->flow_list)
- rte_free(npc->flow_list);
- if (npc->live_entries_rev)
- rte_free(npc->live_entries_rev);
- if (npc->live_entries)
- rte_free(npc->live_entries);
- if (npc->free_entries_rev)
- rte_free(npc->free_entries_rev);
- if (npc->free_entries)
- rte_free(npc->free_entries);
- if (npc->flow_entry_info)
- rte_free(npc->flow_entry_info);
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
deleted file mode 100644
index 790e6ef1e8..0000000000
--- a/drivers/net/octeontx2/otx2_flow.h
+++ /dev/null
@@ -1,414 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_FLOW_H__
-#define __OTX2_FLOW_H__
-
-#include <stdint.h>
-
-#include <rte_flow_driver.h>
-#include <rte_malloc.h>
-#include <rte_tailq.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev.h"
-#include "otx2_mbox.h"
-
-struct otx2_eth_dev;
-
-int otx2_flow_init(struct otx2_eth_dev *hw);
-int otx2_flow_fini(struct otx2_eth_dev *hw);
-extern const struct rte_flow_ops otx2_flow_ops;
-
-enum {
- OTX2_INTF_RX = 0,
- OTX2_INTF_TX = 1,
- OTX2_INTF_MAX = 2,
-};
-
-#define NPC_IH_LENGTH 8
-#define NPC_TPID_LENGTH 2
-#define NPC_HIGIG2_LENGTH 16
-#define NPC_MAX_RAW_ITEM_LEN 16
-#define NPC_COUNTER_NONE (-1)
-/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */
-#define NPC_MAX_EXTRACT_DATA_LEN (64)
-#define NPC_LDATA_LFLAG_LEN (16)
-#define NPC_MAX_KEY_NIBBLES (31)
-/* Nibble offsets */
-#define NPC_LAYER_KEYX_SZ (3)
-#define NPC_PARSE_KEX_S_LA_OFFSET (7)
-#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
- ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \
- + NPC_PARSE_KEX_S_LA_OFFSET)
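
A worked expansion of the macro above, assuming the NPC_LID_* enum values are consecutive starting at NPC_LID_LA (so NPC_LID_LC == NPC_LID_LA + 2):

    /* NPC_PARSE_KEX_S_LID_OFFSET(NPC_LID_LC)
     *   = ((NPC_LID_LC - NPC_LID_LA) * 3) + 7
     *   = (2 * 3) + 7
     *   = 13    (nibble offset where the LC type/flags nibbles begin)
     */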
-
-
-/* supported flow actions flags */
-#define OTX2_FLOW_ACT_MARK (1 << 0)
-#define OTX2_FLOW_ACT_FLAG (1 << 1)
-#define OTX2_FLOW_ACT_DROP (1 << 2)
-#define OTX2_FLOW_ACT_QUEUE (1 << 3)
-#define OTX2_FLOW_ACT_RSS (1 << 4)
-#define OTX2_FLOW_ACT_DUP (1 << 5)
-#define OTX2_FLOW_ACT_SEC (1 << 6)
-#define OTX2_FLOW_ACT_COUNT (1 << 7)
-#define OTX2_FLOW_ACT_PF (1 << 8)
-#define OTX2_FLOW_ACT_VF (1 << 9)
-#define OTX2_FLOW_ACT_VLAN_STRIP (1 << 10)
-#define OTX2_FLOW_ACT_VLAN_INSERT (1 << 11)
-#define OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT (1 << 12)
-#define OTX2_FLOW_ACT_VLAN_PCP_INSERT (1 << 13)
-
-/* terminating actions */
-#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \
- OTX2_FLOW_ACT_QUEUE | \
- OTX2_FLOW_ACT_RSS | \
- OTX2_FLOW_ACT_DUP | \
- OTX2_FLOW_ACT_SEC)
-
-/* This mark value indicates flag action */
-#define OTX2_FLOW_FLAG_VAL (0xffff)
-
-#define NIX_RX_ACT_MATCH_OFFSET (40)
-#define NIX_RX_ACT_MATCH_MASK (0xFFFF)
-
-#define NIX_RSS_ACT_GRP_OFFSET (20)
-#define NIX_RSS_ACT_ALG_OFFSET (56)
-#define NIX_RSS_ACT_GRP_MASK (0xFFFFF)
-#define NIX_RSS_ACT_ALG_MASK (0x1F)
-
-/* PMD-specific definition of the opaque struct rte_flow */
-#define OTX2_MAX_MCAM_WIDTH_DWORDS 7
-
-enum npc_mcam_intf {
- NPC_MCAM_RX,
- NPC_MCAM_TX
-};
-
-struct npc_xtract_info {
- /* Length in bytes of pkt data extracted. len = 0
- * indicates that extraction is disabled.
- */
- uint8_t len;
- uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */
- uint8_t key_off; /* Byte offset in MCAM key where data is placed */
- uint8_t enable; /* Extraction enabled or disabled */
- uint8_t flags_enable; /* Flags extraction enabled */
-};
-
-/* Information for a given {LAYER, LTYPE} */
-struct npc_lid_lt_xtract_info {
- /* Info derived from parser configuration */
- uint16_t npc_proto; /* Network protocol identified */
- uint8_t valid_flags_mask; /* Flags applicable */
- uint8_t is_terminating:1; /* No more parsing */
- struct npc_xtract_info xtract[NPC_MAX_LD];
-};
-
-union npc_kex_ldata_flags_cfg {
- struct {
- #if defined(__BIG_ENDIAN_BITFIELD)
- uint64_t rvsd_62_1 : 61;
- uint64_t lid : 3;
- #else
- uint64_t lid : 3;
- uint64_t rvsd_62_1 : 61;
- #endif
- } s;
-
- uint64_t i;
-};
-
-typedef struct npc_lid_lt_xtract_info
- otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT];
-typedef struct npc_lid_lt_xtract_info
- otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
-typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD];
-
-
-/* MBOX_MSG_NPC_GET_DATAX_CFG Response */
-struct npc_get_datax_cfg {
- /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
- union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD];
- /* Extract information indexed with [LID][LTYPE] */
- struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT];
- /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE]
- * Fields flags_ena_ld0, flags_ena_ld1 in
- * struct npc_lid_lt_xtract_info indicate if this is applicable
- * for a given {LAYER, LTYPE}
- */
- struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT];
-};
-
-struct otx2_mcam_ents_info {
- /* Current max & min values of mcam index */
- uint32_t max_id;
- uint32_t min_id;
- uint32_t free_ent;
- uint32_t live_ent;
-};
-
-struct otx2_flow_dump_data {
- uint8_t lid;
- uint16_t ltype;
-};
-
-struct rte_flow {
- uint8_t nix_intf;
- uint32_t mcam_id;
- int32_t ctr_id;
- uint32_t priority;
- /* Contiguous match string */
- uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS];
- uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS];
- uint64_t npc_action;
- uint64_t vtag_action;
- struct otx2_flow_dump_data dump_data[32];
- uint16_t num_patterns;
- TAILQ_ENTRY(rte_flow) next;
-};
-
-TAILQ_HEAD(otx2_flow_list, rte_flow);
-
-/* Accessed from ethdev private - otx2_eth_dev */
-struct otx2_npc_flow_info {
- rte_atomic32_t mark_actions;
- uint32_t vtag_actions;
-	uint32_t keyx_supp_nmask[NPC_MAX_INTF]; /* nibble mask */
- uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */
- uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */
- uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */
- uint32_t mcam_entries; /* mcam entries supported */
- otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */
- otx2_fxcfg_t prx_fxcfg; /* Flag extract */
- otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */
- /* mcam entry info per priority level: both free & in-use */
- struct otx2_mcam_ents_info *flow_entry_info;
- /* Bitmap of free preallocated entries in ascending index &
- * descending priority
- */
- struct rte_bitmap **free_entries;
- /* Bitmap of free preallocated entries in descending index &
- * ascending priority
- */
- struct rte_bitmap **free_entries_rev;
- /* Bitmap of live entries in ascending index & descending priority */
- struct rte_bitmap **live_entries;
- /* Bitmap of live entries in descending index & ascending priority */
- struct rte_bitmap **live_entries_rev;
- /* Priority bucket wise tail queue of all rte_flow resources */
- struct otx2_flow_list *flow_list;
- uint32_t rss_grps; /* rss groups supported */
- struct rte_bitmap *rss_grp_entries;
-	uint16_t channel; /* rx channel */
- uint16_t flow_prealloc_size;
- uint16_t flow_max_priority;
- uint16_t switch_header_type;
-};
-
-struct otx2_parse_state {
- struct otx2_npc_flow_info *npc;
- const struct rte_flow_item *pattern;
- const struct rte_flow_item *last_pattern; /* Temp usage */
- struct rte_flow_error *error;
- struct rte_flow *flow;
- uint8_t tunnel;
- uint8_t terminate;
- uint8_t layer_mask;
- uint8_t lt[NPC_MAX_LID];
- uint8_t flags[NPC_MAX_LID];
- uint8_t *mcam_data; /* point to flow->mcam_data + key_len */
- uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */
- bool is_vf;
-};
-
-struct otx2_flow_item_info {
- const void *def_mask; /* rte_flow default mask */
- void *hw_mask; /* hardware supported mask */
- int len; /* length of item */
- const void *spec; /* spec to use, NULL implies match any */
- const void *mask; /* mask to use */
-	uint8_t hw_hdr_len; /* Extra data len at each layer */
-};
-
-struct otx2_idev_kex_cfg {
- struct npc_get_kex_cfg_rsp kex_cfg;
- rte_atomic16_t kex_refcnt;
-};
-
-enum npc_kpu_parser_flag {
- NPC_F_NA = 0,
- NPC_F_PKI,
- NPC_F_PKI_VLAN,
- NPC_F_PKI_ETAG,
- NPC_F_PKI_ITAG,
- NPC_F_PKI_MPLS,
- NPC_F_PKI_NSH,
- NPC_F_ETYPE_UNK,
- NPC_F_ETHER_VLAN,
- NPC_F_ETHER_ETAG,
- NPC_F_ETHER_ITAG,
- NPC_F_ETHER_MPLS,
- NPC_F_ETHER_NSH,
- NPC_F_STAG_CTAG,
- NPC_F_STAG_CTAG_UNK,
- NPC_F_STAG_STAG_CTAG,
- NPC_F_STAG_STAG_STAG,
- NPC_F_QINQ_CTAG,
- NPC_F_QINQ_CTAG_UNK,
- NPC_F_QINQ_QINQ_CTAG,
- NPC_F_QINQ_QINQ_QINQ,
- NPC_F_BTAG_ITAG,
- NPC_F_BTAG_ITAG_STAG,
- NPC_F_BTAG_ITAG_CTAG,
- NPC_F_BTAG_ITAG_UNK,
- NPC_F_ETAG_CTAG,
- NPC_F_ETAG_BTAG_ITAG,
- NPC_F_ETAG_STAG,
- NPC_F_ETAG_QINQ,
- NPC_F_ETAG_ITAG,
- NPC_F_ETAG_ITAG_STAG,
- NPC_F_ETAG_ITAG_CTAG,
- NPC_F_ETAG_ITAG_UNK,
- NPC_F_ITAG_STAG_CTAG,
- NPC_F_ITAG_STAG,
- NPC_F_ITAG_CTAG,
- NPC_F_MPLS_4_LABELS,
- NPC_F_MPLS_3_LABELS,
- NPC_F_MPLS_2_LABELS,
- NPC_F_IP_HAS_OPTIONS,
- NPC_F_IP_IP_IN_IP,
- NPC_F_IP_6TO4,
- NPC_F_IP_MPLS_IN_IP,
- NPC_F_IP_UNK_PROTO,
- NPC_F_IP_IP_IN_IP_HAS_OPTIONS,
- NPC_F_IP_6TO4_HAS_OPTIONS,
- NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS,
- NPC_F_IP_UNK_PROTO_HAS_OPTIONS,
- NPC_F_IP6_HAS_EXT,
- NPC_F_IP6_TUN_IP6,
- NPC_F_IP6_MPLS_IN_IP,
- NPC_F_TCP_HAS_OPTIONS,
- NPC_F_TCP_HTTP,
- NPC_F_TCP_HTTPS,
- NPC_F_TCP_PPTP,
- NPC_F_TCP_UNK_PORT,
- NPC_F_TCP_HTTP_HAS_OPTIONS,
- NPC_F_TCP_HTTPS_HAS_OPTIONS,
- NPC_F_TCP_PPTP_HAS_OPTIONS,
- NPC_F_TCP_UNK_PORT_HAS_OPTIONS,
- NPC_F_UDP_VXLAN,
- NPC_F_UDP_VXLAN_NOVNI,
- NPC_F_UDP_VXLAN_NOVNI_NSH,
- NPC_F_UDP_VXLANGPE,
- NPC_F_UDP_VXLANGPE_NSH,
- NPC_F_UDP_VXLANGPE_MPLS,
- NPC_F_UDP_VXLANGPE_NOVNI,
- NPC_F_UDP_VXLANGPE_NOVNI_NSH,
- NPC_F_UDP_VXLANGPE_NOVNI_MPLS,
- NPC_F_UDP_VXLANGPE_UNK,
- NPC_F_UDP_VXLANGPE_NONP,
- NPC_F_UDP_GTP_GTPC,
- NPC_F_UDP_GTP_GTPU_G_PDU,
- NPC_F_UDP_GTP_GTPU_UNK,
- NPC_F_UDP_UNK_PORT,
- NPC_F_UDP_GENEVE,
- NPC_F_UDP_GENEVE_OAM,
- NPC_F_UDP_GENEVE_CRI_OPT,
- NPC_F_UDP_GENEVE_OAM_CRI_OPT,
- NPC_F_GRE_NVGRE,
- NPC_F_GRE_HAS_SRE,
- NPC_F_GRE_HAS_CSUM,
- NPC_F_GRE_HAS_KEY,
- NPC_F_GRE_HAS_SEQ,
- NPC_F_GRE_HAS_CSUM_KEY,
- NPC_F_GRE_HAS_CSUM_SEQ,
- NPC_F_GRE_HAS_KEY_SEQ,
- NPC_F_GRE_HAS_CSUM_KEY_SEQ,
- NPC_F_GRE_HAS_ROUTE,
- NPC_F_GRE_UNK_PROTO,
- NPC_F_GRE_VER1,
- NPC_F_GRE_VER1_HAS_SEQ,
- NPC_F_GRE_VER1_HAS_ACK,
- NPC_F_GRE_VER1_HAS_SEQ_ACK,
- NPC_F_GRE_VER1_UNK_PROTO,
- NPC_F_TU_ETHER_UNK,
- NPC_F_TU_ETHER_CTAG,
- NPC_F_TU_ETHER_CTAG_UNK,
- NPC_F_TU_ETHER_STAG_CTAG,
- NPC_F_TU_ETHER_STAG_CTAG_UNK,
- NPC_F_TU_ETHER_STAG,
- NPC_F_TU_ETHER_STAG_UNK,
- NPC_F_TU_ETHER_QINQ_CTAG,
- NPC_F_TU_ETHER_QINQ_CTAG_UNK,
- NPC_F_TU_ETHER_QINQ,
- NPC_F_TU_ETHER_QINQ_UNK,
- NPC_F_LAST /* has to be the last item */
-};
-
-
-int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id);
-
-int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
- uint64_t *count);
-
-int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id);
-
-int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry);
-
-int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox);
-
-int otx2_flow_update_parse_state(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- int lid, int lt, uint8_t flags);
-
-int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
- struct otx2_flow_item_info *info,
- struct rte_flow_error *error);
-
-void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
-
-int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
- struct otx2_mbox *mbox,
- struct otx2_parse_state *pst,
- struct otx2_npc_flow_info *flow_info);
-
-void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- int lid, int lt);
-
-const struct rte_flow_item *
-otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern);
-
-int otx2_flow_parse_lh(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lg(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lf(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_le(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_ld(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lc(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lb(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_la(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_higig2_hdr(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_actions(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow);
-
-int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
-
-int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
-
-void otx2_flow_dump(FILE *file, struct otx2_eth_dev *hw,
- struct rte_flow *flow);
-#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
deleted file mode 100644
index 071740de86..0000000000
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ /dev/null
@@ -1,252 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_bp_cfg_req *req;
- struct nix_bp_cfg_rsp *rsp;
- int rc;
-
- if (otx2_dev_is_sdp(dev))
- return 0;
-
- if (enb) {
- req = otx2_mbox_alloc_msg_nix_bp_enable(mbox);
- req->chan_base = 0;
- req->chan_cnt = 1;
- req->bpid_per_chan = 0;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || req->chan_cnt != rsp->chan_cnt) {
- otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d",
- rsp->chan_cnt, req->chan_cnt, rc);
- return rc;
- }
-
- fc->bpid[0] = rsp->chan_bpid[0];
- } else {
- req = otx2_mbox_alloc_msg_nix_bp_disable(mbox);
- req->chan_base = 0;
- req->chan_cnt = 1;
-
- rc = otx2_mbox_process(mbox);
-
- memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN);
- }
-
- return rc;
-}
-
-int
-otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_pause_frm_cfg *req, *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_ETH_FC_NONE;
- return 0;
- }
-
- req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- req->set = 0;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto done;
-
- if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_ETH_FC_FULL;
- else if (rsp->rx_pause)
- fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
- else if (rsp->tx_pause)
- fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
- else
- fc_conf->mode = RTE_ETH_FC_NONE;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- struct otx2_eth_rxq *rxq;
- int i, rc;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
-			/* The shared memory buffer can be full.
-			 * Flush it and retry.
-			 */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq)
- return -ENOMEM;
- }
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- if (enb) {
- aq->cq.bpid = fc->bpid[0];
- aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
- aq->cq.bp = rxq->cq_drop;
- aq->cq_mask.bp = ~(aq->cq_mask.bp);
- }
-
- aq->cq.bp_ena = !!enb;
- aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- return 0;
-}
-
-static int
-otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- return otx2_nix_cq_bp_cfg(eth_dev, enb);
-}
-
-int
-otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_pause_frm_cfg *req;
- uint8_t tx_pause, rx_pause;
- int rc = 0;
-
- if (otx2_dev_is_lbk(dev)) {
- otx2_info("No flow control support for LBK bound ethports");
- return -ENOTSUP;
- }
-
- if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time ||
- fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) {
- otx2_info("Flowctrl parameter is not supported");
- return -EINVAL;
- }
-
- if (fc_conf->mode == fc->mode)
- return 0;
-
- rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
-
-	/* Check whether TX pause frames are already enabled */
- if (fc->tx_pause ^ tx_pause) {
- if (otx2_dev_is_Ax(dev) && eth_dev->data->dev_started) {
-			/* On Ax, the CQ must be in the disabled state
-			 * while changing the flow control configuration.
-			 */
- otx2_info("Stop the port=%d for setting flow control\n",
- eth_dev->data->port_id);
- return 0;
- }
- /* TX pause frames, enable/disable flowctrl on RX side. */
- rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause);
- if (rc)
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- req->set = 1;
- req->rx_pause = rx_pause;
- req->tx_pause = tx_pause;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- fc->tx_pause = tx_pause;
- fc->rx_pause = rx_pause;
- fc->mode = fc_conf->mode;
-
- return rc;
-}
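
The rx_pause/tx_pause pair derived above maps the generic rte_eth_fc_conf mode onto the two CGX pause knobs:

    /* Mapping used above:
     *   RTE_ETH_FC_NONE     -> rx_pause = 0, tx_pause = 0
     *   RTE_ETH_FC_RX_PAUSE -> rx_pause = 1, tx_pause = 0
     *   RTE_ETH_FC_TX_PAUSE -> rx_pause = 0, tx_pause = 1
     *   RTE_ETH_FC_FULL     -> rx_pause = 1, tx_pause = 1
     */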
-
-int
-otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct rte_eth_fc_conf fc_conf;
-
- if (otx2_dev_is_lbk(dev) || otx2_dev_is_sdp(dev))
- return 0;
-
- memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- fc_conf.mode = fc->mode;
-
-	/* To avoid link credit deadlock on Ax, disable Tx FC if it's enabled */
- if (otx2_dev_is_Ax(dev) &&
- (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
- fc_conf.mode =
- (fc_conf.mode == RTE_ETH_FC_FULL ||
- fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
- RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
- }
-
- return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
-}
-
-int
-otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct rte_eth_fc_conf fc_conf;
- int rc;
-
- if (otx2_dev_is_lbk(dev) || otx2_dev_is_sdp(dev))
- return 0;
-
- memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	/* Both Rx & Tx flow control are enabled (RTE_ETH_FC_FULL) in HW
-	 * by the AF driver; update that info in the PMD structure.
-	 */
- rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
- if (rc)
- goto exit;
-
- fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
- (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
- (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
-
-exit:
- return rc;
-}
diff --git a/drivers/net/octeontx2/otx2_flow_dump.c b/drivers/net/octeontx2/otx2_flow_dump.c
deleted file mode 100644
index 3f86071300..0000000000
--- a/drivers/net/octeontx2/otx2_flow_dump.c
+++ /dev/null
@@ -1,595 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_flow.h"
-
-#define NPC_MAX_FIELD_NAME_SIZE 80
-#define NPC_RX_ACTIONOP_MASK GENMASK(3, 0)
-#define NPC_RX_ACTION_PFFUNC_MASK GENMASK(19, 4)
-#define NPC_RX_ACTION_INDEX_MASK GENMASK(39, 20)
-#define NPC_RX_ACTION_MATCH_MASK GENMASK(55, 40)
-#define NPC_RX_ACTION_FLOWKEY_MASK GENMASK(60, 56)
-
-#define NPC_TX_ACTION_INDEX_MASK GENMASK(31, 12)
-#define NPC_TX_ACTION_MATCH_MASK GENMASK(47, 32)
-
-#define NIX_RX_VTAGACT_VTAG0_RELPTR_MASK GENMASK(7, 0)
-#define NIX_RX_VTAGACT_VTAG0_LID_MASK GENMASK(10, 8)
-#define NIX_RX_VTAGACT_VTAG0_TYPE_MASK GENMASK(14, 12)
-#define NIX_RX_VTAGACT_VTAG0_VALID_MASK BIT_ULL(15)
-
-#define NIX_RX_VTAGACT_VTAG1_RELPTR_MASK GENMASK(39, 32)
-#define NIX_RX_VTAGACT_VTAG1_LID_MASK GENMASK(42, 40)
-#define NIX_RX_VTAGACT_VTAG1_TYPE_MASK GENMASK(46, 44)
-#define NIX_RX_VTAGACT_VTAG1_VALID_MASK BIT_ULL(47)
-
-#define NIX_TX_VTAGACT_VTAG0_RELPTR_MASK GENMASK(7, 0)
-#define NIX_TX_VTAGACT_VTAG0_LID_MASK GENMASK(10, 8)
-#define NIX_TX_VTAGACT_VTAG0_OP_MASK GENMASK(13, 12)
-#define NIX_TX_VTAGACT_VTAG0_DEF_MASK GENMASK(25, 16)
-
-#define NIX_TX_VTAGACT_VTAG1_RELPTR_MASK GENMASK(39, 32)
-#define NIX_TX_VTAGACT_VTAG1_LID_MASK GENMASK(42, 40)
-#define NIX_TX_VTAGACT_VTAG1_OP_MASK GENMASK(45, 44)
-#define NIX_TX_VTAGACT_VTAG1_DEF_MASK GENMASK(57, 48)
-
-struct npc_rx_parse_nibble_s {
- uint16_t chan : 3;
- uint16_t errlev : 1;
- uint16_t errcode : 2;
- uint16_t l2l3bm : 1;
- uint16_t laflags : 2;
- uint16_t latype : 1;
- uint16_t lbflags : 2;
- uint16_t lbtype : 1;
- uint16_t lcflags : 2;
- uint16_t lctype : 1;
- uint16_t ldflags : 2;
- uint16_t ldtype : 1;
- uint16_t leflags : 2;
- uint16_t letype : 1;
- uint16_t lfflags : 2;
- uint16_t lftype : 1;
- uint16_t lgflags : 2;
- uint16_t lgtype : 1;
- uint16_t lhflags : 2;
- uint16_t lhtype : 1;
-} __rte_packed;
-
-const char *intf_str[] = {
- "NIX-RX",
- "NIX-TX",
-};
-
-const char *ltype_str[NPC_MAX_LID][NPC_MAX_LT] = {
- [NPC_LID_LA][0] = "NONE",
- [NPC_LID_LA][NPC_LT_LA_ETHER] = "LA_ETHER",
- [NPC_LID_LA][NPC_LT_LA_IH_NIX_ETHER] = "LA_IH_NIX_ETHER",
- [NPC_LID_LA][NPC_LT_LA_HIGIG2_ETHER] = "LA_HIGIG2_ETHER",
- [NPC_LID_LA][NPC_LT_LA_IH_NIX_HIGIG2_ETHER] = "LA_IH_NIX_HIGIG2_ETHER",
- [NPC_LID_LB][0] = "NONE",
- [NPC_LID_LB][NPC_LT_LB_CTAG] = "LB_CTAG",
- [NPC_LID_LB][NPC_LT_LB_STAG_QINQ] = "LB_STAG_QINQ",
- [NPC_LID_LB][NPC_LT_LB_ETAG] = "LB_ETAG",
- [NPC_LID_LB][NPC_LT_LB_EXDSA] = "LB_EXDSA",
- [NPC_LID_LB][NPC_LT_LB_VLAN_EXDSA] = "LB_VLAN_EXDSA",
- [NPC_LID_LC][0] = "NONE",
- [NPC_LID_LC][NPC_LT_LC_IP] = "LC_IP",
- [NPC_LID_LC][NPC_LT_LC_IP6] = "LC_IP6",
- [NPC_LID_LC][NPC_LT_LC_ARP] = "LC_ARP",
- [NPC_LID_LC][NPC_LT_LC_IP6_EXT] = "LC_IP6_EXT",
- [NPC_LID_LC][NPC_LT_LC_NGIO] = "LC_NGIO",
- [NPC_LID_LD][0] = "NONE",
- [NPC_LID_LD][NPC_LT_LD_ICMP] = "LD_ICMP",
- [NPC_LID_LD][NPC_LT_LD_ICMP6] = "LD_ICMP6",
- [NPC_LID_LD][NPC_LT_LD_UDP] = "LD_UDP",
- [NPC_LID_LD][NPC_LT_LD_TCP] = "LD_TCP",
- [NPC_LID_LD][NPC_LT_LD_SCTP] = "LD_SCTP",
- [NPC_LID_LD][NPC_LT_LD_GRE] = "LD_GRE",
- [NPC_LID_LD][NPC_LT_LD_NVGRE] = "LD_NVGRE",
- [NPC_LID_LE][0] = "NONE",
- [NPC_LID_LE][NPC_LT_LE_VXLAN] = "LE_VXLAN",
- [NPC_LID_LE][NPC_LT_LE_ESP] = "LE_ESP",
- [NPC_LID_LE][NPC_LT_LE_GTPC] = "LE_GTPC",
- [NPC_LID_LE][NPC_LT_LE_GTPU] = "LE_GTPU",
- [NPC_LID_LE][NPC_LT_LE_GENEVE] = "LE_GENEVE",
- [NPC_LID_LE][NPC_LT_LE_VXLANGPE] = "LE_VXLANGPE",
- [NPC_LID_LF][0] = "NONE",
- [NPC_LID_LF][NPC_LT_LF_TU_ETHER] = "LF_TU_ETHER",
- [NPC_LID_LG][0] = "NONE",
- [NPC_LID_LG][NPC_LT_LG_TU_IP] = "LG_TU_IP",
- [NPC_LID_LG][NPC_LT_LG_TU_IP6] = "LG_TU_IP6",
- [NPC_LID_LH][0] = "NONE",
- [NPC_LID_LH][NPC_LT_LH_TU_UDP] = "LH_TU_UDP",
- [NPC_LID_LH][NPC_LT_LH_TU_TCP] = "LH_TU_TCP",
- [NPC_LID_LH][NPC_LT_LH_TU_SCTP] = "LH_TU_SCTP",
- [NPC_LID_LH][NPC_LT_LH_TU_ESP] = "LH_TU_ESP",
-};
-
-static uint16_t
-otx2_get_nibbles(struct rte_flow *flow, uint16_t size, uint32_t bit_offset)
-{
- uint32_t byte_index, noffset;
- uint16_t data, mask;
- uint8_t *bytes;
-
- bytes = (uint8_t *)flow->mcam_data;
- mask = (1ULL << (size * 4)) - 1;
- byte_index = bit_offset / 8;
- noffset = bit_offset % 8;
- data = *(uint16_t *)&bytes[byte_index];
- data >>= noffset;
- data &= mask;
-
- return data;
-}
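
A worked example of the extraction above, assuming a little-endian host (as on the OCTEON TX2's arm64 cores) and the nibble-aligned offsets the dump code passes in:

    /* mcam_data starts { 0xab, 0xcd, ... }, size = 2, bit_offset = 4:
     *   byte_index = 4 / 8 = 0, noffset = 4, mask = 0xff
     *   data = *(uint16_t *)&bytes[0] = 0xcdab   (little-endian load)
     *   data >> 4 = 0x0cda; & 0xff -> 0xda
     */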
-
-static void
-otx2_flow_print_parse_nibbles(FILE *file, struct rte_flow *flow,
- uint64_t parse_nibbles)
-{
- struct npc_rx_parse_nibble_s *rx_parse;
- uint32_t data, offset = 0;
-
- rx_parse = (struct npc_rx_parse_nibble_s *)&parse_nibbles;
-
- if (rx_parse->chan) {
- data = otx2_get_nibbles(flow, 3, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_CHAN:%#03X\n", data);
- offset += 12;
- }
-
- if (rx_parse->errlev) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_ERRLEV:%#X\n", data);
- offset += 4;
- }
-
- if (rx_parse->errcode) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_ERRCODE:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->l2l3bm) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_L2L3_BCAST:%#X\n", data);
- offset += 4;
- }
-
- if (rx_parse->latype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LA_LTYPE:%s\n",
- ltype_str[NPC_LID_LA][data]);
- offset += 4;
- }
-
- if (rx_parse->laflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LA_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lbtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LB_LTYPE:%s\n",
- ltype_str[NPC_LID_LB][data]);
- offset += 4;
- }
-
- if (rx_parse->lbflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LB_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lctype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LC_LTYPE:%s\n",
- ltype_str[NPC_LID_LC][data]);
- offset += 4;
- }
-
- if (rx_parse->lcflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LC_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->ldtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LD_LTYPE:%s\n",
- ltype_str[NPC_LID_LD][data]);
- offset += 4;
- }
-
- if (rx_parse->ldflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LD_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->letype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LE_LTYPE:%s\n",
- ltype_str[NPC_LID_LE][data]);
- offset += 4;
- }
-
- if (rx_parse->leflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LE_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lftype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LF_LTYPE:%s\n",
- ltype_str[NPC_LID_LF][data]);
- offset += 4;
- }
-
- if (rx_parse->lfflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LF_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lgtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LG_LTYPE:%s\n",
- ltype_str[NPC_LID_LG][data]);
- offset += 4;
- }
-
- if (rx_parse->lgflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LG_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lhtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LH_LTYPE:%s\n",
- ltype_str[NPC_LID_LH][data]);
- offset += 4;
- }
-
- if (rx_parse->lhflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LH_FLAGS:%#02X\n", data);
- }
-}
-
-static void
-otx2_flow_print_xtractinfo(FILE *file, struct npc_xtract_info *lfinfo,
- struct rte_flow *flow, int lid, int lt)
-{
- uint8_t *datastart, *maskstart;
- int i;
-
- datastart = (uint8_t *)&flow->mcam_data + lfinfo->key_off;
- maskstart = (uint8_t *)&flow->mcam_mask + lfinfo->key_off;
-
- fprintf(file, "\t%s, hdr offset:%#X, len:%#X, key offset:%#X, ",
- ltype_str[lid][lt], lfinfo->hdr_off,
- lfinfo->len, lfinfo->key_off);
-
- fprintf(file, "Data:0X");
- for (i = lfinfo->len - 1; i >= 0; i--)
- fprintf(file, "%02X", datastart[i]);
-
- fprintf(file, ", ");
-
- fprintf(file, "Mask:0X");
-
- for (i = lfinfo->len - 1; i >= 0; i--)
- fprintf(file, "%02X", maskstart[i]);
-
- fprintf(file, "\n");
-}
-
-static void
-otx2_flow_print_item(FILE *file, struct otx2_eth_dev *hw,
- struct npc_xtract_info *xinfo, struct rte_flow *flow,
- int intf, int lid, int lt, int ld)
-{
- struct otx2_npc_flow_info *npc_flow = &hw->npc_flow;
- struct npc_xtract_info *lflags_info;
- int i, lf_cfg;
-
- otx2_flow_print_xtractinfo(file, xinfo, flow, lid, lt);
-
- if (xinfo->flags_enable) {
- lf_cfg = npc_flow->prx_lfcfg[ld].i;
-
- if (lf_cfg == lid) {
- for (i = 0; i < NPC_MAX_LFL; i++) {
- lflags_info = npc_flow->prx_fxcfg[intf]
- [ld][i].xtract;
-
- otx2_flow_print_xtractinfo(file, lflags_info,
- flow, lid, lt);
- }
- }
- }
-}
-
-static void
-otx2_flow_dump_patterns(FILE *file, struct otx2_eth_dev *hw,
- struct rte_flow *flow)
-{
- struct otx2_npc_flow_info *npc_flow = &hw->npc_flow;
- struct npc_lid_lt_xtract_info *lt_xinfo;
- struct npc_xtract_info *xinfo;
- uint32_t intf, lid, ld, i;
- uint64_t parse_nibbles;
- uint16_t ltype;
-
- intf = flow->nix_intf;
- parse_nibbles = npc_flow->keyx_supp_nmask[intf];
- otx2_flow_print_parse_nibbles(file, flow, parse_nibbles);
-
- for (i = 0; i < flow->num_patterns; i++) {
- lid = flow->dump_data[i].lid;
- ltype = flow->dump_data[i].ltype;
- lt_xinfo = &npc_flow->prx_dxcfg[intf][lid][ltype];
-
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- xinfo = <_xinfo->xtract[ld];
- if (!xinfo->enable)
- continue;
- otx2_flow_print_item(file, hw, xinfo, flow, intf, lid,
- ltype, ld);
- }
- }
-}
-
-static void
-otx2_flow_dump_tx_action(FILE *file, uint64_t npc_action)
-{
-	char index_name[NPC_MAX_FIELD_NAME_SIZE] = "Index";
- uint32_t tx_op, index, match_id;
-
- tx_op = npc_action & NPC_RX_ACTIONOP_MASK;
-
- fprintf(file, "\tActionOp:");
-
- switch (tx_op) {
- case NIX_TX_ACTIONOP_DROP:
-		fprintf(file, "NIX_TX_ACTIONOP_DROP (%lu)\n",
-			(uint64_t)NIX_TX_ACTIONOP_DROP);
- break;
- case NIX_TX_ACTIONOP_UCAST_DEFAULT:
- fprintf(file, "NIX_TX_ACTIONOP_UCAST_DEFAULT (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_UCAST_DEFAULT);
- break;
- case NIX_TX_ACTIONOP_UCAST_CHAN:
-		fprintf(file, "NIX_TX_ACTIONOP_UCAST_CHAN (%lu)\n",
-			(uint64_t)NIX_TX_ACTIONOP_UCAST_CHAN);
-		strncpy(index_name, "Transmit Channel",
-			NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_TX_ACTIONOP_MCAST:
- fprintf(file, "NIX_TX_ACTIONOP_MCAST (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_MCAST);
-		strncpy(index_name, "Multicast Table Index",
-			NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_TX_ACTIONOP_DROP_VIOL:
- fprintf(file, "NIX_TX_ACTIONOP_DROP_VIOL (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_DROP_VIOL);
- break;
- }
-
- index = ((npc_action & NPC_TX_ACTION_INDEX_MASK) >> 12) & 0xFFFFF;
-
- fprintf(file, "\t%s:%#05X\n", index_name, index);
-
- match_id = ((npc_action & NPC_TX_ACTION_MATCH_MASK) >> 32) & 0xFFFF;
-
- fprintf(file, "\tMatch Id:%#04X\n", match_id);
-}
-
-static void
-otx2_flow_dump_rx_action(FILE *file, uint64_t npc_action)
-{
- uint32_t rx_op, pf_func, index, match_id, flowkey_alg;
-	char index_name[NPC_MAX_FIELD_NAME_SIZE] = "Index";
-
- rx_op = npc_action & NPC_RX_ACTIONOP_MASK;
-
- fprintf(file, "\tActionOp:");
-
- switch (rx_op) {
- case NIX_RX_ACTIONOP_DROP:
- fprintf(file, "NIX_RX_ACTIONOP_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_DROP);
- break;
- case NIX_RX_ACTIONOP_UCAST:
- fprintf(file, "NIX_RX_ACTIONOP_UCAST (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_UCAST);
- strncpy(index_name, "RQ Index", NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_UCAST_IPSEC:
- fprintf(file, "NIX_RX_ACTIONOP_UCAST_IPSEC (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_UCAST_IPSEC);
-		strncpy(index_name, "RQ Index", NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_MCAST:
- fprintf(file, "NIX_RX_ACTIONOP_MCAST (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_MCAST);
- strncpy(index_name, "Multicast/mirror table index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_RSS:
- fprintf(file, "NIX_RX_ACTIONOP_RSS (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_RSS);
- strncpy(index_name, "RSS Group Index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_PF_FUNC_DROP:
- fprintf(file, "NIX_RX_ACTIONOP_PF_FUNC_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_PF_FUNC_DROP);
- break;
- case NIX_RX_ACTIONOP_MIRROR:
- fprintf(file, "NIX_RX_ACTIONOP_MIRROR (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_MIRROR);
- strncpy(index_name, "Multicast/mirror table index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- }
-
- pf_func = ((npc_action & NPC_RX_ACTION_PFFUNC_MASK) >> 4) & 0xFFFF;
-
- fprintf(file, "\tPF_FUNC: %#04X\n", pf_func);
-
- index = ((npc_action & NPC_RX_ACTION_INDEX_MASK) >> 20) & 0xFFFFF;
-
- fprintf(file, "\t%s:%#05X\n", index_name, index);
-
- match_id = ((npc_action & NPC_RX_ACTION_MATCH_MASK) >> 40) & 0xFFFF;
-
- fprintf(file, "\tMatch Id:%#04X\n", match_id);
-
- flowkey_alg = ((npc_action & NPC_RX_ACTION_FLOWKEY_MASK) >> 56) & 0x1F;
-
- fprintf(file, "\tFlow Key Alg:%#X\n", flowkey_alg);
-}
-
-static void
-otx2_flow_dump_parsed_action(FILE *file, uint64_t npc_action, bool is_rx)
-{
- if (is_rx) {
- fprintf(file, "NPC RX Action:%#016lX\n", npc_action);
- otx2_flow_dump_rx_action(file, npc_action);
- } else {
- fprintf(file, "NPC TX Action:%#016lX\n", npc_action);
- otx2_flow_dump_tx_action(file, npc_action);
- }
-}
-
-static void
-otx2_flow_dump_rx_vtag_action(FILE *file, uint64_t vtag_action)
-{
- uint32_t type, lid, relptr;
-
- if (vtag_action & NIX_RX_VTAGACT_VTAG0_VALID_MASK) {
- relptr = vtag_action & NIX_RX_VTAGACT_VTAG0_RELPTR_MASK;
- lid = ((vtag_action & NIX_RX_VTAGACT_VTAG0_LID_MASK) >> 8)
- & 0x7;
- type = ((vtag_action & NIX_RX_VTAGACT_VTAG0_TYPE_MASK) >> 12)
- & 0x7;
-
- fprintf(file, "\tVTAG0:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\ttype:%#X\n", type);
- }
-
- if (vtag_action & NIX_RX_VTAGACT_VTAG1_VALID_MASK) {
- relptr = ((vtag_action & NIX_RX_VTAGACT_VTAG1_RELPTR_MASK)
- >> 32) & 0xFF;
- lid = ((vtag_action & NIX_RX_VTAGACT_VTAG1_LID_MASK) >> 40)
- & 0x7;
- type = ((vtag_action & NIX_RX_VTAGACT_VTAG1_TYPE_MASK) >> 44)
- & 0x7;
-
- fprintf(file, "\tVTAG1:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\ttype:%#X\n", type);
- }
-}
-
-static void
-otx2_get_vtag_opname(uint32_t op, char *opname, int len)
-{
- switch (op) {
- case 0x0:
- strncpy(opname, "NOP", len - 1);
- break;
- case 0x1:
- strncpy(opname, "INSERT", len - 1);
- break;
- case 0x2:
- strncpy(opname, "REPLACE", len - 1);
- break;
- }
-}
-
-static void
-otx2_flow_dump_tx_vtag_action(FILE *file, uint64_t vtag_action)
-{
- uint32_t relptr, lid, op, vtag_def;
- char opname[10];
-
- relptr = vtag_action & NIX_TX_VTAGACT_VTAG0_RELPTR_MASK;
- lid = ((vtag_action & NIX_TX_VTAGACT_VTAG0_LID_MASK) >> 8) & 0x7;
- op = ((vtag_action & NIX_TX_VTAGACT_VTAG0_OP_MASK) >> 12) & 0x3;
- vtag_def = ((vtag_action & NIX_TX_VTAGACT_VTAG0_DEF_MASK) >> 16)
- & 0x3FF;
-
- otx2_get_vtag_opname(op, opname, sizeof(opname));
-
- fprintf(file, "\tVTAG0 relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\top:%s\n", opname);
- fprintf(file, "\tvtag_def:%#X\n", vtag_def);
-
- relptr = ((vtag_action & NIX_TX_VTAGACT_VTAG1_RELPTR_MASK) >> 32)
- & 0xFF;
- lid = ((vtag_action & NIX_TX_VTAGACT_VTAG1_LID_MASK) >> 40) & 0x7;
- op = ((vtag_action & NIX_TX_VTAGACT_VTAG1_OP_MASK) >> 44) & 0x3;
- vtag_def = ((vtag_action & NIX_TX_VTAGACT_VTAG1_DEF_MASK) >> 48)
- & 0x3FF;
-
- otx2_get_vtag_opname(op, opname, sizeof(opname));
-
- fprintf(file, "\tVTAG1:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\top:%s\n", opname);
- fprintf(file, "\tvtag_def:%#X\n", vtag_def);
-}
-
-static void
-otx2_flow_dump_vtag_action(FILE *file, uint64_t vtag_action, bool is_rx)
-{
- if (is_rx) {
- fprintf(file, "NPC RX VTAG Action:%#016lX\n", vtag_action);
- otx2_flow_dump_rx_vtag_action(file, vtag_action);
- } else {
- fprintf(file, "NPC TX VTAG Action:%#016lX\n", vtag_action);
- otx2_flow_dump_tx_vtag_action(file, vtag_action);
- }
-}
-
-void
-otx2_flow_dump(FILE *file, struct otx2_eth_dev *hw, struct rte_flow *flow)
-{
- bool is_rx = 0;
- int i;
-
- fprintf(file, "MCAM Index:%d\n", flow->mcam_id);
- fprintf(file, "Interface :%s (%d)\n", intf_str[flow->nix_intf],
- flow->nix_intf);
- fprintf(file, "Priority :%d\n", flow->priority);
-
- if (flow->nix_intf == NIX_INTF_RX)
- is_rx = 1;
-
- otx2_flow_dump_parsed_action(file, flow->npc_action, is_rx);
- otx2_flow_dump_vtag_action(file, flow->vtag_action, is_rx);
- fprintf(file, "Patterns:\n");
- otx2_flow_dump_patterns(file, hw, flow);
-
- fprintf(file, "MCAM Raw Data :\n");
-
- for (i = 0; i < OTX2_MAX_MCAM_WIDTH_DWORDS; i++) {
- fprintf(file, "\tDW%d :%016lX\n", i, flow->mcam_data[i]);
- fprintf(file, "\tDW%d_Mask:%016lX\n", i, flow->mcam_mask[i]);
- }
-
- fprintf(file, "\n");
-}
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
deleted file mode 100644
index 91267bbb81..0000000000
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ /dev/null
@@ -1,1239 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-const struct rte_flow_item *
-otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern)
-{
- while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) ||
- (pattern->type == RTE_FLOW_ITEM_TYPE_ANY))
- pattern++;
-
- return pattern;
-}
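
For example, given the pattern sequence ETH, VOID, ANY, IPV4, END, calling this helper on the item after ETH returns a pointer to the IPV4 item; VOID and ANY carry no match data, so every parser stage skips them before inspecting the next real item:

    const struct rte_flow_item *item = &pattern[1];    /* the VOID item */
    item = otx2_flow_skip_void_and_any_items(item);    /* -> &pattern[3] */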
-
-/*
- * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP,
- * Tunnel+SCTP
- */
-int
-otx2_flow_parse_lh(struct otx2_parse_state *pst)
-{
- struct otx2_flow_item_info info;
- char hw_mask[64];
- int lid, lt;
- int rc;
-
- if (!pst->tunnel)
- return 0;
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LH;
-
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_UDP:
- lt = NPC_LT_LH_TU_UDP;
- info.def_mask = &rte_flow_item_udp_mask;
- info.len = sizeof(struct rte_flow_item_udp);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- lt = NPC_LT_LH_TU_TCP;
- info.def_mask = &rte_flow_item_tcp_mask;
- info.len = sizeof(struct rte_flow_item_tcp);
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- lt = NPC_LT_LH_TU_SCTP;
- info.def_mask = &rte_flow_item_sctp_mask;
- info.len = sizeof(struct rte_flow_item_sctp);
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- lt = NPC_LT_LH_TU_ESP;
- info.def_mask = &rte_flow_item_esp_mask;
- info.len = sizeof(struct rte_flow_item_esp);
- break;
- default:
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* Tunnel+IPv4, Tunnel+IPv6 */
-int
-otx2_flow_parse_lg(struct otx2_parse_state *pst)
-{
- struct otx2_flow_item_info info;
- char hw_mask[64];
- int lid, lt;
- int rc;
-
- if (!pst->tunnel)
- return 0;
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LG;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
- lt = NPC_LT_LG_TU_IP;
- info.def_mask = &rte_flow_item_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_ipv4);
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) {
- lt = NPC_LT_LG_TU_IP6;
- info.def_mask = &rte_flow_item_ipv6_mask;
- info.len = sizeof(struct rte_flow_item_ipv6);
- } else {
- /* There is no tunneled IP header */
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* Tunnel+Ether */
-int
-otx2_flow_parse_lf(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern, *last_pattern;
- struct rte_flow_item_eth hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- int nr_vlans = 0;
- int rc;
-
- /* We hit this layer if there is a tunneling protocol */
- if (!pst->tunnel)
- return 0;
-
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
- return 0;
-
- lid = NPC_LID_LF;
- lt = NPC_LT_LF_TU_ETHER;
- lflags = 0;
-
- info.def_mask = &rte_flow_item_vlan_mask;
- /* No match support for vlan tags */
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- /* Look ahead and find out any VLAN tags. These can be
- * detected but no data matching is available.
- */
- last_pattern = pst->pattern;
- pattern = pst->pattern + 1;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- nr_vlans++;
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc != 0)
- return rc;
- last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
- otx2_npc_dbg("Nr_vlans = %d", nr_vlans);
- switch (nr_vlans) {
- case 0:
- break;
- case 1:
- lflags = NPC_F_TU_ETHER_CTAG;
- break;
- case 2:
- lflags = NPC_F_TU_ETHER_STAG_CTAG;
- break;
- default:
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- last_pattern,
- "more than 2 vlans with tunneled Ethernet "
- "not supported");
- return -rte_errno;
- }
-
- info.def_mask = &rte_flow_item_eth_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_eth);
- info.hw_hdr_len = 0;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- pst->pattern = last_pattern;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-int
-otx2_flow_parse_le(struct otx2_parse_state *pst)
-{
-	/*
-	 * We are positioned at UDP. Scan ahead and look for
-	 * UDP-encapsulated tunnel protocols. If one is found,
-	 * parse it, keeping in mind the mismatch to handle:
-	 * - the RTE spec assumes we point to the tunnel header;
-	 * - the NPC parser provides an offset from the UDP header.
-	 */
-
-	/*
-	 * Note: GENEVE and VXLAN_GPE are handled in the
-	 * switch below.
-	 *
-	 * Note: Better to split flags into two nibbles:
-	 * - higher nibble can carry flags;
-	 * - lower nibble can further enumerate protocols,
-	 *   with flag-based extraction.
-	 */
- const struct rte_flow_item *pattern = pst->pattern;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- char hw_mask[64];
- int rc;
-
- if (pst->tunnel)
- return 0;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LE);
-
- info.spec = NULL;
- info.mask = NULL;
- info.hw_mask = NULL;
- info.def_mask = NULL;
- info.len = 0;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LE;
- lflags = 0;
-
- /* Ensure we are not matching anything in UDP */
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc)
- return rc;
-
- info.hw_mask = &hw_mask;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- otx2_npc_dbg("Pattern->type = %d", pattern->type);
- switch (pattern->type) {
- case RTE_FLOW_ITEM_TYPE_VXLAN:
- lflags = NPC_F_UDP_VXLAN;
- info.def_mask = &rte_flow_item_vxlan_mask;
- info.len = sizeof(struct rte_flow_item_vxlan);
- lt = NPC_LT_LE_VXLAN;
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- lt = NPC_LT_LE_ESP;
- info.def_mask = &rte_flow_item_esp_mask;
- info.len = sizeof(struct rte_flow_item_esp);
- break;
- case RTE_FLOW_ITEM_TYPE_GTPC:
- lflags = NPC_F_UDP_GTP_GTPC;
- info.def_mask = &rte_flow_item_gtp_mask;
- info.len = sizeof(struct rte_flow_item_gtp);
- lt = NPC_LT_LE_GTPC;
- break;
- case RTE_FLOW_ITEM_TYPE_GTPU:
- lflags = NPC_F_UDP_GTP_GTPU_G_PDU;
- info.def_mask = &rte_flow_item_gtp_mask;
- info.len = sizeof(struct rte_flow_item_gtp);
- lt = NPC_LT_LE_GTPU;
- break;
- case RTE_FLOW_ITEM_TYPE_GENEVE:
- lflags = NPC_F_UDP_GENEVE;
- info.def_mask = &rte_flow_item_geneve_mask;
- info.len = sizeof(struct rte_flow_item_geneve);
- lt = NPC_LT_LE_GENEVE;
- break;
- case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
- lflags = NPC_F_UDP_VXLANGPE;
- info.def_mask = &rte_flow_item_vxlan_gpe_mask;
- info.len = sizeof(struct rte_flow_item_vxlan_gpe);
- lt = NPC_LT_LE_VXLANGPE;
- break;
- default:
- return 0;
- }
-
- pst->tunnel = 1;
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
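
For context, a pattern that takes the VXLAN branch above could look as follows. This is a hedged sketch using only public rte_flow definitions; the VNI value is arbitrary. parse_le consumes the UDP item with no match data, looks ahead, finds VXLAN, and marks everything after it as tunneled:

    #include <rte_flow.h>

    /* Outer VXLAN match: ETH / IPV4 / UDP / VXLAN / END */
    static const struct rte_flow_item_vxlan vxlan_spec = {
            .vni = { 0x00, 0x00, 0x2a },    /* arbitrary VNI 42 */
    };

    static const struct rte_flow_item vxlan_pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_UDP },     /* no spec: match any */
            { .type = RTE_FLOW_ITEM_TYPE_VXLAN,
              .spec = &vxlan_spec,
              .mask = &rte_flow_item_vxlan_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
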
-
-static int
-flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag)
-{
- int nr_labels = 0;
- const struct rte_flow_item *pattern = pst->pattern;
- struct otx2_flow_item_info info;
- int rc;
- uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS,
- NPC_F_MPLS_3_LABELS, NPC_F_MPLS_4_LABELS};
-
- /*
- * pst->pattern points to first MPLS label. We only check
- * that subsequent labels do not have anything to match.
- */
- info.def_mask = &rte_flow_item_mpls_mask;
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_mpls);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) {
- nr_labels++;
-
- /* Basic validation of 2nd/3rd/4th mpls item */
- if (nr_labels > 1) {
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
- }
- pst->last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
-
- if (nr_labels > 4) {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->last_pattern,
- "more than 4 mpls labels not supported");
- return -rte_errno;
- }
-
- *flag = flag_list[nr_labels - 1];
- return 0;
-}
-
-int
-otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid)
-{
- /* Find number of MPLS labels */
- struct rte_flow_item_mpls hw_mask;
- struct otx2_flow_item_info info;
- int lt, lflags;
- int rc;
-
- lflags = 0;
-
- if (lid == NPC_LID_LC)
- lt = NPC_LT_LC_MPLS;
- else if (lid == NPC_LID_LD)
- lt = NPC_LT_LD_TU_MPLS_IN_IP;
- else
- lt = NPC_LT_LE_TU_MPLS_IN_UDP;
-
- /* Prepare for parsing the first item */
- info.def_mask = &rte_flow_item_mpls_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_mpls);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- /*
- * Parse for more labels.
- * This sets lflags and pst->last_pattern correctly.
- */
- rc = flow_parse_mpls_label_stack(pst, &lflags);
- if (rc != 0)
- return rc;
-
- pst->tunnel = 1;
- pst->pattern = pst->last_pattern;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-/*
- * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE,
- * GTP, GTPC, GTPU, ESP
- *
- * Note: UDP tunnel protocols are identified by flags.
- *       LPTR for these protocols still points to the UDP
- *       header; flag-based extraction is needed to support
- *       them.
- */
-int
-otx2_flow_parse_ld(struct otx2_parse_state *pst)
-{
- char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- uint32_t gre_key_mask = 0xffffffff;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- int rc;
-
- if (pst->tunnel) {
- /* We have already parsed MPLS or IPv4/v6 followed
- * by MPLS or IPv4/v6. Subsequent TCP/UDP etc
- * would be parsed as tunneled versions. Skip
- * this layer, except for tunneled MPLS. If LC is
- * MPLS, we have anyway skipped all stacked MPLS
- * labels.
- */
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LD);
- return 0;
- }
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.def_mask = NULL;
- info.len = 0;
- info.hw_hdr_len = 0;
-
- lid = NPC_LID_LD;
- lflags = 0;
-
- otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type);
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_ICMP:
- if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6)
- lt = NPC_LT_LD_ICMP6;
- else
- lt = NPC_LT_LD_ICMP;
- info.def_mask = &rte_flow_item_icmp_mask;
- info.len = sizeof(struct rte_flow_item_icmp);
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- lt = NPC_LT_LD_UDP;
- info.def_mask = &rte_flow_item_udp_mask;
- info.len = sizeof(struct rte_flow_item_udp);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- lt = NPC_LT_LD_TCP;
- info.def_mask = &rte_flow_item_tcp_mask;
- info.len = sizeof(struct rte_flow_item_tcp);
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- lt = NPC_LT_LD_SCTP;
- info.def_mask = &rte_flow_item_sctp_mask;
- info.len = sizeof(struct rte_flow_item_sctp);
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- lt = NPC_LT_LD_GRE;
- info.def_mask = &rte_flow_item_gre_mask;
- info.len = sizeof(struct rte_flow_item_gre);
- break;
- case RTE_FLOW_ITEM_TYPE_GRE_KEY:
- lt = NPC_LT_LD_GRE;
- info.def_mask = &gre_key_mask;
- info.len = sizeof(gre_key_mask);
- info.hw_hdr_len = 4;
- break;
- case RTE_FLOW_ITEM_TYPE_NVGRE:
- lt = NPC_LT_LD_NVGRE;
- lflags = NPC_F_GRE_NVGRE;
- info.def_mask = &rte_flow_item_nvgre_mask;
- info.len = sizeof(struct rte_flow_item_nvgre);
- /* Further IP/Ethernet are parsed as tunneled */
- pst->tunnel = 1;
- break;
- default:
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-static inline void
-flow_check_lc_ip_tunnel(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern = pst->pattern + 1;
-
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS ||
- pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
- pattern->type == RTE_FLOW_ITEM_TYPE_IPV6)
- pst->tunnel = 1;
-}
-
-static int
-otx2_flow_raw_item_prepare(const struct rte_flow_item_raw *raw_spec,
- const struct rte_flow_item_raw *raw_mask,
- struct otx2_flow_item_info *info,
- uint8_t *spec_buf, uint8_t *mask_buf)
-{
- uint32_t custom_hdr_size = 0;
-
- memset(spec_buf, 0, NPC_MAX_RAW_ITEM_LEN);
- memset(mask_buf, 0, NPC_MAX_RAW_ITEM_LEN);
- custom_hdr_size = raw_spec->offset + raw_spec->length;
-
- memcpy(spec_buf + raw_spec->offset, raw_spec->pattern,
- raw_spec->length);
-
- if (raw_mask->pattern) {
- memcpy(mask_buf + raw_spec->offset, raw_mask->pattern,
- raw_spec->length);
- } else {
- memset(mask_buf + raw_spec->offset, 0xFF, raw_spec->length);
- }
-
- info->len = custom_hdr_size;
- info->spec = spec_buf;
- info->mask = mask_buf;
-
- return 0;
-}
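
The resulting buffer layout is easiest to see with concrete numbers. A minimal stand-alone sketch of the same preparation step, with a hypothetical 4-byte pattern at offset 2 and no user mask (so all pattern bytes are matched exactly, as above):

    #include <stdint.h>
    #include <string.h>

    #define MAX_RAW_LEN 16  /* stand-in for NPC_MAX_RAW_ITEM_LEN */

    static void
    prepare_raw(uint8_t *spec_buf, uint8_t *mask_buf)
    {
            const uint8_t pattern[4] = { 0xde, 0xad, 0xbe, 0xef };
            const uint32_t offset = 2, length = 4;

            memset(spec_buf, 0, MAX_RAW_LEN);
            memset(mask_buf, 0, MAX_RAW_LEN);

            memcpy(spec_buf + offset, pattern, length);
            memset(mask_buf + offset, 0xFF, length);    /* no user mask */
            /* spec: 00 00 de ad be ef 00 .. mask: 00 00 ff ff ff ff 00 .. */
    }
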
-
-/* Outer IPv4, Outer IPv6, MPLS, ARP */
-int
-otx2_flow_parse_lc(struct otx2_parse_state *pst)
-{
- uint8_t raw_spec_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t raw_mask_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- const struct rte_flow_item_raw *raw_spec;
- struct otx2_flow_item_info info;
- int lid, lt, len;
- int rc;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LC);
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LC;
-
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_IPV4:
- lt = NPC_LT_LC_IP;
- info.def_mask = &rte_flow_item_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_ipv4);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_IP6;
- info.def_mask = &rte_flow_item_ipv6_mask;
- info.len = sizeof(struct rte_flow_item_ipv6);
- break;
- case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4:
- lt = NPC_LT_LC_ARP;
- info.def_mask = &rte_flow_item_arp_eth_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_arp_eth_ipv4);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6_EXT:
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_IP6_EXT;
- info.def_mask = &rte_flow_item_ipv6_ext_mask;
- info.len = sizeof(struct rte_flow_item_ipv6_ext);
- info.hw_hdr_len = 40;
- break;
- case RTE_FLOW_ITEM_TYPE_RAW:
- raw_spec = pst->pattern->spec;
- if (!raw_spec->relative)
- return 0;
-
- len = raw_spec->length + raw_spec->offset;
- if (len > NPC_MAX_RAW_ITEM_LEN) {
- rte_flow_error_set(pst->error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Spec length too big");
- return -rte_errno;
- }
-
- otx2_flow_raw_item_prepare((const struct rte_flow_item_raw *)
- pst->pattern->spec,
- (const struct rte_flow_item_raw *)
- pst->pattern->mask, &info,
- raw_spec_buf, raw_mask_buf);
-
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_NGIO;
- info.hw_mask = &hw_mask;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- break;
- default:
- /* No match at this layer */
- return 0;
- }
-
-	/* Check whether this IP layer tunnels MPLS or IPv4/v6 */
- flow_check_lc_ip_tunnel(pst);
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* VLAN, ETAG */
-int
-otx2_flow_parse_lb(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern = pst->pattern;
- uint8_t raw_spec_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t raw_mask_buf[NPC_MAX_RAW_ITEM_LEN];
- const struct rte_flow_item *last_pattern;
- const struct rte_flow_item_raw *raw_spec;
- char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- struct otx2_flow_item_info info;
- int lid, lt, lflags, len;
- int nr_vlans = 0;
- int rc;
-
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = NPC_TPID_LENGTH;
-
- lid = NPC_LID_LB;
- lflags = 0;
- last_pattern = pattern;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
-		/* An RTE VLAN item is either 802.1Q or 802.1AD, which
-		 * maps to CTAG or STAG respectively. We decide based on
-		 * the number of VLANs present. Matching is supported on
-		 * the first tag only.
-		 */
- info.def_mask = &rte_flow_item_vlan_mask;
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
-
- pattern = pst->pattern;
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- nr_vlans++;
-
- /* Basic validation of 2nd/3rd vlan item */
- if (nr_vlans > 1) {
- otx2_npc_dbg("Vlans = %d", nr_vlans);
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
- }
- last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
-
- switch (nr_vlans) {
- case 1:
- lt = NPC_LT_LB_CTAG;
- break;
- case 2:
- lt = NPC_LT_LB_STAG_QINQ;
- lflags = NPC_F_STAG_CTAG;
- break;
- case 3:
- lt = NPC_LT_LB_STAG_QINQ;
- lflags = NPC_F_STAG_STAG_CTAG;
- break;
- default:
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- last_pattern,
- "more than 3 vlans not supported");
- return -rte_errno;
- }
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) {
-		/* We can recognize an ETAG and a subsequent CTAG,
-		 * but no data matching is available on the CTAG.
-		 */
- lt = NPC_LT_LB_ETAG;
- lflags = 0;
-
- last_pattern = pst->pattern;
- pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1);
- if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- info.def_mask = &rte_flow_item_vlan_mask;
- /* set supported mask to NULL for vlan tag */
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
-
- lflags = NPC_F_ETAG_CTAG;
- last_pattern = pattern;
- }
-
- info.def_mask = &rte_flow_item_e_tag_mask;
- info.len = sizeof(struct rte_flow_item_e_tag);
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_RAW) {
- raw_spec = pst->pattern->spec;
- if (raw_spec->relative)
- return 0;
- len = raw_spec->length + raw_spec->offset;
- if (len > NPC_MAX_RAW_ITEM_LEN) {
- rte_flow_error_set(pst->error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Spec length too big");
- return -rte_errno;
- }
-
- if (pst->npc->switch_header_type ==
- OTX2_PRIV_FLAGS_VLAN_EXDSA) {
- lt = NPC_LT_LB_VLAN_EXDSA;
- } else if (pst->npc->switch_header_type ==
- OTX2_PRIV_FLAGS_EXDSA) {
- lt = NPC_LT_LB_EXDSA;
- } else {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "exdsa or vlan_exdsa not enabled on"
- " port");
- return -rte_errno;
- }
-
- otx2_flow_raw_item_prepare((const struct rte_flow_item_raw *)
- pst->pattern->spec,
- (const struct rte_flow_item_raw *)
- pst->pattern->mask, &info,
- raw_spec_buf, raw_mask_buf);
-
- info.hw_hdr_len = 0;
- } else {
- return 0;
- }
-
- info.hw_mask = &hw_mask;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
-
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- /* Point pattern to last item consumed */
- pst->pattern = last_pattern;
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-
-int
-otx2_flow_parse_la(struct otx2_parse_state *pst)
-{
- struct rte_flow_item_eth hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt;
- int rc;
-
- /* Identify the pattern type into lid, lt */
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
- return 0;
-
- lid = NPC_LID_LA;
- lt = NPC_LT_LA_ETHER;
- info.hw_hdr_len = 0;
-
- if (pst->flow->nix_intf == NIX_INTF_TX) {
- lt = NPC_LT_LA_IH_NIX_ETHER;
- info.hw_hdr_len = NPC_IH_LENGTH;
- if (pst->npc->switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
- info.hw_hdr_len += NPC_HIGIG2_LENGTH;
- }
- } else {
- if (pst->npc->switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- lt = NPC_LT_LA_HIGIG2_ETHER;
- info.hw_hdr_len = NPC_HIGIG2_LENGTH;
- }
- }
-
- /* Prepare for parsing the item */
- info.def_mask = &rte_flow_item_eth_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_eth);
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- /* Basic validation of item parameters */
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc)
- return rc;
-
- /* Update pst if not validate only? clash check? */
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-int
-otx2_flow_parse_higig2_hdr(struct otx2_parse_state *pst)
-{
- struct rte_flow_item_higig2_hdr hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt;
- int rc;
-
- /* Identify the pattern type into lid, lt */
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_HIGIG2)
- return 0;
-
- lid = NPC_LID_LA;
- lt = NPC_LT_LA_HIGIG2_ETHER;
- info.hw_hdr_len = 0;
-
- if (pst->flow->nix_intf == NIX_INTF_TX) {
- lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
- info.hw_hdr_len = NPC_IH_LENGTH;
- }
-
- /* Prepare for parsing the item */
- info.def_mask = &rte_flow_item_higig2_hdr_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_higig2_hdr);
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- /* Basic validation of item parameters */
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc)
- return rc;
-
- /* Update pst if not validate only? clash check? */
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-static int
-parse_rss_action(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action *act,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_rss_info *rss_info = &hw->rss_info;
- const struct rte_flow_action_rss *rss;
- uint32_t i;
-
- rss = (const struct rte_flow_action_rss *)act->conf;
-
- /* Not supported */
- if (attr->egress) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
- attr, "No support of RSS in egress");
- }
-
- if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "multi-queue mode is disabled");
-
- /* Parse RSS related parameters from configuration */
- if (!rss || !rss->queue_num)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "no valid queues");
-
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions"
- " are not supported");
-
- if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key))
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, act,
- "RSS hash key too large");
-
- if (rss->queue_num > rss_info->rss_size)
- return rte_flow_error_set
- (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "too many queues for RSS context");
-
- for (i = 0; i < rss->queue_num; i++) {
- if (rss->queue[i] >= dev->data->nb_rx_queues)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "queue id > max number"
- " of queues");
- }
-
- return 0;
-}
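
An RSS action that satisfies all of the checks above would look roughly like the following sketch. The queue list and RSS types are hypothetical; queue_num must not exceed the device's rss_size and every queue index must be below nb_rx_queues:

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Spread matching traffic over four RX queues with the default hash */
    static const uint16_t rss_queues[] = { 0, 1, 2, 3 };

    static const struct rte_flow_action_rss rss_conf = {
            .func = RTE_ETH_HASH_FUNCTION_DEFAULT, /* only accepted func */
            .types = RTE_ETH_RSS_IP | RTE_ETH_RSS_TCP,
            .key_len = 0,                           /* keep device key */
            .key = NULL,
            .queue_num = 4,
            .queue = rss_queues,
    };

    static const struct rte_flow_action rss_actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss_conf },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
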
-
-int
-otx2_flow_parse_actions(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- const struct rte_flow_action_mark *act_mark;
- const struct rte_flow_action_queue *act_q;
- const struct rte_flow_action_vf *vf_act;
- uint16_t pf_func, vf_id, port_id, pf_id;
- char if_name[RTE_ETH_NAME_MAX_LEN];
- bool vlan_insert_action = false;
- struct rte_eth_dev *eth_dev;
- const char *errmsg = NULL;
- int sel_act, req_act = 0;
- int errcode = 0;
- int mark = 0;
- int rq = 0;
-
- /* Initialize actions */
- flow->ctr_id = NPC_COUNTER_NONE;
- pf_func = otx2_pfvf_func(hw->pf, hw->vf);
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- otx2_npc_dbg("Action type = %d", actions->type);
-
- switch (actions->type) {
- case RTE_FLOW_ACTION_TYPE_VOID:
- break;
- case RTE_FLOW_ACTION_TYPE_MARK:
- act_mark =
- (const struct rte_flow_action_mark *)actions->conf;
-
-			/* Only 16 bits available; highest value reserved for FLAG */
- if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) {
- errmsg = "mark value must be < 0xfffe";
- errcode = ENOTSUP;
- goto err_exit;
- }
- mark = act_mark->id + 1;
- req_act |= OTX2_FLOW_ACT_MARK;
- rte_atomic32_inc(&npc->mark_actions);
- break;
-
- case RTE_FLOW_ACTION_TYPE_FLAG:
- mark = OTX2_FLOW_FLAG_VAL;
- req_act |= OTX2_FLOW_ACT_FLAG;
- rte_atomic32_inc(&npc->mark_actions);
- break;
-
- case RTE_FLOW_ACTION_TYPE_COUNT:
-			/* Indicates that a counter is needed */
- flow->ctr_id = 1;
- req_act |= OTX2_FLOW_ACT_COUNT;
- break;
-
- case RTE_FLOW_ACTION_TYPE_DROP:
- req_act |= OTX2_FLOW_ACT_DROP;
- break;
-
- case RTE_FLOW_ACTION_TYPE_PF:
- req_act |= OTX2_FLOW_ACT_PF;
- pf_func &= (0xfc00);
- break;
-
- case RTE_FLOW_ACTION_TYPE_VF:
- vf_act = (const struct rte_flow_action_vf *)
- actions->conf;
- req_act |= OTX2_FLOW_ACT_VF;
- if (vf_act->original == 0) {
- vf_id = vf_act->id & RVU_PFVF_FUNC_MASK;
- if (vf_id >= hw->maxvf) {
- errmsg = "invalid vf specified";
- errcode = EINVAL;
- goto err_exit;
- }
- pf_func &= (0xfc00);
- pf_func = (pf_func | (vf_id + 1));
- }
- break;
-
- case RTE_FLOW_ACTION_TYPE_PORT_ID:
- case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR:
- if (actions->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
- const struct rte_flow_action_port_id *port_act;
-
- port_act = actions->conf;
- port_id = port_act->id;
- } else {
- const struct rte_flow_action_ethdev *ethdev_act;
-
- ethdev_act = actions->conf;
- port_id = ethdev_act->port_id;
- }
- if (rte_eth_dev_get_name_by_port(port_id, if_name)) {
- errmsg = "Name not found for output port id";
- errcode = EINVAL;
- goto err_exit;
- }
- eth_dev = rte_eth_dev_allocated(if_name);
- if (!eth_dev) {
- errmsg = "eth_dev not found for output port id";
- errcode = EINVAL;
- goto err_exit;
- }
- if (!otx2_ethdev_is_same_driver(eth_dev)) {
- errmsg = "Output port id unsupported type";
- errcode = ENOTSUP;
- goto err_exit;
- }
- if (!otx2_dev_is_vf(otx2_eth_pmd_priv(eth_dev))) {
- errmsg = "Output port should be VF";
- errcode = ENOTSUP;
- goto err_exit;
- }
- vf_id = otx2_eth_pmd_priv(eth_dev)->vf;
- if (vf_id >= hw->maxvf) {
- errmsg = "Invalid vf for output port";
- errcode = EINVAL;
- goto err_exit;
- }
- pf_id = otx2_eth_pmd_priv(eth_dev)->pf;
- if (pf_id != hw->pf) {
- errmsg = "Output port unsupported PF";
- errcode = ENOTSUP;
- goto err_exit;
- }
- pf_func &= (0xfc00);
- pf_func = (pf_func | (vf_id + 1));
- req_act |= OTX2_FLOW_ACT_VF;
- break;
-
- case RTE_FLOW_ACTION_TYPE_QUEUE:
- /* Applicable only to ingress flow */
- act_q = (const struct rte_flow_action_queue *)
- actions->conf;
- rq = act_q->index;
- if (rq >= dev->data->nb_rx_queues) {
- errmsg = "invalid queue index";
- errcode = EINVAL;
- goto err_exit;
- }
- req_act |= OTX2_FLOW_ACT_QUEUE;
- break;
-
- case RTE_FLOW_ACTION_TYPE_RSS:
- errcode = parse_rss_action(dev, attr, actions, error);
- if (errcode)
- return -rte_errno;
-
- req_act |= OTX2_FLOW_ACT_RSS;
- break;
-
- case RTE_FLOW_ACTION_TYPE_SECURITY:
- /* Assumes user has already configured security
- * session for this flow. Associated conf is
- * opaque. When RTE security is implemented for otx2,
- * we need to verify that for specified security
- * session:
- * action_type ==
- * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
- * session_protocol ==
- * RTE_SECURITY_PROTOCOL_IPSEC
- *
- * RSS is not supported with inline ipsec. Get the
- * rq from associated conf, or make
- * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this
- * action.
- * Currently, rq = 0 is assumed.
- */
- req_act |= OTX2_FLOW_ACT_SEC;
- rq = 0;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
- req_act |= OTX2_FLOW_ACT_VLAN_INSERT;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
- req_act |= OTX2_FLOW_ACT_VLAN_STRIP;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
- req_act |= OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
- req_act |= OTX2_FLOW_ACT_VLAN_PCP_INSERT;
- break;
- default:
- errmsg = "Unsupported action specified";
- errcode = ENOTSUP;
- goto err_exit;
- }
- }
-
- if (req_act &
- (OTX2_FLOW_ACT_VLAN_INSERT | OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT |
- OTX2_FLOW_ACT_VLAN_PCP_INSERT))
- vlan_insert_action = true;
-
- if ((req_act &
- (OTX2_FLOW_ACT_VLAN_INSERT | OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT |
- OTX2_FLOW_ACT_VLAN_PCP_INSERT)) ==
- OTX2_FLOW_ACT_VLAN_PCP_INSERT) {
-		errmsg = "PCP insert action can't be supported alone";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
-	/* STRIP and INSERT actions are not supported together */
-	if (vlan_insert_action && (req_act & OTX2_FLOW_ACT_VLAN_STRIP)) {
-		errmsg = "VLAN insert and strip actions are not supported"
-			" together";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- /* Check if actions specified are compatible */
- if (attr->egress) {
- if (req_act & OTX2_FLOW_ACT_VLAN_STRIP) {
- errmsg = "VLAN pop action is not supported on Egress";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_DROP) {
- flow->npc_action = NIX_TX_ACTIONOP_DROP;
- } else if ((req_act & OTX2_FLOW_ACT_COUNT) ||
- vlan_insert_action) {
- flow->npc_action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
- } else {
- errmsg = "Unsupported action for egress";
- errcode = EINVAL;
- goto err_exit;
- }
- goto set_pf_func;
- }
-
-	/* We have already verified the attr, this is ingress.
-	 * - At most one terminating action is supported
-	 * - At most one of MARK or FLAG is supported
-	 * - If the terminating action is DROP, only COUNT is valid
-	 *   alongside it.
-	 */
- sel_act = req_act & OTX2_FLOW_ACT_TERM;
- if ((sel_act & (sel_act - 1)) != 0) {
- errmsg = "Only one terminating action supported";
- errcode = EINVAL;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_DROP) {
- sel_act = req_act & ~OTX2_FLOW_ACT_COUNT;
- if ((sel_act & (sel_act - 1)) != 0) {
- errmsg = "Only COUNT action is supported "
- "with DROP ingress action";
- errcode = ENOTSUP;
- goto err_exit;
- }
- }
-
- if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK))
- == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
- errmsg = "Only one of FLAG or MARK action is supported";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (vlan_insert_action) {
- errmsg = "VLAN push/Insert action is not supported on Ingress";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_VLAN_STRIP)
- npc->vtag_actions++;
-
- /* Only VLAN action is provided */
- if (req_act == OTX2_FLOW_ACT_VLAN_STRIP)
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- /* Set NIX_RX_ACTIONOP */
- else if (req_act & (OTX2_FLOW_ACT_PF | OTX2_FLOW_ACT_VF)) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- if (req_act & OTX2_FLOW_ACT_QUEUE)
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & OTX2_FLOW_ACT_DROP) {
- flow->npc_action = NIX_RX_ACTIONOP_DROP;
- } else if (req_act & OTX2_FLOW_ACT_QUEUE) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & OTX2_FLOW_ACT_RSS) {
-		/* When the user adds an RSS rule, the rule is first
-		 * added to the MCAM and its action is updated later,
-		 * once the FLOW_KEY_ALG index is known. Until then,
-		 * set the action to drop.
-		 */
- if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
- flow->npc_action = NIX_RX_ACTIONOP_DROP;
- else
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else if (req_act & OTX2_FLOW_ACT_SEC) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC;
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else if (req_act & OTX2_FLOW_ACT_COUNT) {
-		/* Keep OTX2_FLOW_ACT_COUNT always at the end.
-		 * This is the default action when the user specifies
-		 * only the COUNT action.
-		 */
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else {
- /* Should never reach here */
- errmsg = "Invalid action specified";
- errcode = EINVAL;
- goto err_exit;
- }
-
- if (mark)
- flow->npc_action |= (uint64_t)mark << 40;
-
- if (rte_atomic32_read(&npc->mark_actions) == 1) {
- hw->rx_offload_flags |=
- NIX_RX_OFFLOAD_MARK_UPDATE_F;
- otx2_eth_set_rx_function(dev);
- }
-
- if (npc->vtag_actions == 1) {
- hw->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(dev);
- }
-
-set_pf_func:
- /* Ideally AF must ensure that correct pf_func is set */
- if (attr->egress)
- flow->npc_action |= (uint64_t)pf_func << 48;
- else
- flow->npc_action |= (uint64_t)pf_func << 4;
-
- return 0;
-
-err_exit:
- rte_flow_error_set(error, errcode,
- RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
- errmsg);
- return -rte_errno;
-}
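
Tying the ingress rules above together, here is a hedged example of an action list this parser accepts: one terminating action (QUEUE) plus MARK. It yields NIX_RX_ACTIONOP_UCAST with the RQ index at bit 20 and the mark value at bit 40. The mark id and queue index are arbitrary; the queue index must be below nb_rx_queues:

    #include <rte_flow.h>

    static const struct rte_flow_action_mark mark = { .id = 0x123 };
    static const struct rte_flow_action_queue queue = { .index = 2 };

    /* MARK + QUEUE: one terminating action, at most one of MARK/FLAG */
    static const struct rte_flow_action ingress_actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_MARK,  .conf = &mark },
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    /* MARK + FLAG together, or QUEUE + DROP together, would be rejected */
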
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
deleted file mode 100644
index 35f7d0f4bc..0000000000
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ /dev/null
@@ -1,969 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-static int
-flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr)
-{
- struct npc_mcam_alloc_counter_req *req;
- struct npc_mcam_alloc_counter_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox);
- req->count = 1;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
-
- *ctr = rsp->cntr_list[0];
- return rc;
-}
-
-int
-otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
-{
- struct npc_mcam_oper_counter_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
- uint64_t *count)
-{
- struct npc_mcam_oper_counter_req *req;
- struct npc_mcam_oper_counter_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
-
- *count = rsp->stat;
- return rc;
-}
-
-int
-otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id)
-{
- struct npc_mcam_oper_counter_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry)
-{
- struct npc_mcam_free_entry_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox)
-{
- struct npc_mcam_free_entry_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->all = 1;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-static void
-flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len)
-{
- int idx;
-
- for (idx = 0; idx < len; idx++)
- ptr[idx] = data[len - 1 - idx];
-}
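
This is a plain byte reversal; a stand-alone worked example:

    #include <stdint.h>
    #include <stdio.h>

    /* Same reversal as flow_prep_mcam_ldata() above */
    static void
    prep_ldata(uint8_t *ptr, const uint8_t *data, int len)
    {
            int idx;

            for (idx = 0; idx < len; idx++)
                    ptr[idx] = data[len - 1 - idx];
    }

    int main(void)
    {
            const uint8_t in[4] = { 0x0a, 0x0b, 0x0c, 0x0d };
            uint8_t out[4];

            prep_ldata(out, in, 4);
            printf("%02x %02x %02x %02x\n", out[0], out[1], out[2], out[3]);
            /* prints: 0d 0c 0b 0a */
            return 0;
    }
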
-
-static int
-flow_check_copysz(size_t size, size_t len)
-{
- if (len <= size)
- return len;
- return -1;
-}
-
-static inline int
-flow_mem_is_zero(const void *mem, int len)
-{
- const char *m = mem;
- int i;
-
- for (i = 0; i < len; i++) {
- if (m[i] != 0)
- return 0;
- }
- return 1;
-}
-
-static void
-flow_set_hw_mask(struct otx2_flow_item_info *info,
- struct npc_xtract_info *xinfo,
- char *hw_mask)
-{
- int max_off, offset;
- int j;
-
- if (xinfo->enable == 0)
- return;
-
- if (xinfo->hdr_off < info->hw_hdr_len)
- return;
-
- max_off = xinfo->hdr_off + xinfo->len - info->hw_hdr_len;
-
- if (max_off > info->len)
- max_off = info->len;
-
- offset = xinfo->hdr_off - info->hw_hdr_len;
- for (j = offset; j < max_off; j++)
- hw_mask[j] = 0xff;
-}
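
The window arithmetic above is easier to follow with numbers. A stand-alone sketch with a hypothetical extractor covering four bytes at header offset 2 of an 8-byte item, and no hardware header prefix:

    #include <string.h>

    int main(void)
    {
            char hw_mask[8];
            int item_len = 8, hw_hdr_len = 0;
            int xtract_hdr_off = 2, xtract_len = 4; /* covers bytes 2..5 */
            int offset, max_off, j;

            memset(hw_mask, 0, sizeof(hw_mask));

            max_off = xtract_hdr_off + xtract_len - hw_hdr_len;
            if (max_off > item_len)
                    max_off = item_len;

            offset = xtract_hdr_off - hw_hdr_len;
            for (j = offset; j < max_off; j++)
                    hw_mask[j] = 0xff;
            /* hw_mask is now: 00 00 ff ff ff ff 00 00 */
            return 0;
    }
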
-
-void
-otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info, int lid, int lt)
-{
- struct npc_xtract_info *xinfo, *lfinfo;
- char *hw_mask = info->hw_mask;
- int lf_cfg;
- int i, j;
- int intf;
-
- intf = pst->flow->nix_intf;
- xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract;
- memset(hw_mask, 0, info->len);
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- flow_set_hw_mask(info, &xinfo[i], hw_mask);
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
-
- if (xinfo[i].flags_enable == 0)
- continue;
-
- lf_cfg = pst->npc->prx_lfcfg[i].i;
- if (lf_cfg == lid) {
- for (j = 0; j < NPC_MAX_LFL; j++) {
- lfinfo = pst->npc->prx_fxcfg[intf]
- [i][j].xtract;
- flow_set_hw_mask(info, &lfinfo[0], hw_mask);
- }
- }
- }
-}
-
-static int
-flow_update_extraction_data(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- struct npc_xtract_info *xinfo)
-{
- uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN];
- uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN];
- struct npc_xtract_info *x;
- int k, idx, hdr_off;
- int len = 0;
-
- x = xinfo;
- len = x->len;
- hdr_off = x->hdr_off;
-
- if (hdr_off < info->hw_hdr_len)
- return 0;
-
- if (x->enable == 0)
- return 0;
-
- otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d,"
- "x->key_off = %d", x->hdr_off, len, info->len,
- x->key_off);
-
- hdr_off -= info->hw_hdr_len;
-
- if (hdr_off + len > info->len)
- len = info->len - hdr_off;
-
-	/* Check for overwrite of a previous layer */
- if (!flow_mem_is_zero(pst->mcam_mask + x->key_off,
- len)) {
- /* Cannot support this data match */
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->pattern,
- "Extraction unsupported");
- return -rte_errno;
- }
-
- len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8)
- - x->key_off,
- len);
- if (len < 0) {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->pattern,
- "Internal Error");
- return -rte_errno;
- }
-
-	/* Reverse the complete structure so that the destination
-	 * address ends up at the MSB, allowing the MCAM to be
-	 * programmed via the mcam_data & mcam_mask arrays.
-	 */
- flow_prep_mcam_ldata(int_info,
- (const uint8_t *)info->spec + hdr_off,
- x->len);
- flow_prep_mcam_ldata(int_info_mask,
- (const uint8_t *)info->mask + hdr_off,
- x->len);
-
- otx2_npc_dbg("Spec: ");
- for (k = 0; k < info->len; k++)
- otx2_npc_dbg("0x%.2x ",
- ((const uint8_t *)info->spec)[k]);
-
- otx2_npc_dbg("Int_info: ");
- for (k = 0; k < info->len; k++)
- otx2_npc_dbg("0x%.2x ", int_info[k]);
-
- memcpy(pst->mcam_mask + x->key_off, int_info_mask, len);
- memcpy(pst->mcam_data + x->key_off, int_info, len);
-
- otx2_npc_dbg("Parse state mcam data & mask");
- for (idx = 0; idx < len ; idx++)
- otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx,
- *(pst->mcam_data + idx + x->key_off), idx,
- *(pst->mcam_mask + idx + x->key_off));
- return 0;
-}
-
-int
-otx2_flow_update_parse_state(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info, int lid, int lt,
- uint8_t flags)
-{
- struct npc_lid_lt_xtract_info *xinfo;
- struct otx2_flow_dump_data *dump;
- struct npc_xtract_info *lfinfo;
- int intf, lf_cfg;
- int i, j, rc = 0;
-
- otx2_npc_dbg("Parse state function info mask total %s",
- (const uint8_t *)info->mask);
-
- pst->layer_mask |= lid;
- pst->lt[lid] = lt;
- pst->flags[lid] = flags;
-
- intf = pst->flow->nix_intf;
- xinfo = &pst->npc->prx_dxcfg[intf][lid][lt];
- otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating);
- if (xinfo->is_terminating)
- pst->terminate = 1;
-
- if (info->spec == NULL) {
- otx2_npc_dbg("Info spec NULL");
- goto done;
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- rc = flow_update_extraction_data(pst, info, &xinfo->xtract[i]);
- if (rc != 0)
- return rc;
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- if (xinfo->xtract[i].flags_enable == 0)
- continue;
-
- lf_cfg = pst->npc->prx_lfcfg[i].i;
- if (lf_cfg == lid) {
- for (j = 0; j < NPC_MAX_LFL; j++) {
- lfinfo = pst->npc->prx_fxcfg[intf]
- [i][j].xtract;
- rc = flow_update_extraction_data(pst, info,
- &lfinfo[0]);
- if (rc != 0)
- return rc;
-
- if (lfinfo[0].enable)
- pst->flags[lid] = j;
- }
- }
- }
-
-done:
- dump = &pst->flow->dump_data[pst->flow->num_patterns++];
- dump->lid = lid;
- dump->ltype = lt;
- /* Next pattern to parse by subsequent layers */
- pst->pattern++;
- return 0;
-}
-
-static inline int
-flow_range_is_valid(const char *spec, const char *last, const char *mask,
- int len)
-{
- /* Mask must be zero or equal to spec as we do not support
- * non-contiguous ranges.
- */
- while (len--) {
- if (last[len] &&
- (spec[len] & mask[len]) != (last[len] & mask[len]))
- return 0; /* False */
- }
- return 1;
-}
-
-
-static inline int
-flow_mask_is_supported(const char *mask, const char *hw_mask, int len)
-{
- /*
- * If no hw_mask, assume nothing is supported.
- * mask is never NULL
- */
- if (hw_mask == NULL)
- return flow_mem_is_zero(mask, len);
-
- while (len--) {
- if ((mask[len] | hw_mask[len]) != hw_mask[len])
- return 0; /* False */
- }
- return 1;
-}
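
The subset test reduces to a per-byte OR against the hardware mask; for example:

    #include <stdio.h>

    int main(void)
    {
            const unsigned char hw_mask = 0xf0; /* HW extracts high nibble */
            const unsigned char ok  = 0xc0;     /* subset: supported */
            const unsigned char bad = 0x0f;     /* needs low nibble: rejected */

            printf("%d\n", (ok  | hw_mask) == hw_mask);  /* 1 */
            printf("%d\n", (bad | hw_mask) == hw_mask);  /* 0 */
            return 0;
    }
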
-
-int
-otx2_flow_parse_item_basic(const struct rte_flow_item *item,
- struct otx2_flow_item_info *info,
- struct rte_flow_error *error)
-{
- /* Item must not be NULL */
- if (item == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Item is NULL");
- return -rte_errno;
- }
-	/* If spec is NULL, both mask and last must be NULL too; this
-	 * makes the item match ANY value (equivalent to mask = 0).
-	 * Setting either mask or last without spec is an error.
-	 */
- if (item->spec == NULL) {
- if (item->last == NULL && item->mask == NULL) {
- info->spec = NULL;
- return 0;
- }
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "mask or last set without spec");
- return -rte_errno;
- }
-
- /* We have valid spec */
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW)
- info->spec = item->spec;
-
-	/* If mask is not set, use the default mask; error out if the
-	 * default mask is also NULL.
-	 */
- if (item->mask == NULL) {
- otx2_npc_dbg("Item mask null, using default mask");
- if (info->def_mask == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "No mask or default mask given");
- return -rte_errno;
- }
- info->mask = info->def_mask;
- } else {
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW)
- info->mask = item->mask;
- }
-
- /* mask specified must be subset of hw supported mask
- * mask | hw_mask == hw_mask
- */
- if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Unsupported field in the mask");
- return -rte_errno;
- }
-
- /* Now we have spec and mask. OTX2 does not support non-contiguous
- * range. We should have either:
- * - spec & mask == last & mask or,
- * - last == 0 or,
- * - last == NULL
- */
- if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) {
- if (!flow_range_is_valid(item->spec, item->last, info->mask,
- info->len)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "Unsupported range for match");
- return -rte_errno;
- }
- }
-
- return 0;
-}
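
A concrete item that passes these checks, including the contiguous-range rule, is sketched below. The addresses are hypothetical, and the item is only accepted if the hardware mask covers src_addr; note that spec & mask == last & mask holds for every byte where last is non-zero:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    /* Contiguous source range 10.0.0.0 .. 10.0.0.255 (a /24) */
    static const struct rte_flow_item_ipv4 spec = {
            .hdr.src_addr = RTE_BE32(0x0a000000),   /* 10.0.0.0 */
    };
    static const struct rte_flow_item_ipv4 last = {
            .hdr.src_addr = RTE_BE32(0x0a0000ff),   /* 10.0.0.255 */
    };
    static const struct rte_flow_item_ipv4 mask = {
            .hdr.src_addr = RTE_BE32(0xffffff00),   /* range byte unmasked */
    };

    static const struct rte_flow_item ipv4_range = {
            .type = RTE_FLOW_ITEM_TYPE_IPV4,
            .spec = &spec, .last = &last, .mask = &mask,
    };
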
-
-void
-otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
-{
- uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
- int i, j = 0;
-
- for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
- if (nibble_mask & (1 << i)) {
- nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
- cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
- j += 1;
- }
- }
-
- data[0] = cdata[0];
- data[1] = cdata[1];
-}
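
A worked example of the nibble compression: keeping nibbles 0, 2 and 5 of 0xfedcba9876543210 (values 0x0, 0x2 and 0x5) packs them contiguously into 0x520. A stand-alone version with the nibble count as a parameter (NPC_MAX_KEY_NIBBLES in the driver):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    static void
    keyx_compress(uint64_t *data, uint32_t nibble_mask, int max_nibbles)
    {
            uint64_t cdata[2] = { 0, 0 }, nibble;
            int i, j = 0;

            for (i = 0; i < max_nibbles; i++) {
                    if (nibble_mask & (1u << i)) {
                            nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
                            cdata[j / 16] |= nibble << ((j & 0xf) * 4);
                            j++;
                    }
            }
            data[0] = cdata[0];
            data[1] = cdata[1];
    }

    int main(void)
    {
            uint64_t d[2] = { 0xfedcba9876543210ULL, 0 };

            keyx_compress(d, (1 << 0) | (1 << 2) | (1 << 5), 32);
            printf("0x%" PRIx64 "\n", d[0]);    /* prints 0x520 */
            return 0;
    }
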
-
-static int
-flow_first_set_bit(uint64_t slab)
-{
- int num = 0;
-
- if ((slab & 0xffffffff) == 0) {
- num += 32;
- slab >>= 32;
- }
- if ((slab & 0xffff) == 0) {
- num += 16;
- slab >>= 16;
- }
- if ((slab & 0xff) == 0) {
- num += 8;
- slab >>= 8;
- }
- if ((slab & 0xf) == 0) {
- num += 4;
- slab >>= 4;
- }
- if ((slab & 0x3) == 0) {
- num += 2;
- slab >>= 2;
- }
- if ((slab & 0x1) == 0)
- num += 1;
-
- return num;
-}
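
For a non-zero slab this helper returns the same value as __builtin_ctzll, which flow_check_preallocated_entry_cache() below uses directly for the same purpose; for slab == 0 the helper returns 63, so callers must check the scan result first. A quick stand-alone check of the expected values:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            /* first set bit == count of trailing zeros */
            assert(__builtin_ctzll((uint64_t)0x1) == 0);
            assert(__builtin_ctzll((uint64_t)0x80) == 7);
            assert(__builtin_ctzll((uint64_t)1 << 32) == 32);
            assert(__builtin_ctzll((uint64_t)1 << 63) == 63);
            return 0;
    }
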
-
-static int
-flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- uint32_t old_ent, uint32_t new_ent)
-{
- struct npc_mcam_shift_entry_req *req;
- struct npc_mcam_shift_entry_rsp *rsp;
- struct otx2_flow_list *list;
- struct rte_flow *flow_iter;
- int rc = 0;
-
- otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent,
- flow->priority);
-
- list = &flow_info->flow_list[flow->priority];
-
-	/* The old entry is disabled and its contents are moved to
-	 * new_entry; the new entry is then enabled.
-	 */
- req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox);
- req->curr_entry[0] = old_ent;
- req->new_entry[0] = new_ent;
- req->shift_count = 1;
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Remove old node from list */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id == old_ent)
- TAILQ_REMOVE(list, flow_iter, next);
- }
-
- /* Insert node with new mcam id at right place */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id > new_ent)
- TAILQ_INSERT_BEFORE(flow_iter, flow, next);
- }
- return rc;
-}
-
-/* Exchange all required entries with a given priority level */
-static int
-flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl)
-{
- struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp;
-	/* Free & live entry index */
-	uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries;
-	/* Bit position within the slab */
-	uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0;
-	/* Overall bit position of the start of the slab */
-	uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0;
-	int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0;
- struct otx2_mcam_ents_info *ent_info;
- /* free & live bitmap slab */
- uint64_t sl_fr = 0, sl_lv = 0, *sl;
-
- fr_bmp = flow_info->free_entries[prio_lvl];
- fr_bmp_rev = flow_info->free_entries_rev[prio_lvl];
- lv_bmp = flow_info->live_entries[prio_lvl];
- lv_bmp_rev = flow_info->live_entries_rev[prio_lvl];
- ent_info = &flow_info->flow_entry_info[prio_lvl];
- mcam_entries = flow_info->mcam_entries;
-
-
-	/* Newly allocated entries are always contiguous, but older
-	 * entries already in the free/live bitmaps can be
-	 * non-contiguous, so the shifted entries are returned in
-	 * non-contiguous format.
-	 */
- while (idx <= rsp->count) {
- if (!sl_fr && !sl_lv) {
- /* Lower index elements to be exchanged */
- if (dir < 0) {
- rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr);
- rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv);
- otx2_npc_dbg("Fwd slab rc fr %u rc lv %u "
- "e_fr %u e_lv %u", rc_fr, rc_lv,
- e_fr, e_lv);
- } else {
- rc_fr = rte_bitmap_scan(fr_bmp_rev,
- &sl_fr_bit_off,
- &sl_fr);
- rc_lv = rte_bitmap_scan(lv_bmp_rev,
- &sl_lv_bit_off,
- &sl_lv);
-
- otx2_npc_dbg("Rev slab rc fr %u rc lv %u "
- "e_fr %u e_lv %u", rc_fr, rc_lv,
- e_fr, e_lv);
- }
- }
-
- if (rc_fr) {
- fr_bit_pos = flow_first_set_bit(sl_fr);
- e_fr = sl_fr_bit_off + fr_bit_pos;
- otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos);
- } else {
- e_fr = ~(0);
- }
-
- if (rc_lv) {
- lv_bit_pos = flow_first_set_bit(sl_lv);
- e_lv = sl_lv_bit_off + lv_bit_pos;
- otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos);
- } else {
- e_lv = ~(0);
- }
-
- /* First entry is from free_bmap */
- if (e_fr < e_lv) {
- bmp = fr_bmp;
- e = e_fr;
- sl = &sl_fr;
- bit_pos = fr_bit_pos;
- if (dir > 0)
- e_id = mcam_entries - e - 1;
- else
- e_id = e;
- otx2_npc_dbg("Fr e %u e_id %u", e, e_id);
- } else {
- bmp = lv_bmp;
- e = e_lv;
- sl = &sl_lv;
- bit_pos = lv_bit_pos;
- if (dir > 0)
- e_id = mcam_entries - e - 1;
- else
- e_id = e;
-
- otx2_npc_dbg("Lv e %u e_id %u", e, e_id);
- if (idx < rsp->count)
- rc =
- flow_shift_lv_ent(mbox, flow,
- flow_info, e_id,
- rsp->entry + idx);
- }
-
- rte_bitmap_clear(bmp, e);
- rte_bitmap_set(bmp, rsp->entry + idx);
- /* Update entry list, use non-contiguous
- * list now.
- */
- rsp->entry_list[idx] = e_id;
- *sl &= ~(1 << bit_pos);
-
- /* Update min & max entry identifiers in current
- * priority level.
- */
- if (dir < 0) {
- ent_info->max_id = rsp->entry + idx;
- ent_info->min_id = e_id;
- } else {
- ent_info->max_id = e_id;
- ent_info->min_id = rsp->entry;
- }
-
- idx++;
- }
- return rc;
-}
-
-/* Validate that newly allocated entries lie in the correct priority zone,
- * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't guarantee zone
- * accuracy. If not properly aligned, shift entries until they are.
- */
-static int
-flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp,
- int req_prio)
-{
- int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority;
- struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
- int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1;
- uint32_t tot_ent = 0;
-
- otx2_npc_dbg("Dir %d, priority = %d", dir, prio);
-
- if (dir < 0)
- prio_idx = flow_info->flow_max_priority - 1;
-
-	/* Only live entries need to be shifted; free entries can be
-	 * moved by bit manipulation alone.
-	 */
-
-	/* For dir = -1 (NPC_MCAM_LOWER_PRIO), shifting exchanges
-	 * NPC_MAX_PREALLOC_ENT entries with the adjoining higher
-	 * priority level entries (lower indexes).
-	 *
-	 * For dir = +1 (NPC_MCAM_HIGHER_PRIO), shifting exchanges
-	 * NPC_MAX_PREALLOC_ENT entries with the adjoining lower
-	 * priority level entries (higher indexes).
-	 */
- do {
- tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent;
-
- if (dir < 0 && prio_idx != prio &&
- rsp->entry > info[prio_idx].max_id && tot_ent) {
- otx2_npc_dbg("Rsp entry %u prio idx %u "
- "max id %u", rsp->entry, prio_idx,
- info[prio_idx].max_id);
-
- needs_shift = 1;
- } else if ((dir > 0) && (prio_idx != prio) &&
- (rsp->entry < info[prio_idx].min_id) && tot_ent) {
- otx2_npc_dbg("Rsp entry %u prio idx %u "
- "min id %u", rsp->entry, prio_idx,
- info[prio_idx].min_id);
- needs_shift = 1;
- }
-
- otx2_npc_dbg("Needs_shift = %d", needs_shift);
- if (needs_shift) {
- needs_shift = 0;
- rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir,
- prio_idx);
- } else {
- for (idx = 0; idx < rsp->count; idx++)
- rsp->entry_list[idx] = rsp->entry + idx;
- }
- } while ((prio_idx != prio) && (prio_idx += dir));
-
- return rc;
-}
-
-static int
-flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio,
- int prio_lvl)
-{
- struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
- int step = 1;
-
- while (step < flow_info->flow_max_priority) {
- if (((prio_lvl + step) < flow_info->flow_max_priority) &&
- info[prio_lvl + step].live_ent) {
- *prio = NPC_MCAM_HIGHER_PRIO;
- return info[prio_lvl + step].min_id;
- }
-
- if (((prio_lvl - step) >= 0) &&
- info[prio_lvl - step].live_ent) {
- otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step,
- info[prio_lvl - step].live_ent);
- *prio = NPC_MCAM_LOWER_PRIO;
- return info[prio_lvl - step].max_id;
- }
- step++;
- }
- *prio = NPC_MCAM_ANY_PRIO;
- return 0;
-}
-
-static int
-flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info, uint32_t *free_ent)
-{
- struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev;
- struct npc_mcam_alloc_entry_rsp rsp_local;
- struct npc_mcam_alloc_entry_rsp *rsp_cmd;
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mcam_ents_info *info;
- uint16_t ref_ent, idx;
- int rc, prio;
-
- info = &flow_info->flow_entry_info[flow->priority];
- free_bmp = flow_info->free_entries[flow->priority];
- free_bmp_rev = flow_info->free_entries_rev[flow->priority];
- live_bmp = flow_info->live_entries[flow->priority];
- live_bmp_rev = flow_info->live_entries_rev[flow->priority];
-
- ref_ent = flow_find_ref_entry(flow_info, &prio, flow->priority);
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->contig = 1;
- req->count = flow_info->flow_prealloc_size;
- req->priority = prio;
- req->ref_entry = ref_ent;
-
- otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio);
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd);
- if (rc)
- return rc;
-
- rsp = &rsp_local;
- memcpy(rsp, rsp_cmd, sizeof(*rsp));
-
- otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry,
- rsp->count, prio);
-
- /* Non-first ent cache fill */
- if (prio != NPC_MCAM_ANY_PRIO) {
- flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp,
- prio);
- } else {
- /* Copy into response entry list */
- for (idx = 0; idx < rsp->count; idx++)
- rsp->entry_list[idx] = rsp->entry + idx;
- }
-
- otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count);
- /* Update free entries, reverse free entries list,
- * min & max entry ids.
- */
- for (idx = 0; idx < rsp->count; idx++) {
- if (unlikely(rsp->entry_list[idx] < info->min_id))
- info->min_id = rsp->entry_list[idx];
-
- if (unlikely(rsp->entry_list[idx] > info->max_id))
- info->max_id = rsp->entry_list[idx];
-
-		/* Skip the entry to be returned; it must not be part
-		 * of the free list.
-		 */
- if (prio == NPC_MCAM_HIGHER_PRIO) {
- if (unlikely(idx == (rsp->count - 1))) {
- *free_ent = rsp->entry_list[idx];
- continue;
- }
- } else {
- if (unlikely(!idx)) {
- *free_ent = rsp->entry_list[idx];
- continue;
- }
- }
- info->free_ent++;
- rte_bitmap_set(free_bmp, rsp->entry_list[idx]);
- rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries -
- rsp->entry_list[idx] - 1);
-
- otx2_npc_dbg("Final rsp entry %u rsp entry rev %u",
- rsp->entry_list[idx],
- flow_info->mcam_entries - rsp->entry_list[idx] - 1);
- }
-
- otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent,
- flow_info->mcam_entries - *free_ent - 1);
- info->live_ent++;
- rte_bitmap_set(live_bmp, *free_ent);
- rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1);
-
- return 0;
-}
-
-static int
-flow_check_preallocated_entry_cache(struct otx2_mbox *mbox,
- struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info)
-{
- struct rte_bitmap *free, *free_rev, *live, *live_rev;
- uint32_t pos = 0, free_ent = 0, mcam_entries;
- struct otx2_mcam_ents_info *info;
- uint64_t slab = 0;
- int rc;
-
- otx2_npc_dbg("Flow priority %u", flow->priority);
-
- info = &flow_info->flow_entry_info[flow->priority];
-
- free_rev = flow_info->free_entries_rev[flow->priority];
- free = flow_info->free_entries[flow->priority];
- live_rev = flow_info->live_entries_rev[flow->priority];
- live = flow_info->live_entries[flow->priority];
- mcam_entries = flow_info->mcam_entries;
-
- if (info->free_ent) {
- rc = rte_bitmap_scan(free, &pos, &slab);
- if (rc) {
- /* Get free_ent from free entry bitmap */
- free_ent = pos + __builtin_ctzll(slab);
- otx2_npc_dbg("Allocated from cache entry %u", free_ent);
- /* Remove from free bitmaps and add to live ones */
- rte_bitmap_clear(free, free_ent);
- rte_bitmap_set(live, free_ent);
- rte_bitmap_clear(free_rev,
- mcam_entries - free_ent - 1);
- rte_bitmap_set(live_rev,
- mcam_entries - free_ent - 1);
-
- info->free_ent--;
- info->live_ent++;
- return free_ent;
- }
-
-		otx2_npc_dbg("No free entry found despite free_ent != 0");
- return -1;
- }
-
- rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent);
- if (rc)
- return rc;
-
- return free_ent;
-}
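
The free-entry scan above follows the standard rte_bitmap idiom: rte_bitmap_scan() returns the slab-aligned position plus the whole 64-bit slab, and the exact bit is recovered with __builtin_ctzll. A hedged stand-alone sketch of the same pop-lowest-bit operation (the bitmap size is arbitrary):

    #include <rte_bitmap.h>
    #include <rte_common.h>
    #include <rte_malloc.h>

    /* Pop the lowest set bit, mirroring the free-entry scan above */
    static int
    pop_lowest(struct rte_bitmap *bmp)
    {
            uint32_t pos = 0;
            uint64_t slab = 0;

            if (!rte_bitmap_scan(bmp, &pos, &slab))
                    return -1;                  /* bitmap empty */

            pos += __builtin_ctzll(slab);       /* exact bit in the slab */
            rte_bitmap_clear(bmp, pos);
            return pos;
    }

    static struct rte_bitmap *
    make_bitmap(uint32_t n_entries)
    {
            uint32_t sz = rte_bitmap_get_memory_footprint(n_entries);
            void *mem = rte_zmalloc(NULL, sz, RTE_CACHE_LINE_SIZE);

            return mem ? rte_bitmap_init(n_entries, mem, sz) : NULL;
    }
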
-
-int
-otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox,
- struct otx2_parse_state *pst,
- struct otx2_npc_flow_info *flow_info)
-{
- int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
- struct npc_mcam_read_base_rule_rsp *base_rule_rsp;
- struct npc_mcam_write_entry_req *req;
- struct mcam_entry *base_entry;
- struct mbox_msghdr *rsp;
- uint16_t ctr = ~(0);
- int rc, idx;
- int entry;
-
- if (use_ctr) {
- rc = flow_mcam_alloc_counter(mbox, &ctr);
- if (rc)
- return rc;
- }
-
- entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info);
- if (entry < 0) {
- otx2_err("Prealloc failed");
- otx2_flow_mcam_free_counter(mbox, ctr);
- return NPC_MCAM_ALLOC_FAILED;
- }
-
- if (pst->is_vf) {
- (void)otx2_mbox_alloc_msg_npc_read_base_steer_rule(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&base_rule_rsp);
- if (rc) {
- otx2_err("Failed to fetch VF's base MCAM entry");
- return rc;
- }
- base_entry = &base_rule_rsp->entry_data;
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- flow->mcam_data[idx] |= base_entry->kw[idx];
- flow->mcam_mask[idx] |= base_entry->kw_mask[idx];
- }
- }
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- req->set_cntr = use_ctr;
- req->cntr = ctr;
- req->entry = entry;
- otx2_npc_dbg("Alloc & write entry %u", entry);
-
- req->intf =
- (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX;
- req->enable_entry = 1;
- req->entry_data.action = flow->npc_action;
- req->entry_data.vtag_action = flow->vtag_action;
-
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- req->entry_data.kw[idx] = flow->mcam_data[idx];
- req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
- }
-
- if (flow->nix_intf == OTX2_INTF_RX) {
- req->entry_data.kw[0] |= flow_info->channel;
- req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
- } else {
- uint16_t pf_func = (flow->npc_action >> 48) & 0xffff;
-
- pf_func = htons(pf_func);
- req->entry_data.kw[0] |= ((uint64_t)pf_func << 32);
- req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32);
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc != 0)
- return rc;
-
- flow->mcam_id = entry;
- if (use_ctr)
- flow->ctr_id = ctr;
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
deleted file mode 100644
index 8f5d0eed92..0000000000
--- a/drivers/net/octeontx2/otx2_link.c
+++ /dev/null
@@ -1,287 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-#include <ethdev_pci.h>
-
-#include "otx2_ethdev.h"
-
-void
-otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set)
-{
- if (set)
- dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F;
- else
- dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F;
-
- rte_wmb();
-}
-
-static inline int
-nix_wait_for_link_cfg(struct otx2_eth_dev *dev)
-{
- uint16_t wait = 1000;
-
- do {
- rte_rmb();
- if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F))
- break;
- wait--;
- rte_delay_ms(1);
- } while (wait);
-
- return wait ? 0 : -1;
-}
-
-static void
-nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
-{
- if (link && link->link_status)
- otx2_info("Port %d: Link Up - speed %u Mbps - %s",
- (int)(eth_dev->data->port_id),
- (uint32_t)link->link_speed,
- link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
- "full-duplex" : "half-duplex");
- else
- otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
-}
-
-void
-otx2_eth_dev_link_status_get(struct otx2_dev *dev,
- struct cgx_link_user_info *link)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_link eth_link;
- struct rte_eth_dev *eth_dev;
-
- if (!link || !dev)
- return;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev)
- return;
-
- rte_eth_linkstatus_get(eth_dev, ð_link);
-
- link->link_up = eth_link.link_status;
- link->speed = eth_link.link_speed;
- link->an = eth_link.link_autoneg;
- link->full_duplex = eth_link.link_duplex;
-}
-
-void
-otx2_eth_dev_link_status_update(struct otx2_dev *dev,
- struct cgx_link_user_info *link)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_link eth_link;
- struct rte_eth_dev *eth_dev;
-
- if (!link || !dev)
- return;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev || !eth_dev->data->dev_conf.intr_conf.lsc)
- return;
-
- if (nix_wait_for_link_cfg(otx2_dev)) {
- otx2_err("Timeout waiting for link_cfg to complete");
- return;
- }
-
- eth_link.link_status = link->link_up;
- eth_link.link_speed = link->speed;
- eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
- eth_link.link_duplex = link->full_duplex;
-
- otx2_dev->speed = link->speed;
- otx2_dev->duplex = link->full_duplex;
-
- /* Print link info */
- nix_link_status_print(eth_dev, ð_link);
-
- /* Update link info */
- rte_eth_linkstatus_set(eth_dev, ð_link);
-
- /* Set the flag and execute application callbacks */
- rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL);
-}
-
-static int
-lbk_link_update(struct rte_eth_link *link)
-{
- link->link_status = RTE_ETH_LINK_UP;
- link->link_speed = RTE_ETH_SPEED_NUM_100G;
- link->link_autoneg = RTE_ETH_LINK_FIXED;
- link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
- return 0;
-}
-
-static int
-cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_link_info_msg *rsp;
-	int rc;
-
-	otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- link->link_status = rsp->link_info.link_up;
- link->link_speed = rsp->link_info.speed;
- link->link_autoneg = RTE_ETH_LINK_AUTONEG;
-
- if (rsp->link_info.full_duplex)
- link->link_duplex = rsp->link_info.full_duplex;
- return 0;
-}
-
-int
-otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_link link;
- int rc;
-
- RTE_SET_USED(wait_to_complete);
- memset(&link, 0, sizeof(struct rte_eth_link));
-
- if (!eth_dev->data->dev_started || otx2_dev_is_sdp(dev))
- return 0;
-
- if (otx2_dev_is_lbk(dev))
- rc = lbk_link_update(&link);
- else
- rc = cgx_link_update(dev, &link);
-
- if (rc)
- return rc;
-
- return rte_eth_linkstatus_set(eth_dev, &link);
-}
-
-static int
-nix_dev_set_link_state(struct rte_eth_dev *eth_dev, uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_set_link_state_msg *req;
-
- req = otx2_mbox_alloc_msg_cgx_set_link_state(mbox);
- req->enable = enable;
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, i;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- rc = nix_dev_set_link_state(eth_dev, 1);
- if (rc)
- goto done;
-
- /* Start tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_start(eth_dev, i);
-
-done:
- return rc;
-}
-
-int
-otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- /* Stop tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_stop(eth_dev, i);
-
- return nix_dev_set_link_state(eth_dev, 0);
-}
-
-static int
-cgx_change_mode(struct otx2_eth_dev *dev, struct cgx_set_link_mode_args *cfg)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_set_link_mode_req *req;
-
- req = otx2_mbox_alloc_msg_cgx_set_link_mode(mbox);
- req->args.speed = cfg->speed;
- req->args.duplex = cfg->duplex;
- req->args.an = cfg->an;
-
- return otx2_mbox_process(mbox);
-}
-
-#define SPEED_NONE 0
-static inline uint32_t
-nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
-{
- uint32_t link_speed = SPEED_NONE;
-
-	/* 50G and 100G are supported only on board version C0 and above */
- if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & RTE_ETH_LINK_SPEED_100G)
- link_speed = 100000;
- if (link_speeds & RTE_ETH_LINK_SPEED_50G)
- link_speed = 50000;
- }
- if (link_speeds & RTE_ETH_LINK_SPEED_40G)
- link_speed = 40000;
- if (link_speeds & RTE_ETH_LINK_SPEED_25G)
- link_speed = 25000;
- if (link_speeds & RTE_ETH_LINK_SPEED_20G)
- link_speed = 20000;
- if (link_speeds & RTE_ETH_LINK_SPEED_10G)
- link_speed = 10000;
- if (link_speeds & RTE_ETH_LINK_SPEED_5G)
- link_speed = 5000;
- if (link_speeds & RTE_ETH_LINK_SPEED_1G)
- link_speed = 1000;
-
- return link_speed;
-}
-
-static inline uint8_t
-nix_parse_eth_link_duplex(uint32_t link_speeds)
-{
- if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
- return RTE_ETH_LINK_HALF_DUPLEX;
- else
- return RTE_ETH_LINK_FULL_DUPLEX;
-}
-
-int
-otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-	struct rte_eth_conf *conf = &eth_dev->data->dev_conf;
- struct cgx_set_link_mode_args cfg;
-
- /* If VF/SDP/LBK, link attributes cannot be changed */
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return 0;
-
- memset(&cfg, 0, sizeof(struct cgx_set_link_mode_args));
- cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
- if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
- cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
-
- return cgx_change_mode(dev, &cfg);
- }
- return 0;
-}
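A note on nix_parse_link_speeds() in the file above: each successive check overwrites link_speed, so when several RTE_ETH_LINK_SPEED_* bits are advertised the slowest one wins. A table-driven sketch that makes the overwrite order explicit (illustrative only; not the driver function):

#include <stdint.h>
#include <rte_common.h>
#include <rte_ethdev.h>

static uint32_t
parse_link_speeds_sketch(uint32_t link_speeds)
{
	static const struct { uint32_t flag; uint32_t mbps; } tbl[] = {
		{ RTE_ETH_LINK_SPEED_100G, 100000 },
		{ RTE_ETH_LINK_SPEED_50G,   50000 },
		{ RTE_ETH_LINK_SPEED_40G,   40000 },
		{ RTE_ETH_LINK_SPEED_25G,   25000 },
		{ RTE_ETH_LINK_SPEED_20G,   20000 },
		{ RTE_ETH_LINK_SPEED_10G,   10000 },
		{ RTE_ETH_LINK_SPEED_5G,     5000 },
		{ RTE_ETH_LINK_SPEED_1G,     1000 },
	};
	uint32_t speed = 0; /* SPEED_NONE */
	unsigned int i;

	/* Scan from fastest to slowest; the last match (slowest) wins,
	 * mirroring the overwrite order in the removed code. */
	for (i = 0; i < RTE_DIM(tbl); i++)
		if (link_speeds & tbl[i].flag)
			speed = tbl[i].mbps;
	return speed;
}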
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
deleted file mode 100644
index 5fa9ae1396..0000000000
--- a/drivers/net/octeontx2/otx2_lookup.c
+++ /dev/null
@@ -1,352 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-#include <rte_memzone.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev.h"
-
-/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
-#define ERRCODE_ERRLEN_WIDTH 12
-#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) *\
- sizeof(uint32_t))
-
-#define SA_TBL_SZ (RTE_MAX_ETHPORTS * sizeof(uint64_t))
-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ +\
- SA_TBL_SZ)
-
-const uint32_t *
-otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-
- static const uint32_t ptypes[] = {
- RTE_PTYPE_L2_ETHER_QINQ, /* LB */
- RTE_PTYPE_L2_ETHER_VLAN, /* LB */
- RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */
- RTE_PTYPE_L2_ETHER_ARP, /* LC */
- RTE_PTYPE_L2_ETHER_NSH, /* LC */
- RTE_PTYPE_L2_ETHER_FCOE, /* LC */
- RTE_PTYPE_L2_ETHER_MPLS, /* LC */
- RTE_PTYPE_L3_IPV4, /* LC */
- RTE_PTYPE_L3_IPV4_EXT, /* LC */
- RTE_PTYPE_L3_IPV6, /* LC */
- RTE_PTYPE_L3_IPV6_EXT, /* LC */
- RTE_PTYPE_L4_TCP, /* LD */
- RTE_PTYPE_L4_UDP, /* LD */
- RTE_PTYPE_L4_SCTP, /* LD */
- RTE_PTYPE_L4_ICMP, /* LD */
- RTE_PTYPE_L4_IGMP, /* LD */
- RTE_PTYPE_TUNNEL_GRE, /* LD */
- RTE_PTYPE_TUNNEL_ESP, /* LD */
- RTE_PTYPE_TUNNEL_NVGRE, /* LD */
- RTE_PTYPE_TUNNEL_VXLAN, /* LE */
- RTE_PTYPE_TUNNEL_GENEVE, /* LE */
- RTE_PTYPE_TUNNEL_GTPC, /* LE */
- RTE_PTYPE_TUNNEL_GTPU, /* LE */
- RTE_PTYPE_TUNNEL_VXLAN_GPE, /* LE */
- RTE_PTYPE_TUNNEL_MPLS_IN_GRE, /* LE */
- RTE_PTYPE_TUNNEL_MPLS_IN_UDP, /* LE */
- RTE_PTYPE_INNER_L2_ETHER,/* LF */
- RTE_PTYPE_INNER_L3_IPV4, /* LG */
- RTE_PTYPE_INNER_L3_IPV6, /* LG */
- RTE_PTYPE_INNER_L4_TCP, /* LH */
- RTE_PTYPE_INNER_L4_UDP, /* LH */
- RTE_PTYPE_INNER_L4_SCTP, /* LH */
- RTE_PTYPE_INNER_L4_ICMP, /* LH */
- RTE_PTYPE_UNKNOWN,
- };
-
- return ptypes;
-}
-
-int
-otx2_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (ptype_mask) {
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_PTYPE_F;
- dev->ptype_disable = 0;
- } else {
- dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_PTYPE_F;
- dev->ptype_disable = 1;
- }
-
- otx2_eth_set_rx_function(eth_dev);
-
- return 0;
-}
-
-/*
- * +-------------------+-------------------+
- * |  | IL4 | IL3 | IL2 | TU | L4 | L3 | L2 |
- * +-------------------+-------------------+
- *
- * +-------------------+-------------------+
- * |  | LH  | LG  | LF  | LE | LD | LC | LB |
- * +-------------------+-------------------+
- *
- * ptype       [LE - LD - LC - LB] = TU  - L4  - L3  - L2
- * ptype_tunnel[LH - LG - LF]      = IL4 - IL3 - IL2 - TU
- *
- */
-static void
-nix_create_non_tunnel_ptype_array(uint16_t *ptype)
-{
- uint8_t lb, lc, ld, le;
- uint16_t val;
- uint32_t idx;
-
- for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) {
- lb = idx & 0xF;
- lc = (idx & 0xF0) >> 4;
- ld = (idx & 0xF00) >> 8;
- le = (idx & 0xF000) >> 12;
- val = RTE_PTYPE_UNKNOWN;
-
- switch (lb) {
- case NPC_LT_LB_STAG_QINQ:
- val |= RTE_PTYPE_L2_ETHER_QINQ;
- break;
- case NPC_LT_LB_CTAG:
- val |= RTE_PTYPE_L2_ETHER_VLAN;
- break;
- }
-
- switch (lc) {
- case NPC_LT_LC_ARP:
- val |= RTE_PTYPE_L2_ETHER_ARP;
- break;
- case NPC_LT_LC_NSH:
- val |= RTE_PTYPE_L2_ETHER_NSH;
- break;
- case NPC_LT_LC_FCOE:
- val |= RTE_PTYPE_L2_ETHER_FCOE;
- break;
- case NPC_LT_LC_MPLS:
- val |= RTE_PTYPE_L2_ETHER_MPLS;
- break;
- case NPC_LT_LC_IP:
- val |= RTE_PTYPE_L3_IPV4;
- break;
- case NPC_LT_LC_IP_OPT:
- val |= RTE_PTYPE_L3_IPV4_EXT;
- break;
- case NPC_LT_LC_IP6:
- val |= RTE_PTYPE_L3_IPV6;
- break;
- case NPC_LT_LC_IP6_EXT:
- val |= RTE_PTYPE_L3_IPV6_EXT;
- break;
- case NPC_LT_LC_PTP:
- val |= RTE_PTYPE_L2_ETHER_TIMESYNC;
- break;
- }
-
- switch (ld) {
- case NPC_LT_LD_TCP:
- val |= RTE_PTYPE_L4_TCP;
- break;
- case NPC_LT_LD_UDP:
- val |= RTE_PTYPE_L4_UDP;
- break;
- case NPC_LT_LD_SCTP:
- val |= RTE_PTYPE_L4_SCTP;
- break;
- case NPC_LT_LD_ICMP:
- case NPC_LT_LD_ICMP6:
- val |= RTE_PTYPE_L4_ICMP;
- break;
- case NPC_LT_LD_IGMP:
- val |= RTE_PTYPE_L4_IGMP;
- break;
- case NPC_LT_LD_GRE:
- val |= RTE_PTYPE_TUNNEL_GRE;
- break;
- case NPC_LT_LD_NVGRE:
- val |= RTE_PTYPE_TUNNEL_NVGRE;
- break;
- }
-
- switch (le) {
- case NPC_LT_LE_VXLAN:
- val |= RTE_PTYPE_TUNNEL_VXLAN;
- break;
- case NPC_LT_LE_ESP:
- val |= RTE_PTYPE_TUNNEL_ESP;
- break;
- case NPC_LT_LE_VXLANGPE:
- val |= RTE_PTYPE_TUNNEL_VXLAN_GPE;
- break;
- case NPC_LT_LE_GENEVE:
- val |= RTE_PTYPE_TUNNEL_GENEVE;
- break;
- case NPC_LT_LE_GTPC:
- val |= RTE_PTYPE_TUNNEL_GTPC;
- break;
- case NPC_LT_LE_GTPU:
- val |= RTE_PTYPE_TUNNEL_GTPU;
- break;
- case NPC_LT_LE_TU_MPLS_IN_GRE:
- val |= RTE_PTYPE_TUNNEL_MPLS_IN_GRE;
- break;
- case NPC_LT_LE_TU_MPLS_IN_UDP:
- val |= RTE_PTYPE_TUNNEL_MPLS_IN_UDP;
- break;
- }
- ptype[idx] = val;
- }
-}
-
-#define TU_SHIFT(x) ((x) >> PTYPE_NON_TUNNEL_WIDTH)
-static void
-nix_create_tunnel_ptype_array(uint16_t *ptype)
-{
- uint8_t lf, lg, lh;
- uint16_t val;
- uint32_t idx;
-
- /* Skip non tunnel ptype array memory */
- ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ;
-
- for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) {
- lf = idx & 0xF;
- lg = (idx & 0xF0) >> 4;
- lh = (idx & 0xF00) >> 8;
- val = RTE_PTYPE_UNKNOWN;
-
- switch (lf) {
- case NPC_LT_LF_TU_ETHER:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER);
- break;
- }
- switch (lg) {
- case NPC_LT_LG_TU_IP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4);
- break;
- case NPC_LT_LG_TU_IP6:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6);
- break;
- }
- switch (lh) {
- case NPC_LT_LH_TU_TCP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP);
- break;
- case NPC_LT_LH_TU_UDP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP);
- break;
- case NPC_LT_LH_TU_SCTP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP);
- break;
- case NPC_LT_LH_TU_ICMP:
- case NPC_LT_LH_TU_ICMP6:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP);
- break;
- }
-
- ptype[idx] = val;
- }
-}
-
-static void
-nix_create_rx_ol_flags_array(void *mem)
-{
- uint16_t idx, errcode, errlev;
- uint32_t val, *ol_flags;
-
- /* Skip ptype array memory */
- ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ);
-
- for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) {
- errlev = idx & 0xf;
- errcode = (idx & 0xff0) >> 4;
-
- val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
- val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
- val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
-
- switch (errlev) {
- case NPC_ERRLEV_RE:
- /* Mark all errors as BAD checksum errors
- * including Outer L2 length mismatch error
- */
- if (errcode) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
- break;
- case NPC_ERRLEV_LC:
- if (errcode == NPC_EC_OIP4_CSUM ||
- errcode == NPC_EC_IP_FRAG_OFFSET_1) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- }
- break;
- case NPC_ERRLEV_LG:
- if (errcode == NPC_EC_IIP4_CSUM)
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- else
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- break;
- case NPC_ERRLEV_NIX:
- if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
- errcode == NIX_RX_PERRCODE_OL4_LEN ||
- errcode == NIX_RX_PERRCODE_OL4_PORT) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
- } else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
- errcode == NIX_RX_PERRCODE_IL4_LEN ||
- errcode == NIX_RX_PERRCODE_IL4_PORT) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- } else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
- errcode == NIX_RX_PERRCODE_OL3_LEN) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
- break;
- }
- ol_flags[idx] = val;
- }
-}
-
-void *
-otx2_nix_fastpath_lookup_mem_get(void)
-{
- const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- const struct rte_memzone *mz;
- void *mem;
-
- /* SA_TBL starts after PTYPE_ARRAY & ERR_ARRAY */
- RTE_BUILD_BUG_ON(OTX2_NIX_SA_TBL_START != (PTYPE_ARRAY_SZ +
- ERR_ARRAY_SZ));
-
- mz = rte_memzone_lookup(name);
- if (mz != NULL)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ,
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz != NULL) {
- mem = mz->addr;
- /* Form the ptype array lookup memory */
- nix_create_non_tunnel_ptype_array(mem);
- nix_create_tunnel_ptype_array(mem);
- /* Form the rx ol_flags based on errcode */
- nix_create_rx_ol_flags_array(mem);
- return mem;
- }
- return NULL;
-}
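The lookup tables built above are indexed by packing the Rx parser's 4-bit layer types: the LB..LE nibbles form the 16-bit index into the non-tunnel ptype array, and the LF..LH nibbles do the same for the tunnel array. A sketch of the index composition (illustrative; the function name is an assumption):

#include <stdint.h>

/* Pack four 4-bit NPC layer types into an index for the 64K-entry
 * non-tunnel ptype array. */
static inline uint16_t
ptype_index(uint8_t lb, uint8_t lc, uint8_t ld, uint8_t le)
{
	return (lb & 0xF) | ((lc & 0xF) << 4) |
	       ((ld & 0xF) << 8) | ((le & 0xF) << 12);
}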
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
deleted file mode 100644
index 49a700ca1d..0000000000
--- a/drivers/net/octeontx2/otx2_mac.c
+++ /dev/null
@@ -1,151 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-
-int
-otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_mac_addr_set_or_get *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (otx2_dev_active_vfs(dev))
- return -ENOTSUP;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to set mac address in CGX, rc=%d", rc);
-
- return 0;
-}
-
-int
-otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
-{
- struct cgx_max_dmac_entries_get_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- return rsp->max_dmac_filters;
-}
-
-int
-otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr,
- uint32_t index __rte_unused, uint32_t pool __rte_unused)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_mac_addr_add_req *req;
- struct cgx_mac_addr_add_rsp *rsp;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (otx2_dev_active_vfs(dev))
- return -ENOTSUP;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to add mac address, rc=%d", rc);
- goto done;
- }
-
- /* Enable promiscuous mode at NIX level */
- otx2_nix_promisc_config(eth_dev, 1);
- dev->dmac_filter_enable = true;
- eth_dev->data->promiscuous = 0;
-
-done:
- return rc;
-}
-
-void
-otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_mac_addr_del_req *req;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox);
- req->index = index;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to delete mac address, rc=%d", rc);
-}
-
-int
-otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_set_mac_addr *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to set mac address, rc=%d", rc);
- goto done;
- }
-
- otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- /* Install the same entry into CGX DMAC filter table too. */
- otx2_cgx_mac_addr_set(eth_dev, addr);
-
-done:
- return rc;
-}
-
-int
-otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_get_mac_addr_rsp *rsp;
- int rc;
-
- otx2_mbox_alloc_msg_nix_get_mac_addr(mbox);
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get mac address, rc=%d", rc);
- goto done;
- }
-
- otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
-
-done:
- return rc;
-}
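Every control-path operation in the removed file follows the same mailbox pattern: allocate a typed request in the shared mailbox, fill its fields, then process the mailbox and read the typed response. A condensed sketch using the driver helpers exactly as they appear above (assumes the driver's own headers; not new API):

#include "otx2_ethdev.h"

static int
mac_addr_get_sketch(struct otx2_eth_dev *dev, uint8_t *addr)
{
	struct nix_get_mac_addr_rsp *rsp;
	int rc;

	/* 1. allocate the request (no fields to fill for a get) */
	otx2_mbox_alloc_msg_nix_get_mac_addr(dev->mbox);
	/* 2. process the mailbox and fetch the typed response */
	rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsp);
	if (rc)
		return rc;
	otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
	return 0;
}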
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
deleted file mode 100644
index b9c63ad3bc..0000000000
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ /dev/null
@@ -1,339 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-static int
-nix_mc_addr_list_free(struct otx2_eth_dev *dev, uint32_t entry_count)
-{
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (entry_count == 0)
- goto exit;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry->mcam_index;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- if (rc < 0)
- goto exit;
-
- TAILQ_REMOVE(&dev->mc_fltr_tbl, entry, next);
- rte_free(entry);
- entry_count--;
-
- if (entry_count == 0)
- break;
- }
-
- if (entry == NULL)
- dev->mc_tbl_set = false;
-
-exit:
- return rc;
-}
-
-static int
-nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- volatile uint8_t *key_data, *key_mask;
- struct npc_mcam_write_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct npc_xtract_info *x_info;
- uint64_t mcam_data, mcam_mask;
- struct mcast_entry *entry;
- otx2_dxcfg_t *ld_cfg;
- uint8_t *mac_addr;
- uint64_t action;
- int idx, rc = 0;
-
- ld_cfg = &npc->prx_dxcfg;
- /* Get ETH layer profile info for populating mcam entries */
- x_info = &(*ld_cfg)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- if (req == NULL) {
- /* The mbox memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- req->intf = NPC_MCAM_RX;
- req->enable_entry = 1;
-
- /* Channel base extracted to KW0[11:0] */
- req->entry_data.kw[0] = dev->rx_chan_base;
- req->entry_data.kw_mask[0] = RTE_LEN2MASK(12, uint64_t);
-
- /* Update mcam address */
- key_data = (volatile uint8_t *)req->entry_data.kw;
- key_mask = (volatile uint8_t *)req->entry_data.kw_mask;
-
- mcam_data = 0ull;
- mcam_mask = RTE_LEN2MASK(48, uint64_t);
- mac_addr = &entry->mcast_mac.addr_bytes[0];
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- otx2_mbox_memcpy(key_data + x_info->key_off,
- &mcam_data, x_info->len);
- otx2_mbox_memcpy(key_mask + x_info->key_off,
- &mcam_mask, x_info->len);
-
- action = NIX_RX_ACTIONOP_UCAST;
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
- action = NIX_RX_ACTIONOP_RSS;
- action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
- }
-
- action |= ((uint64_t)otx2_pfvf_func(dev->pf, dev->vf)) << 4;
- req->entry_data.action = action;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_mc_addr_list_install(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t entry_count = 0, idx = 0;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (!dev->mc_tbl_set)
- return 0;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- entry_count++;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->priority = NPC_MCAM_ANY_PRIO;
- req->count = entry_count;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || rsp->count < entry_count) {
- otx2_err("Failed to allocate required mcam entries");
- goto exit;
- }
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
-		entry->mcam_index = rsp->entry_list[idx++];
-
- rc = nix_hw_update_mc_addr_list(eth_dev);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_mc_addr_list_uninstall(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (!dev->mc_tbl_set)
- return 0;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- if (req == NULL) {
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-static int
-nix_setup_mc_addr_list(struct otx2_eth_dev *dev,
- struct rte_ether_addr *mc_addr_set)
-{
- struct npc_mcam_ena_dis_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- uint32_t idx = 0;
- int rc = 0;
-
- /* Populate PMD's mcast list with given mcast mac addresses and
- * disable all mcam entries pertaining to the mcast list.
- */
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- rte_memcpy(&entry->mcast_mac, &mc_addr_set[idx++],
- RTE_ETHER_ADDR_LEN);
-
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
- if (req == NULL) {
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_set_mc_addr_list(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *mc_addr_set,
- uint32_t nb_mc_addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t idx, priv_count = 0;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (otx2_dev_is_vf(dev))
- return -ENOTSUP;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- priv_count++;
-
- if (nb_mc_addr == 0 || mc_addr_set == NULL) {
- /* Free existing list if new list is null */
- nb_mc_addr = priv_count;
- goto exit;
- }
-
- for (idx = 0; idx < nb_mc_addr; idx++) {
- if (!rte_is_multicast_ether_addr(&mc_addr_set[idx]))
- return -EINVAL;
- }
-
- /* New list is bigger than the existing list,
- * allocate mcam entries for the extra entries.
- */
- if (nb_mc_addr > priv_count) {
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->priority = NPC_MCAM_ANY_PRIO;
- req->count = nb_mc_addr - priv_count;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || (rsp->count + priv_count < nb_mc_addr)) {
- otx2_err("Failed to allocate required entries");
- nb_mc_addr = priv_count;
- goto exit;
- }
-
- /* Append new mcam entries to the existing mc list */
- for (idx = 0; idx < rsp->count; idx++) {
- entry = rte_zmalloc("otx2_nix_mc_entry",
- sizeof(struct mcast_entry), 0);
- if (!entry) {
- otx2_err("Failed to allocate memory");
- nb_mc_addr = priv_count;
- rc = -ENOMEM;
- goto exit;
- }
- entry->mcam_index = rsp->entry_list[idx];
- TAILQ_INSERT_HEAD(&dev->mc_fltr_tbl, entry, next);
- }
- } else {
- /* Free the extra mcam entries if the new list is smaller
-		 * than the existing list.
- */
- nix_mc_addr_list_free(dev, priv_count - nb_mc_addr);
- }
-
-	/* Now that mc_fltr_tbl has the required number of mcam entries,
-	 * traverse it and add the new multicast filter table entries.
- */
- rc = nix_setup_mc_addr_list(dev, mc_addr_set);
- if (rc < 0)
- goto exit;
-
- rc = nix_hw_update_mc_addr_list(eth_dev);
- if (rc < 0)
- goto exit;
-
- dev->mc_tbl_set = true;
-
- return 0;
-
-exit:
- nix_mc_addr_list_free(dev, nb_mc_addr);
- return rc;
-}
-
-void
-otx2_nix_mc_filter_init(struct otx2_eth_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- return;
-
- TAILQ_INIT(&dev->mc_fltr_tbl);
-}
-
-void
-otx2_nix_mc_filter_fini(struct otx2_eth_dev *dev)
-{
- struct mcast_entry *entry;
- uint32_t count = 0;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- count++;
-
- nix_mc_addr_list_free(dev, count);
-}
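A recurring idiom in the file above: otx2_mbox_alloc_msg_*() returns NULL when the shared mailbox buffer is full, in which case the pending messages are flushed and the allocation is retried once. Isolated as a sketch (same driver helpers as in the removed code; the wrapper itself is hypothetical):

#include <errno.h>
#include "otx2_ethdev.h"

static struct npc_mcam_free_entry_req *
alloc_free_entry_req_retry(struct otx2_mbox *mbox, int *rc)
{
	struct npc_mcam_free_entry_req *req;

	req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
	if (req == NULL) {
		/* Mailbox full: flush what is pending, then retry once */
		otx2_mbox_msg_send(mbox, 0);
		*rc = otx2_mbox_wait_for_rsp(mbox, 0);
		if (*rc < 0)
			return NULL;
		req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
		if (req == NULL)
			*rc = -ENOMEM;
	}
	return req;
}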
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
deleted file mode 100644
index abb2130587..0000000000
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ /dev/null
@@ -1,450 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <ethdev_driver.h>
-
-#include "otx2_ethdev.h"
-
-#define PTP_FREQ_ADJUST (1 << 9)
-
-/* Function to enable ptp config for VFs */
-void
-otx2_nix_ptp_enable_vf(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (otx2_nix_recalc_mtu(eth_dev))
- otx2_err("Failed to set MTU size for ptp");
-
- dev->scalar_ena = true;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
-}
-
-static uint16_t
-nix_eth_ptp_vf_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- struct otx2_eth_rxq *rxq = queue;
- struct rte_eth_dev *eth_dev;
-
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- eth_dev = rxq->eth_dev;
- otx2_nix_ptp_enable_vf(eth_dev);
-
- return 0;
-}
-
-static int
-nix_read_raw_clock(struct otx2_eth_dev *dev, uint64_t *clock, uint64_t *tsc,
- uint8_t is_pmu)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_GET_CLOCK;
- req->is_pmu = is_pmu;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto fail;
-
- if (clock)
- *clock = rsp->clk;
- if (tsc)
- *tsc = rsp->tsc;
-
-fail:
- return rc;
-}
-
-/* This function calculates two parameters "clk_freq_mult" and
- * "clk_delta" which are useful in deriving the PTP HI clock from
- * the timestamp counter (tsc) value.
- */
-int
-otx2_nix_raw_clock_tsc_conv(struct otx2_eth_dev *dev)
-{
- uint64_t ticks_base = 0, ticks = 0, tsc = 0, t_freq;
- int rc, val;
-
- /* Calculating the frequency at which PTP HI clock is running */
- rc = nix_read_raw_clock(dev, &ticks_base, &tsc, false);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- rte_delay_ms(100);
-
- rc = nix_read_raw_clock(dev, &ticks, &tsc, false);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- t_freq = (ticks - ticks_base) * 10;
-
-	/* Calculate the freq multiplier, i.e. the ratio between the
-	 * frequency at which the PTP HI clock runs and the tsc frequency
- */
- dev->clk_freq_mult =
- (double)pow(10, floor(log10(t_freq))) / rte_get_timer_hz();
-
- val = false;
-#ifdef RTE_ARM_EAL_RDTSC_USE_PMU
- val = true;
-#endif
- rc = nix_read_raw_clock(dev, &ticks, &tsc, val);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- /* Calculating delta between PTP HI clock and tsc */
- dev->clk_delta = ((uint64_t)(ticks / dev->clk_freq_mult) - tsc);
-
-fail:
- return rc;
-}
-
-static void
-nix_start_timecounters(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter));
- memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter));
- memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter));
-
- dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
- dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
- dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
-}
-
-static int
-nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-	int rc = -EINVAL;
-
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return rc;
-
- if (en) {
- /* Enable time stamping of sent PTP packets. */
- otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("MBOX ptp tx conf enable failed: err %d", rc);
- return rc;
- }
- /* Enable time stamping of received PTP packets. */
- otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox);
- } else {
- /* Disable time stamping of sent PTP packets. */
- otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("MBOX ptp tx conf disable failed: err %d", rc);
- return rc;
- }
- /* Disable time stamping of received PTP packets. */
- otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox);
- }
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_dev *eth_dev;
- int i;
-
- if (!dev)
- return -EINVAL;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev)
- return -EINVAL;
-
- otx2_dev->ptp_en = ptp_en;
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i];
- rxq->mbuf_initializer =
- otx2_nix_rxq_mbuf_setup(otx2_dev,
- eth_dev->data->port_id);
- }
- if (otx2_dev_is_vf(otx2_dev) && !(otx2_dev_is_sdp(otx2_dev)) &&
- !(otx2_dev_is_lbk(otx2_dev))) {
-		/* In case of a VF, the MTU cannot be set directly in this
-		 * function as it runs as part of an MBOX request (PF->VF)
-		 * while setting the MTU also requires an MBOX message to
-		 * be sent (VF->PF).
- */
- eth_dev->rx_pkt_burst = nix_eth_ptp_vf_burst;
- rte_mb();
- }
-
- return 0;
-}
-
-int
-otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i, rc = 0;
-
-	/* If we are VF/SDP/LBK, ptp cannot be enabled */
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev)) {
- otx2_info("PTP cannot be enabled in case of VF/SDP/LBK");
- return -EINVAL;
- }
-
- if (otx2_ethdev_is_ptp_en(dev)) {
- otx2_info("PTP mode is already enabled");
- return -EINVAL;
- }
-
- if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) {
- otx2_err("Ptype offload is disabled, it should be enabled");
- return -EINVAL;
- }
-
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- otx2_err("Both PTP and switch header enabled");
- return -EINVAL;
- }
-
-	/* Allocate an IOVA address for the tx tstamp */
- const struct rte_memzone *ts;
- ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts",
- 0, OTX2_ALIGN, OTX2_ALIGN,
- dev->node);
- if (ts == NULL) {
- otx2_err("Failed to allocate mem for tx tstamp addr");
- return -ENOMEM;
- }
-
- dev->tstamp.tx_tstamp_iova = ts->iova;
- dev->tstamp.tx_tstamp = ts->addr;
-
- rc = rte_mbuf_dyn_rx_timestamp_register(
- &dev->tstamp.tstamp_dynfield_offset,
- &dev->tstamp.rx_tstamp_dynflag);
- if (rc != 0) {
- otx2_err("Failed to register Rx timestamp field/flag");
- return -rte_errno;
- }
-
- /* System time should be already on by default */
- nix_start_timecounters(eth_dev);
-
- dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
-
- rc = nix_ptp_config(eth_dev, 1);
- if (!rc) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
- otx2_nix_form_default_desc(txq);
- }
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
- }
-
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- otx2_err("Failed to set MTU size for ptp");
-
- return rc;
-}
-
-int
-otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i, rc = 0;
-
- if (!otx2_ethdev_is_ptp_en(dev)) {
- otx2_nix_dbg("PTP mode is disabled");
- return -EINVAL;
- }
-
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return -EINVAL;
-
- dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
- dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
-
- rc = nix_ptp_config(eth_dev, 0);
- if (!rc) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
- otx2_nix_form_default_desc(txq);
- }
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
- }
-
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- otx2_err("Failed to set MTU size for ptp");
-
- return rc;
-}
-
-int
-otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp,
- uint32_t __rte_unused flags)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_timesync_info *tstamp = &dev->tstamp;
- uint64_t ns;
-
- if (!tstamp->rx_ready)
- return -EINVAL;
-
- ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp);
- *timestamp = rte_ns_to_timespec(ns);
- tstamp->rx_ready = 0;
-
- otx2_nix_dbg("rx timestamp: %"PRIu64" sec: %"PRIu64" nsec %"PRIu64"",
- (uint64_t)tstamp->rx_tstamp, (uint64_t)timestamp->tv_sec,
- (uint64_t)timestamp->tv_nsec);
-
- return 0;
-}
-
-int
-otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_timesync_info *tstamp = &dev->tstamp;
- uint64_t ns;
-
- if (*tstamp->tx_tstamp == 0)
- return -EINVAL;
-
- ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp);
- *timestamp = rte_ns_to_timespec(ns);
-
- otx2_nix_dbg("tx timestamp: %"PRIu64" sec: %"PRIu64" nsec %"PRIu64"",
- *tstamp->tx_tstamp, (uint64_t)timestamp->tv_sec,
- (uint64_t)timestamp->tv_nsec);
-
- *tstamp->tx_tstamp = 0;
- rte_wmb();
-
- return 0;
-}
-
-int
-otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- int rc;
-
-	/* Adjust the frequency so that ticks increment at 10^9 ticks per sec */
- if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) {
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_ADJFINE;
- req->scaled_ppm = delta;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-		/* Since the frequency of the PTP comp register has been
-		 * tuned, the delta and freq mult used to derive PTP_HI from
-		 * the timestamp counter must be recalculated.
- */
- rc = otx2_nix_raw_clock_tsc_conv(dev);
- if (rc)
- otx2_err("Failed to calculate delta and freq mult");
- }
- dev->systime_tc.nsec += delta;
- dev->rx_tstamp_tc.nsec += delta;
- dev->tx_tstamp_tc.nsec += delta;
-
- return 0;
-}
-
-int
-otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
- const struct timespec *ts)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t ns;
-
- ns = rte_timespec_to_ns(ts);
- /* Set the time counters to a new value. */
- dev->systime_tc.nsec = ns;
- dev->rx_tstamp_tc.nsec = ns;
- dev->tx_tstamp_tc.nsec = ns;
-
- return 0;
-}
-
-int
-otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- uint64_t ns;
- int rc;
-
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_GET_CLOCK;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- ns = rte_timecounter_update(&dev->systime_tc, rsp->clk);
- *ts = rte_ns_to_timespec(ns);
-
- otx2_nix_dbg("PTP time read: %"PRIu64" .%09"PRIu64"",
- (uint64_t)ts->tv_sec, (uint64_t)ts->tv_nsec);
-
- return 0;
-}
-
-
-int
-otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *clock)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
-	/* This API returns the raw PTP HI clock value. Since LFs do not
-	 * have direct access to the PTP registers, reading it requires an
-	 * mbox msg to the AF. In the fastpath, reading this value for
-	 * every packet (which involves an mbox call) would be very
-	 * expensive, hence derive the PTP HI clock value from the tsc
-	 * using the freq_mult and clk_delta calculated at configure stage.
- */
- *clock = (rte_get_tsc_cycles() + dev->clk_delta) * dev->clk_freq_mult;
-
- return 0;
-}
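The tsc-to-PTP-HI conversion used by otx2_nix_read_clock() above boils down to two calibration constants sampled via the mailbox at configure time. A standalone sketch of the arithmetic (simplified to two raw-clock samples taken 100 ms apart; variable names are assumptions):

#include <math.h>
#include <stdint.h>

static double   clk_freq_mult; /* PTP HI ticks per tsc tick */
static uint64_t clk_delta;     /* offset between scaled PTP HI and tsc */

static void
calibrate(uint64_t ticks0, uint64_t ticks1, uint64_t tsc, uint64_t tsc_hz)
{
	/* Samples are 100 ms apart, so x10 gives ticks per second */
	uint64_t t_freq = (ticks1 - ticks0) * 10;

	/* Round the PTP HI frequency down to a power of 10, as the
	 * removed code does, then ratio it against the tsc frequency */
	clk_freq_mult = pow(10, floor(log10((double)t_freq))) /
			(double)tsc_hz;
	clk_delta = (uint64_t)(ticks1 / clk_freq_mult) - tsc;
}

static uint64_t
ptp_hi_from_tsc(uint64_t tsc)
{
	/* Fastpath read: no mailbox call needed */
	return (uint64_t)((tsc + clk_delta) * clk_freq_mult);
}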
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
deleted file mode 100644
index 68cef1caa3..0000000000
--- a/drivers/net/octeontx2/otx2_rss.c
+++ /dev/null
@@ -1,427 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
- uint8_t group, uint16_t *ind_tbl)
-{
- struct otx2_rss_info *rss = &dev->rss_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *req;
- int rc, idx;
-
- for (idx = 0; idx < rss->rss_size; idx++) {
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req)
- return -ENOMEM;
- }
- req->rss.rq = ind_tbl[idx];
- /* Fill AQ info */
- req->qidx = (group * rss->rss_size) + idx;
- req->ctype = NIX_AQ_CTYPE_RSS;
- req->op = NIX_AQ_INSTOP_INIT;
-
- if (!dev->lock_rx_ctx)
- continue;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req)
- return -ENOMEM;
- }
- req->rss.rq = ind_tbl[idx];
- /* Fill AQ info */
- req->qidx = (group * rss->rss_size) + idx;
- req->ctype = NIX_AQ_CTYPE_RSS;
- req->op = NIX_AQ_INSTOP_LOCK;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- return 0;
-}
-
-int
-otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_rss_info *rss = &dev->rss_info;
- int rc, i, j;
- int idx = 0;
-
- rc = -EINVAL;
- if (reta_size != dev->rss_info.rss_size) {
- otx2_err("Size of hash lookup table configured "
- "(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, dev->rss_info.rss_size);
- goto fail;
- }
-
- /* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
- if ((reta_conf[i].mask >> j) & 0x01)
- rss->ind_tbl[idx] = reta_conf[i].reta[j];
- idx++;
- }
- }
-
- return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
-
-fail:
- return rc;
-}
-
-int
-otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_rss_info *rss = &dev->rss_info;
- int rc, i, j;
-
- rc = -EINVAL;
-
- if (reta_size != dev->rss_info.rss_size) {
- otx2_err("Size of hash lookup table configured "
- "(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, dev->rss_info.rss_size);
- goto fail;
- }
-
- /* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
- if ((reta_conf[i].mask >> j) & 0x01)
-				reta_conf[i].reta[j] =
-					rss->ind_tbl[(i * RTE_ETH_RETA_GROUP_SIZE) + j];
- }
-
- return 0;
-
-fail:
- return rc;
-}
-
-void
-otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key,
- uint32_t key_len)
-{
- const uint8_t default_key[NIX_HASH_KEY_SIZE] = {
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
- };
- struct otx2_rss_info *rss = &dev->rss_info;
- uint64_t *keyptr;
- uint64_t val;
- uint32_t idx;
-
-	if (key == NULL) {
- keyptr = (uint64_t *)(uintptr_t)default_key;
- key_len = NIX_HASH_KEY_SIZE;
- memset(rss->key, 0, key_len);
- } else {
- memcpy(rss->key, key, key_len);
- keyptr = (uint64_t *)rss->key;
- }
-
- for (idx = 0; idx < (key_len >> 3); idx++) {
- val = rte_cpu_to_be_64(*keyptr);
- otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx));
- keyptr++;
- }
-}
-
-static void
-rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
-{
- uint64_t *keyptr = (uint64_t *)key;
- uint64_t val;
- int idx;
-
- for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) {
- val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx));
- *keyptr = rte_be_to_cpu_64(val);
- keyptr++;
- }
-}
-
-#define RSS_IPV4_ENABLE ( \
- RTE_ETH_RSS_IPV4 | \
- RTE_ETH_RSS_FRAG_IPV4 | \
- RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-
-#define RSS_IPV6_ENABLE ( \
- RTE_ETH_RSS_IPV6 | \
- RTE_ETH_RSS_FRAG_IPV6 | \
- RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define RSS_IPV6_EX_ENABLE ( \
- RTE_ETH_RSS_IPV6_EX | \
- RTE_ETH_RSS_IPV6_TCP_EX | \
- RTE_ETH_RSS_IPV6_UDP_EX)
-
-#define RSS_MAX_LEVELS 3
-
-#define RSS_IPV4_INDEX 0
-#define RSS_IPV6_INDEX 1
-#define RSS_TCP_INDEX 2
-#define RSS_UDP_INDEX 3
-#define RSS_SCTP_INDEX 4
-#define RSS_DMAC_INDEX 5
-
-uint32_t
-otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
- uint8_t rss_level)
-{
- uint32_t flow_key_type[RSS_MAX_LEVELS][6] = {
- {
- FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6,
- FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP,
- FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC
- },
- {
- FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6,
- FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP,
- FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC
- },
- {
- FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4,
- FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6,
- FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP,
- FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP,
- FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP,
- FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC
- }
- };
- uint32_t flowkey_cfg = 0;
-
- dev->rss_info.nix_rss = ethdev_rss;
-
- if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
- flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
- }
-
- if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
- flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
-
- if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
-
- if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
-
- if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
-
- if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
-
- if (ethdev_rss & RSS_IPV4_ENABLE)
- flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX];
-
- if (ethdev_rss & RSS_IPV6_ENABLE)
- flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_TCP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_UDP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_SCTP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
- flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
-
- if (ethdev_rss & RSS_IPV6_EX_ENABLE)
- flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
-
- if (ethdev_rss & RTE_ETH_RSS_PORT)
- flowkey_cfg |= FLOW_KEY_TYPE_PORT;
-
- if (ethdev_rss & RTE_ETH_RSS_NVGRE)
- flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
-
- if (ethdev_rss & RTE_ETH_RSS_VXLAN)
- flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
-
- if (ethdev_rss & RTE_ETH_RSS_GENEVE)
- flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
-
- if (ethdev_rss & RTE_ETH_RSS_GTPU)
- flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
-
- return flowkey_cfg;
-}
-
-int
-otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg,
- uint8_t *alg_idx, uint8_t group, int mcam_index)
-{
- struct nix_rss_flowkey_cfg_rsp *rss_rsp;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rss_flowkey_cfg *cfg;
- int rc;
-
- rc = -EINVAL;
-
- dev->rss_info.flowkey_cfg = flowkey_cfg;
-
- cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
-
- cfg->flowkey_cfg = flowkey_cfg;
- cfg->mcam_index = mcam_index; /* -1 indicates default group */
- cfg->group = group; /* 0 is default group */
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp);
- if (rc)
- return rc;
-
- if (alg_idx)
- *alg_idx = rss_rsp->alg_idx;
-
- return rc;
-}
-
-int
-otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t rss_hash_level;
- uint32_t flowkey_cfg;
- uint8_t alg_idx;
- int rc;
-
- rc = -EINVAL;
-
- if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) {
- otx2_err("Hash key size mismatch %d vs %d",
- rss_conf->rss_key_len, NIX_HASH_KEY_SIZE);
- goto fail;
- }
-
- if (rss_conf->rss_key)
- otx2_nix_rss_set_key(dev, rss_conf->rss_key,
- (uint32_t)rss_conf->rss_key_len);
-
- rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
- if (rss_hash_level)
- rss_hash_level -= 1;
- flowkey_cfg =
- otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, rss_hash_level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
- NIX_DEFAULT_RSS_CTX_GROUP,
- NIX_DEFAULT_RSS_MCAM_IDX);
- if (rc) {
- otx2_err("Failed to set RSS hash function rc=%d", rc);
- return rc;
- }
-
- dev->rss_info.alg_idx = alg_idx;
-
-fail:
- return rc;
-}
-
-int
-otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (rss_conf->rss_key)
- rss_get_key(dev, rss_conf->rss_key);
-
- rss_conf->rss_key_len = NIX_HASH_KEY_SIZE;
- rss_conf->rss_hf = dev->rss_info.nix_rss;
-
- return 0;
-}
-
-int
-otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t idx, qcnt = eth_dev->data->nb_rx_queues;
- uint8_t rss_hash_level;
- uint32_t flowkey_cfg;
- uint64_t rss_hf;
- uint8_t alg_idx;
- int rc;
-
- /* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
- return 0;
-
- /* Update default RSS key and cfg */
- otx2_nix_rss_set_key(dev, NULL, 0);
-
- /* Update default RSS RETA */
- for (idx = 0; idx < dev->rss_info.rss_size; idx++)
- dev->rss_info.ind_tbl[idx] = idx % qcnt;
-
- /* Init RSS table context */
- rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
- if (rc) {
- otx2_err("Failed to init RSS table rc=%d", rc);
- return rc;
- }
-
- rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
- if (rss_hash_level)
- rss_hash_level -= 1;
- flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
- NIX_DEFAULT_RSS_CTX_GROUP,
- NIX_DEFAULT_RSS_MCAM_IDX);
- if (rc) {
- otx2_err("Failed to set RSS hash function rc=%d", rc);
- return rc;
- }
-
- dev->rss_info.alg_idx = alg_idx;
-
- return 0;
-}
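The RETA update/query paths above walk the standard ethdev representation: the table is split into 64-entry groups (struct rte_eth_rss_reta_entry64) and each group's mask selects which slots to touch. A sketch of that iteration (illustrative; not the driver function):

#include <stdint.h>
#include <rte_ethdev.h>

static void
reta_apply_sketch(uint16_t *ind_tbl, uint16_t reta_size,
		  const struct rte_eth_rss_reta_entry64 *conf)
{
	uint16_t i, j, idx = 0;

	for (i = 0; i < reta_size / RTE_ETH_RETA_GROUP_SIZE; i++) {
		for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
			/* Only slots whose mask bit is set are updated */
			if ((conf[i].mask >> j) & 0x01)
				ind_tbl[idx] = conf[i].reta[j];
			idx++;
		}
	}
}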
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
deleted file mode 100644
index 5ee1aed786..0000000000
--- a/drivers/net/octeontx2/otx2_rx.c
+++ /dev/null
@@ -1,430 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_vect.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_rx.h"
-
-#define NIX_DESCS_PER_LOOP 4
-#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
-#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ)
-
-static inline uint16_t
-nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata,
- const uint16_t pkts, const uint32_t qmask)
-{
- uint32_t available = rxq->available;
-
- /* Update the available count if cached value is not enough */
- if (unlikely(available < pkts)) {
- uint64_t reg, head, tail;
-
- /* Use LDADDA version to avoid reorder */
- reg = otx2_atomic64_add_sync(wdata, rxq->cq_status);
- /* CQ_OP_STATUS operation error */
- if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) ||
- reg & BIT_ULL(CQ_OP_STAT_CQ_ERR))
- return 0;
-
- tail = reg & 0xFFFFF;
- head = (reg >> 20) & 0xFFFFF;
- if (tail < head)
- available = tail - head + qmask + 1;
- else
- available = tail - head;
-
- rxq->available = available;
- }
-
- return RTE_MIN(pkts, available);
-}
-
-static __rte_always_inline uint16_t
-nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- const uint64_t mbuf_init = rxq->mbuf_initializer;
- const void *lookup_mem = rxq->lookup_mem;
- const uint64_t data_off = rxq->data_off;
- const uintptr_t desc = rxq->desc;
- const uint64_t wdata = rxq->wdata;
- const uint32_t qmask = rxq->qmask;
- uint16_t packets = 0, nb_pkts;
- uint32_t head = rxq->head;
- struct nix_cqe_hdr_s *cq;
- struct rte_mbuf *mbuf;
-
- nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
-
- while (packets < nb_pkts) {
- /* Prefetch N desc ahead */
- rte_prefetch_non_temporal((void *)(desc +
- (CQE_SZ((head + 2) & qmask))));
- cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
-
- mbuf = nix_get_mbuf_from_cqe(cq, data_off);
-
- otx2_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
- flags);
- otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags,
- (uint64_t *)((uint8_t *)mbuf + data_off));
- rx_pkts[packets++] = mbuf;
- otx2_prefetch_store_keep(mbuf);
- head++;
- head &= qmask;
- }
-
- rxq->head = head;
- rxq->available -= nb_pkts;
-
- /* Free all the CQs that we've processed */
- otx2_write64((wdata | nb_pkts), rxq->cq_door);
-
- return nb_pkts;
-}
-
-#if defined(RTE_ARCH_ARM64)
-
-static __rte_always_inline uint64_t
-nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
-{
- if (w2 & BIT_ULL(21) /* vtag0_gone */) {
- ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
- *f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline uint64_t
-nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
-{
- if (w2 & BIT_ULL(23) /* vtag1_gone */) {
- ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
- mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline uint16_t
-nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
-	struct otx2_eth_rxq *rxq = rx_queue;
-	uint16_t packets = 0;
- uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
- const uint64_t mbuf_initializer = rxq->mbuf_initializer;
- const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
- uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
- uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
- struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
- const uint16_t *lookup_mem = rxq->lookup_mem;
- const uint32_t qmask = rxq->qmask;
- const uint64_t wdata = rxq->wdata;
- const uintptr_t desc = rxq->desc;
- uint8x16_t f0, f1, f2, f3;
- uint32_t head = rxq->head;
- uint16_t pkts_left;
-
- pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
- pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
-
-	/* Packets have to be floor-aligned to NIX_DESCS_PER_LOOP */
- pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
-
- while (packets < pkts) {
- /* Exit loop if head is about to wrap and become unaligned */
- if (((head + NIX_DESCS_PER_LOOP - 1) & qmask) <
- NIX_DESCS_PER_LOOP) {
- pkts_left += (pkts - packets);
- break;
- }
-
- const uintptr_t cq0 = desc + CQE_SZ(head);
-
- /* Prefetch N desc ahead */
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
-
- /* Get NIX_RX_SG_S for size and buffer pointer */
- cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
- cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
- cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
- cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
-
- /* Extract mbuf from NIX_RX_SG_S */
- mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
- mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
- mbuf01 = vqsubq_u64(mbuf01, data_off);
- mbuf23 = vqsubq_u64(mbuf23, data_off);
-
- /* Move mbufs to scalar registers for future use */
- mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
- mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
- mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
- mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
-
- /* Mask to get packet len from NIX_RX_SG_S */
- const uint8x16_t shuf_msk = {
- 0xFF, 0xFF, /* pkt_type set as unknown */
- 0xFF, 0xFF, /* pkt_type set as unknown */
- 0, 1, /* octet 1~0, low 16 bits pkt_len */
- 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
- 0, 1, /* octet 1~0, 16 bits data_len */
- 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF
- };
-
- /* Form the rx_descriptor_fields1 with pkt_len and data_len */
- f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
- f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
- f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
- f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
-
- /* Load CQE word0 and word 1 */
- uint64_t cq0_w0 = ((uint64_t *)(cq0 + CQE_SZ(0)))[0];
- uint64_t cq0_w1 = ((uint64_t *)(cq0 + CQE_SZ(0)))[1];
- uint64_t cq1_w0 = ((uint64_t *)(cq0 + CQE_SZ(1)))[0];
- uint64_t cq1_w1 = ((uint64_t *)(cq0 + CQE_SZ(1)))[1];
- uint64_t cq2_w0 = ((uint64_t *)(cq0 + CQE_SZ(2)))[0];
- uint64_t cq2_w1 = ((uint64_t *)(cq0 + CQE_SZ(2)))[1];
- uint64_t cq3_w0 = ((uint64_t *)(cq0 + CQE_SZ(3)))[0];
- uint64_t cq3_w1 = ((uint64_t *)(cq0 + CQE_SZ(3)))[1];
-
- if (flags & NIX_RX_OFFLOAD_RSS_F) {
- /* Fill rss in the rx_descriptor_fields1 */
- f0 = vsetq_lane_u32(cq0_w0, f0, 3);
- f1 = vsetq_lane_u32(cq1_w0, f1, 3);
- f2 = vsetq_lane_u32(cq2_w0, f2, 3);
- f3 = vsetq_lane_u32(cq3_w0, f3, 3);
- ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
- } else {
- ol_flags0 = 0; ol_flags1 = 0;
- ol_flags2 = 0; ol_flags3 = 0;
- }
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F) {
- /* Fill packet_type in the rx_descriptor_fields1 */
- f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq0_w1),
- f0, 0);
- f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq1_w1),
- f1, 0);
- f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq2_w1),
- f2, 0);
- f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq3_w1),
- f3, 0);
- }
-
- if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) {
- ol_flags0 |= nix_rx_olflags_get(lookup_mem, cq0_w1);
- ol_flags1 |= nix_rx_olflags_get(lookup_mem, cq1_w1);
- ol_flags2 |= nix_rx_olflags_get(lookup_mem, cq2_w1);
- ol_flags3 |= nix_rx_olflags_get(lookup_mem, cq3_w1);
- }
-
- if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
- uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
- uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
- uint64_t cq2_w2 = *(uint64_t *)(cq0 + CQE_SZ(2) + 16);
- uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16);
-
- ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0);
- ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1);
- ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2);
- ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3);
-
- ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0);
- ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1);
- ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2);
- ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3);
- }
-
- if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) {
- ol_flags0 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0);
- ol_flags1 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1);
- ol_flags2 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2);
- ol_flags3 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3);
- }
-
- /* Form rearm_data with ol_flags */
- rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1);
- rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1);
- rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1);
- rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1);
-
- /* Update rx_descriptor_fields1 */
- vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0);
- vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1);
- vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2);
- vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3);
-
- /* Update rearm_data */
- vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0);
- vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1);
- vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
- vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
-
- /* Mark that there are no more segments */
- mbuf0->next = NULL;
- mbuf1->next = NULL;
- mbuf2->next = NULL;
- mbuf3->next = NULL;
-
- /* Store the mbufs to rx_pkts */
- vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01);
- vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23);
-
- /* Prefetch mbufs */
- otx2_prefetch_store_keep(mbuf0);
- otx2_prefetch_store_keep(mbuf1);
- otx2_prefetch_store_keep(mbuf2);
- otx2_prefetch_store_keep(mbuf3);
-
- /* Mark the mempool objects as "get" since they are allocated by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
-
- /* Advance head pointer and packets */
- head += NIX_DESCS_PER_LOOP; head &= qmask;
- packets += NIX_DESCS_PER_LOOP;
- }
-
- rxq->head = head;
- rxq->available -= packets;
-
- rte_io_wmb();
- /* Free all the CQEs that we have processed */
- otx2_write64((rxq->wdata | packets), rxq->cq_door);
-
- if (unlikely(pkts_left))
- packets += nix_recv_pkts(rx_queue, &rx_pkts[packets],
- pkts_left, flags);
-
- return packets;
-}
-
-#else
-
-static inline uint16_t
-nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- RTE_SET_USED(rx_queue);
- RTE_SET_USED(rx_pkts);
- RTE_SET_USED(pkts);
- RTE_SET_USED(flags);
-
- return 0;
-}
-
-#endif
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
-} \
- \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
- (flags) | NIX_RX_MULTI_SEG_F); \
-} \
- \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- /* TSTMP is not supported by vector */ \
- if ((flags) & NIX_RX_OFFLOAD_TSTAMP_F) \
- return 0; \
- return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \
-} \
-
-NIX_RX_FASTPATH_MODES
-#undef R
-
-static inline void
-pick_rx_func(struct rte_eth_dev *eth_dev,
- const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* [SEC] [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
- eth_dev->rx_pkt_burst = rx_burst
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
-}
-
-void
-otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- /* When PTP is enabled, the scalar Rx function should be chosen, as
- * most PTP applications receive in bursts of a single packet.
- */
- if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- pick_rx_func(eth_dev, nix_eth_rx_burst);
- else
- pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
- pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
-
- /* Keep the no-offload multi-segment version for the teardown sequence */
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- dev->rx_pkt_burst_no_offload =
- nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
- rte_mb();
-}
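
For reference, the R() expansion above is an X-macro specialization: every
offload-flag combination gets its own burst function in which `flags` is a
compile-time constant, so the per-flag branches are folded away, and
pick_rx_func() then selects the matching pointer by indexing flag bits. A
minimal standalone sketch of the technique (all names illustrative, not from
the driver):

    #include <stdio.h>

    #define FLAG_A (1 << 0)
    #define FLAG_B (1 << 1)

    /* Generic worker: 'flags' is a compile-time constant in every
     * specialization, so untaken branches are eliminated.
     */
    static inline int work(int x, const unsigned int flags)
    {
        if (flags & FLAG_A)
            x += 1;
        if (flags & FLAG_B)
            x *= 2;
        return x;
    }

    /* One R() entry per flag combination; expanded twice below. */
    #define MODES \
    R(none, 0, 0, 0) \
    R(a,    0, 1, FLAG_A) \
    R(b,    1, 0, FLAG_B) \
    R(a_b,  1, 1, FLAG_A | FLAG_B)

    #define R(name, fb, fa, flags) \
    static int work_ ## name(int x) { return work(x, (flags)); }
    MODES
    #undef R

    int main(void)
    {
        /* Function table indexed by flag bits, as pick_rx_func() does. */
        int (*const tbl[2][2])(int) = {
    #define R(name, fb, fa, flags) [fb][fa] = work_ ## name,
    MODES
    #undef R
        };

        printf("%d\n", tbl[1][1](3)); /* (3 + 1) * 2 = 8 */
        return 0;
    }
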
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
deleted file mode 100644
index 98406244e2..0000000000
--- a/drivers/net/octeontx2/otx2_rx.h
+++ /dev/null
@@ -1,583 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_RX_H__
-#define __OTX2_RX_H__
-
-#include <rte_ether.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_ipsec_anti_replay.h"
-#include "otx2_ipsec_fp.h"
-
-/* Default mark value used when none is provided. */
-#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff
-
-#define PTYPE_NON_TUNNEL_WIDTH 16
-#define PTYPE_TUNNEL_WIDTH 12
-#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_NON_TUNNEL_WIDTH)
-#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_TUNNEL_WIDTH)
-#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\
- PTYPE_TUNNEL_ARRAY_SZ) *\
- sizeof(uint16_t))
-
-#define NIX_RX_OFFLOAD_NONE (0)
-#define NIX_RX_OFFLOAD_RSS_F BIT(0)
-#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
-#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2)
-#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
-#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
-#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
-#define NIX_RX_OFFLOAD_SECURITY_F BIT(6)
-
-/* Flags to control the cqe_to_mbuf conversion function.
- * Defined from the top bit downwards to denote that they are
- * not used as offload flags when picking the Rx function.
- */
-#define NIX_RX_MULTI_SEG_F BIT(15)
-#define NIX_TIMESYNC_RX_OFFSET 8
-
-/* Inline IPsec offsets */
-
-/* nix_cqe_hdr_s + nix_rx_parse_s + nix_rx_sg_s + nix_iova_s */
-#define INLINE_CPT_RESULT_OFFSET 80
-
-struct otx2_timesync_info {
- uint64_t rx_tstamp;
- rte_iova_t tx_tstamp_iova;
- uint64_t *tx_tstamp;
- uint64_t rx_tstamp_dynflag;
- int tstamp_dynfield_offset;
- uint8_t tx_ready;
- uint8_t rx_ready;
-} __rte_cache_aligned;
-
-union mbuf_initializer {
- struct {
- uint16_t data_off;
- uint16_t refcnt;
- uint16_t nb_segs;
- uint16_t port;
- } fields;
- uint64_t value;
-};
-
-static inline rte_mbuf_timestamp_t *
-otx2_timestamp_dynfield(struct rte_mbuf *mbuf,
- struct otx2_timesync_info *info)
-{
- return RTE_MBUF_DYNFIELD(mbuf,
- info->tstamp_dynfield_offset, rte_mbuf_timestamp_t *);
-}
-
-static __rte_always_inline void
-otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
- struct otx2_timesync_info *tstamp, const uint16_t flag,
- uint64_t *tstamp_ptr)
-{
- if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) &&
- (mbuf->data_off == RTE_PKTMBUF_HEADROOM +
- NIX_TIMESYNC_RX_OFFSET)) {
-
- mbuf->pkt_len -= NIX_TIMESYNC_RX_OFFSET;
-
- /* Read the Rx timestamp inserted by CGX at the start
- * of the packet data.
- */
- *otx2_timestamp_dynfield(mbuf, tstamp) =
- rte_be_to_cpu_64(*tstamp_ptr);
- /* The RTE_MBUF_F_RX_IEEE1588_TMST flag needs to be set only when
- * PTP packets are received.
- */
- if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
- tstamp->rx_tstamp =
- *otx2_timestamp_dynfield(mbuf, tstamp);
- tstamp->rx_ready = 1;
- mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
- RTE_MBUF_F_RX_IEEE1588_TMST |
- tstamp->rx_tstamp_dynflag;
- }
- }
-}
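
The tstamp_dynfield_offset and rx_tstamp_dynflag members used above come from
the mbuf dynamic field/flag registry. A minimal sketch of how a driver obtains
them, assuming DPDK's rte_mbuf_dyn_rx_timestamp_register() helper (error
handling elided):

    #include <stdint.h>
    #include <rte_mbuf_dyn.h>

    /* Fills in the timestamp field offset and the Rx timestamp ol_flags
     * bit, which are later consumed via RTE_MBUF_DYNFIELD() as above.
     */
    static int setup_rx_timestamp(int *field_offset, uint64_t *rx_flag)
    {
        return rte_mbuf_dyn_rx_timestamp_register(field_offset, rx_flag);
    }
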
-
-static __rte_always_inline uint64_t
-nix_clear_data_off(uint64_t oldval)
-{
- union mbuf_initializer mbuf_init = { .value = oldval };
-
- mbuf_init.fields.data_off = 0;
- return mbuf_init.value;
-}
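
The union above lets one 16-bit field of the precomputed 64-bit rearm word be
rewritten without shift/mask arithmetic. A small sketch of the same trick
(field layout mirrors union mbuf_initializer):

    #include <stdint.h>

    union init_word {
        struct {
            uint16_t data_off;
            uint16_t refcnt;
            uint16_t nb_segs;
            uint16_t port;
        } fields;
        uint64_t value;
    };

    /* Rewrite only the port field inside the packed 64-bit word. */
    static inline uint64_t set_port(uint64_t oldval, uint16_t port)
    {
        union init_word w = { .value = oldval };

        w.fields.port = port;
        return w.value;
    }
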
-
-static __rte_always_inline struct rte_mbuf *
-nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
-{
- rte_iova_t buff;
-
- /* Skip the CQE, NIX_RX_PARSE_S and SG HDR (9 dwords) and peek at
- * the buffer address.
- */
- buff = *((rte_iova_t *)((uint64_t *)cq + 9));
- return (struct rte_mbuf *)(buff - data_off);
-}
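
nix_get_mbuf_from_cqe() relies on the mbuf header sitting a fixed distance
before the data buffer address reported by hardware. A trivial sketch of that
recovery (the offset value is illustrative):

    #include <stdint.h>

    #define HDR_OFF 128 /* assumed: header + headroom preceding the data */

    /* Recover the object header from the raw buffer address returned by
     * the hardware.
     */
    static inline void *buf_to_obj(uintptr_t buf_addr)
    {
        return (void *)(buf_addr - HDR_OFF);
    }
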
-
-
-static __rte_always_inline uint32_t
-nix_ptype_get(const void * const lookup_mem, const uint64_t in)
-{
- const uint16_t * const ptype = lookup_mem;
- const uint16_t lh_lg_lf = (in & 0xFFF0000000000000) >> 52;
- const uint16_t tu_l2 = ptype[(in & 0x000FFFF000000000) >> 36];
- const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lh_lg_lf];
-
- return (il4_tu << PTYPE_NON_TUNNEL_WIDTH) | tu_l2;
-}
-
-static __rte_always_inline uint32_t
-nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in)
-{
- const uint32_t * const ol_flags = (const uint32_t *)
- ((const uint8_t *)lookup_mem + PTYPE_ARRAY_SZ);
-
- return ol_flags[(in & 0xfff00000) >> 20];
-}
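
Both helpers above replace per-packet branching with lookup tables built once
at initialization and indexed by hardware parse bits. A compact sketch of the
idea (codes and flag values are illustrative):

    #include <stdint.h>

    #define CODE_BITS 4
    static uint32_t code2olflags[1 << CODE_BITS];

    /* Precompute the parse-code -> ol_flags translation once at init. */
    static void build_olflags_table(void)
    {
        unsigned int c;

        for (c = 0; c < (1u << CODE_BITS); c++) {
            uint32_t fl = 0;

            if (c & 0x1)
                fl |= 0x10; /* e.g. bad L3 checksum */
            if (c & 0x2)
                fl |= 0x20; /* e.g. bad L4 checksum */
            code2olflags[c] = fl;
        }
    }

    /* Per packet: one shift, one mask, one load. */
    static inline uint32_t olflags_get(uint64_t w1)
    {
        return code2olflags[(w1 >> 20) & ((1u << CODE_BITS) - 1)];
    }
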
-
-static inline uint64_t
-nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
- struct rte_mbuf *mbuf)
-{
- /* There is no separate bit to check whether match_id is valid,
- * and no flag to distinguish an RTE_FLOW_ACTION_TYPE_FLAG action
- * from an RTE_FLOW_ACTION_TYPE_MARK action. The former case is
- * addressed by treating 0 as an invalid value and incrementing/
- * decrementing the match_id pair when MARK is activated. The
- * latter case is addressed by defining OTX2_FLOW_MARK_DEFAULT as
- * the value for RTE_FLOW_ACTION_TYPE_MARK.
- * This translates to not using OTX2_FLOW_ACTION_FLAG_DEFAULT - 1
- * and OTX2_FLOW_ACTION_FLAG_DEFAULT for match_id, i.e. valid
- * mark_ids range from 0 to OTX2_FLOW_ACTION_FLAG_DEFAULT - 2.
- */
- if (likely(match_id)) {
- ol_flags |= RTE_MBUF_F_RX_FDIR;
- if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
- ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
- mbuf->hash.fdir.hi = match_id - 1;
- }
- }
-
- return ol_flags;
-}
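
The comment above boils down to an off-by-one encoding: user mark values are
shifted up by one so that 0 can keep meaning "no mark". A tiny sketch of the
round trip (FLAG_DEFAULT stands in for OTX2_FLOW_ACTION_FLAG_DEFAULT):

    #include <assert.h>
    #include <stdint.h>

    #define FLAG_DEFAULT 0xffff /* mirrors OTX2_FLOW_ACTION_FLAG_DEFAULT */

    static inline uint16_t mark_to_match_id(uint16_t mark) { return mark + 1; }
    static inline uint16_t match_id_to_mark(uint16_t id) { return id - 1; }

    int main(void)
    {
        /* Valid user marks are 0 .. FLAG_DEFAULT - 2. */
        assert(match_id_to_mark(mark_to_match_id(100)) == 100);
        assert(mark_to_match_id(FLAG_DEFAULT - 2) != FLAG_DEFAULT);
        return 0;
    }
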
-
-static __rte_always_inline void
-nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
- struct rte_mbuf *mbuf, uint64_t rearm)
-{
- const rte_iova_t *iova_list;
- struct rte_mbuf *head;
- const rte_iova_t *eol;
- uint8_t nb_segs;
- uint64_t sg;
-
- sg = *(const uint64_t *)(rx + 1);
- nb_segs = (sg >> 48) & 0x3;
- mbuf->nb_segs = nb_segs;
- mbuf->data_len = sg & 0xFFFF;
- sg = sg >> 16;
-
- eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
- /* Skip SG_S and the first IOVA */
- iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
- nb_segs--;
-
- rearm = rearm & ~0xFFFF;
-
- head = mbuf;
- while (nb_segs) {
- mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
- mbuf = mbuf->next;
-
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
-
- mbuf->data_len = sg & 0xFFFF;
- sg = sg >> 16;
- *(uint64_t *)(&mbuf->rearm_data) = rearm;
- nb_segs--;
- iova_list++;
-
- if (!nb_segs && (iova_list + 1 < eol)) {
- sg = *(const uint64_t *)(iova_list);
- nb_segs = (sg >> 48) & 0x3;
- head->nb_segs += nb_segs;
- iova_list = (const rte_iova_t *)(iova_list + 1);
- }
- }
- mbuf->next = NULL;
-}
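
The scatter-gather word parsed above packs the segment count in bits [49:48]
and up to three 16-bit segment sizes below it. A standalone sketch of the
decode:

    #include <stdint.h>
    #include <stdio.h>

    static void decode_sg(uint64_t sg)
    {
        unsigned int i, nb_segs = (sg >> 48) & 0x3;

        for (i = 0; i < nb_segs; i++) {
            printf("seg%u len=%u\n", i, (unsigned int)(sg & 0xFFFF));
            sg >>= 16;
        }
    }

    int main(void)
    {
        /* Two segments of 1500 and 42 bytes. */
        decode_sg((2ULL << 48) | (42ULL << 16) | 1500);
        return 0;
    }
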
-
-static __rte_always_inline uint16_t
-nix_rx_sec_cptres_get(const void *cq)
-{
- volatile const struct otx2_cpt_res *res;
-
- res = (volatile const struct otx2_cpt_res *)((const char *)cq +
- INLINE_CPT_RESULT_OFFSET);
-
- return res->u16[0];
-}
-
-static __rte_always_inline void *
-nix_rx_sec_sa_get(const void * const lookup_mem, int spi, uint16_t port)
-{
- const uint64_t *const *sa_tbl = (const uint64_t * const *)
- ((const uint8_t *)lookup_mem + OTX2_NIX_SA_TBL_START);
-
- return (void *)sa_tbl[port][spi];
-}
-
-static __rte_always_inline uint64_t
-nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
- const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
- const void * const lookup_mem)
-{
- uint8_t *l2_ptr, *l3_ptr, *l2_ptr_actual, *l3_ptr_actual;
- struct otx2_ipsec_fp_in_sa *sa;
- uint16_t m_len, l2_len, ip_len;
- struct rte_ipv6_hdr *ip6h;
- struct rte_ipv4_hdr *iph;
- uint16_t *ether_type;
- uint32_t spi;
- int i;
-
- if (unlikely(nix_rx_sec_cptres_get(cq) != OTX2_SEC_COMP_GOOD))
- return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
-
- /* The lower 20 bits of the tag hold the SPI */
- spi = cq->tag & 0xFFFFF;
-
- sa = nix_rx_sec_sa_get(lookup_mem, spi, m->port);
- *rte_security_dynfield(m) = sa->udata64;
-
- l2_ptr = rte_pktmbuf_mtod(m, uint8_t *);
- l2_len = rx->lcptr - rx->laptr;
- l3_ptr = RTE_PTR_ADD(l2_ptr, l2_len);
-
- if (sa->replay_win_sz) {
- if (cpt_ipsec_ip_antireplay_check(sa, l3_ptr) < 0)
- return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
- }
-
- l2_ptr_actual = RTE_PTR_ADD(l2_ptr,
- sizeof(struct otx2_ipsec_fp_res_hdr));
- l3_ptr_actual = RTE_PTR_ADD(l3_ptr,
- sizeof(struct otx2_ipsec_fp_res_hdr));
-
- for (i = l2_len - RTE_ETHER_TYPE_LEN - 1; i >= 0; i--)
- l2_ptr_actual[i] = l2_ptr[i];
-
- m->data_off += sizeof(struct otx2_ipsec_fp_res_hdr);
-
- ether_type = RTE_PTR_SUB(l3_ptr_actual, RTE_ETHER_TYPE_LEN);
-
- iph = (struct rte_ipv4_hdr *)l3_ptr_actual;
- if ((iph->version_ihl >> 4) == 4) {
- ip_len = rte_be_to_cpu_16(iph->total_length);
- *ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
- } else {
- ip6h = (struct rte_ipv6_hdr *)iph;
- ip_len = rte_be_to_cpu_16(ip6h->payload_len);
- *ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
- }
-
- m_len = ip_len + l2_len;
- m->data_len = m_len;
- m->pkt_len = m_len;
- return RTE_MBUF_F_RX_SEC_OFFLOAD;
-}
-
-static __rte_always_inline void
-otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
- struct rte_mbuf *mbuf, const void *lookup_mem,
- const uint64_t val, const uint16_t flag)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
- const uint64_t w1 = *(const uint64_t *)rx;
- const uint16_t len = rx->pkt_lenm1 + 1;
- uint64_t ol_flags = 0;
-
- /* Mark the mempool object as "get" since it is allocated by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
-
- if (flag & NIX_RX_OFFLOAD_PTYPE_F)
- mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
- else
- mbuf->packet_type = 0;
-
- if (flag & NIX_RX_OFFLOAD_RSS_F) {
- mbuf->hash.rss = tag;
- ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
- }
-
- if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
- ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
-
- if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
- if (rx->vtag0_gone) {
- ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
- mbuf->vlan_tci = rx->vtag0_tci;
- }
- if (rx->vtag1_gone) {
- ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
- mbuf->vlan_tci_outer = rx->vtag1_tci;
- }
- }
-
- if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F)
- ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf);
-
- if ((flag & NIX_RX_OFFLOAD_SECURITY_F) &&
- cq->cqe_type == NIX_XQE_TYPE_RX_IPSECH) {
- *(uint64_t *)(&mbuf->rearm_data) = val;
- ol_flags |= nix_rx_sec_mbuf_update(rx, cq, mbuf, lookup_mem);
- mbuf->ol_flags = ol_flags;
- return;
- }
-
- mbuf->ol_flags = ol_flags;
- *(uint64_t *)(&mbuf->rearm_data) = val;
- mbuf->pkt_len = len;
-
- if (flag & NIX_RX_MULTI_SEG_F) {
- nix_cqe_xtract_mseg(rx, mbuf, val);
- } else {
- mbuf->data_len = len;
- mbuf->next = NULL;
- }
-}
-
-#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
-#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F
-#define RSS_F NIX_RX_OFFLOAD_RSS_F
-#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
-#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F
-#define TS_F NIX_RX_OFFLOAD_TSTAMP_F
-#define RX_SEC_F NIX_RX_OFFLOAD_SECURITY_F
-
-/* [SEC] [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
-#define NIX_RX_FASTPATH_MODES \
-R(no_offload, 0, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \
-R(rss, 0, 0, 0, 0, 0, 0, 1, RSS_F) \
-R(ptype, 0, 0, 0, 0, 0, 1, 0, PTYPE_F) \
-R(ptype_rss, 0, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \
-R(cksum, 0, 0, 0, 0, 1, 0, 0, CKSUM_F) \
-R(cksum_rss, 0, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \
-R(cksum_ptype, 0, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \
-R(cksum_ptype_rss, 0, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\
-R(vlan, 0, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \
-R(vlan_rss, 0, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \
-R(vlan_ptype, 0, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \
-R(vlan_ptype_rss, 0, 0, 0, 1, 0, 1, 1, \
- RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum, 0, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \
-R(vlan_cksum_rss, 0, 0, 0, 1, 1, 0, 1, \
- RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype, 0, 0, 0, 1, 1, 1, 0, \
- RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(vlan_cksum_ptype_rss, 0, 0, 0, 1, 1, 1, 1, \
- RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(mark, 0, 0, 1, 0, 0, 0, 0, MARK_F) \
-R(mark_rss, 0, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \
-R(mark_ptype, 0, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \
-R(mark_ptype_rss, 0, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F) \
-R(mark_cksum, 0, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \
-R(mark_cksum_rss, 0, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F) \
-R(mark_cksum_ptype, 0, 0, 1, 0, 1, 1, 0, \
- MARK_F | CKSUM_F | PTYPE_F) \
-R(mark_cksum_ptype_rss, 0, 0, 1, 0, 1, 1, 1, \
- MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(mark_vlan, 0, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \
-R(mark_vlan_rss, 0, 0, 1, 1, 0, 0, 1, \
- MARK_F | RX_VLAN_F | RSS_F) \
-R(mark_vlan_ptype, 0, 0, 1, 1, 0, 1, 0, \
- MARK_F | RX_VLAN_F | PTYPE_F) \
-R(mark_vlan_ptype_rss, 0, 0, 1, 1, 0, 1, 1, \
- MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(mark_vlan_cksum, 0, 0, 1, 1, 1, 0, 0, \
- MARK_F | RX_VLAN_F | CKSUM_F) \
-R(mark_vlan_cksum_rss, 0, 0, 1, 1, 1, 0, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(mark_vlan_cksum_ptype, 0, 0, 1, 1, 1, 1, 0, \
- MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(mark_vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts, 0, 1, 0, 0, 0, 0, 0, TS_F) \
-R(ts_rss, 0, 1, 0, 0, 0, 0, 1, TS_F | RSS_F) \
-R(ts_ptype, 0, 1, 0, 0, 0, 1, 0, TS_F | PTYPE_F) \
-R(ts_ptype_rss, 0, 1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F) \
-R(ts_cksum, 0, 1, 0, 0, 1, 0, 0, TS_F | CKSUM_F) \
-R(ts_cksum_rss, 0, 1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F) \
-R(ts_cksum_ptype, 0, 1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F) \
-R(ts_cksum_ptype_rss, 0, 1, 0, 0, 1, 1, 1, \
- TS_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_vlan, 0, 1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F) \
-R(ts_vlan_rss, 0, 1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F) \
-R(ts_vlan_ptype, 0, 1, 0, 1, 0, 1, 0, \
- TS_F | RX_VLAN_F | PTYPE_F) \
-R(ts_vlan_ptype_rss, 0, 1, 0, 1, 0, 1, 1, \
- TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(ts_vlan_cksum, 0, 1, 0, 1, 1, 0, 0, \
- TS_F | RX_VLAN_F | CKSUM_F) \
-R(ts_vlan_cksum_rss, 0, 1, 0, 1, 1, 0, 1, \
- TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(ts_vlan_cksum_ptype, 0, 1, 0, 1, 1, 1, 0, \
- TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(ts_vlan_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, 1, \
- TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_mark, 0, 1, 1, 0, 0, 0, 0, TS_F | MARK_F) \
-R(ts_mark_rss, 0, 1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F) \
-R(ts_mark_ptype, 0, 1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F) \
-R(ts_mark_ptype_rss, 0, 1, 1, 0, 0, 1, 1, \
- TS_F | MARK_F | PTYPE_F | RSS_F) \
-R(ts_mark_cksum, 0, 1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F) \
-R(ts_mark_cksum_rss, 0, 1, 1, 0, 1, 0, 1, \
- TS_F | MARK_F | CKSUM_F | RSS_F) \
-R(ts_mark_cksum_ptype, 0, 1, 1, 0, 1, 1, 0, \
- TS_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(ts_mark_cksum_ptype_rss, 0, 1, 1, 0, 1, 1, 1, \
- TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_mark_vlan, 0, 1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)\
-R(ts_mark_vlan_rss, 0, 1, 1, 1, 0, 0, 1, \
- TS_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(ts_mark_vlan_ptype, 0, 1, 1, 1, 0, 1, 0, \
- TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(ts_mark_vlan_ptype_rss, 0, 1, 1, 1, 0, 1, 1, \
- TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(ts_mark_vlan_cksum_ptype, 0, 1, 1, 1, 1, 1, 0, \
- TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(ts_mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, 1, \
- TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec, 1, 0, 0, 0, 0, 0, 0, RX_SEC_F) \
-R(sec_rss, 1, 0, 0, 0, 0, 0, 1, RX_SEC_F | RSS_F) \
-R(sec_ptype, 1, 0, 0, 0, 0, 1, 0, RX_SEC_F | PTYPE_F) \
-R(sec_ptype_rss, 1, 0, 0, 0, 0, 1, 1, \
- RX_SEC_F | PTYPE_F | RSS_F) \
-R(sec_cksum, 1, 0, 0, 0, 1, 0, 0, RX_SEC_F | CKSUM_F) \
-R(sec_cksum_rss, 1, 0, 0, 0, 1, 0, 1, \
- RX_SEC_F | CKSUM_F | RSS_F) \
-R(sec_cksum_ptype, 1, 0, 0, 0, 1, 1, 0, \
- RX_SEC_F | CKSUM_F | PTYPE_F) \
-R(sec_cksum_ptype_rss, 1, 0, 0, 0, 1, 1, 1, \
- RX_SEC_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_vlan, 1, 0, 0, 1, 0, 0, 0, RX_SEC_F | RX_VLAN_F) \
-R(sec_vlan_rss, 1, 0, 0, 1, 0, 0, 1, \
- RX_SEC_F | RX_VLAN_F | RSS_F) \
-R(sec_vlan_ptype, 1, 0, 0, 1, 0, 1, 0, \
- RX_SEC_F | RX_VLAN_F | PTYPE_F) \
-R(sec_vlan_ptype_rss, 1, 0, 0, 1, 0, 1, 1, \
- RX_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_vlan_cksum, 1, 0, 0, 1, 1, 0, 0, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F) \
-R(sec_vlan_cksum_rss, 1, 0, 0, 1, 1, 0, 1, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_vlan_cksum_ptype, 1, 0, 0, 1, 1, 1, 0, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_vlan_cksum_ptype_rss, 1, 0, 0, 1, 1, 1, 1, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_mark, 1, 0, 1, 0, 0, 0, 0, RX_SEC_F | MARK_F) \
-R(sec_mark_rss, 1, 0, 1, 0, 0, 0, 1, RX_SEC_F | MARK_F | RSS_F)\
-R(sec_mark_ptype, 1, 0, 1, 0, 0, 1, 0, \
- RX_SEC_F | MARK_F | PTYPE_F) \
-R(sec_mark_ptype_rss, 1, 0, 1, 0, 0, 1, 1, \
- RX_SEC_F | MARK_F | PTYPE_F | RSS_F) \
-R(sec_mark_cksum, 1, 0, 1, 0, 1, 0, 0, \
- RX_SEC_F | MARK_F | CKSUM_F) \
-R(sec_mark_cksum_rss, 1, 0, 1, 0, 1, 0, 1, \
- RX_SEC_F | MARK_F | CKSUM_F | RSS_F) \
-R(sec_mark_cksum_ptype, 1, 0, 1, 0, 1, 1, 0, \
- RX_SEC_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(sec_mark_cksum_ptype_rss, 1, 0, 1, 0, 1, 1, 1, \
- RX_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_mark_vlan, 1, 0, 1, 1, 0, 0, 0, RX_SEC_F | MARK_F | RX_VLAN_F) \
-R(sec_mark_vlan_rss, 1, 0, 1, 1, 0, 0, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(sec_mark_vlan_ptype, 1, 0, 1, 1, 0, 1, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(sec_mark_vlan_ptype_rss, 1, 0, 1, 1, 0, 1, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_mark_vlan_cksum, 1, 0, 1, 1, 1, 0, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F) \
-R(sec_mark_vlan_cksum_rss, 1, 0, 1, 1, 1, 0, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_mark_vlan_cksum_ptype, 1, 0, 1, 1, 1, 1, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_mark_vlan_cksum_ptype_rss, \
- 1, 0, 1, 1, 1, 1, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | \
- RSS_F) \
-R(sec_ts, 1, 1, 0, 0, 0, 0, 0, RX_SEC_F | TS_F) \
-R(sec_ts_rss, 1, 1, 0, 0, 0, 0, 1, RX_SEC_F | TS_F | RSS_F) \
-R(sec_ts_ptype, 1, 1, 0, 0, 0, 1, 0, RX_SEC_F | TS_F | PTYPE_F)\
-R(sec_ts_ptype_rss, 1, 1, 0, 0, 0, 1, 1, \
- RX_SEC_F | TS_F | PTYPE_F | RSS_F) \
-R(sec_ts_cksum, 1, 1, 0, 0, 1, 0, 0, RX_SEC_F | TS_F | CKSUM_F)\
-R(sec_ts_cksum_rss, 1, 1, 0, 0, 1, 0, 1, \
- RX_SEC_F | TS_F | CKSUM_F | RSS_F) \
-R(sec_ts_cksum_ptype, 1, 1, 0, 0, 1, 1, 0, \
- RX_SEC_F | TS_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_cksum_ptype_rss, 1, 1, 0, 0, 1, 1, 1, \
- RX_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_ts_vlan, 1, 1, 0, 1, 0, 0, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F) \
-R(sec_ts_vlan_rss, 1, 1, 0, 1, 0, 0, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | RSS_F) \
-R(sec_ts_vlan_ptype, 1, 1, 0, 1, 0, 1, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | PTYPE_F) \
-R(sec_ts_vlan_ptype_rss, 1, 1, 0, 1, 0, 1, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_ts_vlan_cksum, 1, 1, 0, 1, 1, 0, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F) \
-R(sec_ts_vlan_cksum_rss, 1, 1, 0, 1, 1, 0, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_ts_vlan_cksum_ptype, 1, 1, 0, 1, 1, 1, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_vlan_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | \
- RSS_F) \
-R(sec_ts_mark, 1, 1, 1, 0, 0, 0, 0, RX_SEC_F | TS_F | MARK_F) \
-R(sec_ts_mark_rss, 1, 1, 1, 0, 0, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RSS_F) \
-R(sec_ts_mark_ptype, 1, 1, 1, 0, 0, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | PTYPE_F) \
-R(sec_ts_mark_ptype_rss, 1, 1, 1, 0, 0, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F) \
-R(sec_ts_mark_cksum, 1, 1, 1, 0, 1, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F) \
-R(sec_ts_mark_cksum_rss, 1, 1, 1, 0, 1, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F) \
-R(sec_ts_mark_cksum_ptype, 1, 1, 1, 0, 1, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_mark_cksum_ptype_rss, 1, 1, 1, 0, 1, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_ts_mark_vlan, 1, 1, 1, 1, 0, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F) \
-R(sec_ts_mark_vlan_rss, 1, 1, 1, 1, 0, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(sec_ts_mark_vlan_ptype, 1, 1, 1, 1, 0, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(sec_ts_mark_vlan_ptype_rss, 1, 1, 1, 1, 0, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F)\
-R(sec_ts_mark_vlan_cksum, 1, 1, 1, 1, 1, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F) \
-R(sec_ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 1, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F)\
-R(sec_ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | \
- PTYPE_F) \
-R(sec_ts_mark_vlan_cksum_ptype_rss, \
- 1, 1, 1, 1, 1, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | \
- PTYPE_F | RSS_F)
-#endif /* __OTX2_RX_H__ */
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
deleted file mode 100644
index 3adf21608c..0000000000
--- a/drivers/net/octeontx2/otx2_stats.c
+++ /dev/null
@@ -1,397 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include "otx2_ethdev.h"
-
-struct otx2_nix_xstats_name {
- char name[RTE_ETH_XSTATS_NAME_SIZE];
- uint32_t offset;
-};
-
-static const struct otx2_nix_xstats_name nix_tx_xstats[] = {
- {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST},
- {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST},
- {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST},
- {"tx_drop", NIX_STAT_LF_TX_TX_DROP},
- {"tx_octs", NIX_STAT_LF_TX_TX_OCTS},
-};
-
-static const struct otx2_nix_xstats_name nix_rx_xstats[] = {
- {"rx_octs", NIX_STAT_LF_RX_RX_OCTS},
- {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST},
- {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST},
- {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST},
- {"rx_drop", NIX_STAT_LF_RX_RX_DROP},
- {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS},
- {"rx_fcs", NIX_STAT_LF_RX_RX_FCS},
- {"rx_err", NIX_STAT_LF_RX_RX_ERR},
- {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST},
- {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST},
- {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST},
- {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST},
-};
-
-static const struct otx2_nix_xstats_name nix_q_xstats[] = {
- {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS},
-};
-
-#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats)
-#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats)
-#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats)
-
-#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \
- OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS)
-
-int
-otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t reg, val;
- uint32_t qidx, i;
- int64_t *addr;
-
- stats->opackets = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST));
- stats->opackets += otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST));
- stats->opackets += otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST));
- stats->oerrors = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP));
- stats->obytes = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS));
-
- stats->ipackets = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST));
- stats->ipackets += otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST));
- stats->ipackets += otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST));
- stats->imissed = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP));
- stats->ibytes = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS));
- stats->ierrors = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR));
-
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
- if (dev->txmap[i] & (1U << 31)) {
- qidx = dev->txmap[i] & 0xFFFF;
- reg = (((uint64_t)qidx) << 32);
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_opackets[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_obytes[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_errors[i] = val;
- }
- }
-
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
- if (dev->rxmap[i] & (1U << 31)) {
- qidx = dev->rxmap[i] & 0xFFFF;
- reg = (((uint64_t)qidx) << 32);
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_ipackets[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_ibytes[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_errors[i] += val;
- }
- }
-
- return 0;
-}
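
The per-queue counters above are read with otx2_atomic64_add_nosync(), which
encodes the queue index in the operand and returns the statistic from the NIX
block. As a generic C11 analogue of the "fetch-add zero to read" idiom:

    #include <stdatomic.h>
    #include <stdint.h>

    static _Atomic uint64_t counter;

    /* An atomic fetch-add of 0 returns the current value without
     * disturbing concurrent updaters.
     */
    static inline uint64_t read_counter(void)
    {
        return atomic_fetch_add(&counter, 0);
    }
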
-
-int
-otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
- return -ENOMEM;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- uint8_t stat_idx, uint8_t is_rx)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (is_rx)
- dev->rxmap[stat_idx] = ((1U << 31) | queue_id);
- else
- dev->txmap[stat_idx] = ((1U << 31) | queue_id);
-
- return 0;
-}
-
-int
-otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- unsigned int i, count = 0;
- uint64_t reg, val;
-
- if (n < OTX2_NIX_NUM_XSTATS_REG)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (xstats == NULL)
- return 0;
-
- for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
- xstats[count].value = otx2_read64(dev->base +
- NIX_LF_TX_STATX(nix_tx_xstats[i].offset));
- xstats[count].id = count;
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
- xstats[count].value = otx2_read64(dev->base +
- NIX_LF_RX_STATX(nix_rx_xstats[i].offset));
- xstats[count].id = count;
- count++;
- }
-
- xstats[count].value = 0;
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- reg = (((uint64_t)i) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base +
- nix_q_xstats[0].offset));
- if (val & OP_ERR)
- val = 0;
- xstats[count].value += val;
- }
- xstats[count].id = count;
- count++;
-
- return count;
-}
-
-int
-otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit)
-{
- unsigned int i, count = 0;
-
- RTE_SET_USED(eth_dev);
-
- if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL)
- return -ENOMEM;
-
- if (xstats_names) {
- for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_tx_xstats[i].name);
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_rx_xstats[i].name);
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_q_xstats[i].name);
- count++;
- }
- }
-
- return OTX2_NIX_NUM_XSTATS_REG;
-}
-
-int
-otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit)
-{
- struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG];
- uint16_t i;
-
- if (limit < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (limit > OTX2_NIX_NUM_XSTATS_REG)
- return -EINVAL;
-
- if (xstats_names == NULL)
- return -ENOMEM;
-
- otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit);
-
- for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
- if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
- otx2_err("Invalid id value");
- return -EINVAL;
- }
- strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
- sizeof(xstats_names[i].name));
- }
-
- return limit;
-}
-
-int
-otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
- uint64_t *values, unsigned int n)
-{
- struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG];
- uint16_t i;
-
- if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (n > OTX2_NIX_NUM_XSTATS_REG)
- return -EINVAL;
-
- if (values == NULL)
- return -ENOMEM;
-
- otx2_nix_xstats_get(eth_dev, xstats, n);
-
- for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
- if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
- otx2_err("Invalid id value");
- return -EINVAL;
- }
- values[i] = xstats[ids[i]].value;
- }
-
- return n;
-}
-
-static int
-nix_queue_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- uint32_t i;
- int rc;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read rq context");
- return rc;
- }
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
- otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq));
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask));
- aq->rq.octs = 0;
- aq->rq.pkts = 0;
- aq->rq.drop_octs = 0;
- aq->rq.drop_pkts = 0;
- aq->rq.re_pkts = 0;
-
- aq->rq_mask.octs = ~(aq->rq_mask.octs);
- aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
- aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
- aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
- aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to write rq context");
- return rc;
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read sq context");
- return rc;
- }
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
- otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq));
- otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask));
- aq->sq.octs = 0;
- aq->sq.pkts = 0;
- aq->sq.drop_octs = 0;
- aq->sq.drop_pkts = 0;
-
- aq->sq_mask.octs = ~(aq->sq_mask.octs);
- aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
- aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
- aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to write sq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-int
-otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int ret;
-
- if (otx2_mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
- return -ENOMEM;
-
- ret = otx2_mbox_process(mbox);
- if (ret != 0)
- return ret;
-
- /* Reset queue stats */
- return nix_queue_stats_reset(eth_dev);
-}
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
deleted file mode 100644
index 6aff1f9587..0000000000
--- a/drivers/net/octeontx2/otx2_tm.c
+++ /dev/null
@@ -1,3317 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_malloc.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_tm.h"
-
-/* Use the last NIX_TXSCH_LVL_CNT node IDs as default nodes */
-#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
-
-enum otx2_tm_node_level {
- OTX2_TM_LVL_ROOT = 0,
- OTX2_TM_LVL_SCH1,
- OTX2_TM_LVL_SCH2,
- OTX2_TM_LVL_SCH3,
- OTX2_TM_LVL_SCH4,
- OTX2_TM_LVL_QUEUE,
- OTX2_TM_LVL_MAX,
-};
-
-static inline
-uint64_t shaper2regval(struct shaper_params *shaper)
-{
- return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) |
- (shaper->div_exp << 13) | (shaper->exponent << 9) |
- (shaper->mantissa << 1);
-}
-
-int
-otx2_nix_get_link(struct otx2_eth_dev *dev)
-{
- int link = 13 /* SDP */;
- uint16_t lmac_chan;
- uint16_t map;
-
- lmac_chan = dev->tx_chan_base;
-
- /* CGX lmac link */
- if (lmac_chan >= 0x800) {
- map = lmac_chan & 0x7FF;
- link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF);
- } else if (lmac_chan < 0x700) {
- /* LBK channel */
- link = 12;
- }
-
- return link;
-}
-
-static uint8_t
-nix_get_relchan(struct otx2_eth_dev *dev)
-{
- return dev->tx_chan_base & 0xff;
-}
-
-static bool
-nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
-{
- bool is_lbk = otx2_dev_is_lbk(dev);
- return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) && !is_lbk;
-}
-
-static bool
-nix_tm_is_leaf(struct otx2_eth_dev *dev, int lvl)
-{
- if (nix_tm_have_tl1_access(dev))
- return (lvl == OTX2_TM_LVL_QUEUE);
-
- return (lvl == OTX2_TM_LVL_SCH4);
-}
-
-static int
-find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id)
-{
- struct otx2_nix_tm_node *child_node;
-
- TAILQ_FOREACH(child_node, &dev->node_list, node) {
- if (!child_node->parent)
- continue;
- if (!(child_node->parent->id == node_id))
- continue;
- if (child_node->priority == child_node->parent->rr_prio)
- continue;
- return child_node->hw_id - child_node->priority;
- }
- return 0;
-}
-
-
-static struct otx2_nix_tm_shaper_profile *
-nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
-{
- struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
-
- TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
- if (tm_shaper_profile->shaper_profile_id == shaper_id)
- return tm_shaper_profile;
- }
- return NULL;
-}
-
-static inline uint64_t
-shaper_rate_to_nix(uint64_t value, uint64_t *exponent_p,
- uint64_t *mantissa_p, uint64_t *div_exp_p)
-{
- uint64_t div_exp, exponent, mantissa;
-
- /* Boundary checks */
- if (value < MIN_SHAPER_RATE ||
- value > MAX_SHAPER_RATE)
- return 0;
-
- if (value <= SHAPER_RATE(0, 0, 0)) {
- /* Calculate rate div_exp and mantissa using
- * the following formula:
- *
- * value = (2E6 * (256 + mantissa)
- * / ((1 << div_exp) * 256))
- */
- div_exp = 0;
- exponent = 0;
- mantissa = MAX_RATE_MANTISSA;
-
- while (value < (NIX_SHAPER_RATE_CONST / (1 << div_exp)))
- div_exp += 1;
-
- while (value <
- ((NIX_SHAPER_RATE_CONST * (256 + mantissa)) /
- ((1 << div_exp) * 256)))
- mantissa -= 1;
- } else {
- /* Calculate rate exponent and mantissa using
- * the following formula:
- *
- * value = (2E6 * ((256 + mantissa) << exponent)) / 256
- *
- */
- div_exp = 0;
- exponent = MAX_RATE_EXPONENT;
- mantissa = MAX_RATE_MANTISSA;
-
- while (value < (NIX_SHAPER_RATE_CONST * (1 << exponent)))
- exponent -= 1;
-
- while (value < ((NIX_SHAPER_RATE_CONST *
- ((256 + mantissa) << exponent)) / 256))
- mantissa -= 1;
- }
-
- if (div_exp > MAX_RATE_DIV_EXP ||
- exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA)
- return 0;
-
- if (div_exp_p)
- *div_exp_p = div_exp;
- if (exponent_p)
- *exponent_p = exponent;
- if (mantissa_p)
- *mantissa_p = mantissa;
-
- /* Calculate real rate value */
- return SHAPER_RATE(exponent, mantissa, div_exp);
-}
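
To make the search loops above concrete, here is a self-contained sketch of
the exponent/mantissa encoding for the non-div_exp branch (RATE_CONST and the
field maxima are assumptions standing in for the driver's constants):

    #include <stdint.h>
    #include <stdio.h>

    #define RATE_CONST 2000000ULL /* assumed base rate constant (2E6) */

    /* Find (exponent, mantissa) such that
     * rate ~= RATE_CONST * ((256 + mantissa) << exponent) / 256.
     */
    static uint64_t encode_rate(uint64_t rate, uint64_t *exp, uint64_t *man)
    {
        uint64_t e = 15, m = 255; /* illustrative field maxima */

        if (rate < RATE_CONST)
            return 0;
        while (rate < (RATE_CONST << e))
            e--;
        while (rate < RATE_CONST * ((256 + m) << e) / 256)
            m--;
        *exp = e;
        *man = m;
        return RATE_CONST * ((256 + m) << e) / 256;
    }

    int main(void)
    {
        uint64_t e, m;
        uint64_t got = encode_rate(10000000ULL, &e, &m); /* 10 Mbps */

        printf("exp=%llu man=%llu rate=%llu\n", (unsigned long long)e,
               (unsigned long long)m, (unsigned long long)got);
        return 0;
    }

For 10 Mbps this finds exponent 2 and mantissa 64, reproducing the requested
rate exactly.
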
-
-static inline uint64_t
-shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
- uint64_t *mantissa_p)
-{
- uint64_t exponent, mantissa;
-
- if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST)
- return 0;
-
- /* Calculate burst exponent and mantissa using
- * the following formula:
- *
- * value = ((256 + mantissa) << (exponent + 1)) / 256
- *
- */
- exponent = MAX_BURST_EXPONENT;
- mantissa = MAX_BURST_MANTISSA;
-
- while (value < (1ull << (exponent + 1)))
- exponent -= 1;
-
- while (value < ((256 + mantissa) << (exponent + 1)) / 256)
- mantissa -= 1;
-
- if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA)
- return 0;
-
- if (exponent_p)
- *exponent_p = exponent;
- if (mantissa_p)
- *mantissa_p = mantissa;
-
- return SHAPER_BURST(exponent, mantissa);
-}
-
-static void
-shaper_config_to_nix(struct otx2_nix_tm_shaper_profile *profile,
- struct shaper_params *cir,
- struct shaper_params *pir)
-{
- struct rte_tm_shaper_params *param;
-
- if (!profile)
- return;
-
- param = &profile->params;
-
- /* Calculate CIR exponent and mantissa */
- if (param->committed.rate)
- cir->rate = shaper_rate_to_nix(param->committed.rate,
- &cir->exponent,
- &cir->mantissa,
- &cir->div_exp);
-
- /* Calculate PIR exponent and mantissa */
- if (param->peak.rate)
- pir->rate = shaper_rate_to_nix(param->peak.rate,
- &pir->exponent,
- &pir->mantissa,
- &pir->div_exp);
-
- /* Calculate CIR burst exponent and mantissa */
- if (param->committed.size)
- cir->burst = shaper_burst_to_nix(param->committed.size,
- &cir->burst_exponent,
- &cir->burst_mantissa);
-
- /* Calculate PIR burst exponent and mantissa */
- if (param->peak.size)
- pir->burst = shaper_burst_to_nix(param->peak.size,
- &pir->burst_exponent,
- &pir->burst_mantissa);
-}
-
-static void
-shaper_default_red_algo(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- struct otx2_nix_tm_shaper_profile *profile)
-{
- struct shaper_params cir, pir;
-
- /* C0 doesn't support STALL when both PIR & CIR are enabled */
- if (profile && otx2_dev_is_96xx_Cx(dev)) {
- memset(&cir, 0, sizeof(cir));
- memset(&pir, 0, sizeof(pir));
- shaper_config_to_nix(profile, &cir, &pir);
-
- if (pir.rate && cir.rate) {
- tm_node->red_algo = NIX_REDALG_DISCARD;
- tm_node->flags |= NIX_TM_NODE_RED_DISCARD;
- return;
- }
- }
-
- tm_node->red_algo = NIX_REDALG_STD;
- tm_node->flags &= ~NIX_TM_NODE_RED_DISCARD;
-}
-
-static int
-populate_tm_tl1_default(struct otx2_eth_dev *dev, uint32_t schq)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txschq_config *req;
-
- /*
- * Default config for TL1.
- * For VF this is always ignored.
- */
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_TL1;
-
- /* Set DWRR quantum */
- req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
- req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
- req->num_regs++;
-
- req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
- req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
- req->num_regs++;
-
- req->reg[2] = NIX_AF_TL1X_CIR(schq);
- req->regval[2] = 0;
- req->num_regs++;
-
- return otx2_mbox_process(mbox);
-}
-
-static uint8_t
-prepare_tm_sched_reg(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- uint64_t strict_prio = tm_node->priority;
- uint32_t hw_lvl = tm_node->hw_lvl;
- uint32_t schq = tm_node->hw_id;
- uint64_t rr_quantum;
- uint8_t k = 0;
-
- rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- /* For children of the root, strict priority is the default if
- * either the device root is TL2 or TL1 static priority is
- * disabled.
- */
- if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
- (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
- dev->tm_flags & NIX_TM_TL1_NO_SP))
- strict_prio = TXSCH_TL1_DFLT_RR_PRIO;
-
- otx2_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
- "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)",
- nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
- tm_node->id, strict_prio, rr_quantum, tm_node);
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
- regval[k] = rr_quantum;
- k++;
-
- break;
- }
-
- return k;
-}
-
-static uint8_t
-prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
- struct otx2_nix_tm_shaper_profile *profile,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- struct shaper_params cir, pir;
- uint32_t schq = tm_node->hw_id;
- uint64_t adjust = 0;
- uint8_t k = 0;
-
- memset(&cir, 0, sizeof(cir));
- memset(&pir, 0, sizeof(pir));
- shaper_config_to_nix(profile, &cir, &pir);
-
- /* Packet length adjust */
- if (tm_node->pkt_mode)
- adjust = 1;
- else if (profile)
- adjust = profile->params.pkt_length_adjust & 0x1FF;
-
- otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, pir %" PRIu64
- "(%" PRIu64 "B), cir %" PRIu64 "(%" PRIu64 "B)"
- "adjust 0x%" PRIx64 "(pktmode %u) (%p)",
- nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
- tm_node->id, pir.rate, pir.burst, cir.rate, cir.burst,
- adjust, tm_node->pkt_mode, tm_node);
-
- switch (tm_node->hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_MDQX_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_MDQX_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED ALG */
- reg[k] = NIX_AF_MDQX_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- case NIX_TXSCH_LVL_TL4:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL4X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL4X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL4X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- case NIX_TXSCH_LVL_TL3:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL3X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL3X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL3X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL2:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL2X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL2X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL2X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL1:
- /* Configure CIR */
- reg[k] = NIX_AF_TL1X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure length disable and adjust */
- reg[k] = NIX_AF_TL1X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- }
-
- return k;
-}
-
-static uint8_t
-prepare_tm_sw_xoff(struct otx2_nix_tm_node *tm_node, bool enable,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- uint32_t hw_lvl = tm_node->hw_lvl;
- uint32_t schq = tm_node->hw_id;
- uint8_t k = 0;
-
- otx2_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)",
- nix_hwlvl2str(hw_lvl), schq, tm_node->lvl,
- tm_node->id, enable, tm_node);
-
- regval[k] = enable;
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_MDQ:
- reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
- k++;
- break;
- default:
- break;
- }
-
- return k;
-}
-
-static int
-populate_tm_reg(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG];
- uint64_t regval[MAX_REGS_PER_MBOX_MSG];
- uint64_t reg[MAX_REGS_PER_MBOX_MSG];
- struct otx2_mbox *mbox = dev->mbox;
- uint64_t parent = 0, child = 0;
- uint32_t hw_lvl, rr_prio, schq;
- struct nix_txschq_config *req;
- int rc = -EFAULT;
- uint8_t k = 0;
-
- memset(regval_mask, 0, sizeof(regval_mask));
- profile = nix_tm_shaper_profile_search(dev,
- tm_node->params.shaper_profile_id);
- rr_prio = tm_node->rr_prio;
- hw_lvl = tm_node->hw_lvl;
- schq = tm_node->hw_id;
-
- /* Root node will not have a parent node */
- if (hw_lvl == dev->otx2_tm_root_lvl)
- parent = tm_node->parent_hw_id;
- else
- parent = tm_node->parent->hw_id;
-
- /* Trigger default TL1 configuration when the device root is TL2 */
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
- hw_lvl == dev->otx2_tm_root_lvl) {
- rc = populate_tm_tl1_default(dev, parent);
- if (rc)
- goto error;
- }
-
- if (hw_lvl != NIX_TXSCH_LVL_SMQ)
- child = find_prio_anchor(dev, tm_node->id);
-
- /* Override default rr_prio when TL1
- * Static Priority is disabled
- */
- if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
- dev->tm_flags & NIX_TM_TL1_NO_SP) {
- rr_prio = TXSCH_TL1_DFLT_RR_PRIO;
- child = 0;
- }
-
- otx2_tm_dbg("Topology config node %s(%u)->%s(%"PRIu64") lvl %u, id %u"
- " prio_anchor %"PRIu64" rr_prio %u (%p)",
- nix_hwlvl2str(hw_lvl), schq, nix_hwlvl2str(hw_lvl + 1),
- parent, tm_node->lvl, tm_node->id, child, rr_prio, tm_node);
-
- /* Prepare Topology and Link config */
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
-
- /* Set XOFF, which will be cleared later, and the minimum
- * frame length, which will be used for zero padding if the
- * packet length is smaller.
- */
- reg[k] = NIX_AF_SMQX_CFG(schq);
- regval[k] = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
- NIX_MIN_HW_FRS;
- regval_mask[k] = ~(BIT_ULL(50) | (0x7ULL << 36) | 0x7f);
- k++;
-
- /* Parent and schedule conf */
- reg[k] = NIX_AF_MDQX_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL4:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL4X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Configure TL4 to send to SDP channel instead of CGX/LBK */
- if (otx2_dev_is_sdp(dev)) {
- reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
- regval[k] = BIT_ULL(12);
- k++;
- }
- break;
- case NIX_TXSCH_LVL_TL3:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL3X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Link configuration */
- if (!otx2_dev_is_sdp(dev) &&
- dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
- otx2_nix_get_link(dev));
- regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
- k++;
- }
-
- break;
- case NIX_TXSCH_LVL_TL2:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL2X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Link configuration */
- if (!otx2_dev_is_sdp(dev) &&
- dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
- otx2_nix_get_link(dev));
- regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
- k++;
- }
-
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
- k++;
-
- break;
- }
-
- /* Prepare schedule config */
- k += prepare_tm_sched_reg(dev, tm_node, &reg[k], &regval[k]);
-
- /* Prepare shaping config */
- k += prepare_tm_shaper_reg(tm_node, profile, &reg[k], &regval[k]);
-
- if (!k)
- return 0;
-
- /* Copy and send config mbox */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = hw_lvl;
- req->num_regs = k;
-
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- otx2_mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
- otx2_mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k);
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- goto error;
-
- return 0;
-error:
- otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
- return rc;
-}
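
Each (reg, regval, regval_mask) triple sent above requests a read-modify-write
from the AF; judging by the SMQ case earlier (mask set to the complement of
the bits being changed), bits set in the mask are preserved from the old
register value. A one-line sketch of that application (assumed semantics):

    #include <stdint.h>

    /* Assumed AF-side application of a masked write: keep the bits
     * selected by 'mask', overlay the new 'regval' bits.
     */
    static inline uint64_t
    apply_masked_write(uint64_t oldval, uint64_t regval, uint64_t mask)
    {
        return (oldval & mask) | regval;
    }
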
-
-
-static int
-nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *tm_node;
- uint32_t hw_lvl;
- int rc = 0;
-
- for (hw_lvl = 0; hw_lvl <= dev->otx2_tm_root_lvl; hw_lvl++) {
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl == hw_lvl &&
- tm_node->hw_lvl != NIX_TXSCH_LVL_CNT) {
- rc = populate_tm_reg(dev, tm_node);
- if (rc)
- goto exit;
- }
- }
- }
-exit:
- return rc;
-}
-
-static struct otx2_nix_tm_node *
-nix_tm_node_search(struct otx2_eth_dev *dev,
- uint32_t node_id, bool user)
-{
- struct otx2_nix_tm_node *tm_node;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->id == node_id &&
- (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
- return tm_node;
- }
- return NULL;
-}
-
-static uint32_t
-check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id)
-{
- struct otx2_nix_tm_node *tm_node;
- uint32_t rr_num = 0;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-
- if (!(tm_node->parent->id == parent_id))
- continue;
-
- if (tm_node->priority == priority)
- rr_num++;
- }
- return rr_num;
-}
-
-static int
-nix_tm_update_parent_info(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *tm_node_child;
- struct otx2_nix_tm_node *tm_node;
- struct otx2_nix_tm_node *parent;
- uint32_t rr_num = 0;
- uint32_t priority;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
- /* Count the group of same-priority children, i.e. the RR group */
- parent = tm_node->parent;
- priority = tm_node->priority;
- rr_num = check_rr(dev, priority, parent->id);
-
- /* Assume that multiple RR groups are not configured,
- * per the advertised capability.
- */
- if (rr_num > 1) {
- parent->rr_prio = priority;
- parent->rr_num = rr_num;
- }
-
- /* Find out static priority children that are not in RR */
- TAILQ_FOREACH(tm_node_child, &dev->node_list, node) {
- if (!tm_node_child->parent)
- continue;
- if (parent->id != tm_node_child->parent->id)
- continue;
- if (parent->max_prio == UINT32_MAX &&
- tm_node_child->priority != parent->rr_prio)
- parent->max_prio = 0;
-
- if (parent->max_prio < tm_node_child->priority &&
- parent->rr_prio != tm_node_child->priority)
- parent->max_prio = tm_node_child->priority;
- }
- }
-
- return 0;
-}
-
-static int
-nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
- uint32_t parent_node_id, uint32_t priority,
- uint32_t weight, uint16_t hw_lvl,
- uint16_t lvl, bool user,
- struct rte_tm_node_params *params)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_nix_tm_node *tm_node, *parent_node;
- uint32_t profile_id;
-
- profile_id = params->shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-
- parent_node = nix_tm_node_search(dev, parent_node_id, user);
-
- tm_node = rte_zmalloc("otx2_nix_tm_node",
- sizeof(struct otx2_nix_tm_node), 0);
- if (!tm_node)
- return -ENOMEM;
-
- tm_node->lvl = lvl;
- tm_node->hw_lvl = hw_lvl;
-
- /* Maintain minimum weight */
- if (!weight)
- weight = 1;
-
- tm_node->id = node_id;
- tm_node->priority = priority;
- tm_node->weight = weight;
- tm_node->rr_prio = 0xf;
- tm_node->max_prio = UINT32_MAX;
- tm_node->hw_id = UINT32_MAX;
- tm_node->flags = 0;
- if (user)
- tm_node->flags = NIX_TM_NODE_USER;
-
- /* Packet mode */
- if (!nix_tm_is_leaf(dev, lvl) &&
- ((profile && profile->params.packet_mode) ||
- (params->nonleaf.wfq_weight_mode &&
- params->nonleaf.n_sp_priorities &&
- !params->nonleaf.wfq_weight_mode[0])))
- tm_node->pkt_mode = 1;
-
- rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
-
- if (profile)
- profile->reference_count++;
-
- tm_node->parent = parent_node;
- tm_node->parent_hw_id = UINT32_MAX;
- shaper_default_red_algo(dev, tm_node, profile);
-
- TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
-
- return 0;
-}
-
-static int
-nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_shaper_profile *shaper_profile;
-
- while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
- if (shaper_profile->reference_count)
- otx2_tm_dbg("Shaper profile %u has non zero references",
- shaper_profile->shaper_profile_id);
- TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
- rte_free(shaper_profile);
- }
-
- return 0;
-}
-
-static int
-nix_clear_path_xoff(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node)
-{
- struct nix_txschq_config *req;
- struct otx2_nix_tm_node *p;
- int rc;
-
- /* Manipulating SW_XOFF not supported on Ax */
- if (otx2_dev_is_Ax(dev))
- return 0;
-
- /* Enable nodes in path for flush to succeed */
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- p = tm_node;
- else
- p = tm_node->parent;
- while (p) {
- if (!(p->flags & NIX_TM_NODE_ENABLED) &&
- (p->flags & NIX_TM_NODE_HWRES)) {
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = p->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(p, false, req->reg,
- req->regval);
- rc = otx2_mbox_process(dev->mbox);
- if (rc)
- return rc;
-
- p->flags |= NIX_TM_NODE_ENABLED;
- }
- p = p->parent;
- }
-
- return 0;
-}
-
-static int
-nix_smq_xoff(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- bool enable)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txschq_config *req;
- uint16_t smq;
- int rc;
-
- smq = tm_node->hw_id;
- otx2_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq,
- enable ? "enable" : "disable");
-
- rc = nix_clear_path_xoff(dev, tm_node);
- if (rc)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_SMQ;
- req->num_regs = 1;
-
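-	/* Setting both bits below XOFFs the SMQ and triggers a flush;
-	 * clearing only BIT(50) re-enables it. (Bit roles are inferred
-	 * from the regval/regval_mask usage below; see the HRM for the
-	 * exact NIX_AF_SMQX_CFG field names.)
-	 */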
- req->reg[0] = NIX_AF_SMQX_CFG(smq);
- req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0;
- req->regval_mask[0] = enable ?
- ~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50);
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
-{
- struct otx2_eth_txq *txq = __txq;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- struct otx2_npa_lf *lf;
- struct otx2_mbox *mbox;
- uint64_t aura_handle;
- int rc;
-
- otx2_tm_dbg("Setting SQ %u SQB aura FC to %s", txq->sq,
- enable ? "enable" : "disable");
-
- lf = otx2_npa_lf_obj_get();
- if (!lf)
- return -EFAULT;
- mbox = lf->mbox;
- /* Set/clear sqb aura fc_ena */
- aura_handle = txq->sqb_pool->pool_id;
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-	/* Not needed for aura writes, but the AF driver expects it;
-	 * AF translates it to the associated pool context.
-	 */
- req->aura.pool_addr = req->aura_id;
-
- req->aura.fc_ena = enable;
- req->aura_mask.fc_ena = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- /* Read back npa aura ctx */
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
-	/* Initialize the count when enabling, as there may be no HW triggers yet */
- if (enable)
- *(volatile uint64_t *)txq->fc_mem = rsp->aura.count;
- else
- *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs;
- /* Sync write barrier */
- rte_wmb();
-
- return 0;
-}
-
-static int
-nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
-{
- uint16_t sqb_cnt, head_off, tail_off;
- struct otx2_eth_dev *dev = txq->dev;
- uint64_t wdata, val, prev;
- uint16_t sq = txq->sq;
- int64_t *regaddr;
-	uint64_t timeout; /* in units of 10 us */
-
- /* Wait for enough time based on shaper min rate */
- timeout = (txq->qconf.nb_desc * NIX_MAX_HW_FRS * 8 * 1E5);
- timeout = timeout / dev->tm_rate_min;
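-	/* The budget above is the worst-case queued data in bits
-	 * (nb_desc * max frame size * 8) divided by the minimum
-	 * shaper rate in bps, scaled by 1E5 to express it in the
-	 * 10 us ticks consumed per loop iteration below.
-	 */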
- if (!timeout)
- timeout = 10000;
-
- wdata = ((uint64_t)sq << 32);
- regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
- val = otx2_atomic64_add_nosync(wdata, regaddr);
-
-	/* Spin for multiple iterations, as "txq->fc_cache_pkts" may
-	 * still hold packets to send even though fc_mem is disabled
-	 */
-
- while (true) {
- prev = val;
- rte_delay_us(10);
- val = otx2_atomic64_add_nosync(wdata, regaddr);
- /* Continue on error */
- if (val & BIT_ULL(63))
- continue;
-
- if (prev != val)
- continue;
-
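-		/* NIX_LF_SQ_OP_STATUS layout, as decoded below:
-		 * [15:0] in-flight SQB count, [25:20] head offset,
-		 * [33:28] tail offset. The SQ is quiescent once at
-		 * most one SQB remains and head equals tail.
-		 */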
- sqb_cnt = val & 0xFFFF;
- head_off = (val >> 20) & 0x3F;
- tail_off = (val >> 28) & 0x3F;
-
- /* SQ reached quiescent state */
- if (sqb_cnt <= 1 && head_off == tail_off &&
- (*txq->fc_mem == txq->nb_sqb_bufs)) {
- break;
- }
-
- /* Timeout */
- if (!timeout)
- goto exit;
- timeout--;
- }
-
- return 0;
-exit:
- otx2_nix_tm_dump(dev);
- return -EFAULT;
-}
-
-/* Flush and disable tx queue and its parent SMQ */
-int otx2_nix_sq_flush_pre(void *_txq, bool dev_started)
-{
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_eth_txq *txq;
- struct otx2_eth_dev *dev;
- uint16_t sq;
- bool user;
- int rc;
-
- txq = _txq;
- dev = txq->dev;
- sq = txq->sq;
-
- user = !!(dev->tm_flags & NIX_TM_COMMITTED);
-
- /* Find the node for this SQ */
- tm_node = nix_tm_node_search(dev, sq, user);
- if (!tm_node || !(tm_node->flags & NIX_TM_NODE_ENABLED)) {
- otx2_err("Invalid node/state for sq %u", sq);
- return -EFAULT;
- }
-
- /* Enable CGX RXTX to drain pkts */
- if (!dev_started) {
-		/* Though this enables both RX MCAM entries and the CGX
-		 * link, we assume all the Rx queues were stopped well
-		 * before this point.
-		 */
- otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
- rc = otx2_mbox_process(dev->mbox);
- if (rc) {
- otx2_err("cgx start failed, rc=%d", rc);
- return rc;
- }
- }
-
-	/* Disable SMQ xoff in case it was enabled earlier */
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- return rc;
- }
-
-	/* As per the HRM, to disable an SQ, all other SQs that feed
-	 * the same SMQ must be paused before the SMQ flush.
-	 */
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- if (!(sibling->flags & NIX_TM_NODE_ENABLED))
- continue;
-
- sq = sibling->id;
- txq = dev->eth_dev->data->tx_queues[sq];
- if (!txq)
- continue;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
- goto cleanup;
- }
-
- /* Wait for sq entries to be flushed */
- rc = nix_txq_flush_sq_spin(txq);
- if (rc) {
-			otx2_err("Failed to drain sq %u, rc=%d", txq->sq, rc);
-			goto cleanup;
- }
- }
-
- tm_node->flags &= ~NIX_TM_NODE_ENABLED;
-
- /* Disable and flush */
- rc = nix_smq_xoff(dev, tm_node->parent, true);
- if (rc) {
- otx2_err("Failed to disable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- goto cleanup;
- }
-cleanup:
- /* Restore cgx state */
- if (!dev_started) {
- otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
- rc |= otx2_mbox_process(dev->mbox);
- }
-
- return rc;
-}
-
-int otx2_nix_sq_flush_post(void *_txq)
-{
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_eth_txq *txq = _txq;
- struct otx2_eth_txq *s_txq;
- struct otx2_eth_dev *dev;
- bool once = false;
- uint16_t sq, s_sq;
- bool user;
- int rc;
-
- dev = txq->dev;
- sq = txq->sq;
- user = !!(dev->tm_flags & NIX_TM_COMMITTED);
-
- /* Find the node for this SQ */
- tm_node = nix_tm_node_search(dev, sq, user);
- if (!tm_node) {
- otx2_err("Invalid node for sq %u", sq);
- return -EFAULT;
- }
-
- /* Enable all the siblings back */
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
-
- if (sibling->id == sq)
- continue;
-
- if (!(sibling->flags & NIX_TM_NODE_ENABLED))
- continue;
-
- s_sq = sibling->id;
- s_txq = dev->eth_dev->data->tx_queues[s_sq];
- if (!s_txq)
- continue;
-
- if (!once) {
- /* Enable back if any SQ is still present */
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- return rc;
- }
- once = true;
- }
-
- rc = otx2_nix_sq_sqb_aura_fc(s_txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
- return rc;
- }
- }
-
- return 0;
-}
-
-static int
-nix_sq_sched_data(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- bool rr_quantum_only)
-{
- struct rte_eth_dev *eth_dev = dev->eth_dev;
- struct otx2_mbox *mbox = dev->mbox;
- uint16_t sq = tm_node->id, smq;
- struct nix_aq_enq_req *req;
- uint64_t rr_quantum;
- int rc;
-
- smq = tm_node->parent->hw_id;
- rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
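-	/* rr_quantum is the DWRR quantum programmed into the SQ
-	 * context; it is derived from the abstract rte_tm weight
-	 * via NIX_TM_WEIGHT_TO_RR_QUANTUM.
-	 */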
-
- if (rr_quantum_only)
- otx2_tm_dbg("Update sq(%u) rr_quantum 0x%"PRIx64, sq, rr_quantum);
- else
- otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%"PRIx64,
- sq, smq, rr_quantum);
-
- if (sq > eth_dev->data->nb_tx_queues)
- return -EFAULT;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- req->qidx = sq;
- req->ctype = NIX_AQ_CTYPE_SQ;
- req->op = NIX_AQ_INSTOP_WRITE;
-
-	/* Update SMQ only when needed */
- if (!rr_quantum_only) {
- req->sq.smq = smq;
- req->sq_mask.smq = ~req->sq_mask.smq;
- }
- req->sq.smq_rr_quantum = rr_quantum;
- req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to set smq, rc=%d", rc);
- return rc;
-}
-
-int otx2_nix_sq_enable(void *_txq)
-{
- struct otx2_eth_txq *txq = _txq;
- int rc;
-
- /* Enable sqb_aura fc */
- rc = otx2_nix_sq_sqb_aura_fc(txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
- return rc;
- }
-
- return 0;
-}
-
-static int
-nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
- uint32_t flags, bool hw_only)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_nix_tm_node *tm_node, *next_node;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txsch_free_req *req;
- uint32_t profile_id;
- int rc = 0;
-
- next_node = TAILQ_FIRST(&dev->node_list);
- while (next_node) {
- tm_node = next_node;
- next_node = TAILQ_NEXT(tm_node, node);
-
- /* Check for only requested nodes */
- if ((tm_node->flags & flags_mask) != flags)
- continue;
-
- if (!nix_tm_is_leaf(dev, tm_node->lvl) &&
- tm_node->hw_lvl != NIX_TXSCH_LVL_TL1 &&
- tm_node->flags & NIX_TM_NODE_HWRES) {
- /* Free specific HW resource */
- otx2_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
- nix_hwlvl2str(tm_node->hw_lvl),
- tm_node->hw_id, tm_node->lvl,
- tm_node->id, tm_node);
-
- rc = nix_clear_path_xoff(dev, tm_node);
- if (rc)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
- req->flags = 0;
- req->schq_lvl = tm_node->hw_lvl;
- req->schq = tm_node->hw_id;
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
- tm_node->flags &= ~NIX_TM_NODE_HWRES;
- }
-
- /* Leave software elements if needed */
- if (hw_only)
- continue;
-
- otx2_tm_dbg("Free node lvl %u id %u (%p)",
- tm_node->lvl, tm_node->id, tm_node);
-
- profile_id = tm_node->params.shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile)
- profile->reference_count--;
-
- TAILQ_REMOVE(&dev->node_list, tm_node, node);
- rte_free(tm_node);
- }
-
- if (!flags_mask) {
- /* Free all hw resources */
- req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
- req->flags = TXSCHQ_FREE_ALL;
-
- return otx2_mbox_process(mbox);
- }
-
- return rc;
-}
-
-static uint8_t
-nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_rsp *rsp)
-{
- uint16_t schq;
- uint8_t lvl;
-
- for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
- for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) {
- dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq];
- dev->txschq_contig_list[lvl][schq] =
- rsp->schq_contig_list[lvl][schq];
- }
-
- dev->txschq[lvl] = rsp->schq[lvl];
- dev->txschq_contig[lvl] = rsp->schq_contig[lvl];
- }
- return 0;
-}
-
-static int
-nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *child,
- struct otx2_nix_tm_node *parent)
-{
- uint32_t hw_id, schq_con_index, prio_offset;
- uint32_t l_id, schq_index;
-
- otx2_tm_dbg("Assign hw id for child node %s lvl %u id %u (%p)",
- nix_hwlvl2str(child->hw_lvl), child->lvl, child->id, child);
-
- child->flags |= NIX_TM_NODE_HWRES;
-
- /* Process root nodes */
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
- child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
- int idx = 0;
- uint32_t tschq_con_index;
-
- l_id = child->hw_lvl;
- tschq_con_index = dev->txschq_contig_index[l_id];
- hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
- child->hw_id = hw_id;
- dev->txschq_contig_index[l_id]++;
- /* Update TL1 hw_id for its parent for config purpose */
- idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++;
- hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx];
- child->parent_hw_id = hw_id;
- return 0;
- }
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
- child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
- uint32_t tschq_con_index;
-
- l_id = child->hw_lvl;
- tschq_con_index = dev->txschq_index[l_id];
- hw_id = dev->txschq_list[l_id][tschq_con_index];
- child->hw_id = hw_id;
- dev->txschq_index[l_id]++;
- return 0;
- }
-
- /* Process children with parents */
- l_id = child->hw_lvl;
- schq_index = dev->txschq_index[l_id];
- schq_con_index = dev->txschq_contig_index[l_id];
-
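-	/* RR children (priority == parent's rr_prio) draw from the
-	 * non-contiguous schq list; strict-priority children take
-	 * contiguous schqs indexed by priority, so HW priority
-	 * follows schq order.
-	 */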
- if (child->priority == parent->rr_prio) {
- hw_id = dev->txschq_list[l_id][schq_index];
- child->hw_id = hw_id;
- child->parent_hw_id = parent->hw_id;
- dev->txschq_index[l_id]++;
- } else {
- prio_offset = schq_con_index + child->priority;
- hw_id = dev->txschq_contig_list[l_id][prio_offset];
- child->hw_id = hw_id;
- }
- return 0;
-}
-
-static int
-nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *parent, *child;
- uint32_t child_hw_lvl, con_index_inc, i;
-
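-	/* Walk hw levels top-down (TL1 towards SMQ), assigning ids
-	 * to each parent's children before moving to the next level
-	 * so the per-level contiguous indexes advance in order.
-	 */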
- for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
- TAILQ_FOREACH(parent, &dev->node_list, node) {
- child_hw_lvl = parent->hw_lvl - 1;
- if (parent->hw_lvl != i)
- continue;
- TAILQ_FOREACH(child, &dev->node_list, node) {
- if (!child->parent)
- continue;
- if (child->parent->id != parent->id)
- continue;
- nix_tm_assign_id_to_node(dev, child, parent);
- }
-
- con_index_inc = parent->max_prio + 1;
- dev->txschq_contig_index[child_hw_lvl] += con_index_inc;
-
-			/*
-			 * Root-level parents have no parent of their
-			 * own to drive the assignment, so assign
-			 * their ids explicitly here.
-			 */
- if (parent->hw_lvl == dev->otx2_tm_root_lvl)
- nix_tm_assign_id_to_node(dev, parent, NULL);
- }
- }
- return 0;
-}
-
-static uint8_t
-nix_tm_count_req_schq(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_req *req, uint8_t lvl)
-{
- struct otx2_nix_tm_node *tm_node;
- uint8_t contig_count;
-
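-	/* For every node at this hw level, request rr_num
-	 * non-contiguous schqs for its RR children and
-	 * (max_prio + 1) contiguous schqs for its strict-priority
-	 * children.
-	 */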
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (lvl == tm_node->hw_lvl) {
- req->schq[lvl - 1] += tm_node->rr_num;
- if (tm_node->max_prio != UINT32_MAX) {
- contig_count = tm_node->max_prio + 1;
- req->schq_contig[lvl - 1] += contig_count;
- }
- }
- if (lvl == dev->otx2_tm_root_lvl &&
- dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
- tm_node->hw_lvl == dev->otx2_tm_root_lvl) {
- req->schq_contig[dev->otx2_tm_root_lvl]++;
- }
- }
-
- req->schq[NIX_TXSCH_LVL_TL1] = 1;
- req->schq_contig[NIX_TXSCH_LVL_TL1] = 0;
-
- return 0;
-}
-
-static int
-nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_req *req)
-{
- uint8_t i;
-
- for (i = NIX_TXSCH_LVL_TL1; i > 0; i--)
- nix_tm_count_req_schq(dev, req, i);
-
- for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
- dev->txschq_index[i] = 0;
- dev->txschq_contig_index[i] = 0;
- }
- return 0;
-}
-
-static int
-nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txsch_alloc_req *req;
- struct nix_txsch_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox);
-
- rc = nix_tm_prepare_txschq_req(dev, req);
- if (rc)
- return rc;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- nix_tm_copy_rsp_to_dev(dev, rsp);
- dev->link_cfg_lvl = rsp->link_cfg_lvl;
-
- nix_tm_assign_hw_id(dev);
- return 0;
-}
-
-static int
-nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- struct otx2_eth_txq *txq;
- uint16_t sq;
- int rc;
-
- nix_tm_update_parent_info(dev);
-
- rc = nix_tm_send_txsch_alloc_msg(dev);
- if (rc) {
-		otx2_err("TM failed to alloc resources, rc=%d", rc);
- return rc;
- }
-
- rc = nix_tm_txsch_reg_config(dev);
- if (rc) {
-		otx2_err("TM failed to configure sched registers, rc=%d", rc);
- return rc;
- }
-
- /* Trigger MTU recalculate as SMQ needs MTU conf */
- if (eth_dev->data->dev_started && eth_dev->data->nb_rx_queues) {
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc) {
- otx2_err("TM MTU update failed, rc=%d", rc);
- return rc;
- }
- }
-
-	/* Mark all non-leaf nodes as enabled */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- }
-
- if (!xmit_enable)
- return 0;
-
- /* Update SQ Sched Data while SQ is idle */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- continue;
-
- rc = nix_sq_sched_data(dev, tm_node, false);
- if (rc) {
- otx2_err("SQ %u sched update failed, rc=%d",
- tm_node->id, rc);
- return rc;
- }
- }
-
-	/* Finally XON all SMQs */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- return rc;
- }
- }
-
- /* Enable xmit as all the topology is ready */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- continue;
-
- sq = tm_node->id;
- txq = eth_dev->data->tx_queues[sq];
-
- rc = otx2_nix_sq_enable(txq);
- if (rc) {
- otx2_err("TM sw xon failed on SQ %u, rc=%d",
- tm_node->id, rc);
- return rc;
- }
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- }
-
- return 0;
-}
-
-static int
-send_tm_reqval(struct otx2_mbox *mbox,
- struct nix_txschq_config *req,
- struct rte_tm_error *error)
-{
- int rc;
-
- if (!req->num_regs ||
- req->num_regs > MAX_REGS_PER_MBOX_MSG) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "invalid config";
- return -EIO;
- }
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- }
- return rc;
-}
-
-static uint16_t
-nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
-{
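-	/* Map the rte_tm software levels onto NIX hw scheduler
-	 * levels: with TL1 access the root maps to TL1, otherwise
-	 * TL1 is reserved and the root maps to TL2.
-	 */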
- if (nix_tm_have_tl1_access(dev)) {
- switch (lvl) {
- case OTX2_TM_LVL_ROOT:
- return NIX_TXSCH_LVL_TL1;
- case OTX2_TM_LVL_SCH1:
- return NIX_TXSCH_LVL_TL2;
- case OTX2_TM_LVL_SCH2:
- return NIX_TXSCH_LVL_TL3;
- case OTX2_TM_LVL_SCH3:
- return NIX_TXSCH_LVL_TL4;
- case OTX2_TM_LVL_SCH4:
- return NIX_TXSCH_LVL_SMQ;
- default:
- return NIX_TXSCH_LVL_CNT;
- }
- } else {
- switch (lvl) {
- case OTX2_TM_LVL_ROOT:
- return NIX_TXSCH_LVL_TL2;
- case OTX2_TM_LVL_SCH1:
- return NIX_TXSCH_LVL_TL3;
- case OTX2_TM_LVL_SCH2:
- return NIX_TXSCH_LVL_TL4;
- case OTX2_TM_LVL_SCH3:
- return NIX_TXSCH_LVL_SMQ;
- default:
- return NIX_TXSCH_LVL_CNT;
- }
- }
-}
-
-static uint16_t
-nix_max_prio(struct otx2_eth_dev *dev, uint16_t hw_lvl)
-{
- if (hw_lvl >= NIX_TXSCH_LVL_CNT)
- return 0;
-
- /* MDQ doesn't support SP */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- return 0;
-
-	/* PF's TL1 doesn't support SP when VFs are enabled */
- if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
- (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
- (dev->tm_flags & NIX_TM_TL1_NO_SP)))
- return 0;
-
- return TXSCH_TLX_SP_PRIO_MAX - 1;
-}
-
-static int
-validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
- uint32_t parent_id, uint32_t priority,
- struct rte_tm_error *error)
-{
- uint8_t priorities[TXSCH_TLX_SP_PRIO_MAX];
- struct otx2_nix_tm_node *tm_node;
- uint32_t rr_num = 0;
- int i;
-
- /* Validate priority against max */
- if (priority > nix_max_prio(dev, nix_tm_lvl2nix(dev, lvl - 1))) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "unsupported priority value";
- return -EINVAL;
- }
-
- if (parent_id == RTE_TM_NODE_ID_NULL)
- return 0;
-
- memset(priorities, 0, TXSCH_TLX_SP_PRIO_MAX);
- priorities[priority] = 1;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-
- if (!(tm_node->flags & NIX_TM_NODE_USER))
- continue;
-
- if (tm_node->parent->id != parent_id)
- continue;
-
- priorities[tm_node->priority]++;
- }
-
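-	/* A priority shared by more than one sibling forms a DWRR
-	 * group; the capability model allows at most one such group
-	 * per parent.
-	 */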
- for (i = 0; i < TXSCH_TLX_SP_PRIO_MAX; i++)
- if (priorities[i] > 1)
- rr_num++;
-
-	/* At most one RR group per parent */
- if (rr_num > 1) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "multiple DWRR node priority";
- return -EINVAL;
- }
-
- /* Check for previous priority to avoid holes in priorities */
- if (priority && !priorities[priority - 1]) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority not in order";
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int
-read_tm_reg(struct otx2_mbox *mbox, uint64_t reg,
- uint64_t *regval, uint32_t hw_lvl)
-{
- volatile struct nix_txschq_config *req;
- struct nix_txschq_config *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->read = 1;
- req->lvl = hw_lvl;
- req->reg[0] = reg;
- req->num_regs = 1;
-
- rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
- if (rc)
- return rc;
- *regval = rsp->regval[0];
- return 0;
-}
-
-/* Search for min rate in topology */
-static void
-nix_tm_shaper_profile_update_min(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- uint64_t rate_min = 1E9; /* 1 Gbps */
-
- TAILQ_FOREACH(profile, &dev->shaper_profile_list, shaper) {
- if (profile->params.peak.rate &&
- profile->params.peak.rate < rate_min)
- rate_min = profile->params.peak.rate;
-
- if (profile->params.committed.rate &&
- profile->params.committed.rate < rate_min)
- rate_min = profile->params.committed.rate;
- }
-
- dev->tm_rate_min = rate_min;
-}
-
-static int
-nix_xmit_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
- uint16_t sqb_cnt, head_off, tail_off;
- struct otx2_nix_tm_node *tm_node;
- struct otx2_eth_txq *txq;
- uint64_t wdata, val;
- int i, rc;
-
- otx2_tm_dbg("Disabling xmit on %s", eth_dev->data->name);
-
- /* Enable CGX RXTX to drain pkts */
- if (!eth_dev->data->dev_started) {
- otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
- rc = otx2_mbox_process(dev->mbox);
- if (rc)
- return rc;
- }
-
-	/* XON all SMQs */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- goto cleanup;
- }
- }
-
- /* Flush all tx queues */
- for (i = 0; i < sq_cnt; i++) {
- txq = eth_dev->data->tx_queues[i];
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
- goto cleanup;
- }
-
- /* Wait for sq entries to be flushed */
- rc = nix_txq_flush_sq_spin(txq);
- if (rc) {
-			otx2_err("Failed to drain sq, rc=%d", rc);
- goto cleanup;
- }
- }
-
-	/* XOFF & flush all SMQs. The HRM mandates that all SQs be
-	 * empty before an SMQ flush is issued.
-	 */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, true);
- if (rc) {
-			otx2_err("Failed to disable smq %u, rc=%d",
- tm_node->hw_id, rc);
- goto cleanup;
- }
- }
-
- /* Verify sanity of all tx queues */
- for (i = 0; i < sq_cnt; i++) {
- txq = eth_dev->data->tx_queues[i];
-
- wdata = ((uint64_t)txq->sq << 32);
- val = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS));
-
- sqb_cnt = val & 0xFFFF;
- head_off = (val >> 20) & 0x3F;
- tail_off = (val >> 28) & 0x3F;
-
- if (sqb_cnt > 1 || head_off != tail_off ||
- (*txq->fc_mem != txq->nb_sqb_bufs))
- otx2_err("Failed to gracefully flush sq %u", txq->sq);
- }
-
-cleanup:
- /* restore cgx state */
- if (!eth_dev->data->dev_started) {
- otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
- rc |= otx2_mbox_process(dev->mbox);
- }
-
- return rc;
-}
-
-static int
-otx2_nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
- int *is_leaf, struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
-
- if (is_leaf == NULL) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (node_id == RTE_TM_NODE_ID_NULL || !tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- return -EINVAL;
- }
- if (nix_tm_is_leaf(dev, tm_node->lvl))
- *is_leaf = true;
- else
- *is_leaf = false;
- return 0;
-}
-
-static int
-otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
- struct rte_tm_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int rc, max_nr_nodes = 0, i;
- struct free_rsrcs_rsp *rsp;
-
- memset(cap, 0, sizeof(*cap));
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
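-	/* Sum the free schqs across all levels below TL1; TL1 is
-	 * reserved for the PF and not counted here.
-	 */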
- for (i = 0; i < NIX_TXSCH_LVL_TL1; i++)
- max_nr_nodes += rsp->schq[i];
-
- cap->n_nodes_max = max_nr_nodes + dev->tm_leaf_cnt;
- /* TL1 level is reserved for PF */
- cap->n_levels_max = nix_tm_have_tl1_access(dev) ?
- OTX2_TM_LVL_MAX : OTX2_TM_LVL_MAX - 1;
- cap->non_leaf_nodes_identical = 1;
- cap->leaf_nodes_identical = 1;
-
- /* Shaper Capabilities */
- cap->shaper_private_n_max = max_nr_nodes;
- cap->shaper_n_max = max_nr_nodes;
- cap->shaper_private_dual_rate_n_max = max_nr_nodes;
- cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->shaper_private_packet_mode_supported = 1;
- cap->shaper_private_byte_mode_supported = 1;
- cap->shaper_pkt_length_adjust_min = NIX_LENGTH_ADJUST_MIN;
- cap->shaper_pkt_length_adjust_max = NIX_LENGTH_ADJUST_MAX;
-
- /* Schedule Capabilities */
- cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
- cap->sched_sp_n_priorities_max = TXSCH_TLX_SP_PRIO_MAX;
- cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
- cap->sched_wfq_n_groups_max = 1;
- cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->sched_wfq_packet_mode_supported = 1;
- cap->sched_wfq_byte_mode_supported = 1;
-
- cap->dynamic_update_mask =
- RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
- RTE_TM_UPDATE_NODE_SUSPEND_RESUME;
- cap->stats_mask =
- RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES |
- RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
-
- for (i = 0; i < RTE_COLORS; i++) {
- cap->mark_vlan_dei_supported[i] = false;
- cap->mark_ip_ecn_tcp_supported[i] = false;
- cap->mark_ip_dscp_supported[i] = false;
- }
-
- return 0;
-}
-
-static int
-otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
- struct rte_tm_level_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct free_rsrcs_rsp *rsp;
- uint16_t hw_lvl;
- int rc;
-
- memset(cap, 0, sizeof(*cap));
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- hw_lvl = nix_tm_lvl2nix(dev, lvl);
-
- if (nix_tm_is_leaf(dev, lvl)) {
- /* Leaf */
- cap->n_nodes_max = dev->tm_leaf_cnt;
- cap->n_nodes_leaf_max = dev->tm_leaf_cnt;
- cap->leaf_nodes_identical = 1;
- cap->leaf.stats_mask =
- RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES;
-
- } else if (lvl == OTX2_TM_LVL_ROOT) {
- /* Root node, aka TL2(vf)/TL1(pf) */
- cap->n_nodes_max = 1;
- cap->n_nodes_nonleaf_max = 1;
- cap->non_leaf_nodes_identical = 1;
-
- cap->nonleaf.shaper_private_supported = true;
-		cap->nonleaf.shaper_private_dual_rate_supported =
-			!nix_tm_have_tl1_access(dev);
- cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_packet_mode_supported = 1;
- cap->nonleaf.shaper_private_byte_mode_supported = 1;
-
- cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
- cap->nonleaf.sched_sp_n_priorities_max =
- nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
-
- if (nix_tm_have_tl1_access(dev))
- cap->nonleaf.stats_mask =
- RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
- } else if ((lvl < OTX2_TM_LVL_MAX) &&
- (hw_lvl < NIX_TXSCH_LVL_CNT)) {
- /* TL2, TL3, TL4, MDQ */
- cap->n_nodes_max = rsp->schq[hw_lvl];
- cap->n_nodes_nonleaf_max = cap->n_nodes_max;
- cap->non_leaf_nodes_identical = 1;
-
- cap->nonleaf.shaper_private_supported = true;
- cap->nonleaf.shaper_private_dual_rate_supported = true;
- cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_packet_mode_supported = 1;
- cap->nonleaf.shaper_private_byte_mode_supported = 1;
-
- /* MDQ doesn't support Strict Priority */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
- else
- cap->nonleaf.sched_n_children_max =
- rsp->schq[hw_lvl - 1];
- cap->nonleaf.sched_sp_n_priorities_max =
- nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
- } else {
- /* unsupported level */
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-		return -EINVAL;
- }
- return 0;
-}
-
-static int
-otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_node_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct free_rsrcs_rsp *rsp;
- int rc, hw_lvl, lvl;
-
- memset(cap, 0, sizeof(*cap));
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- hw_lvl = tm_node->hw_lvl;
- lvl = tm_node->lvl;
-
- /* Leaf node */
- if (nix_tm_is_leaf(dev, lvl)) {
- cap->stats_mask = RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES;
- return 0;
- }
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- /* Non Leaf Shaper */
- cap->shaper_private_supported = true;
- cap->shaper_private_dual_rate_supported =
-		(hw_lvl != NIX_TXSCH_LVL_TL1);
- cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->shaper_private_packet_mode_supported = 1;
- cap->shaper_private_byte_mode_supported = 1;
-
- /* Non Leaf Scheduler */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
- else
- cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
-
- cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_children_per_group_max =
- cap->nonleaf.sched_n_children_max;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
-
- if (hw_lvl == NIX_TXSCH_LVL_TL1)
- cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
- return 0;
-}
-
-static int
-otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
- uint32_t profile_id,
- struct rte_tm_shaper_params *params,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile;
-
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-		error->message = "shaper profile ID already exists";
- return -EINVAL;
- }
-
- /* Committed rate and burst size can be enabled/disabled */
- if (params->committed.size || params->committed.rate) {
- if (params->committed.size < MIN_SHAPER_BURST ||
- params->committed.size > MAX_SHAPER_BURST) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
- return -EINVAL;
- } else if (!shaper_rate_to_nix(params->committed.rate * 8,
- NULL, NULL, NULL)) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
- error->message = "shaper committed rate invalid";
- return -EINVAL;
- }
- }
-
- /* Peak rate and burst size can be enabled/disabled */
- if (params->peak.size || params->peak.rate) {
- if (params->peak.size < MIN_SHAPER_BURST ||
- params->peak.size > MAX_SHAPER_BURST) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
- return -EINVAL;
- } else if (!shaper_rate_to_nix(params->peak.rate * 8,
- NULL, NULL, NULL)) {
-			error->type =
-				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE;
- error->message = "shaper peak rate invalid";
- return -EINVAL;
- }
- }
-
- if (params->pkt_length_adjust < NIX_LENGTH_ADJUST_MIN ||
- params->pkt_length_adjust > NIX_LENGTH_ADJUST_MAX) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
- error->message = "length adjust invalid";
- return -EINVAL;
- }
-
- profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
- sizeof(struct otx2_nix_tm_shaper_profile), 0);
- if (!profile)
- return -ENOMEM;
-
- profile->shaper_profile_id = profile_id;
- rte_memcpy(&profile->params, params,
- sizeof(struct rte_tm_shaper_params));
- TAILQ_INSERT_TAIL(&dev->shaper_profile_list, profile, shaper);
-
-	otx2_tm_dbg("Added TM shaper profile %u, "
-		    "pir %" PRIu64 ", pbs %" PRIu64 ", cir %" PRIu64
-		    ", cbs %" PRIu64 ", adj %u, pkt mode %d",
- profile_id,
- params->peak.rate * 8,
- params->peak.size,
- params->committed.rate * 8,
- params->committed.size,
- params->pkt_length_adjust,
- params->packet_mode);
-
- /* Translate rate as bits per second */
- profile->params.peak.rate = profile->params.peak.rate * 8;
- profile->params.committed.rate = profile->params.committed.rate * 8;
- /* Always use PIR for single rate shaping */
- if (!params->peak.rate && params->committed.rate) {
- profile->params.peak = profile->params.committed;
- memset(&profile->params.committed, 0,
- sizeof(profile->params.committed));
- }
-
- /* update min rate */
- nix_tm_shaper_profile_update_min(dev);
- return 0;
-}
-
-static int
-otx2_nix_tm_shaper_profile_delete(struct rte_eth_dev *eth_dev,
- uint32_t profile_id,
- struct rte_tm_error *error)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-
- if (!profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-		error->message = "shaper profile ID does not exist";
- return -EINVAL;
- }
-
- if (profile->reference_count) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
- error->message = "shaper profile in use";
- return -EINVAL;
- }
-
- otx2_tm_dbg("Removing TM shaper profile %u", profile_id);
- TAILQ_REMOVE(&dev->shaper_profile_list, profile, shaper);
- rte_free(profile);
-
- /* update min rate */
- nix_tm_shaper_profile_update_min(dev);
- return 0;
-}
-
-static int
-otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
- uint32_t parent_node_id, uint32_t priority,
- uint32_t weight, uint32_t lvl,
- struct rte_tm_node_params *params,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile = NULL;
- struct otx2_nix_tm_node *parent_node;
- int rc, pkt_mode, clear_on_fail = 0;
- uint32_t exp_next_lvl, i;
- uint32_t profile_id;
- uint16_t hw_lvl;
-
- /* we don't support dynamic updates */
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "dynamic update not supported";
- return -EIO;
- }
-
-	/* Leaf nodes have to be at the same priority */
- if (nix_tm_is_leaf(dev, lvl) && priority != 0) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "queue shapers must be priority 0";
- return -EIO;
- }
-
- parent_node = nix_tm_node_search(dev, parent_node_id, true);
-
- /* find the right level */
- if (lvl == RTE_TM_NODE_LEVEL_ID_ANY) {
- if (parent_node_id == RTE_TM_NODE_ID_NULL) {
- lvl = OTX2_TM_LVL_ROOT;
- } else if (parent_node) {
- lvl = parent_node->lvl + 1;
- } else {
-			/* Neither a proper parent nor a proper level id given */
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "invalid parent node id";
- return -ERANGE;
- }
- }
-
- /* Translate rte_tm level id's to nix hw level id's */
- hw_lvl = nix_tm_lvl2nix(dev, lvl);
- if (hw_lvl == NIX_TXSCH_LVL_CNT &&
- !nix_tm_is_leaf(dev, lvl)) {
- error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
- error->message = "invalid level id";
- return -ERANGE;
- }
-
- if (node_id < dev->tm_leaf_cnt)
- exp_next_lvl = NIX_TXSCH_LVL_SMQ;
- else
- exp_next_lvl = hw_lvl + 1;
-
- /* Check if there is no parent node yet */
- if (hw_lvl != dev->otx2_tm_root_lvl &&
- (!parent_node || parent_node->hw_lvl != exp_next_lvl)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "invalid parent node id";
- return -EINVAL;
- }
-
- /* Check if a node already exists */
- if (nix_tm_node_search(dev, node_id, true)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "node already exists";
- return -EINVAL;
- }
-
- if (!nix_tm_is_leaf(dev, lvl)) {
- /* Check if shaper profile exists for non leaf node */
- profile_id = params->shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && !profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "invalid shaper profile";
- return -EINVAL;
- }
-
- /* Minimum static priority count is 1 */
- if (!params->nonleaf.n_sp_priorities ||
- params->nonleaf.n_sp_priorities > TXSCH_TLX_SP_PRIO_MAX) {
- error->type =
- RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
- error->message = "invalid sp priorities";
- return -EINVAL;
- }
-
- pkt_mode = 0;
- /* Validate weight mode */
- for (i = 0; i < params->nonleaf.n_sp_priorities &&
- params->nonleaf.wfq_weight_mode; i++) {
- pkt_mode = !params->nonleaf.wfq_weight_mode[i];
- if (pkt_mode == !params->nonleaf.wfq_weight_mode[0])
- continue;
-
- error->type =
- RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
- error->message = "unsupported weight mode";
- return -EINVAL;
- }
-
- if (profile && params->nonleaf.n_sp_priorities &&
- pkt_mode != profile->params.packet_mode) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
- error->message = "shaper wfq packet mode mismatch";
- return -EINVAL;
- }
- }
-
-	/* Check that siblings have no second DWRR group and no holes in priority */
- if (validate_prio(dev, lvl, parent_node_id, priority, error))
- return -EINVAL;
-
- if (weight > MAX_SCHED_WEIGHT) {
- error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "max weight exceeded";
- return -EINVAL;
- }
-
- rc = nix_tm_node_add_to_list(dev, node_id, parent_node_id,
- priority, weight, hw_lvl,
- lvl, true, params);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- /* cleanup user added nodes */
- if (clear_on_fail)
- nix_tm_free_resources(dev, NIX_TM_NODE_USER,
- NIX_TM_NODE_USER, false);
- error->message = "failed to add node";
- return rc;
- }
- error->type = RTE_TM_ERROR_TYPE_NONE;
- return 0;
-}
-
-static int
-otx2_nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node, *child_node;
- struct otx2_nix_tm_shaper_profile *profile;
- uint32_t profile_id;
-
- /* we don't support dynamic updates yet */
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "hierarchy exists";
- return -EIO;
- }
-
- if (node_id == RTE_TM_NODE_ID_NULL) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "invalid node id";
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- /* Check for any existing children */
- TAILQ_FOREACH(child_node, &dev->node_list, node) {
- if (child_node->parent == tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "children exist";
- return -EINVAL;
- }
- }
-
- /* Remove shaper profile reference */
- profile_id = tm_node->params.shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-	if (profile)
-		profile->reference_count--;
-
- TAILQ_REMOVE(&dev->node_list, tm_node, node);
- rte_free(tm_node);
- return 0;
-}
-
-static int
-nix_tm_node_suspend_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error, bool suspend)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct nix_txschq_config *req;
- uint16_t flags;
- int rc;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy doesn't exist";
- return -EINVAL;
- }
-
- flags = tm_node->flags;
- flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) :
- (flags | NIX_TM_NODE_ENABLED);
-
- if (tm_node->flags == flags)
- return 0;
-
- /* send mbox for state change */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-
- req->lvl = tm_node->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node, suspend,
- req->reg, req->regval);
- rc = send_tm_reqval(mbox, req, error);
- if (!rc)
- tm_node->flags = flags;
- return rc;
-}
-
-static int
-otx2_nix_tm_node_suspend(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- return nix_tm_node_suspend_resume(eth_dev, node_id, error, true);
-}
-
-static int
-otx2_nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
-}
-
-static int
-otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
- int clear_on_fail,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- uint32_t leaf_cnt = 0;
- int rc;
-
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy exists";
- return -EINVAL;
- }
-
- /* Check if we have all the leaf nodes */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->flags & NIX_TM_NODE_USER &&
- tm_node->id < dev->tm_leaf_cnt)
- leaf_cnt++;
- }
-
- if (leaf_cnt != dev->tm_leaf_cnt) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "incomplete hierarchy";
- return -EINVAL;
- }
-
-	/*
-	 * Disable xmit; it will be re-enabled once the
-	 * new topology is in place.
-	 */
- rc = nix_xmit_disable(eth_dev);
- if (rc) {
- otx2_err("failed to disable TX, rc=%d", rc);
- return -EIO;
- }
-
- /* Delete default/ratelimit tree */
- if (dev->tm_flags & (NIX_TM_DEFAULT_TREE | NIX_TM_RATE_LIMIT_TREE)) {
- rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "failed to free default resources";
- return rc;
- }
- dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE |
- NIX_TM_RATE_LIMIT_TREE);
- }
-
-	/* Free up user-allocated resources */
- rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER,
- NIX_TM_NODE_USER, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "failed to free user resources";
- return rc;
- }
-
- rc = nix_tm_alloc_resources(eth_dev, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "alloc resources failed";
- /* TODO should we restore default config ? */
- if (clear_on_fail)
- nix_tm_free_resources(dev, 0, 0, false);
- return rc;
- }
-
- error->type = RTE_TM_ERROR_TYPE_NONE;
- dev->tm_flags |= NIX_TM_COMMITTED;
- return 0;
-}
-
-static int
-otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
- uint32_t node_id,
- uint32_t profile_id,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile = NULL;
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct nix_txschq_config *req;
- uint8_t k;
- int rc;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node || nix_tm_is_leaf(dev, tm_node->lvl)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "invalid node";
- return -EINVAL;
- }
-
- if (profile_id == tm_node->params.shaper_profile_id)
- return 0;
-
- if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (!profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-			error->message = "shaper profile ID does not exist";
- return -EINVAL;
- }
- }
-
- if (profile && profile->params.packet_mode != tm_node->pkt_mode) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "shaper profile pkt mode mismatch";
- return -EINVAL;
- }
-
- tm_node->params.shaper_profile_id = profile_id;
-
- /* Nothing to do if not yet committed */
- if (!(dev->tm_flags & NIX_TM_COMMITTED))
- return 0;
-
- tm_node->flags &= ~NIX_TM_NODE_ENABLED;
-
- /* Flush the specific node with SW_XOFF */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = tm_node->hw_lvl;
- k = prepare_tm_sw_xoff(tm_node, true, req->reg, req->regval);
- req->num_regs = k;
-
- rc = send_tm_reqval(mbox, req, error);
- if (rc)
- return rc;
-
- shaper_default_red_algo(dev, tm_node, profile);
-
- /* Update the PIR/CIR and clear SW XOFF */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = prepare_tm_shaper_reg(tm_node, profile, req->reg, req->regval);
-
- k += prepare_tm_sw_xoff(tm_node, false, &req->reg[k], &req->regval[k]);
-
- req->num_regs = k;
- rc = send_tm_reqval(mbox, req, error);
- if (!rc)
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- return rc;
-}
-
-static int
-otx2_nix_tm_node_parent_update(struct rte_eth_dev *eth_dev,
- uint32_t node_id, uint32_t new_parent_id,
- uint32_t priority, uint32_t weight,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_nix_tm_node *new_parent;
- struct nix_txschq_config *req;
- uint8_t k;
- int rc;
-
- if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy doesn't exist";
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
-	/* Parent id is valid only for non-root nodes */
- if (tm_node->hw_lvl != dev->otx2_tm_root_lvl) {
- new_parent = nix_tm_node_search(dev, new_parent_id, true);
- if (!new_parent) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "no such parent node";
- return -EINVAL;
- }
-
- /* Current support is only for dynamic weight update */
- if (tm_node->parent != new_parent ||
- tm_node->priority != priority) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "only weight update supported";
- return -EINVAL;
- }
- }
-
- /* Skip if no change */
- if (tm_node->weight == weight)
- return 0;
-
- tm_node->weight = weight;
-
- /* For leaf nodes, SQ CTX needs update */
- if (nix_tm_is_leaf(dev, tm_node->lvl)) {
- /* Update SQ quantum data on the fly */
- rc = nix_sq_sched_data(dev, tm_node, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "sq sched data update failed";
- return rc;
- }
- } else {
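-		/* Non-leaf weight update sequence: XOFF the parent,
-		 * then this node and all its siblings, rewrite the
-		 * scheduling registers, and XON everything in reverse
-		 * order so traffic never crosses a half-updated
-		 * subtree.
-		 */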
- /* XOFF Parent node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->parent->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node->parent, true,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XOFF this node and all other siblings */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = 0;
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- k += prepare_tm_sw_xoff(sibling, true, &req->reg[k],
- &req->regval[k]);
- }
- req->num_regs = k;
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* Update new weight for current node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
- req->num_regs = prepare_tm_sched_reg(dev, tm_node,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XON this node and all other siblings */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = 0;
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- k += prepare_tm_sw_xoff(sibling, false, &req->reg[k],
- &req->regval[k]);
- }
- req->num_regs = k;
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XON Parent node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->parent->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node->parent, false,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
- }
- return 0;
-}
-
-static int
-otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_node_stats *stats,
- uint64_t *stats_mask, int clear,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- uint64_t reg, val;
- int64_t *addr;
- int rc = 0;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- if (!(tm_node->flags & NIX_TM_NODE_HWRES)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "HW resources not allocated";
- return -EINVAL;
- }
-
- /* Stats support only for leaf node or TL1 root */
- if (nix_tm_is_leaf(dev, tm_node->lvl)) {
- reg = (((uint64_t)tm_node->id) << 32);
-
- /* Packets */
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->n_pkts = val - tm_node->last_pkts;
-
- /* Bytes */
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->n_bytes = val - tm_node->last_bytes;
-
- if (clear) {
- tm_node->last_pkts = stats->n_pkts;
- tm_node->last_bytes = stats->n_bytes;
- }
-
- *stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
-
- } else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "stats read error";
-
- /* RED Drop packets */
- reg = NIX_AF_TL1X_DROPPED_PACKETS(tm_node->hw_id);
- rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
- if (rc)
- goto exit;
- stats->leaf.n_pkts_dropped[RTE_COLOR_RED] =
- val - tm_node->last_pkts;
-
- /* RED Drop bytes */
- reg = NIX_AF_TL1X_DROPPED_BYTES(tm_node->hw_id);
- rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
- if (rc)
- goto exit;
- stats->leaf.n_bytes_dropped[RTE_COLOR_RED] =
- val - tm_node->last_bytes;
-
- /* Clear stats */
- if (clear) {
- tm_node->last_pkts =
- stats->leaf.n_pkts_dropped[RTE_COLOR_RED];
- tm_node->last_bytes =
- stats->leaf.n_bytes_dropped[RTE_COLOR_RED];
- }
-
- *stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
-
- } else {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "unsupported node";
- rc = -EINVAL;
- }
-
-exit:
- return rc;
-}
-
-const struct rte_tm_ops otx2_tm_ops = {
- .node_type_get = otx2_nix_tm_node_type_get,
-
- .capabilities_get = otx2_nix_tm_capa_get,
- .level_capabilities_get = otx2_nix_tm_level_capa_get,
- .node_capabilities_get = otx2_nix_tm_node_capa_get,
-
- .shaper_profile_add = otx2_nix_tm_shaper_profile_add,
- .shaper_profile_delete = otx2_nix_tm_shaper_profile_delete,
-
- .node_add = otx2_nix_tm_node_add,
- .node_delete = otx2_nix_tm_node_delete,
- .node_suspend = otx2_nix_tm_node_suspend,
- .node_resume = otx2_nix_tm_node_resume,
- .hierarchy_commit = otx2_nix_tm_hierarchy_commit,
-
- .node_shaper_update = otx2_nix_tm_node_shaper_update,
- .node_parent_update = otx2_nix_tm_node_parent_update,
- .node_stats_read = otx2_nix_tm_node_stats_read,
-};
-
-static int
-nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t def = eth_dev->data->nb_tx_queues;
- struct rte_tm_node_params params;
- uint32_t leaf_parent, i;
- int rc = 0, leaf_level;
-
- /* Default params */
-	memset(&params, 0, sizeof(params));
- params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
-
- if (nix_tm_have_tl1_access(dev)) {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL1,
-					     OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto exit;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
-					     OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
-					     OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
-					     OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
-					     OTX2_TM_LVL_SCH4, false, &params);
- if (rc)
- goto exit;
-
- leaf_parent = def + 4;
- leaf_level = OTX2_TM_LVL_QUEUE;
- } else {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
-					     OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
-					     OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
-					     OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
-					     OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto exit;
-
- leaf_parent = def + 3;
- leaf_level = OTX2_TM_LVL_SCH4;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
-					     leaf_level, false, &params);
- if (rc)
- break;
- }
-
-exit:
- return rc;
-}
-
-void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- TAILQ_INIT(&dev->node_list);
- TAILQ_INIT(&dev->shaper_profile_list);
- dev->tm_rate_min = 1E9; /* 1Gbps */
-}
-
-int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
- int rc;
-
- /* Free up all resources already held */
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc) {
-		otx2_err("Failed to free up existing resources, rc=%d", rc);
- return rc;
- }
-
- /* Clear shaper profiles */
- nix_tm_clear_shaper_profiles(dev);
- dev->tm_flags = NIX_TM_DEFAULT_TREE;
-
-	/* Disable TL1 static priority when VFs are enabled, as
-	 * otherwise the VFs' TL2 nodes would need runtime
-	 * reallocation to support a specific PF topology.
-	 */
- if (pci_dev->max_vfs)
- dev->tm_flags |= NIX_TM_TL1_NO_SP;
-
- rc = nix_tm_prepare_default_tree(eth_dev);
- if (rc != 0)
- return rc;
-
- rc = nix_tm_alloc_resources(eth_dev, false);
- if (rc != 0)
- return rc;
- dev->tm_leaf_cnt = sq_cnt;
-
- return 0;
-}
-
-static int
-nix_tm_prepare_rate_limited_tree(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t def = eth_dev->data->nb_tx_queues;
- struct rte_tm_node_params params;
- uint32_t leaf_parent, i;
- int rc = 0;
-
- memset(&params, 0, sizeof(params));
-
- if (nix_tm_have_tl1_access(dev)) {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL1,
- OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH3, false, &params);
- if (rc)
- goto error;
- leaf_parent = def + 3;
-
- /* Add per queue SMQ nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
- leaf_parent,
- 0, DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH4,
- false, &params);
- if (rc)
- goto error;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i,
- leaf_parent + 1 + i, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- OTX2_TM_LVL_QUEUE,
- false, &params);
- if (rc)
- goto error;
- }
-
- return 0;
- }
-
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_ROOT, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH1, false, &params);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH2, false, &params);
- if (rc)
- goto error;
- leaf_parent = def + 2;
-
- /* Add per queue SMQ nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
- leaf_parent,
- 0, DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH3,
- false, &params);
- if (rc)
- goto error;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i, leaf_parent + 1 + i, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- OTX2_TM_LVL_SCH4,
- false, &params);
- if (rc)
- break;
- }
-error:
- return rc;
-}
-
-static int
-otx2_nix_tm_rate_limit_mdq(struct rte_eth_dev *eth_dev,
- struct otx2_nix_tm_node *tm_node,
- uint64_t tx_rate)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile profile;
- struct otx2_mbox *mbox = dev->mbox;
- volatile uint64_t *reg, *regval;
- struct nix_txschq_config *req;
- uint16_t flags;
- uint8_t k = 0;
- int rc;
-
- flags = tm_node->flags;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_MDQ;
- reg = req->reg;
- regval = req->regval;
-
- if (tx_rate == 0) {
- k += prepare_tm_sw_xoff(tm_node, true, &reg[k], &regval[k]);
- flags &= ~NIX_TM_NODE_ENABLED;
- goto exit;
- }
-
- if (!(flags & NIX_TM_NODE_ENABLED)) {
- k += prepare_tm_sw_xoff(tm_node, false, &reg[k], &regval[k]);
- flags |= NIX_TM_NODE_ENABLED;
- }
-
- /* Use only PIR for rate limit */
- memset(&profile, 0, sizeof(profile));
- profile.params.peak.rate = tx_rate;
- /* Minimum burst of ~4us worth of Tx bytes */
- profile.params.peak.size = RTE_MAX(NIX_MAX_HW_FRS,
- (4ull * tx_rate) / (1E6 * 8));
- if (!dev->tm_rate_min || dev->tm_rate_min > tx_rate)
- dev->tm_rate_min = tx_rate;
-
- k += prepare_tm_shaper_reg(tm_node, &profile, ®[k], ®val[k]);
-exit:
- req->num_regs = k;
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- tm_node->flags = flags;
- return 0;
-}
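
The peak burst sizing above targets roughly 4 us worth of traffic at the requested rate; a worked example of the arithmetic (the NIX_MAX_HW_FRS value of 9212 bytes is an assumption for illustration)::

    /* tx_rate = 100 Gbps: (4ull * 100e9) / (1e6 * 8) = 50000 bytes,
     * so RTE_MAX(9212, 50000) picks the 4 us figure.
     * tx_rate = 1 Gbps: the same formula yields 500 bytes and the
     * max-frame-size floor (NIX_MAX_HW_FRS) wins instead.
     */
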
-
-int
-otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
- uint16_t queue_idx, uint16_t tx_rate_mbps)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t tx_rate = tx_rate_mbps * (uint64_t)1E6;
- struct otx2_nix_tm_node *tm_node;
- int rc;
-
- /* Check for supported revisions */
- if (otx2_dev_is_95xx_Ax(dev) ||
- otx2_dev_is_96xx_Ax(dev))
- return -EINVAL;
-
- if (queue_idx >= eth_dev->data->nb_tx_queues)
- return -EINVAL;
-
- if (!(dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
- !(dev->tm_flags & NIX_TM_RATE_LIMIT_TREE))
- goto error;
-
- if ((dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
- eth_dev->data->nb_tx_queues > 1) {
- /* For a TM topology change, the ethdev needs to be stopped */
- if (eth_dev->data->dev_started)
- return -EBUSY;
-
- /*
- * Disable transmit; it will be re-enabled
- * once the new topology is in place.
- */
- rc = nix_xmit_disable(eth_dev);
- if (rc) {
- otx2_err("failed to disable TX, rc=%d", rc);
- return -EIO;
- }
-
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc < 0) {
- otx2_tm_dbg("failed to free default resources, rc %d",
- rc);
- return -EIO;
- }
-
- rc = nix_tm_prepare_rate_limited_tree(eth_dev);
- if (rc < 0) {
- otx2_tm_dbg("failed to prepare tm tree, rc=%d", rc);
- return rc;
- }
-
- rc = nix_tm_alloc_resources(eth_dev, true);
- if (rc != 0) {
- otx2_tm_dbg("failed to allocate tm tree, rc=%d", rc);
- return rc;
- }
-
- dev->tm_flags &= ~NIX_TM_DEFAULT_TREE;
- dev->tm_flags |= NIX_TM_RATE_LIMIT_TREE;
- }
-
- tm_node = nix_tm_node_search(dev, queue_idx, false);
-
- /* Check if we found a valid leaf node */
- if (!tm_node ||
- !nix_tm_is_leaf(dev, tm_node->lvl) ||
- !tm_node->parent ||
- tm_node->parent->hw_id == UINT32_MAX)
- return -EIO;
-
- return otx2_nix_tm_rate_limit_mdq(eth_dev, tm_node->parent, tx_rate);
-error:
- otx2_tm_dbg("Unsupported TM tree 0x%0x", dev->tm_flags);
- return -EINVAL;
-}
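
Applications reach this driver hook through the generic ethdev API rather than calling it directly; a minimal sketch, with illustrative port, queue and rate values::

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void cap_queue(uint16_t port, uint16_t queue)
    {
        /* Rate is given in Mbps; the PMD converts to bps internally. */
        int rc = rte_eth_set_queue_rate_limit(port, queue, 500);

        if (rc != 0)
            printf("rate limit failed: %d\n", rc);
    }
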
-
-int
-otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *arg)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (!arg)
- return -EINVAL;
-
- /* Check for supported revisions */
- if (otx2_dev_is_95xx_Ax(dev) ||
- otx2_dev_is_96xx_Ax(dev))
- return -EINVAL;
-
- *(const void **)arg = &otx2_tm_ops;
-
- return 0;
-}
-
-int
-otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
-
- /* Xmit is assumed to be disabled */
- /* Free up resources already held */
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc) {
- otx2_err("Failed to freeup existing resources,rc=%d", rc);
- return rc;
- }
-
- /* Clear shaper profiles */
- nix_tm_clear_shaper_profiles(dev);
-
- dev->tm_flags = 0;
- return 0;
-}
-
-int
-otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
- uint32_t *rr_quantum, uint16_t *smq)
-{
- struct otx2_nix_tm_node *tm_node;
- int rc;
-
- /* 0..sq_cnt-1 are leaf nodes */
- if (sq >= dev->tm_leaf_cnt)
- return -EINVAL;
-
- /* Search for internal node first */
- tm_node = nix_tm_node_search(dev, sq, false);
- if (!tm_node)
- tm_node = nix_tm_node_search(dev, sq, true);
-
- /* Check if we found a valid leaf node */
- if (!tm_node || !nix_tm_is_leaf(dev, tm_node->lvl) ||
- !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
- return -EIO;
- }
-
- /* Get SMQ Id of leaf node's parent */
- *smq = tm_node->parent->hw_id;
- *rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc)
- return rc;
- tm_node->flags |= NIX_TM_NODE_ENABLED;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
deleted file mode 100644
index db44d4891f..0000000000
--- a/drivers/net/octeontx2/otx2_tm.h
+++ /dev/null
@@ -1,176 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TM_H__
-#define __OTX2_TM_H__
-
-#include <stdbool.h>
-
-#include <rte_tm_driver.h>
-
-#define NIX_TM_DEFAULT_TREE BIT_ULL(0)
-#define NIX_TM_COMMITTED BIT_ULL(1)
-#define NIX_TM_RATE_LIMIT_TREE BIT_ULL(2)
-#define NIX_TM_TL1_NO_SP BIT_ULL(3)
-
-struct otx2_eth_dev;
-
-void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
-int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
- uint32_t *rr_quantum, uint16_t *smq);
-int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
- uint16_t queue_idx, uint16_t tx_rate);
-int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
-int otx2_nix_sq_flush_post(void *_txq);
-int otx2_nix_sq_enable(void *_txq);
-int otx2_nix_get_link(struct otx2_eth_dev *dev);
-int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
-
-struct otx2_nix_tm_node {
- TAILQ_ENTRY(otx2_nix_tm_node) node;
- uint32_t id;
- uint32_t hw_id;
- uint32_t priority;
- uint32_t weight;
- uint16_t lvl;
- uint16_t hw_lvl;
- uint32_t rr_prio;
- uint32_t rr_num;
- uint32_t max_prio;
- uint32_t parent_hw_id;
- uint32_t flags:16;
-#define NIX_TM_NODE_HWRES BIT_ULL(0)
-#define NIX_TM_NODE_ENABLED BIT_ULL(1)
-#define NIX_TM_NODE_USER BIT_ULL(2)
-#define NIX_TM_NODE_RED_DISCARD BIT_ULL(3)
- /* Shaper algorithm for RED state @NIX_REDALG_E */
- uint32_t red_algo:2;
- uint32_t pkt_mode:1;
-
- struct otx2_nix_tm_node *parent;
- struct rte_tm_node_params params;
-
- /* Last stats */
- uint64_t last_pkts;
- uint64_t last_bytes;
-};
-
-struct otx2_nix_tm_shaper_profile {
- TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
- uint32_t shaper_profile_id;
- uint32_t reference_count;
- struct rte_tm_shaper_params params; /* Rate in bits/sec */
-};
-
-struct shaper_params {
- uint64_t burst_exponent;
- uint64_t burst_mantissa;
- uint64_t div_exp;
- uint64_t exponent;
- uint64_t mantissa;
- uint64_t burst;
- uint64_t rate;
-};
-
-TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
-TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
-
-#define MAX_SCHED_WEIGHT ((uint8_t)~0)
-#define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1)
-#define NIX_TM_WEIGHT_TO_RR_QUANTUM(__weight) \
- ((((__weight) & MAX_SCHED_WEIGHT) * \
- NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
-
-/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT */
-/* = NIX_MAX_HW_MTU */
-#define DEFAULT_RR_WEIGHT 71
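
As a quick sanity check of the weight-to-quantum mapping above::

    /* weight = DEFAULT_RR_WEIGHT (71):
     *   (71 & 0xFF) * ((1 << 24) - 1) / 0xFF
     *   = 71 * 16777215 / 255
     *   = 71 * 65793          (16777215 is an exact multiple of 255)
     *   = 4671303
     * weight = MAX_SCHED_WEIGHT (255) maps to the full quantum 16777215.
     */
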
-
-/** NIX rate limits */
-#define MAX_RATE_DIV_EXP 12
-#define MAX_RATE_EXPONENT 0xf
-#define MAX_RATE_MANTISSA 0xff
-
-#define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
-
-/* NIX rate calculation in Bits/Sec
- * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
- * << NIX_*_PIR[RATE_EXPONENT]) / 256
- * PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
- *
- * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
- * << NIX_*_CIR[RATE_EXPONENT]) / 256
- * CIR = (2E6 * CIR_ADD / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
- */
-#define SHAPER_RATE(exponent, mantissa, div_exp) \
- ((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent)))\
- / (((1ull << (div_exp)) * 256)))
-
-/* 96xx rate limits in Bits/Sec */
-#define MIN_SHAPER_RATE \
- SHAPER_RATE(0, 0, MAX_RATE_DIV_EXP)
-
-#define MAX_SHAPER_RATE \
- SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
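
Substituting the limits into SHAPER_RATE() gives the range::

    /* MIN_SHAPER_RATE = 2E6 * (256 + 0) / ((1 << 12) * 256)
     *                 = 2E6 / 4096 = 488 bps (integer division)
     * MAX_SHAPER_RATE = 2E6 * ((256 + 255) << 15) / 256
     *                 = 2E6 * 65408 = 130,816,000,000 bps (~130 Gbps)
     */
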
-
-/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not -ve */
-#define NIX_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1)
-#define NIX_LENGTH_ADJUST_MAX 255
-
-/** TM Shaper - low level operations */
-
-/** NIX burst limits */
-#define MAX_BURST_EXPONENT 0xf
-#define MAX_BURST_MANTISSA 0xff
-
-/* NIX burst calculation
- * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA])
- * << (NIX_*_PIR[BURST_EXPONENT] + 1))
- * / 256
- *
- * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA])
- * << (NIX_*_CIR[BURST_EXPONENT] + 1))
- * / 256
- */
-#define SHAPER_BURST(exponent, mantissa) \
- (((256 + (mantissa)) << ((exponent) + 1)) / 256)
-
-/** Shaper burst limits */
-#define MIN_SHAPER_BURST \
- SHAPER_BURST(0, 0)
-
-#define MAX_SHAPER_BURST \
- SHAPER_BURST(MAX_BURST_EXPONENT,\
- MAX_BURST_MANTISSA)
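
And likewise for the burst limits::

    /* MIN_SHAPER_BURST = (256 + 0)   << 1  / 256 = 2 bytes
     * MAX_SHAPER_BURST = (256 + 255) << 16 / 256 = 130816 bytes (~128 KB)
     */
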
-
-/* Default TL1 priority and Quantum from AF */
-#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1)
-#define TXSCH_TL1_DFLT_RR_PRIO 1
-
-#define TXSCH_TLX_SP_PRIO_MAX 10
-
-static inline const char *
-nix_hwlvl2str(uint32_t hw_lvl)
-{
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_MDQ:
- return "SMQ/MDQ";
- case NIX_TXSCH_LVL_TL4:
- return "TL4";
- case NIX_TXSCH_LVL_TL3:
- return "TL3";
- case NIX_TXSCH_LVL_TL2:
- return "TL2";
- case NIX_TXSCH_LVL_TL1:
- return "TL1";
- default:
- break;
- }
-
- return "???";
-}
-
-#endif /* __OTX2_TM_H__ */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
deleted file mode 100644
index e95184632f..0000000000
--- a/drivers/net/octeontx2/otx2_tx.c
+++ /dev/null
@@ -1,1077 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_vect.h>
-
-#include "otx2_ethdev.h"
-
-#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do { \
- /* Cached value is low; update fc_cache_pkts */ \
- if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
- /* Multiply with sqe_per_sqb to express in pkts */ \
- (txq)->fc_cache_pkts = \
- ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) << \
- (txq)->sqes_per_sqb_log2; \
- /* Check again whether there is room */ \
- if (unlikely((txq)->fc_cache_pkts < (pkts))) \
- return 0; \
- } \
-} while (0)
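
The macro avoids touching the shared flow-control word on every burst; a worked example of the refresh arithmetic (numbers illustrative)::

    /* nb_sqb_bufs_adj = 512 SQBs, *fc_mem = 500 SQBs consumed,
     * sqes_per_sqb_log2 = 5 (32 SQEs per SQB):
     *   fc_cache_pkts = (512 - 500) << 5 = 384 packets of room.
     * The shared counter is re-read only when the cached figure
     * drops below the requested burst size.
     */
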
-
-
-static __rte_always_inline uint16_t
-nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- struct otx2_eth_txq *txq = tx_queue;
- uint16_t i;
- const rte_iova_t io_addr = txq->io_addr;
- void *lmt_addr = txq->lmt_addr;
- uint64_t lso_tun_fmt;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
-
- /* Perform header writes before barrier for TSO */
- if (flags & NIX_TX_OFFLOAD_TSO_F) {
- lso_tun_fmt = txq->lso_tun_fmt;
- for (i = 0; i < pkts; i++)
- otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
- }
-
- /* Commit any changes to the packet here, as no further changes
- * are made to it unless the MBUF_NOFF (no fast free) path is taken.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- for (i = 0; i < pkts; i++) {
- otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
- /* Passing number of segdw as 4: HDR + EXT + SG + SMEM */
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- tx_pkts[i]->ol_flags, 4, flags);
- otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
- }
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- return pkts;
-}
-
-static __rte_always_inline uint16_t
-nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- struct otx2_eth_txq *txq = tx_queue;
- uint64_t i;
- const rte_iova_t io_addr = txq->io_addr;
- void *lmt_addr = txq->lmt_addr;
- uint64_t lso_tun_fmt;
- uint16_t segdw;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
-
- /* Perform header writes before barrier for TSO */
- if (flags & NIX_TX_OFFLOAD_TSO_F) {
- lso_tun_fmt = txq->lso_tun_fmt;
- for (i = 0; i < pkts; i++)
- otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
- }
-
- /* Commit any changes to the packet here, as no further changes
- * are made to it unless the MBUF_NOFF (no fast free) path is taken.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- for (i = 0; i < pkts; i++) {
- otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
- segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- tx_pkts[i]->ol_flags, segdw,
- flags);
- otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
- }
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- return pkts;
-}
-
-#if defined(RTE_ARCH_ARM64)
-
-#define NIX_DESCS_PER_LOOP 4
-static __rte_always_inline uint16_t
-nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
- uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
- uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3;
- uint64x2_t senddesc01_w0, senddesc23_w0;
- uint64x2_t senddesc01_w1, senddesc23_w1;
- uint64x2_t sgdesc01_w0, sgdesc23_w0;
- uint64x2_t sgdesc01_w1, sgdesc23_w1;
- struct otx2_eth_txq *txq = tx_queue;
- uint64_t *lmt_addr = txq->lmt_addr;
- rte_iova_t io_addr = txq->io_addr;
- uint64x2_t ltypes01, ltypes23;
- uint64x2_t xtmp128, ytmp128;
- uint64x2_t xmask01, xmask23;
- uint64x2_t cmd00, cmd01;
- uint64x2_t cmd10, cmd11;
- uint64x2_t cmd20, cmd21;
- uint64x2_t cmd30, cmd31;
- uint64_t lmt_status, i;
- uint16_t pkts_left;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
- pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- /* Commit any changes to the packet here, as no further changes
- * are made to it unless the MBUF_NOFF (no fast free) path is taken.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]);
- senddesc23_w0 = senddesc01_w0;
- senddesc01_w1 = vdupq_n_u64(0);
- senddesc23_w1 = senddesc01_w1;
- sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]);
- sgdesc23_w0 = sgdesc01_w0;
-
- for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
- /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
- senddesc01_w0 = vbicq_u64(senddesc01_w0,
- vdupq_n_u64(0xFFFFFFFF));
- sgdesc01_w0 = vbicq_u64(sgdesc01_w0,
- vdupq_n_u64(0xFFFFFFFF));
-
- senddesc23_w0 = senddesc01_w0;
- sgdesc23_w0 = sgdesc01_w0;
-
- /* Move mbuf pointers to the buf_iova field */
- mbuf0 = (uint64_t *)tx_pkts[0];
- mbuf1 = (uint64_t *)tx_pkts[1];
- mbuf2 = (uint64_t *)tx_pkts[2];
- mbuf3 = (uint64_t *)tx_pkts[3];
-
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mbuf, buf_iova));
- /*
- * Get each mbuf's ol_flags, iova, pktlen, dataoff:
- * dataoff_iovaX.D[0] = iova,
- * dataoff_iovaX.D[1](15:0) = mbuf->dataoff
- * len_olflagsX.D[0] = ol_flags,
- * len_olflagsX.D[1](63:32) = mbuf->pkt_len
- */
- dataoff_iova0 = vld1q_u64(mbuf0);
- len_olflags0 = vld1q_u64(mbuf0 + 2);
- dataoff_iova1 = vld1q_u64(mbuf1);
- len_olflags1 = vld1q_u64(mbuf1 + 2);
- dataoff_iova2 = vld1q_u64(mbuf2);
- len_olflags2 = vld1q_u64(mbuf2 + 2);
- dataoff_iova3 = vld1q_u64(mbuf3);
- len_olflags3 = vld1q_u64(mbuf3 + 2);
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- struct rte_mbuf *mbuf;
- /* Set don't free bit if reference count > 1 */
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
- offsetof(struct rte_mbuf, buf_iova));
-
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask01, 0);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask01, 1);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask23, 0);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask23, 1);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Ensuring mbuf fields which got updated in
- * otx2_nix_prefree_seg are written before LMTST.
- */
- rte_io_wmb();
- } else {
- struct rte_mbuf *mbuf;
- /* Mark mempool object as "put" since
- * it is freed by NIX
- */
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
- RTE_SET_USED(mbuf);
- }
-
- /* Move mbuf pointers to the pool field */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
-
- if (flags &
- (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
- /* Get tx_offload for ol2, ol3, l2, l3 lengths */
- /*
- * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
- * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
- */
-
- asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" :
- [a]"+w"(senddesc01_w1) :
- [in]"r"(mbuf0 + 2) : "memory");
-
- asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" :
- [a]"+w"(senddesc01_w1) :
- [in]"r"(mbuf1 + 2) : "memory");
-
- asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" :
- [b]"+w"(senddesc23_w1) :
- [in]"r"(mbuf2 + 2) : "memory");
-
- asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" :
- [b]"+w"(senddesc23_w1) :
- [in]"r"(mbuf3 + 2) : "memory");
-
- /* Get pool pointer alone */
- mbuf0 = (uint64_t *)*mbuf0;
- mbuf1 = (uint64_t *)*mbuf1;
- mbuf2 = (uint64_t *)*mbuf2;
- mbuf3 = (uint64_t *)*mbuf3;
- } else {
- /* Get pool pointer alone */
- mbuf0 = (uint64_t *)*mbuf0;
- mbuf1 = (uint64_t *)*mbuf1;
- mbuf2 = (uint64_t *)*mbuf2;
- mbuf3 = (uint64_t *)*mbuf3;
- }
-
- const uint8x16_t shuf_mask2 = {
- 0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- xtmp128 = vzip2q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip2q_u64(len_olflags2, len_olflags3);
-
- /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */
- const uint64x2_t and_mask0 = {
- 0xFFFFFFFFFFFFFFFF,
- 0x000000000000FFFF,
- };
-
- dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0);
- dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0);
- dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0);
- dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0);
-
- /*
- * Pick only 16 bits of pktlen present at bits 63:32
- * and place them at bits 15:0.
- */
- xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2);
- ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2);
-
- /* Add pairwise to get dataoff + iova in sgdesc_w1 */
- sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1);
- sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3);
-
- /* OR both sgdesc_w0 and senddesc_w0 with the 16 bits of
- * pktlen at 15:0 position.
- */
- sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128);
- sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128);
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128);
-
- if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /*
- * Lookup table to translate ol_flags to
- * il3/il4 types. But we still use ol3/ol4 types in
- * senddesc_w1 as only one header processing is enabled.
- */
- const uint8x16_t tbl = {
- /* [0-15] = il4type:il3type */
- 0x04, /* none (IPv6 assumed) */
- 0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
- 0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
- 0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
- 0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
- 0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
- 0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
- 0x02, /* RTE_MBUF_F_TX_IPV4 */
- 0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
- 0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
- 0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
- 0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- };
-
- /* Extract olflags to translate to iltypes */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(47):L3_LEN(9):L2_LEN(7+z)
- * E(47):L3_LEN(9):L2_LEN(7+z)
- */
- senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1);
- senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1);
-
- /* Move OLFLAGS bits 55:52 to 51:48,
- * prepending zeros on the byte; the rest
- * are don't-cares.
- */
- xtmp128 = vshrq_n_u8(xtmp128, 4);
- ytmp128 = vshrq_n_u8(ytmp128, 4);
- /*
- * E(48):L3_LEN(8):L2_LEN(z+7)
- * E(48):L3_LEN(8):L2_LEN(z+7)
- */
- const int8x16_t tshft3 = {
- -1, 0, 8, 8, 8, 8, 8, 8,
- -1, 0, 8, 8, 8, 8, 8, 8,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Do the lookup */
- ltypes01 = vqtbl1q_u8(tbl, xtmp128);
- ltypes23 = vqtbl1q_u8(tbl, ytmp128);
-
- /* Point at pool_id so the aura can be
- * retrieved with ld1 below.
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only the relevant fields, i.e. bits 48:55 of iltype,
- * and place them in ol3/ol4type of senddesc_w1
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
- * a [E(32):E(16):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E(32):E(16):(OL3+OL2):OL2]
- * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u16(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u16(senddesc23_w1, 8));
-
- /* Create first half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
-
- } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /*
- * Lookup table to translate ol_flags to
- * ol3/ol4 types.
- */
-
- const uint8x16_t tbl = {
- /* [0-15] = ol4type:ol3type */
- 0x00, /* none */
- 0x03, /* OUTER_IP_CKSUM */
- 0x02, /* OUTER_IPV4 */
- 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
- 0x04, /* OUTER_IPV6 */
- 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM */
- 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */
- 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */
- 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- };
-
- /* Extract olflags to translate to iltypes */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(47):OL3_LEN(9):OL2_LEN(7+z)
- * E(47):OL3_LEN(9):OL2_LEN(7+z)
- */
- const uint8x16_t shuf_mask5 = {
- 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
- senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
-
- /* Extract outer ol flags only */
- const uint64x2_t o_cksum_mask = {
- 0x1C00020000000000,
- 0x1C00020000000000,
- };
-
- xtmp128 = vandq_u64(xtmp128, o_cksum_mask);
- ytmp128 = vandq_u64(ytmp128, o_cksum_mask);
-
- /* Extract OUTER_UDP_CKSUM bit 41 and
- * move it to bit 61
- */
-
- xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
- ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
-
- /* Shift oltype by 2 to start nibble from BIT(56)
- * instead of BIT(58)
- */
- xtmp128 = vshrq_n_u8(xtmp128, 2);
- ytmp128 = vshrq_n_u8(ytmp128, 2);
- /*
- * E(48):L3_LEN(8):L2_LEN(z+7)
- * E(48):L3_LEN(8):L2_LEN(z+7)
- */
- const int8x16_t tshft3 = {
- -1, 0, 8, 8, 8, 8, 8, 8,
- -1, 0, 8, 8, 8, 8, 8, 8,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Do the lookup */
- ltypes01 = vqtbl1q_u8(tbl, xtmp128);
- ltypes23 = vqtbl1q_u8(tbl, ytmp128);
-
- /* Point at pool_id so the aura can be
- * retrieved with ld1 below.
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only the relevant fields, i.e. bits 56:63 of oltype,
- * and place them in ol3/ol4type of senddesc_w1
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
- * a [E(32):E(16):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E(32):E(16):(OL3+OL2):OL2]
- * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u16(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u16(senddesc23_w1, 8));
-
- /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
-
- } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /* Lookup table to translate ol_flags to
- * ol4type, ol3type, il4type, il3type of senddesc_w1
- */
- const uint8x16x2_t tbl = {
- {
- {
- /* [0-15] = il4type:il3type */
- 0x04, /* none (IPv6) */
- 0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
- 0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
- 0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
- 0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- 0x02, /* RTE_MBUF_F_TX_IPV4 */
- 0x12, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x22, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x32, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- 0x03, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_IP_CKSUM
- */
- 0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- },
-
- {
- /* [16-31] = ol4type:ol3type */
- 0x00, /* none */
- 0x03, /* OUTER_IP_CKSUM */
- 0x02, /* OUTER_IPV4 */
- 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
- 0x04, /* OUTER_IPV6 */
- 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM */
- 0x33, /* OUTER_UDP_CKSUM |
- * OUTER_IP_CKSUM
- */
- 0x32, /* OUTER_UDP_CKSUM |
- * OUTER_IPV4
- */
- 0x33, /* OUTER_UDP_CKSUM |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- 0x34, /* OUTER_UDP_CKSUM |
- * OUTER_IPV6
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- },
- }
- };
-
- /* Extract olflags to translate to oltype & iltype */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
- * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
- */
- const uint32x4_t tshft_4 = {
- 1, 0,
- 1, 0,
- };
- senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4);
- senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4);
-
- /*
- * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
- * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
- */
- const uint8x16_t shuf_mask5 = {
- 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
- senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
-
- /* Extract outer and inner header ol_flags */
- const uint64x2_t oi_cksum_mask = {
- 0x1CF0020000000000,
- 0x1CF0020000000000,
- };
-
- xtmp128 = vandq_u64(xtmp128, oi_cksum_mask);
- ytmp128 = vandq_u64(ytmp128, oi_cksum_mask);
-
- /* Extract OUTER_UDP_CKSUM bit 41 and
- * move it to bit 61
- */
-
- xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
- ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
-
- /* Shift right oltype by 2 and iltype by 4
- * to start oltype nibble from BIT(58)
- * instead of BIT(56) and iltype nibble from BIT(48)
- * instead of BIT(52).
- */
- const int8x16_t tshft5 = {
- 8, 8, 8, 8, 8, 8, -4, -2,
- 8, 8, 8, 8, 8, 8, -4, -2,
- };
-
- xtmp128 = vshlq_u8(xtmp128, tshft5);
- ytmp128 = vshlq_u8(ytmp128, tshft5);
- /*
- * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
- * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
- */
- const int8x16_t tshft3 = {
- -1, 0, -1, 0, 0, 0, 0, 0,
- -1, 0, -1, 0, 0, 0, 0, 0,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Mark Bit(4) of oltype */
- const uint64x2_t oi_cksum_mask2 = {
- 0x1000000000000000,
- 0x1000000000000000,
- };
-
- xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2);
- ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2);
-
- /* Do the lookup */
- ltypes01 = vqtbl2q_u8(tbl, xtmp128);
- ltypes23 = vqtbl2q_u8(tbl, ytmp128);
-
- /* Point at pool_id so the aura can be
- * retrieved with ld1 below.
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
- /* Pick only the relevant fields, i.e. bits 48:55 of iltype and
- * bits 56:63 of oltype, and place them in the corresponding
- * positions in senddesc_w1.
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from
- * l3len, l2len, ol3len, ol2len.
- * a [E(32):L3(8):L2(8):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2]
- * a = a + (a << 16)
- * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2]
- * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u32(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u32(senddesc23_w1, 8));
-
- /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u32(senddesc01_w1, 16));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u32(senddesc23_w1, 16));
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
- } else {
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
-
- /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
- }
-
- do {
- vst1q_u64(lmt_addr, cmd00);
- vst1q_u64(lmt_addr + 2, cmd01);
- vst1q_u64(lmt_addr + 4, cmd10);
- vst1q_u64(lmt_addr + 6, cmd11);
- vst1q_u64(lmt_addr + 8, cmd20);
- vst1q_u64(lmt_addr + 10, cmd21);
- vst1q_u64(lmt_addr + 12, cmd30);
- vst1q_u64(lmt_addr + 14, cmd31);
- lmt_status = otx2_lmt_submit(io_addr);
-
- } while (lmt_status == 0);
- tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
- }
-
- if (unlikely(pkts_left))
- pkts += nix_xmit_pkts(tx_queue, tx_pkts, pkts_left, cmd, flags);
-
- return pkts;
-}
-
-#else
-static __rte_always_inline uint16_t
-nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- RTE_SET_USED(tx_queue);
- RTE_SET_USED(tx_pkts);
- RTE_SET_USED(pkts);
- RTE_SET_USED(cmd);
- RTE_SET_USED(flags);
- return 0;
-}
-#endif
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[sz]; \
- \
- /* For TSO, inner checksum is a must */ \
- if (((flags) & NIX_TX_OFFLOAD_TSO_F) && \
- !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \
- return 0; \
- return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- \
- /* For TSO, inner checksum is a must */ \
- if (((flags) & NIX_TX_OFFLOAD_TSO_F) && \
- !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \
- return 0; \
- return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd, \
- (flags) | NIX_TX_MULTI_SEG_F); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[sz]; \
- \
- /* VLAN, TSTAMP and TSO are not supported by vec */ \
- if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \
- (flags) & NIX_TX_OFFLOAD_TSTAMP_F || \
- (flags) & NIX_TX_OFFLOAD_TSO_F) \
- return 0; \
- return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, cmd, (flags)); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-static inline void
-pick_tx_func(struct rte_eth_dev *eth_dev,
- const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* [SEC] [TSO] [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
- eth_dev->tx_pkt_burst = tx_burst
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
-}
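
pick_tx_func() folds each boolean offload flag down to a 0/1 array index; a minimal standalone sketch of the same dispatch idea (names are hypothetical, not driver API)::

    typedef int (*burst_fn)(void);

    static int xmit_plain(void) { return 0; }
    static int xmit_csum(void)  { return 1; }

    /* One table axis per boolean feature; 2^n entries in total. */
    static const burst_fn table[2] = { xmit_plain, xmit_csum };

    static burst_fn pick(unsigned int offload_flags, unsigned int csum_bit)
    {
        /* !!(...) collapses any set bit to 0 or 1, exactly as the
         * driver does with its NIX_TX_OFFLOAD_*_F masks. */
        return table[!!(offload_flags & csum_bit)];
    }
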
-
-void
-otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- if (dev->scalar_ena ||
- (dev->tx_offload_flags &
- (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F |
- NIX_TX_OFFLOAD_TSO_F)))
- pick_tx_func(eth_dev, nix_eth_tx_burst);
- else
- pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-
- if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
-
- rte_mb();
-}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
deleted file mode 100644
index 4bbd5a390f..0000000000
--- a/drivers/net/octeontx2/otx2_tx.h
+++ /dev/null
@@ -1,791 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TX_H__
-#define __OTX2_TX_H__
-
-#define NIX_TX_OFFLOAD_NONE (0)
-#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0)
-#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
-#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2)
-#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3)
-#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4)
-#define NIX_TX_OFFLOAD_TSO_F BIT(5)
-#define NIX_TX_OFFLOAD_SECURITY_F BIT(6)
-
-/* Flags to control the xmit_prepare function.
- * Defined from the end backwards to denote that they are
- * not used as offload flags when picking the function.
- */
-#define NIX_TX_MULTI_SEG_F BIT(15)
-
-#define NIX_TX_NEED_SEND_HDR_W1 \
- (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \
- NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)
-
-#define NIX_TX_NEED_EXT_HDR \
- (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F | \
- NIX_TX_OFFLOAD_TSO_F)
-
-#define NIX_UDP_TUN_BITMASK \
- ((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
- (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
-
-#define NIX_LSO_FORMAT_IDX_TSOV4 (0)
-#define NIX_LSO_FORMAT_IDX_TSOV6 (1)
-
-/* Function to determine the number of Tx subdescriptors required
- * when the extended subdescriptor is enabled.
- */
-static __rte_always_inline int
-otx2_nix_tx_ext_subs(const uint16_t flags)
-{
- return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 :
- ((flags & (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)) ?
- 1 : 0);
-}
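
In other words::

    /* flags & NIX_TX_OFFLOAD_TSTAMP_F            -> 2 (SEND_EXT + SEND_MEM)
     * flags & (VLAN_QINQ_F or TSO_F) only        -> 1 (SEND_EXT)
     * anything else                              -> 0 (no extended subdesc)
     */
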
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
- const uint64_t ol_flags, const uint16_t no_segdw,
- const uint16_t flags)
-{
- if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
- struct nix_send_mem_s *send_mem;
- uint16_t off = (no_segdw - 1) << 1;
- const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
-
- send_mem = (struct nix_send_mem_s *)(cmd + off);
- if (flags & NIX_TX_MULTI_SEG_F) {
- /* Retrieving the default desc values */
- cmd[off] = send_mem_desc[6];
-
- /* Use a compiler barrier to avoid violation of C
- * aliasing rules.
- */
- rte_compiler_barrier();
- }
-
- /* For packets that do not have RTE_MBUF_F_TX_IEEE1588_TMST set,
- * the Tx timestamp must not be recorded; hence change the alg type
- * to NIX_SENDMEMALG_SET and move the send memory address field to
- * the next 8 bytes so that the actual registered Tx timestamp
- * address is not corrupted.
- */
- send_mem->alg = NIX_SENDMEMALG_SETTSTMP - (is_ol_tstamp);
-
- send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] +
- (is_ol_tstamp));
- }
-}
-
-static __rte_always_inline uint64_t
-otx2_pktmbuf_detach(struct rte_mbuf *m)
-{
- struct rte_mempool *mp = m->pool;
- uint32_t mbuf_size, buf_len;
- struct rte_mbuf *md;
- uint16_t priv_size;
- uint16_t refcount;
-
- /* Update refcount of direct mbuf */
- md = rte_mbuf_from_indirect(m);
- refcount = rte_mbuf_refcnt_update(md, -1);
-
- priv_size = rte_pktmbuf_priv_size(mp);
- mbuf_size = (uint32_t)(sizeof(struct rte_mbuf) + priv_size);
- buf_len = rte_pktmbuf_data_room_size(mp);
-
- m->priv_size = priv_size;
- m->buf_addr = (char *)m + mbuf_size;
- m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
- m->buf_len = (uint16_t)buf_len;
- rte_pktmbuf_reset_headroom(m);
- m->data_len = 0;
- m->ol_flags = 0;
- m->next = NULL;
- m->nb_segs = 1;
-
- /* Now indirect mbuf is safe to free */
- rte_pktmbuf_free(m);
-
- if (refcount == 0) {
- rte_mbuf_refcnt_set(md, 1);
- md->data_len = 0;
- md->ol_flags = 0;
- md->next = NULL;
- md->nb_segs = 1;
- return 0;
- } else {
- return 1;
- }
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_prefree_seg(struct rte_mbuf *m)
-{
- if (likely(rte_mbuf_refcnt_read(m) == 1)) {
- if (!RTE_MBUF_DIRECT(m))
- return otx2_pktmbuf_detach(m);
-
- m->next = NULL;
- m->nb_segs = 1;
- return 0;
- } else if (rte_mbuf_refcnt_update(m, -1) == 0) {
- if (!RTE_MBUF_DIRECT(m))
- return otx2_pktmbuf_detach(m);
-
- rte_mbuf_refcnt_set(m, 1);
- m->next = NULL;
- m->nb_segs = 1;
- return 0;
- }
-
- /* Mbuf has a refcount greater than 1, so it need not be freed */
- return 1;
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
-{
- uint64_t mask, ol_flags = m->ol_flags;
-
- if (flags & NIX_TX_OFFLOAD_TSO_F &&
- (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
- uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
- uint16_t *iplen, *oiplen, *oudplen;
- uint16_t lso_sb, paylen;
-
- mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
- lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
- m->l2_len + m->l3_len + m->l4_len;
-
- /* Reduce payload len from base headers */
- paylen = m->pkt_len - lso_sb;
-
- /* Get iplen position assuming no tunnel hdr */
- iplen = (uint16_t *)(mdata + m->l2_len +
- (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
- /* Handle tunnel tso */
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
- const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
- ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
-
- oiplen = (uint16_t *)(mdata + m->outer_l2_len +
- (2 << !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)));
- *oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
- paylen);
-
- /* Update format for UDP tunneled packet */
- if (is_udp_tun) {
- oudplen = (uint16_t *)(mdata + m->outer_l2_len +
- m->outer_l3_len + 4);
- *oudplen =
- rte_cpu_to_be_16(rte_be_to_cpu_16(*oudplen) -
- paylen);
- }
-
- /* Update iplen position to inner ip hdr */
- iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
- m->l4_len + (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
- }
-
- *iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
- }
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
- const uint64_t lso_tun_fmt)
-{
- struct nix_send_ext_s *send_hdr_ext;
- struct nix_send_hdr_s *send_hdr;
- uint64_t ol_flags = 0, mask;
- union nix_send_hdr_w1_u w1;
- union nix_send_sg_s *sg;
-
- send_hdr = (struct nix_send_hdr_s *)cmd;
- if (flags & NIX_TX_NEED_EXT_HDR) {
- send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
- sg = (union nix_send_sg_s *)(cmd + 4);
- /* Clear previous markings */
- send_hdr_ext->w0.lso = 0;
- send_hdr_ext->w1.u = 0;
- } else {
- sg = (union nix_send_sg_s *)(cmd + 2);
- }
-
- if (flags & NIX_TX_NEED_SEND_HDR_W1) {
- ol_flags = m->ol_flags;
- w1.u = 0;
- }
-
- if (!(flags & NIX_TX_MULTI_SEG_F)) {
- send_hdr->w0.total = m->data_len;
- send_hdr->w0.aura =
- npa_lf_aura_handle_to_aura(m->pool->pool_id);
- }
-
- /*
- * L3type: 2 => IPV4
- * 3 => IPV4 with csum
- * 4 => IPV6
- * L3type and L3ptr need to be set for either
- * L3 csum or L4 csum or LSO.
- */
-
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
- const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
- const uint8_t ol3type =
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
-
- /* Outer L3 */
- w1.ol3type = ol3type;
- mask = 0xffffull << ((!!ol3type) << 4);
- w1.ol3ptr = ~mask & m->outer_l2_len;
- w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len);
-
- /* Outer L4 */
- w1.ol4type = csum + (csum << 1);
-
- /* Inner L3 */
- w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
- w1.il3ptr = w1.ol4ptr + m->l2_len;
- w1.il4ptr = w1.il3ptr + m->l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
-
- /* Inner L4 */
- w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
-
- /* In case there is no tunnel header, shift the
- * IL3/IL4 fields down so that OL3/OL4 carry the
- * header checksum information.
- */
- mask = !ol3type;
- w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) |
- ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
-
- } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
- const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
- const uint8_t outer_l2_len = m->outer_l2_len;
-
- /* Outer L3 */
- w1.ol3ptr = outer_l2_len;
- w1.ol4ptr = outer_l2_len + m->outer_l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
-
- /* Outer L4 */
- w1.ol4type = csum + (csum << 1);
-
- } else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) {
- const uint8_t l2_len = m->l2_len;
-
- /* Always use OLXPTR and OLXTYPE when only
- * one header is present.
- */
-
- /* Inner L3 */
- w1.ol3ptr = l2_len;
- w1.ol4ptr = l2_len + m->l3_len;
- /* Increment it by 1 if it is IPV4 as 3 is with csum */
- w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
-
- /* Inner L4 */
- w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
- }
-
- if (flags & NIX_TX_NEED_EXT_HDR &&
- flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
- send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
- /* HW will update ptr after vlan0 update */
- send_hdr_ext->w1.vlan1_ins_ptr = 12;
- send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
-
- send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
- /* 2B before end of l2 header */
- send_hdr_ext->w1.vlan0_ins_ptr = 12;
- send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
- }
-
- if (flags & NIX_TX_OFFLOAD_TSO_F &&
- (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
- uint16_t lso_sb;
- uint64_t mask;
-
- mask = -(!w1.il3type);
- lso_sb = (mask & w1.ol4ptr) + (~mask & w1.il4ptr) + m->l4_len;
-
- send_hdr_ext->w0.lso_sb = lso_sb;
- send_hdr_ext->w0.lso = 1;
- send_hdr_ext->w0.lso_mps = m->tso_segsz;
- send_hdr_ext->w0.lso_format =
- NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
- w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
-
- /* Handle tunnel tso */
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
- const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
- ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
- uint8_t shift = is_udp_tun ? 32 : 0;
-
- shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
- shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
-
- w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
- w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
- /* Update format for UDP tunneled packet */
- send_hdr_ext->w0.lso_format = (lso_tun_fmt >> shift);
- }
- }
-
- if (flags & NIX_TX_NEED_SEND_HDR_W1)
- send_hdr->w1.u = w1.u;
-
- if (!(flags & NIX_TX_MULTI_SEG_F)) {
- sg->seg1_size = m->data_len;
- *(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- /* DF bit = 1 if refcount of current mbuf or parent mbuf
- * is greater than 1
- * DF bit = 0 otherwise
- */
- send_hdr->w0.df = otx2_nix_prefree_seg(m);
- /* Ensuring mbuf fields which got updated in
- * otx2_nix_prefree_seg are written before LMTST.
- */
- rte_io_wmb();
- }
- /* Mark mempool object as "put" since it is freed by NIX */
- if (!send_hdr->w0.df)
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
- }
-}
-
-
-static __rte_always_inline void
-otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
- const rte_iova_t io_addr, const uint32_t flags)
-{
- uint64_t lmt_status;
-
- do {
- otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prep_lmt(uint64_t *cmd, void *lmt_addr, const uint32_t flags)
-{
- otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_xmit_submit_lmt(const rte_iova_t io_addr)
-{
- return otx2_lmt_submit(io_addr);
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_xmit_submit_lmt_release(const rte_iova_t io_addr)
-{
- return otx2_lmt_submit_release(io_addr);
-}
-
-static __rte_always_inline uint16_t
-otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
-{
- struct nix_send_hdr_s *send_hdr;
- union nix_send_sg_s *sg;
- struct rte_mbuf *m_next;
- uint64_t *slist, sg_u;
- uint64_t nb_segs;
- uint64_t segdw;
- uint8_t off, i;
-
- send_hdr = (struct nix_send_hdr_s *)cmd;
- send_hdr->w0.total = m->pkt_len;
- send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
-
- if (flags & NIX_TX_NEED_EXT_HDR)
- off = 2;
- else
- off = 0;
-
- sg = (union nix_send_sg_s *)&cmd[2 + off];
- /* Clear sg->u header before use */
- sg->u &= 0xFC00000000000000;
- sg_u = sg->u;
- slist = &cmd[3 + off];
-
- i = 0;
- nb_segs = m->nb_segs;
-
- /* Fill mbuf segments */
- do {
- m_next = m->next;
- sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
- *slist = rte_mbuf_data_iova(m);
- /* Set invert df if buffer is not to be freed by H/W */
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- sg_u |= (otx2_nix_prefree_seg(m) << (i + 55));
- /* Commit changes to mbuf */
- rte_io_wmb();
- }
- /* Mark mempool object as "put" since it is freed by NIX */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
- if (!(sg_u & (1ULL << (i + 55))))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
- rte_io_wmb();
-#endif
- slist++;
- i++;
- nb_segs--;
- if (i > 2 && nb_segs) {
- i = 0;
- /* Next SG subdesc */
- *(uint64_t *)slist = sg_u & 0xFC00000000000000;
- sg->u = sg_u;
- sg->segs = 3;
- sg = (union nix_send_sg_s *)slist;
- sg_u = sg->u;
- slist++;
- }
- m = m_next;
- } while (nb_segs);
-
- sg->u = sg_u;
- sg->segs = i;
- segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
- /* Roundup extra dwords to multiple of 2 */
- segdw = (segdw >> 1) + (segdw & 0x1);
- /* Default dwords */
- segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
- send_hdr->w0.sizem1 = segdw - 1;
-
- return segdw;
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_prep_lmt(uint64_t *cmd, void *lmt_addr, uint16_t segdw)
-{
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr,
- rte_iova_t io_addr, uint16_t segdw)
-{
- uint64_t lmt_status;
-
- do {
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_one_release(uint64_t *cmd, void *lmt_addr,
- rte_iova_t io_addr, uint16_t segdw)
-{
- uint64_t lmt_status;
-
- rte_io_wmb();
- do {
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
-#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
-#define VLAN_F NIX_TX_OFFLOAD_VLAN_QINQ_F
-#define NOFF_F NIX_TX_OFFLOAD_MBUF_NOFF_F
-#define TSP_F NIX_TX_OFFLOAD_TSTAMP_F
-#define TSO_F NIX_TX_OFFLOAD_TSO_F
-#define TX_SEC_F NIX_TX_OFFLOAD_SECURITY_F
-
-/* [SEC] [TSO] [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
-#define NIX_TX_FASTPATH_MODES \
-T(no_offload, 0, 0, 0, 0, 0, 0, 0, 4, \
- NIX_TX_OFFLOAD_NONE) \
-T(l3l4csum, 0, 0, 0, 0, 0, 0, 1, 4, \
- L3L4CSUM_F) \
-T(ol3ol4csum, 0, 0, 0, 0, 0, 1, 0, 4, \
- OL3OL4CSUM_F) \
-T(ol3ol4csum_l3l4csum, 0, 0, 0, 0, 0, 1, 1, 4, \
- OL3OL4CSUM_F | L3L4CSUM_F) \
-T(vlan, 0, 0, 0, 0, 1, 0, 0, 6, \
- VLAN_F) \
-T(vlan_l3l4csum, 0, 0, 0, 0, 1, 0, 1, 6, \
- VLAN_F | L3L4CSUM_F) \
-T(vlan_ol3ol4csum, 0, 0, 0, 0, 1, 1, 0, 6, \
- VLAN_F | OL3OL4CSUM_F) \
-T(vlan_ol3ol4csum_l3l4csum, 0, 0, 0, 0, 1, 1, 1, 6, \
- VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(noff, 0, 0, 0, 1, 0, 0, 0, 4, \
- NOFF_F) \
-T(noff_l3l4csum, 0, 0, 0, 1, 0, 0, 1, 4, \
- NOFF_F | L3L4CSUM_F) \
-T(noff_ol3ol4csum, 0, 0, 0, 1, 0, 1, 0, 4, \
- NOFF_F | OL3OL4CSUM_F) \
-T(noff_ol3ol4csum_l3l4csum, 0, 0, 0, 1, 0, 1, 1, 4, \
- NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(noff_vlan, 0, 0, 0, 1, 1, 0, 0, 6, \
- NOFF_F | VLAN_F) \
-T(noff_vlan_l3l4csum, 0, 0, 0, 1, 1, 0, 1, 6, \
- NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(noff_vlan_ol3ol4csum, 0, 0, 0, 1, 1, 1, 0, 6, \
- NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(noff_vlan_ol3ol4csum_l3l4csum, 0, 0, 0, 1, 1, 1, 1, 6, \
- NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts, 0, 0, 1, 0, 0, 0, 0, 8, \
- TSP_F) \
-T(ts_l3l4csum, 0, 0, 1, 0, 0, 0, 1, 8, \
- TSP_F | L3L4CSUM_F) \
-T(ts_ol3ol4csum, 0, 0, 1, 0, 0, 1, 0, 8, \
- TSP_F | OL3OL4CSUM_F) \
-T(ts_ol3ol4csum_l3l4csum, 0, 0, 1, 0, 0, 1, 1, 8, \
- TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_vlan, 0, 0, 1, 0, 1, 0, 0, 8, \
- TSP_F | VLAN_F) \
-T(ts_vlan_l3l4csum, 0, 0, 1, 0, 1, 0, 1, 8, \
- TSP_F | VLAN_F | L3L4CSUM_F) \
-T(ts_vlan_ol3ol4csum, 0, 0, 1, 0, 1, 1, 0, 8, \
- TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(ts_vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 0, 1, 1, 1, 8, \
- TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_noff, 0, 0, 1, 1, 0, 0, 0, 8, \
- TSP_F | NOFF_F) \
-T(ts_noff_l3l4csum, 0, 0, 1, 1, 0, 0, 1, 8, \
- TSP_F | NOFF_F | L3L4CSUM_F) \
-T(ts_noff_ol3ol4csum, 0, 0, 1, 1, 0, 1, 0, 8, \
- TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(ts_noff_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 0, 1, 1, 8, \
- TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_noff_vlan, 0, 0, 1, 1, 1, 0, 0, 8, \
- TSP_F | NOFF_F | VLAN_F) \
-T(ts_noff_vlan_l3l4csum, 0, 0, 1, 1, 1, 0, 1, 8, \
- TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(ts_noff_vlan_ol3ol4csum, 0, 0, 1, 1, 1, 1, 0, 8, \
- TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(ts_noff_vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 1, 1, 1, 8, \
- TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
- \
-T(tso, 0, 1, 0, 0, 0, 0, 0, 6, \
- TSO_F) \
-T(tso_l3l4csum, 0, 1, 0, 0, 0, 0, 1, 6, \
- TSO_F | L3L4CSUM_F) \
-T(tso_ol3ol4csum, 0, 1, 0, 0, 0, 1, 0, 6, \
- TSO_F | OL3OL4CSUM_F) \
-T(tso_ol3ol4csum_l3l4csum, 0, 1, 0, 0, 0, 1, 1, 6, \
- TSO_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_vlan, 0, 1, 0, 0, 1, 0, 0, 6, \
- TSO_F | VLAN_F) \
-T(tso_vlan_l3l4csum, 0, 1, 0, 0, 1, 0, 1, 6, \
- TSO_F | VLAN_F | L3L4CSUM_F) \
-T(tso_vlan_ol3ol4csum, 0, 1, 0, 0, 1, 1, 0, 6, \
- TSO_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_vlan_ol3ol4csum_l3l4csum, 0, 1, 0, 0, 1, 1, 1, 6, \
- TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_noff, 0, 1, 0, 1, 0, 0, 0, 6, \
- TSO_F | NOFF_F) \
-T(tso_noff_l3l4csum, 0, 1, 0, 1, 0, 0, 1, 6, \
- TSO_F | NOFF_F | L3L4CSUM_F) \
-T(tso_noff_ol3ol4csum, 0, 1, 0, 1, 0, 1, 0, 6, \
- TSO_F | NOFF_F | OL3OL4CSUM_F) \
-T(tso_noff_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 0, 1, 1, 6, \
- TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_noff_vlan, 0, 1, 0, 1, 1, 0, 0, 6, \
- TSO_F | NOFF_F | VLAN_F) \
-T(tso_noff_vlan_l3l4csum, 0, 1, 0, 1, 1, 0, 1, 6, \
- TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(tso_noff_vlan_ol3ol4csum, 0, 1, 0, 1, 1, 1, 0, 6, \
- TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 1, 1, 1, 6, \
- TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts, 0, 1, 1, 0, 0, 0, 0, 8, \
- TSO_F | TSP_F) \
-T(tso_ts_l3l4csum, 0, 1, 1, 0, 0, 0, 1, 8, \
- TSO_F | TSP_F | L3L4CSUM_F) \
-T(tso_ts_ol3ol4csum, 0, 1, 1, 0, 0, 1, 0, 8, \
- TSO_F | TSP_F | OL3OL4CSUM_F) \
-T(tso_ts_ol3ol4csum_l3l4csum, 0, 1, 1, 0, 0, 1, 1, 8, \
- TSO_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_vlan, 0, 1, 1, 0, 1, 0, 0, 8, \
- TSO_F | TSP_F | VLAN_F) \
-T(tso_ts_vlan_l3l4csum, 0, 1, 1, 0, 1, 0, 1, 8, \
- TSO_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(tso_ts_vlan_ol3ol4csum, 0, 1, 1, 0, 1, 1, 0, 8, \
- TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_ts_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 0, 1, 1, 1, 8, \
- TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_noff, 0, 1, 1, 1, 0, 0, 0, 8, \
- TSO_F | TSP_F | NOFF_F) \
-T(tso_ts_noff_l3l4csum, 0, 1, 1, 1, 0, 0, 1, 8, \
- TSO_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(tso_ts_noff_ol3ol4csum, 0, 1, 1, 1, 0, 1, 0, 8, \
- TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(tso_ts_noff_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 0, 1, 1, 8, \
- TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_noff_vlan, 0, 1, 1, 1, 1, 0, 0, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F) \
-T(tso_ts_noff_vlan_l3l4csum, 0, 1, 1, 1, 1, 0, 1, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(tso_ts_noff_vlan_ol3ol4csum, 0, 1, 1, 1, 1, 1, 0, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_ts_noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 1, 1, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec, 1, 0, 0, 0, 0, 0, 0, 8, \
- TX_SEC_F) \
-T(sec_l3l4csum, 1, 0, 0, 0, 0, 0, 1, 8, \
- TX_SEC_F | L3L4CSUM_F) \
-T(sec_ol3ol4csum, 1, 0, 0, 0, 0, 1, 0, 8, \
- TX_SEC_F | OL3OL4CSUM_F) \
-T(sec_ol3ol4csum_l3l4csum, 1, 0, 0, 0, 0, 1, 1, 8, \
- TX_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_vlan, 1, 0, 0, 0, 1, 0, 0, 8, \
- TX_SEC_F | VLAN_F) \
-T(sec_vlan_l3l4csum, 1, 0, 0, 0, 1, 0, 1, 8, \
- TX_SEC_F | VLAN_F | L3L4CSUM_F) \
-T(sec_vlan_ol3ol4csum, 1, 0, 0, 0, 1, 1, 0, 8, \
- TX_SEC_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_vlan_ol3ol4csum_l3l4csum, 1, 0, 0, 0, 1, 1, 1, 8, \
- TX_SEC_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_noff, 1, 0, 0, 1, 0, 0, 0, 8, \
- TX_SEC_F | NOFF_F) \
-T(sec_noff_l3l4csum, 1, 0, 0, 1, 0, 0, 1, 8, \
- TX_SEC_F | NOFF_F | L3L4CSUM_F) \
-T(sec_noff_ol3ol4csum, 1, 0, 0, 1, 0, 1, 0, 8, \
- TX_SEC_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_noff_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 0, 1, 1, 8, \
- TX_SEC_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_noff_vlan, 1, 0, 0, 1, 1, 0, 0, 8, \
- TX_SEC_F | NOFF_F | VLAN_F) \
-T(sec_noff_vlan_l3l4csum, 1, 0, 0, 1, 1, 0, 1, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_noff_vlan_ol3ol4csum, 1, 0, 0, 1, 1, 1, 0, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 1, 1, 1, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts, 1, 0, 1, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSP_F) \
-T(sec_ts_l3l4csum, 1, 0, 1, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSP_F | L3L4CSUM_F) \
-T(sec_ts_ol3ol4csum, 1, 0, 1, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSP_F | OL3OL4CSUM_F) \
-T(sec_ts_ol3ol4csum_l3l4csum, 1, 0, 1, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_vlan, 1, 0, 1, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSP_F | VLAN_F) \
-T(sec_ts_vlan_l3l4csum, 1, 0, 1, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(sec_ts_vlan_ol3ol4csum, 1, 0, 1, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_ts_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_noff, 1, 0, 1, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F) \
-T(sec_ts_noff_l3l4csum, 1, 0, 1, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(sec_ts_noff_ol3ol4csum, 1, 0, 1, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_ts_noff_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_noff_vlan, 1, 0, 1, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F) \
-T(sec_ts_noff_vlan_l3l4csum, 1, 0, 1, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_ts_noff_vlan_ol3ol4csum, 1, 0, 1, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso, 1, 1, 0, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F) \
-T(sec_tso_l3l4csum, 1, 1, 0, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | L3L4CSUM_F) \
-T(sec_tso_ol3ol4csum, 1, 1, 0, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | OL3OL4CSUM_F) \
-T(sec_tso_ol3ol4csum_l3l4csum, 1, 1, 0, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_vlan, 1, 1, 0, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | VLAN_F) \
-T(sec_tso_vlan_l3l4csum, 1, 1, 0, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_vlan_ol3ol4csum, 1, 1, 0, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_vlan_ol3ol4csum_l3l4csum, 1, 1, 0, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_noff, 1, 1, 0, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F) \
-T(sec_tso_noff_l3l4csum, 1, 1, 0, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F) \
-T(sec_tso_noff_ol3ol4csum, 1, 1, 0, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_tso_noff_ol3ol4csum_l3l4csum, 1, 1, 0, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_noff_vlan, 1, 1, 0, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F) \
-T(sec_tso_noff_vlan_l3l4csum, 1, 1, 0, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_noff_vlan_ol3ol4csum, 1, 1, 0, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, \
- 1, 1, 0, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts, 1, 1, 1, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F) \
-T(sec_tso_ts_l3l4csum, 1, 1, 1, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | L3L4CSUM_F) \
-T(sec_tso_ts_ol3ol4csum, 1, 1, 1, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_ol3ol4csum_l3l4csum, 1, 1, 1, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_ts_vlan, 1, 1, 1, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F) \
-T(sec_tso_ts_vlan_l3l4csum, 1, 1, 1, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_ts_vlan_ol3ol4csum, 1, 1, 1, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts_noff, 1, 1, 1, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F) \
-T(sec_tso_ts_noff_l3l4csum, 1, 1, 1, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(sec_tso_ts_noff_ol3ol4csum, 1, 1, 1, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_noff_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts_noff_vlan, 1, 1, 1, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F) \
-T(sec_tso_ts_noff_vlan_l3l4csum, 1, 1, 1, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)\
-T(sec_tso_ts_noff_vlan_ol3ol4csum, 1, 1, 1, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | \
- OL3OL4CSUM_F) \
-T(sec_tso_ts_noff_vlan_ol3ol4csum_l3l4csum, \
- 1, 1, 1, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | \
- OL3OL4CSUM_F | L3L4CSUM_F)
-#endif /* __OTX2_TX_H__ */
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
deleted file mode 100644
index cce643b7b5..0000000000
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ /dev/null
@@ -1,1035 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_malloc.h>
-#include <rte_tailq.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-
-#define VLAN_ID_MATCH 0x1
-#define VTAG_F_MATCH 0x2
-#define MAC_ADDR_MATCH 0x4
-#define QINQ_F_MATCH 0x8
-#define VLAN_DROP 0x10
-#define DEF_F_ENTRY 0x20
-
-enum vtag_cfg_dir {
- VTAG_TX,
- VTAG_RX
-};
-
-static int
-nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
- uint32_t entry, const int enable)
-{
- struct npc_mcam_ena_dis_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- if (enable)
- req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox);
- else
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
-
- req->entry = entry;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- return rc;
-}
-
-static void
-nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry, bool qinq, bool drop)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int pcifunc = otx2_pfvf_func(dev->pf, dev->vf);
- uint64_t action = 0, vtag_action = 0;
-
- action = NIX_RX_ACTIONOP_UCAST;
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
- action = NIX_RX_ACTIONOP_RSS;
- action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
- }
-
- action |= (uint64_t)pcifunc << 4;
- entry->action = action;
-
- if (drop) {
- entry->action &= ~((uint64_t)0xF);
- entry->action |= NIX_RX_ACTIONOP_DROP;
- return;
- }
-
- if (!qinq) {
- /* VTAG0 fields denote CTAG in single vlan case */
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- vtag_action |= (NPC_LID_LB << 8);
- vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
- } else {
- /* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- vtag_action |= (NPC_LID_LB << 8);
- vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR;
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47);
- vtag_action |= ((uint64_t)(NPC_LID_LB) << 40);
- vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32);
- }
-
- entry->vtag_action = vtag_action;
-}
-
-static void
-nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
- int vtag_index)
-{
- union {
- uint64_t reg;
- struct nix_tx_vtag_action_s act;
- } vtag_action;
-
- uint64_t action;
-
- action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
-
- /*
- * Take offset from LA since in case of untagged packet,
- * lbptr is zero.
- */
- if (type == RTE_ETH_VLAN_TYPE_OUTER) {
- vtag_action.act.vtag0_def = vtag_index;
- vtag_action.act.vtag0_lid = NPC_LID_LA;
- vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
- vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR;
- } else {
- vtag_action.act.vtag1_def = vtag_index;
- vtag_action.act.vtag1_lid = NPC_LID_LA;
- vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT;
- vtag_action.act.vtag1_relptr = NIX_TX_VTAGACTION_VTAG1_RELPTR;
- }
-
- entry->action = action;
- entry->vtag_action = vtag_action.reg;
-}
-
-static int
-nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
-{
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- return rc;
-}
-
-static int
-nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
- struct mcam_entry *entry, uint8_t intf, uint8_t ena)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_write_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct msghdr *rsp;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
-
- req->entry = ent_idx;
- req->intf = intf;
- req->enable_entry = ena;
- memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- return rc;
-}
-
-static int
-nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry,
- uint8_t intf, bool drop)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_and_write_entry_req *req;
- struct npc_mcam_alloc_and_write_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox);
-
- if (intf == NPC_MCAM_RX) {
- if (!drop && dev->vlan_info.def_rx_mcam_idx) {
- req->priority = NPC_MCAM_HIGHER_PRIO;
- req->ref_entry = dev->vlan_info.def_rx_mcam_idx;
- } else if (drop && dev->vlan_info.qinq_mcam_idx) {
- req->priority = NPC_MCAM_LOWER_PRIO;
- req->ref_entry = dev->vlan_info.qinq_mcam_idx;
- } else {
- req->priority = NPC_MCAM_ANY_PRIO;
- req->ref_entry = 0;
- }
- } else {
- req->priority = NPC_MCAM_ANY_PRIO;
- req->ref_entry = 0;
- }
-
- req->intf = intf;
- req->enable_entry = 1;
- memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- return rsp->entry;
-}
-
-static void
-nix_vlan_update_mac(struct rte_eth_dev *eth_dev, int mcam_index,
- int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- volatile uint8_t *key_data, *key_mask;
- struct npc_mcam_read_entry_req *req;
- struct npc_mcam_read_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint64_t mcam_data, mcam_mask;
- struct mcam_entry entry;
- uint8_t intf, mcam_ena;
- int idx, rc = -EINVAL;
- uint8_t *mac_addr;
-
- memset(&entry, 0, sizeof(struct mcam_entry));
-
- /* Read entry first */
- req = otx2_mbox_alloc_msg_npc_mcam_read_entry(mbox);
-
- req->entry = mcam_index;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read entry %d", mcam_index);
- return;
- }
-
- entry = rsp->entry_data;
- intf = rsp->intf;
- mcam_ena = rsp->enable;
-
- /* Update mcam address */
- key_data = (volatile uint8_t *)entry.kw;
- key_mask = (volatile uint8_t *)entry.kw_mask;
-
- if (enable) {
- mcam_mask = 0;
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
-
- } else {
- mcam_data = 0ULL;
- mac_addr = dev->mac_addr;
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- mcam_mask = BIT_ULL(48) - 1;
-
- otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
- &mcam_data, mkex->la_xtract.len + 1);
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
- }
-
- /* Write back the mcam entry */
- rc = nix_vlan_mcam_write(eth_dev, mcam_index,
- &entry, intf, mcam_ena);
- if (rc) {
- otx2_err("Failed to write entry %d", mcam_index);
- return;
- }
-}
-
-void
-otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
-
- /* Already in required mode */
- if (enable == vlan->promisc_on)
- return;
-
- /* Update default rx entry */
- if (vlan->def_rx_mcam_idx)
- nix_vlan_update_mac(eth_dev, vlan->def_rx_mcam_idx, enable);
-
- /* Update all other rx filter entries */
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next)
- nix_vlan_update_mac(eth_dev, entry->mcam_idx, enable);
-
- vlan->promisc_on = enable;
-}
-
-/* Configure mcam entry with required MCAM search rules */
-static int
-nix_vlan_mcam_config(struct rte_eth_dev *eth_dev,
- uint16_t vlan_id, uint16_t flags)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- volatile uint8_t *key_data, *key_mask;
- uint64_t mcam_data, mcam_mask;
- struct mcam_entry entry;
- uint8_t *mac_addr;
- int idx, kwi = 0;
-
- memset(&entry, 0, sizeof(struct mcam_entry));
- key_data = (volatile uint8_t *)entry.kw;
- key_mask = (volatile uint8_t *)entry.kw_mask;
-
- /* Channel base extracted to KW0[11:0] */
- entry.kw[kwi] = dev->rx_chan_base;
- entry.kw_mask[kwi] = BIT_ULL(12) - 1;
-
- /* Adds vlan_id & LB CTAG flag to MCAM KW */
- if (flags & VLAN_ID_MATCH) {
- entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG_QINQ)
- << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |=
- (0xF & ~(NPC_LT_LB_CTAG ^ NPC_LT_LB_STAG_QINQ))
- << mkex->lb_lt_offset;
-
- mcam_data = (uint16_t)vlan_id;
- mcam_mask = (BIT_ULL(16) - 1);
- otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off,
- &mcam_data, mkex->lb_xtract.len);
- otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off,
- &mcam_mask, mkex->lb_xtract.len);
- }
-
- /* Adds LB STAG flag to MCAM KW */
- if (flags & QINQ_F_MATCH) {
- entry.kw[kwi] |= NPC_LT_LB_STAG_QINQ << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
- }
-
- /* Adds LB CTAG & LB STAG flags to MCAM KW */
- if (flags & VTAG_F_MATCH) {
- entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG_QINQ)
- << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |=
- (0xF & ~(NPC_LT_LB_CTAG ^ NPC_LT_LB_STAG_QINQ))
- << mkex->lb_lt_offset;
- }
-
- /* Adds port MAC address to MCAM KW */
- if (flags & MAC_ADDR_MATCH) {
- mcam_data = 0ULL;
- mac_addr = dev->mac_addr;
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- mcam_mask = BIT_ULL(48) - 1;
- otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
- &mcam_data, mkex->la_xtract.len + 1);
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
- }
-
- /* VLAN_DROP: for drop action for all vlan packets when filter is on.
- * For QinQ, enable vtag action for both outer & inner tags
- */
- if (flags & VLAN_DROP)
- nix_set_rx_vlan_action(eth_dev, &entry, false, true);
- else if (flags & QINQ_F_MATCH)
- nix_set_rx_vlan_action(eth_dev, &entry, true, false);
- else
- nix_set_rx_vlan_action(eth_dev, &entry, false, false);
-
- if (flags & DEF_F_ENTRY)
- dev->vlan_info.def_rx_mcam_ent = entry;
-
- return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX,
- flags & VLAN_DROP);
-}
-
-/* Installs/Removes/Modifies default rx entry */
-static int
-nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
- bool filter, bool enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- uint16_t flags = 0;
- int mcam_idx, rc;
-
- /* Use default mcam entry to either drop vlan traffic when
- * vlan filter is on or strip vtag when strip is enabled.
- * Allocate default entry which matches port mac address
- * and vtag(ctag/stag) flags with drop action.
- */
- if (!vlan->def_rx_mcam_idx) {
- if (!eth_dev->data->promiscuous)
- flags = MAC_ADDR_MATCH;
-
- if (filter && enable)
- flags |= VTAG_F_MATCH | VLAN_DROP;
- else if (strip && enable)
- flags |= VTAG_F_MATCH;
- else
- return 0;
-
- flags |= DEF_F_ENTRY;
-
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags);
- if (mcam_idx < 0) {
- otx2_err("Failed to config vlan mcam");
- return -mcam_idx;
- }
-
- vlan->def_rx_mcam_idx = mcam_idx;
- return 0;
- }
-
- /* Filter is already enabled, so packets would be dropped anyways. No
- * processing needed for enabling strip wrt mcam entry.
- */
-
- /* Filter disable request */
- if (vlan->filter_on && filter && !enable) {
- vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
-
- /* Free default rx entry only when
- * 1. strip is not on and
- * 2. qinq entry is allocated before default entry.
- */
- if (vlan->strip_on ||
- (vlan->qinq_on && !vlan->qinq_before_def)) {
- if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- RTE_ETH_MQ_RX_RSS)
- vlan->def_rx_mcam_ent.action |=
- NIX_RX_ACTIONOP_RSS;
- else
- vlan->def_rx_mcam_ent.action |=
- NIX_RX_ACTIONOP_UCAST;
- return nix_vlan_mcam_write(eth_dev,
- vlan->def_rx_mcam_idx,
- &vlan->def_rx_mcam_ent,
- NIX_INTF_RX, 1);
- } else {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
- }
-
- /* Filter enable request */
- if (!vlan->filter_on && filter && enable) {
- vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
- vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP;
- return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx,
- &vlan->def_rx_mcam_ent, NIX_INTF_RX, 1);
- }
-
- /* Strip disable request */
- if (vlan->strip_on && strip && !enable) {
- if (!vlan->filter_on &&
- !(vlan->qinq_on && !vlan->qinq_before_def)) {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
- }
-
- return 0;
-}
-
-/* Installs/Removes default tx entry */
-static int
-nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, int vtag_index,
- int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct mcam_entry entry;
- uint16_t pf_func;
- int rc;
-
- if (!vlan->def_tx_mcam_idx && enable) {
- memset(&entry, 0, sizeof(struct mcam_entry));
-
- /* Only pf_func is matched, swap its bytes */
- pf_func = (dev->pf_func & 0xff) << 8;
- pf_func |= (dev->pf_func >> 8) & 0xff;
-
- /* PF Func extracted to KW1[47:32] */
- entry.kw[0] = (uint64_t)pf_func << 32;
- entry.kw_mask[0] = (BIT_ULL(16) - 1) << 32;
-
- nix_set_tx_vlan_action(&entry, type, vtag_index);
- vlan->def_tx_mcam_ent = entry;
-
- return nix_vlan_mcam_alloc_and_write(eth_dev, &entry,
- NIX_INTF_TX, 0);
- }
-
- if (vlan->def_tx_mcam_idx && !enable) {
- rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
-
- return 0;
-}
-
-/* Configure vlan stripping on or off */
-static int
-nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- int rc = -EINVAL;
-
- rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable);
- if (rc) {
- otx2_err("Failed to config default rx entry");
- return rc;
- }
-
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- /* cfg_type = 1 for rx vlan cfg */
- vtag_cfg->cfg_type = VTAG_RX;
-
- if (enable)
- vtag_cfg->rx.strip_vtag = 1;
- else
- vtag_cfg->rx.strip_vtag = 0;
-
- /* Always capture */
- vtag_cfg->rx.capture_vtag = 1;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- /* Use rx vtag type index[0] for now */
- vtag_cfg->rx.vtag_type = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- dev->vlan_info.strip_on = enable;
- return rc;
-}
-
-/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */
-static int
-nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
- uint16_t vlan_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int rc = -EINVAL;
-
- if (!vlan_id && enable) {
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
- enable);
- if (rc) {
- otx2_err("Failed to config vlan mcam");
- return rc;
- }
- dev->vlan_info.filter_on = enable;
- return 0;
- }
-
- /* Enable/disable existing vlan filter entries */
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (vlan_id) {
- if (entry->vlan_id == vlan_id) {
- rc = nix_vlan_mcam_enb_dis(dev,
- entry->mcam_idx,
- enable);
- if (rc)
- return rc;
- }
- } else {
- rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx,
- enable);
- if (rc)
- return rc;
- }
- }
-
- if (!vlan_id && !enable) {
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
- enable);
- if (rc) {
- otx2_err("Failed to config vlan mcam");
- return rc;
- }
- dev->vlan_info.filter_on = enable;
- return 0;
- }
-
- return 0;
-}
-
-/* Enable/disable vlan filtering for the given vlan_id */
-int
-otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
- int on)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int entry_exists = 0;
- int rc = -EINVAL;
- int mcam_idx;
-
- if (!vlan_id) {
- otx2_err("Vlan Id can't be zero");
- return rc;
- }
-
- if (!vlan->def_rx_mcam_idx) {
- otx2_err("Vlan Filtering is disabled, enable it first");
- return rc;
- }
-
- if (on) {
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (entry->vlan_id == vlan_id) {
- /* Vlan entry already exists */
- entry_exists = 1;
- /* Mcam entry already allocated */
- if (entry->mcam_idx) {
- rc = nix_vlan_hw_filter(eth_dev, on,
- vlan_id);
- return rc;
- }
- break;
- }
- }
-
- if (!entry_exists) {
- entry = rte_zmalloc("otx2_nix_vlan_entry",
- sizeof(struct vlan_entry), 0);
- if (!entry) {
- otx2_err("Failed to allocate memory");
- return -ENOMEM;
- }
- }
-
- /* Enables vlan_id & mac address based filtering */
- if (eth_dev->data->promiscuous)
- mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
- VLAN_ID_MATCH);
- else
- mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
- VLAN_ID_MATCH |
- MAC_ADDR_MATCH);
- if (mcam_idx < 0) {
- otx2_err("Failed to config vlan mcam");
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- return mcam_idx;
- }
-
- entry->mcam_idx = mcam_idx;
- if (!entry_exists) {
- entry->vlan_id = vlan_id;
- TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next);
- }
- } else {
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (entry->vlan_id == vlan_id) {
- rc = nix_vlan_mcam_free(dev, entry->mcam_idx);
- if (rc)
- return rc;
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- break;
- }
- }
- }
- return 0;
-}
-
-/* Configure double vlan(qinq) on or off */
-static int
-otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
- const uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan_info;
- int mcam_idx;
- int rc;
-
- vlan_info = &dev->vlan_info;
-
- if (!enable) {
- if (!vlan_info->qinq_mcam_idx)
- return 0;
-
- rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx);
- if (rc)
- return rc;
-
- vlan_info->qinq_mcam_idx = 0;
- dev->vlan_info.qinq_on = 0;
- vlan_info->qinq_before_def = 0;
- return 0;
- }
-
- if (eth_dev->data->promiscuous)
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH);
- else
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0,
- QINQ_F_MATCH | MAC_ADDR_MATCH);
- if (mcam_idx < 0)
- return mcam_idx;
-
- if (!vlan_info->def_rx_mcam_idx)
- vlan_info->qinq_before_def = 1;
-
- vlan_info->qinq_mcam_idx = mcam_idx;
- dev->vlan_info.qinq_on = 1;
- return 0;
-}
-
-int
-otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t offloads = dev->rx_offloads;
- struct rte_eth_rxmode *rxmode;
- int rc = 0;
-
- rxmode = &eth_dev->data->dev_conf.rxmode;
-
- if (mask & RTE_ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
- rc = nix_vlan_hw_strip(eth_dev, true);
- } else {
- offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
- rc = nix_vlan_hw_strip(eth_dev, false);
- }
- if (rc)
- goto done;
- }
-
- if (mask & RTE_ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- rc = nix_vlan_hw_filter(eth_dev, true, 0);
- } else {
- offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- rc = nix_vlan_hw_filter(eth_dev, false, 0);
- }
- if (rc)
- goto done;
- }
-
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
- if (!dev->vlan_info.qinq_on) {
- offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
- rc = otx2_nix_config_double_vlan(eth_dev, true);
- if (rc)
- goto done;
- }
- } else {
- if (dev->vlan_info.qinq_on) {
- offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
- rc = otx2_nix_config_double_vlan(eth_dev, false);
- if (rc)
- goto done;
- }
- }
-
- if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
- dev->rx_offloads |= offloads;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(eth_dev);
- }
-
-done:
- return rc;
-}
-
-int
-otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, uint16_t tpid)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct nix_set_vlan_tpid *tpid_cfg;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
-
- tpid_cfg->tpid = tpid;
- if (type == RTE_ETH_VLAN_TYPE_OUTER)
- tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
- else
- tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- if (type == RTE_ETH_VLAN_TYPE_OUTER)
- dev->vlan_info.outer_vlan_tpid = tpid;
- else
- dev->vlan_info.inner_vlan_tpid = tpid;
- return 0;
-}
-
-int
-otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev);
- struct otx2_mbox *mbox = otx2_dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- struct nix_vtag_config_rsp *rsp;
- struct otx2_vlan_info *vlan;
- int rc, rc1, vtag_index = 0;
-
- if (vlan_id == 0) {
- otx2_err("vlan id can't be zero");
- return -EINVAL;
- }
-
- vlan = &otx2_dev->vlan_info;
-
- if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) {
- otx2_err("pvid %d is already enabled", vlan_id);
- return -EINVAL;
- }
-
- if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) {
- otx2_err("another pvid is enabled, disable that first");
- return -EINVAL;
- }
-
- /* No pvid active */
- if (!on && !vlan->pvid_insert_on)
- return 0;
-
- /* Given pvid already disabled */
- if (!on && vlan->pvid != vlan_id)
- return 0;
-
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
-
- if (on) {
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
-
- if (vlan->outer_vlan_tpid)
- vtag_cfg->tx.vtag0 = ((uint32_t)vlan->outer_vlan_tpid
- << 16) | vlan_id;
- else
- vtag_cfg->tx.vtag0 =
- ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id);
- vtag_cfg->tx.cfg_vtag0 = 1;
- } else {
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
-
- vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx;
- vtag_cfg->tx.free_vtag0 = 1;
- }
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (on) {
- vtag_index = rsp->vtag0_idx;
- } else {
- vlan->pvid = 0;
- vlan->pvid_insert_on = 0;
- vlan->outer_vlan_idx = 0;
- }
-
- rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
- vtag_index, on);
- if (rc < 0) {
- printf("Default tx entry failed with rc %d\n", rc);
- vtag_cfg->tx.vtag0_idx = vtag_index;
- vtag_cfg->tx.free_vtag0 = 1;
- vtag_cfg->tx.cfg_vtag0 = 0;
-
- rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc1)
- otx2_err("Vtag free failed");
-
- return rc;
- }
-
- if (on) {
- vlan->pvid = vlan_id;
- vlan->pvid_insert_on = 1;
- vlan->outer_vlan_idx = vtag_index;
- }
-
- return 0;
-}
-
-void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
- __rte_unused uint16_t queue,
- __rte_unused int on)
-{
- otx2_err("Not Supported");
-}
-
-static int
-nix_vlan_rx_mkex_offset(uint64_t mask)
-{
- int nib_count = 0;
-
- while (mask) {
- nib_count += mask & 1;
- mask >>= 1;
- }
-
- return nib_count * 4;
-}
-
-static int
-nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
-{
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- struct npc_xtract_info *x_info = NULL;
- uint64_t rx_keyx;
- otx2_dxcfg_t *p;
- int rc = -EINVAL;
-
- if (npc == NULL) {
- otx2_err("Missing npc mkex configuration");
- return rc;
- }
-
-#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL
-#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL
-#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL
-
- rx_keyx = npc->keyx_supp_nmask[NPC_MCAM_RX];
- if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA)
- return rc;
-
- if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) !=
- NPC_KEX_LB_LTYPE_NIBBLE_ENA)
- return rc;
-
- mkex->lb_lt_offset =
- nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK);
-
- p = &npc->prx_dxcfg;
- x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
- memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info));
- x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0];
- memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info));
-
- return 0;
-}
-
-static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_entry *entry;
- int rc;
-
- /* VLAN filters can't be set without setting the filter on */
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true);
- if (rc) {
- otx2_err("Failed to reinstall vlan filters");
- return;
- }
-
- TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) {
- rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true);
- if (rc)
- otx2_err("Failed to reinstall filter for vlan:%d",
- entry->vlan_id);
- }
-}
-
-int
-otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, mask;
-
- /* Port initialized for first time or restarted */
- if (!dev->configured) {
- rc = nix_vlan_get_mkex_info(dev);
- if (rc) {
- otx2_err("Failed to get vlan mkex info rc=%d", rc);
- return rc;
- }
-
- TAILQ_INIT(&dev->vlan_info.fltr_tbl);
- } else {
- /* Reinstall all mcam entries now if filter offload is set */
- if (eth_dev->data->dev_conf.rxmode.offloads &
- RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
- nix_vlan_reinstall_vlan_filters(eth_dev);
- }
-
- mask =
- RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
- rc = otx2_nix_vlan_offload_set(eth_dev, mask);
- if (rc) {
- otx2_err("Failed to set vlan offload rc=%d", rc);
- return rc;
- }
-
- return 0;
-}
-
-int
-otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int rc;
-
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (!dev->configured) {
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- } else {
- /* MCAM entries freed by flow_fini & lf_free on
- * port stop.
- */
- entry->mcam_idx = 0;
- }
- }
-
- if (!dev->configured) {
- if (vlan->def_rx_mcam_idx) {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- }
- }
-
- otx2_nix_config_double_vlan(eth_dev, false);
- vlan->def_rx_mcam_idx = 0;
- return 0;
-}
diff --git a/drivers/net/octeontx2/version.map b/drivers/net/octeontx2/version.map
deleted file mode 100644
index c2e0723b4c..0000000000
--- a/drivers/net/octeontx2/version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_22 {
- local: *;
-};
diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h
index 9326925025..dc720368ab 100644
--- a/drivers/net/octeontx_ep/otx2_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.h
@@ -113,7 +113,7 @@
#define otx2_read64(addr) rte_read64_relaxed((void *)(addr))
#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr))
-#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */
+#define PCI_DEVID_CN9K_EP_NET_VF 0xB203 /* OCTEON 9 EP mode */
#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103
int
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index fd5e8ed263..8a59a1a194 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -150,7 +150,7 @@ struct otx_ep_iq_config {
/** The instruction (input) queue.
* The input queue is used to post raw (instruction) mode data or packet data
- * to OCTEON TX2 device from the host. Each IQ of a OTX_EP EP VF device has one
+ * to OCTEON 9 device from the host. Each IQ of a OTX_EP EP VF device has one
* such structure to represent it.
*/
struct otx_ep_instr_queue {
@@ -170,12 +170,12 @@ struct otx_ep_instr_queue {
/* Input ring index, where the driver should write the next packet */
uint32_t host_write_index;
- /* Input ring index, where the OCTEON TX2 should read the next packet */
+ /* Input ring index, where the OCTEON 9 should read the next packet */
uint32_t otx_read_index;
uint32_t reset_instr_cnt;
- /** This index aids in finding the window in the queue where OCTEON TX2
+ /** This index aids in finding the window in the queue where OCTEON 9
* has read the commands.
*/
uint32_t flush_index;
@@ -195,7 +195,7 @@ struct otx_ep_instr_queue {
/* OTX_EP instruction count register for this ring. */
void *inst_cnt_reg;
- /* Number of instructions pending to be posted to OCTEON TX2. */
+ /* Number of instructions pending to be posted to OCTEON 9. */
uint32_t fill_cnt;
/* Statistics for this input queue. */
@@ -230,8 +230,8 @@ union otx_ep_rh {
};
#define OTX_EP_RH_SIZE (sizeof(union otx_ep_rh))
-/** Information about packet DMA'ed by OCTEON TX2.
- * The format of the information available at Info Pointer after OCTEON TX2
+/** Information about packet DMA'ed by OCTEON 9.
+ * The format of the information available at Info Pointer after OCTEON 9
* has posted a packet. Not all descriptors have valid information. Only
* the Info field of the first descriptor for a packet has information
* about the packet.
@@ -295,7 +295,7 @@ struct otx_ep_droq {
/* Driver should read the next packet at this index */
uint32_t read_idx;
- /* OCTEON TX2 will write the next packet at this index */
+ /* OCTEON 9 will write the next packet at this index */
uint32_t write_idx;
/* At this index, the driver will refill the descriptor's buffer */
@@ -326,7 +326,7 @@ struct otx_ep_droq {
*/
void *pkts_credit_reg;
- /** Pointer to the mapped packet sent register. OCTEON TX2 writes the
+ /** Pointer to the mapped packet sent register. OCTEON 9 writes the
* number of packets DMA'ed to host memory in this register.
*/
void *pkts_sent_reg;
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index c3cec6d833..806add246b 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -102,7 +102,7 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
ret = otx_ep_vf_setup_device(otx_epvf);
otx_epvf->fn_list.disable_io_queues(otx_epvf);
break;
- case PCI_DEVID_OCTEONTX2_EP_NET_VF:
+ case PCI_DEVID_CN9K_EP_NET_VF:
case PCI_DEVID_CN98XX_EP_NET_VF:
otx_epvf->chip_id = dev_id;
ret = otx2_ep_vf_setup_device(otx_epvf);
@@ -137,7 +137,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
otx_epvf->eth_dev->rx_pkt_burst = &otx_ep_recv_pkts;
if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF)
otx_epvf->eth_dev->tx_pkt_burst = &otx_ep_xmit_pkts;
- else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX2_EP_NET_VF ||
+ else if (otx_epvf->chip_id == PCI_DEVID_CN9K_EP_NET_VF ||
otx_epvf->chip_id == PCI_DEVID_CN98XX_EP_NET_VF)
otx_epvf->eth_dev->tx_pkt_burst = &otx2_ep_xmit_pkts;
ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf);
@@ -422,7 +422,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
otx_epvf->pdev = pdev;
otx_epdev_init(otx_epvf);
- if (pdev->id.device_id == PCI_DEVID_OCTEONTX2_EP_NET_VF)
+ if (pdev->id.device_id == PCI_DEVID_CN9K_EP_NET_VF)
otx_epvf->pkind = SDP_OTX2_PKIND;
else
otx_epvf->pkind = SDP_PKIND;
@@ -450,7 +450,7 @@ otx_ep_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
/* Set of PCI devices this driver supports */
static const struct rte_pci_id pci_id_otx_ep_map[] = {
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX_EP_VF) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_EP_NET_VF) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN9K_EP_NET_VF) },
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN98XX_EP_NET_VF) },
{ .vendor_id = 0, /* sentinel */ }
};
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index 9338b30672..59df6ad857 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -85,7 +85,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq = otx_ep->instr_queue[iq_no];
q_size = conf->iq.instr_type * num_descs;
- /* IQ memory creation for Instruction submission to OCTEON TX2 */
+ /* IQ memory creation for Instruction submission to OCTEON 9 */
iq->iq_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev,
"instr_queue", iq_no, q_size,
OTX_EP_PCI_RING_ALIGN,
@@ -106,8 +106,8 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq->nb_desc = num_descs;
/* Create a IQ request list to hold requests that have been
- * posted to OCTEON TX2. This list will be used for freeing the IQ
- * data buffer(s) later once the OCTEON TX2 fetched the requests.
+ * posted to OCTEON 9. This list will be used for freeing the IQ
+ * data buffer(s) later once the OCTEON 9 fetched the requests.
*/
iq->req_list = rte_zmalloc_socket("request_list",
(iq->nb_desc * OTX_EP_IQREQ_LIST_SIZE),
@@ -450,7 +450,7 @@ post_iqcmd(struct otx_ep_instr_queue *iq, uint8_t *iqcmd)
uint8_t *iqptr, cmdsize;
/* This ensures that the read index does not wrap around to
- * the same position if queue gets full before OCTEON TX2 could
+ * the same position if queue gets full before OCTEON 9 could
* fetch any instr.
*/
if (iq->instr_pending > (iq->nb_desc - 1))
@@ -979,7 +979,7 @@ otx_ep_check_droq_pkts(struct otx_ep_droq *droq)
return new_pkts;
}
-/* Check for response arrival from OCTEON TX2
+/* Check for response arrival from OCTEON 9
* returns number of requests completed
*/
uint16_t
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 6cea732228..ace4627218 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -65,11 +65,11 @@
intel_ntb_icx = {'Class': '06', 'Vendor': '8086', 'Device': '347e',
'SVendor': None, 'SDevice': None}
-octeontx2_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f9,a0fa',
+cnxk_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f9,a0fa',
'SVendor': None, 'SDevice': None}
-octeontx2_npa = {'Class': '08', 'Vendor': '177d', 'Device': 'a0fb,a0fc',
+cnxk_npa = {'Class': '08', 'Vendor': '177d', 'Device': 'a0fb,a0fc',
'SVendor': None, 'SDevice': None}
-octeontx2_ree = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f4',
+cn9k_ree = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f4',
'SVendor': None, 'SDevice': None}
network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class]
@@ -77,10 +77,10 @@
crypto_devices = [encryption_class, intel_processor_class]
dma_devices = [cnxk_dma, hisilicon_dma,
intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx]
-eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, octeontx2_sso]
-mempool_devices = [cavium_fpa, octeontx2_npa]
+eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, cnxk_sso]
+mempool_devices = [cavium_fpa, cnxk_npa]
compress_devices = [cavium_zip]
-regex_devices = [octeontx2_ree]
+regex_devices = [cn9k_ree]
misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev,
intel_ntb_skx, intel_ntb_icx]
--
2.34.1
^ permalink raw reply [relevance 1%]
* RE: [RFC] cryptodev: asymmetric crypto random number source
2021-12-13 8:14 3% ` Akhil Goyal
@ 2021-12-13 9:27 0% ` Ramkumar Balu
2021-12-17 15:26 0% ` Kusztal, ArkadiuszX
0 siblings, 1 reply; 200+ results
From: Ramkumar Balu @ 2021-12-13 9:27 UTC (permalink / raw)
To: Akhil Goyal, Kusztal, ArkadiuszX, Anoob Joseph, Zhang, Roy Fan; +Cc: dev
> ++Ram for openssl
>
> > ECDSA op:
> > rte_crypto_param k;
> > /**< The ECDSA per-message secret number, which is an integer
> > * in the interval (1, n-1)
> > */
> > DSA op:
> > No 'k'.
> >
> > This one I think I have described some time ago:
> > The only PMD that verifies ECDSA is OCTEON, which apparently needs 'k' to be provided by the user.
> > The only PMD that verifies DSA is the OpenSSL PMD, which generates its own random number internally.
> >
> > So in case a PMD supports one of these options (and especially when it supports both), we need to give some information here.
We can have a standard way to represent whether a particular rte_crypto_param is set by the application or not. Then, it is up to the PMD to perform the op or return an error code if it is unable to proceed.
> >
> > The most obvious option would be to change rte_crypto_param k -> rte_crypto_param *k
> > In case (k == NULL), the PMD should generate it itself if possible; otherwise it should push the crypto_op to the response ring with an appropriate error code.
This case could occur for other params as well. Having a few as nested variables and others as pointers could be confusing for memory allocation/deallocation. However, rte_crypto_param already has a data pointer inside it, which can be used in the same manner. For example, in this case (k.data == NULL), the PMD should generate a random number if possible or push the op to the response ring with an error code. This can be done without breaking backward compatibility.
This can be the standard way for PMDs to find out whether a particular rte_crypto_param is valid or NULL.
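To make that concrete, a minimal sketch of the PMD-side check could look as follows. The helpers pmd_fill_random_k() and pmd_resolve_ecdsa_k() are hypothetical names invented for illustration, not existing DPDK APIs; only rte_crypto_param and the ecdsa.k field come from rte_crypto_asym.h.

#include <errno.h>
#include <rte_crypto_asym.h>

/* Hypothetical stand-in for whatever entropy source a PMD has. */
static int
pmd_fill_random_k(rte_crypto_param *k)
{
	(void)k;
	return -ENOTSUP; /* a real PMD would fill k->data here */
}

/* k.data == NULL means the application did not supply the
 * per-message secret: generate one, or fail the op.
 */
static int
pmd_resolve_ecdsa_k(struct rte_crypto_asym_op *asym_op)
{
	rte_crypto_param *k = &asym_op->ecdsa.k;

	if (k->data != NULL && k->length != 0)
		return 0; /* use the application-supplied 'k' */

	return pmd_fill_random_k(k);
}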
> >
> > Other options would be:
> > - Extend rte_cryptodev_config and rte_cryptodev_info with information about the random number generator for a specific device (though it would be an ABI breakage)
> > - Provide some kind of callback to get a random number from the user (which could be useful for other things like RSA padding as well)
I think the previous solution is more straightforward and simpler, unless we want the ability to configure the random number generator for each device.
Thanks,
Ramkumar Balu
^ permalink raw reply [relevance 0%]
* RE: [RFC] cryptodev: asymmetric crypto random number source
2021-12-03 10:03 3% [RFC] cryptodev: asymmetric crypto random number source Kusztal, ArkadiuszX
@ 2021-12-13 8:14 3% ` Akhil Goyal
2021-12-13 9:27 0% ` Ramkumar Balu
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2021-12-13 8:14 UTC (permalink / raw)
To: Kusztal, ArkadiuszX, Anoob Joseph, Zhang, Roy Fan; +Cc: dev, Ramkumar Balu
++Ram for openssl
ECDSA op:
rte_crypto_param k;
/**< The ECDSA per-message secret number, which is an integer
* in the interval (1, n-1)
*/
DSA op:
No 'k'.
This one I think I have described some time ago:
The only PMD that verifies ECDSA is OCTEON, which apparently needs 'k' to be provided by the user.
The only PMD that verifies DSA is the OpenSSL PMD, which generates its own random number internally.
So in case a PMD supports one of these options (and especially when it supports both), we need to give some information here.
The most obvious option would be to change rte_crypto_param k -> rte_crypto_param *k
In case (k == NULL), the PMD should generate it itself if possible; otherwise it should push the crypto_op to the response ring with an appropriate error code.
Other options would be:
* Extend rte_cryptodev_config and rte_cryptodev_info with information about the random number generator for a specific device (though it would be an ABI breakage)
* Provide some kind of callback to get a random number from the user (which could be useful for other things like RSA padding as well); a possible shape is sketched below
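As a rough sketch of what that callback could look like (the type and struct names below are purely hypothetical, invented for illustration; cryptodev defines no such API today):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical user-supplied RNG callback. */
typedef int (*asym_rng_cb_t)(uint8_t *out, size_t len, void *userdata);

struct asym_rng {
	asym_rng_cb_t cb;
	void *userdata;
};

/* PMD side: fill 'buf' with 'len' random bytes, e.g. for ECDSA 'k'
 * or RSA padding, or report that no RNG was configured.
 */
static inline int
asym_rng_fill(const struct asym_rng *rng, uint8_t *buf, size_t len)
{
	if (rng == NULL || rng->cb == NULL)
		return -1; /* no RNG configured for this device */
	return rng->cb(buf, len, rng->userdata);
}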
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v5 4/5] regex/cn9k: use cnxk infrastructure
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers jerinj
@ 2021-12-11 9:04 2% ` jerinj
2021-12-11 9:04 1% ` [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers jerinj
1 sibling, 0 replies; 200+ results
From: jerinj @ 2021-12-11 9:04 UTC (permalink / raw)
To: dev, Thomas Monjalon, Ray Kinsella, Nithin Dabilpuram,
Kiran Kumar K, Sunil Kumar Kori, Satha Rao, Jerin Jacob,
Liron Himi
Cc: david.marchand, ferruh.yigit
From: Liron Himi <lironh@marvell.com>
Update the driver to use the REE cnxk code.
Replace octeontx2/otx2 with cn9k.
Signed-off-by: Liron Himi <lironh@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
MAINTAINERS | 8 +-
devtools/check-abi.sh | 4 +
doc/guides/platform/cnxk.rst | 3 +
doc/guides/platform/octeontx2.rst | 3 -
.../regexdevs/{octeontx2.rst => cn9k.rst} | 20 +-
.../features/{octeontx2.ini => cn9k.ini} | 2 +-
doc/guides/regexdevs/index.rst | 2 +-
doc/guides/rel_notes/release_20_11.rst | 2 +-
.../otx2_regexdev.c => cn9k/cn9k_regexdev.c} | 405 ++++++++----------
drivers/regex/cn9k/cn9k_regexdev.h | 44 ++
.../cn9k_regexdev_compiler.c} | 34 +-
drivers/regex/cn9k/cn9k_regexdev_compiler.h | 11 +
drivers/regex/{octeontx2 => cn9k}/meson.build | 10 +-
drivers/regex/{octeontx2 => cn9k}/version.map | 0
drivers/regex/meson.build | 2 +-
drivers/regex/octeontx2/otx2_regexdev.h | 109 -----
.../regex/octeontx2/otx2_regexdev_compiler.h | 11 -
.../regex/octeontx2/otx2_regexdev_hw_access.c | 167 --------
.../regex/octeontx2/otx2_regexdev_hw_access.h | 202 ---------
drivers/regex/octeontx2/otx2_regexdev_mbox.c | 401 -----------------
drivers/regex/octeontx2/otx2_regexdev_mbox.h | 38 --
21 files changed, 273 insertions(+), 1205 deletions(-)
rename doc/guides/regexdevs/{octeontx2.rst => cn9k.rst} (69%)
rename doc/guides/regexdevs/features/{octeontx2.ini => cn9k.ini} (80%)
rename drivers/regex/{octeontx2/otx2_regexdev.c => cn9k/cn9k_regexdev.c} (61%)
create mode 100644 drivers/regex/cn9k/cn9k_regexdev.h
rename drivers/regex/{octeontx2/otx2_regexdev_compiler.c => cn9k/cn9k_regexdev_compiler.c} (86%)
create mode 100644 drivers/regex/cn9k/cn9k_regexdev_compiler.h
rename drivers/regex/{octeontx2 => cn9k}/meson.build (65%)
rename drivers/regex/{octeontx2 => cn9k}/version.map (100%)
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_compiler.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.c
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.c
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 18d9edaf88..854b81f2a3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1236,11 +1236,11 @@ F: doc/guides/dmadevs/dpaa.rst
RegEx Drivers
-------------
-Marvell OCTEON TX2 regex
+Marvell OCTEON CN9K regex
M: Liron Himi <lironh@marvell.com>
-F: drivers/regex/octeontx2/
-F: doc/guides/regexdevs/octeontx2.rst
-F: doc/guides/regexdevs/features/octeontx2.ini
+F: drivers/regex/cn9k/
+F: doc/guides/regexdevs/cn9k.rst
+F: doc/guides/regexdevs/features/cn9k.ini
Mellanox mlx5
M: Ori Kam <orika@nvidia.com>
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ca523eb94c..5e654189a8 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -48,6 +48,10 @@ for dump in $(find $refdir -name "*.dump"); do
echo "Skipped removed driver $name."
continue
fi
+ if grep -qE "\<librte_regex_octeontx2" $dump; then
+ echo "Skipped removed driver $name."
+ continue
+ fi
dump2=$(find $newdir -name $name)
if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
echo "Error: cannot find $name in $newdir" >&2
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index 88995cc70c..5213df3ccd 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -156,6 +156,9 @@ This section lists dataplane H/W block(s) available in cnxk SoC.
#. **Dmadev Driver**
See :doc:`../dmadevs/cnxk` for DPI Dmadev driver information.
+#. **Regex Device Driver**
+ See :doc:`../regexdevs/cn9k` for REE Regex device driver information.
+
Procedure to Setup Platform
---------------------------
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
index 3a3d28571c..5ab43abbdd 100644
--- a/doc/guides/platform/octeontx2.rst
+++ b/doc/guides/platform/octeontx2.rst
@@ -155,9 +155,6 @@ This section lists dataplane H/W block(s) available in OCTEON TX2 SoC.
#. **Crypto Device Driver**
See :doc:`../cryptodevs/octeontx2` for CPT crypto device driver information.
-#. **Regex Device Driver**
- See :doc:`../regexdevs/octeontx2` for REE regex device driver information.
-
Procedure to Setup Platform
---------------------------
diff --git a/doc/guides/regexdevs/octeontx2.rst b/doc/guides/regexdevs/cn9k.rst
similarity index 69%
rename from doc/guides/regexdevs/octeontx2.rst
rename to doc/guides/regexdevs/cn9k.rst
index b39d457d60..c23c295b93 100644
--- a/doc/guides/regexdevs/octeontx2.rst
+++ b/doc/guides/regexdevs/cn9k.rst
@@ -1,20 +1,20 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright(c) 2020 Marvell International Ltd.
-OCTEON TX2 REE Regexdev Driver
+CN9K REE Regexdev Driver
==============================
-The OCTEON TX2 REE PMD (**librte_regex_octeontx2**) provides poll mode
-regexdev driver support for the inbuilt regex device found in the **Marvell OCTEON TX2**
+The CN9K REE PMD (**librte_regex_cn9k**) provides poll mode
+regexdev driver support for the inbuilt regex device found in the **Marvell CN9K**
SoC family.
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
+More information about CN9K SoC can be found at `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
Features
--------
-Features of the OCTEON TX2 REE PMD are:
+Features of the CN9K REE PMD are:
- 36 queues
- Up to 254 matches for each regex operation
@@ -22,12 +22,12 @@ Features of the OCTEON TX2 REE PMD are:
Prerequisites and Compilation procedure
---------------------------------------
- See :doc:`../platform/octeontx2` for setup information.
+ See :doc:`../platform/cnxk` for setup information.
Device Setup
------------
-The OCTEON TX2 REE devices will need to be bound to a user-space IO driver
+The CN9K REE devices will need to be bound to a user-space IO driver
for use. The ``dpdk-devbind.py`` script included with DPDK can be
used to view the state of the devices and to bind them to a suitable
DPDK-supported kernel driver. When querying the status of the devices,
@@ -38,12 +38,12 @@ those devices alone.
Debugging Options
-----------------
-.. _table_octeontx2_regex_debug_options:
+.. _table_cn9k_regex_debug_options:
-.. table:: OCTEON TX2 regex device debug options
+.. table:: CN9K regex device debug options
+---+------------+-------------------------------------------------------+
| # | Component | EAL log command |
+===+============+=======================================================+
- | 1 | REE | --log-level='pmd\.regex\.octeontx2,8' |
+ | 1 | REE | --log-level='pmd\.regex\.cn9k,8' |
+---+------------+-------------------------------------------------------+
diff --git a/doc/guides/regexdevs/features/octeontx2.ini b/doc/guides/regexdevs/features/cn9k.ini
similarity index 80%
rename from doc/guides/regexdevs/features/octeontx2.ini
rename to doc/guides/regexdevs/features/cn9k.ini
index c9b421a16d..b029af8ac2 100644
--- a/doc/guides/regexdevs/features/octeontx2.ini
+++ b/doc/guides/regexdevs/features/cn9k.ini
@@ -1,5 +1,5 @@
;
-; Supported features of the 'octeontx2' regex driver.
+; Supported features of the 'cn9k' regex driver.
;
; Refer to default.ini for the full list of available driver features.
;
diff --git a/doc/guides/regexdevs/index.rst b/doc/guides/regexdevs/index.rst
index b1abc826bd..11a33fc09e 100644
--- a/doc/guides/regexdevs/index.rst
+++ b/doc/guides/regexdevs/index.rst
@@ -13,4 +13,4 @@ which can be used from an application through RegEx API.
features_overview
mlx5
- octeontx2
+ cn9k
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index af7ce90ba3..7fd15398e4 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -290,7 +290,7 @@ New Features
Added a new PMD for the hardware regex offload block for OCTEON TX2 SoC.
- See the :doc:`../regexdevs/octeontx2` for more details.
+ See ``regexdevs/octeontx2`` for more details.
* **Updated Software Eventdev driver.**
diff --git a/drivers/regex/octeontx2/otx2_regexdev.c b/drivers/regex/cn9k/cn9k_regexdev.c
similarity index 61%
rename from drivers/regex/octeontx2/otx2_regexdev.c
rename to drivers/regex/cn9k/cn9k_regexdev.c
index b6e55853e9..32d20c1be8 100644
--- a/drivers/regex/octeontx2/otx2_regexdev.c
+++ b/drivers/regex/cn9k/cn9k_regexdev.c
@@ -13,12 +13,8 @@
/* REE common headers */
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_regexdev.h"
-#include "otx2_regexdev_compiler.h"
-#include "otx2_regexdev_hw_access.h"
-#include "otx2_regexdev_mbox.h"
+#include "cn9k_regexdev.h"
+#include "cn9k_regexdev_compiler.h"
/* HW matches are at offset 0x80 from RES_PTR_ADDR
@@ -35,9 +31,6 @@
#define REE_MAX_RULES_PER_GROUP 0xFFFF
#define REE_MAX_GROUPS 0xFFFF
-/* This is temporarily here */
-#define REE0_PF 19
-#define REE1_PF 20
#define REE_RULE_DB_VERSION 2
#define REE_RULE_DB_REVISION 0
@@ -58,32 +51,32 @@ struct ree_rule_db {
static void
qp_memzone_name_get(char *name, int size, int dev_id, int qp_id)
{
- snprintf(name, size, "otx2_ree_lf_mem_%u:%u", dev_id, qp_id);
+ snprintf(name, size, "cn9k_ree_lf_mem_%u:%u", dev_id, qp_id);
}
-static struct otx2_ree_qp *
+static struct roc_ree_qp *
ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
{
- struct otx2_ree_data *data = dev->data->dev_private;
+ struct cn9k_ree_data *data = dev->data->dev_private;
uint64_t pg_sz = sysconf(_SC_PAGESIZE);
- struct otx2_ree_vf *vf = &data->vf;
+ struct roc_ree_vf *vf = &data->vf;
const struct rte_memzone *lf_mem;
uint32_t len, iq_len, size_div2;
char name[RTE_MEMZONE_NAMESIZE];
uint64_t used_len, iova;
- struct otx2_ree_qp *qp;
+ struct roc_ree_qp *qp;
uint8_t *va;
int ret;
/* Allocate queue pair */
- qp = rte_zmalloc("OCTEON TX2 Regex PMD Queue Pair", sizeof(*qp),
- OTX2_ALIGN);
+ qp = rte_zmalloc("CN9K Regex PMD Queue Pair", sizeof(*qp),
+ ROC_ALIGN);
if (qp == NULL) {
- otx2_err("Could not allocate queue pair");
+ cn9k_err("Could not allocate queue pair");
return NULL;
}
- iq_len = OTX2_REE_IQ_LEN;
+ iq_len = REE_IQ_LEN;
/*
* Queue size must be in units of 128B 2 * REE_INST_S (which is 64B),
@@ -93,13 +86,13 @@ ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
size_div2 = iq_len >> 1;
/* For pending queue */
- len = iq_len * RTE_ALIGN(sizeof(struct otx2_ree_rid), 8);
+ len = iq_len * RTE_ALIGN(sizeof(struct roc_ree_rid), 8);
/* So that instruction queues start as pg size aligned */
len = RTE_ALIGN(len, pg_sz);
/* For instruction queues */
- len += OTX2_REE_IQ_LEN * sizeof(union otx2_ree_inst);
+ len += REE_IQ_LEN * sizeof(union roc_ree_inst);
/* Waste after instruction queues */
len = RTE_ALIGN(len, pg_sz);
@@ -107,11 +100,11 @@ ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
qp_id);
- lf_mem = rte_memzone_reserve_aligned(name, len, vf->otx2_dev.node,
+ lf_mem = rte_memzone_reserve_aligned(name, len, rte_socket_id(),
RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB,
RTE_CACHE_LINE_SIZE);
if (lf_mem == NULL) {
- otx2_err("Could not allocate reserved memzone");
+ cn9k_err("Could not allocate reserved memzone");
goto qp_free;
}
@@ -121,24 +114,24 @@ ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
memset(va, 0, len);
/* Initialize pending queue */
- qp->pend_q.rid_queue = (struct otx2_ree_rid *)va;
+ qp->pend_q.rid_queue = (struct roc_ree_rid *)va;
qp->pend_q.enq_tail = 0;
qp->pend_q.deq_head = 0;
qp->pend_q.pending_count = 0;
- used_len = iq_len * RTE_ALIGN(sizeof(struct otx2_ree_rid), 8);
+ used_len = iq_len * RTE_ALIGN(sizeof(struct roc_ree_rid), 8);
used_len = RTE_ALIGN(used_len, pg_sz);
iova += used_len;
qp->iq_dma_addr = iova;
qp->id = qp_id;
- qp->base = OTX2_REE_LF_BAR2(vf, qp_id);
- qp->otx2_regexdev_jobid = 0;
+ qp->base = roc_ree_qp_get_base(vf, qp_id);
+ qp->roc_regexdev_jobid = 0;
qp->write_offset = 0;
- ret = otx2_ree_iq_enable(dev, qp, OTX2_REE_QUEUE_HI_PRIO, size_div2);
+ ret = roc_ree_iq_enable(vf, qp, REE_QUEUE_HI_PRIO, size_div2);
if (ret) {
- otx2_err("Could not enable instruction queue");
+ cn9k_err("Could not enable instruction queue");
goto qp_free;
}
@@ -150,13 +143,13 @@ ree_qp_create(const struct rte_regexdev *dev, uint16_t qp_id)
}
static int
-ree_qp_destroy(const struct rte_regexdev *dev, struct otx2_ree_qp *qp)
+ree_qp_destroy(const struct rte_regexdev *dev, struct roc_ree_qp *qp)
{
const struct rte_memzone *lf_mem;
char name[RTE_MEMZONE_NAMESIZE];
int ret;
- otx2_ree_iq_disable(qp);
+ roc_ree_iq_disable(qp);
qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
qp->id);
@@ -175,8 +168,8 @@ ree_qp_destroy(const struct rte_regexdev *dev, struct otx2_ree_qp *qp)
static int
ree_queue_pair_release(struct rte_regexdev *dev, uint16_t qp_id)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_qp *qp = data->queue_pairs[qp_id];
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_qp *qp = data->queue_pairs[qp_id];
int ret;
ree_func_trace("Queue=%d", qp_id);
@@ -186,7 +179,7 @@ ree_queue_pair_release(struct rte_regexdev *dev, uint16_t qp_id)
ret = ree_qp_destroy(dev, qp);
if (ret) {
- otx2_err("Could not destroy queue pair %d", qp_id);
+ cn9k_err("Could not destroy queue pair %d", qp_id);
return ret;
}
@@ -200,12 +193,12 @@ ree_dev_register(const char *name)
{
struct rte_regexdev *dev;
- otx2_ree_dbg("Creating regexdev %s\n", name);
+ cn9k_ree_dbg("Creating regexdev %s\n", name);
/* allocate device structure */
dev = rte_regexdev_register(name);
if (dev == NULL) {
- otx2_err("Failed to allocate regex device for %s", name);
+ cn9k_err("Failed to allocate regex device for %s", name);
return NULL;
}
@@ -213,12 +206,12 @@ ree_dev_register(const char *name)
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
dev->data->dev_private =
rte_zmalloc_socket("regexdev device private",
- sizeof(struct otx2_ree_data),
+ sizeof(struct cn9k_ree_data),
RTE_CACHE_LINE_SIZE,
rte_socket_id());
if (dev->data->dev_private == NULL) {
- otx2_err("Cannot allocate memory for dev %s private data",
+ cn9k_err("Cannot allocate memory for dev %s private data",
name);
rte_regexdev_unregister(dev);
@@ -232,7 +225,7 @@ ree_dev_register(const char *name)
static int
ree_dev_unregister(struct rte_regexdev *dev)
{
- otx2_ree_dbg("Closing regex device %s", dev->device->name);
+ cn9k_ree_dbg("Closing regex device %s", dev->device->name);
/* free regex device */
rte_regexdev_unregister(dev);
@@ -246,8 +239,8 @@ ree_dev_unregister(struct rte_regexdev *dev)
static int
ree_dev_fini(struct rte_regexdev *dev)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct rte_pci_device *pci_dev;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
int i, ret;
ree_func_trace();
@@ -258,9 +251,9 @@ ree_dev_fini(struct rte_regexdev *dev)
return ret;
}
- ret = otx2_ree_queues_detach(dev);
+ ret = roc_ree_queues_detach(vf);
if (ret)
- otx2_err("Could not detach queues");
+ cn9k_err("Could not detach queues");
/* TEMP : should be in lib */
if (data->queue_pairs)
@@ -268,33 +261,32 @@ ree_dev_fini(struct rte_regexdev *dev)
if (data->rules)
rte_free(data->rules);
- pci_dev = container_of(dev->device, struct rte_pci_device, device);
- otx2_dev_fini(pci_dev, &(data->vf.otx2_dev));
+ roc_ree_dev_fini(vf);
ret = ree_dev_unregister(dev);
if (ret)
- otx2_err("Could not destroy PMD");
+ cn9k_err("Could not destroy PMD");
return ret;
}
static inline int
-ree_enqueue(struct otx2_ree_qp *qp, struct rte_regex_ops *op,
- struct otx2_ree_pending_queue *pend_q)
+ree_enqueue(struct roc_ree_qp *qp, struct rte_regex_ops *op,
+ struct roc_ree_pending_queue *pend_q)
{
- union otx2_ree_inst inst;
- union otx2_ree_res *res;
+ union roc_ree_inst inst;
+ union ree_res *res;
uint32_t offset;
- if (unlikely(pend_q->pending_count >= OTX2_REE_DEFAULT_CMD_QLEN)) {
- otx2_err("Pending count %" PRIu64 " is greater than Q size %d",
- pend_q->pending_count, OTX2_REE_DEFAULT_CMD_QLEN);
+ if (unlikely(pend_q->pending_count >= REE_DEFAULT_CMD_QLEN)) {
+ cn9k_err("Pending count %" PRIu64 " is greater than Q size %d",
+ pend_q->pending_count, REE_DEFAULT_CMD_QLEN);
return -EAGAIN;
}
- if (unlikely(op->mbuf->data_len > OTX2_REE_MAX_PAYLOAD_SIZE ||
+ if (unlikely(op->mbuf->data_len > REE_MAX_PAYLOAD_SIZE ||
op->mbuf->data_len == 0)) {
- otx2_err("Packet length %d is greater than MAX payload %d",
- op->mbuf->data_len, OTX2_REE_MAX_PAYLOAD_SIZE);
+ cn9k_err("Packet length %d is greater than MAX payload %d",
+ op->mbuf->data_len, REE_MAX_PAYLOAD_SIZE);
return -EAGAIN;
}
@@ -324,7 +316,7 @@ ree_enqueue(struct otx2_ree_qp *qp, struct rte_regex_ops *op,
inst.cn98xx.ree_job_ctrl = (0x1 << 8);
else
inst.cn98xx.ree_job_ctrl = 0;
- inst.cn98xx.ree_job_id = qp->otx2_regexdev_jobid;
+ inst.cn98xx.ree_job_id = qp->roc_regexdev_jobid;
/* W 7 */
inst.cn98xx.ree_job_subset_id_0 = op->group_id0;
if (op->req_flags & RTE_REGEX_OPS_REQ_GROUP_ID1_VALID_F)
@@ -348,33 +340,33 @@ ree_enqueue(struct otx2_ree_qp *qp, struct rte_regex_ops *op,
pend_q->rid_queue[pend_q->enq_tail].user_id = op->user_id;
/* Mark result as not done */
- res = (union otx2_ree_res *)(op);
+ res = (union ree_res *)(op);
res->s.done = 0;
res->s.ree_err = 0;
/* We will use soft queue length here to limit requests */
- REE_MOD_INC(pend_q->enq_tail, OTX2_REE_DEFAULT_CMD_QLEN);
+ REE_MOD_INC(pend_q->enq_tail, REE_DEFAULT_CMD_QLEN);
pend_q->pending_count += 1;
- REE_MOD_INC(qp->otx2_regexdev_jobid, 0xFFFFFF);
- REE_MOD_INC(qp->write_offset, OTX2_REE_IQ_LEN);
+ REE_MOD_INC(qp->roc_regexdev_jobid, 0xFFFFFF);
+ REE_MOD_INC(qp->write_offset, REE_IQ_LEN);
return 0;
}
static uint16_t
-otx2_ree_enqueue_burst(struct rte_regexdev *dev, uint16_t qp_id,
+cn9k_ree_enqueue_burst(struct rte_regexdev *dev, uint16_t qp_id,
struct rte_regex_ops **ops, uint16_t nb_ops)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_qp *qp = data->queue_pairs[qp_id];
- struct otx2_ree_pending_queue *pend_q;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_qp *qp = data->queue_pairs[qp_id];
+ struct roc_ree_pending_queue *pend_q;
uint16_t nb_allowed, count = 0;
struct rte_regex_ops *op;
int ret;
pend_q = &qp->pend_q;
- nb_allowed = OTX2_REE_DEFAULT_CMD_QLEN - pend_q->pending_count;
+ nb_allowed = REE_DEFAULT_CMD_QLEN - pend_q->pending_count;
if (nb_ops > nb_allowed)
nb_ops = nb_allowed;
@@ -392,7 +384,7 @@ otx2_ree_enqueue_burst(struct rte_regexdev *dev, uint16_t qp_id,
rte_io_wmb();
/* Update Doorbell */
- otx2_write64(count, qp->base + OTX2_REE_LF_DOORBELL);
+ plt_write64(count, qp->base + REE_LF_DOORBELL);
return count;
}
@@ -422,15 +414,15 @@ ree_dequeue_post_process(struct rte_regex_ops *ops)
}
if (unlikely(ree_res_status != REE_TYPE_RESULT_DESC)) {
- if (ree_res_status & OTX2_REE_STATUS_PMI_SOJ_BIT)
+ if (ree_res_status & REE_STATUS_PMI_SOJ_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_PMI_SOJ_F;
- if (ree_res_status & OTX2_REE_STATUS_PMI_EOJ_BIT)
+ if (ree_res_status & REE_STATUS_PMI_EOJ_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_PMI_EOJ_F;
- if (ree_res_status & OTX2_REE_STATUS_ML_CNT_DET_BIT)
+ if (ree_res_status & REE_STATUS_ML_CNT_DET_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_MAX_SCAN_TIMEOUT_F;
- if (ree_res_status & OTX2_REE_STATUS_MM_CNT_DET_BIT)
+ if (ree_res_status & REE_STATUS_MM_CNT_DET_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_MAX_MATCH_F;
- if (ree_res_status & OTX2_REE_STATUS_MP_CNT_DET_BIT)
+ if (ree_res_status & REE_STATUS_MP_CNT_DET_BIT)
ops->rsp_flags |= RTE_REGEX_OPS_RSP_MAX_PREFIX_F;
}
if (ops->nb_matches > 0) {
@@ -439,22 +431,22 @@ ree_dequeue_post_process(struct rte_regex_ops *ops)
ops->nb_matches : REE_NUM_MATCHES_ALIGN);
match = (uint64_t)ops + REE_MATCH_OFFSET;
match += (ops->nb_matches - off) *
- sizeof(union otx2_ree_match);
+ sizeof(union ree_match);
memcpy((void *)ops->matches, (void *)match,
- off * sizeof(union otx2_ree_match));
+ off * sizeof(union ree_match));
}
}
static uint16_t
-otx2_ree_dequeue_burst(struct rte_regexdev *dev, uint16_t qp_id,
+cn9k_ree_dequeue_burst(struct rte_regexdev *dev, uint16_t qp_id,
struct rte_regex_ops **ops, uint16_t nb_ops)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_qp *qp = data->queue_pairs[qp_id];
- struct otx2_ree_pending_queue *pend_q;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_qp *qp = data->queue_pairs[qp_id];
+ struct roc_ree_pending_queue *pend_q;
int i, nb_pending, nb_completed = 0;
volatile struct ree_res_s_98 *res;
- struct otx2_ree_rid *rid;
+ struct roc_ree_rid *rid;
pend_q = &qp->pend_q;
@@ -474,7 +466,7 @@ otx2_ree_dequeue_burst(struct rte_regexdev *dev, uint16_t qp_id,
ops[i] = (struct rte_regex_ops *)(rid->rid);
ops[i]->user_id = rid->user_id;
- REE_MOD_INC(pend_q->deq_head, OTX2_REE_DEFAULT_CMD_QLEN);
+ REE_MOD_INC(pend_q->deq_head, REE_DEFAULT_CMD_QLEN);
pend_q->pending_count -= 1;
}
@@ -487,10 +479,10 @@ otx2_ree_dequeue_burst(struct rte_regexdev *dev, uint16_t qp_id,
}
static int
-otx2_ree_dev_info_get(struct rte_regexdev *dev, struct rte_regexdev_info *info)
+cn9k_ree_dev_info_get(struct rte_regexdev *dev, struct rte_regexdev_info *info)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
ree_func_trace();
@@ -502,7 +494,7 @@ otx2_ree_dev_info_get(struct rte_regexdev *dev, struct rte_regexdev_info *info)
info->max_queue_pairs = vf->max_queues;
info->max_matches = vf->max_matches;
- info->max_payload_size = OTX2_REE_MAX_PAYLOAD_SIZE;
+ info->max_payload_size = REE_MAX_PAYLOAD_SIZE;
info->max_rules_per_group = data->max_rules_per_group;
info->max_groups = data->max_groups;
info->regexdev_capa = data->regexdev_capa;
@@ -512,11 +504,11 @@ otx2_ree_dev_info_get(struct rte_regexdev *dev, struct rte_regexdev_info *info)
}
static int
-otx2_ree_dev_config(struct rte_regexdev *dev,
+cn9k_ree_dev_config(struct rte_regexdev *dev,
const struct rte_regexdev_config *cfg)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
const struct ree_rule_db *rule_db;
uint32_t rule_db_len;
int ret;
@@ -524,29 +516,29 @@ otx2_ree_dev_config(struct rte_regexdev *dev,
ree_func_trace();
if (cfg->nb_queue_pairs > vf->max_queues) {
- otx2_err("Invalid number of queue pairs requested");
+ cn9k_err("Invalid number of queue pairs requested");
return -EINVAL;
}
if (cfg->nb_max_matches != vf->max_matches) {
- otx2_err("Invalid number of max matches requested");
+ cn9k_err("Invalid number of max matches requested");
return -EINVAL;
}
if (cfg->dev_cfg_flags != 0) {
- otx2_err("Invalid device configuration flags requested");
+ cn9k_err("Invalid device configuration flags requested");
return -EINVAL;
}
/* Unregister error interrupts */
if (vf->err_intr_registered)
- otx2_ree_err_intr_unregister(dev);
+ roc_ree_err_intr_unregister(vf);
/* Detach queues */
if (vf->nb_queues) {
- ret = otx2_ree_queues_detach(dev);
+ ret = roc_ree_queues_detach(vf);
if (ret) {
- otx2_err("Could not detach REE queues");
+ cn9k_err("Could not detach REE queues");
return ret;
}
}
@@ -559,7 +551,7 @@ otx2_ree_dev_config(struct rte_regexdev *dev,
if (data->queue_pairs == NULL) {
data->nb_queue_pairs = 0;
- otx2_err("Failed to get memory for qp meta data, nb_queues %u",
+ cn9k_err("Failed to get memory for qp meta data, nb_queues %u",
cfg->nb_queue_pairs);
return -ENOMEM;
}
@@ -579,7 +571,7 @@ otx2_ree_dev_config(struct rte_regexdev *dev,
qp = rte_realloc(qp, sizeof(qp[0]) * cfg->nb_queue_pairs,
RTE_CACHE_LINE_SIZE);
if (qp == NULL) {
- otx2_err("Failed to realloc qp meta data, nb_queues %u",
+ cn9k_err("Failed to realloc qp meta data, nb_queues %u",
cfg->nb_queue_pairs);
return -ENOMEM;
}
@@ -594,52 +586,52 @@ otx2_ree_dev_config(struct rte_regexdev *dev,
data->nb_queue_pairs = cfg->nb_queue_pairs;
/* Attach queues */
- otx2_ree_dbg("Attach %d queues", cfg->nb_queue_pairs);
- ret = otx2_ree_queues_attach(dev, cfg->nb_queue_pairs);
+ cn9k_ree_dbg("Attach %d queues", cfg->nb_queue_pairs);
+ ret = roc_ree_queues_attach(vf, cfg->nb_queue_pairs);
if (ret) {
- otx2_err("Could not attach queues");
+ cn9k_err("Could not attach queues");
return -ENODEV;
}
- ret = otx2_ree_msix_offsets_get(dev);
+ ret = roc_ree_msix_offsets_get(vf);
if (ret) {
- otx2_err("Could not get MSI-X offsets");
+ cn9k_err("Could not get MSI-X offsets");
goto queues_detach;
}
if (cfg->rule_db && cfg->rule_db_len) {
- otx2_ree_dbg("rule_db length %d", cfg->rule_db_len);
+ cn9k_ree_dbg("rule_db length %d", cfg->rule_db_len);
rule_db = (const struct ree_rule_db *)cfg->rule_db;
rule_db_len = rule_db->number_of_entries *
sizeof(struct ree_rule_db_entry);
- otx2_ree_dbg("rule_db number of entries %d",
+ cn9k_ree_dbg("rule_db number of entries %d",
rule_db->number_of_entries);
if (rule_db_len > cfg->rule_db_len) {
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
ret = -EINVAL;
goto queues_detach;
}
- ret = otx2_ree_rule_db_prog(dev, (const char *)rule_db->entries,
- rule_db_len, NULL, OTX2_REE_NON_INC_PROG);
+ ret = roc_ree_rule_db_prog(vf, (const char *)rule_db->entries,
+ rule_db_len, NULL, REE_NON_INC_PROG);
if (ret) {
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
goto queues_detach;
}
}
- dev->enqueue = otx2_ree_enqueue_burst;
- dev->dequeue = otx2_ree_dequeue_burst;
+ dev->enqueue = cn9k_ree_enqueue_burst;
+ dev->dequeue = cn9k_ree_dequeue_burst;
rte_mb();
return 0;
queues_detach:
- otx2_ree_queues_detach(dev);
+ roc_ree_queues_detach(vf);
return ret;
}
static int
-otx2_ree_stop(struct rte_regexdev *dev)
+cn9k_ree_stop(struct rte_regexdev *dev)
{
RTE_SET_USED(dev);
@@ -648,18 +640,20 @@ otx2_ree_stop(struct rte_regexdev *dev)
}
static int
-otx2_ree_start(struct rte_regexdev *dev)
+cn9k_ree_start(struct rte_regexdev *dev)
{
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
uint32_t rule_db_len = 0;
int ret;
ree_func_trace();
- ret = otx2_ree_rule_db_len_get(dev, &rule_db_len, NULL);
+ ret = roc_ree_rule_db_len_get(vf, &rule_db_len, NULL);
if (ret)
return ret;
if (rule_db_len == 0) {
- otx2_err("Rule db not programmed");
+ cn9k_err("Rule db not programmed");
return -EFAULT;
}
@@ -667,56 +661,55 @@ otx2_ree_start(struct rte_regexdev *dev)
}
static int
-otx2_ree_close(struct rte_regexdev *dev)
+cn9k_ree_close(struct rte_regexdev *dev)
{
return ree_dev_fini(dev);
}
static int
-otx2_ree_queue_pair_setup(struct rte_regexdev *dev, uint16_t qp_id,
+cn9k_ree_queue_pair_setup(struct rte_regexdev *dev, uint16_t qp_id,
const struct rte_regexdev_qp_conf *qp_conf)
{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_qp *qp;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_qp *qp;
ree_func_trace("Queue=%d", qp_id);
if (data->queue_pairs[qp_id] != NULL)
ree_queue_pair_release(dev, qp_id);
- if (qp_conf->nb_desc > OTX2_REE_DEFAULT_CMD_QLEN) {
- otx2_err("Could not setup queue pair for %u descriptors",
+ if (qp_conf->nb_desc > REE_DEFAULT_CMD_QLEN) {
+ cn9k_err("Could not setup queue pair for %u descriptors",
qp_conf->nb_desc);
return -EINVAL;
}
if (qp_conf->qp_conf_flags != 0) {
- otx2_err("Could not setup queue pair with configuration flags 0x%x",
+ cn9k_err("Could not setup queue pair with configuration flags 0x%x",
qp_conf->qp_conf_flags);
return -EINVAL;
}
qp = ree_qp_create(dev, qp_id);
if (qp == NULL) {
- otx2_err("Could not create queue pair %d", qp_id);
+ cn9k_err("Could not create queue pair %d", qp_id);
return -ENOMEM;
}
- qp->cb = qp_conf->cb;
data->queue_pairs[qp_id] = qp;
return 0;
}
static int
-otx2_ree_rule_db_compile_activate(struct rte_regexdev *dev)
+cn9k_ree_rule_db_compile_activate(struct rte_regexdev *dev)
{
- return otx2_ree_rule_db_compile_prog(dev);
+ return cn9k_ree_rule_db_compile_prog(dev);
}
static int
-otx2_ree_rule_db_update(struct rte_regexdev *dev,
+cn9k_ree_rule_db_update(struct rte_regexdev *dev,
const struct rte_regexdev_rule *rules, uint16_t nb_rules)
{
- struct otx2_ree_data *data = dev->data->dev_private;
+ struct cn9k_ree_data *data = dev->data->dev_private;
struct rte_regexdev_rule *old_ptr;
uint32_t i, sum_nb_rules;
@@ -770,10 +763,11 @@ otx2_ree_rule_db_update(struct rte_regexdev *dev,
}
static int
-otx2_ree_rule_db_import(struct rte_regexdev *dev, const char *rule_db,
+cn9k_ree_rule_db_import(struct rte_regexdev *dev, const char *rule_db,
uint32_t rule_db_len)
{
-
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
const struct ree_rule_db *ree_rule_db;
uint32_t ree_rule_db_len;
int ret;
@@ -784,21 +778,23 @@ otx2_ree_rule_db_import(struct rte_regexdev *dev, const char *rule_db,
ree_rule_db_len = ree_rule_db->number_of_entries *
sizeof(struct ree_rule_db_entry);
if (ree_rule_db_len > rule_db_len) {
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
return -EINVAL;
}
- ret = otx2_ree_rule_db_prog(dev, (const char *)ree_rule_db->entries,
- ree_rule_db_len, NULL, OTX2_REE_NON_INC_PROG);
+ ret = roc_ree_rule_db_prog(vf, (const char *)ree_rule_db->entries,
+ ree_rule_db_len, NULL, REE_NON_INC_PROG);
if (ret) {
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
return -ENOSPC;
}
return 0;
}
static int
-otx2_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
+cn9k_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
{
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
struct ree_rule_db *ree_rule_db;
uint32_t rule_dbi_len;
uint32_t rule_db_len;
@@ -806,7 +802,7 @@ otx2_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
ree_func_trace();
- ret = otx2_ree_rule_db_len_get(dev, &rule_db_len, &rule_dbi_len);
+ ret = roc_ree_rule_db_len_get(vf, &rule_db_len, &rule_dbi_len);
if (ret)
return ret;
@@ -816,10 +812,10 @@ otx2_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
}
ree_rule_db = (struct ree_rule_db *)rule_db;
- ret = otx2_ree_rule_db_get(dev, (char *)ree_rule_db->entries,
+ ret = roc_ree_rule_db_get(vf, (char *)ree_rule_db->entries,
rule_db_len, NULL, 0);
if (ret) {
- otx2_err("Could not export rule db");
+ cn9k_err("Could not export rule db");
return -EFAULT;
}
ree_rule_db->number_of_entries =
@@ -830,55 +826,44 @@ otx2_ree_rule_db_export(struct rte_regexdev *dev, char *rule_db)
return 0;
}
-static int
-ree_get_blkaddr(struct otx2_dev *dev)
-{
- int pf;
-
- pf = otx2_get_pf(dev->pf_func);
- if (pf == REE0_PF)
- return RVU_BLOCK_ADDR_REE0;
- else if (pf == REE1_PF)
- return RVU_BLOCK_ADDR_REE1;
- else
- return 0;
-}
-
-static struct rte_regexdev_ops otx2_ree_ops = {
- .dev_info_get = otx2_ree_dev_info_get,
- .dev_configure = otx2_ree_dev_config,
- .dev_qp_setup = otx2_ree_queue_pair_setup,
- .dev_start = otx2_ree_start,
- .dev_stop = otx2_ree_stop,
- .dev_close = otx2_ree_close,
- .dev_attr_get = NULL,
- .dev_attr_set = NULL,
- .dev_rule_db_update = otx2_ree_rule_db_update,
- .dev_rule_db_compile_activate =
- otx2_ree_rule_db_compile_activate,
- .dev_db_import = otx2_ree_rule_db_import,
- .dev_db_export = otx2_ree_rule_db_export,
- .dev_xstats_names_get = NULL,
- .dev_xstats_get = NULL,
- .dev_xstats_by_name_get = NULL,
- .dev_xstats_reset = NULL,
- .dev_selftest = NULL,
- .dev_dump = NULL,
+static struct rte_regexdev_ops cn9k_ree_ops = {
+ .dev_info_get = cn9k_ree_dev_info_get,
+ .dev_configure = cn9k_ree_dev_config,
+ .dev_qp_setup = cn9k_ree_queue_pair_setup,
+ .dev_start = cn9k_ree_start,
+ .dev_stop = cn9k_ree_stop,
+ .dev_close = cn9k_ree_close,
+ .dev_attr_get = NULL,
+ .dev_attr_set = NULL,
+ .dev_rule_db_update = cn9k_ree_rule_db_update,
+ .dev_rule_db_compile_activate =
+ cn9k_ree_rule_db_compile_activate,
+ .dev_db_import = cn9k_ree_rule_db_import,
+ .dev_db_export = cn9k_ree_rule_db_export,
+ .dev_xstats_names_get = NULL,
+ .dev_xstats_get = NULL,
+ .dev_xstats_by_name_get = NULL,
+ .dev_xstats_reset = NULL,
+ .dev_selftest = NULL,
+ .dev_dump = NULL,
};
static int
-otx2_ree_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+cn9k_ree_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
struct rte_pci_device *pci_dev)
{
char name[RTE_REGEXDEV_NAME_MAX_LEN];
- struct otx2_ree_data *data;
- struct otx2_dev *otx2_dev;
+ struct cn9k_ree_data *data;
struct rte_regexdev *dev;
- uint8_t max_matches = 0;
- struct otx2_ree_vf *vf;
- uint16_t nb_queues = 0;
+ struct roc_ree_vf *vf;
int ret;
+ ret = roc_plt_init();
+ if (ret < 0) {
+ plt_err("Failed to initialize platform model");
+ return ret;
+ }
+
rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
dev = ree_dev_register(name);
@@ -887,63 +872,19 @@ otx2_ree_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
goto exit;
}
- dev->dev_ops = &otx2_ree_ops;
+ dev->dev_ops = &cn9k_ree_ops;
dev->device = &pci_dev->device;
/* Get private data space allocated */
data = dev->data->dev_private;
vf = &data->vf;
-
- otx2_dev = &vf->otx2_dev;
-
- /* Initialize the base otx2_dev object */
- ret = otx2_dev_init(pci_dev, otx2_dev);
+ vf->pci_dev = pci_dev;
+ ret = roc_ree_dev_init(vf);
if (ret) {
- otx2_err("Could not initialize otx2_dev");
+ plt_err("Failed to initialize roc cpt rc=%d", ret);
goto dev_unregister;
}
- /* Get REE block address */
- vf->block_address = ree_get_blkaddr(otx2_dev);
- if (!vf->block_address) {
- otx2_err("Could not determine block PF number");
- goto otx2_dev_fini;
- }
- /* Get number of queues available on the device */
- ret = otx2_ree_available_queues_get(dev, &nb_queues);
- if (ret) {
- otx2_err("Could not determine the number of queues available");
- goto otx2_dev_fini;
- }
-
- /* Don't exceed the limits set per VF */
- nb_queues = RTE_MIN(nb_queues, OTX2_REE_MAX_QUEUES_PER_VF);
-
- if (nb_queues == 0) {
- otx2_err("No free queues available on the device");
- goto otx2_dev_fini;
- }
-
- vf->max_queues = nb_queues;
-
- otx2_ree_dbg("Max queues supported by device: %d", vf->max_queues);
-
- /* Get number of maximum matches supported on the device */
- ret = otx2_ree_max_matches_get(dev, &max_matches);
- if (ret) {
- otx2_err("Could not determine the maximum matches supported");
- goto otx2_dev_fini;
- }
- /* Don't exceed the limits set per VF */
- max_matches = RTE_MIN(max_matches, OTX2_REE_MAX_MATCHES_PER_VF);
- if (max_matches == 0) {
- otx2_err("Could not determine the maximum matches supported");
- goto otx2_dev_fini;
- }
-
- vf->max_matches = max_matches;
-
- otx2_ree_dbg("Max matches supported by device: %d", vf->max_matches);
data->rule_flags = RTE_REGEX_PCRE_RULE_ALLOW_EMPTY_F |
RTE_REGEX_PCRE_RULE_ANCHORED_F;
data->regexdev_capa = 0;
@@ -954,18 +895,16 @@ otx2_ree_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
dev->state = RTE_REGEXDEV_READY;
return 0;
-otx2_dev_fini:
- otx2_dev_fini(pci_dev, otx2_dev);
dev_unregister:
ree_dev_unregister(dev);
exit:
- otx2_err("Could not create device (vendor_id: 0x%x device_id: 0x%x)",
+ cn9k_err("Could not create device (vendor_id: 0x%x device_id: 0x%x)",
pci_dev->id.vendor_id, pci_dev->id.device_id);
return ret;
}
static int
-otx2_ree_pci_remove(struct rte_pci_device *pci_dev)
+cn9k_ree_pci_remove(struct rte_pci_device *pci_dev)
{
char name[RTE_REGEXDEV_NAME_MAX_LEN];
struct rte_regexdev *dev = NULL;
@@ -986,20 +925,20 @@ otx2_ree_pci_remove(struct rte_pci_device *pci_dev)
static struct rte_pci_id pci_id_ree_table[] = {
{
RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_REE_PF)
+ PCI_DEVID_CNXK_RVU_REE_PF)
},
{
.vendor_id = 0,
}
};
-static struct rte_pci_driver otx2_regexdev_pmd = {
+static struct rte_pci_driver cn9k_regexdev_pmd = {
.id_table = pci_id_ree_table,
.drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = otx2_ree_pci_probe,
- .remove = otx2_ree_pci_remove,
+ .probe = cn9k_ree_pci_probe,
+ .remove = cn9k_ree_pci_remove,
};
-RTE_PMD_REGISTER_PCI(REGEXDEV_NAME_OCTEONTX2_PMD, otx2_regexdev_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(REGEXDEV_NAME_OCTEONTX2_PMD, pci_id_ree_table);
+RTE_PMD_REGISTER_PCI(REGEXDEV_NAME_CN9K_PMD, cn9k_regexdev_pmd);
+RTE_PMD_REGISTER_PCI_TABLE(REGEXDEV_NAME_CN9K_PMD, pci_id_ree_table);
diff --git a/drivers/regex/cn9k/cn9k_regexdev.h b/drivers/regex/cn9k/cn9k_regexdev.h
new file mode 100644
index 0000000000..c715502167
--- /dev/null
+++ b/drivers/regex/cn9k/cn9k_regexdev.h
@@ -0,0 +1,44 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _CN9K_REGEXDEV_H_
+#define _CN9K_REGEXDEV_H_
+
+#include <rte_common.h>
+#include <rte_regexdev.h>
+
+#include "roc_api.h"
+
+#define cn9k_ree_dbg plt_ree_dbg
+#define cn9k_err plt_err
+
+#define ree_func_trace cn9k_ree_dbg
+
+/* Marvell CN9K Regex PMD device name */
+#define REGEXDEV_NAME_CN9K_PMD regex_cn9k
+
+/**
+ * Device private data
+ */
+struct cn9k_ree_data {
+ uint32_t regexdev_capa;
+ uint64_t rule_flags;
+ /**< Feature flags exposes HW/SW features for the given device */
+ uint16_t max_rules_per_group;
+ /**< Maximum rules supported per subset by this device */
+ uint16_t max_groups;
+ /**< Maximum subset supported by this device */
+ void **queue_pairs;
+ /**< Array of pointers to queue pairs. */
+ uint16_t nb_queue_pairs;
+ /**< Number of device queue pairs. */
+ struct roc_ree_vf vf;
+ /**< vf data */
+ struct rte_regexdev_rule *rules;
+ /**< rules to be compiled */
+ uint16_t nb_rules;
+ /**< number of rules */
+} __rte_cache_aligned;
+
+#endif /* _CN9K_REGEXDEV_H_ */
diff --git a/drivers/regex/octeontx2/otx2_regexdev_compiler.c b/drivers/regex/cn9k/cn9k_regexdev_compiler.c
similarity index 86%
rename from drivers/regex/octeontx2/otx2_regexdev_compiler.c
rename to drivers/regex/cn9k/cn9k_regexdev_compiler.c
index 785459f741..935b8a53b4 100644
--- a/drivers/regex/octeontx2/otx2_regexdev_compiler.c
+++ b/drivers/regex/cn9k/cn9k_regexdev_compiler.c
@@ -5,9 +5,8 @@
#include <rte_malloc.h>
#include <rte_regexdev.h>
-#include "otx2_regexdev.h"
-#include "otx2_regexdev_compiler.h"
-#include "otx2_regexdev_mbox.h"
+#include "cn9k_regexdev.h"
+#include "cn9k_regexdev_compiler.h"
#ifdef REE_COMPILER_SDK
#include <rxp-compiler.h>
@@ -65,7 +64,7 @@ ree_rule_db_compile(const struct rte_regexdev_rule *rules,
nb_rules*sizeof(struct rxp_rule_entry), 0);
if (ruleset.rules == NULL) {
- otx2_err("Could not allocate memory for rule compilation\n");
+ cn9k_err("Could not allocate memory for rule compilation\n");
return -EFAULT;
}
if (rof_for_incremental_compile)
@@ -126,9 +125,10 @@ ree_rule_db_compile(const struct rte_regexdev_rule *rules,
}
int
-otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
+cn9k_ree_rule_db_compile_prog(struct rte_regexdev *dev)
{
- struct otx2_ree_data *data = dev->data->dev_private;
+ struct cn9k_ree_data *data = dev->data->dev_private;
+ struct roc_ree_vf *vf = &data->vf;
char compiler_version[] = "20.5.2.eda0fa2";
char timestamp[] = "19700101_000001";
uint32_t rule_db_len, rule_dbi_len;
@@ -144,25 +144,25 @@ otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
ree_func_trace();
- ret = otx2_ree_rule_db_len_get(dev, &rule_db_len, &rule_dbi_len);
+ ret = roc_ree_rule_db_len_get(vf, &rule_db_len, &rule_dbi_len);
if (ret != 0) {
- otx2_err("Could not get rule db length");
+ cn9k_err("Could not get rule db length");
return ret;
}
if (rule_db_len > 0) {
- otx2_ree_dbg("Incremental compile, rule db len %d rule dbi len %d",
+ cn9k_ree_dbg("Incremental compile, rule db len %d rule dbi len %d",
rule_db_len, rule_dbi_len);
rule_db = rte_malloc("ree_rule_db", rule_db_len, 0);
if (!rule_db) {
- otx2_err("Could not allocate memory for rule db");
+ cn9k_err("Could not allocate memory for rule db");
return -EFAULT;
}
- ret = otx2_ree_rule_db_get(dev, rule_db, rule_db_len,
+ ret = roc_ree_rule_db_get(vf, rule_db, rule_db_len,
(char *)rule_dbi, rule_dbi_len);
if (ret) {
- otx2_err("Could not read rule db");
+ cn9k_err("Could not read rule db");
rte_free(rule_db);
return -EFAULT;
}
@@ -188,7 +188,7 @@ otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
ret = ree_rule_db_compile(data->rules, data->nb_rules, &rof,
&rofi, &rof_inc, rofi_inc_p);
if (rofi->number_of_entries == 0) {
- otx2_ree_dbg("No change to rule db");
+ cn9k_ree_dbg("No change to rule db");
ret = 0;
goto free_structs;
}
@@ -201,14 +201,14 @@ otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
&rofi, NULL, NULL);
}
if (ret != 0) {
- otx2_err("Could not compile rule db");
+ cn9k_err("Could not compile rule db");
goto free_structs;
}
rule_db_len = rof->number_of_entries * sizeof(struct rxp_rof_entry);
- ret = otx2_ree_rule_db_prog(dev, (char *)rof->rof_entries, rule_db_len,
+ ret = roc_ree_rule_db_prog(vf, (char *)rof->rof_entries, rule_db_len,
rofi_rof_entries, rule_dbi_len);
if (ret)
- otx2_err("Could not program rule db");
+ cn9k_err("Could not program rule db");
free_structs:
rxp_free_structs(NULL, NULL, NULL, NULL, NULL, &rof, NULL, &rofi, NULL,
@@ -221,7 +221,7 @@ otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
}
#else
int
-otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev)
+cn9k_ree_rule_db_compile_prog(struct rte_regexdev *dev)
{
RTE_SET_USED(dev);
return -ENOTSUP;
diff --git a/drivers/regex/cn9k/cn9k_regexdev_compiler.h b/drivers/regex/cn9k/cn9k_regexdev_compiler.h
new file mode 100644
index 0000000000..4c29a69ada
--- /dev/null
+++ b/drivers/regex/cn9k/cn9k_regexdev_compiler.h
@@ -0,0 +1,11 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (C) 2020 Marvell International Ltd.
+ */
+
+#ifndef _CN9K_REGEXDEV_COMPILER_H_
+#define _CN9K_REGEXDEV_COMPILER_H_
+
+int
+cn9k_ree_rule_db_compile_prog(struct rte_regexdev *dev);
+
+#endif /* _CN9K_REGEXDEV_COMPILER_H_ */
diff --git a/drivers/regex/octeontx2/meson.build b/drivers/regex/cn9k/meson.build
similarity index 65%
rename from drivers/regex/octeontx2/meson.build
rename to drivers/regex/cn9k/meson.build
index 3f81add5bf..bb0504fba1 100644
--- a/drivers/regex/octeontx2/meson.build
+++ b/drivers/regex/cn9k/meson.build
@@ -16,12 +16,10 @@ if lib.found()
endif
sources = files(
- 'otx2_regexdev.c',
- 'otx2_regexdev_compiler.c',
- 'otx2_regexdev_hw_access.c',
- 'otx2_regexdev_mbox.c',
+ 'cn9k_regexdev.c',
+ 'cn9k_regexdev_compiler.c',
)
-deps += ['bus_pci', 'common_octeontx2', 'regexdev']
+deps += ['bus_pci', 'regexdev']
+deps += ['common_cnxk', 'mempool_cnxk']
-includes += include_directories('../../common/octeontx2')
diff --git a/drivers/regex/octeontx2/version.map b/drivers/regex/cn9k/version.map
similarity index 100%
rename from drivers/regex/octeontx2/version.map
rename to drivers/regex/cn9k/version.map
diff --git a/drivers/regex/meson.build b/drivers/regex/meson.build
index 94222e55fe..7ad55af8ca 100644
--- a/drivers/regex/meson.build
+++ b/drivers/regex/meson.build
@@ -3,6 +3,6 @@
drivers = [
'mlx5',
- 'octeontx2',
+ 'cn9k',
]
std_deps = ['ethdev', 'kvargs'] # 'ethdev' also pulls in mbuf, net, eal etc
diff --git a/drivers/regex/octeontx2/otx2_regexdev.h b/drivers/regex/octeontx2/otx2_regexdev.h
deleted file mode 100644
index d710535f5f..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev.h
+++ /dev/null
@@ -1,109 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_REGEXDEV_H_
-#define _OTX2_REGEXDEV_H_
-
-#include <rte_common.h>
-#include <rte_regexdev.h>
-
-#include "otx2_dev.h"
-
-#define ree_func_trace otx2_ree_dbg
-
-/* Marvell OCTEON TX2 Regex PMD device name */
-#define REGEXDEV_NAME_OCTEONTX2_PMD regex_octeontx2
-
-#define OTX2_REE_MAX_LFS 36
-#define OTX2_REE_MAX_QUEUES_PER_VF 36
-#define OTX2_REE_MAX_MATCHES_PER_VF 254
-
-#define OTX2_REE_MAX_PAYLOAD_SIZE (1 << 14)
-
-#define OTX2_REE_NON_INC_PROG 0
-#define OTX2_REE_INC_PROG 1
-
-#define REE_MOD_INC(i, l) ((i) == (l - 1) ? (i) = 0 : (i)++)
-
-
-/**
- * Device vf data
- */
-struct otx2_ree_vf {
- struct otx2_dev otx2_dev;
- /**< Base class */
- uint16_t max_queues;
- /**< Max queues supported */
- uint8_t nb_queues;
- /**< Number of regex queues attached */
- uint16_t max_matches;
- /**< Max matches supported*/
- uint16_t lf_msixoff[OTX2_REE_MAX_LFS];
- /**< MSI-X offsets */
- uint8_t block_address;
- /**< REE Block Address */
- uint8_t err_intr_registered:1;
- /**< Are error interrupts registered? */
-};
-
-/**
- * Device private data
- */
-struct otx2_ree_data {
- uint32_t regexdev_capa;
- uint64_t rule_flags;
- /**< Feature flags exposes HW/SW features for the given device */
- uint16_t max_rules_per_group;
- /**< Maximum rules supported per subset by this device */
- uint16_t max_groups;
- /**< Maximum subset supported by this device */
- void **queue_pairs;
- /**< Array of pointers to queue pairs. */
- uint16_t nb_queue_pairs;
- /**< Number of device queue pairs. */
- struct otx2_ree_vf vf;
- /**< vf data */
- struct rte_regexdev_rule *rules;
- /**< rules to be compiled */
- uint16_t nb_rules;
- /**< number of rules */
-} __rte_cache_aligned;
-
-struct otx2_ree_rid {
- uintptr_t rid;
- /** Request id of a ree operation */
- uint64_t user_id;
- /* Client data */
- /**< IOVA address of the pattern to be matched. */
-};
-
-struct otx2_ree_pending_queue {
- uint64_t pending_count;
- /** Pending requests count */
- struct otx2_ree_rid *rid_queue;
- /** Array of pending requests */
- uint16_t enq_tail;
- /** Tail of queue to be used for enqueue */
- uint16_t deq_head;
- /** Head of queue to be used for dequeue */
-};
-
-struct otx2_ree_qp {
- uint32_t id;
- /**< Queue pair id */
- uintptr_t base;
- /**< Base address where BAR is mapped */
- struct otx2_ree_pending_queue pend_q;
- /**< Pending queue */
- rte_iova_t iq_dma_addr;
- /**< Instruction queue address */
- uint32_t otx2_regexdev_jobid;
- /**< Job ID */
- uint32_t write_offset;
- /**< write offset */
- regexdev_stop_flush_t cb;
- /**< Callback function called during rte_regex_dev_stop()*/
-};
-
-#endif /* _OTX2_REGEXDEV_H_ */
diff --git a/drivers/regex/octeontx2/otx2_regexdev_compiler.h b/drivers/regex/octeontx2/otx2_regexdev_compiler.h
deleted file mode 100644
index 8d2625bf7f..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_compiler.h
+++ /dev/null
@@ -1,11 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_REGEXDEV_COMPILER_H_
-#define _OTX2_REGEXDEV_COMPILER_H_
-
-int
-otx2_ree_rule_db_compile_prog(struct rte_regexdev *dev);
-
-#endif /* _OTX2_REGEXDEV_COMPILER_H_ */
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c b/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
deleted file mode 100644
index f8031d0f72..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.c
+++ /dev/null
@@ -1,167 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_regexdev_hw_access.h"
-#include "otx2_regexdev_mbox.h"
-
-static void
-ree_lf_err_intr_handler(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t lf_id;
- uint64_t intr;
-
- lf_id = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + OTX2_REE_LF_MISC_INT);
- if (intr == 0)
- return;
-
- otx2_ree_dbg("LF %d MISC_INT: 0x%" PRIx64 "", lf_id, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + OTX2_REE_LF_MISC_INT);
-}
-
-static void
-ree_lf_err_intr_unregister(const struct rte_regexdev *dev, uint16_t msix_off,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
-
- otx2_unregister_irq(handle, ree_lf_err_intr_handler, (void *)base,
- msix_off);
-}
-
-void
-otx2_ree_err_intr_unregister(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- uintptr_t base;
- uint32_t i;
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_REE_LF_BAR2(vf, i);
- ree_lf_err_intr_unregister(dev, vf->lf_msixoff[i], base);
- }
-
- vf->err_intr_registered = 0;
-}
-
-static int
-ree_lf_err_intr_register(const struct rte_regexdev *dev, uint16_t msix_off,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int ret;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1C);
-
- /* Register error interrupt handler */
- ret = otx2_register_irq(handle, ree_lf_err_intr_handler, (void *)base,
- msix_off);
- if (ret)
- return ret;
-
- /* Enable error interrupts */
- otx2_write64(~0ull, base + OTX2_REE_LF_MISC_INT_ENA_W1S);
-
- return 0;
-}
-
-int
-otx2_ree_err_intr_register(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- uint32_t i, j, ret;
- uintptr_t base;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid REE LF MSI-X offset: 0x%x",
- vf->lf_msixoff[i]);
- return -EINVAL;
- }
- }
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_REE_LF_BAR2(vf, i);
- ret = ree_lf_err_intr_register(dev, vf->lf_msixoff[i], base);
- if (ret)
- goto intr_unregister;
- }
-
- vf->err_intr_registered = 1;
- return 0;
-
-intr_unregister:
- /* Unregister the ones already registered */
- for (j = 0; j < i; j++) {
- base = OTX2_REE_LF_BAR2(vf, j);
- ree_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base);
- }
- return ret;
-}
-
-int
-otx2_ree_iq_enable(const struct rte_regexdev *dev, const struct otx2_ree_qp *qp,
- uint8_t pri, uint32_t size_div2)
-{
- union otx2_ree_lf_sbuf_addr base;
- union otx2_ree_lf_ena lf_ena;
-
- /* Set instruction queue size and priority */
- otx2_ree_config_lf(dev, qp->id, pri, size_div2);
-
- /* Set instruction queue base address */
- /* Should be written after SBUF_CTL and before LF_ENA */
-
- base.u = otx2_read64(qp->base + OTX2_REE_LF_SBUF_ADDR);
- base.s.ptr = qp->iq_dma_addr >> 7;
- otx2_write64(base.u, qp->base + OTX2_REE_LF_SBUF_ADDR);
-
- /* Enable instruction queue */
-
- lf_ena.u = otx2_read64(qp->base + OTX2_REE_LF_ENA);
- lf_ena.s.ena = 1;
- otx2_write64(lf_ena.u, qp->base + OTX2_REE_LF_ENA);
-
- return 0;
-}
-
-void
-otx2_ree_iq_disable(struct otx2_ree_qp *qp)
-{
- union otx2_ree_lf_ena lf_ena;
-
- /* Stop instruction execution */
- lf_ena.u = otx2_read64(qp->base + OTX2_REE_LF_ENA);
- lf_ena.s.ena = 0x0;
- otx2_write64(lf_ena.u, qp->base + OTX2_REE_LF_ENA);
-}
-
-int
-otx2_ree_max_matches_get(const struct rte_regexdev *dev, uint8_t *max_matches)
-{
- union otx2_ree_af_reexm_max_match reexm_max_match;
- int ret;
-
- ret = otx2_ree_af_reg_read(dev, REE_AF_REEXM_MAX_MATCH,
- &reexm_max_match.u);
- if (ret)
- return ret;
-
- *max_matches = reexm_max_match.s.max;
- return 0;
-}
diff --git a/drivers/regex/octeontx2/otx2_regexdev_hw_access.h b/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
deleted file mode 100644
index dedf5f3282..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_hw_access.h
+++ /dev/null
@@ -1,202 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_REGEXDEV_HW_ACCESS_H_
-#define _OTX2_REGEXDEV_HW_ACCESS_H_
-
-#include <stdint.h>
-
-#include "otx2_regexdev.h"
-
-/* REE instruction queue length */
-#define OTX2_REE_IQ_LEN (1 << 13)
-
-#define OTX2_REE_DEFAULT_CMD_QLEN OTX2_REE_IQ_LEN
-
-/* Status register bits */
-#define OTX2_REE_STATUS_PMI_EOJ_BIT (1 << 14)
-#define OTX2_REE_STATUS_PMI_SOJ_BIT (1 << 13)
-#define OTX2_REE_STATUS_MP_CNT_DET_BIT (1 << 7)
-#define OTX2_REE_STATUS_MM_CNT_DET_BIT (1 << 6)
-#define OTX2_REE_STATUS_ML_CNT_DET_BIT (1 << 5)
-#define OTX2_REE_STATUS_MST_CNT_DET_BIT (1 << 4)
-#define OTX2_REE_STATUS_MPT_CNT_DET_BIT (1 << 3)
-
-/* Register offsets */
-/* REE LF registers */
-#define OTX2_REE_LF_DONE_INT 0x120ull
-#define OTX2_REE_LF_DONE_INT_W1S 0x130ull
-#define OTX2_REE_LF_DONE_INT_ENA_W1S 0x138ull
-#define OTX2_REE_LF_DONE_INT_ENA_W1C 0x140ull
-#define OTX2_REE_LF_MISC_INT 0x300ull
-#define OTX2_REE_LF_MISC_INT_W1S 0x310ull
-#define OTX2_REE_LF_MISC_INT_ENA_W1S 0x320ull
-#define OTX2_REE_LF_MISC_INT_ENA_W1C 0x330ull
-#define OTX2_REE_LF_ENA 0x10ull
-#define OTX2_REE_LF_SBUF_ADDR 0x20ull
-#define OTX2_REE_LF_DONE 0x100ull
-#define OTX2_REE_LF_DONE_ACK 0x110ull
-#define OTX2_REE_LF_DONE_WAIT 0x148ull
-#define OTX2_REE_LF_DOORBELL 0x400ull
-#define OTX2_REE_LF_OUTSTAND_JOB 0x410ull
-
-/* BAR 0 */
-#define OTX2_REE_AF_QUE_SBUF_CTL(a) (0x1200ull | (uint64_t)(a) << 3)
-#define OTX2_REE_PRIV_LF_CFG(a) (0x41000ull | (uint64_t)(a) << 3)
-
-#define OTX2_REE_LF_BAR2(vf, q_id) \
- ((vf)->otx2_dev.bar2 + \
- (((vf)->block_address << 20) | ((q_id) << 12)))
-
-
-#define OTX2_REE_QUEUE_HI_PRIO 0x1
-
-enum ree_desc_type_e {
- REE_TYPE_JOB_DESC = 0x0,
- REE_TYPE_RESULT_DESC = 0x1,
- REE_TYPE_ENUM_LAST = 0x2
-};
-
-union otx2_ree_priv_lf_cfg {
- uint64_t u;
- struct {
- uint64_t slot : 8;
- uint64_t pf_func : 16;
- uint64_t reserved_24_62 : 39;
- uint64_t ena : 1;
- } s;
-};
-
-
-union otx2_ree_lf_sbuf_addr {
- uint64_t u;
- struct {
- uint64_t off : 7;
- uint64_t ptr : 46;
- uint64_t reserved_53_63 : 11;
- } s;
-};
-
-union otx2_ree_lf_ena {
- uint64_t u;
- struct {
- uint64_t ena : 1;
- uint64_t reserved_1_63 : 63;
- } s;
-};
-
-union otx2_ree_af_reexm_max_match {
- uint64_t u;
- struct {
- uint64_t max : 8;
- uint64_t reserved_8_63 : 56;
- } s;
-};
-
-union otx2_ree_lf_done {
- uint64_t u;
- struct {
- uint64_t done : 20;
- uint64_t reserved_20_63 : 44;
- } s;
-};
-
-union otx2_ree_inst {
- uint64_t u[8];
- struct {
- uint64_t doneint : 1;
- uint64_t reserved_1_3 : 3;
- uint64_t dg : 1;
- uint64_t reserved_5_7 : 3;
- uint64_t ooj : 1;
- uint64_t reserved_9_15 : 7;
- uint64_t reserved_16_63 : 48;
- uint64_t inp_ptr_addr : 64;
- uint64_t inp_ptr_ctl : 64;
- uint64_t res_ptr_addr : 64;
- uint64_t wq_ptr : 64;
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t ggrp : 10;
- uint64_t reserved_364_383 : 20;
- uint64_t reserved_384_391 : 8;
- uint64_t ree_job_id : 24;
- uint64_t ree_job_ctrl : 16;
- uint64_t ree_job_length : 15;
- uint64_t reserved_447_447 : 1;
- uint64_t ree_job_subset_id_0 : 16;
- uint64_t ree_job_subset_id_1 : 16;
- uint64_t ree_job_subset_id_2 : 16;
- uint64_t ree_job_subset_id_3 : 16;
- } cn98xx;
-};
-
-union otx2_ree_res_status {
- uint64_t u;
- struct {
- uint64_t job_type : 3;
- uint64_t mpt_cnt_det : 1;
- uint64_t mst_cnt_det : 1;
- uint64_t ml_cnt_det : 1;
- uint64_t mm_cnt_det : 1;
- uint64_t mp_cnt_det : 1;
- uint64_t mode : 2;
- uint64_t reserved_10_11 : 2;
- uint64_t reserved_12_12 : 1;
- uint64_t pmi_soj : 1;
- uint64_t pmi_eoj : 1;
- uint64_t reserved_15_15 : 1;
- uint64_t reserved_16_63 : 48;
- } s;
-};
-
-union otx2_ree_res {
- uint64_t u[8];
- struct ree_res_s_98 {
- uint64_t done : 1;
- uint64_t hwjid : 7;
- uint64_t ree_res_job_id : 24;
- uint64_t ree_res_status : 16;
- uint64_t ree_res_dmcnt : 8;
- uint64_t ree_res_mcnt : 8;
- uint64_t ree_meta_ptcnt : 16;
- uint64_t ree_meta_icnt : 16;
- uint64_t ree_meta_lcnt : 16;
- uint64_t ree_pmi_min_byte_ptr : 16;
- uint64_t ree_err : 1;
- uint64_t reserved_129_190 : 62;
- uint64_t doneint : 1;
- uint64_t reserved_192_255 : 64;
- uint64_t reserved_256_319 : 64;
- uint64_t reserved_320_383 : 64;
- uint64_t reserved_384_447 : 64;
- uint64_t reserved_448_511 : 64;
- } s;
-};
-
-union otx2_ree_match {
- uint64_t u;
- struct {
- uint64_t ree_rule_id : 32;
- uint64_t start_ptr : 14;
- uint64_t reserved_46_47 : 2;
- uint64_t match_length : 15;
- uint64_t reserved_63_63 : 1;
- } s;
-};
-
-void otx2_ree_err_intr_unregister(const struct rte_regexdev *dev);
-
-int otx2_ree_err_intr_register(const struct rte_regexdev *dev);
-
-int otx2_ree_iq_enable(const struct rte_regexdev *dev,
- const struct otx2_ree_qp *qp,
- uint8_t pri, uint32_t size_div128);
-
-void otx2_ree_iq_disable(struct otx2_ree_qp *qp);
-
-int otx2_ree_max_matches_get(const struct rte_regexdev *dev,
- uint8_t *max_matches);
-
-#endif /* _OTX2_REGEXDEV_HW_ACCESS_H_ */
diff --git a/drivers/regex/octeontx2/otx2_regexdev_mbox.c b/drivers/regex/octeontx2/otx2_regexdev_mbox.c
deleted file mode 100644
index 6d58d367d4..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_mbox.c
+++ /dev/null
@@ -1,401 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_regexdev_mbox.h"
-#include "otx2_regexdev.h"
-
-int
-otx2_ree_available_queues_get(const struct rte_regexdev *dev,
- uint16_t *nb_queues)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct free_rsrcs_rsp *rsp;
- struct otx2_dev *otx2_dev;
- int ret;
-
- otx2_dev = &vf->otx2_dev;
- otx2_mbox_alloc_msg_free_rsrc_cnt(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- if (vf->block_address == RVU_BLOCK_ADDR_REE0)
- *nb_queues = rsp->ree0;
- else
- *nb_queues = rsp->ree1;
- return 0;
-}
-
-int
-otx2_ree_queues_attach(const struct rte_regexdev *dev, uint8_t nb_queues)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct rsrc_attach_req *req;
- struct otx2_mbox *mbox;
-
- /* Ask AF to attach required LFs */
- mbox = vf->otx2_dev.mbox;
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
-
- /* 1 LF = 1 queue */
- req->reelfs = nb_queues;
- req->ree_blkaddr = vf->block_address;
-
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
-
- /* Update number of attached queues */
- vf->nb_queues = nb_queues;
-
- return 0;
-}
-
-int
-otx2_ree_queues_detach(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct rsrc_detach_req *req;
- struct otx2_mbox *mbox;
-
- mbox = vf->otx2_dev.mbox;
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->reelfs = true;
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
-
- /* Queues have been detached */
- vf->nb_queues = 0;
-
- return 0;
-}
-
-int
-otx2_ree_msix_offsets_get(const struct rte_regexdev *dev)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct msix_offset_rsp *rsp;
- struct otx2_mbox *mbox;
- uint32_t i, ret;
-
- /* Get REE MSI-X vector offsets */
- mbox = vf->otx2_dev.mbox;
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->block_address == RVU_BLOCK_ADDR_REE0)
- vf->lf_msixoff[i] = rsp->ree0_lf_msixoff[i];
- else
- vf->lf_msixoff[i] = rsp->ree1_lf_msixoff[i];
- otx2_ree_dbg("lf_msixoff[%d] 0x%x", i, vf->lf_msixoff[i]);
- }
-
- return 0;
-}
-
-static int
-ree_send_mbox_msg(struct otx2_ree_vf *vf)
-{
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int ret;
-
- otx2_mbox_msg_send(mbox, 0);
-
- ret = otx2_mbox_wait_for_rsp(mbox, 0);
- if (ret < 0) {
- otx2_err("Could not get mailbox response");
- return ret;
- }
-
- return 0;
-}
-
-int
-otx2_ree_config_lf(const struct rte_regexdev *dev, uint8_t lf, uint8_t pri,
- uint32_t size)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_lf_req_msg *req;
- struct otx2_mbox *mbox;
- int ret;
-
- mbox = vf->otx2_dev.mbox;
- req = otx2_mbox_alloc_msg_ree_config_lf(mbox);
-
- req->lf = lf;
- req->pri = pri ? 1 : 0;
- req->size = size;
- req->blkaddr = vf->block_address;
-
- ret = otx2_mbox_process(mbox);
- if (ret < 0) {
- otx2_err("Could not get mailbox response");
- return ret;
- }
- return 0;
-}
-
-int
-otx2_ree_af_reg_read(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t *val)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_rd_wr_reg_msg *msg;
- struct otx2_mbox_dev *mdev;
- struct otx2_mbox *mbox;
- int ret, off;
-
- mbox = vf->otx2_dev.mbox;
- mdev = &mbox->dev[0];
- msg = (struct ree_rd_wr_reg_msg *)otx2_mbox_alloc_msg_rsp(mbox, 0,
- sizeof(*msg), sizeof(*msg));
- if (msg == NULL) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_REE_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 0;
- msg->reg_offset = reg;
- msg->ret_val = val;
- msg->blkaddr = vf->block_address;
-
- ret = ree_send_mbox_msg(vf);
- if (ret < 0)
- return ret;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msg = (struct ree_rd_wr_reg_msg *) ((uintptr_t)mdev->mbase + off);
-
- *val = msg->val;
-
- return 0;
-}
-
-int
-otx2_ree_af_reg_write(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t val)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_rd_wr_reg_msg *msg;
- struct otx2_mbox *mbox;
-
- mbox = vf->otx2_dev.mbox;
- msg = (struct ree_rd_wr_reg_msg *)otx2_mbox_alloc_msg_rsp(mbox, 0,
- sizeof(*msg), sizeof(*msg));
- if (msg == NULL) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_REE_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 1;
- msg->reg_offset = reg;
- msg->val = val;
- msg->blkaddr = vf->block_address;
-
- return ree_send_mbox_msg(vf);
-}
-
-int
-otx2_ree_rule_db_get(const struct rte_regexdev *dev, char *rule_db,
- uint32_t rule_db_len, char *rule_dbi, uint32_t rule_dbi_len)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct ree_rule_db_get_req_msg *req;
- struct ree_rule_db_get_rsp_msg *rsp;
- char *rule_db_ptr = (char *)rule_db;
- struct otx2_ree_vf *vf = &data->vf;
- struct otx2_mbox *mbox;
- int ret, last = 0;
- uint32_t len = 0;
-
- mbox = vf->otx2_dev.mbox;
- if (!rule_db) {
- otx2_err("Couldn't return rule db due to NULL pointer");
- return -EFAULT;
- }
-
- while (!last) {
- req = (struct ree_rule_db_get_req_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req),
- sizeof(*rsp));
- if (!req) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- req->hdr.id = MBOX_MSG_REE_RULE_DB_GET;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = vf->otx2_dev.pf_func;
- req->blkaddr = vf->block_address;
- req->is_dbi = 0;
- req->offset = len;
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
- if (rule_db_len < len + rsp->len) {
- otx2_err("Rule db size is too small");
- return -EFAULT;
- }
- otx2_mbox_memcpy(rule_db_ptr, rsp->rule_db, rsp->len);
- len += rsp->len;
- rule_db_ptr = rule_db_ptr + rsp->len;
- last = rsp->is_last;
- }
-
- if (rule_dbi) {
- req = (struct ree_rule_db_get_req_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req),
- sizeof(*rsp));
- if (!req) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- req->hdr.id = MBOX_MSG_REE_RULE_DB_GET;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = vf->otx2_dev.pf_func;
- req->blkaddr = vf->block_address;
- req->is_dbi = 1;
- req->offset = 0;
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
- if (rule_dbi_len < rsp->len) {
- otx2_err("Rule dbi size is too small");
- return -EFAULT;
- }
- otx2_mbox_memcpy(rule_dbi, rsp->rule_db, rsp->len);
- }
- return 0;
-}
-
-int
-otx2_ree_rule_db_len_get(const struct rte_regexdev *dev,
- uint32_t *rule_db_len,
- uint32_t *rule_dbi_len)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- struct ree_rule_db_len_rsp_msg *rsp;
- struct otx2_ree_vf *vf = &data->vf;
- struct ree_req_msg *req;
- struct otx2_mbox *mbox;
- int ret;
-
- mbox = vf->otx2_dev.mbox;
- req = (struct ree_req_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req), sizeof(*rsp));
- if (!req) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- req->hdr.id = MBOX_MSG_REE_RULE_DB_LEN_GET;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = vf->otx2_dev.pf_func;
- req->blkaddr = vf->block_address;
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
- if (rule_db_len != NULL)
- *rule_db_len = rsp->len;
- if (rule_dbi_len != NULL)
- *rule_dbi_len = rsp->inc_len;
-
- return 0;
-}
-
-static int
-ree_db_msg(const struct rte_regexdev *dev, const char *db, uint32_t db_len,
- int inc, int dbi)
-{
- struct otx2_ree_data *data = dev->data->dev_private;
- uint32_t len_left = db_len, offset = 0;
- struct ree_rule_db_prog_req_msg *req;
- struct otx2_ree_vf *vf = &data->vf;
- const char *rule_db_ptr = db;
- struct otx2_mbox *mbox;
- struct msg_rsp *rsp;
- int ret;
-
- mbox = vf->otx2_dev.mbox;
- while (len_left) {
- req = (struct ree_rule_db_prog_req_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*req),
- sizeof(*rsp));
- if (!req) {
- otx2_err("Could not allocate mailbox message");
- return -EFAULT;
- }
- req->hdr.id = MBOX_MSG_REE_RULE_DB_PROG;
- req->hdr.sig = OTX2_MBOX_REQ_SIG;
- req->hdr.pcifunc = vf->otx2_dev.pf_func;
- req->offset = offset;
- req->total_len = db_len;
- req->len = REE_RULE_DB_REQ_BLOCK_SIZE;
- req->is_incremental = inc;
- req->is_dbi = dbi;
- req->blkaddr = vf->block_address;
-
- if (len_left < REE_RULE_DB_REQ_BLOCK_SIZE) {
- req->is_last = true;
- req->len = len_left;
- }
- otx2_mbox_memcpy(req->rule_db, rule_db_ptr, req->len);
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret) {
- otx2_err("Programming mailbox processing failed");
- return ret;
- }
- len_left -= req->len;
- offset += req->len;
- rule_db_ptr = rule_db_ptr + req->len;
- }
- return 0;
-}
-
-int
-otx2_ree_rule_db_prog(const struct rte_regexdev *dev, const char *rule_db,
- uint32_t rule_db_len, const char *rule_dbi,
- uint32_t rule_dbi_len)
-{
- int inc, ret;
-
- if (rule_db_len == 0) {
- otx2_err("Couldn't program empty rule db");
- return -EFAULT;
- }
- inc = (rule_dbi_len != 0);
- if ((rule_db == NULL) || (inc && (rule_dbi == NULL))) {
- otx2_err("Couldn't program NULL rule db");
- return -EFAULT;
- }
- if (inc) {
- ret = ree_db_msg(dev, rule_dbi, rule_dbi_len, inc, 1);
- if (ret)
- return ret;
- }
- return ree_db_msg(dev, rule_db, rule_db_len, inc, 0);
-}
diff --git a/drivers/regex/octeontx2/otx2_regexdev_mbox.h b/drivers/regex/octeontx2/otx2_regexdev_mbox.h
deleted file mode 100644
index 953efa6724..0000000000
--- a/drivers/regex/octeontx2/otx2_regexdev_mbox.h
+++ /dev/null
@@ -1,38 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_REGEXDEV_MBOX_H_
-#define _OTX2_REGEXDEV_MBOX_H_
-
-#include <rte_regexdev.h>
-
-int otx2_ree_available_queues_get(const struct rte_regexdev *dev,
- uint16_t *nb_queues);
-
-int otx2_ree_queues_attach(const struct rte_regexdev *dev, uint8_t nb_queues);
-
-int otx2_ree_queues_detach(const struct rte_regexdev *dev);
-
-int otx2_ree_msix_offsets_get(const struct rte_regexdev *dev);
-
-int otx2_ree_config_lf(const struct rte_regexdev *dev, uint8_t lf, uint8_t pri,
- uint32_t size);
-
-int otx2_ree_af_reg_read(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t *val);
-
-int otx2_ree_af_reg_write(const struct rte_regexdev *dev, uint64_t reg,
- uint64_t val);
-
-int otx2_ree_rule_db_get(const struct rte_regexdev *dev, char *rule_db,
- uint32_t rule_db_len, char *rule_dbi, uint32_t rule_dbi_len);
-
-int otx2_ree_rule_db_len_get(const struct rte_regexdev *dev,
- uint32_t *rule_db_len, uint32_t *rule_dbi_len);
-
-int otx2_ree_rule_db_prog(const struct rte_regexdev *dev, const char *rule_db,
- uint32_t rule_db_len, const char *rule_dbi,
- uint32_t rule_dbi_len);
-
-#endif /* _OTX2_REGEXDEV_MBOX_H_ */
--
2.34.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers
2021-12-08 9:14 3% ` Jerin Jacob
@ 2021-12-11 9:04 2% ` jerinj
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 4/5] regex/cn9k: use cnxk infrastructure jerinj
2021-12-11 9:04 1% ` [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers jerinj
1 sibling, 2 replies; 200+ results
From: jerinj @ 2021-12-11 9:04 UTC (permalink / raw)
To: dev; +Cc: thomas, david.marchand, ferruh.yigit, Jerin Jacob
From: Jerin Jacob <jerinj@marvell.com>
This patch series implements the following deprecation notice
-------------------------------------------------------------
In view of enabling a unified driver for octeontx2(cn9k)/
octeontx3(cn10k), remove the drivers/octeontx2 drivers and
replace them with drivers/cnxk/, which supports both
octeontx2(cn9k) and octeontx3(cn10k) SoCs.
The deprecation notice calls for the following actions in the DPDK v22.02 release.
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename drivers/regex/octeontx2/ as drivers/regex/cn9k/
- Rename config/arm/arm64_octeontx2_linux_gcc as
config/arm/arm64_cn9k_linux_gcc
The last two actions align the naming convention with the cnxk scheme (see the build example below).
-----------------------------------------------------------------
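For example, cross builds simply pick up the renamed cross file; an
illustrative invocation (the file name is from this series, the rest is a
stock meson cross build rather than a command taken from this mail):
    meson setup build --cross-file config/arm/arm64_cn9k_linux_gcc
    ninja -C build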
v5:
- Fixed issues related to devtools/check-abi.sh
- Include http://patches.dpdk.org/project/dpdk/patch/20211206083542.3115019-1-jerinj@marvell.com/
patches in this series
- Removed the changes touching old release notes in
http://patches.dpdk.org/project/dpdk/patch/20211206083542.3115019-1-jerinj@marvell.com/
v4:
- squashed the 4th patch
v3:
- fix documentation issues
v2:
- fix review comments.
- split original patch.
- add the driver patch.
Jerin Jacob (1):
drivers: remove octeontx2 drivers
Liron Himi (4):
common/cnxk: add REE HW definitions
common/cnxk: add REE mbox definitions
common/cnxk: add REE support
regex/cn9k: use cnxk infrastructure
MAINTAINERS | 45 +-
app/test/meson.build | 1 -
app/test/test_cryptodev.c | 7 -
app/test/test_cryptodev.h | 1 -
app/test/test_cryptodev_asym.c | 17 -
app/test/test_eventdev.c | 8 -
config/arm/arm64_cn10k_linux_gcc | 1 -
...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
config/arm/meson.build | 10 +-
devtools/check-abi.sh | 4 +
doc/guides/cryptodevs/features/octeontx2.ini | 87 -
doc/guides/cryptodevs/index.rst | 1 -
doc/guides/cryptodevs/octeontx2.rst | 188 -
doc/guides/dmadevs/cnxk.rst | 2 +-
doc/guides/eventdevs/features/octeontx2.ini | 30 -
doc/guides/eventdevs/index.rst | 1 -
doc/guides/eventdevs/octeontx2.rst | 178 -
doc/guides/mempool/index.rst | 1 -
doc/guides/mempool/octeontx2.rst | 92 -
doc/guides/nics/cnxk.rst | 4 +-
doc/guides/nics/features/octeontx2.ini | 97 -
doc/guides/nics/features/octeontx2_vec.ini | 48 -
doc/guides/nics/features/octeontx2_vf.ini | 45 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/octeontx2.rst | 465 ---
doc/guides/nics/octeontx_ep.rst | 4 +-
doc/guides/platform/cnxk.rst | 15 +
.../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
.../img/octeontx2_resource_virtualization.svg | 2418 ------------
doc/guides/platform/index.rst | 1 -
doc/guides/platform/octeontx2.rst | 523 ---
.../regexdevs/{octeontx2.rst => cn9k.rst} | 20 +-
.../features/{octeontx2.ini => cn9k.ini} | 2 +-
doc/guides/regexdevs/index.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 17 -
doc/guides/rel_notes/release_19_08.rst | 8 +-
doc/guides/rel_notes/release_19_11.rst | 2 +-
doc/guides/rel_notes/release_20_11.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 1 -
drivers/common/cnxk/hw/ree.h | 126 +
drivers/common/cnxk/hw/rvu.h | 5 +
drivers/common/cnxk/meson.build | 1 +
drivers/common/cnxk/roc_api.h | 4 +
drivers/common/cnxk/roc_constants.h | 2 +
drivers/common/cnxk/roc_mbox.h | 100 +
drivers/common/cnxk/roc_platform.c | 1 +
drivers/common/cnxk/roc_platform.h | 2 +
drivers/common/cnxk/roc_priv.h | 3 +
drivers/common/cnxk/roc_ree.c | 647 ++++
drivers/common/cnxk/roc_ree.h | 137 +
drivers/common/cnxk/roc_ree_priv.h | 18 +
drivers/common/cnxk/version.map | 18 +-
drivers/common/meson.build | 1 -
drivers/common/octeontx2/hw/otx2_nix.h | 1391 -------
drivers/common/octeontx2/hw/otx2_npa.h | 305 --
drivers/common/octeontx2/hw/otx2_npc.h | 503 ---
drivers/common/octeontx2/hw/otx2_ree.h | 27 -
drivers/common/octeontx2/hw/otx2_rvu.h | 219 --
drivers/common/octeontx2/hw/otx2_sdp.h | 184 -
drivers/common/octeontx2/hw/otx2_sso.h | 209 --
drivers/common/octeontx2/hw/otx2_ssow.h | 56 -
drivers/common/octeontx2/hw/otx2_tim.h | 34 -
drivers/common/octeontx2/meson.build | 24 -
drivers/common/octeontx2/otx2_common.c | 216 --
drivers/common/octeontx2/otx2_common.h | 179 -
drivers/common/octeontx2/otx2_dev.c | 1074 ------
drivers/common/octeontx2/otx2_dev.h | 161 -
drivers/common/octeontx2/otx2_io_arm64.h | 114 -
drivers/common/octeontx2/otx2_io_generic.h | 75 -
drivers/common/octeontx2/otx2_irq.c | 288 --
drivers/common/octeontx2/otx2_irq.h | 28 -
drivers/common/octeontx2/otx2_mbox.c | 465 ---
drivers/common/octeontx2/otx2_mbox.h | 1958 ----------
drivers/common/octeontx2/otx2_sec_idev.c | 183 -
drivers/common/octeontx2/otx2_sec_idev.h | 43 -
drivers/common/octeontx2/version.map | 44 -
drivers/crypto/meson.build | 1 -
drivers/crypto/octeontx2/meson.build | 30 -
drivers/crypto/octeontx2/otx2_cryptodev.c | 188 -
drivers/crypto/octeontx2/otx2_cryptodev.h | 63 -
.../octeontx2/otx2_cryptodev_capabilities.c | 924 -----
.../octeontx2/otx2_cryptodev_capabilities.h | 45 -
.../octeontx2/otx2_cryptodev_hw_access.c | 225 --
.../octeontx2/otx2_cryptodev_hw_access.h | 161 -
.../crypto/octeontx2/otx2_cryptodev_mbox.c | 285 --
.../crypto/octeontx2/otx2_cryptodev_mbox.h | 37 -
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 1438 -------
drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 15 -
.../octeontx2/otx2_cryptodev_ops_helper.h | 82 -
drivers/crypto/octeontx2/otx2_cryptodev_qp.h | 46 -
drivers/crypto/octeontx2/otx2_cryptodev_sec.c | 655 ----
drivers/crypto/octeontx2/otx2_cryptodev_sec.h | 64 -
.../crypto/octeontx2/otx2_ipsec_anti_replay.h | 227 --
drivers/crypto/octeontx2/otx2_ipsec_fp.h | 371 --
drivers/crypto/octeontx2/otx2_ipsec_po.h | 447 ---
drivers/crypto/octeontx2/otx2_ipsec_po_ops.h | 167 -
drivers/crypto/octeontx2/otx2_security.h | 37 -
drivers/crypto/octeontx2/version.map | 13 -
drivers/event/cnxk/cn9k_eventdev.c | 10 +
drivers/event/meson.build | 1 -
drivers/event/octeontx2/meson.build | 26 -
drivers/event/octeontx2/otx2_evdev.c | 1900 ----------
drivers/event/octeontx2/otx2_evdev.h | 430 ---
drivers/event/octeontx2/otx2_evdev_adptr.c | 656 ----
.../event/octeontx2/otx2_evdev_crypto_adptr.c | 132 -
.../octeontx2/otx2_evdev_crypto_adptr_rx.h | 77 -
.../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 -
drivers/event/octeontx2/otx2_evdev_irq.c | 272 --
drivers/event/octeontx2/otx2_evdev_selftest.c | 1517 --------
drivers/event/octeontx2/otx2_evdev_stats.h | 286 --
drivers/event/octeontx2/otx2_tim_evdev.c | 735 ----
drivers/event/octeontx2/otx2_tim_evdev.h | 256 --
drivers/event/octeontx2/otx2_tim_worker.c | 192 -
drivers/event/octeontx2/otx2_tim_worker.h | 598 ---
drivers/event/octeontx2/otx2_worker.c | 372 --
drivers/event/octeontx2/otx2_worker.h | 339 --
drivers/event/octeontx2/otx2_worker_dual.c | 345 --
drivers/event/octeontx2/otx2_worker_dual.h | 110 -
drivers/mempool/cnxk/cnxk_mempool.c | 56 +-
drivers/mempool/meson.build | 1 -
drivers/mempool/octeontx2/meson.build | 18 -
drivers/mempool/octeontx2/otx2_mempool.c | 457 ---
drivers/mempool/octeontx2/otx2_mempool.h | 221 --
.../mempool/octeontx2/otx2_mempool_debug.c | 135 -
drivers/mempool/octeontx2/otx2_mempool_irq.c | 303 --
drivers/mempool/octeontx2/otx2_mempool_ops.c | 901 -----
drivers/mempool/octeontx2/version.map | 8 -
drivers/net/cnxk/cn9k_ethdev.c | 15 +
drivers/net/meson.build | 1 -
drivers/net/octeontx2/meson.build | 47 -
drivers/net/octeontx2/otx2_ethdev.c | 2814 --------------
drivers/net/octeontx2/otx2_ethdev.h | 619 ---
drivers/net/octeontx2/otx2_ethdev_debug.c | 811 ----
drivers/net/octeontx2/otx2_ethdev_devargs.c | 215 --
drivers/net/octeontx2/otx2_ethdev_irq.c | 493 ---
drivers/net/octeontx2/otx2_ethdev_ops.c | 589 ---
drivers/net/octeontx2/otx2_ethdev_sec.c | 923 -----
drivers/net/octeontx2/otx2_ethdev_sec.h | 130 -
drivers/net/octeontx2/otx2_ethdev_sec_tx.h | 182 -
drivers/net/octeontx2/otx2_flow.c | 1189 ------
drivers/net/octeontx2/otx2_flow.h | 414 --
drivers/net/octeontx2/otx2_flow_ctrl.c | 252 --
drivers/net/octeontx2/otx2_flow_dump.c | 595 ---
drivers/net/octeontx2/otx2_flow_parse.c | 1239 ------
drivers/net/octeontx2/otx2_flow_utils.c | 969 -----
drivers/net/octeontx2/otx2_link.c | 287 --
drivers/net/octeontx2/otx2_lookup.c | 352 --
drivers/net/octeontx2/otx2_mac.c | 151 -
drivers/net/octeontx2/otx2_mcast.c | 339 --
drivers/net/octeontx2/otx2_ptp.c | 450 ---
drivers/net/octeontx2/otx2_rss.c | 427 ---
drivers/net/octeontx2/otx2_rx.c | 430 ---
drivers/net/octeontx2/otx2_rx.h | 583 ---
drivers/net/octeontx2/otx2_stats.c | 397 --
drivers/net/octeontx2/otx2_tm.c | 3317 -----------------
drivers/net/octeontx2/otx2_tm.h | 176 -
drivers/net/octeontx2/otx2_tx.c | 1077 ------
drivers/net/octeontx2/otx2_tx.h | 791 ----
drivers/net/octeontx2/otx2_vlan.c | 1035 -----
drivers/net/octeontx2/version.map | 3 -
drivers/net/octeontx_ep/otx2_ep_vf.h | 2 +-
drivers/net/octeontx_ep/otx_ep_common.h | 16 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 10 +-
.../otx2_regexdev.c => cn9k/cn9k_regexdev.c} | 405 +-
drivers/regex/cn9k/cn9k_regexdev.h | 44 +
.../cn9k_regexdev_compiler.c} | 34 +-
drivers/regex/cn9k/cn9k_regexdev_compiler.h | 11 +
drivers/regex/{octeontx2 => cn9k}/meson.build | 10 +-
.../octeontx2 => regex/cn9k}/version.map | 0
drivers/regex/meson.build | 2 +-
drivers/regex/octeontx2/otx2_regexdev.h | 109 -
.../regex/octeontx2/otx2_regexdev_compiler.h | 11 -
.../regex/octeontx2/otx2_regexdev_hw_access.c | 167 -
.../regex/octeontx2/otx2_regexdev_hw_access.h | 202 -
drivers/regex/octeontx2/otx2_regexdev_mbox.c | 401 --
drivers/regex/octeontx2/otx2_regexdev_mbox.h | 38 -
drivers/regex/octeontx2/version.map | 3 -
usertools/dpdk-devbind.py | 12 +-
179 files changed, 1427 insertions(+), 53329 deletions(-)
rename config/arm/{arm64_octeontx2_linux_gcc => arm64_cn9k_linux_gcc} (84%)
delete mode 100644 doc/guides/cryptodevs/features/octeontx2.ini
delete mode 100644 doc/guides/cryptodevs/octeontx2.rst
delete mode 100644 doc/guides/eventdevs/features/octeontx2.ini
delete mode 100644 doc/guides/eventdevs/octeontx2.rst
delete mode 100644 doc/guides/mempool/octeontx2.rst
delete mode 100644 doc/guides/nics/features/octeontx2.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vec.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vf.ini
delete mode 100644 doc/guides/nics/octeontx2.rst
delete mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
delete mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg
delete mode 100644 doc/guides/platform/octeontx2.rst
rename doc/guides/regexdevs/{octeontx2.rst => cn9k.rst} (69%)
rename doc/guides/regexdevs/features/{octeontx2.ini => cn9k.ini} (80%)
create mode 100644 drivers/common/cnxk/hw/ree.h
create mode 100644 drivers/common/cnxk/roc_ree.c
create mode 100644 drivers/common/cnxk/roc_ree.h
create mode 100644 drivers/common/cnxk/roc_ree_priv.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_nix.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npa.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npc.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ree.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sdp.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sso.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_tim.h
delete mode 100644 drivers/common/octeontx2/meson.build
delete mode 100644 drivers/common/octeontx2/otx2_common.c
delete mode 100644 drivers/common/octeontx2/otx2_common.h
delete mode 100644 drivers/common/octeontx2/otx2_dev.c
delete mode 100644 drivers/common/octeontx2/otx2_dev.h
delete mode 100644 drivers/common/octeontx2/otx2_io_arm64.h
delete mode 100644 drivers/common/octeontx2/otx2_io_generic.h
delete mode 100644 drivers/common/octeontx2/otx2_irq.c
delete mode 100644 drivers/common/octeontx2/otx2_irq.h
delete mode 100644 drivers/common/octeontx2/otx2_mbox.c
delete mode 100644 drivers/common/octeontx2/otx2_mbox.h
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.c
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.h
delete mode 100644 drivers/common/octeontx2/version.map
delete mode 100644 drivers/crypto/octeontx2/meson.build
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_qp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_fp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_security.h
delete mode 100644 drivers/crypto/octeontx2/version.map
delete mode 100644 drivers/event/octeontx2/meson.build
delete mode 100644 drivers/event/octeontx2/otx2_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.c
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.h
delete mode 100644 drivers/mempool/octeontx2/meson.build
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.h
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c
delete mode 100644 drivers/mempool/octeontx2/version.map
delete mode 100644 drivers/net/octeontx2/meson.build
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_flow.c
delete mode 100644 drivers/net/octeontx2/otx2_flow.h
delete mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_dump.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
delete mode 100644 drivers/net/octeontx2/otx2_link.c
delete mode 100644 drivers/net/octeontx2/otx2_lookup.c
delete mode 100644 drivers/net/octeontx2/otx2_mac.c
delete mode 100644 drivers/net/octeontx2/otx2_mcast.c
delete mode 100644 drivers/net/octeontx2/otx2_ptp.c
delete mode 100644 drivers/net/octeontx2/otx2_rss.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.h
delete mode 100644 drivers/net/octeontx2/otx2_stats.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.h
delete mode 100644 drivers/net/octeontx2/otx2_tx.c
delete mode 100644 drivers/net/octeontx2/otx2_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_vlan.c
delete mode 100644 drivers/net/octeontx2/version.map
rename drivers/regex/{octeontx2/otx2_regexdev.c => cn9k/cn9k_regexdev.c} (61%)
create mode 100644 drivers/regex/cn9k/cn9k_regexdev.h
rename drivers/regex/{octeontx2/otx2_regexdev_compiler.c => cn9k/cn9k_regexdev_compiler.c} (86%)
create mode 100644 drivers/regex/cn9k/cn9k_regexdev_compiler.h
rename drivers/regex/{octeontx2 => cn9k}/meson.build (65%)
rename drivers/{event/octeontx2 => regex/cn9k}/version.map (100%)
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_compiler.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.c
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.h
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.c
delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.h
delete mode 100644 drivers/regex/octeontx2/version.map
--
2.34.1
^ permalink raw reply [relevance 2%]
* Re: [PATCH v4 0/4] regex/cn9k: use cnxk infrastructure
@ 2021-12-08 9:14 3% ` Jerin Jacob
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers jerinj
1 sibling, 0 replies; 200+ results
From: Jerin Jacob @ 2021-12-08 9:14 UTC (permalink / raw)
To: Liron Himi; +Cc: Jerin Jacob, dpdk-dev
On Wed, Dec 8, 2021 at 12:02 AM <lironh@marvell.com> wrote:
>
> From: Liron Himi <lironh@marvell.com>
>
> 3 patches add support for REE into the cnxk infrastructure.
> The last patch changes the octeontx2 driver to use
> the new cnxk code. In addition, all references to
> octeontx2/otx2 were replaced with cn9k.
Series Acked-by: Jerin Jacob <jerinj@marvell.com>
There is still an issue with check-abi.sh [1]
[1]
http://mails.dpdk.org/archives/test-report/2021-December/247701.html
I will send a v5 with this fix and the octeontx2 drivers removal patches as one series.
>
> v4:
> - squashed the 4th patch
>
> v3:
> - fix documentation issues
>
> v2:
> - fix review comments.
> - split original patch.
> - add the driver patch.
>
> Liron Himi (4):
> common/cnxk: add REE HW definitions
> common/cnxk: add REE mbox definitions
> common/cnxk: add REE support
> regex/cn9k: use cnxk infrastructure
>
> MAINTAINERS | 8 +-
> doc/guides/platform/cnxk.rst | 3 +
> doc/guides/platform/octeontx2.rst | 3 -
> .../regexdevs/{octeontx2.rst => cn9k.rst} | 20 +-
> .../features/{octeontx2.ini => cn9k.ini} | 2 +-
> doc/guides/regexdevs/index.rst | 2 +-
> doc/guides/rel_notes/release_20_11.rst | 2 +-
> drivers/common/cnxk/hw/ree.h | 126 ++++
> drivers/common/cnxk/hw/rvu.h | 5 +
> drivers/common/cnxk/meson.build | 1 +
> drivers/common/cnxk/roc_api.h | 4 +
> drivers/common/cnxk/roc_constants.h | 2 +
> drivers/common/cnxk/roc_mbox.h | 100 +++
> drivers/common/cnxk/roc_platform.c | 1 +
> drivers/common/cnxk/roc_platform.h | 2 +
> drivers/common/cnxk/roc_priv.h | 3 +
> drivers/common/cnxk/roc_ree.c | 647 ++++++++++++++++++
> drivers/common/cnxk/roc_ree.h | 137 ++++
> drivers/common/cnxk/roc_ree_priv.h | 18 +
> drivers/common/cnxk/version.map | 18 +-
> .../otx2_regexdev.c => cn9k/cn9k_regexdev.c} | 405 +++++------
> drivers/regex/cn9k/cn9k_regexdev.h | 44 ++
> .../cn9k_regexdev_compiler.c} | 34 +-
> drivers/regex/cn9k/cn9k_regexdev_compiler.h | 11 +
> drivers/regex/{octeontx2 => cn9k}/meson.build | 10 +-
> drivers/regex/{octeontx2 => cn9k}/version.map | 0
> drivers/regex/meson.build | 2 +-
> drivers/regex/octeontx2/otx2_regexdev.h | 109 ---
> .../regex/octeontx2/otx2_regexdev_compiler.h | 11 -
> .../regex/octeontx2/otx2_regexdev_hw_access.c | 167 -----
> .../regex/octeontx2/otx2_regexdev_hw_access.h | 202 ------
> drivers/regex/octeontx2/otx2_regexdev_mbox.c | 401 -----------
> drivers/regex/octeontx2/otx2_regexdev_mbox.h | 38 -
> 33 files changed, 1332 insertions(+), 1206 deletions(-)
> rename doc/guides/regexdevs/{octeontx2.rst => cn9k.rst} (69%)
> rename doc/guides/regexdevs/features/{octeontx2.ini => cn9k.ini} (80%)
> create mode 100644 drivers/common/cnxk/hw/ree.h
> create mode 100644 drivers/common/cnxk/roc_ree.c
> create mode 100644 drivers/common/cnxk/roc_ree.h
> create mode 100644 drivers/common/cnxk/roc_ree_priv.h
> rename drivers/regex/{octeontx2/otx2_regexdev.c => cn9k/cn9k_regexdev.c} (61%)
> create mode 100644 drivers/regex/cn9k/cn9k_regexdev.h
> rename drivers/regex/{octeontx2/otx2_regexdev_compiler.c => cn9k/cn9k_regexdev_compiler.c} (86%)
> create mode 100644 drivers/regex/cn9k/cn9k_regexdev_compiler.h
> rename drivers/regex/{octeontx2 => cn9k}/meson.build (65%)
> rename drivers/regex/{octeontx2 => cn9k}/version.map (100%)
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev.h
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_compiler.h
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.c
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_hw_access.h
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.c
> delete mode 100644 drivers/regex/octeontx2/otx2_regexdev_mbox.h
>
> --
> 2.28.0
>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
2021-12-07 11:01 0% ` Ferruh Yigit
@ 2021-12-07 11:51 0% ` Kevin Traynor
0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2021-12-07 11:51 UTC (permalink / raw)
To: Ferruh Yigit, Thomas Monjalon, John McNamara, David Marchand
Cc: Jerin Jacob, dpdk-dev, Akhil Goyal, Declan Doherty, Ruifeng Wang,
Jan Viktorin, Bruce Richardson, Ray Kinsella,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov, Satananda Burla, Liron Himi,
Jerin Jacob
On 07/12/2021 11:01, Ferruh Yigit wrote:
> On 12/7/2021 7:39 AM, Jerin Jacob wrote:
>> On Mon, Dec 6, 2021 at 7:05 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>>
>>> On 12/6/2021 8:35 AM, jerinj@marvell.com wrote:
>>>> From: Jerin Jacob<jerinj@marvell.com>
>>>>
>>>> As per the deprecation notice, in view of enabling a unified driver
>>>> for octeontx2(cn9k)/octeontx3(cn10k), remove the drivers/octeontx2
>>>> drivers and replace them with drivers/cnxk/, which
>>>> supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
>>>>
>>>> This patch does the following
>>>>
>>>> - Replace drivers/common/octeontx2/ with drivers/common/cnxk/
>>>> - Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
>>>> - Replace drivers/net/octeontx2/ with drivers/net/cnxk/
>>>> - Replace drivers/event/octeontx2/ with drivers/event/cnxk/
>>>> - Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
>>>> - Rename config/arm/arm64_octeontx2_linux_gcc as
>>>> config/arm/arm64_cn9k_linux_gcc
>>>> - Update the documentation and MAINTAINERS to reflect the same.
>>>> - Change the references to OCTEONTX2 to OCTEON 9. The kernel-related
>>>> documentation is not covered by this change, as the kernel documentation
>>>> still uses OCTEONTX2.
>>>>
>>>> Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
>>>> Signed-off-by: Jerin Jacob<jerinj@marvell.com>
>>>> ---
>>>> MAINTAINERS | 37 -
>>>> app/test/meson.build | 1 -
>>>> app/test/test_cryptodev.c | 7 -
>>>> app/test/test_cryptodev.h | 1 -
>>>> app/test/test_cryptodev_asym.c | 17 -
>>>> app/test/test_eventdev.c | 8 -
>>>> config/arm/arm64_cn10k_linux_gcc | 1 -
>>>> ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
>>>> config/arm/meson.build | 10 +-
>>>> devtools/check-abi.sh | 4 +
>>>> doc/guides/cryptodevs/features/octeontx2.ini | 87 -
>>>> doc/guides/cryptodevs/index.rst | 1 -
>>>> doc/guides/cryptodevs/octeontx2.rst | 188 -
>>>> doc/guides/dmadevs/cnxk.rst | 2 +-
>>>> doc/guides/eventdevs/features/octeontx2.ini | 30 -
>>>> doc/guides/eventdevs/index.rst | 1 -
>>>> doc/guides/eventdevs/octeontx2.rst | 178 -
>>>> doc/guides/mempool/index.rst | 1 -
>>>> doc/guides/mempool/octeontx2.rst | 92 -
>>>> doc/guides/nics/cnxk.rst | 4 +-
>>>> doc/guides/nics/features/octeontx2.ini | 97 -
>>>> doc/guides/nics/features/octeontx2_vec.ini | 48 -
>>>> doc/guides/nics/features/octeontx2_vf.ini | 45 -
>>>> doc/guides/nics/index.rst | 1 -
>>>> doc/guides/nics/octeontx2.rst | 465 ---
>>>> doc/guides/nics/octeontx_ep.rst | 4 +-
>>>> doc/guides/platform/cnxk.rst | 12 +
>>>> .../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
>>>> .../img/octeontx2_resource_virtualization.svg | 2418 ------------
>>>> doc/guides/platform/index.rst | 1 -
>>>> doc/guides/platform/octeontx2.rst | 520 ---
>>>> doc/guides/rel_notes/deprecation.rst | 17 -
>>>> doc/guides/rel_notes/release_19_08.rst | 12 +-
>>>> doc/guides/rel_notes/release_19_11.rst | 6 +-
>>>> doc/guides/rel_notes/release_20_02.rst | 8 +-
>>>> doc/guides/rel_notes/release_20_05.rst | 4 +-
>>>> doc/guides/rel_notes/release_20_08.rst | 6 +-
>>>> doc/guides/rel_notes/release_20_11.rst | 8 +-
>>>> doc/guides/rel_notes/release_21_02.rst | 10 +-
>>>> doc/guides/rel_notes/release_21_05.rst | 6 +-
>>>> doc/guides/rel_notes/release_21_11.rst | 2 +-
>>>
>>> Not sure about updating the old release notes files; using 'octeontx2' can
>>> still make sense in the context of those releases.
>>
>> OK. I will send a v2 keeping octeontx2 in the old release notes.
>>
>>
>
> Not related with this set specifically, a more general question about updating
> old release notes.
> For me release notes should be frozen with the release and shouldn't be updated
> at all afterwards, but there is no agreement on this and in practice old release
> notes are updated.
>
> My question is, is there any benefit to keep a separate release notes file for
> each release, and need to maintain old ones.
> What about having a single release file, 'release.rst', and reset it after each
> release?
>
I think there is a benefit to keeping them all - you can quickly look/grep
through the files across multiple releases, e.g. to check when a
driver/feature was added (see the command below). I agree it doesn't make
sense to update them, unless there was a mistake at the time of release.
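For instance, a quick scan of that kind (an illustrative command, not one
from the original mail) could be:
    $ git grep -il octeontx2 -- doc/guides/rel_notes/
which lists every release note that mentions the driver.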
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
2021-12-07 7:39 3% ` Jerin Jacob
@ 2021-12-07 11:01 0% ` Ferruh Yigit
2021-12-07 11:51 0% ` Kevin Traynor
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-12-07 11:01 UTC (permalink / raw)
To: Thomas Monjalon, John McNamara, David Marchand
Cc: Jerin Jacob, dpdk-dev, Akhil Goyal, Declan Doherty, Ruifeng Wang,
Jan Viktorin, Bruce Richardson, Ray Kinsella,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov, Satananda Burla, Liron Himi,
Jerin Jacob
On 12/7/2021 7:39 AM, Jerin Jacob wrote:
> On Mon, Dec 6, 2021 at 7:05 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>
>> On 12/6/2021 8:35 AM, jerinj@marvell.com wrote:
>>> From: Jerin Jacob<jerinj@marvell.com>
>>>
>>> As per the deprecation notice, in view of enabling a unified driver
>>> for octeontx2(cn9k)/octeontx3(cn10k), remove the drivers/octeontx2
>>> drivers and replace them with drivers/cnxk/, which
>>> supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
>>>
>>> This patch does the following
>>>
>>> - Replace drivers/common/octeontx2/ with drivers/common/cnxk/
>>> - Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
>>> - Replace drivers/net/octeontx2/ with drivers/net/cnxk/
>>> - Replace drivers/event/octeontx2/ with drivers/event/cnxk/
>>> - Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
>>> - Rename config/arm/arm64_octeontx2_linux_gcc as
>>> config/arm/arm64_cn9k_linux_gcc
>>> - Update the documentation and MAINTAINERS to reflect the same.
>>> - Change the references to OCTEONTX2 to OCTEON 9. The kernel-related
>>> documentation is not covered by this change, as the kernel documentation
>>> still uses OCTEONTX2.
>>>
>>> Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
>>> Signed-off-by: Jerin Jacob<jerinj@marvell.com>
>>> ---
>>> MAINTAINERS | 37 -
>>> app/test/meson.build | 1 -
>>> app/test/test_cryptodev.c | 7 -
>>> app/test/test_cryptodev.h | 1 -
>>> app/test/test_cryptodev_asym.c | 17 -
>>> app/test/test_eventdev.c | 8 -
>>> config/arm/arm64_cn10k_linux_gcc | 1 -
>>> ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
>>> config/arm/meson.build | 10 +-
>>> devtools/check-abi.sh | 4 +
>>> doc/guides/cryptodevs/features/octeontx2.ini | 87 -
>>> doc/guides/cryptodevs/index.rst | 1 -
>>> doc/guides/cryptodevs/octeontx2.rst | 188 -
>>> doc/guides/dmadevs/cnxk.rst | 2 +-
>>> doc/guides/eventdevs/features/octeontx2.ini | 30 -
>>> doc/guides/eventdevs/index.rst | 1 -
>>> doc/guides/eventdevs/octeontx2.rst | 178 -
>>> doc/guides/mempool/index.rst | 1 -
>>> doc/guides/mempool/octeontx2.rst | 92 -
>>> doc/guides/nics/cnxk.rst | 4 +-
>>> doc/guides/nics/features/octeontx2.ini | 97 -
>>> doc/guides/nics/features/octeontx2_vec.ini | 48 -
>>> doc/guides/nics/features/octeontx2_vf.ini | 45 -
>>> doc/guides/nics/index.rst | 1 -
>>> doc/guides/nics/octeontx2.rst | 465 ---
>>> doc/guides/nics/octeontx_ep.rst | 4 +-
>>> doc/guides/platform/cnxk.rst | 12 +
>>> .../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
>>> .../img/octeontx2_resource_virtualization.svg | 2418 ------------
>>> doc/guides/platform/index.rst | 1 -
>>> doc/guides/platform/octeontx2.rst | 520 ---
>>> doc/guides/rel_notes/deprecation.rst | 17 -
>>> doc/guides/rel_notes/release_19_08.rst | 12 +-
>>> doc/guides/rel_notes/release_19_11.rst | 6 +-
>>> doc/guides/rel_notes/release_20_02.rst | 8 +-
>>> doc/guides/rel_notes/release_20_05.rst | 4 +-
>>> doc/guides/rel_notes/release_20_08.rst | 6 +-
>>> doc/guides/rel_notes/release_20_11.rst | 8 +-
>>> doc/guides/rel_notes/release_21_02.rst | 10 +-
>>> doc/guides/rel_notes/release_21_05.rst | 6 +-
>>> doc/guides/rel_notes/release_21_11.rst | 2 +-
>>
>> Not sure about updating the old release notes files; using 'octeontx2' can
>> still make sense in the context of those releases.
>
> OK. I will send a v2 keeping octeontx2 in the old release notes.
>
>
Not related to this set specifically, but a more general question about
updating old release notes.
For me, release notes should be frozen with the release and shouldn't be
updated at all afterwards, but there is no agreement on this and in practice
old release notes are updated.
My question is: is there any benefit to keeping a separate release notes file
for each release, and to maintaining the old ones?
What about having a single release file, 'release.rst', and resetting it
after each release?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
2021-12-06 13:35 3% ` Ferruh Yigit
@ 2021-12-07 7:39 3% ` Jerin Jacob
2021-12-07 11:01 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-12-07 7:39 UTC (permalink / raw)
To: Ferruh Yigit
Cc: Jerin Jacob, dpdk-dev, Thomas Monjalon, Akhil Goyal,
Declan Doherty, Ruifeng Wang, Jan Viktorin, Bruce Richardson,
Ray Kinsella, Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov, Satananda Burla, Liron Himi
On Mon, Dec 6, 2021 at 7:05 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 12/6/2021 8:35 AM, jerinj@marvell.com wrote:
> > From: Jerin Jacob<jerinj@marvell.com>
> >
> > As per the deprecation notice, in view of enabling a unified driver
> > for octeontx2(cn9k)/octeontx3(cn10k), remove the drivers/octeontx2
> > drivers and replace them with drivers/cnxk/, which
> > supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
> >
> > This patch does the following
> >
> > - Replace drivers/common/octeontx2/ with drivers/common/cnxk/
> > - Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
> > - Replace drivers/net/octeontx2/ with drivers/net/cnxk/
> > - Replace drivers/event/octeontx2/ with drivers/event/cnxk/
> > - Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
> > - Rename config/arm/arm64_octeontx2_linux_gcc as
> > config/arm/arm64_cn9k_linux_gcc
> > - Update the documentation and MAINTAINERS to reflect the same.
> > - Change the references to OCTEONTX2 to OCTEON 9. The kernel-related
> > documentation is not covered by this change, as the kernel documentation
> > still uses OCTEONTX2.
> >
> > Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
> > Signed-off-by: Jerin Jacob<jerinj@marvell.com>
> > ---
> > MAINTAINERS | 37 -
> > app/test/meson.build | 1 -
> > app/test/test_cryptodev.c | 7 -
> > app/test/test_cryptodev.h | 1 -
> > app/test/test_cryptodev_asym.c | 17 -
> > app/test/test_eventdev.c | 8 -
> > config/arm/arm64_cn10k_linux_gcc | 1 -
> > ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
> > config/arm/meson.build | 10 +-
> > devtools/check-abi.sh | 4 +
> > doc/guides/cryptodevs/features/octeontx2.ini | 87 -
> > doc/guides/cryptodevs/index.rst | 1 -
> > doc/guides/cryptodevs/octeontx2.rst | 188 -
> > doc/guides/dmadevs/cnxk.rst | 2 +-
> > doc/guides/eventdevs/features/octeontx2.ini | 30 -
> > doc/guides/eventdevs/index.rst | 1 -
> > doc/guides/eventdevs/octeontx2.rst | 178 -
> > doc/guides/mempool/index.rst | 1 -
> > doc/guides/mempool/octeontx2.rst | 92 -
> > doc/guides/nics/cnxk.rst | 4 +-
> > doc/guides/nics/features/octeontx2.ini | 97 -
> > doc/guides/nics/features/octeontx2_vec.ini | 48 -
> > doc/guides/nics/features/octeontx2_vf.ini | 45 -
> > doc/guides/nics/index.rst | 1 -
> > doc/guides/nics/octeontx2.rst | 465 ---
> > doc/guides/nics/octeontx_ep.rst | 4 +-
> > doc/guides/platform/cnxk.rst | 12 +
> > .../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
> > .../img/octeontx2_resource_virtualization.svg | 2418 ------------
> > doc/guides/platform/index.rst | 1 -
> > doc/guides/platform/octeontx2.rst | 520 ---
> > doc/guides/rel_notes/deprecation.rst | 17 -
> > doc/guides/rel_notes/release_19_08.rst | 12 +-
> > doc/guides/rel_notes/release_19_11.rst | 6 +-
> > doc/guides/rel_notes/release_20_02.rst | 8 +-
> > doc/guides/rel_notes/release_20_05.rst | 4 +-
> > doc/guides/rel_notes/release_20_08.rst | 6 +-
> > doc/guides/rel_notes/release_20_11.rst | 8 +-
> > doc/guides/rel_notes/release_21_02.rst | 10 +-
> > doc/guides/rel_notes/release_21_05.rst | 6 +-
> > doc/guides/rel_notes/release_21_11.rst | 2 +-
>
> Not sure about updating the old release notes files; using 'octeontx2' can
> still make sense in the context of those releases.
OK. I will send a v2 keeping octeontx2 in the old release notes.
>
> Also, a search still gives some instances of 'octeontx2', like the
> 'devtools/check-abi.sh' one; can you please confirm it is OK to have them:
> $ git grep -i octeontx2
This change skips the octeontx2 drivers in the ABI check, as they are
removed. The change is needed:
if grep -qE "\<librte_*.*_octeontx2" $dump; then
echo "Skipped removed driver $name."
>
> Apart from the above items, I agree with the change in principle, and the
> build test looks good:
> Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
2021-12-06 8:35 1% [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers jerinj
@ 2021-12-06 13:35 3% ` Ferruh Yigit
2021-12-07 7:39 3% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Ferruh Yigit @ 2021-12-06 13:35 UTC (permalink / raw)
To: jerinj, dev, Thomas Monjalon, Akhil Goyal, Declan Doherty,
Ruifeng Wang, Jan Viktorin, Bruce Richardson, Ray Kinsella,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov
Cc: sburla, lironh
On 12/6/2021 8:35 AM, jerinj@marvell.com wrote:
> From: Jerin Jacob<jerinj@marvell.com>
>
> As per the deprecation notice, in view of enabling a unified driver
> for octeontx2(cn9k)/octeontx3(cn10k), remove the drivers/octeontx2
> drivers and replace them with drivers/cnxk/, which
> supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
>
> This patch does the following
>
> - Replace drivers/common/octeontx2/ with drivers/common/cnxk/
> - Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
> - Replace drivers/net/octeontx2/ with drivers/net/cnxk/
> - Replace drivers/event/octeontx2/ with drivers/event/cnxk/
> - Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
> - Rename config/arm/arm64_octeontx2_linux_gcc as
> config/arm/arm64_cn9k_linux_gcc
> - Update the documentation and MAINTAINERS to reflect the same.
> - Change the references to OCTEONTX2 to OCTEON 9. The kernel-related
> documentation is not covered by this change, as the kernel documentation
> still uses OCTEONTX2.
>
> Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
> Signed-off-by: Jerin Jacob<jerinj@marvell.com>
> ---
> MAINTAINERS | 37 -
> app/test/meson.build | 1 -
> app/test/test_cryptodev.c | 7 -
> app/test/test_cryptodev.h | 1 -
> app/test/test_cryptodev_asym.c | 17 -
> app/test/test_eventdev.c | 8 -
> config/arm/arm64_cn10k_linux_gcc | 1 -
> ...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
> config/arm/meson.build | 10 +-
> devtools/check-abi.sh | 4 +
> doc/guides/cryptodevs/features/octeontx2.ini | 87 -
> doc/guides/cryptodevs/index.rst | 1 -
> doc/guides/cryptodevs/octeontx2.rst | 188 -
> doc/guides/dmadevs/cnxk.rst | 2 +-
> doc/guides/eventdevs/features/octeontx2.ini | 30 -
> doc/guides/eventdevs/index.rst | 1 -
> doc/guides/eventdevs/octeontx2.rst | 178 -
> doc/guides/mempool/index.rst | 1 -
> doc/guides/mempool/octeontx2.rst | 92 -
> doc/guides/nics/cnxk.rst | 4 +-
> doc/guides/nics/features/octeontx2.ini | 97 -
> doc/guides/nics/features/octeontx2_vec.ini | 48 -
> doc/guides/nics/features/octeontx2_vf.ini | 45 -
> doc/guides/nics/index.rst | 1 -
> doc/guides/nics/octeontx2.rst | 465 ---
> doc/guides/nics/octeontx_ep.rst | 4 +-
> doc/guides/platform/cnxk.rst | 12 +
> .../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
> .../img/octeontx2_resource_virtualization.svg | 2418 ------------
> doc/guides/platform/index.rst | 1 -
> doc/guides/platform/octeontx2.rst | 520 ---
> doc/guides/rel_notes/deprecation.rst | 17 -
> doc/guides/rel_notes/release_19_08.rst | 12 +-
> doc/guides/rel_notes/release_19_11.rst | 6 +-
> doc/guides/rel_notes/release_20_02.rst | 8 +-
> doc/guides/rel_notes/release_20_05.rst | 4 +-
> doc/guides/rel_notes/release_20_08.rst | 6 +-
> doc/guides/rel_notes/release_20_11.rst | 8 +-
> doc/guides/rel_notes/release_21_02.rst | 10 +-
> doc/guides/rel_notes/release_21_05.rst | 6 +-
> doc/guides/rel_notes/release_21_11.rst | 2 +-
Not sure about updating the old release notes files; using 'octeontx2' can
still make sense in the context of those releases.
Also, a search still gives some instances of 'octeontx2', like the
'devtools/check-abi.sh' one; can you please confirm it is OK to have them:
$ git grep -i octeontx2
Apart from the above items, I agree with the change in principle, and the
build test looks good:
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control
2021-12-05 18:00 0% ` Stephen Hemminger
@ 2021-12-06 9:57 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2021-12-06 9:57 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Jerin Jacob, dpdk-dev, Ray Kinsella, Thomas Monjalon,
Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Andrew Boyer,
Beilei Xing, Richardson, Bruce, Chas Williams, Xia, Chenbo,
Ciara Loftus, Devendra Singh Rawat, Ed Czeck, Evgeny Schemeilin,
Gaetan Rivet, Gagandeep Singh, Guoyang Zhou, Haiyue Wang,
Harman Kalra, heinrich.kuhn, Hemant Agrawal, Hyong Youb Kim,
Igor Chauskin, Igor Russkikh, Jakub Grajciar, Jasvinder Singh,
Jian Wang, Jiawen Wu, Jingjing Wu, John Daley, John Miller,
John W. Linville, Wiles, Keith, Kiran Kumar K, Lijun Ou,
Liron Himi, Long Li, Marcin Wojtas, Martin Spinler, Matan Azrad,
Matt Peters, Maxime Coquelin, Michal Krawczyk, Min Hu (Connor,
Pradeep Kumar Nalla, Nithin Dabilpuram, Qiming Yang, Qi Zhang,
Radha Mohan Chintakuntla, Rahul Lakkireddy, Rasesh Mody,
Rosen Xu, Sachin Saxena, Satha Koteswara Rao Kottidi,
Shahed Shaikh, Shai Brandes, Shepard Siegel,
Somalapuram Amaranath, Somnath Kotur, Stephen Hemminger,
Steven Webster, Sunil Kumar Kori, Tetsuya Mukawa,
Veerasenareddy Burru, Viacheslav Ovsiienko, Xiao Wang,
Xiaoyun Wang, Yisen Zhuang, Yong Wang, Ziyang Xuan
On Sun, Dec 5, 2021 at 11:30 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Sun, 5 Dec 2021 12:33:57 +0530
> Jerin Jacob <jerinjacobk@gmail.com> wrote:
>
> > On Sat, Dec 4, 2021 at 11:08 PM Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> > >
> > > On Sat, 4 Dec 2021 22:54:58 +0530
> > > <jerinj@marvell.com> wrote:
> > >
> > > > + /**
> > > > + * Maximum supported traffic class as per PFC (802.1Qbb) specification.
> > > > + *
> > > > + * Based on device support and use-case need, there are two different
> > > > + * ways to enable PFC. The first case is the port level PFC
> > > > + * configuration, in this case, rte_eth_dev_priority_flow_ctrl_set()
> > > > + * API shall be used to configure the PFC, and PFC frames will be
> > > > + * generated based on the VLAN TC value.
> > > > + * The second case is the queue level PFC configuration; in this case,
> > > > + * any packet field content can be used to steer the packet to the
> > > > + * specific queue using rte_flow or RSS and then use
> > > > + * rte_eth_dev_priority_flow_ctrl_queue_set() to set the TC mapping
> > > > + * on each queue. Based on congestion selected on the specific queue,
> > > > + * configured TC shall be used to generate PFC frames.
> > > > + *
> > > > + * When set to non zero value, application must use queue level
> > > > + * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
> > > > + * instead of port level PFC configuration via
> > > > + * rte_eth_dev_priority_flow_ctrl_set() API to realize
> > > > + * PFC configuration.
> > > > + */
> > > > + uint8_t pfc_queue_tc_max;
> > > > + uint8_t reserved_8s[7];
> > > > + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> > > > void *reserved_ptrs[2]; /**< Reserved for future fields */
> > >
> > > Not sure you can claim ABI compatibility because the previous versions of DPDK
> > > did not enforce that reserved fields must be zero. The Linux kernel
> > > learned this when adding flags for new system calls; reserved fields only
> > > work if you enforce that application must set them to zero.
> >
> > In this case rte_eth_dev_info is an out parameter, and the implementation
> > of rte_eth_dev_info_get() already memsets it to 0.
> > Do you still see any other ABI issue?
> >
> > See rte_eth_dev_info_get()
> > /*
> > * Init dev_info before port_id check since caller does not have
> > * return status and does not know if get is successful or not.
> > */
> > memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
>
> The concern came from misreading the comment. It talks about what the application should do.
> Could you reword the comment so that it describes what pfc_queue_tc_max is here
The comment is at rte_eth_dev_info::pfc_queue_tc_max, so it is implied that
it is a get parameter.
Current comment:
---
+ * When set to non zero value, application must use queue level
+ * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
+ * instead of port level PFC configuration via
+ * rte_eth_dev_priority_flow_ctrl_set() API to realize
+ * PFC configuration.
---
Does updating it to the following help to clarify? If so, I will send a v2;
if not, please suggest an alternative.
---
+ * When set to non zero value by the driver, application must use queue level
^^^^^^^^^^^
+ * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
+ * instead of port level PFC configuration via
+ * rte_eth_dev_priority_flow_ctrl_set() API to realize
+ * PFC configuration.
---
> and move the flow control set part of the comment to where the API for that is.
The comment is needed for rte_eth_dev_priority_flow_ctrl_set() and
rte_eth_dev_priority_flow_ctrl_queue_set().
Instead of duplicating the comments, I added the comment at
rte_eth_dev_info::pfc_queue_tc_max and
added "@see struct rte_eth_dev_info::pfc_queue_tc_max priority flow
control usage models"
in rte_eth_dev_priority_flow_ctrl_set() and
rte_eth_dev_priority_flow_ctrl_queue_set().
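To make the two usage models concrete, an application-side sketch could look
like the below (the parameter list of the queue-level call is an assumption
based on this thread, not a released API; error handling is trimmed):
#include <string.h>
#include <rte_ethdev.h>
/* Sketch only: enable PFC for one TC, picking the model the driver
 * advertises via dev_info.pfc_queue_tc_max (dev_info is memset to 0
 * inside rte_eth_dev_info_get(), as quoted above). */
static int
setup_pfc(uint16_t port_id, uint16_t rx_queue_id, uint8_t tc)
{
        struct rte_eth_dev_info dev_info;
        struct rte_eth_pfc_conf pfc_conf;
        int ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
                return ret;
        memset(&pfc_conf, 0, sizeof(pfc_conf));
        pfc_conf.fc.mode = RTE_ETH_FC_FULL;
        pfc_conf.priority = tc;
        if (dev_info.pfc_queue_tc_max == 0)
                /* Port level model: PFC frames derive from the VLAN TC. */
                return rte_eth_dev_priority_flow_ctrl_set(port_id, &pfc_conf);
        /* Queue level model: steer traffic to rx_queue_id via rte_flow/RSS,
         * then map that queue to the TC (assumed signature below). */
        return rte_eth_dev_priority_flow_ctrl_queue_set(port_id, rx_queue_id,
                                                        &pfc_conf);
}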
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers
@ 2021-12-06 8:35 1% jerinj
2021-12-06 13:35 3% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: jerinj @ 2021-12-06 8:35 UTC (permalink / raw)
To: dev, Thomas Monjalon, Akhil Goyal, Declan Doherty, Jerin Jacob,
Ruifeng Wang, Jan Viktorin, Bruce Richardson, Ray Kinsella,
Radha Mohan Chintakuntla, Veerasenareddy Burru,
Nithin Dabilpuram, Kiran Kumar K, Sunil Kumar Kori, Satha Rao,
Nalla Pradeep, Ciara Power, Pavan Nikhilesh, Shijith Thotton,
Ashwin Sekhar T K, Anatoly Burakov
Cc: ferruh.yigit, sburla, lironh
From: Jerin Jacob <jerinj@marvell.com>
As per the deprecation notice, in view of enabling a unified driver
for octeontx2(cn9k)/octeontx3(cn10k), remove the drivers/octeontx2
drivers and replace them with drivers/cnxk/, which
supports both octeontx2(cn9k) and octeontx3(cn10k) SoCs.
This patch does the following
- Replace drivers/common/octeontx2/ with drivers/common/cnxk/
- Replace drivers/mempool/octeontx2/ with drivers/mempool/cnxk/
- Replace drivers/net/octeontx2/ with drivers/net/cnxk/
- Replace drivers/event/octeontx2/ with drivers/event/cnxk/
- Replace drivers/crypto/octeontx2/ with drivers/crypto/cnxk/
- Rename config/arm/arm64_octeontx2_linux_gcc as
config/arm/arm64_cn9k_linux_gcc
- Update the documentation and MAINTAINERS to reflect the same.
- Change the references to OCTEONTX2 to OCTEON 9. The kernel-related
documentation is not covered by this change, as the kernel documentation
still uses OCTEONTX2.
Depends-on: series-20804 ("common/cnxk: add REE HW definitions")
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
---
MAINTAINERS | 37 -
app/test/meson.build | 1 -
app/test/test_cryptodev.c | 7 -
app/test/test_cryptodev.h | 1 -
app/test/test_cryptodev_asym.c | 17 -
app/test/test_eventdev.c | 8 -
config/arm/arm64_cn10k_linux_gcc | 1 -
...teontx2_linux_gcc => arm64_cn9k_linux_gcc} | 3 +-
config/arm/meson.build | 10 +-
devtools/check-abi.sh | 4 +
doc/guides/cryptodevs/features/octeontx2.ini | 87 -
doc/guides/cryptodevs/index.rst | 1 -
doc/guides/cryptodevs/octeontx2.rst | 188 -
doc/guides/dmadevs/cnxk.rst | 2 +-
doc/guides/eventdevs/features/octeontx2.ini | 30 -
doc/guides/eventdevs/index.rst | 1 -
doc/guides/eventdevs/octeontx2.rst | 178 -
doc/guides/mempool/index.rst | 1 -
doc/guides/mempool/octeontx2.rst | 92 -
doc/guides/nics/cnxk.rst | 4 +-
doc/guides/nics/features/octeontx2.ini | 97 -
doc/guides/nics/features/octeontx2_vec.ini | 48 -
doc/guides/nics/features/octeontx2_vf.ini | 45 -
doc/guides/nics/index.rst | 1 -
doc/guides/nics/octeontx2.rst | 465 ---
doc/guides/nics/octeontx_ep.rst | 4 +-
doc/guides/platform/cnxk.rst | 12 +
.../octeontx2_packet_flow_hw_accelerators.svg | 2804 --------------
.../img/octeontx2_resource_virtualization.svg | 2418 ------------
doc/guides/platform/index.rst | 1 -
doc/guides/platform/octeontx2.rst | 520 ---
doc/guides/rel_notes/deprecation.rst | 17 -
doc/guides/rel_notes/release_19_08.rst | 12 +-
doc/guides/rel_notes/release_19_11.rst | 6 +-
doc/guides/rel_notes/release_20_02.rst | 8 +-
doc/guides/rel_notes/release_20_05.rst | 4 +-
doc/guides/rel_notes/release_20_08.rst | 6 +-
doc/guides/rel_notes/release_20_11.rst | 8 +-
doc/guides/rel_notes/release_21_02.rst | 10 +-
doc/guides/rel_notes/release_21_05.rst | 6 +-
doc/guides/rel_notes/release_21_11.rst | 2 +-
doc/guides/tools/cryptoperf.rst | 1 -
drivers/common/meson.build | 1 -
drivers/common/octeontx2/hw/otx2_nix.h | 1391 -------
drivers/common/octeontx2/hw/otx2_npa.h | 305 --
drivers/common/octeontx2/hw/otx2_npc.h | 503 ---
drivers/common/octeontx2/hw/otx2_ree.h | 27 -
drivers/common/octeontx2/hw/otx2_rvu.h | 219 --
drivers/common/octeontx2/hw/otx2_sdp.h | 184 -
drivers/common/octeontx2/hw/otx2_sso.h | 209 --
drivers/common/octeontx2/hw/otx2_ssow.h | 56 -
drivers/common/octeontx2/hw/otx2_tim.h | 34 -
drivers/common/octeontx2/meson.build | 24 -
drivers/common/octeontx2/otx2_common.c | 216 --
drivers/common/octeontx2/otx2_common.h | 179 -
drivers/common/octeontx2/otx2_dev.c | 1074 ------
drivers/common/octeontx2/otx2_dev.h | 161 -
drivers/common/octeontx2/otx2_io_arm64.h | 114 -
drivers/common/octeontx2/otx2_io_generic.h | 75 -
drivers/common/octeontx2/otx2_irq.c | 288 --
drivers/common/octeontx2/otx2_irq.h | 28 -
drivers/common/octeontx2/otx2_mbox.c | 465 ---
drivers/common/octeontx2/otx2_mbox.h | 1958 ----------
drivers/common/octeontx2/otx2_sec_idev.c | 183 -
drivers/common/octeontx2/otx2_sec_idev.h | 43 -
drivers/common/octeontx2/version.map | 44 -
drivers/crypto/meson.build | 1 -
drivers/crypto/octeontx2/meson.build | 30 -
drivers/crypto/octeontx2/otx2_cryptodev.c | 188 -
drivers/crypto/octeontx2/otx2_cryptodev.h | 63 -
.../octeontx2/otx2_cryptodev_capabilities.c | 924 -----
.../octeontx2/otx2_cryptodev_capabilities.h | 45 -
.../octeontx2/otx2_cryptodev_hw_access.c | 225 --
.../octeontx2/otx2_cryptodev_hw_access.h | 161 -
.../crypto/octeontx2/otx2_cryptodev_mbox.c | 285 --
.../crypto/octeontx2/otx2_cryptodev_mbox.h | 37 -
drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 1438 -------
drivers/crypto/octeontx2/otx2_cryptodev_ops.h | 15 -
.../octeontx2/otx2_cryptodev_ops_helper.h | 82 -
drivers/crypto/octeontx2/otx2_cryptodev_qp.h | 46 -
drivers/crypto/octeontx2/otx2_cryptodev_sec.c | 655 ----
drivers/crypto/octeontx2/otx2_cryptodev_sec.h | 64 -
.../crypto/octeontx2/otx2_ipsec_anti_replay.h | 227 --
drivers/crypto/octeontx2/otx2_ipsec_fp.h | 371 --
drivers/crypto/octeontx2/otx2_ipsec_po.h | 447 ---
drivers/crypto/octeontx2/otx2_ipsec_po_ops.h | 167 -
drivers/crypto/octeontx2/otx2_security.h | 37 -
drivers/crypto/octeontx2/version.map | 13 -
drivers/event/cnxk/cn9k_eventdev.c | 10 +
drivers/event/meson.build | 1 -
drivers/event/octeontx2/meson.build | 26 -
drivers/event/octeontx2/otx2_evdev.c | 1900 ----------
drivers/event/octeontx2/otx2_evdev.h | 430 ---
drivers/event/octeontx2/otx2_evdev_adptr.c | 656 ----
.../event/octeontx2/otx2_evdev_crypto_adptr.c | 132 -
.../octeontx2/otx2_evdev_crypto_adptr_rx.h | 77 -
.../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 -
drivers/event/octeontx2/otx2_evdev_irq.c | 272 --
drivers/event/octeontx2/otx2_evdev_selftest.c | 1517 --------
drivers/event/octeontx2/otx2_evdev_stats.h | 286 --
drivers/event/octeontx2/otx2_tim_evdev.c | 735 ----
drivers/event/octeontx2/otx2_tim_evdev.h | 256 --
drivers/event/octeontx2/otx2_tim_worker.c | 192 -
drivers/event/octeontx2/otx2_tim_worker.h | 598 ---
drivers/event/octeontx2/otx2_worker.c | 372 --
drivers/event/octeontx2/otx2_worker.h | 339 --
drivers/event/octeontx2/otx2_worker_dual.c | 345 --
drivers/event/octeontx2/otx2_worker_dual.h | 110 -
drivers/event/octeontx2/version.map | 3 -
drivers/mempool/cnxk/cnxk_mempool.c | 56 +-
drivers/mempool/meson.build | 1 -
drivers/mempool/octeontx2/meson.build | 18 -
drivers/mempool/octeontx2/otx2_mempool.c | 457 ---
drivers/mempool/octeontx2/otx2_mempool.h | 221 --
.../mempool/octeontx2/otx2_mempool_debug.c | 135 -
drivers/mempool/octeontx2/otx2_mempool_irq.c | 303 --
drivers/mempool/octeontx2/otx2_mempool_ops.c | 901 -----
drivers/mempool/octeontx2/version.map | 8 -
drivers/net/cnxk/cn9k_ethdev.c | 15 +
drivers/net/meson.build | 1 -
drivers/net/octeontx2/meson.build | 47 -
drivers/net/octeontx2/otx2_ethdev.c | 2814 --------------
drivers/net/octeontx2/otx2_ethdev.h | 619 ---
drivers/net/octeontx2/otx2_ethdev_debug.c | 811 ----
drivers/net/octeontx2/otx2_ethdev_devargs.c | 215 --
drivers/net/octeontx2/otx2_ethdev_irq.c | 493 ---
drivers/net/octeontx2/otx2_ethdev_ops.c | 589 ---
drivers/net/octeontx2/otx2_ethdev_sec.c | 923 -----
drivers/net/octeontx2/otx2_ethdev_sec.h | 130 -
drivers/net/octeontx2/otx2_ethdev_sec_tx.h | 182 -
drivers/net/octeontx2/otx2_flow.c | 1189 ------
drivers/net/octeontx2/otx2_flow.h | 414 --
drivers/net/octeontx2/otx2_flow_ctrl.c | 252 --
drivers/net/octeontx2/otx2_flow_dump.c | 595 ---
drivers/net/octeontx2/otx2_flow_parse.c | 1239 ------
drivers/net/octeontx2/otx2_flow_utils.c | 969 -----
drivers/net/octeontx2/otx2_link.c | 287 --
drivers/net/octeontx2/otx2_lookup.c | 352 --
drivers/net/octeontx2/otx2_mac.c | 151 -
drivers/net/octeontx2/otx2_mcast.c | 339 --
drivers/net/octeontx2/otx2_ptp.c | 450 ---
drivers/net/octeontx2/otx2_rss.c | 427 ---
drivers/net/octeontx2/otx2_rx.c | 430 ---
drivers/net/octeontx2/otx2_rx.h | 583 ---
drivers/net/octeontx2/otx2_stats.c | 397 --
drivers/net/octeontx2/otx2_tm.c | 3317 -----------------
drivers/net/octeontx2/otx2_tm.h | 176 -
drivers/net/octeontx2/otx2_tx.c | 1077 ------
drivers/net/octeontx2/otx2_tx.h | 791 ----
drivers/net/octeontx2/otx2_vlan.c | 1035 -----
drivers/net/octeontx2/version.map | 3 -
drivers/net/octeontx_ep/otx2_ep_vf.h | 2 +-
drivers/net/octeontx_ep/otx_ep_common.h | 16 +-
drivers/net/octeontx_ep/otx_ep_ethdev.c | 8 +-
drivers/net/octeontx_ep/otx_ep_rxtx.c | 10 +-
usertools/dpdk-devbind.py | 12 +-
156 files changed, 121 insertions(+), 52149 deletions(-)
rename config/arm/{arm64_octeontx2_linux_gcc => arm64_cn9k_linux_gcc} (84%)
delete mode 100644 doc/guides/cryptodevs/features/octeontx2.ini
delete mode 100644 doc/guides/cryptodevs/octeontx2.rst
delete mode 100644 doc/guides/eventdevs/features/octeontx2.ini
delete mode 100644 doc/guides/eventdevs/octeontx2.rst
delete mode 100644 doc/guides/mempool/octeontx2.rst
delete mode 100644 doc/guides/nics/features/octeontx2.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vec.ini
delete mode 100644 doc/guides/nics/features/octeontx2_vf.ini
delete mode 100644 doc/guides/nics/octeontx2.rst
delete mode 100644 doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
delete mode 100644 doc/guides/platform/img/octeontx2_resource_virtualization.svg
delete mode 100644 doc/guides/platform/octeontx2.rst
delete mode 100644 drivers/common/octeontx2/hw/otx2_nix.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npa.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_npc.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ree.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_rvu.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sdp.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_sso.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_ssow.h
delete mode 100644 drivers/common/octeontx2/hw/otx2_tim.h
delete mode 100644 drivers/common/octeontx2/meson.build
delete mode 100644 drivers/common/octeontx2/otx2_common.c
delete mode 100644 drivers/common/octeontx2/otx2_common.h
delete mode 100644 drivers/common/octeontx2/otx2_dev.c
delete mode 100644 drivers/common/octeontx2/otx2_dev.h
delete mode 100644 drivers/common/octeontx2/otx2_io_arm64.h
delete mode 100644 drivers/common/octeontx2/otx2_io_generic.h
delete mode 100644 drivers/common/octeontx2/otx2_irq.c
delete mode 100644 drivers/common/octeontx2/otx2_irq.h
delete mode 100644 drivers/common/octeontx2/otx2_mbox.c
delete mode 100644 drivers/common/octeontx2/otx2_mbox.h
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.c
delete mode 100644 drivers/common/octeontx2/otx2_sec_idev.h
delete mode 100644 drivers/common/octeontx2/version.map
delete mode 100644 drivers/crypto/octeontx2/meson.build
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_qp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.c
delete mode 100644 drivers/crypto/octeontx2/otx2_cryptodev_sec.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_fp.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po.h
delete mode 100644 drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
delete mode 100644 drivers/crypto/octeontx2/otx2_security.h
delete mode 100644 drivers/crypto/octeontx2/version.map
delete mode 100644 drivers/event/octeontx2/meson.build
delete mode 100644 drivers/event/octeontx2/otx2_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
delete mode 100644 drivers/event/octeontx2/otx2_evdev_irq.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_selftest.c
delete mode 100644 drivers/event/octeontx2/otx2_evdev_stats.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_evdev.h
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_tim_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker.c
delete mode 100644 drivers/event/octeontx2/otx2_worker.h
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.c
delete mode 100644 drivers/event/octeontx2/otx2_worker_dual.h
delete mode 100644 drivers/event/octeontx2/version.map
delete mode 100644 drivers/mempool/octeontx2/meson.build
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool.h
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_debug.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_irq.c
delete mode 100644 drivers/mempool/octeontx2/otx2_mempool_ops.c
delete mode 100644 drivers/mempool/octeontx2/version.map
delete mode 100644 drivers/net/octeontx2/meson.build
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_debug.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_devargs.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_irq.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_ops.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.c
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec.h
delete mode 100644 drivers/net/octeontx2/otx2_ethdev_sec_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_flow.c
delete mode 100644 drivers/net/octeontx2/otx2_flow.h
delete mode 100644 drivers/net/octeontx2/otx2_flow_ctrl.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_dump.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_parse.c
delete mode 100644 drivers/net/octeontx2/otx2_flow_utils.c
delete mode 100644 drivers/net/octeontx2/otx2_link.c
delete mode 100644 drivers/net/octeontx2/otx2_lookup.c
delete mode 100644 drivers/net/octeontx2/otx2_mac.c
delete mode 100644 drivers/net/octeontx2/otx2_mcast.c
delete mode 100644 drivers/net/octeontx2/otx2_ptp.c
delete mode 100644 drivers/net/octeontx2/otx2_rss.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.c
delete mode 100644 drivers/net/octeontx2/otx2_rx.h
delete mode 100644 drivers/net/octeontx2/otx2_stats.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.c
delete mode 100644 drivers/net/octeontx2/otx2_tm.h
delete mode 100644 drivers/net/octeontx2/otx2_tx.c
delete mode 100644 drivers/net/octeontx2/otx2_tx.h
delete mode 100644 drivers/net/octeontx2/otx2_vlan.c
delete mode 100644 drivers/net/octeontx2/version.map
diff --git a/MAINTAINERS b/MAINTAINERS
index 854b81f2a3..336bbb3547 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -534,15 +534,6 @@ T: git://dpdk.org/next/dpdk-next-net-mrvl
F: drivers/mempool/cnxk/
F: doc/guides/mempool/cnxk.rst
-Marvell OCTEON TX2
-M: Jerin Jacob <jerinj@marvell.com>
-M: Nithin Dabilpuram <ndabilpuram@marvell.com>
-F: drivers/common/octeontx2/
-F: drivers/mempool/octeontx2/
-F: doc/guides/platform/img/octeontx2_*
-F: doc/guides/platform/octeontx2.rst
-F: doc/guides/mempool/octeontx2.rst
-
Bus Drivers
-----------
@@ -795,21 +786,6 @@ F: drivers/net/mvneta/
F: doc/guides/nics/mvneta.rst
F: doc/guides/nics/features/mvneta.ini
-Marvell OCTEON TX2
-M: Jerin Jacob <jerinj@marvell.com>
-M: Nithin Dabilpuram <ndabilpuram@marvell.com>
-M: Kiran Kumar K <kirankumark@marvell.com>
-T: git://dpdk.org/next/dpdk-next-net-mrvl
-F: drivers/net/octeontx2/
-F: doc/guides/nics/features/octeontx2*.ini
-F: doc/guides/nics/octeontx2.rst
-
-Marvell OCTEON TX2 - security
-M: Anoob Joseph <anoobj@marvell.com>
-T: git://dpdk.org/next/dpdk-next-crypto
-F: drivers/common/octeontx2/otx2_sec*
-F: drivers/net/octeontx2/otx2_ethdev_sec*
-
Marvell OCTEON TX EP - endpoint
M: Nalla Pradeep <pnalla@marvell.com>
M: Radha Mohan Chintakuntla <radhac@marvell.com>
@@ -1115,13 +1091,6 @@ F: drivers/crypto/nitrox/
F: doc/guides/cryptodevs/nitrox.rst
F: doc/guides/cryptodevs/features/nitrox.ini
-Marvell OCTEON TX2 crypto
-M: Ankur Dwivedi <adwivedi@marvell.com>
-M: Anoob Joseph <anoobj@marvell.com>
-F: drivers/crypto/octeontx2/
-F: doc/guides/cryptodevs/octeontx2.rst
-F: doc/guides/cryptodevs/features/octeontx2.ini
-
Mellanox mlx5
M: Matan Azrad <matan@nvidia.com>
F: drivers/crypto/mlx5/
@@ -1298,12 +1267,6 @@ M: Shijith Thotton <sthotton@marvell.com>
F: drivers/event/cnxk/
F: doc/guides/eventdevs/cnxk.rst
-Marvell OCTEON TX2
-M: Pavan Nikhilesh <pbhagavatula@marvell.com>
-M: Jerin Jacob <jerinj@marvell.com>
-F: drivers/event/octeontx2/
-F: doc/guides/eventdevs/octeontx2.rst
-
NXP DPAA eventdev
M: Hemant Agrawal <hemant.agrawal@nxp.com>
M: Nipun Gupta <nipun.gupta@nxp.com>
diff --git a/app/test/meson.build b/app/test/meson.build
index 2b480adfba..344a609a4d 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -341,7 +341,6 @@ driver_test_names = [
'cryptodev_dpaa_sec_autotest',
'cryptodev_dpaa2_sec_autotest',
'cryptodev_null_autotest',
- 'cryptodev_octeontx2_autotest',
'cryptodev_openssl_autotest',
'cryptodev_openssl_asym_autotest',
'cryptodev_qat_autotest',
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 10b48cdadb..293f59b48c 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -15615,12 +15615,6 @@ test_cryptodev_octeontx(void)
return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX_SYM_PMD));
}
-static int
-test_cryptodev_octeontx2(void)
-{
- return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD));
-}
-
static int
test_cryptodev_caam_jr(void)
{
@@ -15733,7 +15727,6 @@ REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
REGISTER_TEST_COMMAND(cryptodev_ccp_autotest, test_cryptodev_ccp);
REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio);
REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
-REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 90c8287365..70f23a3f67 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -68,7 +68,6 @@
#define CRYPTODEV_NAME_CCP_PMD crypto_ccp
#define CRYPTODEV_NAME_VIRTIO_PMD crypto_virtio
#define CRYPTODEV_NAME_OCTEONTX_SYM_PMD crypto_octeontx
-#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
#define CRYPTODEV_NAME_CAAM_JR_PMD crypto_caam_jr
#define CRYPTODEV_NAME_NITROX_PMD crypto_nitrox_sym
#define CRYPTODEV_NAME_BCMFS_PMD crypto_bcmfs
diff --git a/app/test/test_cryptodev_asym.c b/app/test/test_cryptodev_asym.c
index 9d19a6d6d9..68f4d8e7a6 100644
--- a/app/test/test_cryptodev_asym.c
+++ b/app/test/test_cryptodev_asym.c
@@ -2375,20 +2375,6 @@ test_cryptodev_octeontx_asym(void)
return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
}
-static int
-test_cryptodev_octeontx2_asym(void)
-{
- gbl_driver_id = rte_cryptodev_driver_id_get(
- RTE_STR(CRYPTODEV_NAME_OCTEONTX2_PMD));
- if (gbl_driver_id == -1) {
- RTE_LOG(ERR, USER1, "OCTEONTX2 PMD must be loaded.\n");
- return TEST_FAILED;
- }
-
- /* Use test suite registered for crypto_octeontx PMD */
- return unit_test_suite_runner(&cryptodev_octeontx_asym_testsuite);
-}
-
static int
test_cryptodev_cn9k_asym(void)
{
@@ -2424,8 +2410,5 @@ REGISTER_TEST_COMMAND(cryptodev_qat_asym_autotest, test_cryptodev_qat_asym);
REGISTER_TEST_COMMAND(cryptodev_octeontx_asym_autotest,
test_cryptodev_octeontx_asym);
-
-REGISTER_TEST_COMMAND(cryptodev_octeontx2_asym_autotest,
- test_cryptodev_octeontx2_asym);
REGISTER_TEST_COMMAND(cryptodev_cn9k_asym_autotest, test_cryptodev_cn9k_asym);
REGISTER_TEST_COMMAND(cryptodev_cn10k_asym_autotest, test_cryptodev_cn10k_asym);
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 843d9766b0..10028fe11d 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1018,12 +1018,6 @@ test_eventdev_selftest_octeontx(void)
return test_eventdev_selftest_impl("event_octeontx", "");
}
-static int
-test_eventdev_selftest_octeontx2(void)
-{
- return test_eventdev_selftest_impl("event_octeontx2", "");
-}
-
static int
test_eventdev_selftest_dpaa2(void)
{
@@ -1052,8 +1046,6 @@ REGISTER_TEST_COMMAND(eventdev_common_autotest, test_eventdev_common);
REGISTER_TEST_COMMAND(eventdev_selftest_sw, test_eventdev_selftest_sw);
REGISTER_TEST_COMMAND(eventdev_selftest_octeontx,
test_eventdev_selftest_octeontx);
-REGISTER_TEST_COMMAND(eventdev_selftest_octeontx2,
- test_eventdev_selftest_octeontx2);
REGISTER_TEST_COMMAND(eventdev_selftest_dpaa2, test_eventdev_selftest_dpaa2);
REGISTER_TEST_COMMAND(eventdev_selftest_dlb2, test_eventdev_selftest_dlb2);
REGISTER_TEST_COMMAND(eventdev_selftest_cn9k, test_eventdev_selftest_cn9k);
diff --git a/config/arm/arm64_cn10k_linux_gcc b/config/arm/arm64_cn10k_linux_gcc
index 88e5f10945..a3578c03a1 100644
--- a/config/arm/arm64_cn10k_linux_gcc
+++ b/config/arm/arm64_cn10k_linux_gcc
@@ -14,4 +14,3 @@ endian = 'little'
[properties]
platform = 'cn10k'
-disable_drivers = 'common/octeontx2'
diff --git a/config/arm/arm64_octeontx2_linux_gcc b/config/arm/arm64_cn9k_linux_gcc
similarity index 84%
rename from config/arm/arm64_octeontx2_linux_gcc
rename to config/arm/arm64_cn9k_linux_gcc
index 8fbdd3868d..a94b44a551 100644
--- a/config/arm/arm64_octeontx2_linux_gcc
+++ b/config/arm/arm64_cn9k_linux_gcc
@@ -13,5 +13,4 @@ cpu = 'armv8-a'
endian = 'little'
[properties]
-platform = 'octeontx2'
-disable_drivers = 'common/cnxk'
+platform = 'cn9k'
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 213324d262..16e808cdd5 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -139,7 +139,7 @@ implementer_cavium = {
'march_features': ['crc', 'crypto', 'lse'],
'compiler_options': ['-mcpu=octeontx2'],
'flags': [
- ['RTE_MACHINE', '"octeontx2"'],
+ ['RTE_MACHINE', '"cn9k"'],
['RTE_ARM_FEATURE_ATOMICS', true],
['RTE_USE_C11_MEM_MODEL', true],
['RTE_MAX_LCORE', 36],
@@ -340,8 +340,8 @@ soc_n2 = {
'numa': false
}
-soc_octeontx2 = {
- 'description': 'Marvell OCTEON TX2',
+soc_cn9k = {
+ 'description': 'Marvell OCTEON 9',
'implementer': '0x43',
'part_number': '0xb2',
'numa': false
@@ -377,6 +377,7 @@ generic_aarch32: Generic un-optimized build for armv8 aarch32 execution mode.
armada: Marvell ARMADA
bluefield: NVIDIA BlueField
centriq2400: Qualcomm Centriq 2400
+cn9k: Marvell OCTEON 9
cn10k: Marvell OCTEON 10
dpaa: NXP DPAA
emag: Ampere eMAG
@@ -385,7 +386,6 @@ kunpeng920: HiSilicon Kunpeng 920
kunpeng930: HiSilicon Kunpeng 930
n1sdp: Arm Neoverse N1SDP
n2: Arm Neoverse N2
-octeontx2: Marvell OCTEON TX2
stingray: Broadcom Stingray
thunderx2: Marvell ThunderX2 T99
thunderxt88: Marvell ThunderX T88
@@ -399,6 +399,7 @@ socs = {
'armada': soc_armada,
'bluefield': soc_bluefield,
'centriq2400': soc_centriq2400,
+ 'cn9k': soc_cn9k,
'cn10k' : soc_cn10k,
'dpaa': soc_dpaa,
'emag': soc_emag,
@@ -407,7 +408,6 @@ socs = {
'kunpeng930': soc_kunpeng930,
'n1sdp': soc_n1sdp,
'n2': soc_n2,
- 'octeontx2': soc_octeontx2,
'stingray': soc_stingray,
'thunderx2': soc_thunderx2,
'thunderxt88': soc_thunderxt88
diff --git a/devtools/check-abi.sh b/devtools/check-abi.sh
index ca523eb94c..675f10142e 100755
--- a/devtools/check-abi.sh
+++ b/devtools/check-abi.sh
@@ -48,6 +48,10 @@ for dump in $(find $refdir -name "*.dump"); do
echo "Skipped removed driver $name."
continue
fi
+ if grep -qE "\<librte_*.*_octeontx2" $dump; then
+ echo "Skipped removed driver $name."
+ continue
+ fi
dump2=$(find $newdir -name $name)
if [ -z "$dump2" ] || [ ! -e "$dump2" ]; then
echo "Error: cannot find $name in $newdir" >&2
diff --git a/doc/guides/cryptodevs/features/octeontx2.ini b/doc/guides/cryptodevs/features/octeontx2.ini
deleted file mode 100644
index c54dc9409c..0000000000
--- a/doc/guides/cryptodevs/features/octeontx2.ini
+++ /dev/null
@@ -1,87 +0,0 @@
-;
-; Supported features of the 'octeontx2' crypto driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Symmetric crypto = Y
-Asymmetric crypto = Y
-Sym operation chaining = Y
-HW Accelerated = Y
-Protocol offload = Y
-In Place SGL = Y
-OOP SGL In LB Out = Y
-OOP SGL In SGL Out = Y
-OOP LB In LB Out = Y
-RSA PRIV OP KEY QT = Y
-Digest encrypted = Y
-Symmetric sessionless = Y
-
-;
-; Supported crypto algorithms of 'octeontx2' crypto driver.
-;
-[Cipher]
-NULL = Y
-3DES CBC = Y
-3DES ECB = Y
-AES CBC (128) = Y
-AES CBC (192) = Y
-AES CBC (256) = Y
-AES CTR (128) = Y
-AES CTR (192) = Y
-AES CTR (256) = Y
-AES XTS (128) = Y
-AES XTS (256) = Y
-DES CBC = Y
-KASUMI F8 = Y
-SNOW3G UEA2 = Y
-ZUC EEA3 = Y
-
-;
-; Supported authentication algorithms of 'octeontx2' crypto driver.
-;
-[Auth]
-NULL = Y
-AES GMAC = Y
-KASUMI F9 = Y
-MD5 = Y
-MD5 HMAC = Y
-SHA1 = Y
-SHA1 HMAC = Y
-SHA224 = Y
-SHA224 HMAC = Y
-SHA256 = Y
-SHA256 HMAC = Y
-SHA384 = Y
-SHA384 HMAC = Y
-SHA512 = Y
-SHA512 HMAC = Y
-SNOW3G UIA2 = Y
-ZUC EIA3 = Y
-
-;
-; Supported AEAD algorithms of 'octeontx2' crypto driver.
-;
-[AEAD]
-AES GCM (128) = Y
-AES GCM (192) = Y
-AES GCM (256) = Y
-CHACHA20-POLY1305 = Y
-
-;
-; Supported Asymmetric algorithms of the 'octeontx2' crypto driver.
-;
-[Asymmetric]
-RSA = Y
-DSA =
-Modular Exponentiation = Y
-Modular Inversion =
-Diffie-hellman =
-ECDSA = Y
-ECPM = Y
-
-;
-; Supported Operating systems of the 'octeontx2' crypto driver.
-;
-[OS]
-Linux = Y
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index 3dcc2ecd2e..39cca6dbde 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -22,7 +22,6 @@ Crypto Device Drivers
dpaa_sec
kasumi
octeontx
- octeontx2
openssl
mlx5
mvsam
diff --git a/doc/guides/cryptodevs/octeontx2.rst b/doc/guides/cryptodevs/octeontx2.rst
deleted file mode 100644
index 811e61a1f6..0000000000
--- a/doc/guides/cryptodevs/octeontx2.rst
+++ /dev/null
@@ -1,188 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-
-Marvell OCTEON TX2 Crypto Poll Mode Driver
-==========================================
-
-The OCTEON TX2 crypto poll mode driver provides support for offloading
-cryptographic operations to cryptographic accelerator units on the
-**OCTEON TX2** :sup:`®` family of processors (CN9XXX).
-
-More information about OCTEON TX2 SoCs may be obtained from `<https://www.marvell.com>`_
-
-Features
---------
-
-The OCTEON TX2 crypto PMD has support for:
-
-Symmetric Crypto Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Cipher algorithms:
-
-* ``RTE_CRYPTO_CIPHER_NULL``
-* ``RTE_CRYPTO_CIPHER_3DES_CBC``
-* ``RTE_CRYPTO_CIPHER_3DES_ECB``
-* ``RTE_CRYPTO_CIPHER_AES_CBC``
-* ``RTE_CRYPTO_CIPHER_AES_CTR``
-* ``RTE_CRYPTO_CIPHER_AES_XTS``
-* ``RTE_CRYPTO_CIPHER_DES_CBC``
-* ``RTE_CRYPTO_CIPHER_KASUMI_F8``
-* ``RTE_CRYPTO_CIPHER_SNOW3G_UEA2``
-* ``RTE_CRYPTO_CIPHER_ZUC_EEA3``
-
-Hash algorithms:
-
-* ``RTE_CRYPTO_AUTH_NULL``
-* ``RTE_CRYPTO_AUTH_AES_GMAC``
-* ``RTE_CRYPTO_AUTH_KASUMI_F9``
-* ``RTE_CRYPTO_AUTH_MD5``
-* ``RTE_CRYPTO_AUTH_MD5_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA1``
-* ``RTE_CRYPTO_AUTH_SHA1_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA224``
-* ``RTE_CRYPTO_AUTH_SHA224_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA256``
-* ``RTE_CRYPTO_AUTH_SHA256_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA384``
-* ``RTE_CRYPTO_AUTH_SHA384_HMAC``
-* ``RTE_CRYPTO_AUTH_SHA512``
-* ``RTE_CRYPTO_AUTH_SHA512_HMAC``
-* ``RTE_CRYPTO_AUTH_SNOW3G_UIA2``
-* ``RTE_CRYPTO_AUTH_ZUC_EIA3``
-
-AEAD algorithms:
-
-* ``RTE_CRYPTO_AEAD_AES_GCM``
-* ``RTE_CRYPTO_AEAD_CHACHA20_POLY1305``
-
-Asymmetric Crypto Algorithms
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-* ``RTE_CRYPTO_ASYM_XFORM_RSA``
-* ``RTE_CRYPTO_ASYM_XFORM_MODEX``
-
-
-Installation
-------------
-
-The OCTEON TX2 crypto PMD may be compiled natively on an OCTEON TX2 platform or
-cross-compiled on an x86 platform.
-
-Refer to :doc:`../platform/octeontx2` for instructions to build your DPDK
-application.
-
-.. note::
-
- The OCTEON TX2 crypto PMD uses services from the kernel mode OCTEON TX2
- crypto PF driver in linux. This driver is included in the OCTEON TX SDK.
-
-Initialization
---------------
-
-List the CPT PF devices available on your OCTEON TX2 platform:
-
-.. code-block:: console
-
- lspci -d:a0fd
-
-``a0fd`` is the CPT PF device id. You should see output similar to:
-
-.. code-block:: console
-
- 0002:10:00.0 Class 1080: Device 177d:a0fd
-
-Set ``sriov_numvfs`` on the CPT PF device, to create a VF:
-
-.. code-block:: console
-
- echo 1 > /sys/bus/pci/drivers/octeontx2-cpt/0002:10:00.0/sriov_numvfs
-
-Bind the CPT VF device to the vfio_pci driver:
-
-.. code-block:: console
-
- echo '177d a0fe' > /sys/bus/pci/drivers/vfio-pci/new_id
- echo 0002:10:00.1 > /sys/bus/pci/devices/0002:10:00.1/driver/unbind
- echo 0002:10:00.1 > /sys/bus/pci/drivers/vfio-pci/bind
-
-Another way to bind the VF would be to use the ``dpdk-devbind.py`` script:
-
-.. code-block:: console
-
- cd <dpdk directory>
- ./usertools/dpdk-devbind.py -u 0002:10:00.1
- ./usertools/dpdk-devbind.py -b vfio-pci 0002:10.00.1
-
-.. note::
-
- * For CN98xx SoC, it is recommended to use even and odd DBDF VFs to achieve
- higher performance as even VF uses one crypto engine and odd one uses
- another crypto engine.
-
- * Ensure that sufficient huge pages are available for your application::
-
- dpdk-hugepages.py --setup 4G --pagesize 512M
-
- Refer to :ref:`linux_gsg_hugepages` for more details.
-
-Debugging Options
------------------
-
-.. _table_octeontx2_crypto_debug_options:
-
-.. table:: OCTEON TX2 crypto PMD debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | CPT | --log-level='pmd\.crypto\.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
-
-Testing
--------
-
-The symmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test
-application:
-
-.. code-block:: console
-
- ./dpdk-test
- RTE>>cryptodev_octeontx2_autotest
-
-The asymmetric crypto operations on OCTEON TX2 crypto PMD may be verified by running the test
-application:
-
-.. code-block:: console
-
- ./dpdk-test
- RTE>>cryptodev_octeontx2_asym_autotest
-
-
-Lookaside IPsec Support
------------------------
-
-The OCTEON TX2 SoC can accelerate IPsec traffic in lookaside protocol mode,
-with its **cryptographic accelerator (CPT)**. ``OCTEON TX2 crypto PMD`` implements
-this as an ``RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL`` offload.
-
-Refer to :doc:`../prog_guide/rte_security` for more details on protocol offloads.
-
-This feature can be tested with ipsec-secgw sample application.
-
-
-Features supported
-~~~~~~~~~~~~~~~~~~
-
-* IPv4
-* IPv6
-* ESP
-* Tunnel mode
-* Transport mode(IPv4)
-* ESN
-* Anti-replay
-* UDP Encapsulation
-* AES-128/192/256-GCM
-* AES-128/192/256-CBC-SHA1-HMAC
-* AES-128/192/256-CBC-SHA256-128-HMAC
diff --git a/doc/guides/dmadevs/cnxk.rst b/doc/guides/dmadevs/cnxk.rst
index da2dd59071..418b9a9d63 100644
--- a/doc/guides/dmadevs/cnxk.rst
+++ b/doc/guides/dmadevs/cnxk.rst
@@ -7,7 +7,7 @@ CNXK DMA Device Driver
======================
The ``cnxk`` dmadev driver provides a poll-mode driver (PMD) for Marvell DPI DMA
-Hardware Accelerator block found in OCTEONTX2 and OCTEONTX3 family of SoCs.
+Hardware Accelerator block found in OCTEON 9 and OCTEON 10 family of SoCs.
Each DMA queue is exposed as a VF function when SRIOV is enabled.
The block supports following modes of DMA transfers:
diff --git a/doc/guides/eventdevs/features/octeontx2.ini b/doc/guides/eventdevs/features/octeontx2.ini
deleted file mode 100644
index 05b84beb6e..0000000000
--- a/doc/guides/eventdevs/features/octeontx2.ini
+++ /dev/null
@@ -1,30 +0,0 @@
-;
-; Supported features of the 'octeontx2' eventdev driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Scheduling Features]
-queue_qos = Y
-distributed_sched = Y
-queue_all_types = Y
-nonseq_mode = Y
-runtime_port_link = Y
-multiple_queue_port = Y
-carry_flow_id = Y
-maintenance_free = Y
-
-[Eth Rx adapter Features]
-internal_port = Y
-multi_eventq = Y
-
-[Eth Tx adapter Features]
-internal_port = Y
-
-[Crypto adapter Features]
-internal_port_op_new = Y
-internal_port_op_fwd = Y
-internal_port_qp_ev_bind = Y
-
-[Timer adapter Features]
-internal_port = Y
-periodic = Y
diff --git a/doc/guides/eventdevs/index.rst b/doc/guides/eventdevs/index.rst
index b11657f7ae..eed19ad28c 100644
--- a/doc/guides/eventdevs/index.rst
+++ b/doc/guides/eventdevs/index.rst
@@ -19,5 +19,4 @@ application through the eventdev API.
dsw
sw
octeontx
- octeontx2
opdl
diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
deleted file mode 100644
index 0fa57abfa3..0000000000
--- a/doc/guides/eventdevs/octeontx2.rst
+++ /dev/null
@@ -1,178 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-OCTEON TX2 SSO Eventdev Driver
-===============================
-
-The OCTEON TX2 SSO PMD (**librte_event_octeontx2**) provides poll mode
-eventdev driver support for the inbuilt event device found in the **Marvell OCTEON TX2**
-SoC family.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Features
---------
-
-Features of the OCTEON TX2 SSO PMD are:
-
-- 256 Event queues
-- 26 (dual) and 52 (single) Event ports
-- HW event scheduler
-- Supports 1M flows per event queue
-- Flow based event pipelining
-- Flow pinning support in flow based event pipelining
-- Queue based event pipelining
-- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
-- Event scheduling QoS based on event queue priority
-- Open system with configurable amount of outstanding events limited only by
- DRAM
-- HW accelerated dequeue timeout support to enable power management
-- HW managed event timers support through TIM, with high precision and
- time granularity of 2.5us.
-- Up to 256 TIM rings aka event timer adapters.
-- Up to 8 rings traversed in parallel.
-- HW managed packets enqueued from ethdev to eventdev exposed through event eth
- RX adapter.
-- N:1 ethernet device Rx queue to Event queue mapping.
-- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
- capability while maintaining receive packet order.
-- Full Rx/Tx offload support defined through ethdev queue config.
-
-Prerequisites and Compilation procedure
----------------------------------------
-
- See :doc:`../platform/octeontx2` for setup information.
-
-
-Runtime Config Options
-----------------------
-
-- ``Maximum number of in-flight events`` (default ``8192``)
-
- In **Marvell OCTEON TX2** the max number of in-flight events are only limited
- by DRAM size, the ``xae_cnt`` devargs parameter is introduced to provide
- upper limit for in-flight events.
- For example::
-
- -a 0002:0e:00.0,xae_cnt=16384
-
-- ``Force legacy mode``
-
- The ``single_ws`` devargs parameter is introduced to force legacy mode i.e
- single workslot mode in SSO and disable the default dual workslot mode.
- For example::
-
- -a 0002:0e:00.0,single_ws=1
-
-- ``Event Group QoS support``
-
- SSO GGRPs i.e. queue uses DRAM & SRAM buffers to hold in-flight
- events. By default the buffers are assigned to the SSO GGRPs to
- satisfy minimum HW requirements. SSO is free to assign the remaining
- buffers to GGRPs based on a preconfigured threshold.
- We can control the QoS of SSO GGRP by modifying the above mentioned
- thresholds. GGRPs that have higher importance can be assigned higher
- thresholds than the rest. The dictionary format is as follows
- [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] expressed in percentages, 0 represents
- default.
- For example::
-
- -a 0002:0e:00.0,qos=[1-50-50-50]
-
-- ``TIM disable NPA``
-
- By default chunks are allocated from NPA then TIM can automatically free
- them when traversing the list of chunks. The ``tim_disable_npa`` devargs
- parameter disables NPA and uses software mempool to manage chunks
- For example::
-
- -a 0002:0e:00.0,tim_disable_npa=1
-
-- ``TIM modify chunk slots``
-
- The ``tim_chnk_slots`` devargs can be used to modify number of chunk slots.
- Chunks are used to store event timers, a chunk can be visualised as an array
- where the last element points to the next chunk and rest of them are used to
- store events. TIM traverses the list of chunks and enqueues the event timers
- to SSO. The default value is 255 and the max value is 4095.
- For example::
-
- -a 0002:0e:00.0,tim_chnk_slots=1023
-
-- ``TIM enable arm/cancel statistics``
-
- The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
- event timer adapter.
- For example::
-
- -a 0002:0e:00.0,tim_stats_ena=1
-
-- ``TIM limit max rings reserved``
-
- The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
- rings i.e. event timer adapter reserved on probe. Since, TIM rings are HW
- resources we can avoid starving other applications by not grabbing all the
- rings.
- For example::
-
- -a 0002:0e:00.0,tim_rings_lmt=5
-
-- ``TIM ring control internal parameters``
-
- When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to
- control each TIM rings internal parameters uniquely. The following dict
- format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents
- default values.
- For Example::
-
- -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:0e:00.0,npa_lock_mask=0xf
-
-- ``Force Rx Back pressure``
-
- Force Rx back pressure when same mempool is used across ethernet device
- connected to event device.
-
- For example::
-
- -a 0002:0e:00.0,force_rx_bp=1
-
-Debugging Options
------------------
-
-.. _table_octeontx2_event_debug_options:
-
-.. table:: OCTEON TX2 event device debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | SSO | --log-level='pmd\.event\.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | TIM | --log-level='pmd\.event\.octeontx2\.timer,8' |
- +---+------------+-------------------------------------------------------+
-
-Limitations
------------
-
-Rx adapter support
-~~~~~~~~~~~~~~~~~~
-
-Using the same mempool for all the ethernet device ports connected to
-event device would cause back pressure to be asserted only on the first
-ethernet device.
-Back pressure is automatically disabled when using same mempool for all the
-ethernet devices connected to event device to override this applications can
-use `force_rx_bp=1` device arguments.
-Using unique mempool per each ethernet device is recommended when they are
-connected to event device.
diff --git a/doc/guides/mempool/index.rst b/doc/guides/mempool/index.rst
index ce53bc1ac7..e4b6ee7d31 100644
--- a/doc/guides/mempool/index.rst
+++ b/doc/guides/mempool/index.rst
@@ -13,6 +13,5 @@ application through the mempool API.
cnxk
octeontx
- octeontx2
ring
stack
diff --git a/doc/guides/mempool/octeontx2.rst b/doc/guides/mempool/octeontx2.rst
deleted file mode 100644
index 1272c1e72b..0000000000
--- a/doc/guides/mempool/octeontx2.rst
+++ /dev/null
@@ -1,92 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-OCTEON TX2 NPA Mempool Driver
-=============================
-
-The OCTEON TX2 NPA PMD (**librte_mempool_octeontx2**) provides mempool
-driver support for the integrated mempool device found in **Marvell OCTEON TX2** SoC family.
-
-More information about OCTEON TX2 SoC can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Features
---------
-
-OCTEON TX2 NPA PMD supports:
-
-- Up to 128 NPA LFs
-- 1M Pools per LF
-- HW mempool manager
-- Ethdev Rx buffer allocation in HW to save CPU cycles in the Rx path.
-- Ethdev Tx buffer recycling in HW to save CPU cycles in the Tx path.
-
-Prerequisites and Compilation procedure
----------------------------------------
-
- See :doc:`../platform/octeontx2` for setup information.
-
-Pre-Installation Configuration
-------------------------------
-
-
-Runtime Config Options
-~~~~~~~~~~~~~~~~~~~~~~
-
-- ``Maximum number of mempools per application`` (default ``128``)
-
- The maximum number of mempools per application needs to be configured on
- HW during mempool driver initialization. HW can support up to 1M mempools,
- Since each mempool costs set of HW resources, the ``max_pools`` ``devargs``
- parameter is being introduced to configure the number of mempools required
- for the application.
- For example::
-
- -a 0002:02:00.0,max_pools=512
-
- With the above configuration, the driver will set up only 512 mempools for
- the given application to save HW resources.
-
-.. note::
-
- Since this configuration is per application, the end user needs to
- provide ``max_pools`` parameter to the first PCIe device probed by the given
- application.
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:02:00.0,npa_lock_mask=0xf
-
-Debugging Options
-~~~~~~~~~~~~~~~~~
-
-.. _table_octeontx2_mempool_debug_options:
-
-.. table:: OCTEON TX2 mempool debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | NPA | --log-level='pmd\.mempool.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
-
-Standalone mempool device
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
- The ``usertools/dpdk-devbind.py`` script shall enumerate all the mempool devices
- available in the system. In order to avoid, the end user to bind the mempool
- device prior to use ethdev and/or eventdev device, the respective driver
- configures an NPA LF and attach to the first probed ethdev or eventdev device.
- In case, if end user need to run mempool as a standalone device
- (without ethdev or eventdev), end user needs to bind a mempool device using
- ``usertools/dpdk-devbind.py``
-
- Example command to run ``mempool_autotest`` test with standalone OCTEONTX2 NPA device::
-
- echo "mempool_autotest" | <build_dir>/app/test/dpdk-test -c 0xf0 --mbuf-pool-ops-name="octeontx2_npa"
diff --git a/doc/guides/nics/cnxk.rst b/doc/guides/nics/cnxk.rst
index 84f9865654..2119ba51c8 100644
--- a/doc/guides/nics/cnxk.rst
+++ b/doc/guides/nics/cnxk.rst
@@ -178,7 +178,7 @@ Runtime Config Options
* ``rss_adder<7:0> = flow_tag<7:0>``
Latter one aligns with standard NIC behavior vs former one is a legacy
- RSS adder scheme used in OCTEON TX2 products.
+ RSS adder scheme used in OCTEON 9 products.
By default, the driver runs in the latter mode.
Setting this flag to 1 to select the legacy mode.
@@ -291,7 +291,7 @@ Limitations
The OCTEON CN9K/CN10K SoC family NIC has inbuilt HW assisted external mempool manager.
``net_cnxk`` PMD only works with ``mempool_cnxk`` mempool handler
as it is performance wise most effective way for packet allocation and Tx buffer
-recycling on OCTEON TX2 SoC platform.
+recycling on OCTEON 9 SoC platform.
CRC stripping
~~~~~~~~~~~~~
diff --git a/doc/guides/nics/features/octeontx2.ini b/doc/guides/nics/features/octeontx2.ini
deleted file mode 100644
index bf0c2890f2..0000000000
--- a/doc/guides/nics/features/octeontx2.ini
+++ /dev/null
@@ -1,97 +0,0 @@
-;
-; Supported features of the 'octeontx2' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Rx interrupt = Y
-Lock-free Tx queue = Y
-SR-IOV = Y
-Multiprocess aware = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-MTU update = Y
-TSO = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-Inline protocol = Y
-VLAN filter = Y
-Flow control = Y
-Rate limitation = Y
-Scattered Rx = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Timesync = Y
-Timestamp offload = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Stats per queue = Y
-Extended stats = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
-
-[rte_flow items]
-any = Y
-arp_eth_ipv4 = Y
-esp = Y
-eth = Y
-e_tag = Y
-geneve = Y
-gre = Y
-gre_key = Y
-gtpc = Y
-gtpu = Y
-higig2 = Y
-icmp = Y
-ipv4 = Y
-ipv6 = Y
-ipv6_ext = Y
-mpls = Y
-nvgre = Y
-raw = Y
-sctp = Y
-tcp = Y
-udp = Y
-vlan = Y
-vxlan = Y
-vxlan_gpe = Y
-
-[rte_flow actions]
-count = Y
-drop = Y
-flag = Y
-mark = Y
-of_pop_vlan = Y
-of_push_vlan = Y
-of_set_vlan_pcp = Y
-of_set_vlan_vid = Y
-pf = Y
-port_id = Y
-port_representor = Y
-queue = Y
-rss = Y
-security = Y
-vf = Y
diff --git a/doc/guides/nics/features/octeontx2_vec.ini b/doc/guides/nics/features/octeontx2_vec.ini
deleted file mode 100644
index c405db7cf9..0000000000
--- a/doc/guides/nics/features/octeontx2_vec.ini
+++ /dev/null
@@ -1,48 +0,0 @@
-;
-; Supported features of the 'octeontx2_vec' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Lock-free Tx queue = Y
-SR-IOV = Y
-Multiprocess aware = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-MTU update = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-VLAN filter = Y
-Flow control = Y
-Rate limitation = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Extended stats = Y
-Stats per queue = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/features/octeontx2_vf.ini b/doc/guides/nics/features/octeontx2_vf.ini
deleted file mode 100644
index 5ac7a49a5c..0000000000
--- a/doc/guides/nics/features/octeontx2_vf.ini
+++ /dev/null
@@ -1,45 +0,0 @@
-;
-; Supported features of the 'octeontx2_vf' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Lock-free Tx queue = Y
-Multiprocess aware = Y
-Rx interrupt = Y
-Link status = Y
-Link status event = Y
-Runtime Rx queue setup = Y
-Runtime Tx queue setup = Y
-Burst mode info = Y
-Fast mbuf free = Y
-Free Tx mbuf on demand = Y
-Queue start/stop = Y
-TSO = Y
-RSS hash = Y
-RSS key update = Y
-RSS reta update = Y
-Inner RSS = Y
-Inline protocol = Y
-VLAN filter = Y
-Rate limitation = Y
-Scattered Rx = Y
-VLAN offload = Y
-QinQ offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Inner L3 checksum = Y
-Inner L4 checksum = Y
-Packet type parsing = Y
-Rx descriptor status = Y
-Tx descriptor status = Y
-Basic stats = Y
-Extended stats = Y
-Stats per queue = Y
-FW version = Y
-Module EEPROM dump = Y
-Registers dump = Y
-Linux = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 1c94caccea..f48e9f815c 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -52,7 +52,6 @@ Network Interface Controller Drivers
ngbe
null
octeontx
- octeontx2
octeontx_ep
pfe
qede
diff --git a/doc/guides/nics/octeontx2.rst b/doc/guides/nics/octeontx2.rst
deleted file mode 100644
index 4ce067f2c5..0000000000
--- a/doc/guides/nics/octeontx2.rst
+++ /dev/null
@@ -1,465 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(C) 2019 Marvell International Ltd.
-
-OCTEON TX2 Poll Mode driver
-===========================
-
-The OCTEON TX2 ETHDEV PMD (**librte_net_octeontx2**) provides poll mode ethdev
-driver support for the inbuilt network device found in **Marvell OCTEON TX2**
-SoC family as well as for their virtual functions (VF) in SR-IOV context.
-
-More information can be found at `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.
-
-Features
---------
-
-Features of the OCTEON TX2 Ethdev PMD are:
-
-- Packet type information
-- Promiscuous mode
-- Jumbo frames
-- SR-IOV VF
-- Lock-free Tx queue
-- Multiple queues for TX and RX
-- Receiver Side Scaling (RSS)
-- MAC/VLAN filtering
-- Multicast MAC filtering
-- Generic flow API
-- Inner and Outer Checksum offload
-- VLAN/QinQ stripping and insertion
-- Port hardware statistics
-- Link state information
-- Link flow control
-- MTU update
-- Scatter-Gather IO support
-- Vector Poll mode driver
-- Debug utilities - Context dump and error interrupt support
-- IEEE1588 timestamping
-- HW offloaded `ethdev Rx queue` to `eventdev event queue` packet injection
-- Support Rx interrupt
-- Inline IPsec processing support
-- :ref:`Traffic Management API <otx2_tmapi>`
-
-Prerequisites
--------------
-
-See :doc:`../platform/octeontx2` for setup information.
-
-
-Driver compilation and testing
-------------------------------
-
-Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
-for details.
-
-#. Running testpmd:
-
- Follow instructions available in the document
- :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
- to run testpmd.
-
- Example output:
-
- .. code-block:: console
-
- ./<build_dir>/app/dpdk-testpmd -c 0x300 -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
- EAL: Detected 24 lcore(s)
- EAL: Detected 1 NUMA nodes
- EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
- EAL: No available hugepages reported in hugepages-2048kB
- EAL: Probing VFIO support...
- EAL: VFIO support initialized
- EAL: PCI device 0002:02:00.0 on NUMA socket 0
- EAL: probe driver: 177d:a063 net_octeontx2
- EAL: using IOMMU type 1 (Type 1)
- testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=267456, size=2176, socket=0
- testpmd: preferred mempool ops selected: octeontx2_npa
- Configuring Port 0 (socket 0)
- PMD: Port 0: Link Up - speed 40000 Mbps - full-duplex
-
- Port 0: link state change event
- Port 0: 36:10:66:88:7A:57
- Checking link statuses...
- Done
- No commandline core given, start packet forwarding
- io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
- Logical Core 9 (socket 0) forwards packets on 1 streams:
- RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
-
- io packet forwarding packets/burst=32
- nb forwarding cores=1 - nb forwarding ports=1
- port 0: RX queue number: 1 Tx queue number: 1
- Rx offloads=0x0 Tx offloads=0x10000
- RX queue: 0
- RX desc=512 - RX free threshold=0
- RX threshold registers: pthresh=0 hthresh=0 wthresh=0
- RX Offloads=0x0
- TX queue: 0
- TX desc=512 - TX free threshold=0
- TX threshold registers: pthresh=0 hthresh=0 wthresh=0
- TX offloads=0x10000 - TX RS bit threshold=0
- Press enter to exit
-
-Runtime Config Options
-----------------------
-
-- ``Rx&Tx scalar mode enable`` (default ``0``)
-
- Ethdev supports both scalar and vector mode, it may be selected at runtime
- using ``scalar_enable`` ``devargs`` parameter.
-
-- ``RSS reta size`` (default ``64``)
-
- RSS redirection table size may be configured during runtime using ``reta_size``
- ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,reta_size=256
-
- With the above configuration, reta table of size 256 is populated.
-
-- ``Flow priority levels`` (default ``3``)
-
- RTE Flow priority levels can be configured during runtime using
- ``flow_max_priority`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,flow_max_priority=10
-
- With the above configuration, priority level was set to 10 (0-9). Max
- priority level supported is 32.
-
-- ``Reserve Flow entries`` (default ``8``)
-
- RTE flow entries can be pre allocated and the size of pre allocation can be
- selected runtime using ``flow_prealloc_size`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,flow_prealloc_size=4
-
- With the above configuration, pre alloc size was set to 4. Max pre alloc
- size supported is 32.
-
-- ``Max SQB buffer count`` (default ``512``)
-
- Send queue descriptor buffer count may be limited during runtime using
- ``max_sqb_count`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,max_sqb_count=64
-
- With the above configuration, each send queue's descriptor buffer count is
- limited to a maximum of 64 buffers.
-
-- ``Switch header enable`` (default ``none``)
-
- A port can be configured to a specific switch header type by using
- ``switch_header`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,switch_header="higig2"
-
- With the above configuration, higig2 will be enabled on that port and the
- traffic on this port should be higig2 traffic only. Supported switch header
- types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2" and "vlan_exdsa".
-
-- ``RSS tag as XOR`` (default ``0``)
-
- C0 HW revision onward, The HW gives an option to configure the RSS adder as
-
- * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``
-
- * ``rss_adder<7:0> = flow_tag<7:0>``
-
- Latter one aligns with standard NIC behavior vs former one is a legacy
- RSS adder scheme used in OCTEON TX2 products.
-
- By default, the driver runs in the latter mode from C0 HW revision onward.
- Setting this flag to 1 to select the legacy mode.
-
- For example to select the legacy mode(RSS tag adder as XOR)::
-
- -a 0002:02:00.0,tag_as_xor=1
-
-- ``Max SPI for inbound inline IPsec`` (default ``1``)
-
- Max SPI supported for inbound inline IPsec processing can be specified by
- ``ipsec_in_max_spi`` ``devargs`` parameter.
-
- For example::
-
- -a 0002:02:00.0,ipsec_in_max_spi=128
-
- With the above configuration, application can enable inline IPsec processing
- on 128 SAs (SPI 0-127).
-
-- ``Lock Rx contexts in NDC cache``
-
- Lock Rx contexts in NDC cache by using ``lock_rx_ctx`` parameter.
-
- For example::
-
- -a 0002:02:00.0,lock_rx_ctx=1
-
-- ``Lock Tx contexts in NDC cache``
-
- Lock Tx contexts in NDC cache by using ``lock_tx_ctx`` parameter.
-
- For example::
-
- -a 0002:02:00.0,lock_tx_ctx=1
-
-.. note::
-
- Above devarg parameters are configurable per device, user needs to pass the
- parameters to all the PCIe devices if application requires to configure on
- all the ethdev ports.
-
-- ``Lock NPA contexts in NDC``
-
- Lock NPA aura and pool contexts in NDC cache.
- The device args take hexadecimal bitmask where each bit represent the
- corresponding aura/pool id.
-
- For example::
-
- -a 0002:02:00.0,npa_lock_mask=0xf
-
-.. _otx2_tmapi:
-
-Traffic Management API
-----------------------
-
-OCTEON TX2 PMD supports generic DPDK Traffic Management API which allows to
-configure the following features:
-
-#. Hierarchical scheduling
-#. Single rate - Two color, Two rate - Three color shaping
-
-Both DWRR and Static Priority(SP) hierarchical scheduling is supported.
-
-Every parent can have atmost 10 SP Children and unlimited DWRR children.
-
-Both PF & VF supports traffic management API with PF supporting 6 levels
-and VF supporting 5 levels of topology.
-
-Limitations
------------
-
-``mempool_octeontx2`` external mempool handler dependency
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The OCTEON TX2 SoC family NIC has inbuilt HW assisted external mempool manager.
-``net_octeontx2`` PMD only works with ``mempool_octeontx2`` mempool handler
-as it is performance wise most effective way for packet allocation and Tx buffer
-recycling on OCTEON TX2 SoC platform.
-
-CRC stripping
-~~~~~~~~~~~~~
-
-The OCTEON TX2 SoC family NICs strip the CRC for every packet being received by
-the host interface irrespective of the offload configuration.
-
-Multicast MAC filtering
-~~~~~~~~~~~~~~~~~~~~~~~
-
-``net_octeontx2`` PMD supports multicast mac filtering feature only on physical
-function devices.
-
-SDP interface support
-~~~~~~~~~~~~~~~~~~~~~
-OCTEON TX2 SDP interface support is limited to PF device, No VF support.
-
-Inline Protocol Processing
-~~~~~~~~~~~~~~~~~~~~~~~~~~
-``net_octeontx2`` PMD doesn't support the following features for packets to be
-inline protocol processed.
-- TSO offload
-- VLAN/QinQ offload
-- Fragmentation
-
-Debugging Options
------------------
-
-.. _table_octeontx2_ethdev_debug_options:
-
-.. table:: OCTEON TX2 ethdev debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | NIX | --log-level='pmd\.net.octeontx2,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | NPC | --log-level='pmd\.net.octeontx2\.flow,8' |
- +---+------------+-------------------------------------------------------+
-
-RTE Flow Support
-----------------
-
-The OCTEON TX2 SoC family NIC has support for the following patterns and
-actions.
-
-Patterns:
-
-.. _table_octeontx2_supported_flow_item_types:
-
-.. table:: Item types
-
- +----+--------------------------------+
- | # | Pattern Type |
- +====+================================+
- | 1 | RTE_FLOW_ITEM_TYPE_ETH |
- +----+--------------------------------+
- | 2 | RTE_FLOW_ITEM_TYPE_VLAN |
- +----+--------------------------------+
- | 3 | RTE_FLOW_ITEM_TYPE_E_TAG |
- +----+--------------------------------+
- | 4 | RTE_FLOW_ITEM_TYPE_IPV4 |
- +----+--------------------------------+
- | 5 | RTE_FLOW_ITEM_TYPE_IPV6 |
- +----+--------------------------------+
- | 6 | RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4|
- +----+--------------------------------+
- | 7 | RTE_FLOW_ITEM_TYPE_MPLS |
- +----+--------------------------------+
- | 8 | RTE_FLOW_ITEM_TYPE_ICMP |
- +----+--------------------------------+
- | 9 | RTE_FLOW_ITEM_TYPE_UDP |
- +----+--------------------------------+
- | 10 | RTE_FLOW_ITEM_TYPE_TCP |
- +----+--------------------------------+
- | 11 | RTE_FLOW_ITEM_TYPE_SCTP |
- +----+--------------------------------+
- | 12 | RTE_FLOW_ITEM_TYPE_ESP |
- +----+--------------------------------+
- | 13 | RTE_FLOW_ITEM_TYPE_GRE |
- +----+--------------------------------+
- | 14 | RTE_FLOW_ITEM_TYPE_NVGRE |
- +----+--------------------------------+
- | 15 | RTE_FLOW_ITEM_TYPE_VXLAN |
- +----+--------------------------------+
- | 16 | RTE_FLOW_ITEM_TYPE_GTPC |
- +----+--------------------------------+
- | 17 | RTE_FLOW_ITEM_TYPE_GTPU |
- +----+--------------------------------+
- | 18 | RTE_FLOW_ITEM_TYPE_GENEVE |
- +----+--------------------------------+
- | 19 | RTE_FLOW_ITEM_TYPE_VXLAN_GPE |
- +----+--------------------------------+
- | 20 | RTE_FLOW_ITEM_TYPE_IPV6_EXT |
- +----+--------------------------------+
- | 21 | RTE_FLOW_ITEM_TYPE_VOID |
- +----+--------------------------------+
- | 22 | RTE_FLOW_ITEM_TYPE_ANY |
- +----+--------------------------------+
- | 23 | RTE_FLOW_ITEM_TYPE_GRE_KEY |
- +----+--------------------------------+
- | 24 | RTE_FLOW_ITEM_TYPE_HIGIG2 |
- +----+--------------------------------+
- | 25 | RTE_FLOW_ITEM_TYPE_RAW |
- +----+--------------------------------+
-
-.. note::
-
- ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing
- bits in the GRE header are equal to 0.
-
-Actions:
-
-.. _table_octeontx2_supported_ingress_action_types:
-
-.. table:: Ingress action types
-
- +----+-----------------------------------------+
- | # | Action Type |
- +====+=========================================+
- | 1 | RTE_FLOW_ACTION_TYPE_VOID |
- +----+-----------------------------------------+
- | 2 | RTE_FLOW_ACTION_TYPE_MARK |
- +----+-----------------------------------------+
- | 3 | RTE_FLOW_ACTION_TYPE_FLAG |
- +----+-----------------------------------------+
- | 4 | RTE_FLOW_ACTION_TYPE_COUNT |
- +----+-----------------------------------------+
- | 5 | RTE_FLOW_ACTION_TYPE_DROP |
- +----+-----------------------------------------+
- | 6 | RTE_FLOW_ACTION_TYPE_QUEUE |
- +----+-----------------------------------------+
- | 7 | RTE_FLOW_ACTION_TYPE_RSS |
- +----+-----------------------------------------+
- | 8 | RTE_FLOW_ACTION_TYPE_SECURITY |
- +----+-----------------------------------------+
- | 9 | RTE_FLOW_ACTION_TYPE_PF |
- +----+-----------------------------------------+
- | 10 | RTE_FLOW_ACTION_TYPE_VF |
- +----+-----------------------------------------+
- | 11 | RTE_FLOW_ACTION_TYPE_OF_POP_VLAN |
- +----+-----------------------------------------+
- | 12 | RTE_FLOW_ACTION_TYPE_PORT_ID |
- +----+-----------------------------------------+
- | 13 | RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR |
- +----+-----------------------------------------+
-
-.. note::
-
-   ``RTE_FLOW_ACTION_TYPE_PORT_ID`` and ``RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR``
-   are supported only between a PF and its VFs.
-
-.. _table_octeontx2_supported_egress_action_types:
-
-.. table:: Egress action types
-
- +----+-----------------------------------------+
- | # | Action Type |
- +====+=========================================+
- | 1 | RTE_FLOW_ACTION_TYPE_COUNT |
- +----+-----------------------------------------+
- | 2 | RTE_FLOW_ACTION_TYPE_DROP |
- +----+-----------------------------------------+
- | 3 | RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN |
- +----+-----------------------------------------+
- | 4 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID |
- +----+-----------------------------------------+
- | 5 | RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP |
- +----+-----------------------------------------+
-
-Custom protocols supported in RTE Flow
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-``RTE_FLOW_ITEM_TYPE_RAW`` can be used to parse the custom protocols below.
-
-* ``vlan_exdsa`` and ``exdsa`` can be parsed at L2 level.
-* ``NGIO`` can be parsed at L3 level.
-
-For ``vlan_exdsa`` and ``exdsa``, the port has to be configured with the
-respective switch header.
-
-For example::
-
- -a 0002:02:00.0,switch_header="vlan_exdsa"
-
-The following fields of ``struct rte_flow_item_raw`` are used to specify the
-pattern.
-
-- ``relative`` Selects the layer at which parsing is done.
-
- - 0 for ``exdsa`` and ``vlan_exdsa``.
-
- - 1 for ``NGIO``.
-
-- ``offset`` The offset in the header where the pattern should be matched.
-- ``length`` Length of the pattern.
-- ``pattern`` Pattern as a byte string.
-
-Example usage in testpmd::
-
-   ./dpdk-testpmd -c 3 -a 0002:02:00.0,switch_header=exdsa -- -i \
- --rx-offloads=0x00080000 --rxq 8 --txq 8
- testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
- spec ab pattern mask ab offset is 4 / end actions queue index 1 / end
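-
-The same rule can be created programmatically through the rte_flow API
-(a minimal sketch mirroring the testpmd command above; ``port_id`` and the
-one-byte pattern are placeholders):
-
-.. code-block:: c
-
-   #include <rte_flow.h>
-
-   static const uint8_t raw_pattern[] = { 0xab };
-
-   struct rte_flow_item_raw raw_spec = {
-       .relative = 0,                  /* exdsa/vlan_exdsa: parse at L2 */
-       .offset = 4,                    /* match offset within the header */
-       .length = sizeof(raw_pattern),
-       .pattern = raw_pattern,
-   };
-   struct rte_flow_item_raw raw_mask = raw_spec;
-
-   struct rte_flow_item pattern[] = {
-       { .type = RTE_FLOW_ITEM_TYPE_ETH },
-       { .type = RTE_FLOW_ITEM_TYPE_RAW,
-         .spec = &raw_spec, .mask = &raw_mask },
-       { .type = RTE_FLOW_ITEM_TYPE_END },
-   };
-   struct rte_flow_action_queue queue = { .index = 1 };
-   struct rte_flow_action actions[] = {
-       { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
-       { .type = RTE_FLOW_ACTION_TYPE_END },
-   };
-   struct rte_flow_attr attr = { .ingress = 1 };
-   struct rte_flow_error flow_err;
-   struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
-                                           actions, &flow_err);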
diff --git a/doc/guides/nics/octeontx_ep.rst b/doc/guides/nics/octeontx_ep.rst
index b512ccfdab..2ec8a034b5 100644
--- a/doc/guides/nics/octeontx_ep.rst
+++ b/doc/guides/nics/octeontx_ep.rst
@@ -5,7 +5,7 @@ OCTEON TX EP Poll Mode driver
=============================
The OCTEON TX EP ETHDEV PMD (**librte_pmd_octeontx_ep**) provides poll mode
-ethdev driver support for the virtual functions (VF) of **Marvell OCTEON TX2**
+ethdev driver support for the virtual functions (VF) of **Marvell OCTEON 9**
and **Cavium OCTEON TX** families of adapters in SR-IOV context.
More information can be found at `Marvell Official Website
@@ -24,4 +24,4 @@ must be installed separately:
allocates resources such as number of VFs, input/output queues for itself and
the number of i/o queues each VF can use.
-See :doc:`../platform/octeontx2` for SDP interface information which provides PCIe endpoint support for a remote host.
+See :doc:`../platform/cnxk` for SDP interface information which provides PCIe endpoint support for a remote host.
diff --git a/doc/guides/platform/cnxk.rst b/doc/guides/platform/cnxk.rst
index 5213df3ccd..97e38c868c 100644
--- a/doc/guides/platform/cnxk.rst
+++ b/doc/guides/platform/cnxk.rst
@@ -13,6 +13,9 @@ More information about CN9K and CN10K SoC can be found at `Marvell Official Webs
Supported OCTEON cnxk SoCs
--------------------------
+- CN93xx
+- CN96xx
+- CN98xx
- CN106xx
- CNF105xx
@@ -583,6 +586,15 @@ Cross Compilation
Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
+CN9K:
+
+.. code-block:: console
+
+ meson build --cross-file config/arm/arm64_cn9k_linux_gcc
+ ninja -C build
+
+CN10K:
+
.. code-block:: console
meson build --cross-file config/arm/arm64_cn10k_linux_gcc
diff --git a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg b/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
deleted file mode 100644
index ecd575947a..0000000000
--- a/doc/guides/platform/img/octeontx2_packet_flow_hw_accelerators.svg
+++ /dev/null
@@ -1,2804 +0,0 @@
- [2,804 lines of Inkscape SVG markup elided with the deleted figure
-  "octeontx2_packet_flow_hw_accelerators.svg" (OCTEON TX2 packet flow through
-  hardware accelerators); SPDX-License-Identifier: BSD-3-Clause,
-  Copyright(c) 2019 Marvell International Ltd.]
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4"
- width="64.18129"
- height="45.550591"
- x="257.84918"
- y="331.71741" />
- <rect
- style="fill:#e9ddaf;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5-7);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4-9"
- width="64.18129"
- height="45.550591"
- x="325.84918"
- y="331.71741" />
- <rect
- style="opacity:1;fill:url(#radialGradient4342);fill-opacity:1;stroke:#6ba6fd;stroke-width:0.28768006;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3783-8"
- width="273.62766"
- height="54.131645"
- x="398.24258"
- y="328.00156" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-8);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-5"
- width="64.18129"
- height="45.550591"
- x="401.07309"
- y="331.47122" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-8);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-0"
- width="64.18129"
- height="45.550591"
- x="469.17358"
- y="331.37781" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-1-59);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-3"
- width="64.18129"
- height="45.550591"
- x="537.17358"
- y="331.37781" />
- <rect
- style="fill:#dde9af;fill-opacity:1;stroke:url(#linearGradient3608-9-1-5-73);stroke-width:0.26499999;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-3-6-4-6"
- width="64.18129"
- height="45.550591"
- x="605.17358"
- y="331.37781" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3"
- width="27.798103"
- height="21.434149"
- x="325.80197"
- y="117.21037" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8"
- width="27.798103"
- height="21.434149"
- x="325.2959"
- y="140.20857" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9"
- width="27.798103"
- height="21.434149"
- x="325.2959"
- y="164.20857" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5"
- width="27.798103"
- height="21.434149"
- x="356.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1"
- width="27.798103"
- height="21.434149"
- x="355.86447"
- y="140.38893" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2"
- width="27.798103"
- height="21.434149"
- x="355.86447"
- y="164.38893" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5"
- width="27.798103"
- height="21.434149"
- x="386.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9"
- width="27.798103"
- height="21.434149"
- x="385.86447"
- y="140.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6"
- width="27.798103"
- height="21.434149"
- x="385.86447"
- y="164.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-9"
- width="27.798103"
- height="21.434149"
- x="416.37054"
- y="117.39072" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-3"
- width="27.798103"
- height="21.434149"
- x="415.86447"
- y="140.38895" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8"
- width="27.798103"
- height="21.434149"
- x="415.86447"
- y="164.38896" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-5"
- width="27.798103"
- height="21.434149"
- x="324.61139"
- y="187.85849" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-0"
- width="27.798103"
- height="21.434149"
- x="355.17996"
- y="188.03886" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-0"
- width="27.798103"
- height="21.434149"
- x="385.17996"
- y="188.03888" />
- <rect
- style="opacity:1;fill:#ffeeaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-4"
- width="27.798103"
- height="21.434149"
- x="415.17996"
- y="188.03889" />
- <rect
- style="opacity:1;fill:#d7eef4;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.31139579;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-5"
- width="125.8186"
- height="100.36277"
- x="452.24075"
- y="208.56764" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-9"
- width="27.798103"
- height="21.434149"
- x="456.16949"
- y="213.05098" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-8"
- width="27.798103"
- height="21.434149"
- x="455.66342"
- y="236.04919" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-55"
- width="27.798103"
- height="21.434149"
- x="455.66342"
- y="260.04919" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-7"
- width="27.798103"
- height="21.434149"
- x="486.73807"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-5"
- width="27.798103"
- height="21.434149"
- x="486.23199"
- y="236.22954" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-3"
- width="27.798103"
- height="21.434149"
- x="486.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-2"
- width="27.798103"
- height="21.434149"
- x="516.73804"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-5"
- width="27.798103"
- height="21.434149"
- x="516.23199"
- y="236.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-1"
- width="27.798103"
- height="21.434149"
- x="516.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-5-5-9-6"
- width="27.798103"
- height="21.434149"
- x="546.73804"
- y="213.23134" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-1-9-3-1"
- width="27.798103"
- height="21.434149"
- x="546.23199"
- y="236.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-7"
- width="27.798103"
- height="21.434149"
- x="546.23199"
- y="260.22955" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-5-1"
- width="27.798103"
- height="21.434149"
- x="454.97891"
- y="283.6991" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-0-6"
- width="27.798103"
- height="21.434149"
- x="485.54749"
- y="283.87946" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-0-7"
- width="27.798103"
- height="21.434149"
- x="515.54749"
- y="283.87949" />
- <rect
- style="opacity:1;fill:#ffccaa;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.837071;stroke-linecap:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect3718-3-8-9-2-6-8-4-2"
- width="27.798103"
- height="21.434149"
- x="545.54749"
- y="283.87952" />
- <g
- id="g5089"
- transform="matrix(0.7206312,0,0,1.0073979,12.37404,-312.02679)"
- style="fill:#ff8080">
- <path
- inkscape:connector-curvature="0"
- d="m 64.439519,501.23542 v 5.43455 h 45.917801 v -5.43455 z"
- style="opacity:1;fill:#ff8080;fill-opacity:1;stroke:#6ba6fd;stroke-width:1.09656608;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:fill markers stroke"
- id="rect4455" />
- <path
- inkscape:connector-curvature="0"
- id="path5083"
- d="m 108.30535,494.82846 c 13.96414,8.6951 13.96414,8.40526 13.96414,8.40526 l -12.46798,9.85445 z"
- style="fill:#ff8080;stroke:#000000;stroke-width:0.53767502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1" />
- </g>
- <g
- id="g5089-4"
- transform="matrix(-0.6745281,0,0,0.97266112,143.12774,-266.3349)"
- style="fill:#000080;fill-opacity:1">
- <path
- inkscape:connector-curvature="0"
- d="m 64.439519,501.23542 v 5.43455 h 45.917801 v -5.43455 z"
- style="opacity:1;fill:#000080;fill-opacity:1;stroke:#6ba6fd;stroke-width:1.09656608;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:fill markers stroke"
- id="rect4455-9" />
- <path
- inkscape:connector-curvature="0"
- id="path5083-2"
- d="m 108.30535,494.82846 c 13.96414,8.6951 13.96414,8.40526 13.96414,8.40526 l -12.46798,9.85445 z"
- style="fill:#000080;stroke:#000000;stroke-width:0.53767502px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;fill-opacity:1" />
- </g>
- <flowRoot
- xml:space="preserve"
- id="flowRoot5112"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(52.199711,162.55901)"><flowRegion
- id="flowRegion5114"><rect
- id="rect5116"
- width="28.991377"
- height="19.79899"
- x="22.627417"
- y="64.897125" /></flowRegion><flowPara
- id="flowPara5118">Tx</flowPara></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot5112-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(49.878465,112.26812)"><flowRegion
- id="flowRegion5114-7"><rect
- id="rect5116-7"
- width="28.991377"
- height="19.79899"
- x="22.627417"
- y="64.897125" /></flowRegion><flowPara
- id="flowPara5118-5">Rx</flowPara></flowRoot> <path
- style="fill:none;stroke:#f60300;stroke-width:0.783;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:0.783, 0.78300000000000003;stroke-dashoffset:0;marker-start:url(#Arrow1Sstart);marker-end:url(#TriangleOutS)"
- d="m 116.81066,179.28348 v -11.31903 l -0.37893,-12.93605 0.37893,-5.25526 3.03134,-5.25526 4.16811,-2.82976 8.3362,-1.61701 h 7.19945 l 7.19946,2.02126 3.03135,2.02126 0.37892,2.02125 -0.37892,3.23401 -0.37892,7.27652 -0.37892,8.48927 -0.37892,14.55304"
- id="path8433"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="104.04285"
- y="144.86398"
- id="text9071"><tspan
- sodipodi:role="line"
- id="tspan9069"
- x="104.04285"
- y="144.86398"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333333px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">HW loop back device</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="59.542858"
- y="53.676483"
- id="text9621"><tspan
- sodipodi:role="line"
- id="tspan9619"
- x="59.542858"
- y="65.840889" /></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7-2-7-8-7-2-4-3-9-0-2-9-5-6-7-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="matrix(0.57822568,0,0,0.72506311,454.1297,247.6848)"><flowRegion
- id="flowRegion1855-0-1-3-66-99-9-2-5-4-1-1-1-4-0-5-4"><rect
- id="rect1857-5-1-5-2-6-1-4-9-3-8-1-8-5-7-9-1"
- width="162.09244"
- height="78.764809"
- x="120.20815"
- y="120.75856" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#5500d4"
- id="flowPara9723" /></flowRoot> <path
- style="fill:none;stroke:#fe0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow2Mend)"
- d="m 181.60025,194.22211 12.72792,-7.07106 14.14214,-2.82843 12.02081,0.70711 h 1.41422 v 0"
- id="path9797"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#marker10821)"
- d="m 179.47893,193.51501 3.53554,-14.14214 5.65685,-12.72792 16.97056,-9.19239 8.48528,-9.19238 14.84924,-7.77818 24.04163,-8.48528 18.38478,-6.36396 38.89087,-2.82843 h 12.02082 l -2.12132,-0.7071"
- id="path10453"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:0.70021206;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.70021208, 0.70021208;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3)"
- d="m 299.68795,188.0612 7.97521,-5.53298 8.86135,-2.2132 7.53214,0.5533 h 0.88614 v 0"
- id="path9797-9"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#fe0000;stroke-width:0.96708673;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.96708673, 0.96708673;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3-1)"
- d="m 300.49277,174.25976 7.49033,-11.23756 8.32259,-4.49504 7.07419,1.12376 h 0.83227 v 0"
- id="path9797-9-7"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#marker12747)"
- d="m 299.68708,196.34344 9.19239,7.77817 7.07107,1.41421 h 4.94974 v 0"
- id="path12737"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:url(#linearGradient14808);stroke-width:4.66056013;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:4.66056002, 4.66056002;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Send)"
- d="m 447.95767,168.30181 c 119.99171,0 119.99171,0 119.99171,0"
- id="path13236"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#808080;stroke-width:0.96708673;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.96708673, 0.96708673000000001;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow2Mend-3-1-6)"
- d="m 529.56098,142.71226 7.49033,-11.23756 8.32259,-4.49504 7.07419,1.12376 h 0.83227 v 0"
- id="path9797-9-7-3"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mend)"
- d="m 612.93538,222.50639 -5.65686,12.72792 -14.84924,3.53553 -14.14213,0.70711"
- id="path16128"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow1Mend);stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0"
- d="m 624.95619,220.38507 -3.53553,13.43502 -12.72792,14.84925 -9.19239,5.65685 -19.09188,2.82843 -1.41422,-0.70711 h -1.41421"
- id="path16130"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#Arrow1Mend);stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0"
- d="m 635.56279,221.09217 -7.77817,33.94113 -4.24264,6.36396 -8.48528,3.53553 -10.6066,4.94975 -19.09189,5.65685 -6.36396,3.53554"
- id="path16132"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:1.01083219;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.01083222, 1.01083221999999995;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-53)"
- d="m 456.03282,270.85761 -4.96024,14.83162 -13.02062,4.11988 -12.40058,0.82399"
- id="path16128-3"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:0.80101544;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.80101541, 0.80101540999999998;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-99)"
- d="m 341.29831,266.70565 -6.88826,6.70663 -18.08168,1.86296 -17.22065,0.37258"
- id="path16128-6"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00faf5;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mend)"
- d="m 219.78402,264.93279 -6.36396,-9.89949 -3.53554,-16.26346 -7.77817,-8.48528 -8.48528,-4.94975 -4.94975,-2.82842"
- id="path17144"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00db00;stroke-width:1.4;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1.4, 1.39999999999999991;stroke-dashoffset:0;marker-end:url(#marker17156);marker-start:url(#marker17550)"
- d="m 651.11914,221.09217 -7.07107,31.81981 -17.67766,34.64823 -21.21321,26.87005 -80.61017,1.41422 -86.97413,1.41421 -79.90306,-3.53553 -52.3259,1.41421 -24.04163,10.6066 -2.82843,1.41422"
- id="path17146"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#000000;stroke-width:1.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1.3, 1.30000000000000004;stroke-dashoffset:0;marker-start:url(#marker18096);marker-end:url(#marker18508)"
- d="M 659.60442,221.09217 C 656.776,327.86529 656.776,328.5724 656.776,328.5724"
- id="path18086"
- inkscape:connector-curvature="0" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot1853-7-2-7-8-7-2"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="matrix(0.57822568,0,0,0.72506311,137.7802,161.1139)"><flowRegion
- id="flowRegion1855-0-1-3-66-99-9"><rect
- id="rect1857-5-1-5-2-6-1"
- width="174.19844"
- height="91.867104"
- x="120.20815"
- y="120.75856" /></flowRegion><flowPara
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#5500d4"
- id="flowPara9188-8-4" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="155.96185"
- y="220.07472"
- id="text9071-6"><tspan
- sodipodi:role="line"
- x="158.29518"
- y="220.07472"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2100"> <tspan
- style="fill:#0000ff"
- id="tspan2327">Ethdev Ports </tspan></tspan><tspan
- sodipodi:role="line"
- x="155.96185"
- y="236.74139"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104">(NIX)</tspan></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot2106"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2108"><rect
- id="rect2110"
- width="42.1875"
- height="28.125"
- x="178.125"
- y="71.155365" /></flowRegion><flowPara
- id="flowPara2112" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2114"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2116"><rect
- id="rect2118"
- width="38.28125"
- height="28.90625"
- x="196.09375"
- y="74.280365" /></flowRegion><flowPara
- id="flowPara2120" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2122"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2124"><rect
- id="rect2126"
- width="39.0625"
- height="23.4375"
- x="186.71875"
- y="153.96786" /></flowRegion><flowPara
- id="flowPara2128" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="262.1366"
- y="172.08614"
- id="text9071-6-4"><tspan
- sodipodi:role="line"
- x="264.46994"
- y="172.08614"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0">Ingress </tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="188.75281"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176">Classification</tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="205.41946"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180">(NPC)</tspan><tspan
- sodipodi:role="line"
- x="262.1366"
- y="222.08614"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178" /><tspan
- sodipodi:role="line"
- x="262.1366"
- y="238.75281"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="261.26727"
- y="254.46307"
- id="text9071-6-4-9"><tspan
- sodipodi:role="line"
- x="263.60062"
- y="254.46307"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-0">Egress </tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="271.12973"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176-8">Classification</tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="287.79642"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180-9">(NPC)</tspan><tspan
- sodipodi:role="line"
- x="261.26727"
- y="304.46307"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-3" /><tspan
- sodipodi:role="line"
- x="261.26727"
- y="321.12973"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2174-7" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="362.7016"
- y="111.81297"
- id="text9071-4"><tspan
- sodipodi:role="line"
- id="tspan9069-8"
- x="362.7016"
- y="111.81297"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Rx Queues</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="488.21777"
- y="207.21898"
- id="text9071-4-3"><tspan
- sodipodi:role="line"
- id="tspan9069-8-8"
- x="488.21777"
- y="207.21898"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Tx Queues</tspan></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot2311"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2313"><rect
- id="rect2315"
- width="49.21875"
- height="41.40625"
- x="195.3125"
- y="68.811615" /></flowRegion><flowPara
- id="flowPara2317" /></flowRoot> <flowRoot
- xml:space="preserve"
- id="flowRoot2319"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion2321"><rect
- id="rect2323"
- width="40.625"
- height="39.0625"
- x="196.09375"
- y="69.592865" /></flowRegion><flowPara
- id="flowPara2325" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="382.20477"
- y="263.74432"
- id="text9071-6-4-6"><tspan
- sodipodi:role="line"
- x="382.20477"
- y="263.74432"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-9">Egress</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="280.41098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2176-3">Traffic Manager</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="297.07767"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2180-1">(NIX)</tspan><tspan
- sodipodi:role="line"
- x="382.20477"
- y="313.74432"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-6" /><tspan
- sodipodi:role="line"
- x="382.20477"
- y="330.41098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174-8" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="500.98602"
- y="154.02556"
- id="text9071-6-4-0"><tspan
- sodipodi:role="line"
- x="503.31937"
- y="154.02556"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2104-0-97">Scheduler </tspan><tspan
- sodipodi:role="line"
- x="500.98602"
- y="170.69223"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2389" /><tspan
- sodipodi:role="line"
- x="500.98602"
- y="187.35889"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2391">SSO</tspan><tspan
- sodipodi:role="line"
- x="500.98602"
- y="204.02556"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-60" /><tspan
- sodipodi:role="line"
- x="500.98602"
- y="220.69223"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2174-3" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="571.61627"
- y="119.24016"
- id="text9071-4-2"><tspan
- sodipodi:role="line"
- id="tspan9069-8-82"
- x="571.61627"
- y="119.24016"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Supports both poll mode and/or event mode</tspan><tspan
- sodipodi:role="line"
- x="571.61627"
- y="135.90683"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2416">by configuring scheduler</tspan><tspan
- sodipodi:role="line"
- x="571.61627"
- y="152.57349"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2418" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:center;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none"
- x="638.14227"
- y="192.46773"
- id="text9071-6-4-9-2"><tspan
- sodipodi:role="line"
- x="638.14227"
- y="192.46773"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2178-3-2">ARMv8</tspan><tspan
- sodipodi:role="line"
- x="638.14227"
- y="209.1344"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2499">Cores</tspan><tspan
- sodipodi:role="line"
- x="638.14227"
- y="225.80106"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan2174-7-8" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="180.24902"
- y="325.09399"
- id="text9071-4-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-7"
- x="180.24902"
- y="325.09399"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Hardware Libraries</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="487.8916"
- y="325.91599"
- id="text9071-4-1-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-7-1"
- x="487.8916"
- y="325.91599"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff">Software Libraries</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="81.178604"
- y="350.03149"
- id="text9071-4-18"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83"
- x="81.178604"
- y="350.03149"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Mempool</tspan><tspan
- sodipodi:role="line"
- x="81.178604"
- y="366.69815"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555">(NPA)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="151.09518"
- y="348.77365"
- id="text9071-4-18-9"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-3"
- x="151.09518"
- y="348.77365"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">Timer</tspan><tspan
- sodipodi:role="line"
- x="151.09518"
- y="365.44031"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-9">(TIM)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="222.56393"
- y="347.1174"
- id="text9071-4-18-0"><tspan
- sodipodi:role="line"
- x="222.56393"
- y="347.1174"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90">Crypto</tspan><tspan
- sodipodi:role="line"
- x="222.56393"
- y="363.78406"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601">(CPT)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="289.00229"
- y="347.69473"
- id="text9071-4-18-0-5"><tspan
- sodipodi:role="line"
- x="289.00229"
- y="347.69473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90-9">Compress</tspan><tspan
- sodipodi:role="line"
- x="289.00229"
- y="364.36139"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601-6">(ZIP)</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="355.50653"
- y="348.60098"
- id="text9071-4-18-0-5-6"><tspan
- sodipodi:role="line"
- x="355.50653"
- y="348.60098"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-90-9-5">Shared</tspan><tspan
- sodipodi:role="line"
- x="355.50653"
- y="365.26764"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2645">Memory</tspan><tspan
- sodipodi:role="line"
- x="355.50653"
- y="381.93433"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2601-6-1" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="430.31393"
- y="356.4924"
- id="text9071-4-18-1"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-35"
- x="430.31393"
- y="356.4924"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">SW Ring</tspan><tspan
- sodipodi:role="line"
- x="430.31393"
- y="373.15906"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-6" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="569.37646"
- y="341.1799"
- id="text9071-4-18-2"><tspan
- sodipodi:role="line"
- id="tspan9069-8-83-4"
- x="569.37646"
- y="341.1799"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff">HASH</tspan><tspan
- sodipodi:role="line"
- x="569.37646"
- y="357.84656"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2742">LPM</tspan><tspan
- sodipodi:role="line"
- x="569.37646"
- y="374.51324"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2555-2">ACL</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="503.75143"
- y="355.02365"
- id="text9071-4-18-2-3"><tspan
- sodipodi:role="line"
- x="503.75143"
- y="355.02365"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2733">Mbuf</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="639.34521"
- y="355.6174"
- id="text9071-4-18-19"><tspan
- sodipodi:role="line"
- x="639.34521"
- y="355.6174"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#0000ff"
- id="tspan2771">De(Frag)</tspan></text>
- </g>
-</svg>
diff --git a/doc/guides/platform/img/octeontx2_resource_virtualization.svg b/doc/guides/platform/img/octeontx2_resource_virtualization.svg
deleted file mode 100644
index bf976b52af..0000000000
--- a/doc/guides/platform/img/octeontx2_resource_virtualization.svg
+++ /dev/null
@@ -1,2418 +0,0 @@
-<?xml version="1.0" encoding="UTF-8" standalone="no"?>
-<!-- Created with Inkscape (http://www.inkscape.org/) -->
-
-<!--
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2019 Marvell International Ltd.
-#
--->
-
-<svg
- xmlns:osb="http://www.openswatchbook.org/uri/2009/osb"
- xmlns:dc="http://purl.org/dc/elements/1.1/"
- xmlns:cc="http://creativecommons.org/ns#"
- xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
- xmlns:svg="http://www.w3.org/2000/svg"
- xmlns="http://www.w3.org/2000/svg"
- xmlns:xlink="http://www.w3.org/1999/xlink"
- xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
- xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
- width="631.91431"
- height="288.34286"
- id="svg3868"
- version="1.1"
- inkscape:version="0.92.4 (5da689c313, 2019-01-14)"
- sodipodi:docname="octeontx2_resource_virtualization.svg"
- sodipodi:version="0.32"
- inkscape:output_extension="org.inkscape.output.svg.inkscape">
- <defs
- id="defs3870">
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker9460"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path9458"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0.0"
- refX="0.0"
- id="marker7396"
- style="overflow:visible"
- inkscape:isstock="true">
- <path
- id="path7133"
- d="M 0.0,0.0 L 5.0,-5.0 L -12.5,0.0 L 5.0,5.0 L 0.0,0.0 z "
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1;fill:#000000;fill-opacity:1"
- transform="scale(0.8) translate(12.5,0)" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient5474">
- <stop
- style="stop-color:#ffffff;stop-opacity:1;"
- offset="0"
- id="stop5470" />
- <stop
- style="stop-color:#ffffff;stop-opacity:0;"
- offset="1"
- id="stop5472" />
- </linearGradient>
- <linearGradient
- inkscape:collect="always"
- id="linearGradient5464">
- <stop
- style="stop-color:#daeef5;stop-opacity:1;"
- offset="0"
- id="stop5460" />
- <stop
- style="stop-color:#daeef5;stop-opacity:0;"
- offset="1"
- id="stop5462" />
- </linearGradient>
- <linearGradient
- id="linearGradient6545"
- osb:paint="solid">
- <stop
- style="stop-color:#ffa600;stop-opacity:1;"
- offset="0"
- id="stop6543" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3302"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3294"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3290"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3286"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3228"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3188"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3184"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3180"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3176"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3172"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3168"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3164"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3160"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120"
- is_visible="true" />
- <linearGradient
- id="linearGradient3114"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3112" />
- </linearGradient>
- <linearGradient
- id="linearGradient3088"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3086" />
- </linearGradient>
- <linearGradient
- id="linearGradient3058"
- osb:paint="solid">
- <stop
- style="stop-color:#00f900;stop-opacity:1;"
- offset="0"
- id="stop3056" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3054"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3050"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3046"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3042"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3038"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3034"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3030"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3004"
- is_visible="true" />
- <linearGradient
- id="linearGradient2975"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2200;stop-opacity:1;"
- offset="0"
- id="stop2973" />
- </linearGradient>
- <linearGradient
- id="linearGradient2969"
- osb:paint="solid">
- <stop
- style="stop-color:#69ff72;stop-opacity:1;"
- offset="0"
- id="stop2967" />
- </linearGradient>
- <linearGradient
- id="linearGradient2963"
- osb:paint="solid">
- <stop
- style="stop-color:#000000;stop-opacity:1;"
- offset="0"
- id="stop2961" />
- </linearGradient>
- <linearGradient
- id="linearGradient2929"
- osb:paint="solid">
- <stop
- style="stop-color:#ff2d00;stop-opacity:1;"
- offset="0"
- id="stop2927" />
- </linearGradient>
- <linearGradient
- id="linearGradient4610"
- osb:paint="solid">
- <stop
- style="stop-color:#00ffff;stop-opacity:1;"
- offset="0"
- id="stop4608" />
- </linearGradient>
- <linearGradient
- id="linearGradient3993"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3991" />
- </linearGradient>
- <linearGradient
- id="linearGradient3808"
- osb:paint="solid">
- <stop
- style="stop-color:#6ba6fd;stop-opacity:1;"
- offset="0"
- id="stop3806" />
- </linearGradient>
- <linearGradient
- id="linearGradient3776"
- osb:paint="solid">
- <stop
- style="stop-color:#fc0000;stop-opacity:1;"
- offset="0"
- id="stop3774" />
- </linearGradient>
- <linearGradient
- id="linearGradient3438"
- osb:paint="solid">
- <stop
- style="stop-color:#b8e132;stop-opacity:1;"
- offset="0"
- id="stop3436" />
- </linearGradient>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3408"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3404"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3400"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3392"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3376"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3040"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3036"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3032"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3028"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3024"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3020"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2854"
- is_visible="true" />
- <inkscape:path-effect
- effect="bspline"
- id="path-effect2844"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <linearGradient
- id="linearGradient2828"
- osb:paint="solid">
- <stop
- style="stop-color:#ff0000;stop-opacity:1;"
- offset="0"
- id="stop2826" />
- </linearGradient>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect329"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart"
- style="overflow:visible">
- <path
- id="path4530"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend"
- style="overflow:visible">
- <path
- id="path4533"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- id="linearGradient4513">
- <stop
- style="stop-color:#fdffdb;stop-opacity:1;"
- offset="0"
- id="stop4515" />
- <stop
- style="stop-color:#dfe2d8;stop-opacity:0;"
- offset="1"
- id="stop4517" />
- </linearGradient>
- <inkscape:perspective
- sodipodi:type="inkscape:persp3d"
- inkscape:vp_x="0 : 526.18109 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_z="744.09448 : 526.18109 : 1"
- inkscape:persp3d-origin="372.04724 : 350.78739 : 1"
- id="perspective3876" />
- <inkscape:perspective
- id="perspective3886"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lend"
- style="overflow:visible">
- <path
- id="path3211"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3892"
- style="overflow:visible">
- <path
- id="path3894"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3896"
- style="overflow:visible">
- <path
- id="path3898"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Lstart"
- style="overflow:visible">
- <path
- id="path3208"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3902"
- style="overflow:visible">
- <path
- id="path3904"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3906"
- style="overflow:visible">
- <path
- id="path3908"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.8,0,0,0.8,10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Lend"
- orient="auto"
- refY="0"
- refX="0"
- id="marker3910"
- style="overflow:visible">
- <path
- id="path3912"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.8,0,0,-0.8,-10,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective4086"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective4113"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <inkscape:perspective
- id="perspective5195"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-4"
- style="overflow:visible">
- <path
- id="path4533-7"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5272"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-4"
- style="overflow:visible">
- <path
- id="path4530-5"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-0"
- style="overflow:visible">
- <path
- id="path4533-3"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:perspective
- id="perspective5317"
- inkscape:persp3d-origin="0.5 : 0.33333333 : 1"
- inkscape:vp_z="1 : 0.5 : 1"
- inkscape:vp_y="0 : 1000 : 0"
- inkscape:vp_x="0 : 0.5 : 1"
- sodipodi:type="inkscape:persp3d" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-3"
- style="overflow:visible">
- <path
- id="path4530-2"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-06"
- style="overflow:visible">
- <path
- id="path4533-1"
- d="M 0,0 5,-5 -12.5,0 5,5 0,0 z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-8"
- style="overflow:visible">
- <path
- id="path4530-7"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-9"
- style="overflow:visible">
- <path
- id="path4533-2"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="spiro"
- id="path-effect2858-0"
- is_visible="true" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3"
- style="overflow:visible">
- <path
- id="path4533-75"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-3-2"
- style="overflow:visible">
- <path
- id="path4533-75-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <inkscape:path-effect
- effect="bspline"
- id="path-effect3044-9-9"
- is_visible="true"
- weight="33.333333"
- steps="2"
- helper_size="0"
- apply_no_weight="true"
- apply_with_weight="true"
- only_selected="false" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3008-3"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7"
- is_visible="true" />
- <inkscape:path-effect
- effect="spiro"
- id="path-effect3120-7-3"
- is_visible="true" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5464"
- id="linearGradient5466"
- x1="65.724048"
- y1="169.38839"
- x2="183.38978"
- y2="169.38839"
- gradientUnits="userSpaceOnUse"
- gradientTransform="translate(-14,-4)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5476"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,105.65926,-0.6580533)" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5658"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,148.76869,-0.0791224)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient5695"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,206.76869,3.9208776)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-34"
- style="overflow:visible">
- <path
- id="path4530-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-45"
- style="overflow:visible">
- <path
- id="path4533-16"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7"
- style="overflow:visible">
- <path
- id="path4530-58"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1"
- style="overflow:visible">
- <path
- id="path4533-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-6"
- style="overflow:visible">
- <path
- id="path4530-58-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-3"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2"
- style="overflow:visible">
- <path
- id="path4530-58-46"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1"
- style="overflow:visible">
- <path
- id="path4533-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-7-2-6"
- style="overflow:visible">
- <path
- id="path4530-58-46-8"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-1-1-9"
- style="overflow:visible">
- <path
- id="path4533-6-4-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,192.76869,-0.0791224)"
- x1="-89.501146"
- y1="363.57419"
- x2="-30.959395"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#grad0-40"
- id="linearGradient5917"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(8.8786147,-0.0235964,-0.00460261,1.50035,-400.25558,-2006.3745)"
- x1="-0.12893644"
- y1="1717.1688"
- x2="28.140806"
- y2="1717.1688" />
- <linearGradient
- id="grad0-40"
- x1="0"
- y1="0"
- x2="1"
- y2="0"
- gradientTransform="rotate(60,0.5,0.5)">
- <stop
- offset="0"
- stop-color="#f3f6fa"
- stop-opacity="1"
- id="stop3419" />
- <stop
- offset="0.24"
- stop-color="#f9fafc"
- stop-opacity="1"
- id="stop3421" />
- <stop
- offset="0.54"
- stop-color="#feffff"
- stop-opacity="1"
- id="stop3423" />
- </linearGradient>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30"
- style="overflow:visible">
- <path
- id="path4530-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6"
- style="overflow:visible">
- <path
- id="path4533-19"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0"
- style="overflow:visible">
- <path
- id="path4530-0-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8"
- style="overflow:visible">
- <path
- id="path4533-19-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9"
- style="overflow:visible">
- <path
- id="path4530-0-6-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3"
- style="overflow:visible">
- <path
- id="path4533-19-6-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-7"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(0.62723639,0,0,1.0109144,321.82147,-1.8659026)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-8"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(1.3985479,0,0,0.98036646,376.02779,12.240541)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-81"
- style="overflow:visible">
- <path
- id="path4530-9"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-5"
- style="overflow:visible">
- <path
- id="path4533-72"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-1"
- style="overflow:visible">
- <path
- id="path4530-6"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker9714"
- style="overflow:visible">
- <path
- id="path9712"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48"
- style="overflow:visible">
- <path
- id="path4530-4"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker10117"
- style="overflow:visible">
- <path
- id="path10115"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-48-6"
- style="overflow:visible">
- <path
- id="path4530-4-0"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="marker11186"
- style="overflow:visible">
- <path
- id="path11184"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <linearGradient
- inkscape:collect="always"
- xlink:href="#linearGradient5474"
- id="linearGradient6997-8-0"
- gradientUnits="userSpaceOnUse"
- gradientTransform="matrix(1.3985479,0,0,0.98036646,497.77779,12.751681)"
- x1="-89.501144"
- y1="363.57419"
- x2="-30.959394"
- y2="363.57419" />
- <marker
- inkscape:stockid="Arrow1Mstart"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mstart-30-0-9-0"
- style="overflow:visible">
- <path
- id="path4530-0-6-4-1"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(0.4,0,0,0.4,4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- <marker
- inkscape:stockid="Arrow1Mend"
- orient="auto"
- refY="0"
- refX="0"
- id="Arrow1Mend-6-8-3-7"
- style="overflow:visible">
- <path
- id="path4533-19-6-1-5"
- d="M 0,0 5,-5 -12.5,0 5,5 Z"
- style="fill-rule:evenodd;stroke:#000000;stroke-width:1.00000003pt;marker-start:none"
- transform="matrix(-0.4,0,0,-0.4,-4,0)"
- inkscape:connector-curvature="0" />
- </marker>
- </defs>
- <sodipodi:namedview
- id="base"
- pagecolor="#ffffff"
- bordercolor="#666666"
- borderopacity="1.0"
- inkscape:pageopacity="0.0"
- inkscape:pageshadow="2"
- inkscape:zoom="1.4142136"
- inkscape:cx="371.09569"
- inkscape:cy="130.22425"
- inkscape:document-units="px"
- inkscape:current-layer="layer1"
- showgrid="false"
- inkscape:window-width="1920"
- inkscape:window-height="1057"
- inkscape:window-x="-8"
- inkscape:window-y="-8"
- inkscape:window-maximized="1"
- fit-margin-top="0.1"
- fit-margin-left="0.1"
- fit-margin-right="0.1"
- fit-margin-bottom="0.1"
- inkscape:measure-start="-29.078,219.858"
- inkscape:measure-end="346.809,219.858"
- showguides="true"
- inkscape:snap-page="true"
- inkscape:snap-others="false"
- inkscape:snap-nodes="false"
- inkscape:snap-bbox="true"
- inkscape:lockguides="false"
- inkscape:guide-bbox="true">
- <sodipodi:guide
- position="-120.20815,574.17069"
- orientation="0,1"
- id="guide7077"
- inkscape:locked="false" />
- </sodipodi:namedview>
- <metadata
- id="metadata3873">
- <rdf:RDF>
- <cc:Work
- rdf:about="">
- <dc:format>image/svg+xml</dc:format>
- <dc:type
- rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
- <dc:title />
- </cc:Work>
- </rdf:RDF>
- </metadata>
- <g
- inkscape:label="Layer 1"
- inkscape:groupmode="layer"
- id="layer1"
- transform="translate(-46.542857,-100.33361)">
- <flowRoot
- xml:space="preserve"
- id="flowRoot5313"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;letter-spacing:0px;word-spacing:0px"><flowRegion
- id="flowRegion5315"><rect
- id="rect5317"
- width="120.91525"
- height="96.873627"
- x="-192.33304"
- y="-87.130829" /></flowRegion><flowPara
- id="flowPara5319" /></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="90.320152"
- y="299.67871"
- id="text2978"
- inkscape:export-filename="/home/matz/barracuda/rapports/mbuf-api-v2-images/octeon_multi.png"
- inkscape:export-xdpi="112"
- inkscape:export-ydpi="112"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="90.320152"
- y="299.67871"
- id="tspan3006"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:15.74255753px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025"> </tspan></text>
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.82973665;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066"
- width="127.44949"
- height="225.03024"
- x="47.185646"
- y="111.20448" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="154.93478" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.55900002;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096-6"
- width="117.1069"
- height="20.907221"
- x="51.955002"
- y="181.51834" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b7dfd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5096-6-2"
- width="117.1069"
- height="20.907221"
- x="51.691605"
- y="205.82234" />
- <rect
- y="154.93478"
- x="52.003464"
- height="20.907221"
- width="117.1069"
- id="rect5160"
- style="fill:url(#linearGradient5466);fill-opacity:1;stroke:#6b8afd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5162"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="231.92767" />
- <rect
- y="255.45328"
- x="52.003464"
- height="20.907221"
- width="117.1069"
- id="rect5164"
- style="fill:#daeef5;fill-opacity:1;stroke:#6b6ffd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.55883217;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166"
- width="117.1069"
- height="20.907221"
- x="52.003464"
- y="281.11758" />
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b78fd;stroke-width:0.59729731;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-6"
- width="117.0697"
- height="23.892008"
- x="52.659744"
- y="306.01089" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:'Bitstream Vera Sans';-inkscape-font-specification:'Bitstream Vera Sans';fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.955597"
- y="163.55217"
- id="text5219-26-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.955597"
- y="163.55217"
- id="tspan5223-10-9"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.098343"
- y="187.18845"
- id="text5219-26-1-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.098343"
- y="187.18845"
- id="tspan5223-10-9-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="96.829468"
- y="211.79611"
- id="text5219-26-1-5"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="96.829468"
- y="211.79611"
- id="tspan5223-10-9-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">SSO AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.770523"
- y="235.66898"
- id="text5219-26-1-5-7-6"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.770523"
- y="235.66898"
- id="tspan5223-10-9-1-6-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPC AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.895973"
- y="259.25156"
- id="text5219-26-1-5-7-6-3"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.895973"
- y="259.25156"
- id="tspan5223-10-9-1-6-8-3"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">CPT AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="94.645073"
- y="282.35391"
- id="text5219-26-1-5-7-6-3-0"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="94.645073"
- y="282.35391"
- id="tspan5223-10-9-1-6-8-3-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">RVU AF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.93084431px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.07757032"
- x="110.2803"
- y="126.02858"
- id="text5219-26"
- transform="scale(1.0076913,0.9923674)"><tspan
- sodipodi:role="line"
- x="110.2803"
- y="126.02858"
- id="tspan5223-10"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032">Linux AF driver</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="139.49821"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032"
- id="tspan5325">(octeontx2_af)</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="152.96783"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#ff0000;stroke-width:1.07757032"
- id="tspan5327">PF0</tspan><tspan
- sodipodi:role="line"
- x="110.2803"
- y="160.38988"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.77570343px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.07757032"
- id="tspan5329" /></text>
- <rect
- style="fill:url(#linearGradient5476);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5468"
- width="36.554455"
- height="18.169683"
- x="49.603416"
- y="357.7995" />
- <g
- id="g5594"
- transform="translate(-18,-40)">
- <text
- id="text5480"
- y="409.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#6a5400;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#6a5400;fill-opacity:1"
- y="409.46326"
- x="73.41291"
- id="tspan5478"
- sodipodi:role="line">CGX-0</tspan></text>
- </g>
- <rect
- style="fill:url(#linearGradient5658);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5468-2"
- width="36.554455"
- height="18.169683"
- x="92.712852"
- y="358.37842" />
- <g
- id="g5594-7"
- transform="translate(25.109434,2.578931)">
- <text
- id="text5480-9"
- y="367.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#695400;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#695400;fill-opacity:1"
- y="367.46326"
- x="73.41291"
- id="tspan5478-0"
- sodipodi:role="line">CGX-1</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="104.15788"
- y="355.79947"
- id="text5711"><tspan
- sodipodi:role="line"
- id="tspan5709"
- x="104.15788"
- y="392.29269" /></text>
- </g>
- <rect
- style="opacity:1;fill:url(#linearGradient6997);fill-opacity:1;stroke:#695400;stroke-width:1.16700006;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1"
- width="36.554455"
- height="18.169683"
- x="136.71284"
- y="358.37842" />
- <g
- id="g5594-7-0"
- transform="translate(69.109434,2.578931)">
- <text
- id="text5480-9-7"
- y="367.46326"
- x="73.41291"
- style="font-style:normal;font-weight:normal;font-size:40px;line-height:1.25;font-family:sans-serif;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- xml:space="preserve"><tspan
- style="font-size:8px;fill:#695400;fill-opacity:1"
- y="367.46326"
- x="73.41291"
- id="tspan5478-0-4"
- sodipodi:role="line">CGX-2</tspan></text>
- </g>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="116.4436"
- y="309.90784"
- id="text5219-26-1-5-7-6-3-0-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="116.4436"
- y="309.90784"
- id="tspan5223-10-9-1-6-8-3-1-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.33980179px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:1.03398025">CGX-FW Interface</tspan></text>
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart);marker-end:url(#Arrow1Mend)"
- d="m 65.54286,336.17648 v 23"
- id="path7614"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30);marker-end:url(#Arrow1Mend-6)"
- d="m 108.54285,336.67647 v 23"
- id="path7614-2"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.45899999;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0);marker-end:url(#Arrow1Mend-6-8)"
- d="m 152.54285,336.67647 v 23"
- id="path7614-2-2"
- inkscape:connector-curvature="0" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50469553;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1"
- width="100.27454"
- height="105.81976"
- x="242.65558"
- y="233.7666" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50588065;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6"
- width="100.27335"
- height="106.31857"
- x="361.40619"
- y="233.7672" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.50588065;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-7"
- width="100.27335"
- height="106.31857"
- x="467.40619"
- y="233.7672" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.49445513;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-7-0"
- width="95.784782"
- height="106.33"
- x="573.40039"
- y="233.76149" />
- <path
- style="fill:none;stroke:#00ff00;stroke-width:0.984;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.984, 0.98400000000000021;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart);marker-end:url(#Arrow1Mend)"
- d="M 176.02438,304.15296 C 237.06133,305.2 237.06133,305.2 237.06133,305.2"
- id="path8315"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="177.04286"
- y="299.17648"
- id="text8319"><tspan
- sodipodi:role="line"
- id="tspan8317"
- x="177.04286"
- y="299.17648"
- style="font-size:10.66666698px;line-height:1">AF-PF MBOX</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="291.53308"
- y="264.67648"
- id="text8323"><tspan
- sodipodi:role="line"
- id="tspan8321"
- x="291.53308"
- y="264.67648"
- style="font-size:10px;text-align:center;text-anchor:middle"><tspan
- style="font-size:10px;fill:#0000ff"
- id="tspan8339"><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11972">Linux</tspan></tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11970"> Netdev </tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="281.34314"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345">driver</tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="298.00983"
- id="tspan8325"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">(octeontx2_pf)</tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="314.67648"
- id="tspan8327"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10511">x</tspan></tspan><tspan
- sodipodi:role="line"
- x="291.53308"
- y="331.34314"
- id="tspan8329" /></text>
- <flowRoot
- xml:space="preserve"
- id="flowRoot8331"
- style="fill:black;fill-opacity:1;stroke:none;font-family:sans-serif;font-style:normal;font-weight:normal;font-size:13.33333333px;line-height:1.25;letter-spacing:0px;word-spacing:0px;-inkscape-font-specification:'sans-serif, Normal';font-stretch:normal;font-variant:normal;text-anchor:start;text-align:start;writing-mode:lr;font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal"><flowRegion
- id="flowRegion8333"><rect
- id="rect8335"
- width="48.5"
- height="28"
- x="252.5"
- y="208.34286" /></flowRegion><flowPara
- id="flowPara8337" /></flowRoot> <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9"
- width="71.28923"
- height="15.589548"
- x="253.89825"
- y="320.63168" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="283.97266"
- y="319.09348"
- id="text5219-26-1-5-7-6-3-0-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="283.97266"
- y="319.09348"
- id="tspan5223-10-9-1-6-8-3-1-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7"
- width="71.28923"
- height="15.589548"
- x="255.89822"
- y="237.88171" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="285.03787"
- y="239.81017"
- id="text5219-26-1-5-7-6-3-0-1-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="285.03787"
- y="239.81017"
- id="tspan5223-10-9-1-6-8-3-1-0-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333333px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA LF</tspan></text>
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.41014698;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0-9);marker-end:url(#Arrow1Mend-6-8-3)"
- d="m 287.54285,340.99417 v 18.3646"
- id="path7614-2-2-8"
- inkscape:connector-curvature="0" />
- <rect
- style="opacity:1;fill:url(#linearGradient6997-8);fill-opacity:1;stroke:#695400;stroke-width:1.316;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1-4"
- width="81.505402"
- height="17.62063"
- x="251.04015"
- y="359.86615" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="263.46152"
- y="224.99915"
- id="text8319-7"><tspan
- sodipodi:role="line"
- id="tspan8317-7"
- x="263.46152"
- y="224.99915"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="259.23218"
- y="371.46179"
- id="text8319-7-7"><tspan
- sodipodi:role="line"
- id="tspan8317-7-3"
- x="259.23218"
- y="371.46179"
- style="font-size:9.33333302px;line-height:1">CGX-x LMAC-y</tspan></text>
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3"
- width="80.855743"
- height="92.400963"
- x="197.86496"
- y="112.97599" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4"
- width="80.855743"
- height="92.400963"
- x="286.61499"
- y="112.476" />
- <path
- style="fill:none;stroke:#580000;stroke-width:0.60000002;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:0.3, 0.3;stroke-dashoffset:0;stroke-opacity:1"
- d="m 188.04286,109.67648 c 2.5,238.5 2,238 2,238 163.49999,0.5 163.49999,0.5 163.49999,0.5 v -124 l -70,0.5 -1.5,-116 v 1.5 z"
- id="path9240"
- inkscape:connector-curvature="0" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4-0"
- width="80.855743"
- height="92.400963"
- x="375.11499"
- y="111.976" />
- <rect
- style="fill:#d6eaf8;fill-opacity:1;stroke:#6ba6fd;stroke-width:0.42349124;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5066-1-6-3-4-0-0"
- width="80.855743"
- height="92.400963"
- x="586.61499"
- y="111.476" />
- <path
- style="fill:none;stroke:#ff00cc;stroke-width:0.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:7.2, 0.29999999999999999;stroke-dashoffset:0"
- d="m 675.54284,107.17648 1,239.5 -317.99999,0.5 -1,-125 14.5,0.5 -0.5,-113.5 z"
- id="path9272"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ffff;stroke-width:0.3;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:7.2,0.3;stroke-dashoffset:0"
- d="m 284.54285,109.17648 0.5,100 84,-0.5 v -99.5 z"
- id="path9274"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="231.87221"
- y="146.02637"
- id="text8323-1"
- transform="scale(1.0315378,0.96942639)"><tspan
- sodipodi:role="line"
- id="tspan8321-2"
- x="231.87221"
- y="146.02637"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="font-size:8.12077141px;fill:#0000ff;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8339-6">Linux</tspan> Netdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9396">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="159.56099"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-6">driver</tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="173.09561"
- id="tspan8325-2"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">(octeontx2_vf)</tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="186.63022"
- id="tspan8327-7"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#782121;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10513">x</tspan><tspan
- style="font-size:8.12077141px;fill:#782121;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="231.87221"
- y="200.16484"
- id="tspan8329-3"
- style="stroke-width:0.81207716;fill:#782121" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9"
- width="59.718147"
- height="12.272857"
- x="207.65872"
- y="185.61246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="225.56583"
- y="192.49615"
- id="text5219-26-1-5-7-6-3-0-1-6"
- transform="scale(0.99742277,1.0025839)"><tspan
- sodipodi:role="line"
- x="225.56583"
- y="192.49615"
- id="tspan5223-10-9-1-6-8-3-1-0-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5"
- width="59.718147"
- height="12.272857"
- x="209.33406"
- y="116.46765" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="226.43088"
- y="124.1223"
- id="text5219-26-1-5-7-6-3-0-1-4-7"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="226.43088"
- y="124.1223"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="317.66635"
- y="121.26925"
- id="text8323-1-9"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-3"
- x="317.66635"
- y="131.14769"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716" /><tspan
- sodipodi:role="line"
- x="317.66635"
- y="144.6823"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9400"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9402">DPDK</tspan> Ethdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9398">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="158.21692"
- id="tspan8325-2-7"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">driver</tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="171.75154"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9392" /><tspan
- sodipodi:role="line"
- x="317.66635"
- y="185.28616"
- id="tspan8327-7-8"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#782121;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10515">x</tspan><tspan
- style="font-size:8.12077141px;fill:#782121;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1-0">-VF1</tspan></tspan><tspan
- sodipodi:role="line"
- x="317.66635"
- y="198.82077"
- id="tspan8329-3-3"
- style="stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3"
- width="59.718147"
- height="12.272857"
- x="295.65872"
- y="185.11246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="313.79312"
- y="191.99756"
- id="text5219-26-1-5-7-6-3-0-1-6-1"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="313.79312"
- y="191.99756"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5-8"
- width="59.718147"
- height="12.272857"
- x="297.33408"
- y="115.96765" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="314.65817"
- y="123.62372"
- id="text5219-26-1-5-7-6-3-0-1-4-7-9"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="314.65817"
- y="123.62372"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0-9"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;marker-end:url(#Arrow1Mstart);marker-start:url(#Arrow1Mstart)"
- d="m 254.54285,205.17648 c 1,29 1,28.5 1,28.5"
- id="path9405"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-1);marker-end:url(#Arrow1Mstart-1)"
- d="m 324.42292,203.92589 c 1,29 1,28.5 1,28.5"
- id="path9405-3"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="408.28308"
- y="265.83011"
- id="text8323-7"><tspan
- sodipodi:role="line"
- id="tspan8321-3"
- x="408.28308"
- y="265.83011"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10440">DPDK</tspan> Ethdev <tspan
- style="font-size:10px;fill:#00d4aa;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8343-5">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="282.49677"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-8">driver</tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="299.16345"
- id="tspan8325-5"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /><tspan
- sodipodi:role="line"
- x="408.28308"
- y="315.83011"
- id="tspan8327-1"
- style="font-size:10px;text-align:center;text-anchor:middle;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">PF<tspan
- style="fill:#ff0000;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan10517">y</tspan></tspan><tspan
- sodipodi:role="line"
- x="408.28308"
- y="332.49677"
- id="tspan8329-2" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-3"
- width="71.28923"
- height="15.589548"
- x="376.64825"
- y="319.78531" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="410.92075"
- y="318.27411"
- id="text5219-26-1-5-7-6-3-0-1-62"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="410.92075"
- y="318.27411"
- id="tspan5223-10-9-1-6-8-3-1-0-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-2"
- width="71.28923"
- height="15.589548"
- x="378.64822"
- y="237.03534" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="411.98596"
- y="238.99095"
- id="text5219-26-1-5-7-6-3-0-1-4-4"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="411.98596"
- y="238.99095"
- id="tspan5223-10-9-1-6-8-3-1-0-8-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="386.21152"
- y="224.15277"
- id="text8319-7-5"><tspan
- sodipodi:role="line"
- id="tspan8317-7-8"
- x="386.21152"
- y="224.15277"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-48);marker-end:url(#Arrow1Mstart-48)"
- d="m 411.29285,204.33011 c 1,29 1,28.5 1,28.5"
- id="path9405-0"
- inkscape:connector-curvature="0" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="520.61176"
- y="265.49265"
- id="text8323-7-8"><tspan
- sodipodi:role="line"
- id="tspan8321-3-3"
- x="520.61176"
- y="265.49265"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff2a2a"
- id="tspan10440-2">DPDK</tspan> Eventdev <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343-5-3">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="282.1593"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345-8-6">driver</tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="298.82599"
- id="tspan8325-5-4"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle" /><tspan
- sodipodi:role="line"
- x="520.61176"
- y="315.49265"
- id="tspan8327-1-0"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10519">z</tspan></tspan><tspan
- sodipodi:role="line"
- x="520.61176"
- y="332.1593"
- id="tspan8329-2-1" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-3-6"
- width="71.28923"
- height="15.589548"
- x="484.97693"
- y="319.44785" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="522.95496"
- y="317.94733"
- id="text5219-26-1-5-7-6-3-0-1-62-1"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="522.95496"
- y="317.94733"
- id="tspan5223-10-9-1-6-8-3-1-0-4-7"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">TIM LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.37650499;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-2-8"
- width="71.28923"
- height="15.589548"
- x="486.9769"
- y="236.69788" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:12.40776253px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:1.03398025"
- x="524.0202"
- y="238.66432"
- id="text5219-26-1-5-7-6-3-0-1-4-4-3"
- transform="scale(0.96692797,1.0342032)"><tspan
- sodipodi:role="line"
- x="524.0202"
- y="238.66432"
- id="tspan5223-10-9-1-6-8-3-1-0-8-7-6"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:9.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:1.03398025">SSO LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="619.6156"
- y="265.47531"
- id="text8323-7-8-3"><tspan
- sodipodi:role="line"
- id="tspan8321-3-3-1"
- x="619.6156"
- y="265.47531"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"> <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#0000ff"
- id="tspan10562">Linux </tspan>Crypto <tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;fill:#00d4aa"
- id="tspan8343-5-3-7">PF</tspan></tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="282.14197"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle"
- id="tspan8345-8-6-8">driver</tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="298.80865"
- id="tspan8325-5-4-3"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle" /><tspan
- sodipodi:role="line"
- x="619.6156"
- y="315.47531"
- id="tspan8327-1-0-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff0000"
- id="tspan10560">m</tspan></tspan><tspan
- sodipodi:role="line"
- x="619.6156"
- y="332.14197"
- id="tspan8329-2-1-9" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3-0"
- width="59.718147"
- height="12.272857"
- x="385.10458"
- y="183.92126" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="403.46997"
- y="190.80957"
- id="text5219-26-1-5-7-6-3-0-1-6-1-5"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="403.46997"
- y="190.80957"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5-5"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NIX LF</tspan></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-7-5-8-5"
- width="59.718147"
- height="12.272857"
- x="386.77994"
- y="116.77647" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="404.33502"
- y="124.43062"
- id="text5219-26-1-5-7-6-3-0-1-4-7-9-8"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="404.33502"
- y="124.43062"
- id="tspan5223-10-9-1-6-8-3-1-0-8-0-9-8"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">NPA LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="402.97598"
- y="143.8235"
- id="text8323-1-7"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-1"
- x="402.97598"
- y="143.8235"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"><tspan
- style="fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11102">DPDK</tspan> Ethdev <tspan
- style="fill:#0066ff;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan9396-1">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="157.35812"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8345-6-5">driver</tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="170.89275"
- id="tspan8327-7-2"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /><tspan
- sodipodi:role="line"
- x="402.97598"
- y="184.42735"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11106">PF<tspan
- style="fill:#a02c2c;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11110">y</tspan><tspan
- style="font-size:8.12077141px;fill:#a02c2c;stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan8347-1-2">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="402.97598"
- y="197.96198"
- id="tspan8329-3-4"
- style="stroke-width:0.81207716;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal" /></text>
- <rect
- style="fill:#daeef5;fill-opacity:1;stroke:#6b86fd;stroke-width:0.30575109;stroke-miterlimit:4;stroke-dasharray:none;stroke-opacity:1"
- id="rect5166-9-9-3-0-0"
- width="59.718147"
- height="12.272857"
- x="596.60461"
- y="185.11246" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.0760603px;line-height:0%;font-family:monospace;-inkscape-font-specification:monospace;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.83967167"
- x="615.51703"
- y="191.99774"
- id="text5219-26-1-5-7-6-3-0-1-6-1-5-1"
- transform="scale(0.99742276,1.0025839)"><tspan
- sodipodi:role="line"
- x="615.51703"
- y="191.99774"
- id="tspan5223-10-9-1-6-8-3-1-0-5-5-5-2"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:7.57938623px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;stroke-width:0.83967167">CPT LF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:10.82769489px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none;stroke-width:0.81207716"
- x="608.00879"
- y="145.05219"
- id="text8323-1-7-3"
- transform="scale(1.0315378,0.96942642)"><tspan
- sodipodi:role="line"
- id="tspan8321-2-1-5"
- x="608.00879"
- y="145.05219"
- style="font-size:8.12077141px;text-align:center;text-anchor:middle;stroke-width:0.81207716"><tspan
- id="tspan1793"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#ff2a2a">DPDK</tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace"
- id="tspan11966"> Crypto </tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#0066ff"
- id="tspan9396-1-1">VF</tspan></tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="158.58681"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716"
- id="tspan8345-6-5-4">driver</tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="172.12143"
- id="tspan8327-7-2-1"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716" /><tspan
- sodipodi:role="line"
- x="608.00879"
- y="185.65604"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;text-align:center;text-anchor:middle;stroke-width:0.81207716"
- id="tspan11106-8">PF<tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-family:monospace;-inkscape-font-specification:monospace;fill:#c83737"
- id="tspan11172">m</tspan><tspan
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:8.12077141px;font-family:monospace;-inkscape-font-specification:monospace;fill:#c83737;stroke-width:0.81207716"
- id="tspan8347-1-2-0">-VF0</tspan></tspan><tspan
- sodipodi:role="line"
- x="608.00879"
- y="199.19066"
- id="tspan8329-3-4-0"
- style="stroke-width:0.81207716" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="603.23218"
- y="224.74855"
- id="text8319-7-5-1"><tspan
- sodipodi:role="line"
- id="tspan8317-7-8-4"
- x="603.23218"
- y="224.74855"
- style="font-size:10.66666698px;line-height:1">PF-VF MBOX</tspan></text>
- <path
- style="fill:none;stroke:#00ff00;stroke-width:1;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1, 1;stroke-dashoffset:0;stroke-opacity:1;marker-start:url(#Arrow1Mstart-48-6);marker-end:url(#Arrow1Mstart-48-6)"
- d="m 628.31351,204.92589 c 1,29 1,28.5 1,28.5"
- id="path9405-0-2"
- inkscape:connector-curvature="0" />
- <flowRoot
- xml:space="preserve"
- id="flowRoot11473"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- transform="translate(46.542857,100.33361)"><flowRegion
- id="flowRegion11475"><rect
- id="rect11477"
- width="90"
- height="14.5"
- x="426"
- y="26.342873" /></flowRegion><flowPara
- id="flowPara11479">DDDpk</flowPara></flowRoot> <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="509.60013"
- y="128.17648"
- id="text11483"><tspan
- sodipodi:role="line"
- id="tspan11481"
- x="511.47513"
- y="128.17648"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544">D<tspan
- style="-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal;fill:#005544"
- id="tspan11962">PDK-APP1 with </tspan></tspan><tspan
- sodipodi:role="line"
- x="511.47513"
- y="144.84315"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11485">one ethdev </tspan><tspan
- sodipodi:role="line"
- x="509.60013"
- y="161.50981"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#005544;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11491">over Linux PF</tspan></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="533.54285"
- y="158.17648"
- id="text11489"><tspan
- sodipodi:role="line"
- id="tspan11487"
- x="533.54285"
- y="170.34088" /></text>
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:sans-serif;-inkscape-font-specification:'sans-serif, Normal';font-variant-ligatures:normal;font-variant-caps:normal;font-variant-numeric:normal;font-feature-settings:normal;text-align:start;letter-spacing:0px;word-spacing:0px;writing-mode:lr-tb;text-anchor:start;fill:#000000;fill-opacity:1;stroke:none"
- x="518.02197"
- y="179.98117"
- id="text11483-6"><tspan
- sodipodi:role="line"
- id="tspan11481-4"
- x="519.42822"
- y="179.98117"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal">DPDK-APP2 with </tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="196.64784"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11485-5">Two ethdevs(PF,VF) ,</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="213.3145"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11517">eventdev, timer adapter and</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="229.98117"
- style="font-size:8px;text-align:center;text-anchor:middle;fill:#ff2a2a;-inkscape-font-specification:monospace;font-family:monospace;font-weight:normal;font-style:normal;font-stretch:normal;font-variant:normal"
- id="tspan11519"> cryptodev</tspan><tspan
- sodipodi:role="line"
- x="518.02197"
- y="246.64784"
- style="font-size:10.66666698px;text-align:center;text-anchor:middle;fill:#00ffff"
- id="tspan11491-6" /></text>
- <path
- style="fill:#005544;stroke:#00ffff;stroke-width:1.02430511;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.02430516, 4.09722065999999963;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mstart-8)"
- d="m 483.99846,150.16496 -112.95349,13.41069 v 0 l -0.48897,-0.53643 h 0.48897"
- id="path11521"
- inkscape:connector-curvature="0" />
- <path
- style="fill:#ff0000;stroke:#ff5555;stroke-width:1.16440296;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:1.16440301, 2.32880602999999997;stroke-dashoffset:0;stroke-opacity:1;marker-end:url(#Arrow1Mend-0)"
- d="m 545.54814,186.52569 c 26.3521,-76.73875 26.3521,-76.73875 26.3521,-76.73875"
- id="path11523"
- inkscape:connector-curvature="0" />
- <path
- style="fill:none;stroke:#ff0000;stroke-width:0.41014698;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:2.29999995;stroke-dasharray:none;stroke-opacity:1;marker-start:url(#Arrow1Mstart-30-0-9-0);marker-end:url(#Arrow1Mend-6-8-3-7)"
- d="m 409.29286,341.50531 v 18.3646"
- id="path7614-2-2-8-2"
- inkscape:connector-curvature="0" />
- <rect
- style="opacity:1;fill:url(#linearGradient6997-8-0);fill-opacity:1;stroke:#695400;stroke-width:1.31599998;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1"
- id="rect5468-2-1-4-9"
- width="81.505402"
- height="17.62063"
- x="372.79016"
- y="360.37729" />
- <text
- xml:space="preserve"
- style="font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:13.33333302px;line-height:1.25;font-family:monospace;-inkscape-font-specification:monospace;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none"
- x="380.98218"
- y="371.97293"
- id="text8319-7-7-1"><tspan
- sodipodi:role="line"
- id="tspan8317-7-3-1"
- x="380.98218"
- y="371.97293"
- style="font-size:9.33333302px;line-height:1">CGX-x LMAC-y</tspan></text>
- </g>
-</svg>
diff --git a/doc/guides/platform/index.rst b/doc/guides/platform/index.rst
index 7614e1a368..2ff91a6018 100644
--- a/doc/guides/platform/index.rst
+++ b/doc/guides/platform/index.rst
@@ -15,4 +15,3 @@ The following are platform specific guides and setup information.
dpaa
dpaa2
octeontx
- octeontx2
diff --git a/doc/guides/platform/octeontx2.rst b/doc/guides/platform/octeontx2.rst
deleted file mode 100644
index 5ab43abbdd..0000000000
--- a/doc/guides/platform/octeontx2.rst
+++ /dev/null
@@ -1,520 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright(c) 2019 Marvell International Ltd.
-
-Marvell OCTEON TX2 Platform Guide
-=================================
-
-This document gives an overview of the **Marvell OCTEON TX2** RVU H/W block,
-the packet flow and the procedure to build DPDK on the OCTEON TX2 platform.
-
-More information about the OCTEON TX2 SoC can be found at the `Marvell Official Website
-<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
-
-Supported OCTEON TX2 SoCs
--------------------------
-
-- CN98xx
-- CN96xx
-- CN93xx
-
-OCTEON TX2 Resource Virtualization Unit architecture
-----------------------------------------------------
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram depicts the
-RVU architecture and a resource provisioning example.
-
-.. _figure_octeontx2_resource_virtualization:
-
-.. figure:: img/octeontx2_resource_virtualization.*
-
- OCTEON TX2 Resource virtualization architecture and provisioning example
-
-
-The Resource Virtualization Unit (RVU) on Marvell's OCTEON TX2 SoC maps HW
-resources belonging to the network, crypto and other functional blocks onto
-PCI-compatible physical and virtual functions.
-
-Each functional block has multiple local functions (LFs) for
-provisioning to different PCIe devices. RVU supports multiple PCIe SRIOV
-physical functions (PFs) and virtual functions (VFs).
-
-The :numref:`table_octeontx2_rvu_dpdk_mapping` shows the various local
-functions (LFs) provided by the RVU and their functional mapping to the
-DPDK subsystems.
-
-.. _table_octeontx2_rvu_dpdk_mapping:
-
-.. table:: RVU managed functional blocks and their mapping to DPDK subsystems
-
- +---+-----+--------------------------------------------------------------+
- | # | LF | DPDK subsystem mapping |
- +===+=====+==============================================================+
- | 1 | NIX | rte_ethdev, rte_tm, rte_event_eth_[rt]x_adapter, rte_security|
- +---+-----+--------------------------------------------------------------+
- | 2 | NPA | rte_mempool |
- +---+-----+--------------------------------------------------------------+
- | 3 | NPC | rte_flow |
- +---+-----+--------------------------------------------------------------+
- | 4 | CPT | rte_cryptodev, rte_event_crypto_adapter |
- +---+-----+--------------------------------------------------------------+
- | 5 | SSO | rte_eventdev |
- +---+-----+--------------------------------------------------------------+
- | 6 | TIM | rte_event_timer_adapter |
- +---+-----+--------------------------------------------------------------+
- | 7 | LBK | rte_ethdev |
- +---+-----+--------------------------------------------------------------+
- | 8 | DPI | rte_rawdev |
- +---+-----+--------------------------------------------------------------+
- | 9 | SDP | rte_ethdev |
- +---+-----+--------------------------------------------------------------+
- | 10| REE | rte_regexdev |
- +---+-----+--------------------------------------------------------------+
-
-PF0 is called the administrative/admin function (AF) and has exclusive
-privileges to provision the RVU functional blocks' LFs to each PF/VF.
-
-PF/VFs communicate with the AF via a shared memory region (mailbox). Upon
-receiving requests from a PF/VF, the AF performs resource provisioning and
-other HW configuration.
-
-The AF is always attached to the host, but PF/VFs may be used by the host
-kernel itself, attached to VMs, or attached to userspace applications such as
-DPDK. The AF therefore has to handle provisioning/configuration requests sent
-by any device from any domain.
-
-The AF driver does not receive or process any data.
-It is only a configuration driver used in the control path.
-
-The :numref:`figure_octeontx2_resource_virtualization` diagram also shows a
-resource provisioning example where:
-
-1. PFx and PFx-VF0 are bound to the Linux netdev driver.
-2. PFx-VF1 ethdev driver is bound to the first DPDK application.
-3. PFy ethdev driver, PFy-VF0 ethdev driver, PFz eventdev driver and
-   PFm-VF0 cryptodev driver are bound to the second DPDK application.
-
-LBK HW Access
--------------
-
-The loopback HW unit (LBK) receives packets from NIX-RX and sends packets back
-to NIX-TX. The loopback block has N channels and contains data buffering that
-is shared across all channels. The LBK HW unit is abstracted through the
-ethdev subsystem, where PF0's VFs are exposed as ethdev devices and odd-even
-pairs of VFs are tied together; that is, packets sent on an odd VF are
-received on the even VF and vice versa. This enables a HW-accelerated means of
-communication between two domains, with the even VF bound to the first domain
-and the odd VF bound to the second domain. A sketch of using such a VF pair
-follows the list below.
-
-Typical application usage models are:
-
-#. Communication between the Linux kernel and a DPDK application.
-#. Exception path to the Linux kernel from a DPDK application, as a SW
-   ``KNI`` replacement.
-#. Communication between two different DPDK applications.
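-
-As a minimal sketch, assuming hypothetical PCI addresses for an odd-even LBK
-VF pair (the actual addresses and binary path depend on the board and build),
-the pair can be bound to ``vfio-pci`` and exercised with ``testpmd``:
-
-.. code-block:: console
-
-   # Bind an odd-even pair of PF0's LBK VFs (addresses are examples only)
-   usertools/dpdk-devbind.py --bind vfio-pci 0002:01:00.1 0002:01:00.2
-
-   # Packets sent on one VF of the pair are received on the other
-   ./build/app/dpdk-testpmd -a 0002:01:00.1 -a 0002:01:00.2 -- -i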
-
-SDP interface
--------------
-
-The System DPI Packet Interface unit (SDP) provides PCIe endpoint support for
-a remote host to DMA packets into and out of the OCTEON TX2 SoC. The SDP
-interface comes into play only when the OCTEON TX2 SoC is connected in PCIe
-endpoint mode. It can be used to send/receive packets to/from a remote host
-machine using the input/output queue pairs exposed to it. The SDP interface
-receives input packets from the remote host via NIX-RX and sends packets to
-the remote host using NIX-TX. The remote host machine needs to use the
-corresponding driver (kernel/user mode) to communicate with the SDP interface
-on the OCTEON TX2 SoC. SDP supports a single PCIe SRIOV physical function (PF)
-and multiple virtual functions (VFs). Users can bind the PF or a VF to use the
-SDP interface, and it will be enumerated as an ethdev port, as sketched below.
-
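-As an illustrative sketch, binding an SDP VF (the PCI address below is
-hypothetical) makes it show up as an ethdev port to a DPDK application:
-
-.. code-block:: console
-
-   usertools/dpdk-devbind.py --bind vfio-pci 0002:0f:00.1
-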
-The primary use case for SDP is enabling smart NICs. Typical usage models are:
-
-#. Communication channel between a remote host and the OCTEON TX2 SoC over PCIe.
-#. Transfer of packets received from the network interface to the remote host
-   over PCIe, and vice versa.
-
-OCTEON TX2 packet flow
-----------------------
-
-The :numref:`figure_octeontx2_packet_flow_hw_accelerators` diagram depicts
-the packet flow on the OCTEON TX2 SoC in conjunction with the use of various
-HW accelerators.
-
-.. _figure_octeontx2_packet_flow_hw_accelerators:
-
-.. figure:: img/octeontx2_packet_flow_hw_accelerators.*
-
- OCTEON TX2 packet flow in conjunction with use of HW accelerators
-
-HW Offload Drivers
-------------------
-
-This section lists the dataplane H/W blocks available in the OCTEON TX2 SoC.
-
-#. **Ethdev Driver**
- See :doc:`../nics/octeontx2` for NIX Ethdev driver information.
-
-#. **Mempool Driver**
- See :doc:`../mempool/octeontx2` for NPA mempool driver information.
-
-#. **Event Device Driver**
- See :doc:`../eventdevs/octeontx2` for SSO event device driver information.
-
-#. **Crypto Device Driver**
- See :doc:`../cryptodevs/octeontx2` for CPT crypto device driver information.
-
-Procedure to Set Up Platform
-----------------------------
-
-There are three main prerequisites for setting up DPDK on an OCTEON TX2
-compatible board:
-
-1. **OCTEON TX2 Linux kernel driver**
-
-   The dependent kernel drivers can be obtained from
-   `kernel.org <https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/drivers/net/ethernet/marvell/octeontx2>`_.
-
- Alternatively, the Marvell SDK also provides the required kernel drivers.
-
-   The Linux kernel should be configured with the following features enabled:
-
-.. code-block:: console
-
- # 64K pages enabled for better performance
- CONFIG_ARM64_64K_PAGES=y
- CONFIG_ARM64_VA_BITS_48=y
- # huge pages support enabled
- CONFIG_HUGETLBFS=y
- CONFIG_HUGETLB_PAGE=y
- # VFIO enabled with TYPE1 IOMMU at minimum
- CONFIG_VFIO_IOMMU_TYPE1=y
- CONFIG_VFIO_VIRQFD=y
- CONFIG_VFIO=y
- CONFIG_VFIO_NOIOMMU=y
- CONFIG_VFIO_PCI=y
- CONFIG_VFIO_PCI_MMAP=y
- # SMMUv3 driver
- CONFIG_ARM_SMMU_V3=y
- # ARMv8.1 LSE atomics
- CONFIG_ARM64_LSE_ATOMICS=y
- # OCTEONTX2 drivers
- CONFIG_OCTEONTX2_MBOX=y
- CONFIG_OCTEONTX2_AF=y
- # Enable if netdev PF driver required
- CONFIG_OCTEONTX2_PF=y
- # Enable if netdev VF driver required
- CONFIG_OCTEONTX2_VF=y
- CONFIG_CRYPTO_DEV_OCTEONTX2_CPT=y
- # Enable if OCTEONTX2 DMA PF driver required
- CONFIG_OCTEONTX2_DPI_PF=n
-
-2. **ARM64 Linux Tool Chain**
-
- For example, the *aarch64* Linaro Toolchain, which can be obtained from
- `here <https://releases.linaro.org/components/toolchain/binaries/7.4-2019.02/aarch64-linux-gnu/>`_.
-
-   Alternatively, the Marvell SDK also provides a GNU GCC toolchain, which is
-   optimized for the OCTEON TX2 CPU.
-
-3. **Root file system**
-
-   Any *aarch64*-supporting root filesystem may be used, for example the
-   Ubuntu 15.10 (Wily) or 16.04 LTS (Xenial) userland, which can be obtained
-   from `<http://cdimage.ubuntu.com/ubuntu-base/releases/16.04/release/ubuntu-base-16.04.1-base-arm64.tar.gz>`_.
-
-   Alternatively, the Marvell SDK provides a buildroot-based root filesystem.
-   The SDK includes all the above prerequisites necessary to bring up the
-   OCTEON TX2 board.
-
-- Follow the DPDK :doc:`../linux_gsg/index` to set up the basic DPDK
-  environment; a minimal hugepage setup sketch is shown below.
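-
-As a minimal sketch, assuming a kernel built with 64K pages (so the default
-hugepage size is 512MB) and an example page count, hugepages can be reserved
-and mounted as follows:
-
-.. code-block:: console
-
-   # Reserve 16 x 512MB hugepages (the count is an example only)
-   echo 16 > /sys/kernel/mm/hugepages/hugepages-524288kB/nr_hugepages
-   mkdir -p /dev/huge
-   mount -t hugetlbfs nodev /dev/huge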
-
-
-Debugging Options
------------------
-
-.. _table_octeontx2_common_debug_options:
-
-.. table:: OCTEON TX2 common debug options
-
- +---+------------+-------------------------------------------------------+
- | # | Component | EAL log command |
- +===+============+=======================================================+
- | 1 | Common | --log-level='pmd\.octeontx2\.base,8' |
- +---+------------+-------------------------------------------------------+
- | 2 | Mailbox | --log-level='pmd\.octeontx2\.mbox,8' |
- +---+------------+-------------------------------------------------------+
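-
-For example, mailbox debug logs can be enabled when launching an application;
-the ``testpmd`` invocation below is illustrative (the binary path may differ):
-
-.. code-block:: console
-
-   ./build/app/dpdk-testpmd --log-level='pmd\.octeontx2\.mbox,8' -- -i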
-
-Debugfs support
-~~~~~~~~~~~~~~~
-
-The **OCTEON TX2 Linux kernel driver** provides support for dumping RVU
-block context or stats using debugfs.
-
-Enable ``debugfs`` by:
-
-1. Compile the kernel with debugfs enabled, i.e. ``CONFIG_DEBUG_FS=y``.
-2. Boot OCTEON TX2 with the debugfs-enabled kernel.
-3. Verify that ``debugfs`` is mounted by default (``mount | grep -i debugfs``)
-   or mount it manually:
-
-.. code-block:: console
-
- # mount -t debugfs none /sys/kernel/debug
-
-Currently ``debugfs`` supports the following RVU blocks: NIX, NPA, NPC, NDC,
-SSO and CGX.
-
-The file structure under ``/sys/kernel/debug`` is as follows:
-
-.. code-block:: console
-
- octeontx2/
- |-- cgx
- | |-- cgx0
- | | '-- lmac0
- | | '-- stats
- | |-- cgx1
- | | |-- lmac0
- | | | '-- stats
- | | '-- lmac1
- | | '-- stats
- | '-- cgx2
- | '-- lmac0
- | '-- stats
- |-- cpt
- | |-- cpt_engines_info
- | |-- cpt_engines_sts
- | |-- cpt_err_info
- | |-- cpt_lfs_info
- | '-- cpt_pc
-   |-- nix
- | |-- cq_ctx
- | |-- ndc_rx_cache
- | |-- ndc_rx_hits_miss
- | |-- ndc_tx_cache
- | |-- ndc_tx_hits_miss
- | |-- qsize
- | |-- rq_ctx
- | |-- sq_ctx
- | '-- tx_stall_hwissue
- |-- npa
- | |-- aura_ctx
- | |-- ndc_cache
- | |-- ndc_hits_miss
- | |-- pool_ctx
- | '-- qsize
- |-- npc
- | |-- mcam_info
- | '-- rx_miss_act_stats
- |-- rsrc_alloc
- '-- sso
- |-- hws
- | '-- sso_hws_info
- '-- hwgrp
- |-- sso_hwgrp_aq_thresh
- |-- sso_hwgrp_iaq_walk
- |-- sso_hwgrp_pc
- |-- sso_hwgrp_free_list_walk
- |-- sso_hwgrp_ient_walk
- '-- sso_hwgrp_taq_walk
-
-RVU block LF allocation:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/rsrc_alloc
-
- pcifunc NPA NIX SSO GROUP SSOWS TIM CPT
- PF1 0 0
- PF4 1
- PF13 0, 1 0, 1 0
-
-CGX example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/cgx/cgx2/lmac0/stats
-
- =======Link Status======
- Link is UP 40000 Mbps
- =======RX_STATS======
- Received packets: 0
- Octets of received packets: 0
- Received PAUSE packets: 0
- Received PAUSE and control packets: 0
- Filtered DMAC0 (NIX-bound) packets: 0
- Filtered DMAC0 (NIX-bound) octets: 0
- Packets dropped due to RX FIFO full: 0
- Octets dropped due to RX FIFO full: 0
- Error packets: 0
- Filtered DMAC1 (NCSI-bound) packets: 0
- Filtered DMAC1 (NCSI-bound) octets: 0
- NCSI-bound packets dropped: 0
- NCSI-bound octets dropped: 0
- =======TX_STATS======
- Packets dropped due to excessive collisions: 0
- Packets dropped due to excessive deferral: 0
- Multiple collisions before successful transmission: 0
- Single collisions before successful transmission: 0
- Total octets sent on the interface: 0
- Total frames sent on the interface: 0
- Packets sent with an octet count < 64: 0
- Packets sent with an octet count == 64: 0
-   Packets sent with an octet count of 65-127: 0
- Packets sent with an octet count of 128-255: 0
- Packets sent with an octet count of 256-511: 0
- Packets sent with an octet count of 512-1023: 0
- Packets sent with an octet count of 1024-1518: 0
- Packets sent with an octet count of > 1518: 0
- Packets sent to a broadcast DMAC: 0
- Packets sent to the multicast DMAC: 0
- Transmit underflow and were truncated: 0
- Control/PAUSE packets sent: 0
-
-CPT example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/cpt/cpt_pc
-
- CPT instruction requests 0
- CPT instruction latency 0
- CPT NCB read requests 0
- CPT NCB read latency 0
- CPT read requests caused by UC fills 0
- CPT active cycles pc 1395642
- CPT clock count pc 5579867595493
-
-NIX example usage:
-
-.. code-block:: console
-
- Usage: echo <nixlf> [cq number/all] > /sys/kernel/debug/octeontx2/nix/cq_ctx
- cat /sys/kernel/debug/octeontx2/nix/cq_ctx
- echo 0 0 > /sys/kernel/debug/octeontx2/nix/cq_ctx
- cat /sys/kernel/debug/octeontx2/nix/cq_ctx
-
- =====cq_ctx for nixlf:0 and qidx:0 is=====
- W0: base 158ef1a00
-
- W1: wrptr 0
- W1: avg_con 0
- W1: cint_idx 0
- W1: cq_err 0
- W1: qint_idx 0
- W1: bpid 0
- W1: bp_ena 0
-
- W2: update_time 31043
- W2:avg_level 255
- W2: head 0
- W2:tail 0
-
- W3: cq_err_int_ena 5
- W3:cq_err_int 0
- W3: qsize 4
- W3:caching 1
- W3: substream 0x000
- W3: ena 1
- W3: drop_ena 1
- W3: drop 64
- W3: bp 0
-
-NPA example usage:
-
-.. code-block:: console
-
- Usage: echo <npalf> [pool number/all] > /sys/kernel/debug/octeontx2/npa/pool_ctx
- cat /sys/kernel/debug/octeontx2/npa/pool_ctx
- echo 0 0 > /sys/kernel/debug/octeontx2/npa/pool_ctx
- cat /sys/kernel/debug/octeontx2/npa/pool_ctx
-
- ======POOL : 0=======
- W0: Stack base 1375bff00
- W1: ena 1
- W1: nat_align 1
- W1: stack_caching 1
- W1: stack_way_mask 0
- W1: buf_offset 1
- W1: buf_size 19
- W2: stack_max_pages 24315
- W2: stack_pages 24314
- W3: op_pc 267456
- W4: stack_offset 2
- W4: shift 5
- W4: avg_level 255
- W4: avg_con 0
- W4: fc_ena 0
- W4: fc_stype 0
- W4: fc_hyst_bits 0
- W4: fc_up_crossing 0
- W4: update_time 62993
- W5: fc_addr 0
- W6: ptr_start 1593adf00
- W7: ptr_end 180000000
- W8: err_int 0
- W8: err_int_ena 7
- W8: thresh_int 0
- W8: thresh_int_ena 0
- W8: thresh_up 0
- W8: thresh_qint_idx 0
- W8: err_qint_idx 0
-
-NPC example usage:
-
-.. code-block:: console
-
- cat /sys/kernel/debug/octeontx2/npc/mcam_info
-
- NPC MCAM info:
- RX keywidth : 224bits
- TX keywidth : 224bits
-
- MCAM entries : 2048
- Reserved : 158
- Available : 1890
-
- MCAM counters : 512
- Reserved : 1
- Available : 511
-
-SSO example usage:
-
-.. code-block:: console
-
- Usage: echo [<hws>/all] > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info
- echo 0 > /sys/kernel/debug/octeontx2/sso/hws/sso_hws_info
-
- ==================================================
- SSOW HWS[0] Arbitration State 0x0
- SSOW HWS[0] Guest Machine Control 0x0
- SSOW HWS[0] SET[0] Group Mask[0] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[1] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[2] 0xffffffffffffffff
- SSOW HWS[0] SET[0] Group Mask[3] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[0] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[1] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[2] 0xffffffffffffffff
- SSOW HWS[0] SET[1] Group Mask[3] 0xffffffffffffffff
- ==================================================
-
-Compile DPDK
-------------
-
-DPDK may be compiled either natively on the OCTEON TX2 platform or
-cross-compiled on an x86-based platform.
-
-Native Compilation
-~~~~~~~~~~~~~~~~~~
-
-.. code-block:: console
-
- meson build
- ninja -C build
-
-Cross Compilation
-~~~~~~~~~~~~~~~~~
-
-Refer to :doc:`../linux_gsg/cross_build_dpdk_for_arm64` for generic arm64 details.
-
-.. code-block:: console
-
- meson build --cross-file config/arm/arm64_octeontx2_linux_gcc
- ninja -C build
-
-.. note::
-
-   By default, meson cross compilation uses the ``aarch64-linux-gnu-gcc``
-   toolchain. If the Marvell toolchain is available, it can be used instead by
-   overriding the c, cpp, ar and strip ``binaries`` attributes with the
-   respective Marvell toolchain binaries in the
-   ``config/arm/arm64_octeontx2_linux_gcc`` file, as sketched below.
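-
-   For illustration only, the overridden section of the cross file might look
-   like this (the ``aarch64-marvell-linux-gnu-`` prefix is an assumed
-   toolchain name):
-
-   .. code-block:: console
-
-      [binaries]
-      c = 'aarch64-marvell-linux-gnu-gcc'
-      cpp = 'aarch64-marvell-linux-gnu-g++'
-      ar = 'aarch64-marvell-linux-gnu-ar'
-      strip = 'aarch64-marvell-linux-gnu-strip'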
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 5581822d10..4e5b23c53d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,20 +125,3 @@ Deprecation Notices
applications should be updated to use the ``dmadev`` library instead,
with the underlying HW-functionality being provided by the ``ioat`` or
``idxd`` dma drivers
-
-* drivers/octeontx2: remove octeontx2 drivers
-
- In view of enabling a unified driver for ``octeontx2(cn9k)``/``octeontx3(cn10k)``,
- the ``drivers/octeontx2`` drivers will be removed and replaced with ``drivers/cnxk/``,
- which supports both ``octeontx2(cn9k)`` and ``octeontx3(cn10k)`` SoCs.
- This deprecation notice covers the following actions, planned for the DPDK v22.02 release.
-
- #. Replace ``drivers/common/octeontx2/`` with ``drivers/common/cnxk/``
- #. Replace ``drivers/mempool/octeontx2/`` with ``drivers/mempool/cnxk/``
- #. Replace ``drivers/net/octeontx2/`` with ``drivers/net/cnxk/``
- #. Replace ``drivers/event/octeontx2/`` with ``drivers/event/cnxk/``
- #. Replace ``drivers/crypto/octeontx2/`` with ``drivers/crypto/cnxk/``
- #. Rename ``drivers/regex/octeontx2/`` to ``drivers/regex/cn9k/``
- #. Rename ``config/arm/arm64_octeontx2_linux_gcc`` to ``config/arm/arm64_cn9k_linux_gcc``
-
- The last two actions align the naming convention with the cnxk scheme.
diff --git a/doc/guides/rel_notes/release_19_08.rst b/doc/guides/rel_notes/release_19_08.rst
index 1a0e6111d7..2f6973cef3 100644
--- a/doc/guides/rel_notes/release_19_08.rst
+++ b/doc/guides/rel_notes/release_19_08.rst
@@ -146,17 +146,17 @@ New Features
of via software, reducing cycles spent copying large blocks of data in
applications.
-* **Added Marvell OCTEON TX2 drivers.**
+* **Added Marvell OCTEON 9 drivers.**
Added the new ``ethdev``, ``eventdev``, ``mempool``, ``eventdev Rx adapter``,
``eventdev Tx adapter``, ``eventdev Timer adapter`` and ``rawdev DMA``
- drivers for various HW co-processors available in ``OCTEON TX2`` SoC.
+ drivers for various HW co-processors available in ``OCTEON 9`` SoC.
- See :doc:`../platform/octeontx2` and driver information:
+ See ``platform/octeontx2`` and driver information:
- * :doc:`../nics/octeontx2`
- * :doc:`../mempool/octeontx2`
- * :doc:`../eventdevs/octeontx2`
+ * ``nics/octeontx2``
+ * ``mempool/octeontx2``
+ * ``eventdevs/octeontx2``
* ``rawdevs/octeontx2_dma``
* **Introduced the Intel NTB PMD.**
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 302b3e5f37..6c3aa14c0d 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -187,12 +187,12 @@ New Features
Added support for asymmetric operations to Marvell OCTEON TX crypto PMD.
Supports RSA and modexp operations.
-* **Added Marvell OCTEON TX2 crypto PMD.**
+* **Added Marvell OCTEON 9 crypto PMD.**
- Added a new PMD for hardware crypto offload block on ``OCTEON TX2``
+ Added a new PMD for hardware crypto offload block on ``OCTEON 9``
SoC.
- See :doc:`../cryptodevs/octeontx2` for more details
+ See ``cryptodevs/octeontx2`` for more details
* **Updated NXP crypto PMDs for PDCP support.**
diff --git a/doc/guides/rel_notes/release_20_02.rst b/doc/guides/rel_notes/release_20_02.rst
index 925985b4f8..daeca868e0 100644
--- a/doc/guides/rel_notes/release_20_02.rst
+++ b/doc/guides/rel_notes/release_20_02.rst
@@ -175,18 +175,18 @@ New Features
armv8 crypto library is not used anymore. The library name has been changed
from armv8_crypto to AArch64crypto.
-* **Added inline IPsec support to Marvell OCTEON TX2 PMD.**
+* **Added inline IPsec support to Marvell OCTEON 9 PMD.**
- Added inline IPsec support to Marvell OCTEON TX2 PMD. With this feature,
+ Added inline IPsec support to Marvell OCTEON 9 PMD. With this feature,
applications will be able to offload entire IPsec processing to the hardware.
For the configured sessions, hardware will do the lookup and perform
decryption and IPsec transformation. For the outbound path, applications
can submit a plain packet to the PMD, and it will be sent out on the wire
after doing encryption and IPsec transformation of the packet.
-* **Added Marvell OCTEON TX2 End Point rawdev PMD.**
+* **Added Marvell OCTEON 9 End Point rawdev PMD.**
- Added a new OCTEON TX2 rawdev PMD for End Point mode of operation.
+ Added a new OCTEON 9 rawdev PMD for End Point mode of operation.
See ``rawdevs/octeontx2_ep`` for more details on this new PMD.
* **Added event mode to l3fwd sample application.**
diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index a38c6c673d..b853f00ae6 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -116,9 +116,9 @@ New Features
* Added support for DCF (Device Config Function) feature.
* Added switch filter support for Intel DCF.
-* **Updated Marvell OCTEON TX2 ethdev driver.**
+* **Updated Marvell OCTEON 9 ethdev driver.**
- Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
+ Updated Marvell OCTEON 9 ethdev driver with traffic manager support,
including:
* Hierarchical Scheduling with DWRR and SP.
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index 445e40fbac..e597cd0130 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -183,11 +183,11 @@ New Features
* Added support for Intel GEN2 QuickAssist device 200xx
(PF device id 0x18ee, VF device id 0x18ef).
-* **Updated the OCTEON TX2 crypto PMD.**
+* **Updated the OCTEON 9 crypto PMD.**
- * Added Chacha20-Poly1305 AEAD algorithm support in OCTEON TX2 crypto PMD.
+ * Added Chacha20-Poly1305 AEAD algorithm support in OCTEON 9 crypto PMD.
- * Updated the OCTEON TX2 crypto PMD to support ``rte_security`` lookaside
+ * Updated the OCTEON 9 crypto PMD to support ``rte_security`` lookaside
protocol offload for IPsec.
* **Added support for BPF_ABS/BPF_IND load instructions.**
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 7fd15398e4..4ce9b6aea9 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -265,9 +265,9 @@ New Features
* Added AES-GCM support.
* Added cipher only offload support.
-* **Updated Marvell OCTEON TX2 crypto PMD.**
+* **Updated Marvell OCTEON 9 crypto PMD.**
- * Updated the OCTEON TX2 crypto PMD lookaside protocol offload for IPsec with
+ * Updated the OCTEON 9 crypto PMD lookaside protocol offload for IPsec with
IPv6 support.
* **Updated Intel QAT PMD.**
@@ -286,9 +286,9 @@ New Features
``rte_security_pdcp_xform`` in ``rte_security`` lib is updated to enable
5G NR processing of SDAP headers in PMDs.
-* **Added Marvell OCTEON TX2 regex PMD.**
+* **Added Marvell OCTEON 9 regex PMD.**
- Added a new PMD for the hardware regex offload block for OCTEON TX2 SoC.
+ Added a new PMD for the hardware regex offload block for OCTEON 9 SoC.
See ``regexdevs/octeontx2`` for more details.
diff --git a/doc/guides/rel_notes/release_21_02.rst b/doc/guides/rel_notes/release_21_02.rst
index 5fbf5b3d43..ac996dce95 100644
--- a/doc/guides/rel_notes/release_21_02.rst
+++ b/doc/guides/rel_notes/release_21_02.rst
@@ -123,14 +123,14 @@ New Features
enable applications to add/remove user callbacks which get called
for every enqueue/dequeue operation.
-* **Updated the OCTEON TX2 crypto PMD.**
+* **Updated the OCTEON 9 crypto PMD.**
- * Updated the OCTEON TX2 crypto PMD lookaside protocol offload for IPsec with
+ * Updated the OCTEON 9 crypto PMD lookaside protocol offload for IPsec with
ESN and anti-replay support.
- * Updated the OCTEON TX2 crypto PMD with CN98xx support.
- * Added support for aes-cbc sha1-hmac cipher combination in OCTEON TX2 crypto
+ * Updated the OCTEON 9 crypto PMD with CN98xx support.
+ * Added support for aes-cbc sha1-hmac cipher combination in OCTEON 9 crypto
PMD lookaside protocol offload for IPsec.
- * Added support for aes-cbc sha256-128-hmac cipher combination in OCTEON TX2
+ * Added support for aes-cbc sha256-128-hmac cipher combination in OCTEON 9
crypto PMD lookaside protocol offload for IPsec.
* **Added mlx5 compress PMD.**
diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst
index 49044ed422..89a261e5f5 100644
--- a/doc/guides/rel_notes/release_21_05.rst
+++ b/doc/guides/rel_notes/release_21_05.rst
@@ -121,7 +121,7 @@ New Features
* Added GTPU TEID support for DCF switch filter.
* Added flow priority support for DCF switch filter.
-* **Updated Marvell OCTEON TX2 ethdev driver.**
+* **Updated Marvell OCTEON 9 ethdev driver.**
* Added support for flow action port id.
@@ -187,9 +187,9 @@ New Features
* Added support for ``DIGEST_ENCRYPTED`` mode in the OCTEON TX crypto PMD.
-* **Updated the OCTEON TX2 crypto PMD.**
+* **Updated the OCTEON 9 crypto PMD.**
- * Added support for ``DIGEST_ENCRYPTED`` mode in OCTEON TX2 crypto PMD.
+ * Added support for ``DIGEST_ENCRYPTED`` mode in OCTEON 9 crypto PMD.
* Added support in lookaside protocol offload mode for IPsec with
UDP encapsulation support for NAT Traversal.
* Added support in lookaside protocol offload mode for IPsec with
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index db09ec01ea..f2497f1447 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -54,7 +54,7 @@ New Features
* **Added Marvell CNXK DMA driver.**
Added dmadev driver for the DPI DMA hardware accelerator
- of Marvell OCTEONTX2 and OCTEONTX3 family of SoCs.
+ of the Marvell OCTEON 9 and OCTEON 10 families of SoCs.
* **Added NXP DPAA DMA driver.**
diff --git a/doc/guides/tools/cryptoperf.rst b/doc/guides/tools/cryptoperf.rst
index ce93483291..d3d5ebe4dc 100644
--- a/doc/guides/tools/cryptoperf.rst
+++ b/doc/guides/tools/cryptoperf.rst
@@ -157,7 +157,6 @@ The following are the application command-line options:
crypto_mvsam
crypto_null
crypto_octeontx
- crypto_octeontx2
crypto_openssl
crypto_qat
crypto_scheduler
diff --git a/drivers/common/meson.build b/drivers/common/meson.build
index 4acbad60b1..ea261dd70a 100644
--- a/drivers/common/meson.build
+++ b/drivers/common/meson.build
@@ -8,5 +8,4 @@ drivers = [
'iavf',
'mvep',
'octeontx',
- 'octeontx2',
]
diff --git a/drivers/common/octeontx2/hw/otx2_nix.h b/drivers/common/octeontx2/hw/otx2_nix.h
deleted file mode 100644
index e3b68505b7..0000000000
--- a/drivers/common/octeontx2/hw/otx2_nix.h
+++ /dev/null
@@ -1,1391 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NIX_HW_H__
-#define __OTX2_NIX_HW_H__
-
-/* Register offsets */
-
-#define NIX_AF_CFG (0x0ull)
-#define NIX_AF_STATUS (0x10ull)
-#define NIX_AF_NDC_CFG (0x18ull)
-#define NIX_AF_CONST (0x20ull)
-#define NIX_AF_CONST1 (0x28ull)
-#define NIX_AF_CONST2 (0x30ull)
-#define NIX_AF_CONST3 (0x38ull)
-#define NIX_AF_SQ_CONST (0x40ull)
-#define NIX_AF_CQ_CONST (0x48ull)
-#define NIX_AF_RQ_CONST (0x50ull)
-#define NIX_AF_PSE_CONST (0x60ull)
-#define NIX_AF_TL1_CONST (0x70ull)
-#define NIX_AF_TL2_CONST (0x78ull)
-#define NIX_AF_TL3_CONST (0x80ull)
-#define NIX_AF_TL4_CONST (0x88ull)
-#define NIX_AF_MDQ_CONST (0x90ull)
-#define NIX_AF_MC_MIRROR_CONST (0x98ull)
-#define NIX_AF_LSO_CFG (0xa8ull)
-#define NIX_AF_BLK_RST (0xb0ull)
-#define NIX_AF_TX_TSTMP_CFG (0xc0ull)
-#define NIX_AF_RX_CFG (0xd0ull)
-#define NIX_AF_AVG_DELAY (0xe0ull)
-#define NIX_AF_CINT_DELAY (0xf0ull)
-#define NIX_AF_RX_MCAST_BASE (0x100ull)
-#define NIX_AF_RX_MCAST_CFG (0x110ull)
-#define NIX_AF_RX_MCAST_BUF_BASE (0x120ull)
-#define NIX_AF_RX_MCAST_BUF_CFG (0x130ull)
-#define NIX_AF_RX_MIRROR_BUF_BASE (0x140ull)
-#define NIX_AF_RX_MIRROR_BUF_CFG (0x148ull)
-#define NIX_AF_LF_RST (0x150ull)
-#define NIX_AF_GEN_INT (0x160ull)
-#define NIX_AF_GEN_INT_W1S (0x168ull)
-#define NIX_AF_GEN_INT_ENA_W1S (0x170ull)
-#define NIX_AF_GEN_INT_ENA_W1C (0x178ull)
-#define NIX_AF_ERR_INT (0x180ull)
-#define NIX_AF_ERR_INT_W1S (0x188ull)
-#define NIX_AF_ERR_INT_ENA_W1S (0x190ull)
-#define NIX_AF_ERR_INT_ENA_W1C (0x198ull)
-#define NIX_AF_RAS (0x1a0ull)
-#define NIX_AF_RAS_W1S (0x1a8ull)
-#define NIX_AF_RAS_ENA_W1S (0x1b0ull)
-#define NIX_AF_RAS_ENA_W1C (0x1b8ull)
-#define NIX_AF_RVU_INT (0x1c0ull)
-#define NIX_AF_RVU_INT_W1S (0x1c8ull)
-#define NIX_AF_RVU_INT_ENA_W1S (0x1d0ull)
-#define NIX_AF_RVU_INT_ENA_W1C (0x1d8ull)
-#define NIX_AF_TCP_TIMER (0x1e0ull)
-#define NIX_AF_RX_DEF_OL2 (0x200ull)
-#define NIX_AF_RX_DEF_OIP4 (0x210ull)
-#define NIX_AF_RX_DEF_IIP4 (0x220ull)
-#define NIX_AF_RX_DEF_OIP6 (0x230ull)
-#define NIX_AF_RX_DEF_IIP6 (0x240ull)
-#define NIX_AF_RX_DEF_OTCP (0x250ull)
-#define NIX_AF_RX_DEF_ITCP (0x260ull)
-#define NIX_AF_RX_DEF_OUDP (0x270ull)
-#define NIX_AF_RX_DEF_IUDP (0x280ull)
-#define NIX_AF_RX_DEF_OSCTP (0x290ull)
-#define NIX_AF_RX_DEF_ISCTP (0x2a0ull)
-#define NIX_AF_RX_DEF_IPSECX(a) (0x2b0ull | (uint64_t)(a) << 3)
-#define NIX_AF_RX_IPSEC_GEN_CFG (0x300ull)
-#define NIX_AF_RX_CPTX_INST_QSEL(a) (0x320ull | (uint64_t)(a) << 3)
-#define NIX_AF_RX_CPTX_CREDIT(a) (0x360ull | (uint64_t)(a) << 3)
-#define NIX_AF_NDC_RX_SYNC (0x3e0ull)
-#define NIX_AF_NDC_TX_SYNC (0x3f0ull)
-#define NIX_AF_AQ_CFG (0x400ull)
-#define NIX_AF_AQ_BASE (0x410ull)
-#define NIX_AF_AQ_STATUS (0x420ull)
-#define NIX_AF_AQ_DOOR (0x430ull)
-#define NIX_AF_AQ_DONE_WAIT (0x440ull)
-#define NIX_AF_AQ_DONE (0x450ull)
-#define NIX_AF_AQ_DONE_ACK (0x460ull)
-#define NIX_AF_AQ_DONE_TIMER (0x470ull)
-#define NIX_AF_AQ_DONE_ENA_W1S (0x490ull)
-#define NIX_AF_AQ_DONE_ENA_W1C (0x498ull)
-#define NIX_AF_RX_LINKX_CFG(a) (0x540ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_SW_SYNC (0x550ull)
-#define NIX_AF_RX_LINKX_WRR_CFG(a) (0x560ull | (uint64_t)(a) << 16)
-#define NIX_AF_EXPR_TX_FIFO_STATUS (0x640ull)
-#define NIX_AF_NORM_TX_FIFO_STATUS (0x648ull)
-#define NIX_AF_SDP_TX_FIFO_STATUS (0x650ull)
-#define NIX_AF_TX_NPC_CAPTURE_CONFIG (0x660ull)
-#define NIX_AF_TX_NPC_CAPTURE_INFO (0x668ull)
-#define NIX_AF_TX_NPC_CAPTURE_RESPX(a) (0x680ull | (uint64_t)(a) << 3)
-#define NIX_AF_SEB_ACTIVE_CYCLES_PCX(a) (0x6c0ull | (uint64_t)(a) << 3)
-#define NIX_AF_SMQX_CFG(a) (0x700ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_HEAD(a) (0x710ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_TAIL(a) (0x720ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_STATUS(a) (0x730ull | (uint64_t)(a) << 16)
-#define NIX_AF_SMQX_NXT_HEAD(a) (0x740ull | (uint64_t)(a) << 16)
-#define NIX_AF_SQM_ACTIVE_CYCLES_PC (0x770ull)
-#define NIX_AF_PSE_CHANNEL_LEVEL (0x800ull)
-#define NIX_AF_PSE_SHAPER_CFG (0x810ull)
-#define NIX_AF_PSE_ACTIVE_CYCLES_PC (0x8c0ull)
-#define NIX_AF_MARK_FORMATX_CTL(a) (0x900ull | (uint64_t)(a) << 18)
-#define NIX_AF_TX_LINKX_NORM_CREDIT(a) (0xa00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_EXPR_CREDIT(a) (0xa10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_SW_XOFF(a) (0xa20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_LINKX_HW_XOFF(a) (0xa30ull | (uint64_t)(a) << 16)
-#define NIX_AF_SDP_LINK_CREDIT (0xa40ull)
-#define NIX_AF_SDP_SW_XOFFX(a) (0xa60ull | (uint64_t)(a) << 3)
-#define NIX_AF_SDP_HW_XOFFX(a) (0xac0ull | (uint64_t)(a) << 3)
-#define NIX_AF_TL4X_BP_STATUS(a) (0xb00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SDP_LINK_CFG(a) (0xb10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SCHEDULE(a) (0xc00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SHAPE(a) (0xc10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_CIR(a) (0xc20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SHAPE_STATE(a) (0xc50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_SW_XOFF(a) (0xc70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_TOPOLOGY(a) (0xc80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG0(a) (0xcc0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG1(a) (0xcc8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG2(a) (0xcd0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_MD_DEBUG3(a) (0xcd8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_DROPPED_PACKETS(a) (0xd20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_DROPPED_BYTES(a) (0xd30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_RED_PACKETS(a) (0xd40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_RED_BYTES(a) (0xd50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_YELLOW_PACKETS(a) (0xd60ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_YELLOW_BYTES(a) (0xd70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_GREEN_PACKETS(a) (0xd80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL1X_GREEN_BYTES(a) (0xd90ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE(a) (0xe10ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_CIR(a) (0xe20ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PIR(a) (0xe30ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SCHED_STATE(a) (0xe40ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SHAPE_STATE(a) (0xe50ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_SW_XOFF(a) (0xe70ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_TOPOLOGY(a) (0xe80ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_PARENT(a) (0xe88ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG0(a) (0xec0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG1(a) (0xec8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG2(a) (0xed0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL2X_MD_DEBUG3(a) (0xed8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHEDULE(a) \
- (0x1000ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE(a) \
- (0x1010ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_CIR(a) \
- (0x1020ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PIR(a) \
- (0x1030ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SCHED_STATE(a) \
- (0x1040ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SHAPE_STATE(a) \
- (0x1050ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_SW_XOFF(a) \
- (0x1070ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_TOPOLOGY(a) \
- (0x1080ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_PARENT(a) \
- (0x1088ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG0(a) \
- (0x10c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG1(a) \
- (0x10c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG2(a) \
- (0x10d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3X_MD_DEBUG3(a) \
- (0x10d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHEDULE(a) \
- (0x1200ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE(a) \
- (0x1210ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_CIR(a) \
- (0x1220ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PIR(a) \
- (0x1230ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SCHED_STATE(a) \
- (0x1240ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SHAPE_STATE(a) \
- (0x1250ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_SW_XOFF(a) \
- (0x1270ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_TOPOLOGY(a) \
- (0x1280ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_PARENT(a) \
- (0x1288ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG0(a) \
- (0x12c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG1(a) \
- (0x12c8ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG2(a) \
- (0x12d0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL4X_MD_DEBUG3(a) \
- (0x12d8ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHEDULE(a) \
- (0x1400ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE(a) \
- (0x1410ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_CIR(a) \
- (0x1420ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PIR(a) \
- (0x1430ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SCHED_STATE(a) \
- (0x1440ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SHAPE_STATE(a) \
- (0x1450ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_SW_XOFF(a) \
- (0x1470ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_PARENT(a) \
- (0x1480ull | (uint64_t)(a) << 16)
-#define NIX_AF_MDQX_MD_DEBUG(a) \
- (0x14c0ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_CFG(a) \
- (0x1600ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_BP_STATUS(a) \
- (0x1610ull | (uint64_t)(a) << 16)
-#define NIX_AF_TL3_TL2X_LINKX_CFG(a, b) \
- (0x1700ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_FLOW_KEY_ALGX_FIELDX(a, b) \
- (0x1800ull | (uint64_t)(a) << 18 | (uint64_t)(b) << 3)
-#define NIX_AF_TX_MCASTX(a) \
- (0x1900ull | (uint64_t)(a) << 15)
-#define NIX_AF_TX_VTAG_DEFX_CTL(a) \
- (0x1a00ull | (uint64_t)(a) << 16)
-#define NIX_AF_TX_VTAG_DEFX_DATA(a) \
- (0x1a10ull | (uint64_t)(a) << 16)
-#define NIX_AF_RX_BPIDX_STATUS(a) \
- (0x1a20ull | (uint64_t)(a) << 17)
-#define NIX_AF_RX_CHANX_CFG(a) \
- (0x1a30ull | (uint64_t)(a) << 15)
-#define NIX_AF_CINT_TIMERX(a) \
- (0x1a40ull | (uint64_t)(a) << 18)
-#define NIX_AF_LSO_FORMATX_FIELDX(a, b) \
- (0x1b00ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_CFG(a) \
- (0x4000ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_CFG(a) \
- (0x4020ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG2(a) \
- (0x4028ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_SQS_BASE(a) \
- (0x4030ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_CFG(a) \
- (0x4040ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RQS_BASE(a) \
- (0x4050ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_CFG(a) \
- (0x4060ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CQS_BASE(a) \
- (0x4070ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_CFG(a) \
- (0x4080ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_PARSE_CFG(a) \
- (0x4090ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_CFG(a) \
- (0x40a0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_CFG(a) \
- (0x40c0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RSS_BASE(a) \
- (0x40d0ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_CFG(a) \
- (0x4100ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_QINTS_BASE(a) \
- (0x4110ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_CFG(a) \
- (0x4120ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_CINTS_BASE(a) \
- (0x4130ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG0(a) \
- (0x4140ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_CFG1(a) \
- (0x4148ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_CFG(a) \
- (0x4150ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_DYNO_BASE(a) \
- (0x4158ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_IPSEC_SA_BASE(a) \
- (0x4170ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_TX_STATUS(a) \
- (0x4180ull | (uint64_t)(a) << 17)
-#define NIX_AF_LFX_RX_VTAG_TYPEX(a, b) \
- (0x4200ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_LOCKX(a, b) \
- (0x4300ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_TX_STATX(a, b) \
- (0x4400ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RX_STATX(a, b) \
- (0x4500ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_LFX_RSS_GRPX(a, b) \
- (0x4600ull | (uint64_t)(a) << 17 | (uint64_t)(b) << 3)
-#define NIX_AF_RX_NPC_MC_RCV (0x4700ull)
-#define NIX_AF_RX_NPC_MC_DROP (0x4710ull)
-#define NIX_AF_RX_NPC_MIRROR_RCV (0x4720ull)
-#define NIX_AF_RX_NPC_MIRROR_DROP (0x4730ull)
-#define NIX_AF_RX_ACTIVE_CYCLES_PCX(a) \
- (0x4800ull | (uint64_t)(a) << 16)
-#define NIX_PRIV_AF_INT_CFG (0x8000000ull)
-#define NIX_PRIV_LFX_CFG(a) \
- (0x8000010ull | (uint64_t)(a) << 8)
-#define NIX_PRIV_LFX_INT_CFG(a) \
- (0x8000020ull | (uint64_t)(a) << 8)
-#define NIX_AF_RVU_LF_CFG_DEBUG (0x8000030ull)
-
-#define NIX_LF_RX_SECRETX(a) (0x0ull | (uint64_t)(a) << 3)
-#define NIX_LF_CFG (0x100ull)
-#define NIX_LF_GINT (0x200ull)
-#define NIX_LF_GINT_W1S (0x208ull)
-#define NIX_LF_GINT_ENA_W1C (0x210ull)
-#define NIX_LF_GINT_ENA_W1S (0x218ull)
-#define NIX_LF_ERR_INT (0x220ull)
-#define NIX_LF_ERR_INT_W1S (0x228ull)
-#define NIX_LF_ERR_INT_ENA_W1C (0x230ull)
-#define NIX_LF_ERR_INT_ENA_W1S (0x238ull)
-#define NIX_LF_RAS (0x240ull)
-#define NIX_LF_RAS_W1S (0x248ull)
-#define NIX_LF_RAS_ENA_W1C (0x250ull)
-#define NIX_LF_RAS_ENA_W1S (0x258ull)
-#define NIX_LF_SQ_OP_ERR_DBG (0x260ull)
-#define NIX_LF_MNQ_ERR_DBG (0x270ull)
-#define NIX_LF_SEND_ERR_DBG (0x280ull)
-#define NIX_LF_TX_STATX(a) (0x300ull | (uint64_t)(a) << 3)
-#define NIX_LF_RX_STATX(a) (0x400ull | (uint64_t)(a) << 3)
-#define NIX_LF_OP_SENDX(a) (0x800ull | (uint64_t)(a) << 3)
-#define NIX_LF_RQ_OP_INT (0x900ull)
-#define NIX_LF_RQ_OP_OCTS (0x910ull)
-#define NIX_LF_RQ_OP_PKTS (0x920ull)
-#define NIX_LF_RQ_OP_DROP_OCTS (0x930ull)
-#define NIX_LF_RQ_OP_DROP_PKTS (0x940ull)
-#define NIX_LF_RQ_OP_RE_PKTS (0x950ull)
-#define NIX_LF_OP_IPSEC_DYNO_CNT (0x980ull)
-#define NIX_LF_SQ_OP_INT (0xa00ull)
-#define NIX_LF_SQ_OP_OCTS (0xa10ull)
-#define NIX_LF_SQ_OP_PKTS (0xa20ull)
-#define NIX_LF_SQ_OP_STATUS (0xa30ull)
-#define NIX_LF_SQ_OP_DROP_OCTS (0xa40ull)
-#define NIX_LF_SQ_OP_DROP_PKTS (0xa50ull)
-#define NIX_LF_CQ_OP_INT (0xb00ull)
-#define NIX_LF_CQ_OP_DOOR (0xb30ull)
-#define NIX_LF_CQ_OP_STATUS (0xb40ull)
-#define NIX_LF_QINTX_CNT(a) (0xc00ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_INT(a) (0xc10ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1S(a) (0xc20ull | (uint64_t)(a) << 12)
-#define NIX_LF_QINTX_ENA_W1C(a) (0xc30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_CNT(a) (0xd00ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_WAIT(a) (0xd10ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT(a) (0xd20ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_INT_W1S(a) (0xd30ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1S(a) (0xd40ull | (uint64_t)(a) << 12)
-#define NIX_LF_CINTX_ENA_W1C(a) (0xd50ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NIX_TX_VTAGOP_NOP (0x0ull)
-#define NIX_TX_VTAGOP_INSERT (0x1ull)
-#define NIX_TX_VTAGOP_REPLACE (0x2ull)
-
-#define NIX_TX_ACTIONOP_DROP (0x0ull)
-#define NIX_TX_ACTIONOP_UCAST_DEFAULT (0x1ull)
-#define NIX_TX_ACTIONOP_UCAST_CHAN (0x2ull)
-#define NIX_TX_ACTIONOP_MCAST (0x3ull)
-#define NIX_TX_ACTIONOP_DROP_VIOL (0x5ull)
-
-#define NIX_INTF_RX (0x0ull)
-#define NIX_INTF_TX (0x1ull)
-
-#define NIX_TXLAYER_OL3 (0x0ull)
-#define NIX_TXLAYER_OL4 (0x1ull)
-#define NIX_TXLAYER_IL3 (0x2ull)
-#define NIX_TXLAYER_IL4 (0x3ull)
-
-#define NIX_SUBDC_NOP (0x0ull)
-#define NIX_SUBDC_EXT (0x1ull)
-#define NIX_SUBDC_CRC (0x2ull)
-#define NIX_SUBDC_IMM (0x3ull)
-#define NIX_SUBDC_SG (0x4ull)
-#define NIX_SUBDC_MEM (0x5ull)
-#define NIX_SUBDC_JUMP (0x6ull)
-#define NIX_SUBDC_WORK (0x7ull)
-#define NIX_SUBDC_SOD (0xfull)
-
-#define NIX_STYPE_STF (0x0ull)
-#define NIX_STYPE_STT (0x1ull)
-#define NIX_STYPE_STP (0x2ull)
-
-#define NIX_STAT_LF_TX_TX_UCAST (0x0ull)
-#define NIX_STAT_LF_TX_TX_BCAST (0x1ull)
-#define NIX_STAT_LF_TX_TX_MCAST (0x2ull)
-#define NIX_STAT_LF_TX_TX_DROP (0x3ull)
-#define NIX_STAT_LF_TX_TX_OCTS (0x4ull)
-
-#define NIX_STAT_LF_RX_RX_OCTS (0x0ull)
-#define NIX_STAT_LF_RX_RX_UCAST (0x1ull)
-#define NIX_STAT_LF_RX_RX_BCAST (0x2ull)
-#define NIX_STAT_LF_RX_RX_MCAST (0x3ull)
-#define NIX_STAT_LF_RX_RX_DROP (0x4ull)
-#define NIX_STAT_LF_RX_RX_DROP_OCTS (0x5ull)
-#define NIX_STAT_LF_RX_RX_FCS (0x6ull)
-#define NIX_STAT_LF_RX_RX_ERR (0x7ull)
-#define NIX_STAT_LF_RX_RX_DRP_BCAST (0x8ull)
-#define NIX_STAT_LF_RX_RX_DRP_MCAST (0x9ull)
-#define NIX_STAT_LF_RX_RX_DRP_L3BCAST (0xaull)
-#define NIX_STAT_LF_RX_RX_DRP_L3MCAST (0xbull)
-
-#define NIX_SQOPERR_SQ_OOR (0x0ull)
-#define NIX_SQOPERR_SQ_CTX_FAULT (0x1ull)
-#define NIX_SQOPERR_SQ_CTX_POISON (0x2ull)
-#define NIX_SQOPERR_SQ_DISABLED (0x3ull)
-#define NIX_SQOPERR_MAX_SQE_SIZE_ERR (0x4ull)
-#define NIX_SQOPERR_SQE_OFLOW (0x5ull)
-#define NIX_SQOPERR_SQB_NULL (0x6ull)
-#define NIX_SQOPERR_SQB_FAULT (0x7ull)
-
-#define NIX_XQESZ_W64 (0x0ull)
-#define NIX_XQESZ_W16 (0x1ull)
-
-#define NIX_VTAGSIZE_T4 (0x0ull)
-#define NIX_VTAGSIZE_T8 (0x1ull)
-
-#define NIX_RX_ACTIONOP_DROP (0x0ull)
-#define NIX_RX_ACTIONOP_UCAST (0x1ull)
-#define NIX_RX_ACTIONOP_UCAST_IPSEC (0x2ull)
-#define NIX_RX_ACTIONOP_MCAST (0x3ull)
-#define NIX_RX_ACTIONOP_RSS (0x4ull)
-#define NIX_RX_ACTIONOP_PF_FUNC_DROP (0x5ull)
-#define NIX_RX_ACTIONOP_MIRROR (0x6ull)
-
-#define NIX_RX_VTAGACTION_VTAG0_RELPTR (0x0ull)
-#define NIX_RX_VTAGACTION_VTAG1_RELPTR (0x4ull)
-#define NIX_RX_VTAGACTION_VTAG_VALID (0x1ull)
-#define NIX_TX_VTAGACTION_VTAG0_RELPTR \
- (sizeof(struct nix_inst_hdr_s) + 2 * 6)
-#define NIX_TX_VTAGACTION_VTAG1_RELPTR \
- (sizeof(struct nix_inst_hdr_s) + 2 * 6 + 4)
-#define NIX_RQINT_DROP (0x0ull)
-#define NIX_RQINT_RED (0x1ull)
-#define NIX_RQINT_R2 (0x2ull)
-#define NIX_RQINT_R3 (0x3ull)
-#define NIX_RQINT_R4 (0x4ull)
-#define NIX_RQINT_R5 (0x5ull)
-#define NIX_RQINT_R6 (0x6ull)
-#define NIX_RQINT_R7 (0x7ull)
-
-#define NIX_MAXSQESZ_W16 (0x0ull)
-#define NIX_MAXSQESZ_W8 (0x1ull)
-
-#define NIX_LSOALG_NOP (0x0ull)
-#define NIX_LSOALG_ADD_SEGNUM (0x1ull)
-#define NIX_LSOALG_ADD_PAYLEN (0x2ull)
-#define NIX_LSOALG_ADD_OFFSET (0x3ull)
-#define NIX_LSOALG_TCP_FLAGS (0x4ull)
-
-#define NIX_MNQERR_SQ_CTX_FAULT (0x0ull)
-#define NIX_MNQERR_SQ_CTX_POISON (0x1ull)
-#define NIX_MNQERR_SQB_FAULT (0x2ull)
-#define NIX_MNQERR_SQB_POISON (0x3ull)
-#define NIX_MNQERR_TOTAL_ERR (0x4ull)
-#define NIX_MNQERR_LSO_ERR (0x5ull)
-#define NIX_MNQERR_CQ_QUERY_ERR (0x6ull)
-#define NIX_MNQERR_MAX_SQE_SIZE_ERR (0x7ull)
-#define NIX_MNQERR_MAXLEN_ERR (0x8ull)
-#define NIX_MNQERR_SQE_SIZEM1_ZERO (0x9ull)
-
-#define NIX_MDTYPE_RSVD (0x0ull)
-#define NIX_MDTYPE_FLUSH (0x1ull)
-#define NIX_MDTYPE_PMD (0x2ull)
-
-#define NIX_NDC_TX_PORT_LMT (0x0ull)
-#define NIX_NDC_TX_PORT_ENQ (0x1ull)
-#define NIX_NDC_TX_PORT_MNQ (0x2ull)
-#define NIX_NDC_TX_PORT_DEQ (0x3ull)
-#define NIX_NDC_TX_PORT_DMA (0x4ull)
-#define NIX_NDC_TX_PORT_XQE (0x5ull)
-
-#define NIX_NDC_RX_PORT_AQ (0x0ull)
-#define NIX_NDC_RX_PORT_CQ (0x1ull)
-#define NIX_NDC_RX_PORT_CINT (0x2ull)
-#define NIX_NDC_RX_PORT_MC (0x3ull)
-#define NIX_NDC_RX_PORT_PKT (0x4ull)
-#define NIX_NDC_RX_PORT_RQ (0x5ull)
-
-#define NIX_RE_OPCODE_RE_NONE (0x0ull)
-#define NIX_RE_OPCODE_RE_PARTIAL (0x1ull)
-#define NIX_RE_OPCODE_RE_JABBER (0x2ull)
-#define NIX_RE_OPCODE_RE_FCS (0x7ull)
-#define NIX_RE_OPCODE_RE_FCS_RCV (0x8ull)
-#define NIX_RE_OPCODE_RE_TERMINATE (0x9ull)
-#define NIX_RE_OPCODE_RE_RX_CTL (0xbull)
-#define NIX_RE_OPCODE_RE_SKIP (0xcull)
-#define NIX_RE_OPCODE_RE_DMAPKT (0xfull)
-#define NIX_RE_OPCODE_UNDERSIZE (0x10ull)
-#define NIX_RE_OPCODE_OVERSIZE (0x11ull)
-#define NIX_RE_OPCODE_OL2_LENMISM (0x12ull)
-
-#define NIX_REDALG_STD (0x0ull)
-#define NIX_REDALG_SEND (0x1ull)
-#define NIX_REDALG_STALL (0x2ull)
-#define NIX_REDALG_DISCARD (0x3ull)
-
-#define NIX_RX_MCOP_RQ (0x0ull)
-#define NIX_RX_MCOP_RSS (0x1ull)
-
-#define NIX_RX_PERRCODE_NPC_RESULT_ERR (0x2ull)
-#define NIX_RX_PERRCODE_MCAST_FAULT (0x4ull)
-#define NIX_RX_PERRCODE_MIRROR_FAULT (0x5ull)
-#define NIX_RX_PERRCODE_MCAST_POISON (0x6ull)
-#define NIX_RX_PERRCODE_MIRROR_POISON (0x7ull)
-#define NIX_RX_PERRCODE_DATA_FAULT (0x8ull)
-#define NIX_RX_PERRCODE_MEMOUT (0x9ull)
-#define NIX_RX_PERRCODE_BUFS_OFLOW (0xaull)
-#define NIX_RX_PERRCODE_OL3_LEN (0x10ull)
-#define NIX_RX_PERRCODE_OL4_LEN (0x11ull)
-#define NIX_RX_PERRCODE_OL4_CHK (0x12ull)
-#define NIX_RX_PERRCODE_OL4_PORT (0x13ull)
-#define NIX_RX_PERRCODE_IL3_LEN (0x20ull)
-#define NIX_RX_PERRCODE_IL4_LEN (0x21ull)
-#define NIX_RX_PERRCODE_IL4_CHK (0x22ull)
-#define NIX_RX_PERRCODE_IL4_PORT (0x23ull)
-
-#define NIX_SENDCRCALG_CRC32 (0x0ull)
-#define NIX_SENDCRCALG_CRC32C (0x1ull)
-#define NIX_SENDCRCALG_ONES16 (0x2ull)
-
-#define NIX_SENDL3TYPE_NONE (0x0ull)
-#define NIX_SENDL3TYPE_IP4 (0x2ull)
-#define NIX_SENDL3TYPE_IP4_CKSUM (0x3ull)
-#define NIX_SENDL3TYPE_IP6 (0x4ull)
-
-#define NIX_SENDL4TYPE_NONE (0x0ull)
-#define NIX_SENDL4TYPE_TCP_CKSUM (0x1ull)
-#define NIX_SENDL4TYPE_SCTP_CKSUM (0x2ull)
-#define NIX_SENDL4TYPE_UDP_CKSUM (0x3ull)
-
-#define NIX_SENDLDTYPE_LDD (0x0ull)
-#define NIX_SENDLDTYPE_LDT (0x1ull)
-#define NIX_SENDLDTYPE_LDWB (0x2ull)
-
-#define NIX_SENDMEMALG_SET (0x0ull)
-#define NIX_SENDMEMALG_SETTSTMP (0x1ull)
-#define NIX_SENDMEMALG_SETRSLT (0x2ull)
-#define NIX_SENDMEMALG_ADD (0x8ull)
-#define NIX_SENDMEMALG_SUB (0x9ull)
-#define NIX_SENDMEMALG_ADDLEN (0xaull)
-#define NIX_SENDMEMALG_SUBLEN (0xbull)
-#define NIX_SENDMEMALG_ADDMBUF (0xcull)
-#define NIX_SENDMEMALG_SUBMBUF (0xdull)
-
-#define NIX_SENDMEMDSZ_B64 (0x0ull)
-#define NIX_SENDMEMDSZ_B32 (0x1ull)
-#define NIX_SENDMEMDSZ_B16 (0x2ull)
-#define NIX_SENDMEMDSZ_B8 (0x3ull)
-
-#define NIX_SEND_STATUS_GOOD (0x0ull)
-#define NIX_SEND_STATUS_SQ_CTX_FAULT (0x1ull)
-#define NIX_SEND_STATUS_SQ_CTX_POISON (0x2ull)
-#define NIX_SEND_STATUS_SQB_FAULT (0x3ull)
-#define NIX_SEND_STATUS_SQB_POISON (0x4ull)
-#define NIX_SEND_STATUS_SEND_HDR_ERR (0x5ull)
-#define NIX_SEND_STATUS_SEND_EXT_ERR (0x6ull)
-#define NIX_SEND_STATUS_JUMP_FAULT (0x7ull)
-#define NIX_SEND_STATUS_JUMP_POISON (0x8ull)
-#define NIX_SEND_STATUS_SEND_CRC_ERR (0x10ull)
-#define NIX_SEND_STATUS_SEND_IMM_ERR (0x11ull)
-#define NIX_SEND_STATUS_SEND_SG_ERR (0x12ull)
-#define NIX_SEND_STATUS_SEND_MEM_ERR (0x13ull)
-#define NIX_SEND_STATUS_INVALID_SUBDC (0x14ull)
-#define NIX_SEND_STATUS_SUBDC_ORDER_ERR (0x15ull)
-#define NIX_SEND_STATUS_DATA_FAULT (0x16ull)
-#define NIX_SEND_STATUS_DATA_POISON (0x17ull)
-#define NIX_SEND_STATUS_NPC_DROP_ACTION (0x20ull)
-#define NIX_SEND_STATUS_LOCK_VIOL (0x21ull)
-#define NIX_SEND_STATUS_NPC_UCAST_CHAN_ERR (0x22ull)
-#define NIX_SEND_STATUS_NPC_MCAST_CHAN_ERR (0x23ull)
-#define NIX_SEND_STATUS_NPC_MCAST_ABORT (0x24ull)
-#define NIX_SEND_STATUS_NPC_VTAG_PTR_ERR (0x25ull)
-#define NIX_SEND_STATUS_NPC_VTAG_SIZE_ERR (0x26ull)
-#define NIX_SEND_STATUS_SEND_MEM_FAULT (0x27ull)
-
-#define NIX_SQINT_LMT_ERR (0x0ull)
-#define NIX_SQINT_MNQ_ERR (0x1ull)
-#define NIX_SQINT_SEND_ERR (0x2ull)
-#define NIX_SQINT_SQB_ALLOC_FAIL (0x3ull)
-
-#define NIX_XQE_TYPE_INVALID (0x0ull)
-#define NIX_XQE_TYPE_RX (0x1ull)
-#define NIX_XQE_TYPE_RX_IPSECS (0x2ull)
-#define NIX_XQE_TYPE_RX_IPSECH (0x3ull)
-#define NIX_XQE_TYPE_RX_IPSECD (0x4ull)
-#define NIX_XQE_TYPE_SEND (0x8ull)
-
-#define NIX_AQ_COMP_NOTDONE (0x0ull)
-#define NIX_AQ_COMP_GOOD (0x1ull)
-#define NIX_AQ_COMP_SWERR (0x2ull)
-#define NIX_AQ_COMP_CTX_POISON (0x3ull)
-#define NIX_AQ_COMP_CTX_FAULT (0x4ull)
-#define NIX_AQ_COMP_LOCKERR (0x5ull)
-#define NIX_AQ_COMP_SQB_ALLOC_FAIL (0x6ull)
-
-#define NIX_AF_INT_VEC_RVU (0x0ull)
-#define NIX_AF_INT_VEC_GEN (0x1ull)
-#define NIX_AF_INT_VEC_AQ_DONE (0x2ull)
-#define NIX_AF_INT_VEC_AF_ERR (0x3ull)
-#define NIX_AF_INT_VEC_POISON (0x4ull)
-
-#define NIX_AQINT_GEN_RX_MCAST_DROP (0x0ull)
-#define NIX_AQINT_GEN_RX_MIRROR_DROP (0x1ull)
-#define NIX_AQINT_GEN_TL1_DRAIN (0x3ull)
-#define NIX_AQINT_GEN_SMQ_FLUSH_DONE (0x4ull)
-
-#define NIX_AQ_INSTOP_NOP (0x0ull)
-#define NIX_AQ_INSTOP_INIT (0x1ull)
-#define NIX_AQ_INSTOP_WRITE (0x2ull)
-#define NIX_AQ_INSTOP_READ (0x3ull)
-#define NIX_AQ_INSTOP_LOCK (0x4ull)
-#define NIX_AQ_INSTOP_UNLOCK (0x5ull)
-
-#define NIX_AQ_CTYPE_RQ (0x0ull)
-#define NIX_AQ_CTYPE_SQ (0x1ull)
-#define NIX_AQ_CTYPE_CQ (0x2ull)
-#define NIX_AQ_CTYPE_MCE (0x3ull)
-#define NIX_AQ_CTYPE_RSS (0x4ull)
-#define NIX_AQ_CTYPE_DYNO (0x5ull)
-
-#define NIX_COLORRESULT_GREEN (0x0ull)
-#define NIX_COLORRESULT_YELLOW (0x1ull)
-#define NIX_COLORRESULT_RED_SEND (0x2ull)
-#define NIX_COLORRESULT_RED_DROP (0x3ull)
-
-#define NIX_CHAN_LBKX_CHX(a, b) \
- (0x000ull | ((uint64_t)(a) << 8) | (uint64_t)(b))
-#define NIX_CHAN_R4 (0x400ull)
-#define NIX_CHAN_R5 (0x500ull)
-#define NIX_CHAN_R6 (0x600ull)
-#define NIX_CHAN_SDP_CH_END (0x7ffull)
-#define NIX_CHAN_SDP_CH_START (0x700ull)
-#define NIX_CHAN_CGXX_LMACX_CHX(a, b, c) \
- (0x800ull | ((uint64_t)(a) << 8) | ((uint64_t)(b) << 4) | \
- (uint64_t)(c))
-
-#define NIX_INTF_SDP (0x4ull)
-#define NIX_INTF_CGX0 (0x0ull)
-#define NIX_INTF_CGX1 (0x1ull)
-#define NIX_INTF_CGX2 (0x2ull)
-#define NIX_INTF_LBK0 (0x3ull)
-
-#define NIX_CQERRINT_DOOR_ERR (0x0ull)
-#define NIX_CQERRINT_WR_FULL (0x1ull)
-#define NIX_CQERRINT_CQE_FAULT (0x2ull)
-
-#define NIX_LF_INT_VEC_GINT (0x80ull)
-#define NIX_LF_INT_VEC_ERR_INT (0x81ull)
-#define NIX_LF_INT_VEC_POISON (0x82ull)
-#define NIX_LF_INT_VEC_QINT_END (0x3full)
-#define NIX_LF_INT_VEC_QINT_START (0x0ull)
-#define NIX_LF_INT_VEC_CINT_END (0x7full)
-#define NIX_LF_INT_VEC_CINT_START (0x40ull)
-
-/* Enums definitions */
-
-/* Structures definitions */
-
-/* NIX admin queue instruction structure */
-struct nix_aq_inst_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t lf : 7;
- uint64_t rsvd_23_15 : 9;
- uint64_t cindex : 20;
- uint64_t rsvd_62_44 : 19;
- uint64_t doneint : 1;
- uint64_t res_addr : 64; /* W1 */
-};
-
-/* NIX admin queue result structure */
-struct nix_aq_res_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t compcode : 8;
- uint64_t doneint : 1;
- uint64_t rsvd_63_17 : 47;
- uint64_t rsvd_127_64 : 64; /* W1 */
-};
-
-/* NIX completion interrupt context hardware structure */
-struct nix_cint_hw_s {
- uint64_t ecount : 32;
- uint64_t qcount : 16;
- uint64_t intr : 1;
- uint64_t ena : 1;
- uint64_t timer_idx : 8;
- uint64_t rsvd_63_58 : 6;
- uint64_t ecount_wait : 32;
- uint64_t qcount_wait : 16;
- uint64_t time_wait : 8;
- uint64_t rsvd_127_120 : 8;
-};
-
-/* NIX completion queue entry header structure */
-struct nix_cqe_hdr_s {
- uint64_t tag : 32;
- uint64_t q : 20;
- uint64_t rsvd_57_52 : 6;
- uint64_t node : 2;
- uint64_t cqe_type : 4;
-};
-
-/* NIX completion queue context structure */
-struct nix_cq_ctx_s {
- uint64_t base : 64;/* W0 */
- uint64_t rsvd_67_64 : 4;
- uint64_t bp_ena : 1;
- uint64_t rsvd_71_69 : 3;
- uint64_t bpid : 9;
- uint64_t rsvd_83_81 : 3;
- uint64_t qint_idx : 7;
- uint64_t cq_err : 1;
- uint64_t cint_idx : 7;
- uint64_t avg_con : 9;
- uint64_t wrptr : 20;
- uint64_t tail : 20;
- uint64_t head : 20;
- uint64_t avg_level : 8;
- uint64_t update_time : 16;
- uint64_t bp : 8;
- uint64_t drop : 8;
- uint64_t drop_ena : 1;
- uint64_t ena : 1;
- uint64_t rsvd_211_210 : 2;
- uint64_t substream : 20;
- uint64_t caching : 1;
- uint64_t rsvd_235_233 : 3;
- uint64_t qsize : 4;
- uint64_t cq_err_int : 8;
- uint64_t cq_err_int_ena : 8;
-};
-
-/* NIX instruction header structure */
-struct nix_inst_hdr_s {
- uint64_t pf_func : 16;
- uint64_t sq : 20;
- uint64_t rsvd_63_36 : 28;
-};
-
-/* NIX i/o virtual address structure */
-struct nix_iova_s {
- uint64_t addr : 64; /* W0 */
-};
-
-/* NIX IPsec dynamic ordering counter structure */
-struct nix_ipsec_dyno_s {
- uint32_t count : 32; /* W0 */
-};
-
-/* NIX memory value structure */
-struct nix_mem_result_s {
- uint64_t v : 1;
- uint64_t color : 2;
- uint64_t rsvd_63_3 : 61;
-};
-
-/* NIX statistics operation write data structure */
-struct nix_op_q_wdata_s {
- uint64_t rsvd_31_0 : 32;
- uint64_t q : 20;
- uint64_t rsvd_63_52 : 12;
-};
-
-/* NIX queue interrupt context hardware structure */
-struct nix_qint_hw_s {
- uint32_t count : 22;
- uint32_t rsvd_30_22 : 9;
- uint32_t ena : 1;
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_hw_s {
- uint64_t ena : 1;
- uint64_t sso_ena : 1;
- uint64_t ipsech_ena : 1;
- uint64_t ena_wqwd : 1;
- uint64_t cq : 20;
- uint64_t substream : 20;
- uint64_t wqe_aura : 20;
- uint64_t spb_aura : 20;
- uint64_t lpb_aura : 20;
- uint64_t sso_grp : 10;
- uint64_t sso_tt : 2;
- uint64_t pb_caching : 2;
- uint64_t wqe_caching : 1;
- uint64_t xqe_drop_ena : 1;
- uint64_t spb_drop_ena : 1;
- uint64_t lpb_drop_ena : 1;
- uint64_t wqe_skip : 2;
- uint64_t rsvd_127_124 : 4;
- uint64_t rsvd_139_128 : 12;
- uint64_t spb_sizem1 : 6;
- uint64_t rsvd_150_146 : 5;
- uint64_t spb_ena : 1;
- uint64_t lpb_sizem1 : 12;
- uint64_t first_skip : 7;
- uint64_t rsvd_171 : 1;
- uint64_t later_skip : 6;
- uint64_t xqe_imm_size : 6;
- uint64_t rsvd_189_184 : 6;
- uint64_t xqe_imm_copy : 1;
- uint64_t xqe_hdr_split : 1;
- uint64_t xqe_drop : 8;
- uint64_t xqe_pass : 8;
- uint64_t wqe_pool_drop : 8;
- uint64_t wqe_pool_pass : 8;
- uint64_t spb_aura_drop : 8;
- uint64_t spb_aura_pass : 8;
- uint64_t spb_pool_drop : 8;
- uint64_t spb_pool_pass : 8;
- uint64_t lpb_aura_drop : 8;
- uint64_t lpb_aura_pass : 8;
- uint64_t lpb_pool_drop : 8;
- uint64_t lpb_pool_pass : 8;
- uint64_t rsvd_319_288 : 32;
- uint64_t ltag : 24;
- uint64_t good_utag : 8;
- uint64_t bad_utag : 8;
- uint64_t flow_tagw : 6;
- uint64_t rsvd_383_366 : 18;
- uint64_t octs : 48;
- uint64_t rsvd_447_432 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_511_496 : 16;
- uint64_t drop_octs : 48;
- uint64_t rsvd_575_560 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_639_624 : 16;
- uint64_t re_pkts : 48;
- uint64_t rsvd_702_688 : 15;
- uint64_t ena_copy : 1;
- uint64_t rsvd_739_704 : 36;
- uint64_t rq_int : 8;
- uint64_t rq_int_ena : 8;
- uint64_t qint_idx : 7;
- uint64_t rsvd_767_763 : 5;
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive queue context structure */
-struct nix_rq_ctx_s {
- uint64_t ena : 1;
- uint64_t sso_ena : 1;
- uint64_t ipsech_ena : 1;
- uint64_t ena_wqwd : 1;
- uint64_t cq : 20;
- uint64_t substream : 20;
- uint64_t wqe_aura : 20;
- uint64_t spb_aura : 20;
- uint64_t lpb_aura : 20;
- uint64_t sso_grp : 10;
- uint64_t sso_tt : 2;
- uint64_t pb_caching : 2;
- uint64_t wqe_caching : 1;
- uint64_t xqe_drop_ena : 1;
- uint64_t spb_drop_ena : 1;
- uint64_t lpb_drop_ena : 1;
- uint64_t rsvd_127_122 : 6;
- uint64_t rsvd_139_128 : 12;
- uint64_t spb_sizem1 : 6;
- uint64_t wqe_skip : 2;
- uint64_t rsvd_150_148 : 3;
- uint64_t spb_ena : 1;
- uint64_t lpb_sizem1 : 12;
- uint64_t first_skip : 7;
- uint64_t rsvd_171 : 1;
- uint64_t later_skip : 6;
- uint64_t xqe_imm_size : 6;
- uint64_t rsvd_189_184 : 6;
- uint64_t xqe_imm_copy : 1;
- uint64_t xqe_hdr_split : 1;
- uint64_t xqe_drop : 8;
- uint64_t xqe_pass : 8;
- uint64_t wqe_pool_drop : 8;
- uint64_t wqe_pool_pass : 8;
- uint64_t spb_aura_drop : 8;
- uint64_t spb_aura_pass : 8;
- uint64_t spb_pool_drop : 8;
- uint64_t spb_pool_pass : 8;
- uint64_t lpb_aura_drop : 8;
- uint64_t lpb_aura_pass : 8;
- uint64_t lpb_pool_drop : 8;
- uint64_t lpb_pool_pass : 8;
- uint64_t rsvd_291_288 : 4;
- uint64_t rq_int : 8;
- uint64_t rq_int_ena : 8;
- uint64_t qint_idx : 7;
- uint64_t rsvd_319_315 : 5;
- uint64_t ltag : 24;
- uint64_t good_utag : 8;
- uint64_t bad_utag : 8;
- uint64_t flow_tagw : 6;
- uint64_t rsvd_383_366 : 18;
- uint64_t octs : 48;
- uint64_t rsvd_447_432 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_511_496 : 16;
- uint64_t drop_octs : 48;
- uint64_t rsvd_575_560 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_639_624 : 16;
- uint64_t re_pkts : 48;
- uint64_t rsvd_703_688 : 16;
- uint64_t rsvd_767_704 : 64;/* W11 */
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NIX receive side scaling entry structure */
-struct nix_rsse_s {
- uint32_t rq : 20;
- uint32_t rsvd_31_20 : 12;
-};
-
-/* NIX receive action structure */
-struct nix_rx_action_s {
- uint64_t op : 4;
- uint64_t pf_func : 16;
- uint64_t index : 20;
- uint64_t match_id : 16;
- uint64_t flow_key_alg : 5;
- uint64_t rsvd_63_61 : 3;
-};
-
-/* NIX receive immediate sub descriptor structure */
-struct nix_rx_imm_s {
- uint64_t size : 16;
- uint64_t apad : 3;
- uint64_t rsvd_59_19 : 41;
- uint64_t subdc : 4;
-};
-
-/* NIX receive multicast/mirror entry structure */
-struct nix_rx_mce_s {
- uint64_t op : 2;
- uint64_t rsvd_2 : 1;
- uint64_t eol : 1;
- uint64_t index : 20;
- uint64_t rsvd_31_24 : 8;
- uint64_t pf_func : 16;
- uint64_t next : 16;
-};
-
-/* NIX receive parse structure */
-struct nix_rx_parse_s {
- uint64_t chan : 12;
- uint64_t desc_sizem1 : 5;
- uint64_t imm_copy : 1;
- uint64_t express : 1;
- uint64_t wqwd : 1;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t latype : 4;
- uint64_t lbtype : 4;
- uint64_t lctype : 4;
- uint64_t ldtype : 4;
- uint64_t letype : 4;
- uint64_t lftype : 4;
- uint64_t lgtype : 4;
- uint64_t lhtype : 4;
- uint64_t pkt_lenm1 : 16;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t vtag0_valid : 1;
- uint64_t vtag0_gone : 1;
- uint64_t vtag1_valid : 1;
- uint64_t vtag1_gone : 1;
- uint64_t pkind : 6;
- uint64_t rsvd_95_94 : 2;
- uint64_t vtag0_tci : 16;
- uint64_t vtag1_tci : 16;
- uint64_t laflags : 8;
- uint64_t lbflags : 8;
- uint64_t lcflags : 8;
- uint64_t ldflags : 8;
- uint64_t leflags : 8;
- uint64_t lfflags : 8;
- uint64_t lgflags : 8;
- uint64_t lhflags : 8;
- uint64_t eoh_ptr : 8;
- uint64_t wqe_aura : 20;
- uint64_t pb_aura : 20;
- uint64_t match_id : 16;
- uint64_t laptr : 8;
- uint64_t lbptr : 8;
- uint64_t lcptr : 8;
- uint64_t ldptr : 8;
- uint64_t leptr : 8;
- uint64_t lfptr : 8;
- uint64_t lgptr : 8;
- uint64_t lhptr : 8;
- uint64_t vtag0_ptr : 8;
- uint64_t vtag1_ptr : 8;
- uint64_t flow_key_alg : 5;
- uint64_t rsvd_383_341 : 43;
- uint64_t rsvd_447_384 : 64; /* W6 */
-};
-
-/* NIX receive scatter/gather sub descriptor structure */
-struct nix_rx_sg_s {
- uint64_t seg1_size : 16;
- uint64_t seg2_size : 16;
- uint64_t seg3_size : 16;
- uint64_t segs : 2;
- uint64_t rsvd_59_50 : 10;
- uint64_t subdc : 4;
-};
-
-/* NIX receive vtag action structure */
-struct nix_rx_vtag_action_s {
- uint64_t vtag0_relptr : 8;
- uint64_t vtag0_lid : 3;
- uint64_t rsvd_11 : 1;
- uint64_t vtag0_type : 3;
- uint64_t vtag0_valid : 1;
- uint64_t rsvd_31_16 : 16;
- uint64_t vtag1_relptr : 8;
- uint64_t vtag1_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t vtag1_type : 3;
- uint64_t vtag1_valid : 1;
- uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX send completion structure */
-struct nix_send_comp_s {
- uint64_t status : 8;
- uint64_t sqe_id : 16;
- uint64_t rsvd_63_24 : 40;
-};
-
-/* NIX send CRC sub descriptor structure */
-struct nix_send_crc_s {
- uint64_t size : 16;
- uint64_t start : 16;
- uint64_t insert : 16;
- uint64_t rsvd_57_48 : 10;
- uint64_t alg : 2;
- uint64_t subdc : 4;
- uint64_t iv : 32;
- uint64_t rsvd_127_96 : 32;
-};
-
-/* NIX send extended header sub descriptor structure */
-RTE_STD_C11
-union nix_send_ext_w0_u {
- uint64_t u;
- struct {
- uint64_t lso_mps : 14;
- uint64_t lso : 1;
- uint64_t tstmp : 1;
- uint64_t lso_sb : 8;
- uint64_t lso_format : 5;
- uint64_t rsvd_31_29 : 3;
- uint64_t shp_chg : 9;
- uint64_t shp_dis : 1;
- uint64_t shp_ra : 2;
- uint64_t markptr : 8;
- uint64_t markform : 7;
- uint64_t mark_en : 1;
- uint64_t subdc : 4;
- };
-};
-
-RTE_STD_C11
-union nix_send_ext_w1_u {
- uint64_t u;
- struct {
- uint64_t vlan0_ins_ptr : 8;
- uint64_t vlan0_ins_tci : 16;
- uint64_t vlan1_ins_ptr : 8;
- uint64_t vlan1_ins_tci : 16;
- uint64_t vlan0_ins_ena : 1;
- uint64_t vlan1_ins_ena : 1;
- uint64_t rsvd_127_114 : 14;
- };
-};
-
-struct nix_send_ext_s {
- union nix_send_ext_w0_u w0;
- union nix_send_ext_w1_u w1;
-};
-
-/* NIX send header sub descriptor structure */
-RTE_STD_C11
-union nix_send_hdr_w0_u {
- uint64_t u;
- struct {
- uint64_t total : 18;
- uint64_t rsvd_18 : 1;
- uint64_t df : 1;
- uint64_t aura : 20;
- uint64_t sizem1 : 3;
- uint64_t pnc : 1;
- uint64_t sq : 20;
- };
-};
-
-RTE_STD_C11
-union nix_send_hdr_w1_u {
- uint64_t u;
- struct {
- uint64_t ol3ptr : 8;
- uint64_t ol4ptr : 8;
- uint64_t il3ptr : 8;
- uint64_t il4ptr : 8;
- uint64_t ol3type : 4;
- uint64_t ol4type : 4;
- uint64_t il3type : 4;
- uint64_t il4type : 4;
- uint64_t sqe_id : 16;
- };
-};
-
-struct nix_send_hdr_s {
- union nix_send_hdr_w0_u w0;
- union nix_send_hdr_w1_u w1;
-};
-
-/* NIX send immediate sub descriptor structure */
-struct nix_send_imm_s {
- uint64_t size : 16;
- uint64_t apad : 3;
- uint64_t rsvd_59_19 : 41;
- uint64_t subdc : 4;
-};
-
-/* NIX send jump sub descriptor structure */
-struct nix_send_jump_s {
- uint64_t sizem1 : 7;
- uint64_t rsvd_13_7 : 7;
- uint64_t ld_type : 2;
- uint64_t aura : 20;
- uint64_t rsvd_58_36 : 23;
- uint64_t f : 1;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send memory sub descriptor structure */
-struct nix_send_mem_s {
- uint64_t offset : 16;
- uint64_t rsvd_52_16 : 37;
- uint64_t wmem : 1;
- uint64_t dsz : 2;
- uint64_t alg : 4;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX send scatter/gather sub descriptor structure */
-RTE_STD_C11
-union nix_send_sg_s {
- uint64_t u;
- struct {
- uint64_t seg1_size : 16;
- uint64_t seg2_size : 16;
- uint64_t seg3_size : 16;
- uint64_t segs : 2;
- uint64_t rsvd_54_50 : 5;
- uint64_t i1 : 1;
- uint64_t i2 : 1;
- uint64_t i3 : 1;
- uint64_t ld_type : 2;
- uint64_t subdc : 4;
- };
-};
-
-/* NIX send work sub descriptor structure */
-struct nix_send_work_s {
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t rsvd_59_44 : 16;
- uint64_t subdc : 4;
- uint64_t addr : 64; /* W1 */
-};
-
-/* NIX sq context hardware structure */
-struct nix_sq_ctx_hw_s {
- uint64_t ena : 1;
- uint64_t substream : 20;
- uint64_t max_sqe_size : 2;
- uint64_t sqe_way_mask : 16;
- uint64_t sqb_aura : 20;
- uint64_t gbl_rsvd1 : 5;
- uint64_t cq_id : 20;
- uint64_t cq_ena : 1;
- uint64_t qint_idx : 6;
- uint64_t gbl_rsvd2 : 1;
- uint64_t sq_int : 8;
- uint64_t sq_int_ena : 8;
- uint64_t xoff : 1;
- uint64_t sqe_stype : 2;
- uint64_t gbl_rsvd : 17;
- uint64_t head_sqb : 64;/* W2 */
- uint64_t head_offset : 6;
- uint64_t sqb_dequeue_count : 16;
- uint64_t default_chan : 12;
- uint64_t sdp_mcast : 1;
- uint64_t sso_ena : 1;
- uint64_t dse_rsvd1 : 28;
- uint64_t sqb_enqueue_count : 16;
- uint64_t tail_offset : 6;
- uint64_t lmt_dis : 1;
- uint64_t smq_rr_quantum : 24;
- uint64_t dnq_rsvd1 : 17;
- uint64_t tail_sqb : 64;/* W5 */
- uint64_t next_sqb : 64;/* W6 */
- uint64_t mnq_dis : 1;
- uint64_t smq : 9;
- uint64_t smq_pend : 1;
- uint64_t smq_next_sq : 20;
- uint64_t smq_next_sq_vld : 1;
- uint64_t scm1_rsvd2 : 32;
- uint64_t smenq_sqb : 64;/* W8 */
- uint64_t smenq_offset : 6;
- uint64_t cq_limit : 8;
- uint64_t smq_rr_count : 25;
- uint64_t scm_lso_rem : 18;
- uint64_t scm_dq_rsvd0 : 7;
- uint64_t smq_lso_segnum : 8;
- uint64_t vfi_lso_total : 18;
- uint64_t vfi_lso_sizem1 : 3;
- uint64_t vfi_lso_sb : 8;
- uint64_t vfi_lso_mps : 14;
- uint64_t vfi_lso_vlan0_ins_ena : 1;
- uint64_t vfi_lso_vlan1_ins_ena : 1;
- uint64_t vfi_lso_vld : 1;
- uint64_t smenq_next_sqb_vld : 1;
- uint64_t scm_dq_rsvd1 : 9;
- uint64_t smenq_next_sqb : 64;/* W11 */
- uint64_t seb_rsvd1 : 64;/* W12 */
- uint64_t drop_pkts : 48;
- uint64_t drop_octs_lsw : 16;
- uint64_t drop_octs_msw : 32;
- uint64_t pkts_lsw : 32;
- uint64_t pkts_msw : 16;
- uint64_t octs : 48;
-};
-
-/* NIX send queue context structure */
-struct nix_sq_ctx_s {
- uint64_t ena : 1;
- uint64_t qint_idx : 6;
- uint64_t substream : 20;
- uint64_t sdp_mcast : 1;
- uint64_t cq : 20;
- uint64_t sqe_way_mask : 16;
- uint64_t smq : 9;
- uint64_t cq_ena : 1;
- uint64_t xoff : 1;
- uint64_t sso_ena : 1;
- uint64_t smq_rr_quantum : 24;
- uint64_t default_chan : 12;
- uint64_t sqb_count : 16;
- uint64_t smq_rr_count : 25;
- uint64_t sqb_aura : 20;
- uint64_t sq_int : 8;
- uint64_t sq_int_ena : 8;
- uint64_t sqe_stype : 2;
- uint64_t rsvd_191 : 1;
- uint64_t max_sqe_size : 2;
- uint64_t cq_limit : 8;
- uint64_t lmt_dis : 1;
- uint64_t mnq_dis : 1;
- uint64_t smq_next_sq : 20;
- uint64_t smq_lso_segnum : 8;
- uint64_t tail_offset : 6;
- uint64_t smenq_offset : 6;
- uint64_t head_offset : 6;
- uint64_t smenq_next_sqb_vld : 1;
- uint64_t smq_pend : 1;
- uint64_t smq_next_sq_vld : 1;
- uint64_t rsvd_255_253 : 3;
- uint64_t next_sqb : 64;/* W4 */
- uint64_t tail_sqb : 64;/* W5 */
- uint64_t smenq_sqb : 64;/* W6 */
- uint64_t smenq_next_sqb : 64;/* W7 */
- uint64_t head_sqb : 64;/* W8 */
- uint64_t rsvd_583_576 : 8;
- uint64_t vfi_lso_total : 18;
- uint64_t vfi_lso_sizem1 : 3;
- uint64_t vfi_lso_sb : 8;
- uint64_t vfi_lso_mps : 14;
- uint64_t vfi_lso_vlan0_ins_ena : 1;
- uint64_t vfi_lso_vlan1_ins_ena : 1;
- uint64_t vfi_lso_vld : 1;
- uint64_t rsvd_639_630 : 10;
- uint64_t scm_lso_rem : 18;
- uint64_t rsvd_703_658 : 46;
- uint64_t octs : 48;
- uint64_t rsvd_767_752 : 16;
- uint64_t pkts : 48;
- uint64_t rsvd_831_816 : 16;
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t drop_octs : 48;
- uint64_t rsvd_959_944 : 16;
- uint64_t drop_pkts : 48;
- uint64_t rsvd_1023_1008 : 16;
-};
-
-/* NIX transmit action structure */
-struct nix_tx_action_s {
- uint64_t op : 4;
- uint64_t rsvd_11_4 : 8;
- uint64_t index : 20;
- uint64_t match_id : 16;
- uint64_t rsvd_63_48 : 16;
-};
-
-/* NIX transmit vtag action structure */
-struct nix_tx_vtag_action_s {
- uint64_t vtag0_relptr : 8;
- uint64_t vtag0_lid : 3;
- uint64_t rsvd_11 : 1;
- uint64_t vtag0_op : 2;
- uint64_t rsvd_15_14 : 2;
- uint64_t vtag0_def : 10;
- uint64_t rsvd_31_26 : 6;
- uint64_t vtag1_relptr : 8;
- uint64_t vtag1_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t vtag1_op : 2;
- uint64_t rsvd_47_46 : 2;
- uint64_t vtag1_def : 10;
- uint64_t rsvd_63_58 : 6;
-};
-
-/* NIX work queue entry header structure */
-struct nix_wqe_hdr_s {
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t node : 2;
- uint64_t q : 14;
- uint64_t wqe_type : 4;
-};
-
-/* NIX Rx flow key algorithm field structure */
-struct nix_rx_flowkey_alg {
- uint64_t key_offset :6;
- uint64_t ln_mask :1;
- uint64_t fn_mask :1;
- uint64_t hdr_offset :8;
- uint64_t bytesm1 :5;
- uint64_t lid :3;
- uint64_t reserved_24_24 :1;
- uint64_t ena :1;
- uint64_t sel_chan :1;
- uint64_t ltype_mask :4;
- uint64_t ltype_match :4;
- uint64_t reserved_35_63 :29;
-};
-
-/* NIX LSO format field structure */
-struct nix_lso_format {
- uint64_t offset : 8;
- uint64_t layer : 2;
- uint64_t rsvd_10_11 : 2;
- uint64_t sizem1 : 2;
- uint64_t rsvd_14_15 : 2;
- uint64_t alg : 3;
- uint64_t rsvd_19_63 : 45;
-};
-
-#define NIX_LSO_FIELD_MAX (8)
-#define NIX_LSO_FIELD_ALG_MASK GENMASK(18, 16)
-#define NIX_LSO_FIELD_SZ_MASK GENMASK(13, 12)
-#define NIX_LSO_FIELD_LY_MASK GENMASK(9, 8)
-#define NIX_LSO_FIELD_OFF_MASK GENMASK(7, 0)
-
-#define NIX_LSO_FIELD_MASK \
- (NIX_LSO_FIELD_OFF_MASK | \
- NIX_LSO_FIELD_LY_MASK | \
- NIX_LSO_FIELD_SZ_MASK | \
- NIX_LSO_FIELD_ALG_MASK)
-
-#endif /* __OTX2_NIX_HW_H__ */
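The parameterized register macros in the header above all follow the same
addressing scheme: a fixed base offset OR'd with the resource index shifted into
the upper address bits, so consecutive indices land one hardware stride apart.
A minimal standalone sketch of how one such macro expands (the macro is copied
from the deleted header; the ``main`` harness is purely illustrative):

.. code-block:: c

   #include <inttypes.h>
   #include <stdio.h>

   /* Copied from the deleted header: TL2 scheduler register for queue 'a';
    * base offset 0xe00 with the queue index placed at bit 16.
    */
   #define NIX_AF_TL2X_SCHEDULE(a) (0xe00ull | (uint64_t)(a) << 16)

   int main(void)
   {
           /* Consecutive queues are spaced 1 << 16 = 0x10000 bytes apart. */
           for (uint64_t q = 0; q < 4; q++)
                   printf("TL2[%" PRIu64 "] schedule offset: 0x%" PRIx64 "\n",
                          q, (uint64_t)NIX_AF_TL2X_SCHEDULE(q));
           return 0;
   }

Running this prints offsets 0xe00, 0x10e00, 0x20e00 and 0x30e00, matching the
stride implied by the 16-bit shift.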
diff --git a/drivers/common/octeontx2/hw/otx2_npa.h b/drivers/common/octeontx2/hw/otx2_npa.h
deleted file mode 100644
index 2224216c96..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npa.h
+++ /dev/null
@@ -1,305 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPA_HW_H__
-#define __OTX2_NPA_HW_H__
-
-/* Register offsets */
-
-#define NPA_AF_BLK_RST (0x0ull)
-#define NPA_AF_CONST (0x10ull)
-#define NPA_AF_CONST1 (0x18ull)
-#define NPA_AF_LF_RST (0x20ull)
-#define NPA_AF_GEN_CFG (0x30ull)
-#define NPA_AF_NDC_CFG (0x40ull)
-#define NPA_AF_NDC_SYNC (0x50ull)
-#define NPA_AF_INP_CTL (0xd0ull)
-#define NPA_AF_ACTIVE_CYCLES_PC (0xf0ull)
-#define NPA_AF_AVG_DELAY (0x100ull)
-#define NPA_AF_GEN_INT (0x140ull)
-#define NPA_AF_GEN_INT_W1S (0x148ull)
-#define NPA_AF_GEN_INT_ENA_W1S (0x150ull)
-#define NPA_AF_GEN_INT_ENA_W1C (0x158ull)
-#define NPA_AF_RVU_INT (0x160ull)
-#define NPA_AF_RVU_INT_W1S (0x168ull)
-#define NPA_AF_RVU_INT_ENA_W1S (0x170ull)
-#define NPA_AF_RVU_INT_ENA_W1C (0x178ull)
-#define NPA_AF_ERR_INT (0x180ull)
-#define NPA_AF_ERR_INT_W1S (0x188ull)
-#define NPA_AF_ERR_INT_ENA_W1S (0x190ull)
-#define NPA_AF_ERR_INT_ENA_W1C (0x198ull)
-#define NPA_AF_RAS (0x1a0ull)
-#define NPA_AF_RAS_W1S (0x1a8ull)
-#define NPA_AF_RAS_ENA_W1S (0x1b0ull)
-#define NPA_AF_RAS_ENA_W1C (0x1b8ull)
-#define NPA_AF_AQ_CFG (0x600ull)
-#define NPA_AF_AQ_BASE (0x610ull)
-#define NPA_AF_AQ_STATUS (0x620ull)
-#define NPA_AF_AQ_DOOR (0x630ull)
-#define NPA_AF_AQ_DONE_WAIT (0x640ull)
-#define NPA_AF_AQ_DONE (0x650ull)
-#define NPA_AF_AQ_DONE_ACK (0x660ull)
-#define NPA_AF_AQ_DONE_TIMER (0x670ull)
-#define NPA_AF_AQ_DONE_INT (0x680ull)
-#define NPA_AF_AQ_DONE_ENA_W1S (0x690ull)
-#define NPA_AF_AQ_DONE_ENA_W1C (0x698ull)
-#define NPA_AF_LFX_AURAS_CFG(a) (0x4000ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_LOC_AURAS_BASE(a) (0x4010ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_CFG(a) (0x4100ull | (uint64_t)(a) << 18)
-#define NPA_AF_LFX_QINTS_BASE(a) (0x4110ull | (uint64_t)(a) << 18)
-#define NPA_PRIV_AF_INT_CFG (0x10000ull)
-#define NPA_PRIV_LFX_CFG(a) (0x10010ull | (uint64_t)(a) << 8)
-#define NPA_PRIV_LFX_INT_CFG(a) (0x10020ull | (uint64_t)(a) << 8)
-#define NPA_AF_RVU_LF_CFG_DEBUG (0x10030ull)
-#define NPA_AF_DTX_FILTER_CTL (0x10040ull)
-
-#define NPA_LF_AURA_OP_ALLOCX(a) (0x10ull | (uint64_t)(a) << 3)
-#define NPA_LF_AURA_OP_FREE0 (0x20ull)
-#define NPA_LF_AURA_OP_FREE1 (0x28ull)
-#define NPA_LF_AURA_OP_CNT (0x30ull)
-#define NPA_LF_AURA_OP_LIMIT (0x50ull)
-#define NPA_LF_AURA_OP_INT (0x60ull)
-#define NPA_LF_AURA_OP_THRESH (0x70ull)
-#define NPA_LF_POOL_OP_PC (0x100ull)
-#define NPA_LF_POOL_OP_AVAILABLE (0x110ull)
-#define NPA_LF_POOL_OP_PTR_START0 (0x120ull)
-#define NPA_LF_POOL_OP_PTR_START1 (0x128ull)
-#define NPA_LF_POOL_OP_PTR_END0 (0x130ull)
-#define NPA_LF_POOL_OP_PTR_END1 (0x138ull)
-#define NPA_LF_POOL_OP_INT (0x160ull)
-#define NPA_LF_POOL_OP_THRESH (0x170ull)
-#define NPA_LF_ERR_INT (0x200ull)
-#define NPA_LF_ERR_INT_W1S (0x208ull)
-#define NPA_LF_ERR_INT_ENA_W1C (0x210ull)
-#define NPA_LF_ERR_INT_ENA_W1S (0x218ull)
-#define NPA_LF_RAS (0x220ull)
-#define NPA_LF_RAS_W1S (0x228ull)
-#define NPA_LF_RAS_ENA_W1C (0x230ull)
-#define NPA_LF_RAS_ENA_W1S (0x238ull)
-#define NPA_LF_QINTX_CNT(a) (0x300ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_INT(a) (0x310ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1S(a) (0x320ull | (uint64_t)(a) << 12)
-#define NPA_LF_QINTX_ENA_W1C(a) (0x330ull | (uint64_t)(a) << 12)
-
-
-/* Enum offsets */
-
-#define NPA_AQ_COMP_NOTDONE (0x0ull)
-#define NPA_AQ_COMP_GOOD (0x1ull)
-#define NPA_AQ_COMP_SWERR (0x2ull)
-#define NPA_AQ_COMP_CTX_POISON (0x3ull)
-#define NPA_AQ_COMP_CTX_FAULT (0x4ull)
-#define NPA_AQ_COMP_LOCKERR (0x5ull)
-
-#define NPA_AF_INT_VEC_RVU (0x0ull)
-#define NPA_AF_INT_VEC_GEN (0x1ull)
-#define NPA_AF_INT_VEC_AQ_DONE (0x2ull)
-#define NPA_AF_INT_VEC_AF_ERR (0x3ull)
-#define NPA_AF_INT_VEC_POISON (0x4ull)
-
-#define NPA_AQ_INSTOP_NOP (0x0ull)
-#define NPA_AQ_INSTOP_INIT (0x1ull)
-#define NPA_AQ_INSTOP_WRITE (0x2ull)
-#define NPA_AQ_INSTOP_READ (0x3ull)
-#define NPA_AQ_INSTOP_LOCK (0x4ull)
-#define NPA_AQ_INSTOP_UNLOCK (0x5ull)
-
-#define NPA_AQ_CTYPE_AURA (0x0ull)
-#define NPA_AQ_CTYPE_POOL (0x1ull)
-
-#define NPA_BPINTF_NIX0_RX (0x0ull)
-#define NPA_BPINTF_NIX1_RX (0x1ull)
-
-#define NPA_AURA_ERR_INT_AURA_FREE_UNDER (0x0ull)
-#define NPA_AURA_ERR_INT_AURA_ADD_OVER (0x1ull)
-#define NPA_AURA_ERR_INT_AURA_ADD_UNDER (0x2ull)
-#define NPA_AURA_ERR_INT_POOL_DIS (0x3ull)
-#define NPA_AURA_ERR_INT_R4 (0x4ull)
-#define NPA_AURA_ERR_INT_R5 (0x5ull)
-#define NPA_AURA_ERR_INT_R6 (0x6ull)
-#define NPA_AURA_ERR_INT_R7 (0x7ull)
-
-#define NPA_LF_INT_VEC_ERR_INT (0x40ull)
-#define NPA_LF_INT_VEC_POISON (0x41ull)
-#define NPA_LF_INT_VEC_QINT_END (0x3full)
-#define NPA_LF_INT_VEC_QINT_START (0x0ull)
-
-#define NPA_INPQ_SSO (0x4ull)
-#define NPA_INPQ_TIM (0x5ull)
-#define NPA_INPQ_DPI (0x6ull)
-#define NPA_INPQ_AURA_OP (0xeull)
-#define NPA_INPQ_INTERNAL_RSV (0xfull)
-#define NPA_INPQ_NIX0_RX (0x0ull)
-#define NPA_INPQ_NIX1_RX (0x2ull)
-#define NPA_INPQ_NIX0_TX (0x1ull)
-#define NPA_INPQ_NIX1_TX (0x3ull)
-#define NPA_INPQ_R_END (0xdull)
-#define NPA_INPQ_R_START (0x7ull)
-
-#define NPA_POOL_ERR_INT_OVFLS (0x0ull)
-#define NPA_POOL_ERR_INT_RANGE (0x1ull)
-#define NPA_POOL_ERR_INT_PERR (0x2ull)
-#define NPA_POOL_ERR_INT_R3 (0x3ull)
-#define NPA_POOL_ERR_INT_R4 (0x4ull)
-#define NPA_POOL_ERR_INT_R5 (0x5ull)
-#define NPA_POOL_ERR_INT_R6 (0x6ull)
-#define NPA_POOL_ERR_INT_R7 (0x7ull)
-
-#define NPA_NDC0_PORT_AURA0 (0x0ull)
-#define NPA_NDC0_PORT_AURA1 (0x1ull)
-#define NPA_NDC0_PORT_POOL0 (0x2ull)
-#define NPA_NDC0_PORT_POOL1 (0x3ull)
-#define NPA_NDC0_PORT_STACK0 (0x4ull)
-#define NPA_NDC0_PORT_STACK1 (0x5ull)
-
-#define NPA_LF_ERR_INT_AURA_DIS (0x0ull)
-#define NPA_LF_ERR_INT_AURA_OOR (0x1ull)
-#define NPA_LF_ERR_INT_AURA_FAULT (0xcull)
-#define NPA_LF_ERR_INT_POOL_FAULT (0xdull)
-#define NPA_LF_ERR_INT_STACK_FAULT (0xeull)
-#define NPA_LF_ERR_INT_QINT_FAULT (0xfull)
-
-/* Structure definitions */
-
-/* NPA admin queue instruction structure */
-struct npa_aq_inst_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t lf : 9;
- uint64_t rsvd_23_17 : 7;
- uint64_t cindex : 20;
- uint64_t rsvd_62_44 : 19;
- uint64_t doneint : 1;
- uint64_t res_addr : 64; /* W1 */
-};
-
-/* NPA admin queue result structure */
-struct npa_aq_res_s {
- uint64_t op : 4;
- uint64_t ctype : 4;
- uint64_t compcode : 8;
- uint64_t doneint : 1;
- uint64_t rsvd_63_17 : 47;
- uint64_t rsvd_127_64 : 64; /* W1 */
-};
-
-/* NPA aura operation write data structure */
-struct npa_aura_op_wdata_s {
- uint64_t aura : 20;
- uint64_t rsvd_62_20 : 43;
- uint64_t drop : 1;
-};
-
-/* NPA aura context structure */
-struct npa_aura_s {
- uint64_t pool_addr : 64;/* W0 */
- uint64_t ena : 1;
- uint64_t rsvd_66_65 : 2;
- uint64_t pool_caching : 1;
- uint64_t pool_way_mask : 16;
- uint64_t avg_con : 9;
- uint64_t rsvd_93 : 1;
- uint64_t pool_drop_ena : 1;
- uint64_t aura_drop_ena : 1;
- uint64_t bp_ena : 2;
- uint64_t rsvd_103_98 : 6;
- uint64_t aura_drop : 8;
- uint64_t shift : 6;
- uint64_t rsvd_119_118 : 2;
- uint64_t avg_level : 8;
- uint64_t count : 36;
- uint64_t rsvd_167_164 : 4;
- uint64_t nix0_bpid : 9;
- uint64_t rsvd_179_177 : 3;
- uint64_t nix1_bpid : 9;
- uint64_t rsvd_191_189 : 3;
- uint64_t limit : 36;
- uint64_t rsvd_231_228 : 4;
- uint64_t bp : 8;
- uint64_t rsvd_243_240 : 4;
- uint64_t fc_ena : 1;
- uint64_t fc_up_crossing : 1;
- uint64_t fc_stype : 2;
- uint64_t fc_hyst_bits : 4;
- uint64_t rsvd_255_252 : 4;
- uint64_t fc_addr : 64;/* W4 */
- uint64_t pool_drop : 8;
- uint64_t update_time : 16;
- uint64_t err_int : 8;
- uint64_t err_int_ena : 8;
- uint64_t thresh_int : 1;
- uint64_t thresh_int_ena : 1;
- uint64_t thresh_up : 1;
- uint64_t rsvd_363 : 1;
- uint64_t thresh_qint_idx : 7;
- uint64_t rsvd_371 : 1;
- uint64_t err_qint_idx : 7;
- uint64_t rsvd_383_379 : 5;
- uint64_t thresh : 36;
- uint64_t rsvd_447_420 : 28;
- uint64_t rsvd_511_448 : 64;/* W7 */
-};
-
-/* NPA pool context structure */
-struct npa_pool_s {
- uint64_t stack_base : 64;/* W0 */
- uint64_t ena : 1;
- uint64_t nat_align : 1;
- uint64_t rsvd_67_66 : 2;
- uint64_t stack_caching : 1;
- uint64_t rsvd_71_69 : 3;
- uint64_t stack_way_mask : 16;
- uint64_t buf_offset : 12;
- uint64_t rsvd_103_100 : 4;
- uint64_t buf_size : 11;
- uint64_t rsvd_127_115 : 13;
- uint64_t stack_max_pages : 32;
- uint64_t stack_pages : 32;
- uint64_t op_pc : 48;
- uint64_t rsvd_255_240 : 16;
- uint64_t stack_offset : 4;
- uint64_t rsvd_263_260 : 4;
- uint64_t shift : 6;
- uint64_t rsvd_271_270 : 2;
- uint64_t avg_level : 8;
- uint64_t avg_con : 9;
- uint64_t fc_ena : 1;
- uint64_t fc_stype : 2;
- uint64_t fc_hyst_bits : 4;
- uint64_t fc_up_crossing : 1;
- uint64_t rsvd_299_297 : 3;
- uint64_t update_time : 16;
- uint64_t rsvd_319_316 : 4;
- uint64_t fc_addr : 64;/* W5 */
- uint64_t ptr_start : 64;/* W6 */
- uint64_t ptr_end : 64;/* W7 */
- uint64_t rsvd_535_512 : 24;
- uint64_t err_int : 8;
- uint64_t err_int_ena : 8;
- uint64_t thresh_int : 1;
- uint64_t thresh_int_ena : 1;
- uint64_t thresh_up : 1;
- uint64_t rsvd_555 : 1;
- uint64_t thresh_qint_idx : 7;
- uint64_t rsvd_563 : 1;
- uint64_t err_qint_idx : 7;
- uint64_t rsvd_575_571 : 5;
- uint64_t thresh : 36;
- uint64_t rsvd_639_612 : 28;
- uint64_t rsvd_703_640 : 64;/* W10 */
- uint64_t rsvd_767_704 : 64;/* W11 */
- uint64_t rsvd_831_768 : 64;/* W12 */
- uint64_t rsvd_895_832 : 64;/* W13 */
- uint64_t rsvd_959_896 : 64;/* W14 */
- uint64_t rsvd_1023_960 : 64;/* W15 */
-};
-
-/* NPA queue interrupt context hardware structure */
-struct npa_qint_hw_s {
- uint32_t count : 22;
- uint32_t rsvd_30_22 : 9;
- uint32_t ena : 1;
-};
-
-#endif /* __OTX2_NPA_HW_H__ */
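The per-LF macros above (NPA_AF_LFX_*) all encode the LF slot in bits 18 and
up, i.e. each LF's register block sits 256 KB apart in BAR0. A small sketch
that only computes offsets (no hardware access; the macro is restated so the
snippet compiles standalone):

    #include <inttypes.h>
    #include <stdio.h>

    #define NPA_AF_LFX_AURAS_CFG(a) (0x4000ull | (uint64_t)(a) << 18)

    int main(void)
    {
            /* LF slot 2: 0x4000 | (2 << 18) == 0x84000 */
            printf("NPA_AF_LFX_AURAS_CFG(2) = 0x%" PRIx64 "\n",
                   NPA_AF_LFX_AURAS_CFG(2));
            return 0;
    }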
diff --git a/drivers/common/octeontx2/hw/otx2_npc.h b/drivers/common/octeontx2/hw/otx2_npc.h
deleted file mode 100644
index b4e3c1eedc..0000000000
--- a/drivers/common/octeontx2/hw/otx2_npc.h
+++ /dev/null
@@ -1,503 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_NPC_HW_H__
-#define __OTX2_NPC_HW_H__
-
-/* Register offsets */
-
-#define NPC_AF_CFG (0x0ull)
-#define NPC_AF_ACTIVE_PC (0x10ull)
-#define NPC_AF_CONST (0x20ull)
-#define NPC_AF_CONST1 (0x30ull)
-#define NPC_AF_BLK_RST (0x40ull)
-#define NPC_AF_MCAM_SCRUB_CTL (0xa0ull)
-#define NPC_AF_KCAM_SCRUB_CTL (0xb0ull)
-#define NPC_AF_KPUX_CFG(a) \
- (0x500ull | (uint64_t)(a) << 3)
-#define NPC_AF_PCK_CFG (0x600ull)
-#define NPC_AF_PCK_DEF_OL2 (0x610ull)
-#define NPC_AF_PCK_DEF_OIP4 (0x620ull)
-#define NPC_AF_PCK_DEF_OIP6 (0x630ull)
-#define NPC_AF_PCK_DEF_IIP4 (0x640ull)
-#define NPC_AF_KEX_LDATAX_FLAGS_CFG(a) \
- (0x800ull | (uint64_t)(a) << 3)
-#define NPC_AF_INTFX_KEX_CFG(a) \
- (0x1010ull | (uint64_t)(a) << 8)
-#define NPC_AF_PKINDX_ACTION0(a) \
- (0x80000ull | (uint64_t)(a) << 6)
-#define NPC_AF_PKINDX_ACTION1(a) \
- (0x80008ull | (uint64_t)(a) << 6)
-#define NPC_AF_PKINDX_CPI_DEFX(a, b) \
- (0x80020ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
-#define NPC_AF_CHLEN90B_PKIND (0x3bull)
-#define NPC_AF_KPUX_ENTRYX_CAMX(a, b, c) \
- (0x100000ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_KPUX_ENTRYX_ACTION0(a, b) \
- (0x100020ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
-#define NPC_AF_KPUX_ENTRYX_ACTION1(a, b) \
- (0x100028ull | (uint64_t)(a) << 14 | (uint64_t)(b) << 6)
-#define NPC_AF_KPUX_ENTRY_DISX(a, b) \
- (0x180000ull | (uint64_t)(a) << 6 | (uint64_t)(b) << 3)
-#define NPC_AF_CPIX_CFG(a) \
- (0x200000ull | (uint64_t)(a) << 3)
-#define NPC_AF_INTFX_LIDX_LTX_LDX_CFG(a, b, c, d) \
- (0x900000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
- (uint64_t)(c) << 5 | (uint64_t)(d) << 3)
-#define NPC_AF_INTFX_LDATAX_FLAGSX_CFG(a, b, c) \
- (0x980000ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 12 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_INTF(a, b, c) \
- (0x1000000ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \
- (0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CAMX_W1(a, b, c) \
- (0x1000020ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
- (uint64_t)(c) << 3)
-#define NPC_AF_MCAMEX_BANKX_CFG(a, b) \
- (0x1800000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MCAMEX_BANKX_STAT_ACT(a, b) \
- (0x1880000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MATCH_STATX(a) \
- (0x1880008ull | (uint64_t)(a) << 8)
-#define NPC_AF_INTFX_MISS_STAT_ACT(a) \
- (0x1880040ull + (uint64_t)(a) * 0x8)
-#define NPC_AF_MCAMEX_BANKX_ACTION(a, b) \
- (0x1900000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_MCAMEX_BANKX_TAG_ACT(a, b) \
- (0x1900008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_INTFX_MISS_ACT(a) \
- (0x1a00000ull | (uint64_t)(a) << 4)
-#define NPC_AF_INTFX_MISS_TAG_ACT(a) \
- (0x1b00008ull | (uint64_t)(a) << 4)
-#define NPC_AF_MCAM_BANKX_HITX(a, b) \
- (0x1c80000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define NPC_AF_LKUP_CTL (0x2000000ull)
-#define NPC_AF_LKUP_DATAX(a) \
- (0x2000200ull | (uint64_t)(a) << 4)
-#define NPC_AF_LKUP_RESULTX(a) \
- (0x2000400ull | (uint64_t)(a) << 4)
-#define NPC_AF_INTFX_STAT(a) \
- (0x2000800ull | (uint64_t)(a) << 4)
-#define NPC_AF_DBG_CTL (0x3000000ull)
-#define NPC_AF_DBG_STATUS (0x3000010ull)
-#define NPC_AF_KPUX_DBG(a) \
- (0x3000020ull | (uint64_t)(a) << 8)
-#define NPC_AF_IKPU_ERR_CTL (0x3000080ull)
-#define NPC_AF_KPUX_ERR_CTL(a) \
- (0x30000a0ull | (uint64_t)(a) << 8)
-#define NPC_AF_MCAM_DBG (0x3001000ull)
-#define NPC_AF_DBG_DATAX(a) \
- (0x3001400ull | (uint64_t)(a) << 4)
-#define NPC_AF_DBG_RESULTX(a) \
- (0x3001800ull | (uint64_t)(a) << 4)
-
-
-/* Enum offsets */
-
-#define NPC_INTF_NIX0_RX (0x0ull)
-#define NPC_INTF_NIX0_TX (0x1ull)
-
-#define NPC_LKUPOP_PKT (0x0ull)
-#define NPC_LKUPOP_KEY (0x1ull)
-
-#define NPC_MCAM_KEY_X1 (0x0ull)
-#define NPC_MCAM_KEY_X2 (0x1ull)
-#define NPC_MCAM_KEY_X4 (0x2ull)
-
-enum NPC_ERRLEV_E {
- NPC_ERRLEV_RE = 0,
- NPC_ERRLEV_LA = 1,
- NPC_ERRLEV_LB = 2,
- NPC_ERRLEV_LC = 3,
- NPC_ERRLEV_LD = 4,
- NPC_ERRLEV_LE = 5,
- NPC_ERRLEV_LF = 6,
- NPC_ERRLEV_LG = 7,
- NPC_ERRLEV_LH = 8,
- NPC_ERRLEV_R9 = 9,
- NPC_ERRLEV_R10 = 10,
- NPC_ERRLEV_R11 = 11,
- NPC_ERRLEV_R12 = 12,
- NPC_ERRLEV_R13 = 13,
- NPC_ERRLEV_R14 = 14,
- NPC_ERRLEV_NIX = 15,
- NPC_ERRLEV_ENUM_LAST = 16,
-};
-
-enum npc_kpu_err_code {
- NPC_EC_NOERR = 0, /* has to be zero */
- NPC_EC_UNK,
- NPC_EC_IH_LENGTH,
- NPC_EC_EDSA_UNK,
- NPC_EC_L2_K1,
- NPC_EC_L2_K2,
- NPC_EC_L2_K3,
- NPC_EC_L2_K3_ETYPE_UNK,
- NPC_EC_L2_K4,
- NPC_EC_MPLS_2MANY,
- NPC_EC_MPLS_UNK,
- NPC_EC_NSH_UNK,
- NPC_EC_IP_TTL_0,
- NPC_EC_IP_FRAG_OFFSET_1,
- NPC_EC_IP_VER,
- NPC_EC_IP6_HOP_0,
- NPC_EC_IP6_VER,
- NPC_EC_TCP_FLAGS_FIN_ONLY,
- NPC_EC_TCP_FLAGS_ZERO,
- NPC_EC_TCP_FLAGS_RST_FIN,
- NPC_EC_TCP_FLAGS_URG_SYN,
- NPC_EC_TCP_FLAGS_RST_SYN,
- NPC_EC_TCP_FLAGS_SYN_FIN,
- NPC_EC_VXLAN,
- NPC_EC_NVGRE,
- NPC_EC_GRE,
- NPC_EC_GRE_VER1,
- NPC_EC_L4,
- NPC_EC_OIP4_CSUM,
- NPC_EC_IIP4_CSUM,
- NPC_EC_LAST /* has to be the last item */
-};
-
-enum NPC_LID_E {
- NPC_LID_LA = 0,
- NPC_LID_LB,
- NPC_LID_LC,
- NPC_LID_LD,
- NPC_LID_LE,
- NPC_LID_LF,
- NPC_LID_LG,
- NPC_LID_LH,
-};
-
-#define NPC_LT_NA 0
-
-enum npc_kpu_la_ltype {
- NPC_LT_LA_8023 = 1,
- NPC_LT_LA_ETHER,
- NPC_LT_LA_IH_NIX_ETHER,
- NPC_LT_LA_IH_8_ETHER,
- NPC_LT_LA_IH_4_ETHER,
- NPC_LT_LA_IH_2_ETHER,
- NPC_LT_LA_HIGIG2_ETHER,
- NPC_LT_LA_IH_NIX_HIGIG2_ETHER,
- NPC_LT_LA_CUSTOM_L2_90B_ETHER,
- NPC_LT_LA_CPT_HDR,
- NPC_LT_LA_CUSTOM_L2_24B_ETHER,
- NPC_LT_LA_CUSTOM0 = 0xE,
- NPC_LT_LA_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lb_ltype {
- NPC_LT_LB_ETAG = 1,
- NPC_LT_LB_CTAG,
- NPC_LT_LB_STAG_QINQ,
- NPC_LT_LB_BTAG,
- NPC_LT_LB_ITAG,
- NPC_LT_LB_DSA,
- NPC_LT_LB_DSA_VLAN,
- NPC_LT_LB_EDSA,
- NPC_LT_LB_EDSA_VLAN,
- NPC_LT_LB_EXDSA,
- NPC_LT_LB_EXDSA_VLAN,
- NPC_LT_LB_FDSA,
- NPC_LT_LB_VLAN_EXDSA,
- NPC_LT_LB_CUSTOM0 = 0xE,
- NPC_LT_LB_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lc_ltype {
- NPC_LT_LC_PTP = 1,
- NPC_LT_LC_IP,
- NPC_LT_LC_IP_OPT,
- NPC_LT_LC_IP6,
- NPC_LT_LC_IP6_EXT,
- NPC_LT_LC_ARP,
- NPC_LT_LC_RARP,
- NPC_LT_LC_MPLS,
- NPC_LT_LC_NSH,
- NPC_LT_LC_FCOE,
- NPC_LT_LC_NGIO,
- NPC_LT_LC_CUSTOM0 = 0xE,
- NPC_LT_LC_CUSTOM1 = 0xF,
-};
-
-/* Don't modify Ltypes up to SCTP, otherwise it will
- * affect flow tag calculation and thus RSS.
- */
-enum npc_kpu_ld_ltype {
- NPC_LT_LD_TCP = 1,
- NPC_LT_LD_UDP,
- NPC_LT_LD_ICMP,
- NPC_LT_LD_SCTP,
- NPC_LT_LD_ICMP6,
- NPC_LT_LD_CUSTOM0,
- NPC_LT_LD_CUSTOM1,
- NPC_LT_LD_IGMP = 8,
- NPC_LT_LD_AH,
- NPC_LT_LD_GRE,
- NPC_LT_LD_NVGRE,
- NPC_LT_LD_NSH,
- NPC_LT_LD_TU_MPLS_IN_NSH,
- NPC_LT_LD_TU_MPLS_IN_IP,
-};
-
-enum npc_kpu_le_ltype {
- NPC_LT_LE_VXLAN = 1,
- NPC_LT_LE_GENEVE,
- NPC_LT_LE_ESP,
- NPC_LT_LE_GTPU = 4,
- NPC_LT_LE_VXLANGPE,
- NPC_LT_LE_GTPC,
- NPC_LT_LE_NSH,
- NPC_LT_LE_TU_MPLS_IN_GRE,
- NPC_LT_LE_TU_NSH_IN_GRE,
- NPC_LT_LE_TU_MPLS_IN_UDP,
- NPC_LT_LE_CUSTOM0 = 0xE,
- NPC_LT_LE_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lf_ltype {
- NPC_LT_LF_TU_ETHER = 1,
- NPC_LT_LF_TU_PPP,
- NPC_LT_LF_TU_MPLS_IN_VXLANGPE,
- NPC_LT_LF_TU_NSH_IN_VXLANGPE,
- NPC_LT_LF_TU_MPLS_IN_NSH,
- NPC_LT_LF_TU_3RD_NSH,
- NPC_LT_LF_CUSTOM0 = 0xE,
- NPC_LT_LF_CUSTOM1 = 0xF,
-};
-
-enum npc_kpu_lg_ltype {
- NPC_LT_LG_TU_IP = 1,
- NPC_LT_LG_TU_IP6,
- NPC_LT_LG_TU_ARP,
- NPC_LT_LG_TU_ETHER_IN_NSH,
- NPC_LT_LG_CUSTOM0 = 0xE,
- NPC_LT_LG_CUSTOM1 = 0xF,
-};
-
-/* Don't modify Ltypes up to SCTP, otherwise it will
- * affect flow tag calculation and thus RSS.
- */
-enum npc_kpu_lh_ltype {
- NPC_LT_LH_TU_TCP = 1,
- NPC_LT_LH_TU_UDP,
- NPC_LT_LH_TU_ICMP,
- NPC_LT_LH_TU_SCTP,
- NPC_LT_LH_TU_ICMP6,
- NPC_LT_LH_TU_IGMP = 8,
- NPC_LT_LH_TU_ESP,
- NPC_LT_LH_TU_AH,
- NPC_LT_LH_CUSTOM0 = 0xE,
- NPC_LT_LH_CUSTOM1 = 0xF,
-};
-
-/* Structure definitions */
-struct npc_kpu_profile_cam {
- uint8_t state;
- uint8_t state_mask;
- uint16_t dp0;
- uint16_t dp0_mask;
- uint16_t dp1;
- uint16_t dp1_mask;
- uint16_t dp2;
- uint16_t dp2_mask;
-};
-
-struct npc_kpu_profile_action {
- uint8_t errlev;
- uint8_t errcode;
- uint8_t dp0_offset;
- uint8_t dp1_offset;
- uint8_t dp2_offset;
- uint8_t bypass_count;
- uint8_t parse_done;
- uint8_t next_state;
- uint8_t ptr_advance;
- uint8_t cap_ena;
- uint8_t lid;
- uint8_t ltype;
- uint8_t flags;
- uint8_t offset;
- uint8_t mask;
- uint8_t right;
- uint8_t shift;
-};
-
-struct npc_kpu_profile {
- int cam_entries;
- int action_entries;
- struct npc_kpu_profile_cam *cam;
- struct npc_kpu_profile_action *action;
-};
-
-/* NPC KPU register formats */
-struct npc_kpu_cam {
- uint64_t dp0_data : 16;
- uint64_t dp1_data : 16;
- uint64_t dp2_data : 16;
- uint64_t state : 8;
- uint64_t rsvd_63_56 : 8;
-};
-
-struct npc_kpu_action0 {
- uint64_t var_len_shift : 3;
- uint64_t var_len_right : 1;
- uint64_t var_len_mask : 8;
- uint64_t var_len_offset : 8;
- uint64_t ptr_advance : 8;
- uint64_t capture_flags : 8;
- uint64_t capture_ltype : 4;
- uint64_t capture_lid : 3;
- uint64_t rsvd_43 : 1;
- uint64_t next_state : 8;
- uint64_t parse_done : 1;
- uint64_t capture_ena : 1;
- uint64_t byp_count : 3;
- uint64_t rsvd_63_57 : 7;
-};
-
-struct npc_kpu_action1 {
- uint64_t dp0_offset : 8;
- uint64_t dp1_offset : 8;
- uint64_t dp2_offset : 8;
- uint64_t errcode : 8;
- uint64_t errlev : 4;
- uint64_t rsvd_63_36 : 28;
-};
-
-struct npc_kpu_pkind_cpi_def {
- uint64_t cpi_base : 10;
- uint64_t rsvd_11_10 : 2;
- uint64_t add_shift : 3;
- uint64_t rsvd_15 : 1;
- uint64_t add_mask : 8;
- uint64_t add_offset : 8;
- uint64_t flags_mask : 8;
- uint64_t flags_match : 8;
- uint64_t ltype_mask : 4;
- uint64_t ltype_match : 4;
- uint64_t lid : 3;
- uint64_t rsvd_62_59 : 4;
- uint64_t ena : 1;
-};
-
-struct nix_rx_action {
- uint64_t op :4;
- uint64_t pf_func :16;
- uint64_t index :20;
- uint64_t match_id :16;
- uint64_t flow_key_alg :5;
- uint64_t rsvd_63_61 :3;
-};
-
-struct nix_tx_action {
- uint64_t op :4;
- uint64_t rsvd_11_4 :8;
- uint64_t index :20;
- uint64_t match_id :16;
- uint64_t rsvd_63_48 :16;
-};
-
-/* NPC layer parse information structure */
-struct npc_layer_info_s {
- uint32_t lptr : 8;
- uint32_t flags : 8;
- uint32_t ltype : 4;
- uint32_t rsvd_31_20 : 12;
-};
-
-/* NPC layer mcam search key extract structure */
-struct npc_layer_kex_s {
- uint16_t flags : 8;
- uint16_t ltype : 4;
- uint16_t rsvd_15_12 : 4;
-};
-
-/* NPC mcam search key x1 structure */
-struct npc_mcam_key_x1_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 48;
- uint64_t rsvd_191_176 : 16;
-};
-
-/* NPC mcam search key x2 structure */
-struct npc_mcam_key_x2_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 64; /* W2 */
- uint64_t kw2 : 64; /* W3 */
- uint64_t kw3 : 32;
- uint64_t rsvd_319_288 : 32;
-};
-
-/* NPC mcam search key x4 structure */
-struct npc_mcam_key_x4_s {
- uint64_t intf : 2;
- uint64_t rsvd_63_2 : 62;
- uint64_t kw0 : 64; /* W1 */
- uint64_t kw1 : 64; /* W2 */
- uint64_t kw2 : 64; /* W3 */
- uint64_t kw3 : 64; /* W4 */
- uint64_t kw4 : 64; /* W5 */
- uint64_t kw5 : 64; /* W6 */
- uint64_t kw6 : 64; /* W7 */
-};
-
-/* NPC parse key extract structure */
-struct npc_parse_kex_s {
- uint64_t chan : 12;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t la : 12;
- uint64_t lb : 12;
- uint64_t lc : 12;
- uint64_t ld : 12;
- uint64_t le : 12;
- uint64_t lf : 12;
- uint64_t lg : 12;
- uint64_t lh : 12;
- uint64_t rsvd_127_124 : 4;
-};
-
-/* NPC result structure */
-struct npc_result_s {
- uint64_t intf : 2;
- uint64_t pkind : 6;
- uint64_t chan : 12;
- uint64_t errlev : 4;
- uint64_t errcode : 8;
- uint64_t l2m : 1;
- uint64_t l2b : 1;
- uint64_t l3m : 1;
- uint64_t l3b : 1;
- uint64_t eoh_ptr : 8;
- uint64_t rsvd_63_44 : 20;
- uint64_t action : 64; /* W1 */
- uint64_t vtag_action : 64; /* W2 */
- uint64_t la : 20;
- uint64_t lb : 20;
- uint64_t lc : 20;
- uint64_t rsvd_255_252 : 4;
- uint64_t ld : 20;
- uint64_t le : 20;
- uint64_t lf : 20;
- uint64_t rsvd_319_316 : 4;
- uint64_t lg : 20;
- uint64_t lh : 20;
- uint64_t rsvd_383_360 : 24;
-};
-
-#endif /* __OTX2_NPC_HW_H__ */
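The three-index MCAM macros above compose an offset from entry, bank and word
indices in the same way. A worked example for entry 5, bank 1, word 0 (offset
arithmetic only; the macro is restated for a standalone build):

    #include <inttypes.h>
    #include <stdio.h>

    #define NPC_AF_MCAMEX_BANKX_CAMX_W0(a, b, c) \
            (0x1000010ull | (uint64_t)(a) << 10 | (uint64_t)(b) << 6 | \
             (uint64_t)(c) << 3)

    int main(void)
    {
            /* 0x1000010 | (5 << 10) | (1 << 6) | (0 << 3) == 0x1001450 */
            printf("CAMX_W0(5, 1, 0) = 0x%" PRIx64 "\n",
                   NPC_AF_MCAMEX_BANKX_CAMX_W0(5, 1, 0));
            return 0;
    }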
diff --git a/drivers/common/octeontx2/hw/otx2_ree.h b/drivers/common/octeontx2/hw/otx2_ree.h
deleted file mode 100644
index b7481f125f..0000000000
--- a/drivers/common/octeontx2/hw/otx2_ree.h
+++ /dev/null
@@ -1,27 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_REE_HW_H__
-#define __OTX2_REE_HW_H__
-
-/* REE BAR0 */
-#define REE_AF_REEXM_MAX_MATCH (0x80c8)
-
-/* REE BAR02 */
-#define REE_LF_MISC_INT (0x300)
-#define REE_LF_DONE_INT (0x120)
-
-#define REE_AF_QUEX_GMCTL(a) (0x800 | (a) << 3)
-
-#define REE_AF_INT_VEC_RAS (0x0ull)
-#define REE_AF_INT_VEC_RVU (0x1ull)
-#define REE_AF_INT_VEC_QUE_DONE (0x2ull)
-#define REE_AF_INT_VEC_AQ (0x3ull)
-
-/* ENUMS */
-
-#define REE_LF_INT_VEC_QUE_DONE (0x0ull)
-#define REE_LF_INT_VEC_MISC (0x1ull)
-
-#endif /* __OTX2_REE_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_rvu.h b/drivers/common/octeontx2/hw/otx2_rvu.h
deleted file mode 100644
index b98dbcb1cd..0000000000
--- a/drivers/common/octeontx2/hw/otx2_rvu.h
+++ /dev/null
@@ -1,219 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_RVU_HW_H__
-#define __OTX2_RVU_HW_H__
-
-/* Register offsets */
-
-#define RVU_AF_MSIXTR_BASE (0x10ull)
-#define RVU_AF_BLK_RST (0x30ull)
-#define RVU_AF_PF_BAR4_ADDR (0x40ull)
-#define RVU_AF_RAS (0x100ull)
-#define RVU_AF_RAS_W1S (0x108ull)
-#define RVU_AF_RAS_ENA_W1S (0x110ull)
-#define RVU_AF_RAS_ENA_W1C (0x118ull)
-#define RVU_AF_GEN_INT (0x120ull)
-#define RVU_AF_GEN_INT_W1S (0x128ull)
-#define RVU_AF_GEN_INT_ENA_W1S (0x130ull)
-#define RVU_AF_GEN_INT_ENA_W1C (0x138ull)
-#define RVU_AF_AFPFX_MBOXX(a, b) \
- (0x2000ull | (uint64_t)(a) << 4 | (uint64_t)(b) << 3)
-#define RVU_AF_PFME_STATUS (0x2800ull)
-#define RVU_AF_PFTRPEND (0x2810ull)
-#define RVU_AF_PFTRPEND_W1S (0x2820ull)
-#define RVU_AF_PF_RST (0x2840ull)
-#define RVU_AF_HWVF_RST (0x2850ull)
-#define RVU_AF_PFAF_MBOX_INT (0x2880ull)
-#define RVU_AF_PFAF_MBOX_INT_W1S (0x2888ull)
-#define RVU_AF_PFAF_MBOX_INT_ENA_W1S (0x2890ull)
-#define RVU_AF_PFAF_MBOX_INT_ENA_W1C (0x2898ull)
-#define RVU_AF_PFFLR_INT (0x28a0ull)
-#define RVU_AF_PFFLR_INT_W1S (0x28a8ull)
-#define RVU_AF_PFFLR_INT_ENA_W1S (0x28b0ull)
-#define RVU_AF_PFFLR_INT_ENA_W1C (0x28b8ull)
-#define RVU_AF_PFME_INT (0x28c0ull)
-#define RVU_AF_PFME_INT_W1S (0x28c8ull)
-#define RVU_AF_PFME_INT_ENA_W1S (0x28d0ull)
-#define RVU_AF_PFME_INT_ENA_W1C (0x28d8ull)
-#define RVU_PRIV_CONST (0x8000000ull)
-#define RVU_PRIV_GEN_CFG (0x8000010ull)
-#define RVU_PRIV_CLK_CFG (0x8000020ull)
-#define RVU_PRIV_ACTIVE_PC (0x8000030ull)
-#define RVU_PRIV_PFX_CFG(a) (0x8000100ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_MSIX_CFG(a) (0x8000110ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_ID_CFG(a) (0x8000120ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_INT_CFG(a) (0x8000200ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_NIXX_CFG(a, b) \
- (0x8000300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_PFX_NPA_CFG(a) (0x8000310ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_SSO_CFG(a) (0x8000320ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_SSOW_CFG(a) (0x8000330ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_TIM_CFG(a) (0x8000340ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_PFX_CPTX_CFG(a, b) \
- (0x8000350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_BLOCK_TYPEX_REV(a) (0x8000400ull | (uint64_t)(a) << 3)
-#define RVU_PRIV_HWVFX_INT_CFG(a) (0x8001280ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_NIXX_CFG(a, b) \
- (0x8001300ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-#define RVU_PRIV_HWVFX_NPA_CFG(a) (0x8001310ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_SSO_CFG(a) (0x8001320ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_SSOW_CFG(a) (0x8001330ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_TIM_CFG(a) (0x8001340ull | (uint64_t)(a) << 16)
-#define RVU_PRIV_HWVFX_CPTX_CFG(a, b) \
- (0x8001350ull | (uint64_t)(a) << 16 | (uint64_t)(b) << 3)
-
-#define RVU_PF_VFX_PFVF_MBOXX(a, b) \
- (0x0ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 3)
-#define RVU_PF_VF_BAR4_ADDR (0x10ull)
-#define RVU_PF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_STATUSX(a) (0x800ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFTRPENDX(a) (0x820ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFTRPEND_W1SX(a) (0x840ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INTX(a) (0x880ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_W1SX(a) (0x8a0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_ENA_W1SX(a) (0x8c0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFPF_MBOX_INT_ENA_W1CX(a) (0x8e0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INTX(a) (0x900ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_W1SX(a) (0x920ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_ENA_W1SX(a) (0x940ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFFLR_INT_ENA_W1CX(a) (0x960ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INTX(a) (0x980ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_W1SX(a) (0x9a0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_ENA_W1SX(a) (0x9c0ull | (uint64_t)(a) << 3)
-#define RVU_PF_VFME_INT_ENA_W1CX(a) (0x9e0ull | (uint64_t)(a) << 3)
-#define RVU_PF_PFAF_MBOXX(a) (0xc00ull | (uint64_t)(a) << 3)
-#define RVU_PF_INT (0xc20ull)
-#define RVU_PF_INT_W1S (0xc28ull)
-#define RVU_PF_INT_ENA_W1S (0xc30ull)
-#define RVU_PF_INT_ENA_W1C (0xc38ull)
-#define RVU_PF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
-#define RVU_PF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
-#define RVU_PF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
-#define RVU_VF_VFPF_MBOXX(a) (0x0ull | (uint64_t)(a) << 3)
-#define RVU_VF_INT (0x20ull)
-#define RVU_VF_INT_W1S (0x28ull)
-#define RVU_VF_INT_ENA_W1S (0x30ull)
-#define RVU_VF_INT_ENA_W1C (0x38ull)
-#define RVU_VF_BLOCK_ADDRX_DISC(a) (0x200ull | (uint64_t)(a) << 3)
-#define RVU_VF_MSIX_VECX_ADDR(a) (0x80000ull | (uint64_t)(a) << 4)
-#define RVU_VF_MSIX_VECX_CTL(a) (0x80008ull | (uint64_t)(a) << 4)
-#define RVU_VF_MSIX_PBAX(a) (0xf0000ull | (uint64_t)(a) << 3)
-
-
-/* Enum offsets */
-
-#define RVU_BAR_RVU_PF_END_BAR0 (0x84f000000000ull)
-#define RVU_BAR_RVU_PF_START_BAR0 (0x840000000000ull)
-#define RVU_BAR_RVU_PFX_FUNCX_BAR2(a, b) \
- (0x840200000000ull | ((uint64_t)(a) << 36) | ((uint64_t)(b) << 25))
-
-#define RVU_AF_INT_VEC_POISON (0x0ull)
-#define RVU_AF_INT_VEC_PFFLR (0x1ull)
-#define RVU_AF_INT_VEC_PFME (0x2ull)
-#define RVU_AF_INT_VEC_GEN (0x3ull)
-#define RVU_AF_INT_VEC_MBOX (0x4ull)
-
-#define RVU_BLOCK_TYPE_RVUM (0x0ull)
-#define RVU_BLOCK_TYPE_LMT (0x2ull)
-#define RVU_BLOCK_TYPE_NIX (0x3ull)
-#define RVU_BLOCK_TYPE_NPA (0x4ull)
-#define RVU_BLOCK_TYPE_NPC (0x5ull)
-#define RVU_BLOCK_TYPE_SSO (0x6ull)
-#define RVU_BLOCK_TYPE_SSOW (0x7ull)
-#define RVU_BLOCK_TYPE_TIM (0x8ull)
-#define RVU_BLOCK_TYPE_CPT (0x9ull)
-#define RVU_BLOCK_TYPE_NDC (0xaull)
-#define RVU_BLOCK_TYPE_DDF (0xbull)
-#define RVU_BLOCK_TYPE_ZIP (0xcull)
-#define RVU_BLOCK_TYPE_RAD (0xdull)
-#define RVU_BLOCK_TYPE_DFA (0xeull)
-#define RVU_BLOCK_TYPE_HNA (0xfull)
-#define RVU_BLOCK_TYPE_REE (0xeull)
-
-#define RVU_BLOCK_ADDR_RVUM (0x0ull)
-#define RVU_BLOCK_ADDR_LMT (0x1ull)
-#define RVU_BLOCK_ADDR_NPA (0x3ull)
-#define RVU_BLOCK_ADDR_NIX0 (0x4ull)
-#define RVU_BLOCK_ADDR_NIX1 (0x5ull)
-#define RVU_BLOCK_ADDR_NPC (0x6ull)
-#define RVU_BLOCK_ADDR_SSO (0x7ull)
-#define RVU_BLOCK_ADDR_SSOW (0x8ull)
-#define RVU_BLOCK_ADDR_TIM (0x9ull)
-#define RVU_BLOCK_ADDR_CPT0 (0xaull)
-#define RVU_BLOCK_ADDR_CPT1 (0xbull)
-#define RVU_BLOCK_ADDR_NDC0 (0xcull)
-#define RVU_BLOCK_ADDR_NDC1 (0xdull)
-#define RVU_BLOCK_ADDR_NDC2 (0xeull)
-#define RVU_BLOCK_ADDR_R_END (0x1full)
-#define RVU_BLOCK_ADDR_R_START (0x14ull)
-#define RVU_BLOCK_ADDR_REE0 (0x14ull)
-#define RVU_BLOCK_ADDR_REE1 (0x15ull)
-
-#define RVU_VF_INT_VEC_MBOX (0x0ull)
-
-#define RVU_PF_INT_VEC_AFPF_MBOX (0x6ull)
-#define RVU_PF_INT_VEC_VFFLR0 (0x0ull)
-#define RVU_PF_INT_VEC_VFFLR1 (0x1ull)
-#define RVU_PF_INT_VEC_VFME0 (0x2ull)
-#define RVU_PF_INT_VEC_VFME1 (0x3ull)
-#define RVU_PF_INT_VEC_VFPF_MBOX0 (0x4ull)
-#define RVU_PF_INT_VEC_VFPF_MBOX1 (0x5ull)
-
-
-#define AF_BAR2_ALIASX_SIZE (0x100000ull)
-
-#define TIM_AF_BAR2_SEL (0x9000000ull)
-#define SSO_AF_BAR2_SEL (0x9000000ull)
-#define NIX_AF_BAR2_SEL (0x9000000ull)
-#define SSOW_AF_BAR2_SEL (0x9000000ull)
-#define NPA_AF_BAR2_SEL (0x9000000ull)
-#define CPT_AF_BAR2_SEL (0x9000000ull)
-#define RVU_AF_BAR2_SEL (0x9000000ull)
-#define REE_AF_BAR2_SEL (0x9000000ull)
-
-#define AF_BAR2_ALIASX(a, b) \
- (0x9100000ull | (uint64_t)(a) << 12 | (uint64_t)(b))
-#define TIM_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define SSO_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define NIX_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
-#define SSOW_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define NPA_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(0, b)
-#define CPT_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define RVU_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-#define REE_AF_BAR2_ALIASX(a, b) AF_BAR2_ALIASX(a, b)
-
-/* Structure definitions */
-
-/* RVU admin function register address structure */
-struct rvu_af_addr_s {
- uint64_t addr : 28;
- uint64_t block : 5;
- uint64_t rsvd_63_33 : 31;
-};
-
-/* RVU function-unique address structure */
-struct rvu_func_addr_s {
- uint32_t addr : 12;
- uint32_t lf_slot : 8;
- uint32_t block : 5;
- uint32_t rsvd_31_25 : 7;
-};
-
-/* RVU msi-x vector structure */
-struct rvu_msix_vec_s {
- uint64_t addr : 64; /* W0 */
- uint64_t data : 32;
- uint64_t mask : 1;
- uint64_t pend : 1;
- uint64_t rsvd_127_98 : 30;
-};
-
-/* RVU pf function identification structure */
-struct rvu_pf_func_s {
- uint16_t func : 10;
- uint16_t pf : 6;
-};
-
-#endif /* __OTX2_RVU_HW_H__ */
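struct rvu_pf_func_s above packs a 16-bit PF/function handle: func in bits
9..0 and pf in bits 15..10. An equivalent shift-based sketch (helper names
are illustrative, not driver API):

    #include <stdint.h>

    static inline uint16_t
    rvu_make_pf_func(uint16_t pf, uint16_t func)
    {
            return (uint16_t)(((pf & 0x3f) << 10) | (func & 0x3ff));
    }

    static inline uint16_t rvu_pf(uint16_t pf_func)   { return pf_func >> 10; }
    static inline uint16_t rvu_func(uint16_t pf_func) { return pf_func & 0x3ff; }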
diff --git a/drivers/common/octeontx2/hw/otx2_sdp.h b/drivers/common/octeontx2/hw/otx2_sdp.h
deleted file mode 100644
index 1e690f8b32..0000000000
--- a/drivers/common/octeontx2/hw/otx2_sdp.h
+++ /dev/null
@@ -1,184 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SDP_HW_H_
-#define __OTX2_SDP_HW_H_
-
-/* SDP VF IOQs */
-#define SDP_MIN_RINGS_PER_VF (1)
-#define SDP_MAX_RINGS_PER_VF (8)
-
-/* SDP VF IQ configuration */
-#define SDP_VF_MAX_IQ_DESCRIPTORS (512)
-#define SDP_VF_MIN_IQ_DESCRIPTORS (128)
-
-#define SDP_VF_DB_MIN (1)
-#define SDP_VF_DB_TIMEOUT (1)
-#define SDP_VF_INTR_THRESHOLD (0xFFFFFFFF)
-
-#define SDP_VF_64BYTE_INSTR (64)
-#define SDP_VF_32BYTE_INSTR (32)
-
-/* SDP VF OQ configuration */
-#define SDP_VF_MAX_OQ_DESCRIPTORS (512)
-#define SDP_VF_MIN_OQ_DESCRIPTORS (128)
-#define SDP_VF_OQ_BUF_SIZE (2048)
-#define SDP_VF_OQ_REFIL_THRESHOLD (16)
-
-#define SDP_VF_OQ_INFOPTR_MODE (1)
-#define SDP_VF_OQ_BUFPTR_MODE (0)
-
-#define SDP_VF_OQ_INTR_PKT (1)
-#define SDP_VF_OQ_INTR_TIME (10)
-#define SDP_VF_CFG_IO_QUEUES SDP_MAX_RINGS_PER_VF
-
-/* Wait time in milliseconds for FLR */
-#define SDP_VF_PCI_FLR_WAIT (100)
-#define SDP_VF_BUSY_LOOP_COUNT (10000)
-
-#define SDP_VF_MAX_IO_QUEUES SDP_MAX_RINGS_PER_VF
-#define SDP_VF_MIN_IO_QUEUES SDP_MIN_RINGS_PER_VF
-
-/* SDP VF IOQs per rawdev */
-#define SDP_VF_MAX_IOQS_PER_RAWDEV SDP_VF_MAX_IO_QUEUES
-#define SDP_VF_DEFAULT_IOQS_PER_RAWDEV SDP_VF_MIN_IO_QUEUES
-
-/* SDP VF Register definitions */
-#define SDP_VF_RING_OFFSET (0x1ull << 17)
-
-/* SDP VF IQ Registers */
-#define SDP_VF_R_IN_CONTROL_START (0x10000)
-#define SDP_VF_R_IN_ENABLE_START (0x10010)
-#define SDP_VF_R_IN_INSTR_BADDR_START (0x10020)
-#define SDP_VF_R_IN_INSTR_RSIZE_START (0x10030)
-#define SDP_VF_R_IN_INSTR_DBELL_START (0x10040)
-#define SDP_VF_R_IN_CNTS_START (0x10050)
-#define SDP_VF_R_IN_INT_LEVELS_START (0x10060)
-#define SDP_VF_R_IN_PKT_CNT_START (0x10080)
-#define SDP_VF_R_IN_BYTE_CNT_START (0x10090)
-
-#define SDP_VF_R_IN_CONTROL(ring) \
- (SDP_VF_R_IN_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_ENABLE(ring) \
- (SDP_VF_R_IN_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_BADDR(ring) \
- (SDP_VF_R_IN_INSTR_BADDR_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_RSIZE(ring) \
- (SDP_VF_R_IN_INSTR_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INSTR_DBELL(ring) \
- (SDP_VF_R_IN_INSTR_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_CNTS(ring) \
- (SDP_VF_R_IN_CNTS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_INT_LEVELS(ring) \
- (SDP_VF_R_IN_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_PKT_CNT(ring) \
- (SDP_VF_R_IN_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_IN_BYTE_CNT(ring) \
- (SDP_VF_R_IN_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-/* SDP VF IQ Masks */
-#define SDP_VF_R_IN_CTL_RPVF_MASK (0xF)
-#define SDP_VF_R_IN_CTL_RPVF_POS (48)
-
-#define SDP_VF_R_IN_CTL_IDLE (0x1ull << 28)
-#define SDP_VF_R_IN_CTL_RDSIZE (0x3ull << 25) /* Setting to max(4) */
-#define SDP_VF_R_IN_CTL_IS_64B (0x1ull << 24)
-#define SDP_VF_R_IN_CTL_D_NSR (0x1ull << 8)
-#define SDP_VF_R_IN_CTL_D_ESR (0x1ull << 6)
-#define SDP_VF_R_IN_CTL_D_ROR (0x1ull << 5)
-#define SDP_VF_R_IN_CTL_NSR (0x1ull << 3)
-#define SDP_VF_R_IN_CTL_ESR (0x1ull << 1)
-#define SDP_VF_R_IN_CTL_ROR (0x1ull << 0)
-
-#define SDP_VF_R_IN_CTL_MASK \
- (SDP_VF_R_IN_CTL_RDSIZE | SDP_VF_R_IN_CTL_IS_64B)
-
-/* SDP VF OQ Registers */
-#define SDP_VF_R_OUT_CNTS_START (0x10100)
-#define SDP_VF_R_OUT_INT_LEVELS_START (0x10110)
-#define SDP_VF_R_OUT_SLIST_BADDR_START (0x10120)
-#define SDP_VF_R_OUT_SLIST_RSIZE_START (0x10130)
-#define SDP_VF_R_OUT_SLIST_DBELL_START (0x10140)
-#define SDP_VF_R_OUT_CONTROL_START (0x10150)
-#define SDP_VF_R_OUT_ENABLE_START (0x10160)
-#define SDP_VF_R_OUT_PKT_CNT_START (0x10180)
-#define SDP_VF_R_OUT_BYTE_CNT_START (0x10190)
-
-#define SDP_VF_R_OUT_CONTROL(ring) \
- (SDP_VF_R_OUT_CONTROL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_ENABLE(ring) \
- (SDP_VF_R_OUT_ENABLE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_BADDR(ring) \
- (SDP_VF_R_OUT_SLIST_BADDR_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_RSIZE(ring) \
- (SDP_VF_R_OUT_SLIST_RSIZE_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_SLIST_DBELL(ring) \
- (SDP_VF_R_OUT_SLIST_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_CNTS(ring) \
- (SDP_VF_R_OUT_CNTS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_INT_LEVELS(ring) \
- (SDP_VF_R_OUT_INT_LEVELS_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_PKT_CNT(ring) \
- (SDP_VF_R_OUT_PKT_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-#define SDP_VF_R_OUT_BYTE_CNT(ring) \
- (SDP_VF_R_OUT_BYTE_CNT_START + ((ring) * SDP_VF_RING_OFFSET))
-
-/* SDP VF OQ Masks */
-#define SDP_VF_R_OUT_CTL_IDLE (1ull << 40)
-#define SDP_VF_R_OUT_CTL_ES_I (1ull << 34)
-#define SDP_VF_R_OUT_CTL_NSR_I (1ull << 33)
-#define SDP_VF_R_OUT_CTL_ROR_I (1ull << 32)
-#define SDP_VF_R_OUT_CTL_ES_D (1ull << 30)
-#define SDP_VF_R_OUT_CTL_NSR_D (1ull << 29)
-#define SDP_VF_R_OUT_CTL_ROR_D (1ull << 28)
-#define SDP_VF_R_OUT_CTL_ES_P (1ull << 26)
-#define SDP_VF_R_OUT_CTL_NSR_P (1ull << 25)
-#define SDP_VF_R_OUT_CTL_ROR_P (1ull << 24)
-#define SDP_VF_R_OUT_CTL_IMODE (1ull << 23)
-
-#define SDP_VF_R_OUT_INT_LEVELS_BMODE (1ull << 63)
-#define SDP_VF_R_OUT_INT_LEVELS_TIMET (32)
-
-/* SDP Instruction Header */
-struct sdp_instr_ih {
- /* Data Len */
- uint64_t tlen:16;
-
- /* Reserved1 */
- uint64_t rsvd1:20;
-
- /* PKIND for SDP */
- uint64_t pkind:6;
-
- /* Front Data size */
- uint64_t fsz:6;
-
- /* No. of entries in gather list */
- uint64_t gsz:14;
-
- /* Gather indicator */
- uint64_t gather:1;
-
- /* Reserved2 */
- uint64_t rsvd2:1;
-} __rte_packed;
-
-#endif /* __OTX2_SDP_HW_H_ */
-
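Every SDP_VF_R_* ring register above is its *_START offset plus a fixed
128 KB stride per ring (SDP_VF_RING_OFFSET = 1 << 17). A sketch that only
computes the offset (macros restated for a standalone build):

    #include <inttypes.h>
    #include <stdio.h>

    #define SDP_VF_RING_OFFSET            (0x1ull << 17)
    #define SDP_VF_R_IN_INSTR_DBELL_START (0x10040)
    #define SDP_VF_R_IN_INSTR_DBELL(ring) \
            (SDP_VF_R_IN_INSTR_DBELL_START + ((ring) * SDP_VF_RING_OFFSET))

    int main(void)
    {
            /* ring 3: 0x10040 + 3 * 0x20000 == 0x70040 */
            printf("IN_INSTR_DBELL(3) = 0x%" PRIx64 "\n",
                   SDP_VF_R_IN_INSTR_DBELL(3));
            return 0;
    }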
diff --git a/drivers/common/octeontx2/hw/otx2_sso.h b/drivers/common/octeontx2/hw/otx2_sso.h
deleted file mode 100644
index 98a8130b16..0000000000
--- a/drivers/common/octeontx2/hw/otx2_sso.h
+++ /dev/null
@@ -1,209 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SSO_HW_H__
-#define __OTX2_SSO_HW_H__
-
-/* Register offsets */
-
-#define SSO_AF_CONST (0x1000ull)
-#define SSO_AF_CONST1 (0x1008ull)
-#define SSO_AF_WQ_INT_PC (0x1020ull)
-#define SSO_AF_NOS_CNT (0x1050ull)
-#define SSO_AF_AW_WE (0x1080ull)
-#define SSO_AF_WS_CFG (0x1088ull)
-#define SSO_AF_GWE_CFG (0x1098ull)
-#define SSO_AF_GWE_RANDOM (0x10b0ull)
-#define SSO_AF_LF_HWGRP_RST (0x10e0ull)
-#define SSO_AF_AW_CFG (0x10f0ull)
-#define SSO_AF_BLK_RST (0x10f8ull)
-#define SSO_AF_ACTIVE_CYCLES0 (0x1100ull)
-#define SSO_AF_ACTIVE_CYCLES1 (0x1108ull)
-#define SSO_AF_ACTIVE_CYCLES2 (0x1110ull)
-#define SSO_AF_ERR0 (0x1220ull)
-#define SSO_AF_ERR0_W1S (0x1228ull)
-#define SSO_AF_ERR0_ENA_W1C (0x1230ull)
-#define SSO_AF_ERR0_ENA_W1S (0x1238ull)
-#define SSO_AF_ERR2 (0x1260ull)
-#define SSO_AF_ERR2_W1S (0x1268ull)
-#define SSO_AF_ERR2_ENA_W1C (0x1270ull)
-#define SSO_AF_ERR2_ENA_W1S (0x1278ull)
-#define SSO_AF_UNMAP_INFO (0x12f0ull)
-#define SSO_AF_UNMAP_INFO2 (0x1300ull)
-#define SSO_AF_UNMAP_INFO3 (0x1310ull)
-#define SSO_AF_RAS (0x1420ull)
-#define SSO_AF_RAS_W1S (0x1430ull)
-#define SSO_AF_RAS_ENA_W1C (0x1460ull)
-#define SSO_AF_RAS_ENA_W1S (0x1470ull)
-#define SSO_AF_AW_INP_CTL (0x2070ull)
-#define SSO_AF_AW_ADD (0x2080ull)
-#define SSO_AF_AW_READ_ARB (0x2090ull)
-#define SSO_AF_XAQ_REQ_PC (0x20b0ull)
-#define SSO_AF_XAQ_LATENCY_PC (0x20b8ull)
-#define SSO_AF_TAQ_CNT (0x20c0ull)
-#define SSO_AF_TAQ_ADD (0x20e0ull)
-#define SSO_AF_POISONX(a) (0x2100ull | (uint64_t)(a) << 3)
-#define SSO_AF_POISONX_W1S(a) (0x2200ull | (uint64_t)(a) << 3)
-#define SSO_PRIV_AF_INT_CFG (0x3000ull)
-#define SSO_AF_RVU_LF_CFG_DEBUG (0x3800ull)
-#define SSO_PRIV_LFX_HWGRP_CFG(a) (0x10000ull | (uint64_t)(a) << 3)
-#define SSO_PRIV_LFX_HWGRP_INT_CFG(a) (0x20000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IU_ACCNTX_CFG(a) (0x50000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IU_ACCNTX_RST(a) (0x60000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_HEAD_PTR(a) (0x80000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_TAIL_PTR(a) (0x90000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_HEAD_NEXT(a) (0xa0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_TAIL_NEXT(a) (0xb0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TIAQX_STATUS(a) (0xc0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TOAQX_STATUS(a) (0xd0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQX_GMCTL(a) (0xe0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_HWGRPX_IAQ_THR(a) (0x200000ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_TAQ_THR(a) (0x200010ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_PRI(a) (0x200020ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_WS_PC(a) (0x200050ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_EXT_PC(a) (0x200060ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_WA_PC(a) (0x200070ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_TS_PC(a) (0x200080ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_DS_PC(a) (0x200090ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_DQ_PC(a) (0x2000A0ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_PAGE_CNT(a) (0x200100ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_STATUS(a) (0x200110ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_CFG(a) (0x200120ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_AW_TAGSPACE(a) (0x200130ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_XAQ_AURA(a) (0x200140ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_XAQ_LIMIT(a) (0x200220ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWGRPX_IU_ACCNT(a) (0x200230ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_ARB(a) (0x400100ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_INV(a) (0x400180ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_GMCTL(a) (0x400200ull | (uint64_t)(a) << 12)
-#define SSO_AF_HWSX_SX_GRPMSKX(a, b, c) \
- (0x400400ull | (uint64_t)(a) << 12 | (uint64_t)(b) << 5 | \
- (uint64_t)(c) << 3)
-#define SSO_AF_IPL_FREEX(a) (0x800000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_IAQX(a) (0x840000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_DESCHEDX(a) (0x860000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IPL_CONFX(a) (0x880000ull | (uint64_t)(a) << 3)
-#define SSO_AF_NPA_DIGESTX(a) (0x900000ull | (uint64_t)(a) << 3)
-#define SSO_AF_NPA_DIGESTX_W1S(a) (0x900100ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFP_DIGESTX(a) (0x900200ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFP_DIGESTX_W1S(a) (0x900300ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFPN_DIGESTX(a) (0x900400ull | (uint64_t)(a) << 3)
-#define SSO_AF_BFPN_DIGESTX_W1S(a) (0x900500ull | (uint64_t)(a) << 3)
-#define SSO_AF_GRPDIS_DIGESTX(a) (0x900600ull | (uint64_t)(a) << 3)
-#define SSO_AF_GRPDIS_DIGESTX_W1S(a) (0x900700ull | (uint64_t)(a) << 3)
-#define SSO_AF_AWEMPTY_DIGESTX(a) (0x900800ull | (uint64_t)(a) << 3)
-#define SSO_AF_AWEMPTY_DIGESTX_W1S(a) (0x900900ull | (uint64_t)(a) << 3)
-#define SSO_AF_WQP0_DIGESTX(a) (0x900a00ull | (uint64_t)(a) << 3)
-#define SSO_AF_WQP0_DIGESTX_W1S(a) (0x900b00ull | (uint64_t)(a) << 3)
-#define SSO_AF_AW_DROPPED_DIGESTX(a) (0x900c00ull | (uint64_t)(a) << 3)
-#define SSO_AF_AW_DROPPED_DIGESTX_W1S(a) (0x900d00ull | (uint64_t)(a) << 3)
-#define SSO_AF_QCTLDIS_DIGESTX(a) (0x900e00ull | (uint64_t)(a) << 3)
-#define SSO_AF_QCTLDIS_DIGESTX_W1S(a) (0x900f00ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQDIS_DIGESTX(a) (0x901000ull | (uint64_t)(a) << 3)
-#define SSO_AF_XAQDIS_DIGESTX_W1S(a) (0x901100ull | (uint64_t)(a) << 3)
-#define SSO_AF_FLR_AQ_DIGESTX(a) (0x901200ull | (uint64_t)(a) << 3)
-#define SSO_AF_FLR_AQ_DIGESTX_W1S(a) (0x901300ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GMULTI_DIGESTX(a) (0x902000ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GMULTI_DIGESTX_W1S(a) (0x902100ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GUNMAP_DIGESTX(a) (0x902200ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GUNMAP_DIGESTX_W1S(a) (0x902300ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_AWE_DIGESTX(a) (0x902400ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_AWE_DIGESTX_W1S(a) (0x902500ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GWI_DIGESTX(a) (0x902600ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_GWI_DIGESTX_W1S(a) (0x902700ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_NE_DIGESTX(a) (0x902800ull | (uint64_t)(a) << 3)
-#define SSO_AF_WS_NE_DIGESTX_W1S(a) (0x902900ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_TAG(a) (0xa00000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_GRP(a) (0xa20000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_PENDTAG(a) (0xa40000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_LINKS(a) (0xa60000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_QLINKS(a) (0xa80000ull | (uint64_t)(a) << 3)
-#define SSO_AF_IENTX_WQP(a) (0xaa0000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TAQX_LINK(a) (0xc00000ull | (uint64_t)(a) << 3)
-#define SSO_AF_TAQX_WAEX_TAG(a, b) \
- (0xe00000ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-#define SSO_AF_TAQX_WAEX_WQP(a, b) \
- (0xe00008ull | (uint64_t)(a) << 8 | (uint64_t)(b) << 4)
-
-#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
-#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
-#define SSO_LF_GGRP_QCTL (0x20ull)
-#define SSO_LF_GGRP_EXE_DIS (0x80ull)
-#define SSO_LF_GGRP_INT (0x100ull)
-#define SSO_LF_GGRP_INT_W1S (0x108ull)
-#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
-#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
-#define SSO_LF_GGRP_INT_THR (0x140ull)
-#define SSO_LF_GGRP_INT_CNT (0x180ull)
-#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
-#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
-#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
-#define SSO_LF_GGRP_MISC_CNT (0x200ull)
-
-#define SSO_AF_IAQ_FREE_CNT_MASK 0x3FFFull
-#define SSO_AF_IAQ_RSVD_FREE_MASK 0x3FFFull
-#define SSO_AF_IAQ_RSVD_FREE_SHIFT 16
-#define SSO_AF_IAQ_FREE_CNT_MAX SSO_AF_IAQ_FREE_CNT_MASK
-#define SSO_AF_AW_ADD_RSVD_FREE_MASK 0x3FFFull
-#define SSO_AF_AW_ADD_RSVD_FREE_SHIFT 16
-#define SSO_HWGRP_IAQ_MAX_THR_MASK 0x3FFFull
-#define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull
-#define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32
-#define SSO_HWGRP_IAQ_RSVD_THR 0x2
-
-#define SSO_AF_TAQ_FREE_CNT_MASK 0x7FFull
-#define SSO_AF_TAQ_RSVD_FREE_MASK 0x7FFull
-#define SSO_AF_TAQ_RSVD_FREE_SHIFT 16
-#define SSO_AF_TAQ_FREE_CNT_MAX SSO_AF_TAQ_FREE_CNT_MASK
-#define SSO_AF_TAQ_ADD_RSVD_FREE_MASK 0x1FFFull
-#define SSO_AF_TAQ_ADD_RSVD_FREE_SHIFT 16
-#define SSO_HWGRP_TAQ_MAX_THR_MASK 0x7FFull
-#define SSO_HWGRP_TAQ_RSVD_THR_MASK 0x7FFull
-#define SSO_HWGRP_TAQ_MAX_THR_SHIFT 32
-#define SSO_HWGRP_TAQ_RSVD_THR 0x3
-
-#define SSO_HWGRP_PRI_AFF_MASK 0xFull
-#define SSO_HWGRP_PRI_AFF_SHIFT 8
-#define SSO_HWGRP_PRI_WGT_MASK 0x3Full
-#define SSO_HWGRP_PRI_WGT_SHIFT 16
-#define SSO_HWGRP_PRI_WGT_LEFT_MASK 0x3Full
-#define SSO_HWGRP_PRI_WGT_LEFT_SHIFT 24
-
-#define SSO_HWGRP_AW_CFG_RWEN BIT_ULL(0)
-#define SSO_HWGRP_AW_CFG_LDWB BIT_ULL(1)
-#define SSO_HWGRP_AW_CFG_LDT BIT_ULL(2)
-#define SSO_HWGRP_AW_CFG_STT BIT_ULL(3)
-#define SSO_HWGRP_AW_CFG_XAQ_BYP_DIS BIT_ULL(4)
-
-#define SSO_HWGRP_AW_STS_TPTR_VLD BIT_ULL(8)
-#define SSO_HWGRP_AW_STS_NPA_FETCH BIT_ULL(9)
-#define SSO_HWGRP_AW_STS_XAQ_BUFSC_MASK 0x7ull
-#define SSO_HWGRP_AW_STS_INIT_STS 0x18ull
-
-/* Enum offsets */
-
-#define SSO_LF_INT_VEC_GRP (0x0ull)
-
-#define SSO_AF_INT_VEC_ERR0 (0x0ull)
-#define SSO_AF_INT_VEC_ERR2 (0x1ull)
-#define SSO_AF_INT_VEC_RAS (0x2ull)
-
-#define SSO_WA_IOBN (0x0ull)
-#define SSO_WA_NIXRX (0x1ull)
-#define SSO_WA_CPT (0x2ull)
-#define SSO_WA_ADDWQ (0x3ull)
-#define SSO_WA_DPI (0x4ull)
-#define SSO_WA_NIXTX (0x5ull)
-#define SSO_WA_TIM (0x6ull)
-#define SSO_WA_ZIP (0x7ull)
-
-#define SSO_TT_ORDERED (0x0ull)
-#define SSO_TT_ATOMIC (0x1ull)
-#define SSO_TT_UNTAGGED (0x2ull)
-#define SSO_TT_EMPTY (0x3ull)
-
-
-/* Structure definitions */
-
-#endif /* __OTX2_SSO_HW_H__ */
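The SSO_HWGRP_IAQ_* mask/shift pairs above describe how a single
SSO_AF_HWGRPX_IAQ_THR value is laid out: the maximum threshold in bits 45..32
and the reserved threshold in bits 13..0. An illustrative composition helper
(a sketch, not driver code):

    #include <stdint.h>

    #define SSO_HWGRP_IAQ_MAX_THR_MASK  0x3FFFull
    #define SSO_HWGRP_IAQ_RSVD_THR_MASK 0x3FFFull
    #define SSO_HWGRP_IAQ_MAX_THR_SHIFT 32

    static inline uint64_t
    sso_iaq_thr(uint64_t max_thr, uint64_t rsvd_thr)
    {
            return ((max_thr & SSO_HWGRP_IAQ_MAX_THR_MASK)
                            << SSO_HWGRP_IAQ_MAX_THR_SHIFT) |
                   (rsvd_thr & SSO_HWGRP_IAQ_RSVD_THR_MASK);
    }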
diff --git a/drivers/common/octeontx2/hw/otx2_ssow.h b/drivers/common/octeontx2/hw/otx2_ssow.h
deleted file mode 100644
index 8a44578036..0000000000
--- a/drivers/common/octeontx2/hw/otx2_ssow.h
+++ /dev/null
@@ -1,56 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SSOW_HW_H__
-#define __OTX2_SSOW_HW_H__
-
-/* Register offsets */
-
-#define SSOW_AF_RVU_LF_HWS_CFG_DEBUG (0x10ull)
-#define SSOW_AF_LF_HWS_RST (0x30ull)
-#define SSOW_PRIV_LFX_HWS_CFG(a) (0x1000ull | (uint64_t)(a) << 3)
-#define SSOW_PRIV_LFX_HWS_INT_CFG(a) (0x2000ull | (uint64_t)(a) << 3)
-#define SSOW_AF_SCRATCH_WS (0x100000ull)
-#define SSOW_AF_SCRATCH_GW (0x200000ull)
-#define SSOW_AF_SCRATCH_AW (0x300000ull)
-
-#define SSOW_LF_GWS_LINKS (0x10ull)
-#define SSOW_LF_GWS_PENDWQP (0x40ull)
-#define SSOW_LF_GWS_PENDSTATE (0x50ull)
-#define SSOW_LF_GWS_NW_TIM (0x70ull)
-#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
-#define SSOW_LF_GWS_INT (0x100ull)
-#define SSOW_LF_GWS_INT_W1S (0x108ull)
-#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
-#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
-#define SSOW_LF_GWS_TAG (0x200ull)
-#define SSOW_LF_GWS_WQP (0x210ull)
-#define SSOW_LF_GWS_SWTP (0x220ull)
-#define SSOW_LF_GWS_PENDTAG (0x230ull)
-#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
-#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
-#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
-#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
-#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
-#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
-#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
-#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
-#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
-#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
-#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
-#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
-
-
-/* Enum offsets */
-
-#define SSOW_LF_INT_VEC_IOP (0x0ull)
-
-
-#endif /* __OTX2_SSOW_HW_H__ */
diff --git a/drivers/common/octeontx2/hw/otx2_tim.h b/drivers/common/octeontx2/hw/otx2_tim.h
deleted file mode 100644
index 41442ad0a8..0000000000
--- a/drivers/common/octeontx2/hw/otx2_tim.h
+++ /dev/null
@@ -1,34 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_HW_H__
-#define __OTX2_TIM_HW_H__
-
-/* TIM */
-#define TIM_AF_CONST (0x90)
-#define TIM_PRIV_LFX_CFG(a) (0x20000 | (a) << 3)
-#define TIM_PRIV_LFX_INT_CFG(a) (0x24000 | (a) << 3)
-#define TIM_AF_RVU_LF_CFG_DEBUG (0x30000)
-#define TIM_AF_BLK_RST (0x10)
-#define TIM_AF_LF_RST (0x20)
-#define TIM_AF_RINGX_GMCTL(a) (0x2000 | (a) << 3)
-#define TIM_AF_RINGX_CTL0(a) (0x4000 | (a) << 3)
-#define TIM_AF_RINGX_CTL1(a) (0x6000 | (a) << 3)
-#define TIM_AF_RINGX_CTL2(a) (0x8000 | (a) << 3)
-#define TIM_AF_FLAGS_REG (0x80)
-#define TIM_AF_FLAGS_REG_ENA_TIM BIT_ULL(0)
-#define TIM_AF_RINGX_CTL1_ENA BIT_ULL(47)
-#define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50)
-#define TIM_AF_RINGX_CTL1_CLK_10NS		(0)
-#define TIM_AF_RINGX_CTL1_CLK_GPIO		(1)
-#define TIM_AF_RINGX_CTL1_CLK_GTI		(2)
-#define TIM_AF_RINGX_CTL1_CLK_PTP		(3)
-
-/* ENUMS */
-
-#define TIM_LF_INT_VEC_NRSPERR_INT (0x0ull)
-#define TIM_LF_INT_VEC_RAS_INT (0x1ull)
-
-#endif /* __OTX2_TIM_HW_H__ */
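The TIM_AF_RINGX_CTL1_* flags above are single-bit tests on the 64-bit CTL1
register value; for example (the register read itself is assumed and not
shown, and the helpers are illustrative):

    #include <stdint.h>

    #define BIT_ULL(nr)                (1ULL << (nr))
    #define TIM_AF_RINGX_CTL1_ENA      BIT_ULL(47)
    #define TIM_AF_RINGX_CTL1_RCF_BUSY BIT_ULL(50)

    /* ctl1 would come from a read of TIM_AF_RINGX_CTL1(ring). */
    static inline int tim_ring_enabled(uint64_t ctl1)
    {
            return (ctl1 & TIM_AF_RINGX_CTL1_ENA) != 0;
    }

    static inline int tim_ring_rcf_busy(uint64_t ctl1)
    {
            return (ctl1 & TIM_AF_RINGX_CTL1_RCF_BUSY) != 0;
    }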
diff --git a/drivers/common/octeontx2/meson.build b/drivers/common/octeontx2/meson.build
deleted file mode 100644
index 223ba5ef51..0000000000
--- a/drivers/common/octeontx2/meson.build
+++ /dev/null
@@ -1,24 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_common.c',
- 'otx2_dev.c',
- 'otx2_irq.c',
- 'otx2_mbox.c',
- 'otx2_sec_idev.c',
-)
-
-deps = ['eal', 'pci', 'ethdev', 'kvargs']
-includes += include_directories(
- '../../common/octeontx2',
- '../../mempool/octeontx2',
- '../../bus/pci',
-)
diff --git a/drivers/common/octeontx2/otx2_common.c b/drivers/common/octeontx2/otx2_common.c
deleted file mode 100644
index d23c50242e..0000000000
--- a/drivers/common/octeontx2/otx2_common.c
+++ /dev/null
@@ -1,216 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_malloc.h>
-#include <rte_log.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_mbox.h"
-
-/**
- * @internal
- * Set default NPA configuration.
- */
-void
-otx2_npa_set_defaults(struct otx2_idev_cfg *idev)
-{
- idev->npa_pf_func = 0;
- rte_atomic16_set(&idev->npa_refcnt, 0);
-}
-
-/**
- * @internal
- * Get intra device config structure.
- */
-struct otx2_idev_cfg *
-otx2_intra_dev_get_cfg(void)
-{
- const char name[] = "octeontx2_intra_device_conf";
- const struct rte_memzone *mz;
- struct otx2_idev_cfg *idev;
-
- mz = rte_memzone_lookup(name);
- if (mz != NULL)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_cfg),
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz != NULL) {
- idev = mz->addr;
- idev->sso_pf_func = 0;
- idev->npa_lf = NULL;
- otx2_npa_set_defaults(idev);
- return idev;
- }
- return NULL;
-}
-
-/**
- * @internal
- * Get SSO PF_FUNC.
- */
-uint16_t
-otx2_sso_pf_func_get(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t sso_pf_func;
-
- sso_pf_func = 0;
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL)
- sso_pf_func = idev->sso_pf_func;
-
- return sso_pf_func;
-}
-
-/**
- * @internal
- * Set SSO PF_FUNC.
- */
-void
-otx2_sso_pf_func_set(uint16_t sso_pf_func)
-{
- struct otx2_idev_cfg *idev;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL) {
- idev->sso_pf_func = sso_pf_func;
- rte_smp_wmb();
- }
-}
-
-/**
- * @internal
- * Get NPA PF_FUNC.
- */
-uint16_t
-otx2_npa_pf_func_get(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t npa_pf_func;
-
- npa_pf_func = 0;
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL)
- npa_pf_func = idev->npa_pf_func;
-
- return npa_pf_func;
-}
-
-/**
- * @internal
- * Get NPA LF object.
- */
-struct otx2_npa_lf *
-otx2_npa_lf_obj_get(void)
-{
- struct otx2_idev_cfg *idev;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev != NULL && rte_atomic16_read(&idev->npa_refcnt))
- return idev->npa_lf;
-
- return NULL;
-}
-
-/**
- * @internal
- * Is NPA LF active for the given device?
- */
-int
-otx2_npa_lf_active(void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
-
- /* Check if npalf is actively used on this dev */
- idev = otx2_intra_dev_get_cfg();
- if (!idev || !idev->npa_lf || idev->npa_lf->mbox != dev->mbox)
- return 0;
-
- return rte_atomic16_read(&idev->npa_refcnt);
-}
-
-/**
- * @internal
- * Get a reference only to an existing NPA LF object.
- */
-int otx2_npa_lf_obj_ref(void)
-{
- struct otx2_idev_cfg *idev;
- uint16_t cnt;
- int rc;
-
- idev = otx2_intra_dev_get_cfg();
-
-	/* No intra-device config, so a reference cannot be taken */
- if (idev == NULL)
- return -EINVAL;
-
-
- /* Get ref only if > 0 */
- cnt = rte_atomic16_read(&idev->npa_refcnt);
- while (cnt != 0) {
- rc = rte_atomic16_cmpset(&idev->npa_refcnt_u16, cnt, cnt + 1);
- if (rc)
- break;
-
- cnt = rte_atomic16_read(&idev->npa_refcnt);
- }
-
- return cnt ? 0 : -EINVAL;
-}
-
-static int
-parse_npa_lock_mask(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint64_t val;
-
- val = strtoull(value, NULL, 16);
-
- *(uint64_t *)extra_args = val;
-
- return 0;
-}
-
-/**
- * @internal
- * Parse common device arguments.
- */
-void otx2_parse_common_devargs(struct rte_kvargs *kvlist)
-{
- struct otx2_idev_cfg *idev;
- uint64_t npa_lock_mask = 0;
-
- idev = otx2_intra_dev_get_cfg();
-
- if (idev == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_NPA_LOCK_MASK,
- &parse_npa_lock_mask, &npa_lock_mask);
-
- idev->npa_lock_mask = npa_lock_mask;
-}
-
-RTE_LOG_REGISTER(otx2_logtype_base, pmd.octeontx2.base, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_mbox, pmd.octeontx2.mbox, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_npa, pmd.mempool.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_nix, pmd.net.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_npc, pmd.net.octeontx2.flow, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_tm, pmd.net.octeontx2.tm, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_sso, pmd.event.octeontx2, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_tim, pmd.event.octeontx2.timer, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_dpi, pmd.raw.octeontx2.dpi, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_ep, pmd.raw.octeontx2.ep, NOTICE);
-RTE_LOG_REGISTER(otx2_logtype_ree, pmd.regex.octeontx2, NOTICE);
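otx2_npa_lf_obj_ref() above takes a reference only while the count is already
non-zero, retrying the compare-and-set on contention. The same pattern in
isolation (a sketch: the driver's union aliasing of npa_refcnt/npa_refcnt_u16
is replaced by a plain uint16_t here):

    #include <errno.h>
    #include <stdint.h>
    #include <rte_atomic.h>

    static int
    take_ref_if_live(volatile uint16_t *refcnt)
    {
            uint16_t cnt = *refcnt;

            while (cnt != 0) {
                    /* Succeeds only if the count did not change under us. */
                    if (rte_atomic16_cmpset(refcnt, cnt, cnt + 1))
                            return 0;
                    cnt = *refcnt;
            }
            return -EINVAL; /* count hit zero; the object is gone */
    }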
diff --git a/drivers/common/octeontx2/otx2_common.h b/drivers/common/octeontx2/otx2_common.h
deleted file mode 100644
index cd52e098e6..0000000000
--- a/drivers/common/octeontx2/otx2_common.h
+++ /dev/null
@@ -1,179 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_COMMON_H_
-#define _OTX2_COMMON_H_
-
-#include <rte_atomic.h>
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_kvargs.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_io.h>
-
-#include "hw/otx2_rvu.h"
-#include "hw/otx2_nix.h"
-#include "hw/otx2_npc.h"
-#include "hw/otx2_npa.h"
-#include "hw/otx2_sdp.h"
-#include "hw/otx2_sso.h"
-#include "hw/otx2_ssow.h"
-#include "hw/otx2_tim.h"
-#include "hw/otx2_ree.h"
-
-/* Alignment */
-#define OTX2_ALIGN 128
-
-/* Bits manipulation */
-#ifndef BIT_ULL
-#define BIT_ULL(nr) (1ULL << (nr))
-#endif
-#ifndef BIT
-#define BIT(nr) (1UL << (nr))
-#endif
-
-#ifndef BITS_PER_LONG
-#define BITS_PER_LONG (__SIZEOF_LONG__ * 8)
-#endif
-#ifndef BITS_PER_LONG_LONG
-#define BITS_PER_LONG_LONG (__SIZEOF_LONG_LONG__ * 8)
-#endif
-
-#ifndef GENMASK
-#define GENMASK(h, l) \
- (((~0UL) << (l)) & (~0UL >> (BITS_PER_LONG - 1 - (h))))
-#endif
-#ifndef GENMASK_ULL
-#define GENMASK_ULL(h, l) \
- (((~0ULL) - (1ULL << (l)) + 1) & \
- (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h))))
-#endif
-
-#define OTX2_NPA_LOCK_MASK "npa_lock_mask"
-
-/* Intra device related functions */
-struct otx2_npa_lf;
-struct otx2_idev_cfg {
- uint16_t sso_pf_func;
- uint16_t npa_pf_func;
- struct otx2_npa_lf *npa_lf;
- RTE_STD_C11
- union {
- rte_atomic16_t npa_refcnt;
- uint16_t npa_refcnt_u16;
- };
- uint64_t npa_lock_mask;
-};
-
-__rte_internal
-struct otx2_idev_cfg *otx2_intra_dev_get_cfg(void);
-__rte_internal
-void otx2_sso_pf_func_set(uint16_t sso_pf_func);
-__rte_internal
-uint16_t otx2_sso_pf_func_get(void);
-__rte_internal
-uint16_t otx2_npa_pf_func_get(void);
-__rte_internal
-struct otx2_npa_lf *otx2_npa_lf_obj_get(void);
-__rte_internal
-void otx2_npa_set_defaults(struct otx2_idev_cfg *idev);
-__rte_internal
-int otx2_npa_lf_active(void *dev);
-__rte_internal
-int otx2_npa_lf_obj_ref(void);
-__rte_internal
-void otx2_parse_common_devargs(struct rte_kvargs *kvlist);
-
-/* Log */
-extern int otx2_logtype_base;
-extern int otx2_logtype_mbox;
-extern int otx2_logtype_npa;
-extern int otx2_logtype_nix;
-extern int otx2_logtype_sso;
-extern int otx2_logtype_npc;
-extern int otx2_logtype_tm;
-extern int otx2_logtype_tim;
-extern int otx2_logtype_dpi;
-extern int otx2_logtype_ep;
-extern int otx2_logtype_ree;
-
-#define otx2_err(fmt, args...) \
- RTE_LOG(ERR, PMD, "%s():%u " fmt "\n", \
- __func__, __LINE__, ## args)
-
-#define otx2_info(fmt, args...) \
- RTE_LOG(INFO, PMD, fmt"\n", ## args)
-
-#define otx2_dbg(subsystem, fmt, args...) \
- rte_log(RTE_LOG_DEBUG, otx2_logtype_ ## subsystem, \
- "[%s] %s():%u " fmt "\n", \
- #subsystem, __func__, __LINE__, ##args)
-
-#define otx2_base_dbg(fmt, ...) otx2_dbg(base, fmt, ##__VA_ARGS__)
-#define otx2_mbox_dbg(fmt, ...) otx2_dbg(mbox, fmt, ##__VA_ARGS__)
-#define otx2_npa_dbg(fmt, ...) otx2_dbg(npa, fmt, ##__VA_ARGS__)
-#define otx2_nix_dbg(fmt, ...) otx2_dbg(nix, fmt, ##__VA_ARGS__)
-#define otx2_sso_dbg(fmt, ...) otx2_dbg(sso, fmt, ##__VA_ARGS__)
-#define otx2_npc_dbg(fmt, ...) otx2_dbg(npc, fmt, ##__VA_ARGS__)
-#define otx2_tm_dbg(fmt, ...) otx2_dbg(tm, fmt, ##__VA_ARGS__)
-#define otx2_tim_dbg(fmt, ...) otx2_dbg(tim, fmt, ##__VA_ARGS__)
-#define otx2_dpi_dbg(fmt, ...) otx2_dbg(dpi, fmt, ##__VA_ARGS__)
-#define otx2_sdp_dbg(fmt, ...) otx2_dbg(ep, fmt, ##__VA_ARGS__)
-#define otx2_ree_dbg(fmt, ...) otx2_dbg(ree, fmt, ##__VA_ARGS__)
-
-/* PCI IDs */
-#define PCI_VENDOR_ID_CAVIUM 0x177D
-#define PCI_DEVID_OCTEONTX2_RVU_PF 0xA063
-#define PCI_DEVID_OCTEONTX2_RVU_VF 0xA064
-#define PCI_DEVID_OCTEONTX2_RVU_AF 0xA065
-#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF 0xA0F9
-#define PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF 0xA0FA
-#define PCI_DEVID_OCTEONTX2_RVU_NPA_PF 0xA0FB
-#define PCI_DEVID_OCTEONTX2_RVU_NPA_VF 0xA0FC
-#define PCI_DEVID_OCTEONTX2_RVU_CPT_PF 0xA0FD
-#define PCI_DEVID_OCTEONTX2_RVU_CPT_VF 0xA0FE
-#define PCI_DEVID_OCTEONTX2_RVU_AF_VF 0xA0f8
-#define PCI_DEVID_OCTEONTX2_DPI_VF 0xA081
-#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */
-/* OCTEON TX2 98xx EP mode */
-#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103
-#define PCI_DEVID_OCTEONTX2_EP_RAW_VF 0xB204 /* OCTEON TX2 EP mode */
-#define PCI_DEVID_OCTEONTX2_RVU_SDP_PF 0xA0f6
-#define PCI_DEVID_OCTEONTX2_RVU_SDP_VF 0xA0f7
-#define PCI_DEVID_OCTEONTX2_RVU_REE_PF 0xA0f4
-#define PCI_DEVID_OCTEONTX2_RVU_REE_VF 0xA0f5
-
-/*
- * REVID for RVU PCIe devices.
- * Bits 1..0: minor pass
- * Bits 3..2: major pass
- * Bits 7..4: midr id, 0:96xx, 1:95xx, 2:loki, f:unknown
- */
-
-#define RVU_PCI_REV_MIDR_ID(rev_id) (rev_id >> 4)
-#define RVU_PCI_REV_MAJOR(rev_id) ((rev_id >> 2) & 0x3)
-#define RVU_PCI_REV_MINOR(rev_id) (rev_id & 0x3)
-
-#define RVU_PCI_CN96XX_MIDR_ID 0x0
-#define RVU_PCI_CNF95XX_MIDR_ID 0x1
-
-/* PCI Config offsets */
-#define RVU_PCI_REVISION_ID 0x08
-
-/* IO Access */
-#define otx2_read64(addr) rte_read64_relaxed((void *)(addr))
-#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr))
-
-#if defined(RTE_ARCH_ARM64)
-#include "otx2_io_arm64.h"
-#else
-#include "otx2_io_generic.h"
-#endif
-
-/* Fastpath lookup */
-#define OTX2_NIX_FASTPATH_LOOKUP_MEM "otx2_nix_fastpath_lookup_mem"
-#define OTX2_NIX_SA_TBL_START (4096*4 + 69632*2)
-
-#endif /* _OTX2_COMMON_H_ */
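
The REVID comment above fully determines how a raw revision byte decodes;
a worked example (value illustrative) using the macros it documents:

  uint8_t rev_id = 0x04;
  uint8_t midr  = RVU_PCI_REV_MIDR_ID(rev_id); /* 0x04 >> 4 = 0x0 (96xx) */
  uint8_t major = RVU_PCI_REV_MAJOR(rev_id);   /* (0x04 >> 2) & 0x3 = 0x1 */
  uint8_t minor = RVU_PCI_REV_MINOR(rev_id);   /* 0x04 & 0x3 = 0x0 */

These are the building blocks for the pass checks (otx2_dev_is_96xx_Ax()
and friends) in otx2_dev.h below.
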
diff --git a/drivers/common/octeontx2/otx2_dev.c b/drivers/common/octeontx2/otx2_dev.c
deleted file mode 100644
index 08dca87848..0000000000
--- a/drivers/common/octeontx2/otx2_dev.c
+++ /dev/null
@@ -1,1074 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <fcntl.h>
-#include <inttypes.h>
-#include <sys/mman.h>
-#include <unistd.h>
-
-#include <rte_alarm.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_memcpy.h>
-#include <rte_eal_paging.h>
-
-#include "otx2_dev.h"
-#include "otx2_mbox.h"
-
-#define RVU_MAX_VF 64 /* RVU_PF_VFPF_MBOX_INT(0..1) */
-#define RVU_MAX_INT_RETRY 3
-
-/* PF/VF message handling timer */
-#define VF_PF_MBOX_TIMER_MS (20 * 1000)
-
-static void *
-mbox_mem_map(off_t off, size_t size)
-{
- void *va = MAP_FAILED;
- int mem_fd;
-
- if (size <= 0)
- goto error;
-
- mem_fd = open("/dev/mem", O_RDWR);
- if (mem_fd < 0)
- goto error;
-
- va = rte_mem_map(NULL, size, RTE_PROT_READ | RTE_PROT_WRITE,
- RTE_MAP_SHARED, mem_fd, off);
- close(mem_fd);
-
- if (va == NULL)
- otx2_err("Failed to mmap sz=0x%zx, fd=%d, off=%jd",
- size, mem_fd, (intmax_t)off);
-error:
- return va;
-}
-
-static void
-mbox_mem_unmap(void *va, size_t size)
-{
- if (va)
- rte_mem_unmap(va, size);
-}
-
-static int
-pf_af_sync_msg(struct otx2_dev *dev, struct mbox_msghdr **rsp)
-{
-	uint32_t timeout = 0, sleep = 1;
-	struct otx2_mbox *mbox = dev->mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- volatile uint64_t int_status;
- struct mbox_msghdr *msghdr;
- uint64_t off;
- int rc = 0;
-
-	/* Disable PF interrupts; we are in timer-interrupt context */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- /* Send message */
- otx2_mbox_msg_send(mbox, 0);
-
- do {
- rte_delay_ms(sleep);
- timeout += sleep;
- if (timeout >= MBOX_RSP_TIMEOUT) {
- otx2_err("Message timeout: %dms", MBOX_RSP_TIMEOUT);
- rc = -EIO;
- break;
- }
- int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
- } while ((int_status & 0x1) != 0x1);
-
- /* Clear */
- otx2_write64(int_status, dev->bar2 + RVU_PF_INT);
-
- /* Enable interrupts */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- if (rc == 0) {
- /* Get message */
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + off);
- if (rsp)
- *rsp = msghdr;
- rc = msghdr->rc;
- }
-
- return rc;
-}
-
-static int
-af_pf_wait_msg(struct otx2_dev *dev, uint16_t vf, int num_msg)
-{
-	uint32_t timeout = 0, sleep = 1;
-	struct otx2_mbox *mbox = dev->mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- volatile uint64_t int_status;
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- struct mbox_msghdr *rsp;
- uint64_t offset;
- size_t size;
- int i;
-
-	/* Disable PF interrupts; we are in timer-interrupt context */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- /* Send message */
- otx2_mbox_msg_send(mbox, 0);
-
- do {
- rte_delay_ms(sleep);
- timeout++;
- if (timeout >= MBOX_RSP_TIMEOUT) {
- otx2_err("Routed messages %d timeout: %dms",
- num_msg, MBOX_RSP_TIMEOUT);
- break;
- }
- int_status = otx2_read64(dev->bar2 + RVU_PF_INT);
- } while ((int_status & 0x1) != 0x1);
-
- /* Clear */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
-
- /* Enable interrupts */
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- rte_spinlock_lock(&mdev->mbox_lock);
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs != num_msg)
- otx2_err("Routed messages: %d received: %d", num_msg,
- req_hdr->num_msgs);
-
- /* Get messages from mbox */
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- size = mbox->rx_start + msg->next_msgoff - offset;
-
- /* Reserve PF/VF mbox message */
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- rsp = otx2_mbox_alloc_msg(&dev->mbox_vfpf, vf, size);
- otx2_mbox_rsp_init(msg->id, rsp);
-
- /* Copy message from AF<->PF mbox to PF<->VF mbox */
- otx2_mbox_memcpy((uint8_t *)rsp + sizeof(struct mbox_msghdr),
- (uint8_t *)msg + sizeof(struct mbox_msghdr),
- size - sizeof(struct mbox_msghdr));
-
- /* Set status and sender pf_func data */
- rsp->rc = msg->rc;
- rsp->pcifunc = msg->pcifunc;
-
-		/* Whenever a PF comes up, AF sends the link status to it, but
-		 * when a VF comes up no such event is sent to the respective
-		 * VF. Use the MBOX_MSG_NIX_LF_START_RX response from AF for
-		 * this purpose and forward the PF's link status to the VF.
-		 */
- if (msg->id == MBOX_MSG_NIX_LF_START_RX) {
- /* Send link status to VF */
- struct cgx_link_user_info linfo;
- struct mbox_msghdr *vf_msg;
- size_t sz;
-
- /* Get the link status */
- if (dev->ops && dev->ops->link_status_get)
- dev->ops->link_status_get(dev, &linfo);
-
- sz = RTE_ALIGN(otx2_mbox_id2size(
- MBOX_MSG_CGX_LINK_EVENT), MBOX_MSG_ALIGN);
- /* Prepare the message to be sent */
- vf_msg = otx2_mbox_alloc_msg(&dev->mbox_vfpf_up, vf,
- sz);
- otx2_mbox_req_init(MBOX_MSG_CGX_LINK_EVENT, vf_msg);
- memcpy((uint8_t *)vf_msg + sizeof(struct mbox_msghdr),
- &linfo, sizeof(struct cgx_link_user_info));
-
- vf_msg->rc = msg->rc;
- vf_msg->pcifunc = msg->pcifunc;
- /* Send to VF */
- otx2_mbox_msg_send(&dev->mbox_vfpf_up, vf);
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return req_hdr->num_msgs;
-}
-
-static int
-vf_pf_process_msgs(struct otx2_dev *dev, uint16_t vf)
-{
-	int offset, routed = 0;
-	struct otx2_mbox *mbox = &dev->mbox_vfpf;
- struct otx2_mbox_dev *mdev = &mbox->dev[vf];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- size_t size;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (!req_hdr->num_msgs)
- return 0;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- size = mbox->rx_start + msg->next_msgoff - offset;
-
- /* RVU_PF_FUNC_S */
- msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- if (msg->id == MBOX_MSG_READY) {
- struct ready_msg_rsp *rsp;
- uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
-
- /* Handle READY message in PF */
- dev->active_vfs[vf / max_bits] |=
- BIT_ULL(vf % max_bits);
- rsp = (struct ready_msg_rsp *)
- otx2_mbox_alloc_msg(mbox, vf, sizeof(*rsp));
- otx2_mbox_rsp_init(msg->id, rsp);
-
- /* PF/VF function ID */
- rsp->hdr.pcifunc = msg->pcifunc;
- rsp->hdr.rc = 0;
- } else {
- struct mbox_msghdr *af_req;
- /* Reserve AF/PF mbox message */
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- af_req = otx2_mbox_alloc_msg(dev->mbox, 0, size);
- otx2_mbox_req_init(msg->id, af_req);
-
- /* Copy message from VF<->PF mbox to PF<->AF mbox */
- otx2_mbox_memcpy((uint8_t *)af_req +
- sizeof(struct mbox_msghdr),
- (uint8_t *)msg + sizeof(struct mbox_msghdr),
- size - sizeof(struct mbox_msghdr));
- af_req->pcifunc = msg->pcifunc;
- routed++;
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
-
- if (routed > 0) {
- otx2_base_dbg("pf:%d routed %d messages from vf:%d to AF",
- dev->pf, routed, vf);
- af_pf_wait_msg(dev, vf, routed);
- otx2_mbox_reset(dev->mbox, 0);
- }
-
- /* Send mbox responses to VF */
- if (mdev->num_msgs) {
- otx2_base_dbg("pf:%d reply %d messages to vf:%d",
- dev->pf, mdev->num_msgs, vf);
- otx2_mbox_msg_send(mbox, vf);
- }
-
- return i;
-}
-
-static int
-vf_pf_process_up_msgs(struct otx2_dev *dev, uint16_t vf)
-{
- struct otx2_mbox *mbox = &dev->mbox_vfpf_up;
- struct otx2_mbox_dev *mdev = &mbox->dev[vf];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int msgs_acked = 0;
- int offset;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return 0;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
-
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- msgs_acked++;
- /* RVU_PF_FUNC_S */
- msg->pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- switch (msg->id) {
- case MBOX_MSG_CGX_LINK_EVENT:
- otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc, otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- break;
- case MBOX_MSG_CGX_PTP_RX_INFO:
- otx2_base_dbg("PF: Msg 0x%x (%s) fn:0x%x (pf:%d,vf:%d)",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc, otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- break;
- default:
- otx2_err("Not handled UP msg 0x%x (%s) func:0x%x",
- msg->id, otx2_mbox_id2name(msg->id),
- msg->pcifunc);
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
- otx2_mbox_reset(mbox, vf);
- mdev->msgs_acked = msgs_acked;
- rte_wmb();
-
- return i;
-}
-
-static void
-otx2_vf_pf_mbox_handle_msg(void *param)
-{
- uint16_t vf, max_vf, max_bits;
- struct otx2_dev *dev = param;
-
-	max_bits = sizeof(dev->intr.bits[0]) * 8;
- max_vf = max_bits * MAX_VFPF_DWORD_BITS;
-
- for (vf = 0; vf < max_vf; vf++) {
- if (dev->intr.bits[vf/max_bits] & BIT_ULL(vf%max_bits)) {
- otx2_base_dbg("Process vf:%d request (pf:%d, vf:%d)",
- vf, dev->pf, dev->vf);
- vf_pf_process_msgs(dev, vf);
- /* UP messages */
- vf_pf_process_up_msgs(dev, vf);
- dev->intr.bits[vf/max_bits] &= ~(BIT_ULL(vf%max_bits));
- }
- }
- dev->timer_set = 0;
-}
-
-static void
-otx2_vf_pf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- bool alarm_set = false;
- uint64_t intr;
- int vfpf;
-
- for (vfpf = 0; vfpf < MAX_VFPF_DWORD_BITS; ++vfpf) {
- intr = otx2_read64(dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
- if (!intr)
- continue;
-
- otx2_base_dbg("vfpf: %d intr: 0x%" PRIx64 " (pf:%d, vf:%d)",
- vfpf, intr, dev->pf, dev->vf);
-
- /* Save and clear intr bits */
- dev->intr.bits[vfpf] |= intr;
- otx2_write64(intr, dev->bar2 + RVU_PF_VFPF_MBOX_INTX(vfpf));
- alarm_set = true;
- }
-
- if (!dev->timer_set && alarm_set) {
- dev->timer_set = 1;
- /* Start timer to handle messages */
- rte_eal_alarm_set(VF_PF_MBOX_TIMER_MS,
- otx2_vf_pf_mbox_handle_msg, dev);
- }
-}
-
-static void
-otx2_process_msgs(struct otx2_dev *dev, struct otx2_mbox *mbox)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int msgs_acked = 0;
- int offset;
- uint16_t i;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- msgs_acked++;
- otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d",
- msg->id, otx2_mbox_id2name(msg->id),
- otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
-
- switch (msg->id) {
-		/* Add message IDs that are handled here */
- case MBOX_MSG_READY:
- /* Get our identity */
- dev->pf_func = msg->pcifunc;
- break;
-
- default:
- if (msg->rc)
- otx2_err("Message (%s) response has err=%d",
- otx2_mbox_id2name(msg->id), msg->rc);
- break;
- }
- offset = mbox->rx_start + msg->next_msgoff;
- }
-
- otx2_mbox_reset(mbox, 0);
-	/* Update acked if someone is waiting for a message */
- mdev->msgs_acked = msgs_acked;
- rte_wmb();
-}
-
-/* Copies the message received from AF and sends it to VF */
-static void
-pf_vf_mbox_send_up_msg(struct otx2_dev *dev, void *rec_msg)
-{
-	uint16_t max_bits = sizeof(dev->active_vfs[0]) * 8;
- struct otx2_mbox *vf_mbox = &dev->mbox_vfpf_up;
- struct msg_req *msg = rec_msg;
- struct mbox_msghdr *vf_msg;
- uint16_t vf;
- size_t size;
-
- size = RTE_ALIGN(otx2_mbox_id2size(msg->hdr.id), MBOX_MSG_ALIGN);
-	/* Send UP message to all VFs */
- for (vf = 0; vf < vf_mbox->ndevs; vf++) {
- /* VF active */
-		if (!(dev->active_vfs[vf / max_bits] & BIT_ULL(vf % max_bits)))
- continue;
-
- otx2_base_dbg("(%s) size: %zx to VF: %d",
- otx2_mbox_id2name(msg->hdr.id), size, vf);
-
- /* Reserve PF/VF mbox message */
- vf_msg = otx2_mbox_alloc_msg(vf_mbox, vf, size);
- if (!vf_msg) {
- otx2_err("Failed to alloc VF%d UP message", vf);
- continue;
- }
- otx2_mbox_req_init(msg->hdr.id, vf_msg);
-
- /*
- * Copy message from AF<->PF UP mbox
- * to PF<->VF UP mbox
- */
- otx2_mbox_memcpy((uint8_t *)vf_msg +
- sizeof(struct mbox_msghdr), (uint8_t *)msg
- + sizeof(struct mbox_msghdr), size -
- sizeof(struct mbox_msghdr));
-
- vf_msg->rc = msg->hdr.rc;
- /* Set PF to be a sender */
- vf_msg->pcifunc = dev->pf_func;
-
- /* Send to VF */
- otx2_mbox_msg_send(vf_mbox, vf);
- }
-}
-
-static int
-otx2_mbox_up_handler_cgx_link_event(struct otx2_dev *dev,
- struct cgx_link_info_msg *msg,
- struct msg_rsp *rsp)
-{
- struct cgx_link_user_info *linfo = &msg->link_info;
-
- otx2_base_dbg("pf:%d/vf:%d NIC Link %s --> 0x%x (%s) from: pf:%d/vf:%d",
- otx2_get_pf(dev->pf_func), otx2_get_vf(dev->pf_func),
- linfo->link_up ? "UP" : "DOWN", msg->hdr.id,
- otx2_mbox_id2name(msg->hdr.id),
- otx2_get_pf(msg->hdr.pcifunc),
- otx2_get_vf(msg->hdr.pcifunc));
-
- /* PF gets link notification from AF */
- if (otx2_get_pf(msg->hdr.pcifunc) == 0) {
- if (dev->ops && dev->ops->link_status_update)
- dev->ops->link_status_update(dev, linfo);
-
- /* Forward the same message as received from AF to VF */
- pf_vf_mbox_send_up_msg(dev, msg);
- } else {
- /* VF gets link up notification */
- if (dev->ops && dev->ops->link_status_update)
- dev->ops->link_status_update(dev, linfo);
- }
-
- rsp->hdr.rc = 0;
- return 0;
-}
-
-static int
-otx2_mbox_up_handler_cgx_ptp_rx_info(struct otx2_dev *dev,
- struct cgx_ptp_rx_info_msg *msg,
- struct msg_rsp *rsp)
-{
- otx2_nix_dbg("pf:%d/vf:%d PTP mode %s --> 0x%x (%s) from: pf:%d/vf:%d",
- otx2_get_pf(dev->pf_func),
- otx2_get_vf(dev->pf_func),
- msg->ptp_en ? "ENABLED" : "DISABLED",
- msg->hdr.id, otx2_mbox_id2name(msg->hdr.id),
- otx2_get_pf(msg->hdr.pcifunc),
- otx2_get_vf(msg->hdr.pcifunc));
-
- /* PF gets PTP notification from AF */
- if (otx2_get_pf(msg->hdr.pcifunc) == 0) {
- if (dev->ops && dev->ops->ptp_info_update)
- dev->ops->ptp_info_update(dev, msg->ptp_en);
-
- /* Forward the same message as received from AF to VF */
- pf_vf_mbox_send_up_msg(dev, msg);
- } else {
- /* VF gets PTP notification */
- if (dev->ops && dev->ops->ptp_info_update)
- dev->ops->ptp_info_update(dev, msg->ptp_en);
- }
-
- rsp->hdr.rc = 0;
- return 0;
-}
-
-static int
-mbox_process_msgs_up(struct otx2_dev *dev, struct mbox_msghdr *req)
-{
-	/* Check if valid; if not, reply with an invalid msg */
- if (req->sig != OTX2_MBOX_REQ_SIG)
- return -EIO;
-
- switch (req->id) {
-#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
- case _id: { \
- struct _rsp_type *rsp; \
- int err; \
- \
- rsp = (struct _rsp_type *)otx2_mbox_alloc_msg( \
- &dev->mbox_up, 0, \
- sizeof(struct _rsp_type)); \
- if (!rsp) \
- return -ENOMEM; \
- \
- rsp->hdr.id = _id; \
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG; \
- rsp->hdr.pcifunc = dev->pf_func; \
- rsp->hdr.rc = 0; \
- \
- err = otx2_mbox_up_handler_ ## _fn_name( \
- dev, (struct _req_type *)req, rsp); \
- return err; \
- }
-MBOX_UP_CGX_MESSAGES
-#undef M
-
-	default:
- otx2_reply_invalid_msg(&dev->mbox_up, 0, 0, req->id);
- }
-
- return -ENODEV;
-}
-
-static void
-otx2_process_msgs_up(struct otx2_dev *dev, struct otx2_mbox *mbox)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct mbox_hdr *req_hdr;
- struct mbox_msghdr *msg;
- int i, err, offset;
-
- req_hdr = (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
- if (req_hdr->num_msgs == 0)
- return;
-
- offset = mbox->rx_start + RTE_ALIGN(sizeof(*req_hdr), MBOX_MSG_ALIGN);
- for (i = 0; i < req_hdr->num_msgs; i++) {
- msg = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
-
- otx2_base_dbg("Message 0x%x (%s) pf:%d/vf:%d",
- msg->id, otx2_mbox_id2name(msg->id),
- otx2_get_pf(msg->pcifunc),
- otx2_get_vf(msg->pcifunc));
- err = mbox_process_msgs_up(dev, msg);
- if (err)
- otx2_err("Error %d handling 0x%x (%s)",
- err, msg->id, otx2_mbox_id2name(msg->id));
- offset = mbox->rx_start + msg->next_msgoff;
- }
- /* Send mbox responses */
- if (mdev->num_msgs) {
- otx2_base_dbg("Reply num_msgs:%d", mdev->num_msgs);
- otx2_mbox_msg_send(mbox, 0);
- }
-}
-
-static void
-otx2_pf_vf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- uint64_t intr;
-
- intr = otx2_read64(dev->bar2 + RVU_VF_INT);
- if (intr == 0)
- otx2_base_dbg("Proceeding to check mbox UP messages if any");
-
- otx2_write64(intr, dev->bar2 + RVU_VF_INT);
- otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
-
- /* First process all configuration messages */
- otx2_process_msgs(dev, dev->mbox);
-
- /* Process Uplink messages */
- otx2_process_msgs_up(dev, &dev->mbox_up);
-}
-
-static void
-otx2_af_pf_mbox_irq(void *param)
-{
- struct otx2_dev *dev = param;
- uint64_t intr;
-
- intr = otx2_read64(dev->bar2 + RVU_PF_INT);
- if (intr == 0)
- otx2_base_dbg("Proceeding to check mbox UP messages if any");
-
- otx2_write64(intr, dev->bar2 + RVU_PF_INT);
- otx2_base_dbg("Irq 0x%" PRIx64 "(pf:%d,vf:%d)", intr, dev->pf, dev->vf);
-
- /* First process all configuration messages */
- otx2_process_msgs(dev, dev->mbox);
-
- /* Process Uplink messages */
- otx2_process_msgs_up(dev, &dev->mbox_up);
-}
-
-static int
-mbox_register_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i, rc;
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- dev->timer_set = 0;
-
- /* MBOX interrupt for VF(0...63) <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX0);
-
- if (rc) {
- otx2_err("Fail to register PF(VF0-63) mbox irq");
- return rc;
- }
- /* MBOX interrupt for VF(64...128) <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX1);
-
- if (rc) {
- otx2_err("Fail to register PF(VF64-128) mbox irq");
- return rc;
- }
- /* MBOX interrupt AF <-> PF */
- rc = otx2_register_irq(intr_handle, otx2_af_pf_mbox_irq,
- dev, RVU_PF_INT_VEC_AFPF_MBOX);
- if (rc) {
- otx2_err("Fail to register AF<->PF mbox irq");
- return rc;
- }
-
- /* HW enable intr */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1SX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT);
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1S);
-
- return rc;
-}
-
-static int
-mbox_register_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int rc;
-
- /* Clear irq */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
-
- /* MBOX interrupt PF <-> VF */
- rc = otx2_register_irq(intr_handle, otx2_pf_vf_mbox_irq,
- dev, RVU_VF_INT_VEC_MBOX);
- if (rc) {
- otx2_err("Fail to register PF<->VF mbox irq");
- return rc;
- }
-
- /* HW enable intr */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT);
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1S);
-
- return rc;
-}
-
-static int
-mbox_register_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- return mbox_register_vf_irq(pci_dev, dev);
- else
- return mbox_register_pf_irq(pci_dev, dev);
-}
-
-static void
-mbox_unregister_pf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i;
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i)
- otx2_write64(~0ull, dev->bar2 +
- RVU_PF_VFPF_MBOX_INT_ENA_W1CX(i));
-
- otx2_write64(~0ull, dev->bar2 + RVU_PF_INT_ENA_W1C);
-
- dev->timer_set = 0;
-
- rte_eal_alarm_cancel(otx2_vf_pf_mbox_handle_msg, dev);
-
-	/* Unregister the interrupt handler for each vector */
- /* MBOX interrupt for VF(0...63) <-> PF */
- otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX0);
-
- /* MBOX interrupt for VF(64...128) <-> PF */
- otx2_unregister_irq(intr_handle, otx2_vf_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_VFPF_MBOX1);
-
- /* MBOX interrupt AF <-> PF */
- otx2_unregister_irq(intr_handle, otx2_af_pf_mbox_irq, dev,
- RVU_PF_INT_VEC_AFPF_MBOX);
-}
-
-static void
-mbox_unregister_vf_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-
- /* Clear irq */
- otx2_write64(~0ull, dev->bar2 + RVU_VF_INT_ENA_W1C);
-
- /* Unregister the interrupt handler */
- otx2_unregister_irq(intr_handle, otx2_pf_vf_mbox_irq, dev,
- RVU_VF_INT_VEC_MBOX);
-}
-
-static void
-mbox_unregister_irq(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- mbox_unregister_vf_irq(pci_dev, dev);
- else
- mbox_unregister_pf_irq(pci_dev, dev);
-}
-
-static int
-vf_flr_send_msg(struct otx2_dev *dev, uint16_t vf)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct msg_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_vf_flr(mbox);
- /* Overwrite pcifunc to indicate VF */
- req->hdr.pcifunc = otx2_pfvf_func(dev->pf, vf);
-
- /* Sync message in interrupt context */
- rc = pf_af_sync_msg(dev, NULL);
- if (rc)
- otx2_err("Failed to send VF FLR mbox msg, rc=%d", rc);
-
- return rc;
-}
-
-static void
-otx2_pf_vf_flr_irq(void *param)
-{
- struct otx2_dev *dev = (struct otx2_dev *)param;
- uint16_t max_vf = 64, vf;
- uintptr_t bar2;
- uint64_t intr;
- int i;
-
- max_vf = (dev->maxvf > 0) ? dev->maxvf : 64;
- bar2 = dev->bar2;
-
- otx2_base_dbg("FLR VF interrupt: max_vf: %d", max_vf);
-
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
- intr = otx2_read64(bar2 + RVU_PF_VFFLR_INTX(i));
- if (!intr)
- continue;
-
- for (vf = 0; vf < max_vf; vf++) {
- if (!(intr & (1ULL << vf)))
- continue;
-
- otx2_base_dbg("FLR: i :%d intr: 0x%" PRIx64 ", vf-%d",
- i, intr, (64 * i + vf));
- /* Clear interrupt */
- otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFFLR_INTX(i));
- /* Disable the interrupt */
- otx2_write64(BIT_ULL(vf),
- bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
- /* Inform AF about VF reset */
- vf_flr_send_msg(dev, vf);
-
- /* Signal FLR finish */
- otx2_write64(BIT_ULL(vf), bar2 + RVU_PF_VFTRPENDX(i));
- /* Enable interrupt */
- otx2_write64(~0ull,
- bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
- }
- }
-}
-
-static int
-vf_flr_unregister_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- int i;
-
- otx2_base_dbg("Unregister VF FLR interrupts for %s", pci_dev->name);
-
- /* HW clear irq */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1CX(i));
-
- otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR0);
-
- otx2_unregister_irq(intr_handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR1);
-
- return 0;
-}
-
-static int
-vf_flr_register_irqs(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int i, rc;
-
- otx2_base_dbg("Register VF FLR interrupts for %s", pci_dev->name);
-
- rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR0);
- if (rc)
- otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR0 rc=%d", rc);
-
- rc = otx2_register_irq(handle, otx2_pf_vf_flr_irq, dev,
- RVU_PF_INT_VEC_VFFLR1);
- if (rc)
- otx2_err("Failed to init RVU_PF_INT_VEC_VFFLR1 rc=%d", rc);
-
- /* Enable HW interrupt */
- for (i = 0; i < MAX_VFPF_DWORD_BITS; ++i) {
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INTX(i));
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFTRPENDX(i));
- otx2_write64(~0ull, dev->bar2 + RVU_PF_VFFLR_INT_ENA_W1SX(i));
- }
- return 0;
-}
-
-/**
- * @internal
- * Get number of active VFs for the given PF device.
- */
-int
-otx2_dev_active_vfs(void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- int i, count = 0;
-
- for (i = 0; i < MAX_VFPF_DWORD_BITS; i++)
- count += __builtin_popcount(dev->active_vfs[i]);
-
- return count;
-}
-
-static void
-otx2_update_vf_hwcap(struct rte_pci_device *pci_dev, struct otx2_dev *dev)
-{
- switch (pci_dev->id.device_id) {
- case PCI_DEVID_OCTEONTX2_RVU_PF:
- break;
- case PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_VF:
- case PCI_DEVID_OCTEONTX2_RVU_NPA_VF:
- case PCI_DEVID_OCTEONTX2_RVU_CPT_VF:
- case PCI_DEVID_OCTEONTX2_RVU_AF_VF:
- case PCI_DEVID_OCTEONTX2_RVU_VF:
- case PCI_DEVID_OCTEONTX2_RVU_SDP_VF:
- dev->hwcap |= OTX2_HWCAP_F_VF;
- break;
- }
-}
-
-/**
- * @internal
- * Initialize the otx2 device
- */
-int
-otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- int up_direction = MBOX_DIR_PFAF_UP;
- int rc, direction = MBOX_DIR_PFAF;
- uint64_t intr_offset = RVU_PF_INT;
- struct otx2_dev *dev = otx2_dev;
- uintptr_t bar2, bar4;
- uint64_t bar4_addr;
- void *hwbase;
-
- bar2 = (uintptr_t)pci_dev->mem_resource[2].addr;
- bar4 = (uintptr_t)pci_dev->mem_resource[4].addr;
-
- if (bar2 == 0 || bar4 == 0) {
- otx2_err("Failed to get pci bars");
- rc = -ENODEV;
- goto error;
- }
-
- dev->node = pci_dev->device.numa_node;
- dev->maxvf = pci_dev->max_vfs;
- dev->bar2 = bar2;
- dev->bar4 = bar4;
-
- otx2_update_vf_hwcap(pci_dev, dev);
-
- if (otx2_dev_is_vf(dev)) {
- direction = MBOX_DIR_VFPF;
- up_direction = MBOX_DIR_VFPF_UP;
- intr_offset = RVU_VF_INT;
- }
-
- /* Initialize the local mbox */
- rc = otx2_mbox_init(&dev->mbox_local, bar4, bar2, direction, 1,
- intr_offset);
- if (rc)
- goto error;
- dev->mbox = &dev->mbox_local;
-
- rc = otx2_mbox_init(&dev->mbox_up, bar4, bar2, up_direction, 1,
- intr_offset);
- if (rc)
- goto error;
-
- /* Register mbox interrupts */
- rc = mbox_register_irq(pci_dev, dev);
- if (rc)
- goto mbox_fini;
-
- /* Check the readiness of PF/VF */
- rc = otx2_send_ready_msg(dev->mbox, &dev->pf_func);
- if (rc)
- goto mbox_unregister;
-
- dev->pf = otx2_get_pf(dev->pf_func);
- dev->vf = otx2_get_vf(dev->pf_func);
- memset(&dev->active_vfs, 0, sizeof(dev->active_vfs));
-
- /* Found VF devices in a PF device */
- if (pci_dev->max_vfs > 0) {
-		/* Remap mbox area for all VFs */
- bar4_addr = otx2_read64(bar2 + RVU_PF_VF_BAR4_ADDR);
- if (bar4_addr == 0) {
- rc = -ENODEV;
- goto mbox_fini;
- }
-
- hwbase = mbox_mem_map(bar4_addr, MBOX_SIZE * pci_dev->max_vfs);
- if (hwbase == MAP_FAILED) {
- rc = -ENOMEM;
- goto mbox_fini;
- }
- /* Init mbox object */
- rc = otx2_mbox_init(&dev->mbox_vfpf, (uintptr_t)hwbase,
- bar2, MBOX_DIR_PFVF, pci_dev->max_vfs,
- intr_offset);
- if (rc)
- goto iounmap;
-
- /* PF -> VF UP messages */
- rc = otx2_mbox_init(&dev->mbox_vfpf_up, (uintptr_t)hwbase,
- bar2, MBOX_DIR_PFVF_UP, pci_dev->max_vfs,
- intr_offset);
- if (rc)
- goto mbox_fini;
- }
-
- /* Register VF-FLR irq handlers */
- if (otx2_dev_is_pf(dev)) {
- rc = vf_flr_register_irqs(pci_dev, dev);
- if (rc)
- goto iounmap;
- }
- dev->mbox_active = 1;
- return rc;
-
-iounmap:
- mbox_mem_unmap(hwbase, MBOX_SIZE * pci_dev->max_vfs);
-mbox_unregister:
- mbox_unregister_irq(pci_dev, dev);
-mbox_fini:
- otx2_mbox_fini(dev->mbox);
- otx2_mbox_fini(&dev->mbox_up);
-error:
- return rc;
-}
-
-/**
- * @internal
- * Finalize the otx2 device
- */
-void
-otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_mbox *mbox;
-
- /* Clear references to this pci dev */
- idev = otx2_intra_dev_get_cfg();
- if (idev->npa_lf && idev->npa_lf->pci_dev == pci_dev)
- idev->npa_lf = NULL;
-
- mbox_unregister_irq(pci_dev, dev);
-
- if (otx2_dev_is_pf(dev))
- vf_flr_unregister_irqs(pci_dev, dev);
- /* Release PF - VF */
- mbox = &dev->mbox_vfpf;
- if (mbox->hwbase && mbox->dev)
- mbox_mem_unmap((void *)mbox->hwbase,
- MBOX_SIZE * pci_dev->max_vfs);
- otx2_mbox_fini(mbox);
- mbox = &dev->mbox_vfpf_up;
- otx2_mbox_fini(mbox);
-
- /* Release PF - AF */
- mbox = dev->mbox;
- otx2_mbox_fini(mbox);
- mbox = &dev->mbox_up;
- otx2_mbox_fini(mbox);
- dev->mbox_active = 0;
-
- /* Disable MSIX vectors */
- otx2_disable_irqs(intr_handle);
-}
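
For context, the entry points of this file are otx2_dev_init() (the inline
wrapper in otx2_dev.h below) and otx2_dev_fini(). A hedged sketch of how a
PMD built on this layer would drive them from its PCI probe/remove path;
struct my_dev and the function names are hypothetical:

  /* Hypothetical driver-private structure; the common device state must
   * come first so the void * casts inside this file resolve correctly.
   */
  struct my_dev {
      struct otx2_dev dev;
      /* ... driver-specific state ... */
  };

  static int
  my_probe(struct rte_pci_device *pci_dev)
  {
      struct my_dev *md;
      int rc;

      md = rte_zmalloc("my_dev", sizeof(*md), OTX2_ALIGN);
      if (md == NULL)
          return -ENOMEM;

      rc = otx2_dev_init(pci_dev, md); /* mbox init + IRQ registration */
      if (rc) {
          rte_free(md);
          return rc;
      }
      return 0;
  }

  static void
  my_remove(struct rte_pci_device *pci_dev, struct my_dev *md)
  {
      otx2_dev_fini(pci_dev, md); /* IRQs, mboxes, VF area unmap */
      rte_free(md);
  }
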
diff --git a/drivers/common/octeontx2/otx2_dev.h b/drivers/common/octeontx2/otx2_dev.h
deleted file mode 100644
index d5b2b0d9af..0000000000
--- a/drivers/common/octeontx2/otx2_dev.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_DEV_H
-#define _OTX2_DEV_H
-
-#include <rte_bus_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-#include "otx2_mbox.h"
-#include "otx2_mempool.h"
-
-/* Common HWCAP flags; allocated from the LSB bits */
-#define OTX2_HWCAP_F_VF BIT_ULL(8) /* VF device */
-#define otx2_dev_is_vf(dev) (dev->hwcap & OTX2_HWCAP_F_VF)
-#define otx2_dev_is_pf(dev) (!(dev->hwcap & OTX2_HWCAP_F_VF))
-#define otx2_dev_is_lbk(dev) ((dev->hwcap & OTX2_HWCAP_F_VF) && \
- (dev->tx_chan_base < 0x700))
-#define otx2_dev_revid(dev) (dev->hwcap & 0xFF)
-#define otx2_dev_is_sdp(dev) (dev->sdp_link)
-
-#define otx2_dev_is_vf_or_sdp(dev) \
- (otx2_dev_is_vf(dev) || otx2_dev_is_sdp(dev))
-
-#define otx2_dev_is_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0))
-#define otx2_dev_is_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_95xx_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1))
-#define otx2_dev_is_95xx_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x1))
-
-#define otx2_dev_is_96xx_A0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-#define otx2_dev_is_96xx_Ax(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_96xx_Cx(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_96xx_C0(dev) \
- ((RVU_PCI_REV_MAJOR(otx2_dev_revid(dev)) == 0x2) && \
- (RVU_PCI_REV_MINOR(otx2_dev_revid(dev)) == 0x0) && \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x0))
-
-#define otx2_dev_is_98xx(dev) \
- (RVU_PCI_REV_MIDR_ID(otx2_dev_revid(dev)) == 0x3)
-
-struct otx2_dev;
-
-/* Link status update callback */
-typedef void (*otx2_link_status_update_t)(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-/* PTP info callback */
-typedef int (*otx2_ptp_info_t)(struct otx2_dev *dev, bool ptp_en);
-/* Link status get callback */
-typedef void (*otx2_link_status_get_t)(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-
-struct otx2_dev_ops {
- otx2_link_status_update_t link_status_update;
- otx2_ptp_info_t ptp_info_update;
- otx2_link_status_get_t link_status_get;
-};
-
-#define OTX2_DEV \
- int node __rte_cache_aligned; \
- uint16_t pf; \
- int16_t vf; \
- uint16_t pf_func; \
- uint8_t mbox_active; \
- bool drv_inited; \
- uint64_t active_vfs[MAX_VFPF_DWORD_BITS]; \
- uintptr_t bar2; \
- uintptr_t bar4; \
- struct otx2_mbox mbox_local; \
- struct otx2_mbox mbox_up; \
- struct otx2_mbox mbox_vfpf; \
- struct otx2_mbox mbox_vfpf_up; \
- otx2_intr_t intr; \
- int timer_set; /* ~0 : no alarm handling */ \
- uint64_t hwcap; \
- struct otx2_npa_lf npalf; \
- struct otx2_mbox *mbox; \
- uint16_t maxvf; \
- const struct otx2_dev_ops *ops
-
-struct otx2_dev {
- OTX2_DEV;
-};
-
-__rte_internal
-int otx2_dev_priv_init(struct rte_pci_device *pci_dev, void *otx2_dev);
-
-/* Common dev init and fini routines */
-
-static __rte_always_inline int
-otx2_dev_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- uint8_t rev_id;
- int rc;
-
- rc = rte_pci_read_config(pci_dev, &rev_id,
- 1, RVU_PCI_REVISION_ID);
- if (rc != 1) {
- otx2_err("Failed to read pci revision id, rc=%d", rc);
- return rc;
- }
-
- dev->hwcap = rev_id;
- return otx2_dev_priv_init(pci_dev, otx2_dev);
-}
-
-__rte_internal
-void otx2_dev_fini(struct rte_pci_device *pci_dev, void *otx2_dev);
-__rte_internal
-int otx2_dev_active_vfs(void *otx2_dev);
-
-#define RVU_PFVF_PF_SHIFT 10
-#define RVU_PFVF_PF_MASK 0x3F
-#define RVU_PFVF_FUNC_SHIFT 0
-#define RVU_PFVF_FUNC_MASK 0x3FF
-
-static inline int
-otx2_get_vf(uint16_t pf_func)
-{
- return (((pf_func >> RVU_PFVF_FUNC_SHIFT) & RVU_PFVF_FUNC_MASK) - 1);
-}
-
-static inline int
-otx2_get_pf(uint16_t pf_func)
-{
- return (pf_func >> RVU_PFVF_PF_SHIFT) & RVU_PFVF_PF_MASK;
-}
-
-static inline int
-otx2_pfvf_func(int pf, int vf)
-{
- return (pf << RVU_PFVF_PF_SHIFT) | ((vf << RVU_PFVF_FUNC_SHIFT) + 1);
-}
-
-static inline int
-otx2_is_afvf(uint16_t pf_func)
-{
- return !(pf_func & ~RVU_PFVF_FUNC_MASK);
-}
-
-#endif /* _OTX2_DEV_H */
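
A worked example of the RVU_PFVF_S packing implemented by the inline
helpers above (values illustrative). Note the +1 bias on the function
field: a function value of 0 denotes the PF itself, so VF0 encodes as 1.

  uint16_t pf_func = otx2_pfvf_func(2, 0); /* (2 << 10) | (0 + 1) = 0x0801 */
  int pf = otx2_get_pf(pf_func);           /* (0x0801 >> 10) & 0x3F = 2 */
  int vf = otx2_get_vf(pf_func);           /* (0x0801 & 0x3FF) - 1  = 0 */
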
diff --git a/drivers/common/octeontx2/otx2_io_arm64.h b/drivers/common/octeontx2/otx2_io_arm64.h
deleted file mode 100644
index 34268e3af3..0000000000
--- a/drivers/common/octeontx2/otx2_io_arm64.h
+++ /dev/null
@@ -1,114 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IO_ARM64_H_
-#define _OTX2_IO_ARM64_H_
-
-#define otx2_load_pair(val0, val1, addr) ({ \
- asm volatile( \
- "ldp %x[x0], %x[x1], [%x[p1]]" \
- :[x0]"=r"(val0), [x1]"=r"(val1) \
- :[p1]"r"(addr) \
- ); })
-
-#define otx2_store_pair(val0, val1, addr) ({ \
- asm volatile( \
- "stp %x[x0], %x[x1], [%x[p1],#0]!" \
- ::[x0]"r"(val0), [x1]"r"(val1), [p1]"r"(addr) \
- ); })
-
-#define otx2_prefetch_store_keep(ptr) ({\
- asm volatile("prfm pstl1keep, [%x0]\n" : : "r" (ptr)); })
-
-#if defined(__ARM_FEATURE_SVE)
-#define __LSE_PREAMBLE " .cpu generic+lse+sve\n"
-#else
-#define __LSE_PREAMBLE " .cpu generic+lse\n"
-#endif
-
-static __rte_always_inline uint64_t
-otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr)
-{
- uint64_t result;
-
- /* Atomic add with no ordering */
- asm volatile (
- __LSE_PREAMBLE
- "ldadd %x[i], %x[r], [%[b]]"
- : [r] "=r" (result), "+m" (*ptr)
- : [i] "r" (incr), [b] "r" (ptr)
- : "memory");
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_atomic64_add_sync(int64_t incr, int64_t *ptr)
-{
- uint64_t result;
-
- /* Atomic add with ordering */
- asm volatile (
- __LSE_PREAMBLE
- "ldadda %x[i], %x[r], [%[b]]"
- : [r] "=r" (result), "+m" (*ptr)
- : [i] "r" (incr), [b] "r" (ptr)
- : "memory");
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_lmt_submit(rte_iova_t io_address)
-{
- uint64_t result;
-
- asm volatile (
- __LSE_PREAMBLE
- "ldeor xzr,%x[rf],[%[rs]]" :
- [rf] "=r"(result): [rs] "r"(io_address));
- return result;
-}
-
-static __rte_always_inline uint64_t
-otx2_lmt_submit_release(rte_iova_t io_address)
-{
- uint64_t result;
-
- asm volatile (
- __LSE_PREAMBLE
- "ldeorl xzr,%x[rf],[%[rs]]" :
- [rf] "=r"(result) : [rs] "r"(io_address));
- return result;
-}
-
-static __rte_always_inline void
-otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext)
-{
- volatile const __uint128_t *src128 = (const __uint128_t *)in;
- volatile __uint128_t *dst128 = (__uint128_t *)out;
- dst128[0] = src128[0];
- dst128[1] = src128[1];
-	/* lmtext receives the following values:
-	 * 1: NIX_SUBDC_EXT needed, i.e. the Tx VLAN case
-	 * 2: NIX_SUBDC_EXT + NIX_SUBDC_MEM, i.e. the timestamp case
-	 */
- if (lmtext) {
- dst128[2] = src128[2];
- if (lmtext > 1)
- dst128[3] = src128[3];
- }
-}
-
-static __rte_always_inline void
-otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
-{
- volatile const __uint128_t *src128 = (const __uint128_t *)in;
- volatile __uint128_t *dst128 = (__uint128_t *)out;
- uint8_t i;
-
- for (i = 0; i < segdw; i++)
- dst128[i] = src128[i];
-}
-
-#undef __LSE_PREAMBLE
-#endif /* _OTX2_IO_ARM64_H_ */
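
The inline assembly above hand-picks Armv8.1 LSE instructions: LDADD is a
relaxed fetch-and-add, LDADDA its acquire variant, and LDEOR/LDEORL are
used for their atomic side effect on the LMT doorbell address. For
comparison only, a sketch of the portable C11 shape of the two adds (the
LMT submits have no portable equivalent):

  #include <stdatomic.h>
  #include <stdint.h>

  static inline uint64_t
  atomic64_add_nosync_c11(int64_t incr, _Atomic int64_t *ptr)
  {
      /* LDADD: no ordering constraint */
      return atomic_fetch_add_explicit(ptr, incr, memory_order_relaxed);
  }

  static inline uint64_t
  atomic64_add_sync_c11(int64_t incr, _Atomic int64_t *ptr)
  {
      /* LDADDA: acquire semantics */
      return atomic_fetch_add_explicit(ptr, incr, memory_order_acquire);
  }
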
diff --git a/drivers/common/octeontx2/otx2_io_generic.h b/drivers/common/octeontx2/otx2_io_generic.h
deleted file mode 100644
index 3436a6c3d5..0000000000
--- a/drivers/common/octeontx2/otx2_io_generic.h
+++ /dev/null
@@ -1,75 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IO_GENERIC_H_
-#define _OTX2_IO_GENERIC_H_
-
-#include <string.h>
-
-#define otx2_load_pair(val0, val1, addr) \
-do { \
- val0 = rte_read64_relaxed((void *)(addr)); \
- val1 = rte_read64_relaxed((uint8_t *)(addr) + 8); \
-} while (0)
-
-#define otx2_store_pair(val0, val1, addr) \
-do { \
- rte_write64_relaxed(val0, (void *)(addr)); \
- rte_write64_relaxed(val1, (((uint8_t *)(addr)) + 8)); \
-} while (0)
-
-#define otx2_prefetch_store_keep(ptr) do {} while (0)
-
-static inline uint64_t
-otx2_atomic64_add_nosync(int64_t incr, int64_t *ptr)
-{
- RTE_SET_USED(ptr);
- RTE_SET_USED(incr);
-
- return 0;
-}
-
-static inline uint64_t
-otx2_atomic64_add_sync(int64_t incr, int64_t *ptr)
-{
- RTE_SET_USED(ptr);
- RTE_SET_USED(incr);
-
- return 0;
-}
-
-static inline int64_t
-otx2_lmt_submit(uint64_t io_address)
-{
- RTE_SET_USED(io_address);
-
- return 0;
-}
-
-static inline int64_t
-otx2_lmt_submit_release(uint64_t io_address)
-{
- RTE_SET_USED(io_address);
-
- return 0;
-}
-
-static __rte_always_inline void
-otx2_lmt_mov(void *out, const void *in, const uint32_t lmtext)
-{
-	/* Copy four words if lmtext = 0,
-	 * six words if lmtext = 1,
-	 * eight words if lmtext = 2
-	 */
- memcpy(out, in, (4 + (2 * lmtext)) * sizeof(uint64_t));
-}
-
-static __rte_always_inline void
-otx2_lmt_mov_seg(void *out, const void *in, const uint16_t segdw)
-{
- RTE_SET_USED(out);
- RTE_SET_USED(in);
- RTE_SET_USED(segdw);
-}
-#endif /* _OTX2_IO_GENERIC_H_ */
diff --git a/drivers/common/octeontx2/otx2_irq.c b/drivers/common/octeontx2/otx2_irq.c
deleted file mode 100644
index 93fc95c0e1..0000000000
--- a/drivers/common/octeontx2/otx2_irq.c
+++ /dev/null
@@ -1,288 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_alarm.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_interrupts.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-
-#ifdef RTE_EAL_VFIO
-
-#include <inttypes.h>
-#include <linux/vfio.h>
-#include <sys/eventfd.h>
-#include <sys/ioctl.h>
-#include <unistd.h>
-
-#define MAX_INTR_VEC_ID RTE_MAX_RXTX_INTR_VEC_ID
-#define MSIX_IRQ_SET_BUF_LEN (sizeof(struct vfio_irq_set) + \
- sizeof(int) * (MAX_INTR_VEC_ID))
-
-static int
-irq_get_info(struct rte_intr_handle *intr_handle)
-{
- struct vfio_irq_info irq = { .argsz = sizeof(irq) };
- int rc, vfio_dev_fd;
-
- irq.index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_IRQ_INFO, &irq);
- if (rc < 0) {
- otx2_err("Failed to get IRQ info rc=%d errno=%d", rc, errno);
- return rc;
- }
-
- otx2_base_dbg("Flags=0x%x index=0x%x count=0x%x max_intr_vec_id=0x%x",
- irq.flags, irq.index, irq.count, MAX_INTR_VEC_ID);
-
- if (irq.count > MAX_INTR_VEC_ID) {
- otx2_err("HW max=%d > MAX_INTR_VEC_ID: %d",
- rte_intr_max_intr_get(intr_handle),
- MAX_INTR_VEC_ID);
- if (rte_intr_max_intr_set(intr_handle, MAX_INTR_VEC_ID))
- return -1;
- } else {
- if (rte_intr_max_intr_set(intr_handle, irq.count))
- return -1;
- }
-
- return 0;
-}
-
-static int
-irq_config(struct rte_intr_handle *intr_handle, unsigned int vec)
-{
- char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- struct vfio_irq_set *irq_set;
- int len, rc, vfio_dev_fd;
- int32_t *fd_ptr;
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("vector=%d greater than max_intr=%d", vec,
- rte_intr_max_intr_get(intr_handle));
- return -EINVAL;
- }
-
- len = sizeof(struct vfio_irq_set) + sizeof(int32_t);
-
- irq_set = (struct vfio_irq_set *)irq_set_buf;
- irq_set->argsz = len;
-
- irq_set->start = vec;
- irq_set->count = 1;
- irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
- VFIO_IRQ_SET_ACTION_TRIGGER;
- irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- /* Use vec fd to set interrupt vectors */
- fd_ptr = (int32_t *)&irq_set->data[0];
- fd_ptr[0] = rte_intr_efds_index_get(intr_handle, vec);
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
- if (rc)
- otx2_err("Failed to set_irqs vector=0x%x rc=%d", vec, rc);
-
- return rc;
-}
-
-static int
-irq_init(struct rte_intr_handle *intr_handle)
-{
- char irq_set_buf[MSIX_IRQ_SET_BUF_LEN];
- struct vfio_irq_set *irq_set;
- int len, rc, vfio_dev_fd;
- int32_t *fd_ptr;
- uint32_t i;
-
- if (rte_intr_max_intr_get(intr_handle) > MAX_INTR_VEC_ID) {
- otx2_err("Max_intr=%d greater than MAX_INTR_VEC_ID=%d",
- rte_intr_max_intr_get(intr_handle),
- MAX_INTR_VEC_ID);
- return -ERANGE;
- }
-
- len = sizeof(struct vfio_irq_set) +
- sizeof(int32_t) * rte_intr_max_intr_get(intr_handle);
-
- irq_set = (struct vfio_irq_set *)irq_set_buf;
- irq_set->argsz = len;
- irq_set->start = 0;
- irq_set->count = rte_intr_max_intr_get(intr_handle);
- irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
- VFIO_IRQ_SET_ACTION_TRIGGER;
- irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
-
- fd_ptr = (int32_t *)&irq_set->data[0];
- for (i = 0; i < irq_set->count; i++)
- fd_ptr[i] = -1;
-
- vfio_dev_fd = rte_intr_dev_fd_get(intr_handle);
- rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
- if (rc)
- otx2_err("Failed to set irqs vector rc=%d", rc);
-
- return rc;
-}
-
-/**
- * @internal
- * Disable IRQ
- */
-int
-otx2_disable_irqs(struct rte_intr_handle *intr_handle)
-{
- /* Clear max_intr to indicate re-init next time */
- if (rte_intr_max_intr_set(intr_handle, 0))
- return -1;
- return rte_intr_disable(intr_handle);
-}
-
-/**
- * @internal
- * Register IRQ
- */
-int
-otx2_register_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec)
-{
- struct rte_intr_handle *tmp_handle;
- uint32_t nb_efd, tmp_nb_efd;
- int rc, fd;
-
-	/* If max_intr is not yet set, read it from VFIO */
- if (rte_intr_max_intr_get(intr_handle) == 0) {
- irq_get_info(intr_handle);
- irq_init(intr_handle);
- }
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("Vector=%d greater than max_intr=%d", vec,
- rte_intr_max_intr_get(intr_handle));
- return -EINVAL;
- }
-
- tmp_handle = intr_handle;
- /* Create new eventfd for interrupt vector */
- fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
- if (fd == -1)
- return -ENODEV;
-
- if (rte_intr_fd_set(tmp_handle, fd))
- return errno;
-
- /* Register vector interrupt callback */
- rc = rte_intr_callback_register(tmp_handle, cb, data);
- if (rc) {
- otx2_err("Failed to register vector:0x%x irq callback.", vec);
- return rc;
- }
-
- rte_intr_efds_index_set(intr_handle, vec, fd);
- nb_efd = (vec > (uint32_t)rte_intr_nb_efd_get(intr_handle)) ?
- vec : (uint32_t)rte_intr_nb_efd_get(intr_handle);
- rte_intr_nb_efd_set(intr_handle, nb_efd);
-
- tmp_nb_efd = rte_intr_nb_efd_get(intr_handle) + 1;
- if (tmp_nb_efd > (uint32_t)rte_intr_max_intr_get(intr_handle))
- rte_intr_max_intr_set(intr_handle, tmp_nb_efd);
-
- otx2_base_dbg("Enable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- rte_intr_nb_efd_get(intr_handle),
- rte_intr_max_intr_get(intr_handle));
-
- /* Enable MSIX vectors to VFIO */
- return irq_config(intr_handle, vec);
-}
-
-/**
- * @internal
- * Unregister IRQ
- */
-void
-otx2_unregister_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec)
-{
- struct rte_intr_handle *tmp_handle;
- uint8_t retries = 5; /* 5 ms */
- int rc, fd;
-
- if (vec > (uint32_t)rte_intr_max_intr_get(intr_handle)) {
- otx2_err("Error unregistering MSI-X interrupts vec:%d > %d",
- vec, rte_intr_max_intr_get(intr_handle));
- return;
- }
-
- tmp_handle = intr_handle;
- fd = rte_intr_efds_index_get(intr_handle, vec);
- if (fd == -1)
- return;
-
- if (rte_intr_fd_set(tmp_handle, fd))
- return;
-
- do {
- /* Un-register callback func from platform lib */
- rc = rte_intr_callback_unregister(tmp_handle, cb, data);
- /* Retry only if -EAGAIN */
- if (rc != -EAGAIN)
- break;
- rte_delay_ms(1);
- retries--;
- } while (retries);
-
- if (rc < 0) {
- otx2_err("Error unregistering MSI-X vec %d cb, rc=%d", vec, rc);
- return;
- }
-
- otx2_base_dbg("Disable vector:0x%x for vfio (efds: %d, max:%d)", vec,
- rte_intr_nb_efd_get(intr_handle),
- rte_intr_max_intr_get(intr_handle));
-
- if (rte_intr_efds_index_get(intr_handle, vec) != -1)
- close(rte_intr_efds_index_get(intr_handle, vec));
- /* Disable MSIX vectors from VFIO */
- rte_intr_efds_index_set(intr_handle, vec, -1);
- irq_config(intr_handle, vec);
-}
-
-#else
-
-/**
- * @internal
- * Register IRQ
- */
-int otx2_register_irq(__rte_unused struct rte_intr_handle *intr_handle,
- __rte_unused rte_intr_callback_fn cb,
- __rte_unused void *data, __rte_unused unsigned int vec)
-{
- return -ENOTSUP;
-}
-
-/**
- * @internal
- * Unregister IRQ
- */
-void otx2_unregister_irq(__rte_unused struct rte_intr_handle *intr_handle,
- __rte_unused rte_intr_callback_fn cb,
- __rte_unused void *data, __rte_unused unsigned int vec)
-{
-}
-
-/**
- * @internal
- * Disable IRQ
- */
-int otx2_disable_irqs(__rte_unused struct rte_intr_handle *intr_handle)
-{
- return -ENOTSUP;
-}
-
-#endif /* RTE_EAL_VFIO */
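
The mbox code earlier in this patch is the main consumer of these helpers;
a condensed usage sketch (the callback and vector number are hypothetical)
of the register/unregister pairing they expect:

  static void
  my_vec_handler(void *param)
  {
      struct otx2_dev *dev = param;

      /* Acknowledge and handle the interrupt source here */
      RTE_SET_USED(dev);
  }

  static int
  my_vec_setup(struct rte_pci_device *pci_dev, struct otx2_dev *dev,
               unsigned int vec)
  {
      return otx2_register_irq(pci_dev->intr_handle,
                               my_vec_handler, dev, vec);
  }

  static void
  my_vec_teardown(struct rte_pci_device *pci_dev, struct otx2_dev *dev,
                  unsigned int vec)
  {
      otx2_unregister_irq(pci_dev->intr_handle, my_vec_handler, dev, vec);
  }
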
diff --git a/drivers/common/octeontx2/otx2_irq.h b/drivers/common/octeontx2/otx2_irq.h
deleted file mode 100644
index 0683cf5543..0000000000
--- a/drivers/common/octeontx2/otx2_irq.h
+++ /dev/null
@@ -1,28 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_IRQ_H_
-#define _OTX2_IRQ_H_
-
-#include <rte_pci.h>
-#include <rte_interrupts.h>
-
-#include "otx2_common.h"
-
-typedef struct {
-/* 128 devices translate to two 64-bit dwords */
-#define MAX_VFPF_DWORD_BITS 2
- uint64_t bits[MAX_VFPF_DWORD_BITS];
-} otx2_intr_t;
-
-__rte_internal
-int otx2_register_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec);
-__rte_internal
-void otx2_unregister_irq(struct rte_intr_handle *intr_handle,
- rte_intr_callback_fn cb, void *data, unsigned int vec);
-__rte_internal
-int otx2_disable_irqs(struct rte_intr_handle *intr_handle);
-
-#endif /* _OTX2_IRQ_H_ */
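
The two-word bitmap above is sized for 128 VFs: VF n lives in word n / 64,
bit n % 64, which is exactly the indexing the PF<->VF mbox interrupt
handler in otx2_dev.c performs. A small illustration:

  otx2_intr_t intr = { .bits = { 0, 0 } };
  uint16_t vf = 70;

  intr.bits[vf / 64] |= BIT_ULL(vf % 64);          /* mark VF 70 pending */
  if (intr.bits[vf / 64] & BIT_ULL(vf % 64))       /* test */
      intr.bits[vf / 64] &= ~BIT_ULL(vf % 64);     /* consume */
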
diff --git a/drivers/common/octeontx2/otx2_mbox.c b/drivers/common/octeontx2/otx2_mbox.c
deleted file mode 100644
index 6df1e8ea63..0000000000
--- a/drivers/common/octeontx2/otx2_mbox.c
+++ /dev/null
@@ -1,465 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <errno.h>
-#include <stdio.h>
-#include <stdlib.h>
-#include <string.h>
-
-#include <rte_atomic.h>
-#include <rte_cycles.h>
-#include <rte_malloc.h>
-
-#include "otx2_mbox.h"
-#include "otx2_dev.h"
-
-#define RVU_AF_AFPF_MBOX0 (0x02000)
-#define RVU_AF_AFPF_MBOX1 (0x02008)
-
-#define RVU_PF_PFAF_MBOX0 (0xC00)
-#define RVU_PF_PFAF_MBOX1 (0xC08)
-
-#define RVU_PF_VFX_PFVF_MBOX0 (0x0000)
-#define RVU_PF_VFX_PFVF_MBOX1 (0x0008)
-
-#define RVU_VF_VFPF_MBOX0 (0x0000)
-#define RVU_VF_VFPF_MBOX1 (0x0008)
-
-static inline uint16_t
-msgs_offset(void)
-{
- return RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
-}
-
-void
-otx2_mbox_fini(struct otx2_mbox *mbox)
-{
- mbox->reg_base = 0;
- mbox->hwbase = 0;
- rte_free(mbox->dev);
- mbox->dev = NULL;
-}
-
-void
-otx2_mbox_reset(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
-
- rte_spinlock_lock(&mdev->mbox_lock);
- mdev->msg_size = 0;
- mdev->rsp_size = 0;
- tx_hdr->msg_size = 0;
- tx_hdr->num_msgs = 0;
- rx_hdr->msg_size = 0;
- rx_hdr->num_msgs = 0;
- rte_spinlock_unlock(&mdev->mbox_lock);
-}
-
-int
-otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
- int direction, int ndevs, uint64_t intr_offset)
-{
- struct otx2_mbox_dev *mdev;
- int devid;
-
- mbox->intr_offset = intr_offset;
- mbox->reg_base = reg_base;
- mbox->hwbase = hwbase;
-
- switch (direction) {
- case MBOX_DIR_AFPF:
- case MBOX_DIR_PFVF:
- mbox->tx_start = MBOX_DOWN_TX_START;
- mbox->rx_start = MBOX_DOWN_RX_START;
- mbox->tx_size = MBOX_DOWN_TX_SIZE;
- mbox->rx_size = MBOX_DOWN_RX_SIZE;
- break;
- case MBOX_DIR_PFAF:
- case MBOX_DIR_VFPF:
- mbox->tx_start = MBOX_DOWN_RX_START;
- mbox->rx_start = MBOX_DOWN_TX_START;
- mbox->tx_size = MBOX_DOWN_RX_SIZE;
- mbox->rx_size = MBOX_DOWN_TX_SIZE;
- break;
- case MBOX_DIR_AFPF_UP:
- case MBOX_DIR_PFVF_UP:
- mbox->tx_start = MBOX_UP_TX_START;
- mbox->rx_start = MBOX_UP_RX_START;
- mbox->tx_size = MBOX_UP_TX_SIZE;
- mbox->rx_size = MBOX_UP_RX_SIZE;
- break;
- case MBOX_DIR_PFAF_UP:
- case MBOX_DIR_VFPF_UP:
- mbox->tx_start = MBOX_UP_RX_START;
- mbox->rx_start = MBOX_UP_TX_START;
- mbox->tx_size = MBOX_UP_RX_SIZE;
- mbox->rx_size = MBOX_UP_TX_SIZE;
- break;
- default:
- return -ENODEV;
- }
-
- switch (direction) {
- case MBOX_DIR_AFPF:
- case MBOX_DIR_AFPF_UP:
- mbox->trigger = RVU_AF_AFPF_MBOX0;
- mbox->tr_shift = 4;
- break;
- case MBOX_DIR_PFAF:
- case MBOX_DIR_PFAF_UP:
- mbox->trigger = RVU_PF_PFAF_MBOX1;
- mbox->tr_shift = 0;
- break;
- case MBOX_DIR_PFVF:
- case MBOX_DIR_PFVF_UP:
- mbox->trigger = RVU_PF_VFX_PFVF_MBOX0;
- mbox->tr_shift = 12;
- break;
- case MBOX_DIR_VFPF:
- case MBOX_DIR_VFPF_UP:
- mbox->trigger = RVU_VF_VFPF_MBOX1;
- mbox->tr_shift = 0;
- break;
- default:
- return -ENODEV;
- }
-
- mbox->dev = rte_zmalloc("mbox dev",
- ndevs * sizeof(struct otx2_mbox_dev),
- OTX2_ALIGN);
- if (!mbox->dev) {
- otx2_mbox_fini(mbox);
- return -ENOMEM;
- }
- mbox->ndevs = ndevs;
- for (devid = 0; devid < ndevs; devid++) {
- mdev = &mbox->dev[devid];
- mdev->mbase = (void *)(mbox->hwbase + (devid * MBOX_SIZE));
- rte_spinlock_init(&mdev->mbox_lock);
- /* Init header to reset value */
- otx2_mbox_reset(mbox, devid);
- }
-
- return 0;
-}
-
-/**
- * @internal
- * Allocate a message response
- */
-struct mbox_msghdr *
-otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid, int size,
- int size_rsp)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr = NULL;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- size = RTE_ALIGN(size, MBOX_MSG_ALIGN);
- size_rsp = RTE_ALIGN(size_rsp, MBOX_MSG_ALIGN);
- /* Check if there is space in mailbox */
- if ((mdev->msg_size + size) > mbox->tx_size - msgs_offset())
- goto exit;
- if ((mdev->rsp_size + size_rsp) > mbox->rx_size - msgs_offset())
- goto exit;
- if (mdev->msg_size == 0)
- mdev->num_msgs = 0;
- mdev->num_msgs++;
-
- msghdr = (struct mbox_msghdr *)(((uintptr_t)mdev->mbase +
- mbox->tx_start + msgs_offset() + mdev->msg_size));
-
- /* Clear the whole msg region */
- otx2_mbox_memset(msghdr, 0, sizeof(*msghdr) + size);
- /* Init message header with reset values */
- msghdr->ver = OTX2_MBOX_VERSION;
- mdev->msg_size += size;
- mdev->rsp_size += size_rsp;
- msghdr->next_msgoff = mdev->msg_size + msgs_offset();
-exit:
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return msghdr;
-}
-
-/**
- * @internal
- * Send a mailbox message
- */
-void
-otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase + mbox->rx_start);
-
- /* Reset header for next messages */
- tx_hdr->msg_size = mdev->msg_size;
- mdev->msg_size = 0;
- mdev->rsp_size = 0;
- mdev->msgs_acked = 0;
-
-	/* num_msgs != 0 signals to the peer that the buffer contains
-	 * messages, so it must be written only after the payload is copied
-	 */
- tx_hdr->num_msgs = mdev->num_msgs;
- rx_hdr->num_msgs = 0;
-
- /* Sync mbox data into memory */
- rte_wmb();
-
- /* The interrupt should be fired after num_msgs is written
- * to the shared memory
- */
- rte_write64(1, (volatile void *)(mbox->reg_base +
- (mbox->trigger | (devid << mbox->tr_shift))));
-}
-
-/**
- * @internal
- * Wait and get mailbox response
- */
-int
-otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr;
- uint64_t offset;
- int rc;
-
- rc = otx2_mbox_wait_for_rsp(mbox, devid);
- if (rc != 1)
- return -EIO;
-
- rte_rmb();
-
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- if (msg != NULL)
- *msg = msghdr;
-
- return msghdr->rc;
-}
-
-/**
- * Poll for the given wait time for a mailbox response
- */
-static int
-mbox_poll(struct otx2_mbox *mbox, uint32_t wait)
-{
- uint32_t timeout = 0, sleep = 1;
- uint32_t wait_us = wait * 1000;
- uint64_t rsp_reg = 0;
- uintptr_t reg_addr;
-
- reg_addr = mbox->reg_base + mbox->intr_offset;
- do {
- rsp_reg = otx2_read64(reg_addr);
-
- if (timeout >= wait_us)
- return -ETIMEDOUT;
-
- rte_delay_us(sleep);
- timeout += sleep;
- } while (!rsp_reg);
-
- rte_smp_rmb();
-
- /* Clear interrupt */
- otx2_write64(rsp_reg, reg_addr);
-
- /* Reset mbox */
- otx2_mbox_reset(mbox, 0);
-
- return 0;
-}
-
-/**
- * @internal
- * Wait and get mailbox response with timeout
- */
-int
-otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
- uint32_t tmo)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- struct mbox_msghdr *msghdr;
- uint64_t offset;
- int rc;
-
- rc = otx2_mbox_wait_for_rsp_tmo(mbox, devid, tmo);
- if (rc != 1)
- return -EIO;
-
- rte_rmb();
-
- offset = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msghdr = (struct mbox_msghdr *)((uintptr_t)mdev->mbase + offset);
- if (msg != NULL)
- *msg = msghdr;
-
- return msghdr->rc;
-}
-
-static int
-mbox_wait(struct otx2_mbox *mbox, int devid, uint32_t rst_timo)
-{
- volatile struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- uint32_t timeout = 0, sleep = 1;
-
-	rst_timo = rst_timo * 1000; /* Milliseconds to microseconds */
- while (mdev->num_msgs > mdev->msgs_acked) {
- rte_delay_us(sleep);
- timeout += sleep;
- if (timeout >= rst_timo) {
- struct mbox_hdr *tx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase +
- mbox->tx_start);
- struct mbox_hdr *rx_hdr =
- (struct mbox_hdr *)((uintptr_t)mdev->mbase +
- mbox->rx_start);
-
- otx2_err("MBOX[devid: %d] message wait timeout %d, "
- "num_msgs: %d, msgs_acked: %d "
- "(tx/rx num_msgs: %d/%d), msg_size: %d, "
- "rsp_size: %d",
- devid, timeout, mdev->num_msgs,
- mdev->msgs_acked, tx_hdr->num_msgs,
- rx_hdr->num_msgs, mdev->msg_size,
- mdev->rsp_size);
-
- return -EIO;
- }
- rte_rmb();
- }
- return 0;
-}
-
-int
-otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int rc = 0;
-
- /* Sync with mbox region */
- rte_rmb();
-
- if (mbox->trigger == RVU_PF_VFX_PFVF_MBOX1 ||
- mbox->trigger == RVU_PF_VFX_PFVF_MBOX0) {
- /* In case of VF, wait a bit more to account for the round-trip delay */
- tmo = tmo * 2;
- }
-
- /* Wait for the message */
- if (rte_thread_is_intr())
- rc = mbox_poll(mbox, tmo);
- else
- rc = mbox_wait(mbox, devid, tmo);
-
- if (!rc)
- rc = mdev->num_msgs;
-
- return rc;
-}
-
-/**
- * @internal
- * Wait for the mailbox response
- */
-int
-otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid)
-{
- return otx2_mbox_wait_for_rsp_tmo(mbox, devid, MBOX_RSP_TIMEOUT);
-}
-
-int
-otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- int avail;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- avail = mbox->tx_size - mdev->msg_size - msgs_offset();
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return avail;
-}
-
-int
-otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pcifunc)
-{
- struct ready_msg_rsp *rsp;
- int rc;
-
- otx2_mbox_alloc_msg_ready(mbox);
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->hdr.ver != OTX2_MBOX_VERSION) {
- otx2_err("Incompatible MBox versions(AF: 0x%04x DPDK: 0x%04x)",
- rsp->hdr.ver, OTX2_MBOX_VERSION);
- return -EPIPE;
- }
-
- if (pcifunc)
- *pcifunc = rsp->hdr.pcifunc;
-
- return 0;
-}
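/* Hedged probe-time sketch for otx2_send_ready_msg(): fetch our
 * PF_FUNC from the AF and fail on a mailbox version mismatch.
 * struct drv_dev_sketch is a hypothetical driver private structure.
 */
struct drv_dev_sketch {
	uint16_t pf_func;
};

static int
drv_probe_sketch(struct drv_dev_sketch *dev, struct otx2_mbox *mbox)
{
	int rc;

	rc = otx2_send_ready_msg(mbox, &dev->pf_func);
	if (rc)
		return rc; /* -EPIPE signals an OTX2_MBOX_VERSION mismatch */

	return 0;
}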
-
-int
-otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pcifunc,
- uint16_t id)
-{
- struct msg_rsp *rsp;
-
- rsp = (struct msg_rsp *)otx2_mbox_alloc_msg(mbox, devid, sizeof(*rsp));
- if (!rsp)
- return -ENOMEM;
- rsp->hdr.id = id;
- rsp->hdr.sig = OTX2_MBOX_RSP_SIG;
- rsp->hdr.rc = MBOX_MSG_INVALID;
- rsp->hdr.pcifunc = pcifunc;
-
- return 0;
-}
-
-/**
- * @internal
- * Convert mailbox message ID to name
- */
-const char *otx2_mbox_id2name(uint16_t id)
-{
- switch (id) {
-#define M(_name, _id, _1, _2, _3) case _id: return # _name;
- MBOX_MESSAGES
- MBOX_UP_CGX_MESSAGES
-#undef M
- default:
- return "INVALID ID";
- }
-}
-
-int otx2_mbox_id2size(uint16_t id)
-{
- switch (id) {
-#define M(_1, _id, _2, _req_type, _3) case _id: return sizeof(struct _req_type);
- MBOX_MESSAGES
- MBOX_UP_CGX_MESSAGES
-#undef M
- default:
- return 0;
- }
-}
diff --git a/drivers/common/octeontx2/otx2_mbox.h b/drivers/common/octeontx2/otx2_mbox.h
deleted file mode 100644
index 25b521a7fa..0000000000
--- a/drivers/common/octeontx2/otx2_mbox.h
+++ /dev/null
@@ -1,1958 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MBOX_H__
-#define __OTX2_MBOX_H__
-
-#include <errno.h>
-#include <stdbool.h>
-
-#include <rte_ether.h>
-#include <rte_spinlock.h>
-
-#include <otx2_common.h>
-
-#define SZ_64K (64ULL * 1024ULL)
-#define SZ_1K (1ULL * 1024ULL)
-#define MBOX_SIZE SZ_64K
-
-/* Down mbox: PF initiated for AF/PF, VF initiated for PF/VF */
-#define MBOX_DOWN_RX_START 0
-#define MBOX_DOWN_RX_SIZE (46 * SZ_1K)
-#define MBOX_DOWN_TX_START (MBOX_DOWN_RX_START + MBOX_DOWN_RX_SIZE)
-#define MBOX_DOWN_TX_SIZE (16 * SZ_1K)
-/* Up mbox: AF initiated for AF/PF, PF initiated for PF/VF */
-#define MBOX_UP_RX_START (MBOX_DOWN_TX_START + MBOX_DOWN_TX_SIZE)
-#define MBOX_UP_RX_SIZE SZ_1K
-#define MBOX_UP_TX_START (MBOX_UP_RX_START + MBOX_UP_RX_SIZE)
-#define MBOX_UP_TX_SIZE SZ_1K
-
-#if MBOX_UP_TX_SIZE + MBOX_UP_TX_START != MBOX_SIZE
-# error "Incorrect mailbox area sizes"
-#endif
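/* For reference, the layout adds up exactly as the assertion above
 * requires: 46K (down RX) + 16K (down TX) + 1K (up RX) + 1K (up TX)
 * = 64K, i.e. MBOX_UP_TX_START + MBOX_UP_TX_SIZE == MBOX_SIZE.
 */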
-
-#define INTR_MASK(pfvfs) ((pfvfs < 64) ? (BIT_ULL(pfvfs) - 1) : (~0ull))
-
-#define MBOX_RSP_TIMEOUT 3000 /* Time to wait for mbox response in ms */
-
-#define MBOX_MSG_ALIGN 16 /* Align mbox msg start to 16 bytes */
-
-/* Mailbox directions */
-#define MBOX_DIR_AFPF 0 /* AF replies to PF */
-#define MBOX_DIR_PFAF 1 /* PF sends messages to AF */
-#define MBOX_DIR_PFVF 2 /* PF replies to VF */
-#define MBOX_DIR_VFPF 3 /* VF sends messages to PF */
-#define MBOX_DIR_AFPF_UP 4 /* AF sends messages to PF */
-#define MBOX_DIR_PFAF_UP 5 /* PF replies to AF */
-#define MBOX_DIR_PFVF_UP 6 /* PF sends messages to VF */
-#define MBOX_DIR_VFPF_UP 7 /* VF replies to PF */
-
-/* Device memory does not support unaligned access; instruct the compiler
- * not to optimize accesses to mailbox memory.
- */
-#define __otx2_io volatile
-
-struct otx2_mbox_dev {
- void *mbase; /* This dev's mbox region */
- rte_spinlock_t mbox_lock;
- uint16_t msg_size; /* Total msg size to be sent */
- uint16_t rsp_size; /* Total expected rsp size, used to validate the reply */
- uint16_t num_msgs; /* No of msgs sent or waiting for response */
- uint16_t msgs_acked; /* No of msgs for which response is received */
-};
-
-struct otx2_mbox {
- uintptr_t hwbase; /* Mbox region advertised by HW */
- uintptr_t reg_base;/* CSR base for this dev */
- uint64_t trigger; /* Trigger mbox notification */
- uint16_t tr_shift; /* Mbox trigger shift */
- uint64_t rx_start; /* Offset of Rx region in mbox memory */
- uint64_t tx_start; /* Offset of Tx region in mbox memory */
- uint16_t rx_size; /* Size of Rx region */
- uint16_t tx_size; /* Size of Tx region */
- uint16_t ndevs; /* The number of peers */
- struct otx2_mbox_dev *dev;
- uint64_t intr_offset; /* Offset to interrupt register */
-};
-
-/* Header which precedes all mbox messages */
-struct mbox_hdr {
- uint64_t __otx2_io msg_size; /* Total msgs size embedded */
- uint16_t __otx2_io num_msgs; /* No of msgs embedded */
-};
-
-/* Header which precedes every msg and is also part of it */
-struct mbox_msghdr {
- uint16_t __otx2_io pcifunc; /* Who's sending this msg */
- uint16_t __otx2_io id; /* Mbox message ID */
-#define OTX2_MBOX_REQ_SIG (0xdead)
-#define OTX2_MBOX_RSP_SIG (0xbeef)
- /* Signature, for validating corrupted msgs */
- uint16_t __otx2_io sig;
-#define OTX2_MBOX_VERSION (0x000b)
- /* Version of msg's structure for this ID */
- uint16_t __otx2_io ver;
- /* Offset of next msg within mailbox region */
- uint16_t __otx2_io next_msgoff;
- int __otx2_io rc; /* Msg processed response code */
-};
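/* Hedged sketch of how a receiver walks a mailbox region: a
 * struct mbox_hdr first, then MBOX_MSG_ALIGN-aligned messages chained
 * by next_msgoff (assumed here to be relative to the region start).
 * RTE_ALIGN comes from <rte_common.h>.
 */
static void
mbox_walk_sketch(struct otx2_mbox *mbox, struct otx2_mbox_dev *mdev)
{
	struct mbox_hdr *hdr = (struct mbox_hdr *)
		((uintptr_t)mdev->mbase + mbox->rx_start);
	uint64_t offset = mbox->rx_start +
		RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
	uint16_t i;

	for (i = 0; i < hdr->num_msgs; i++) {
		struct mbox_msghdr *msg = (struct mbox_msghdr *)
			((uintptr_t)mdev->mbase + offset);

		/* validate msg->sig, dispatch on msg->id ... */
		offset = mbox->rx_start + msg->next_msgoff;
	}
}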
-
-/* Mailbox message types */
-#define MBOX_MSG_MASK 0xFFFF
-#define MBOX_MSG_INVALID 0xFFFE
-#define MBOX_MSG_MAX 0xFFFF
-
-#define MBOX_MESSAGES \
-/* Generic mbox IDs (range 0x000 - 0x1FF) */ \
-M(READY, 0x001, ready, msg_req, ready_msg_rsp) \
-M(ATTACH_RESOURCES, 0x002, attach_resources, rsrc_attach_req, msg_rsp)\
-M(DETACH_RESOURCES, 0x003, detach_resources, rsrc_detach_req, msg_rsp)\
-M(FREE_RSRC_CNT, 0x004, free_rsrc_cnt, msg_req, free_rsrcs_rsp) \
-M(MSIX_OFFSET, 0x005, msix_offset, msg_req, msix_offset_rsp) \
-M(VF_FLR, 0x006, vf_flr, msg_req, msg_rsp) \
-M(PTP_OP, 0x007, ptp_op, ptp_req, ptp_rsp) \
-M(GET_HW_CAP, 0x008, get_hw_cap, msg_req, get_hw_cap_rsp) \
-M(NDC_SYNC_OP, 0x009, ndc_sync_op, ndc_sync_op, msg_rsp) \
-/* CGX mbox IDs (range 0x200 - 0x3FF) */ \
-M(CGX_START_RXTX, 0x200, cgx_start_rxtx, msg_req, msg_rsp) \
-M(CGX_STOP_RXTX, 0x201, cgx_stop_rxtx, msg_req, msg_rsp) \
-M(CGX_STATS, 0x202, cgx_stats, msg_req, cgx_stats_rsp) \
-M(CGX_MAC_ADDR_SET, 0x203, cgx_mac_addr_set, cgx_mac_addr_set_or_get,\
- cgx_mac_addr_set_or_get) \
-M(CGX_MAC_ADDR_GET, 0x204, cgx_mac_addr_get, cgx_mac_addr_set_or_get,\
- cgx_mac_addr_set_or_get) \
-M(CGX_PROMISC_ENABLE, 0x205, cgx_promisc_enable, msg_req, msg_rsp) \
-M(CGX_PROMISC_DISABLE, 0x206, cgx_promisc_disable, msg_req, msg_rsp) \
-M(CGX_START_LINKEVENTS, 0x207, cgx_start_linkevents, msg_req, msg_rsp) \
-M(CGX_STOP_LINKEVENTS, 0x208, cgx_stop_linkevents, msg_req, msg_rsp) \
-M(CGX_GET_LINKINFO, 0x209, cgx_get_linkinfo, msg_req, cgx_link_info_msg)\
-M(CGX_INTLBK_ENABLE, 0x20A, cgx_intlbk_enable, msg_req, msg_rsp) \
-M(CGX_INTLBK_DISABLE, 0x20B, cgx_intlbk_disable, msg_req, msg_rsp) \
-M(CGX_PTP_RX_ENABLE, 0x20C, cgx_ptp_rx_enable, msg_req, msg_rsp) \
-M(CGX_PTP_RX_DISABLE, 0x20D, cgx_ptp_rx_disable, msg_req, msg_rsp) \
-M(CGX_CFG_PAUSE_FRM, 0x20E, cgx_cfg_pause_frm, cgx_pause_frm_cfg, \
- cgx_pause_frm_cfg) \
-M(CGX_FW_DATA_GET, 0x20F, cgx_get_aux_link_info, msg_req, cgx_fw_data) \
-M(CGX_FEC_SET, 0x210, cgx_set_fec_param, fec_mode, fec_mode) \
-M(CGX_MAC_ADDR_ADD, 0x211, cgx_mac_addr_add, cgx_mac_addr_add_req, \
- cgx_mac_addr_add_rsp) \
-M(CGX_MAC_ADDR_DEL, 0x212, cgx_mac_addr_del, cgx_mac_addr_del_req, \
- msg_rsp) \
-M(CGX_MAC_MAX_ENTRIES_GET, 0x213, cgx_mac_max_entries_get, msg_req, \
- cgx_max_dmac_entries_get_rsp) \
-M(CGX_SET_LINK_STATE, 0x214, cgx_set_link_state, \
- cgx_set_link_state_msg, msg_rsp) \
-M(CGX_GET_PHY_MOD_TYPE, 0x215, cgx_get_phy_mod_type, msg_req, \
- cgx_phy_mod_type) \
-M(CGX_SET_PHY_MOD_TYPE, 0x216, cgx_set_phy_mod_type, cgx_phy_mod_type, \
- msg_rsp) \
-M(CGX_FEC_STATS, 0x217, cgx_fec_stats, msg_req, cgx_fec_stats_rsp) \
-M(CGX_SET_LINK_MODE, 0x218, cgx_set_link_mode, cgx_set_link_mode_req,\
- cgx_set_link_mode_rsp) \
-M(CGX_GET_PHY_FEC_STATS, 0x219, cgx_get_phy_fec_stats, msg_req, msg_rsp) \
-M(CGX_STATS_RST, 0x21A, cgx_stats_rst, msg_req, msg_rsp) \
-/* NPA mbox IDs (range 0x400 - 0x5FF) */ \
-M(NPA_LF_ALLOC, 0x400, npa_lf_alloc, npa_lf_alloc_req, \
- npa_lf_alloc_rsp) \
-M(NPA_LF_FREE, 0x401, npa_lf_free, msg_req, msg_rsp) \
-M(NPA_AQ_ENQ, 0x402, npa_aq_enq, npa_aq_enq_req, npa_aq_enq_rsp)\
-M(NPA_HWCTX_DISABLE, 0x403, npa_hwctx_disable, hwctx_disable_req, msg_rsp)\
-/* SSO/SSOW mbox IDs (range 0x600 - 0x7FF) */ \
-M(SSO_LF_ALLOC, 0x600, sso_lf_alloc, sso_lf_alloc_req, \
- sso_lf_alloc_rsp) \
-M(SSO_LF_FREE, 0x601, sso_lf_free, sso_lf_free_req, msg_rsp) \
-M(SSOW_LF_ALLOC, 0x602, ssow_lf_alloc, ssow_lf_alloc_req, msg_rsp)\
-M(SSOW_LF_FREE, 0x603, ssow_lf_free, ssow_lf_free_req, msg_rsp) \
-M(SSO_HW_SETCONFIG, 0x604, sso_hw_setconfig, sso_hw_setconfig, \
- msg_rsp) \
-M(SSO_GRP_SET_PRIORITY, 0x605, sso_grp_set_priority, sso_grp_priority, \
- msg_rsp) \
-M(SSO_GRP_GET_PRIORITY, 0x606, sso_grp_get_priority, sso_info_req, \
- sso_grp_priority) \
-M(SSO_WS_CACHE_INV, 0x607, sso_ws_cache_inv, msg_req, msg_rsp) \
-M(SSO_GRP_QOS_CONFIG, 0x608, sso_grp_qos_config, sso_grp_qos_cfg, \
- msg_rsp) \
-M(SSO_GRP_GET_STATS, 0x609, sso_grp_get_stats, sso_info_req, \
- sso_grp_stats) \
-M(SSO_HWS_GET_STATS, 0x610, sso_hws_get_stats, sso_info_req, \
- sso_hws_stats) \
-M(SSO_HW_RELEASE_XAQ, 0x611, sso_hw_release_xaq_aura, \
- sso_release_xaq, msg_rsp) \
-/* TIM mbox IDs (range 0x800 - 0x9FF) */ \
-M(TIM_LF_ALLOC, 0x800, tim_lf_alloc, tim_lf_alloc_req, \
- tim_lf_alloc_rsp) \
-M(TIM_LF_FREE, 0x801, tim_lf_free, tim_ring_req, msg_rsp) \
-M(TIM_CONFIG_RING, 0x802, tim_config_ring, tim_config_req, msg_rsp)\
-M(TIM_ENABLE_RING, 0x803, tim_enable_ring, tim_ring_req, \
- tim_enable_rsp) \
-M(TIM_DISABLE_RING, 0x804, tim_disable_ring, tim_ring_req, msg_rsp) \
-/* CPT mbox IDs (range 0xA00 - 0xBFF) */ \
-M(CPT_LF_ALLOC, 0xA00, cpt_lf_alloc, cpt_lf_alloc_req_msg, \
- cpt_lf_alloc_rsp_msg) \
-M(CPT_LF_FREE, 0xA01, cpt_lf_free, msg_req, msg_rsp) \
-M(CPT_RD_WR_REGISTER, 0xA02, cpt_rd_wr_register, cpt_rd_wr_reg_msg, \
- cpt_rd_wr_reg_msg) \
-M(CPT_SET_CRYPTO_GRP, 0xA03, cpt_set_crypto_grp, \
- cpt_set_crypto_grp_req_msg, \
- msg_rsp) \
-M(CPT_INLINE_IPSEC_CFG, 0xA04, cpt_inline_ipsec_cfg, \
- cpt_inline_ipsec_cfg_msg, msg_rsp) \
-M(CPT_RX_INLINE_LF_CFG, 0xBFE, cpt_rx_inline_lf_cfg, \
- cpt_rx_inline_lf_cfg_msg, msg_rsp) \
-M(CPT_GET_CAPS, 0xBFD, cpt_caps_get, msg_req, cpt_caps_rsp_msg) \
-/* REE mbox IDs (range 0xE00 - 0xFFF) */ \
-M(REE_CONFIG_LF, 0xE01, ree_config_lf, ree_lf_req_msg, \
- msg_rsp) \
-M(REE_RD_WR_REGISTER, 0xE02, ree_rd_wr_register, ree_rd_wr_reg_msg, \
- ree_rd_wr_reg_msg) \
-M(REE_RULE_DB_PROG, 0xE03, ree_rule_db_prog, \
- ree_rule_db_prog_req_msg, \
- msg_rsp) \
-M(REE_RULE_DB_LEN_GET, 0xE04, ree_rule_db_len_get, ree_req_msg, \
- ree_rule_db_len_rsp_msg) \
-M(REE_RULE_DB_GET, 0xE05, ree_rule_db_get, \
- ree_rule_db_get_req_msg, \
- ree_rule_db_get_rsp_msg) \
-/* NPC mbox IDs (range 0x6000 - 0x7FFF) */ \
-M(NPC_MCAM_ALLOC_ENTRY, 0x6000, npc_mcam_alloc_entry, \
- npc_mcam_alloc_entry_req, \
- npc_mcam_alloc_entry_rsp) \
-M(NPC_MCAM_FREE_ENTRY, 0x6001, npc_mcam_free_entry, \
- npc_mcam_free_entry_req, msg_rsp) \
-M(NPC_MCAM_WRITE_ENTRY, 0x6002, npc_mcam_write_entry, \
- npc_mcam_write_entry_req, msg_rsp) \
-M(NPC_MCAM_ENA_ENTRY, 0x6003, npc_mcam_ena_entry, \
- npc_mcam_ena_dis_entry_req, msg_rsp) \
-M(NPC_MCAM_DIS_ENTRY, 0x6004, npc_mcam_dis_entry, \
- npc_mcam_ena_dis_entry_req, msg_rsp) \
-M(NPC_MCAM_SHIFT_ENTRY, 0x6005, npc_mcam_shift_entry, \
- npc_mcam_shift_entry_req, \
- npc_mcam_shift_entry_rsp) \
-M(NPC_MCAM_ALLOC_COUNTER, 0x6006, npc_mcam_alloc_counter, \
- npc_mcam_alloc_counter_req, \
- npc_mcam_alloc_counter_rsp) \
-M(NPC_MCAM_FREE_COUNTER, 0x6007, npc_mcam_free_counter, \
- npc_mcam_oper_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_UNMAP_COUNTER, 0x6008, npc_mcam_unmap_counter, \
- npc_mcam_unmap_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_CLEAR_COUNTER, 0x6009, npc_mcam_clear_counter, \
- npc_mcam_oper_counter_req, \
- msg_rsp) \
-M(NPC_MCAM_COUNTER_STATS, 0x600a, npc_mcam_counter_stats, \
- npc_mcam_oper_counter_req, \
- npc_mcam_oper_counter_rsp) \
-M(NPC_MCAM_ALLOC_AND_WRITE_ENTRY, 0x600b, npc_mcam_alloc_and_write_entry,\
- npc_mcam_alloc_and_write_entry_req, \
- npc_mcam_alloc_and_write_entry_rsp) \
-M(NPC_GET_KEX_CFG, 0x600c, npc_get_kex_cfg, msg_req, \
- npc_get_kex_cfg_rsp) \
-M(NPC_INSTALL_FLOW, 0x600d, npc_install_flow, \
- npc_install_flow_req, \
- npc_install_flow_rsp) \
-M(NPC_DELETE_FLOW, 0x600e, npc_delete_flow, \
- npc_delete_flow_req, msg_rsp) \
-M(NPC_MCAM_READ_ENTRY, 0x600f, npc_mcam_read_entry, \
- npc_mcam_read_entry_req, \
- npc_mcam_read_entry_rsp) \
-M(NPC_SET_PKIND, 0x6010, npc_set_pkind, \
- npc_set_pkind, \
- msg_rsp) \
-M(NPC_MCAM_READ_BASE_RULE, 0x6011, npc_read_base_steer_rule, msg_req, \
- npc_mcam_read_base_rule_rsp) \
-/* NIX mbox IDs (range 0x8000 - 0xFFFF) */ \
-M(NIX_LF_ALLOC, 0x8000, nix_lf_alloc, nix_lf_alloc_req, \
- nix_lf_alloc_rsp) \
-M(NIX_LF_FREE, 0x8001, nix_lf_free, nix_lf_free_req, msg_rsp) \
-M(NIX_AQ_ENQ, 0x8002, nix_aq_enq, nix_aq_enq_req, \
- nix_aq_enq_rsp) \
-M(NIX_HWCTX_DISABLE, 0x8003, nix_hwctx_disable, hwctx_disable_req, \
- msg_rsp) \
-M(NIX_TXSCH_ALLOC, 0x8004, nix_txsch_alloc, nix_txsch_alloc_req, \
- nix_txsch_alloc_rsp) \
-M(NIX_TXSCH_FREE, 0x8005, nix_txsch_free, nix_txsch_free_req, \
- msg_rsp) \
-M(NIX_TXSCHQ_CFG, 0x8006, nix_txschq_cfg, nix_txschq_config, \
- nix_txschq_config) \
-M(NIX_STATS_RST, 0x8007, nix_stats_rst, msg_req, msg_rsp) \
-M(NIX_VTAG_CFG, 0x8008, nix_vtag_cfg, nix_vtag_config, msg_rsp) \
-M(NIX_RSS_FLOWKEY_CFG, 0x8009, nix_rss_flowkey_cfg, \
- nix_rss_flowkey_cfg, \
- nix_rss_flowkey_cfg_rsp) \
-M(NIX_SET_MAC_ADDR, 0x800a, nix_set_mac_addr, nix_set_mac_addr, \
- msg_rsp) \
-M(NIX_SET_RX_MODE, 0x800b, nix_set_rx_mode, nix_rx_mode, msg_rsp) \
-M(NIX_SET_HW_FRS, 0x800c, nix_set_hw_frs, nix_frs_cfg, msg_rsp) \
-M(NIX_LF_START_RX, 0x800d, nix_lf_start_rx, msg_req, msg_rsp) \
-M(NIX_LF_STOP_RX, 0x800e, nix_lf_stop_rx, msg_req, msg_rsp) \
-M(NIX_MARK_FORMAT_CFG, 0x800f, nix_mark_format_cfg, \
- nix_mark_format_cfg, \
- nix_mark_format_cfg_rsp) \
-M(NIX_SET_RX_CFG, 0x8010, nix_set_rx_cfg, nix_rx_cfg, msg_rsp) \
-M(NIX_LSO_FORMAT_CFG, 0x8011, nix_lso_format_cfg, nix_lso_format_cfg, \
- nix_lso_format_cfg_rsp) \
-M(NIX_LF_PTP_TX_ENABLE, 0x8013, nix_lf_ptp_tx_enable, msg_req, \
- msg_rsp) \
-M(NIX_LF_PTP_TX_DISABLE, 0x8014, nix_lf_ptp_tx_disable, msg_req, \
- msg_rsp) \
-M(NIX_SET_VLAN_TPID, 0x8015, nix_set_vlan_tpid, nix_set_vlan_tpid, \
- msg_rsp) \
-M(NIX_BP_ENABLE, 0x8016, nix_bp_enable, nix_bp_cfg_req, \
- nix_bp_cfg_rsp) \
-M(NIX_BP_DISABLE, 0x8017, nix_bp_disable, nix_bp_cfg_req, msg_rsp)\
-M(NIX_GET_MAC_ADDR, 0x8018, nix_get_mac_addr, msg_req, \
- nix_get_mac_addr_rsp) \
-M(NIX_INLINE_IPSEC_CFG, 0x8019, nix_inline_ipsec_cfg, \
- nix_inline_ipsec_cfg, msg_rsp) \
-M(NIX_INLINE_IPSEC_LF_CFG, \
- 0x801a, nix_inline_ipsec_lf_cfg, \
- nix_inline_ipsec_lf_cfg, msg_rsp)
-
-/* Messages initiated by AF (range 0xC00 - 0xDFF) */
-#define MBOX_UP_CGX_MESSAGES \
-M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, \
- msg_rsp) \
-M(CGX_PTP_RX_INFO, 0xC01, cgx_ptp_rx_info, cgx_ptp_rx_info_msg, \
- msg_rsp)
-
-enum {
-#define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
-MBOX_MESSAGES
-MBOX_UP_CGX_MESSAGES
-#undef M
-};
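/* Illustrative expansion of the X-macro above (not new code): for the
 * READY and NIX_LF_ALLOC table entries, the enum receives
 *
 *     MBOX_MSG_READY        = 0x001,
 *     MBOX_MSG_NIX_LF_ALLOC = 0x8000,
 *
 * so IDs, handler names and request/response types live in a single
 * table (MBOX_MESSAGES) from which this enum, otx2_mbox_id2name(),
 * otx2_mbox_id2size() and the per-message allocators are all generated.
 */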
-
-/* Mailbox message formats */
-
-#define RVU_DEFAULT_PF_FUNC 0xFFFF
-
-/* Generic request msg used for those mbox messages which
- * don't send any data in the request.
- */
-struct msg_req {
- struct mbox_msghdr hdr;
-};
-
-/* Generic response msg used as an ack or response for those mbox
- * messages which don't have a specific rsp msg format.
- */
-struct msg_rsp {
- struct mbox_msghdr hdr;
-};
-
-/* RVU mailbox error codes
- * Range 256 - 300.
- */
-enum rvu_af_status {
- RVU_INVALID_VF_ID = -256,
-};
-
-struct ready_msg_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sclk_feq; /* SCLK frequency */
- uint16_t __otx2_io rclk_freq; /* RCLK frequency */
-};
-
-enum npc_pkind_type {
- NPC_RX_CUSTOM_PRE_L2_PKIND = 55ULL,
- NPC_RX_VLAN_EXDSA_PKIND = 56ULL,
- NPC_RX_CHLEN24B_PKIND,
- NPC_RX_CPT_HDR_PKIND,
- NPC_RX_CHLEN90B_PKIND,
- NPC_TX_HIGIG_PKIND,
- NPC_RX_HIGIG_PKIND,
- NPC_RX_EXDSA_PKIND,
- NPC_RX_EDSA_PKIND,
- NPC_TX_DEF_PKIND,
-};
-
-#define OTX2_PRIV_FLAGS_CH_LEN_90B 254
-#define OTX2_PRIV_FLAGS_CH_LEN_24B 255
-
-/* Struct to set pkind */
-struct npc_set_pkind {
- struct mbox_msghdr hdr;
-#define OTX2_PRIV_FLAGS_DEFAULT BIT_ULL(0)
-#define OTX2_PRIV_FLAGS_EDSA BIT_ULL(1)
-#define OTX2_PRIV_FLAGS_HIGIG BIT_ULL(2)
-#define OTX2_PRIV_FLAGS_FDSA BIT_ULL(3)
-#define OTX2_PRIV_FLAGS_EXDSA BIT_ULL(4)
-#define OTX2_PRIV_FLAGS_VLAN_EXDSA BIT_ULL(5)
-#define OTX2_PRIV_FLAGS_CUSTOM BIT_ULL(63)
- uint64_t __otx2_io mode;
-#define PKIND_TX BIT_ULL(0)
-#define PKIND_RX BIT_ULL(1)
- uint8_t __otx2_io dir;
- uint8_t __otx2_io pkind; /* Valid only when the custom flag is set */
- uint8_t __otx2_io var_len_off;
- /* Offset of custom header length field.
- * Valid only for pkind NPC_RX_CUSTOM_PRE_L2_PKIND
- */
- uint8_t __otx2_io var_len_off_mask; /* Mask for the length field within the offset */
- uint8_t __otx2_io shift_dir;
- /* Shift direction to get length of the
- * header at var_len_off
- */
-};
-
-/* Structure for requesting resource provisioning.
- * 'modify' flag to be used when either requesting more resources
- * or detaching part of a certain resource type.
- * The rest of the fields specify how many of what type are to
- * be attached.
- * To request LFs from two blocks of the same type, this mailbox
- * can be sent twice, as below (see also the sketch after the struct):
- * struct rsrc_attach *attach;
- * .. Allocate memory for message ..
- * attach->cptlfs = 3; <3 LFs from CPT0>
- * .. Send message ..
- * .. Allocate memory for message ..
- * attach->modify = 1;
- * attach->cpt_blkaddr = BLKADDR_CPT1;
- * attach->cptlfs = 2; <2 LFs from CPT1>
- * .. Send message ..
- */
-struct rsrc_attach_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io modify:1;
- uint8_t __otx2_io npalf:1;
- uint8_t __otx2_io nixlf:1;
- uint16_t __otx2_io sso;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io timlfs;
- uint16_t __otx2_io cptlfs;
- uint16_t __otx2_io reelfs;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- int __otx2_io cpt_blkaddr;
- /* BLKADDR_REE0/BLKADDR_REE1 or 0 for BLKADDR_REE0 */
- int __otx2_io ree_blkaddr;
-};
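/* A concrete version of the two-step attach sketched in the comment
 * above. otx2_mbox_alloc_msg_attach_resources() is generated from the
 * MBOX_MESSAGES table; BLKADDR_CPT1 is assumed to come from the common
 * HW headers. Error handling elided for brevity.
 */
static void
attach_cpt_lfs_sketch(struct otx2_mbox *mbox)
{
	struct rsrc_attach_req *req;

	req = otx2_mbox_alloc_msg_attach_resources(mbox);
	req->cptlfs = 3;                 /* 3 LFs from CPT0 */
	otx2_mbox_process(mbox);

	req = otx2_mbox_alloc_msg_attach_resources(mbox);
	req->modify = 1;
	req->cpt_blkaddr = BLKADDR_CPT1; /* assumed HW-header constant */
	req->cptlfs = 2;                 /* 2 more LFs from CPT1 */
	otx2_mbox_process(mbox);
}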
-
-/* Structure for relinquishing resources.
- * 'partial' flag to be used when relinquishing only resources
- * of certain types. If not set, all resources of all
- * types provisioned to the RVU function will be detached.
- */
-struct rsrc_detach_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io partial:1;
- uint8_t __otx2_io npalf:1;
- uint8_t __otx2_io nixlf:1;
- uint8_t __otx2_io sso:1;
- uint8_t __otx2_io ssow:1;
- uint8_t __otx2_io timlfs:1;
- uint8_t __otx2_io cptlfs:1;
- uint8_t __otx2_io reelfs:1;
-};
-
-/* NIX Transmit schedulers */
-#define NIX_TXSCH_LVL_SMQ 0x0
-#define NIX_TXSCH_LVL_MDQ 0x0
-#define NIX_TXSCH_LVL_TL4 0x1
-#define NIX_TXSCH_LVL_TL3 0x2
-#define NIX_TXSCH_LVL_TL2 0x3
-#define NIX_TXSCH_LVL_TL1 0x4
-#define NIX_TXSCH_LVL_CNT 0x5
-
-/*
- * Number of resources available to the caller.
- * In reply to MBOX_MSG_FREE_RSRC_CNT.
- */
-struct free_rsrcs_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT];
- uint16_t __otx2_io sso;
- uint16_t __otx2_io tim;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io cpt;
- uint8_t __otx2_io npa;
- uint8_t __otx2_io nix;
- uint16_t __otx2_io schq_nix1[NIX_TXSCH_LVL_CNT];
- uint8_t __otx2_io nix1;
- uint8_t __otx2_io cpt1;
- uint8_t __otx2_io ree0;
- uint8_t __otx2_io ree1;
-};
-
-#define MSIX_VECTOR_INVALID 0xFFFF
-#define MAX_RVU_BLKLF_CNT 256
-
-struct msix_offset_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io npa_msixoff;
- uint16_t __otx2_io nix_msixoff;
- uint16_t __otx2_io sso;
- uint16_t __otx2_io ssow;
- uint16_t __otx2_io timlfs;
- uint16_t __otx2_io cptlfs;
- uint16_t __otx2_io sso_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ssow_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io timlf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io cptlf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io cpt1_lfs;
- uint16_t __otx2_io ree0_lfs;
- uint16_t __otx2_io ree1_lfs;
- uint16_t __otx2_io cpt1_lf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ree0_lf_msixoff[MAX_RVU_BLKLF_CNT];
- uint16_t __otx2_io ree1_lf_msixoff[MAX_RVU_BLKLF_CNT];
-};
-
-/* CGX mbox message formats */
-
-struct cgx_stats_rsp {
- struct mbox_msghdr hdr;
-#define CGX_RX_STATS_COUNT 13
-#define CGX_TX_STATS_COUNT 18
- uint64_t __otx2_io rx_stats[CGX_RX_STATS_COUNT];
- uint64_t __otx2_io tx_stats[CGX_TX_STATS_COUNT];
-};
-
-struct cgx_fec_stats_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io fec_corr_blks;
- uint64_t __otx2_io fec_uncorr_blks;
-};
-
-/* Structure for requesting the operation for
- * setting/getting the MAC address of the CGX interface
- */
-struct cgx_mac_addr_set_or_get {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-/* Structure for requesting the operation to
- * add DMAC filter entry into CGX interface
- */
-struct cgx_mac_addr_add_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-/* Structure for response against the operation to
- * add DMAC filter entry into CGX interface
- */
-struct cgx_mac_addr_add_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io index;
-};
-
-/* Structure for requesting the operation to
- * delete DMAC filter entry from CGX interface
- */
-struct cgx_mac_addr_del_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io index;
-};
-
-/* Structure for response against the operation to
- * get maximum supported DMAC filter entries
- */
-struct cgx_max_dmac_entries_get_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io max_dmac_filters;
-};
-
-struct cgx_link_user_info {
- uint64_t __otx2_io link_up:1;
- uint64_t __otx2_io full_duplex:1;
- uint64_t __otx2_io lmac_type_id:4;
- uint64_t __otx2_io speed:20; /* speed in Mbps */
- uint64_t __otx2_io an:1; /* AN supported or not */
- uint64_t __otx2_io fec:2; /* FEC type if enabled else 0 */
- uint64_t __otx2_io port:8;
-#define LMACTYPE_STR_LEN 16
- char lmac_type[LMACTYPE_STR_LEN];
-};
-
-struct cgx_link_info_msg {
- struct mbox_msghdr hdr;
- struct cgx_link_user_info link_info;
-};
-
-struct cgx_ptp_rx_info_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io ptp_en;
-};
-
-struct cgx_pause_frm_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io set;
- /* set = 1 if the request is to configure pause frames */
- /* set = 0 if the request is to fetch the pause frame config */
- uint8_t __otx2_io rx_pause;
- uint8_t __otx2_io tx_pause;
-};
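/* Hedged usage sketch for cgx_pause_frm_cfg: enable RX and TX pause
 * frames. The allocator name follows from the CGX_CFG_PAUSE_FRM entry
 * in MBOX_MESSAGES.
 */
static int
cgx_flow_ctrl_sketch(struct otx2_mbox *mbox)
{
	struct cgx_pause_frm_cfg *req;

	req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
	if (req == NULL)
		return -ENOSPC;
	req->set = 1;      /* configure rather than fetch */
	req->rx_pause = 1;
	req->tx_pause = 1;

	return otx2_mbox_process(mbox);
}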
-
-struct sfp_eeprom_s {
-#define SFP_EEPROM_SIZE 256
- uint16_t __otx2_io sff_id;
- uint8_t __otx2_io buf[SFP_EEPROM_SIZE];
- uint64_t __otx2_io reserved;
-};
-
-enum fec_type {
- OTX2_FEC_NONE,
- OTX2_FEC_BASER,
- OTX2_FEC_RS,
-};
-
-struct phy_s {
- uint64_t __otx2_io can_change_mod_type : 1;
- uint64_t __otx2_io mod_type : 1;
-};
-
-struct cgx_lmac_fwdata_s {
- uint16_t __otx2_io rw_valid;
- uint64_t __otx2_io supported_fec;
- uint64_t __otx2_io supported_an;
- uint64_t __otx2_io supported_link_modes;
- /* Only applicable if AN is supported */
- uint64_t __otx2_io advertised_fec;
- uint64_t __otx2_io advertised_link_modes;
- /* Only applicable if SFP/QSFP slot is present */
- struct sfp_eeprom_s sfp_eeprom;
- struct phy_s phy;
-#define LMAC_FWDATA_RESERVED_MEM 1023
- uint64_t __otx2_io reserved[LMAC_FWDATA_RESERVED_MEM];
-};
-
-struct cgx_fw_data {
- struct mbox_msghdr hdr;
- struct cgx_lmac_fwdata_s fwdata;
-};
-
-struct fec_mode {
- struct mbox_msghdr hdr;
- int __otx2_io fec;
-};
-
-struct cgx_set_link_state_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io enable;
-};
-
-struct cgx_phy_mod_type {
- struct mbox_msghdr hdr;
- int __otx2_io mod;
-};
-
-struct cgx_set_link_mode_args {
- uint32_t __otx2_io speed;
- uint8_t __otx2_io duplex;
- uint8_t __otx2_io an;
- uint8_t __otx2_io ports;
- uint64_t __otx2_io mode;
-};
-
-struct cgx_set_link_mode_req {
- struct mbox_msghdr hdr;
- struct cgx_set_link_mode_args args;
-};
-
-struct cgx_set_link_mode_rsp {
- struct mbox_msghdr hdr;
- int __otx2_io status;
-};
-
-/* NPA mbox message formats */
-
-/* NPA mailbox error codes
- * Range 301 - 400.
- */
-enum npa_af_status {
- NPA_AF_ERR_PARAM = -301,
- NPA_AF_ERR_AQ_FULL = -302,
- NPA_AF_ERR_AQ_ENQUEUE = -303,
- NPA_AF_ERR_AF_LF_INVALID = -304,
- NPA_AF_ERR_AF_LF_ALLOC = -305,
- NPA_AF_ERR_LF_RESET = -306,
-};
-
-#define NPA_AURA_SZ_0 0
-#define NPA_AURA_SZ_128 1
-#define NPA_AURA_SZ_256 2
-#define NPA_AURA_SZ_512 3
-#define NPA_AURA_SZ_1K 4
-#define NPA_AURA_SZ_2K 5
-#define NPA_AURA_SZ_4K 6
-#define NPA_AURA_SZ_8K 7
-#define NPA_AURA_SZ_16K 8
-#define NPA_AURA_SZ_32K 9
-#define NPA_AURA_SZ_64K 10
-#define NPA_AURA_SZ_128K 11
-#define NPA_AURA_SZ_256K 12
-#define NPA_AURA_SZ_512K 13
-#define NPA_AURA_SZ_1M 14
-#define NPA_AURA_SZ_MAX 15
-
-/* For NPA LF context alloc and init */
-struct npa_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- int __otx2_io aura_sz; /* No of auras. See NPA_AURA_SZ_* */
- uint32_t __otx2_io nr_pools; /* No of pools */
- uint64_t __otx2_io way_mask;
-};
-
-struct npa_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io stack_pg_ptrs; /* No of ptrs per stack page */
- uint32_t __otx2_io stack_pg_bytes; /* Size of stack page */
- uint16_t __otx2_io qints; /* NPA_AF_CONST::QINTS */
-};
-
-/* NPA AQ enqueue msg */
-struct npa_aq_enq_req {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io aura_id;
- uint8_t __otx2_io ctype;
- uint8_t __otx2_io op;
- union {
- /* Valid when op == WRITE/INIT and ctype == AURA.
- * LF fills the pool_id in aura.pool_addr. AF will translate
- * the pool_id to pool context pointer.
- */
- __otx2_io struct npa_aura_s aura;
- /* Valid when op == WRITE/INIT and ctype == POOL */
- __otx2_io struct npa_pool_s pool;
- };
- /* Mask data when op == WRITE (1=write, 0=don't write) */
- union {
- /* Valid when op == WRITE and ctype == AURA */
- __otx2_io struct npa_aura_s aura_mask;
- /* Valid when op == WRITE and ctype == POOL */
- __otx2_io struct npa_pool_s pool_mask;
- };
-};
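/* Hedged sketch of the WRITE + mask convention above: the AF updates
 * only the context fields whose mask bits are set. NPA_AQ_CTYPE_AURA
 * and NPA_AQ_INSTOP_WRITE are assumed from the common HW headers, and
 * the allocator is assumed to zero-fill the message.
 */
static int
npa_aura_disable_sketch(struct otx2_mbox *mbox, uint32_t aura_id)
{
	struct npa_aq_enq_req *aq;

	aq = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
	if (aq == NULL)
		return -ENOSPC;
	aq->aura_id = aura_id;
	aq->ctype = NPA_AQ_CTYPE_AURA;
	aq->op = NPA_AQ_INSTOP_WRITE;
	aq->aura.ena = 0;                       /* new value: disabled */
	aq->aura_mask.ena = ~aq->aura_mask.ena; /* write only 'ena' */

	return otx2_mbox_process(mbox);
}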
-
-struct npa_aq_enq_rsp {
- struct mbox_msghdr hdr;
- union {
- /* Valid when op == READ and ctype == AURA */
- __otx2_io struct npa_aura_s aura;
- /* Valid when op == READ and ctype == POOL */
- __otx2_io struct npa_pool_s pool;
- };
-};
-
-/* Disable all contexts of type 'ctype' */
-struct hwctx_disable_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io ctype;
-};
-
-/* NIX mbox message formats */
-
-/* NIX mailbox error codes
- * Range 401 - 500.
- */
-enum nix_af_status {
- NIX_AF_ERR_PARAM = -401,
- NIX_AF_ERR_AQ_FULL = -402,
- NIX_AF_ERR_AQ_ENQUEUE = -403,
- NIX_AF_ERR_AF_LF_INVALID = -404,
- NIX_AF_ERR_AF_LF_ALLOC = -405,
- NIX_AF_ERR_TLX_ALLOC_FAIL = -406,
- NIX_AF_ERR_TLX_INVALID = -407,
- NIX_AF_ERR_RSS_SIZE_INVALID = -408,
- NIX_AF_ERR_RSS_GRPS_INVALID = -409,
- NIX_AF_ERR_FRS_INVALID = -410,
- NIX_AF_ERR_RX_LINK_INVALID = -411,
- NIX_AF_INVAL_TXSCHQ_CFG = -412,
- NIX_AF_SMQ_FLUSH_FAILED = -413,
- NIX_AF_ERR_LF_RESET = -414,
- NIX_AF_ERR_RSS_NOSPC_FIELD = -415,
- NIX_AF_ERR_RSS_NOSPC_ALGO = -416,
- NIX_AF_ERR_MARK_CFG_FAIL = -417,
- NIX_AF_ERR_LSO_CFG_FAIL = -418,
- NIX_AF_INVAL_NPA_PF_FUNC = -419,
- NIX_AF_INVAL_SSO_PF_FUNC = -420,
- NIX_AF_ERR_TX_VTAG_NOSPC = -421,
- NIX_AF_ERR_RX_VTAG_INUSE = -422,
- NIX_AF_ERR_PTP_CONFIG_FAIL = -423,
-};
-
-/* For NIX LF context alloc and init */
-struct nix_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint32_t __otx2_io rq_cnt; /* No of receive queues */
- uint32_t __otx2_io sq_cnt; /* No of send queues */
- uint32_t __otx2_io cq_cnt; /* No of completion queues */
- uint8_t __otx2_io xqe_sz;
- uint16_t __otx2_io rss_sz;
- uint8_t __otx2_io rss_grps;
- uint16_t __otx2_io npa_func;
- /* RVU_DEFAULT_PF_FUNC == default pf_func associated with lf */
- uint16_t __otx2_io sso_func;
- uint64_t __otx2_io rx_cfg; /* See NIX_AF_LF(0..127)_RX_CFG */
- uint64_t __otx2_io way_mask;
-#define NIX_LF_RSS_TAG_LSB_AS_ADDER BIT_ULL(0)
- uint64_t flags;
-};
-
-struct nix_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sqb_size;
- uint16_t __otx2_io rx_chan_base;
- uint16_t __otx2_io tx_chan_base;
- uint8_t __otx2_io rx_chan_cnt; /* Total number of RX channels */
- uint8_t __otx2_io tx_chan_cnt; /* Total number of TX channels */
- uint8_t __otx2_io lso_tsov4_idx;
- uint8_t __otx2_io lso_tsov6_idx;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
- uint8_t __otx2_io lf_rx_stats; /* NIX_AF_CONST1::LF_RX_STATS */
- uint8_t __otx2_io lf_tx_stats; /* NIX_AF_CONST1::LF_TX_STATS */
- uint16_t __otx2_io cints; /* NIX_AF_CONST2::CINTS */
- uint16_t __otx2_io qints; /* NIX_AF_CONST2::QINTS */
- uint8_t __otx2_io hw_rx_tstamp_en; /* Set if RX timestamping is enabled */
- uint8_t __otx2_io cgx_links; /* No. of CGX links present in HW */
- uint8_t __otx2_io lbk_links; /* No. of LBK links present in HW */
- uint8_t __otx2_io sdp_links; /* No. of SDP links present in HW */
- uint8_t __otx2_io tx_link; /* Transmit channel link number */
-};
-
-struct nix_lf_free_req {
- struct mbox_msghdr hdr;
-#define NIX_LF_DISABLE_FLOWS BIT_ULL(0)
-#define NIX_LF_DONT_FREE_TX_VTAG BIT_ULL(1)
- uint64_t __otx2_io flags;
-};
-
-/* NIX AQ enqueue msg */
-struct nix_aq_enq_req {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io qidx;
- uint8_t __otx2_io ctype;
- uint8_t __otx2_io op;
- union {
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RQ */
- __otx2_io struct nix_rq_ctx_s rq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_SQ */
- __otx2_io struct nix_sq_ctx_s sq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_CQ */
- __otx2_io struct nix_cq_ctx_s cq;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_RSS */
- __otx2_io struct nix_rsse_s rss;
- /* Valid when op == WRITE/INIT and ctype == NIX_AQ_CTYPE_MCE */
- __otx2_io struct nix_rx_mce_s mce;
- };
- /* Mask data when op == WRITE (1=write, 0=don't write) */
- union {
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RQ */
- __otx2_io struct nix_rq_ctx_s rq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_SQ */
- __otx2_io struct nix_sq_ctx_s sq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_CQ */
- __otx2_io struct nix_cq_ctx_s cq_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_RSS */
- __otx2_io struct nix_rsse_s rss_mask;
- /* Valid when op == WRITE and ctype == NIX_AQ_CTYPE_MCE */
- __otx2_io struct nix_rx_mce_s mce_mask;
- };
-};
-
-struct nix_aq_enq_rsp {
- struct mbox_msghdr hdr;
- union {
- __otx2_io struct nix_rq_ctx_s rq;
- __otx2_io struct nix_sq_ctx_s sq;
- __otx2_io struct nix_cq_ctx_s cq;
- __otx2_io struct nix_rsse_s rss;
- __otx2_io struct nix_rx_mce_s mce;
- };
-};
-
-/* Tx scheduler/shaper mailbox messages */
-
-#define MAX_TXSCHQ_PER_FUNC 128
-
-struct nix_txsch_alloc_req {
- struct mbox_msghdr hdr;
- /* Scheduler queue count request at each level */
- uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
-};
-
-struct nix_txsch_alloc_rsp {
- struct mbox_msghdr hdr;
- /* Scheduler queue count allocated at each level */
- uint16_t __otx2_io schq_contig[NIX_TXSCH_LVL_CNT]; /* Contig. queues */
- uint16_t __otx2_io schq[NIX_TXSCH_LVL_CNT]; /* Non-Contig. queues */
- /* Scheduler queue list allocated at each level */
- uint16_t __otx2_io
- schq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- uint16_t __otx2_io schq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- /* Traffic aggregation scheduler level */
- uint8_t __otx2_io aggr_level;
- /* Aggregation lvl's RR_PRIO config */
- uint8_t __otx2_io aggr_lvl_rr_prio;
- /* LINKX_CFG CSRs mapped to TL3 or TL2's index ? */
- uint8_t __otx2_io link_cfg_lvl;
-};
-
-struct nix_txsch_free_req {
- struct mbox_msghdr hdr;
-#define TXSCHQ_FREE_ALL BIT_ULL(0)
- uint16_t __otx2_io flags;
- /* Scheduler queue level to be freed */
- uint16_t __otx2_io schq_lvl;
- /* List of scheduler queues to be freed */
- uint16_t __otx2_io schq;
-};
-
-struct nix_txschq_config {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io lvl; /* SMQ/MDQ/TL4/TL3/TL2/TL1 */
- uint8_t __otx2_io read;
-#define TXSCHQ_IDX_SHIFT 16
-#define TXSCHQ_IDX_MASK (BIT_ULL(10) - 1)
-#define TXSCHQ_IDX(reg, shift) (((reg) >> (shift)) & TXSCHQ_IDX_MASK)
- uint8_t __otx2_io num_regs;
-#define MAX_REGS_PER_MBOX_MSG 20
- uint64_t __otx2_io reg[MAX_REGS_PER_MBOX_MSG];
- uint64_t __otx2_io regval[MAX_REGS_PER_MBOX_MSG];
- /* All 0's => overwrite with new value */
- uint64_t __otx2_io regval_mask[MAX_REGS_PER_MBOX_MSG];
-};
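/* Hedged sketch for nix_txschq_config: program a single TL2 CSR via
 * the parallel reg/regval arrays. The register offset and value are
 * illustrative placeholders; regval_mask is left zero, which per the
 * comment above means a full overwrite (assuming zero-filled alloc).
 */
static int
txschq_cfg_sketch(struct otx2_mbox *mbox, uint64_t reg, uint64_t val)
{
	struct nix_txschq_config *req;

	req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
	if (req == NULL)
		return -ENOSPC;
	req->lvl = NIX_TXSCH_LVL_TL2;
	req->num_regs = 1;
	req->reg[0] = reg;     /* CSR offset (placeholder) */
	req->regval[0] = val;

	return otx2_mbox_process(mbox);
}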
-
-struct nix_vtag_config {
- struct mbox_msghdr hdr;
- /* '0' for 4 octet VTAG, '1' for 8 octet VTAG */
- uint8_t __otx2_io vtag_size;
- /* cfg_type is '0' for tx vlan cfg
- * cfg_type is '1' for rx vlan cfg
- */
- uint8_t __otx2_io cfg_type;
- union {
- /* Valid when cfg_type is '0' */
- struct {
- uint64_t __otx2_io vtag0;
- uint64_t __otx2_io vtag1;
-
- /* cfg_vtag0 & cfg_vtag1 fields are valid
- * when free_vtag0 & free_vtag1 are '0's.
- */
- /* cfg_vtag0 = 1 to configure vtag0 */
- uint8_t __otx2_io cfg_vtag0 :1;
- /* cfg_vtag1 = 1 to configure vtag1 */
- uint8_t __otx2_io cfg_vtag1 :1;
-
- /* vtag0_idx & vtag1_idx are only valid when
- * both cfg_vtag0 & cfg_vtag1 are '0's,
- * these fields are used along with free_vtag0
- * & free_vtag1 to free the nix lf's tx_vlan
- * configuration.
- *
- * Denotes the indices of tx_vtag def registers
- * that need to be cleared and freed.
- */
- int __otx2_io vtag0_idx;
- int __otx2_io vtag1_idx;
-
- /* Free_vtag0 & free_vtag1 fields are valid
- * when cfg_vtag0 & cfg_vtag1 are '0's.
- */
- /* Free_vtag0 = 1 clears vtag0 configuration
- * vtag0_idx denotes the index to be cleared.
- */
- uint8_t __otx2_io free_vtag0 :1;
- /* Free_vtag1 = 1 clears vtag1 configuration
- * vtag1_idx denotes the index to be cleared.
- */
- uint8_t __otx2_io free_vtag1 :1;
- } tx;
-
- /* Valid when cfg_type is '1' */
- struct {
- /* Rx vtag type index, valid values are in 0..7 range */
- uint8_t __otx2_io vtag_type;
- /* Rx vtag strip */
- uint8_t __otx2_io strip_vtag :1;
- /* Rx vtag capture */
- uint8_t __otx2_io capture_vtag :1;
- } rx;
- };
-};
-
-struct nix_vtag_config_rsp {
- struct mbox_msghdr hdr;
- /* Indices of tx_vtag def registers used to configure
- * tx vtag0 & vtag1 headers; these indices are valid
- * only when the nix_vtag_config mbox requested vtag0
- * and/or vtag1 configuration.
- */
- int __otx2_io vtag0_idx;
- int __otx2_io vtag1_idx;
-};
-
-struct nix_rss_flowkey_cfg {
- struct mbox_msghdr hdr;
- int __otx2_io mcam_index; /* MCAM entry index to modify */
- uint32_t __otx2_io flowkey_cfg; /* Flowkey types selected */
-#define FLOW_KEY_TYPE_PORT BIT(0)
-#define FLOW_KEY_TYPE_IPV4 BIT(1)
-#define FLOW_KEY_TYPE_IPV6 BIT(2)
-#define FLOW_KEY_TYPE_TCP BIT(3)
-#define FLOW_KEY_TYPE_UDP BIT(4)
-#define FLOW_KEY_TYPE_SCTP BIT(5)
-#define FLOW_KEY_TYPE_NVGRE BIT(6)
-#define FLOW_KEY_TYPE_VXLAN BIT(7)
-#define FLOW_KEY_TYPE_GENEVE BIT(8)
-#define FLOW_KEY_TYPE_ETH_DMAC BIT(9)
-#define FLOW_KEY_TYPE_IPV6_EXT BIT(10)
-#define FLOW_KEY_TYPE_GTPU BIT(11)
-#define FLOW_KEY_TYPE_INNR_IPV4 BIT(12)
-#define FLOW_KEY_TYPE_INNR_IPV6 BIT(13)
-#define FLOW_KEY_TYPE_INNR_TCP BIT(14)
-#define FLOW_KEY_TYPE_INNR_UDP BIT(15)
-#define FLOW_KEY_TYPE_INNR_SCTP BIT(16)
-#define FLOW_KEY_TYPE_INNR_ETH_DMAC BIT(17)
-#define FLOW_KEY_TYPE_CH_LEN_90B BIT(18)
-#define FLOW_KEY_TYPE_CUSTOM0 BIT(19)
-#define FLOW_KEY_TYPE_VLAN BIT(20)
-#define FLOW_KEY_TYPE_L4_DST BIT(28)
-#define FLOW_KEY_TYPE_L4_SRC BIT(29)
-#define FLOW_KEY_TYPE_L3_DST BIT(30)
-#define FLOW_KEY_TYPE_L3_SRC BIT(31)
- uint8_t __otx2_io group; /* RSS context or group */
-};
-
-struct nix_rss_flowkey_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io alg_idx; /* Selected algo index */
-};
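/* Hedged sketch for nix_rss_flowkey_cfg: hash on the IPv4 addresses
 * plus TCP ports and read back the selected algorithm index. Using -1
 * as mcam_index for the default RSS context is an assumption here.
 */
static int
rss_flowkey_sketch(struct otx2_mbox *mbox, uint8_t *alg_idx)
{
	struct nix_rss_flowkey_cfg_rsp *rsp;
	struct nix_rss_flowkey_cfg *cfg;
	int rc;

	cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
	if (cfg == NULL)
		return -ENOSPC;
	cfg->mcam_index = -1; /* default RSS context (assumed) */
	cfg->flowkey_cfg = FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_TCP;
	cfg->group = 0;       /* default RSS group */

	rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
	if (rc)
		return rc;
	*alg_idx = rsp->alg_idx;

	return 0;
}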
-
-struct nix_set_mac_addr {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-struct nix_get_mac_addr_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mac_addr[RTE_ETHER_ADDR_LEN];
-};
-
-struct nix_mark_format_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io offset;
- uint8_t __otx2_io y_mask;
- uint8_t __otx2_io y_val;
- uint8_t __otx2_io r_mask;
- uint8_t __otx2_io r_val;
-};
-
-struct nix_mark_format_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io mark_format_idx;
-};
-
-struct nix_lso_format_cfg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io field_mask;
- uint64_t __otx2_io fields[NIX_LSO_FIELD_MAX];
-};
-
-struct nix_lso_format_cfg_rsp {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io lso_format_idx;
-};
-
-struct nix_rx_mode {
- struct mbox_msghdr hdr;
-#define NIX_RX_MODE_UCAST BIT(0)
-#define NIX_RX_MODE_PROMISC BIT(1)
-#define NIX_RX_MODE_ALLMULTI BIT(2)
- uint16_t __otx2_io mode;
-};
-
-struct nix_rx_cfg {
- struct mbox_msghdr hdr;
-#define NIX_RX_OL3_VERIFY BIT(0)
-#define NIX_RX_OL4_VERIFY BIT(1)
- uint8_t __otx2_io len_verify; /* Outer L3/L4 len check */
-#define NIX_RX_CSUM_OL4_VERIFY BIT(0)
- uint8_t __otx2_io csum_verify; /* Outer L4 checksum verification */
-};
-
-struct nix_frs_cfg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io update_smq; /* Update SMQ's min/max lens */
- uint8_t __otx2_io update_minlen; /* Set minlen also */
- uint8_t __otx2_io sdp_link; /* Set SDP RX link */
- uint16_t __otx2_io maxlen;
- uint16_t __otx2_io minlen;
-};
-
-struct nix_set_vlan_tpid {
- struct mbox_msghdr hdr;
-#define NIX_VLAN_TYPE_INNER 0
-#define NIX_VLAN_TYPE_OUTER 1
- uint8_t __otx2_io vlan_type;
- uint16_t __otx2_io tpid;
-};
-
-struct nix_bp_cfg_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io chan_base; /* Starting channel number */
- uint8_t __otx2_io chan_cnt; /* Number of channels */
- uint8_t __otx2_io bpid_per_chan;
- /* bpid_per_chan = 0 assigns single bp id for range of channels */
- /* bpid_per_chan = 1 assigns separate bp id for each channel */
-};
-
-/* A PF can be mapped to either a CGX or LBK interface,
- * so a maximum of 64 channels is possible.
- */
-#define NIX_MAX_CHAN 64
-struct nix_bp_cfg_rsp {
- struct mbox_msghdr hdr;
- /* Channel and bpid mapping */
- uint16_t __otx2_io chan_bpid[NIX_MAX_CHAN];
- /* Number of channel for which bpids are assigned */
- uint8_t __otx2_io chan_cnt;
-};
-
-/* Global NIX inline IPSec configuration */
-struct nix_inline_ipsec_cfg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io cpt_credit;
- struct {
- uint8_t __otx2_io egrp;
- uint8_t __otx2_io opcode;
- } gen_cfg;
- struct {
- uint16_t __otx2_io cpt_pf_func;
- uint8_t __otx2_io cpt_slot;
- } inst_qsel;
- uint8_t __otx2_io enable;
-};
-
-/* Per NIX LF inline IPSec configuration */
-struct nix_inline_ipsec_lf_cfg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io sa_base_addr;
- struct {
- uint32_t __otx2_io tag_const;
- uint16_t __otx2_io lenm1_max;
- uint8_t __otx2_io sa_pow2_size;
- uint8_t __otx2_io tt;
- } ipsec_cfg0;
- struct {
- uint32_t __otx2_io sa_idx_max;
- uint8_t __otx2_io sa_idx_w;
- } ipsec_cfg1;
- uint8_t __otx2_io enable;
-};
-
-/* SSO mailbox error codes
- * Range 501 - 600.
- */
-enum sso_af_status {
- SSO_AF_ERR_PARAM = -501,
- SSO_AF_ERR_LF_INVALID = -502,
- SSO_AF_ERR_AF_LF_ALLOC = -503,
- SSO_AF_ERR_GRP_EBUSY = -504,
- SSO_AF_INVAL_NPA_PF_FUNC = -505,
-};
-
-struct sso_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io xaq_buf_size;
- uint32_t __otx2_io xaq_wq_entries;
- uint32_t __otx2_io in_unit_entries;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_lf_free_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hwgrps;
-};
-
-/* SSOW mailbox error codes
- * Range 601 - 700.
- */
-enum ssow_af_status {
- SSOW_AF_ERR_PARAM = -601,
- SSOW_AF_ERR_LF_INVALID = -602,
- SSOW_AF_ERR_AF_LF_ALLOC = -603,
-};
-
-struct ssow_lf_alloc_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hws;
-};
-
-struct ssow_lf_free_req {
- struct mbox_msghdr hdr;
- int __otx2_io node;
- uint16_t __otx2_io hws;
-};
-
-struct sso_hw_setconfig {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io npa_aura_id;
- uint16_t __otx2_io npa_pf_func;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_release_xaq {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io hwgrps;
-};
-
-struct sso_info_req {
- struct mbox_msghdr hdr;
- union {
- uint16_t __otx2_io grp;
- uint16_t __otx2_io hws;
- };
-};
-
-struct sso_grp_priority {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint8_t __otx2_io priority;
- uint8_t __otx2_io affinity;
- uint8_t __otx2_io weight;
-};
-
-struct sso_grp_qos_cfg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint32_t __otx2_io xaq_limit;
- uint16_t __otx2_io taq_thr;
- uint16_t __otx2_io iaq_thr;
-};
-
-struct sso_grp_stats {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io grp;
- uint64_t __otx2_io ws_pc;
- uint64_t __otx2_io ext_pc;
- uint64_t __otx2_io wa_pc;
- uint64_t __otx2_io ts_pc;
- uint64_t __otx2_io ds_pc;
- uint64_t __otx2_io dq_pc;
- uint64_t __otx2_io aw_status;
- uint64_t __otx2_io page_cnt;
-};
-
-struct sso_hws_stats {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io hws;
- uint64_t __otx2_io arbitration;
-};
-
-/* CPT mailbox error codes
- * Range 901 - 1000.
- */
-enum cpt_af_status {
- CPT_AF_ERR_PARAM = -901,
- CPT_AF_ERR_GRP_INVALID = -902,
- CPT_AF_ERR_LF_INVALID = -903,
- CPT_AF_ERR_ACCESS_DENIED = -904,
- CPT_AF_ERR_SSO_PF_FUNC_INVALID = -905,
- CPT_AF_ERR_NIX_PF_FUNC_INVALID = -906,
- CPT_AF_ERR_INLINE_IPSEC_INB_ENA = -907,
- CPT_AF_ERR_INLINE_IPSEC_OUT_ENA = -908
-};
-
-/* CPT mbox message formats */
-
-struct cpt_rd_wr_reg_msg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io reg_offset;
- uint64_t __otx2_io *ret_val;
- uint64_t __otx2_io val;
- uint8_t __otx2_io is_write;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- uint8_t __otx2_io blkaddr;
-};
-
-struct cpt_set_crypto_grp_req_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io crypto_eng_grp;
-};
-
-struct cpt_lf_alloc_req_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io nix_pf_func;
- uint16_t __otx2_io sso_pf_func;
- uint16_t __otx2_io eng_grpmask;
- /* BLKADDR_CPT0/BLKADDR_CPT1 or 0 for BLKADDR_CPT0 */
- uint8_t __otx2_io blkaddr;
-};
-
-struct cpt_lf_alloc_rsp_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io eng_grpmsk;
-};
-
-#define CPT_INLINE_INBOUND 0
-#define CPT_INLINE_OUTBOUND 1
-
-struct cpt_inline_ipsec_cfg_msg {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io enable;
- uint8_t __otx2_io slot;
- uint8_t __otx2_io dir;
- uint16_t __otx2_io sso_pf_func; /* Inbound path SSO_PF_FUNC */
- uint16_t __otx2_io nix_pf_func; /* Outbound path NIX_PF_FUNC */
-};
-
-struct cpt_rx_inline_lf_cfg_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io sso_pf_func;
-};
-
-enum cpt_eng_type {
- CPT_ENG_TYPE_AE = 1,
- CPT_ENG_TYPE_SE = 2,
- CPT_ENG_TYPE_IE = 3,
- CPT_MAX_ENG_TYPES,
-};
-
-/* CPT HW capabilities */
-union cpt_eng_caps {
- uint64_t __otx2_io u;
- struct {
- uint64_t __otx2_io reserved_0_4:5;
- uint64_t __otx2_io mul:1;
- uint64_t __otx2_io sha1_sha2:1;
- uint64_t __otx2_io chacha20:1;
- uint64_t __otx2_io zuc_snow3g:1;
- uint64_t __otx2_io sha3:1;
- uint64_t __otx2_io aes:1;
- uint64_t __otx2_io kasumi:1;
- uint64_t __otx2_io des:1;
- uint64_t __otx2_io crc:1;
- uint64_t __otx2_io reserved_14_63:50;
- };
-};
-
-struct cpt_caps_rsp_msg {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cpt_pf_drv_version;
- uint8_t __otx2_io cpt_revision;
- union cpt_eng_caps eng_caps[CPT_MAX_ENG_TYPES];
-};
-
-/* NPC mbox message structs */
-
-#define NPC_MCAM_ENTRY_INVALID 0xFFFF
-#define NPC_MCAM_INVALID_MAP 0xFFFF
-
-/* NPC mailbox error codes
- * Range 701 - 800.
- */
-enum npc_af_status {
- NPC_MCAM_INVALID_REQ = -701,
- NPC_MCAM_ALLOC_DENIED = -702,
- NPC_MCAM_ALLOC_FAILED = -703,
- NPC_MCAM_PERM_DENIED = -704,
- NPC_AF_ERR_HIGIG_CONFIG_FAIL = -705,
-};
-
-struct npc_mcam_alloc_entry_req {
- struct mbox_msghdr hdr;
-#define NPC_MAX_NONCONTIG_ENTRIES 256
- uint8_t __otx2_io contig; /* Contiguous entries ? */
-#define NPC_MCAM_ANY_PRIO 0
-#define NPC_MCAM_LOWER_PRIO 1
-#define NPC_MCAM_HIGHER_PRIO 2
- uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */
- uint16_t __otx2_io ref_entry;
- uint16_t __otx2_io count; /* Number of entries requested */
-};
-
-struct npc_mcam_alloc_entry_rsp {
- struct mbox_msghdr hdr;
- /* Entry alloc'ed or start index if contiguous.
- * Invalid in case of non-contiguous.
- */
- uint16_t __otx2_io entry;
- uint16_t __otx2_io count; /* Number of entries allocated */
- uint16_t __otx2_io free_count; /* Number of entries available */
- uint16_t __otx2_io entry_list[NPC_MAX_NONCONTIG_ENTRIES];
-};
-
-struct npc_mcam_free_entry_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry; /* Entry index to be freed */
- uint8_t __otx2_io all; /* Free all entries alloc'ed to this PFVF */
-};
-
-struct mcam_entry {
-#define NPC_MAX_KWS_IN_KEY 7 /* Number of keywords in max key width */
- uint64_t __otx2_io kw[NPC_MAX_KWS_IN_KEY];
- uint64_t __otx2_io kw_mask[NPC_MAX_KWS_IN_KEY];
- uint64_t __otx2_io action;
- uint64_t __otx2_io vtag_action;
-};
-
-struct npc_mcam_write_entry_req {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint16_t __otx2_io entry; /* MCAM entry to write this match key */
- uint16_t __otx2_io cntr; /* Counter for this MCAM entry */
- uint8_t __otx2_io intf; /* Rx or Tx interface */
- uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */
- uint8_t __otx2_io set_cntr; /* Set counter for this entry ? */
-};
-
-/* Enable/Disable a given entry */
-struct npc_mcam_ena_dis_entry_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
-};
-
-struct npc_mcam_shift_entry_req {
- struct mbox_msghdr hdr;
-#define NPC_MCAM_MAX_SHIFTS 64
- uint16_t __otx2_io curr_entry[NPC_MCAM_MAX_SHIFTS];
- uint16_t __otx2_io new_entry[NPC_MCAM_MAX_SHIFTS];
- uint16_t __otx2_io shift_count; /* Number of entries to shift */
-};
-
-struct npc_mcam_shift_entry_rsp {
- struct mbox_msghdr hdr;
- /* Index in 'curr_entry', not entry itself */
- uint16_t __otx2_io failed_entry_idx;
-};
-
-struct npc_mcam_alloc_counter_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io contig; /* Contiguous counters ? */
-#define NPC_MAX_NONCONTIG_COUNTERS 64
- uint16_t __otx2_io count; /* Number of counters requested */
-};
-
-struct npc_mcam_alloc_counter_rsp {
- struct mbox_msghdr hdr;
- /* Counter alloc'ed or start idx if contiguous.
- * Invalid in case of non-contiguous.
- */
- uint16_t __otx2_io cntr;
- uint16_t __otx2_io count; /* Number of counters allocated */
- uint16_t __otx2_io cntr_list[NPC_MAX_NONCONTIG_COUNTERS];
-};
-
-struct npc_mcam_oper_counter_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cntr; /* Free a counter or clear/fetch its stats */
-};
-
-struct npc_mcam_oper_counter_rsp {
- struct mbox_msghdr hdr;
- /* Valid only when fetching the counter's stats */
- uint64_t __otx2_io stat;
-};
-
-struct npc_mcam_unmap_counter_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io cntr;
- uint16_t __otx2_io entry; /* Entry and counter to be unmapped */
- uint8_t __otx2_io all; /* Unmap all entries using this counter ? */
-};
-
-struct npc_mcam_alloc_and_write_entry_req {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint16_t __otx2_io ref_entry;
- uint8_t __otx2_io priority; /* Lower or higher w.r.t ref_entry */
- uint8_t __otx2_io intf; /* Rx or Tx interface */
- uint8_t __otx2_io enable_entry;/* Enable this MCAM entry ? */
- uint8_t __otx2_io alloc_cntr; /* Allocate counter and map ? */
-};
-
-struct npc_mcam_alloc_and_write_entry_rsp {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io cntr;
-};
-
-struct npc_get_kex_cfg_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io rx_keyx_cfg; /* NPC_AF_INTF(0)_KEX_CFG */
- uint64_t __otx2_io tx_keyx_cfg; /* NPC_AF_INTF(1)_KEX_CFG */
-#define NPC_MAX_INTF 2
-#define NPC_MAX_LID 8
-#define NPC_MAX_LT 16
-#define NPC_MAX_LD 2
-#define NPC_MAX_LFL 16
- /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
- uint64_t __otx2_io kex_ld_flags[NPC_MAX_LD];
- /* NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG */
- uint64_t __otx2_io
- intf_lid_lt_ld[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD];
- /* NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG */
- uint64_t __otx2_io
- intf_ld_flags[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
-#define MKEX_NAME_LEN 128
- uint8_t __otx2_io mkex_pfl_name[MKEX_NAME_LEN];
-};
-
-enum header_fields {
- NPC_DMAC,
- NPC_SMAC,
- NPC_ETYPE,
- NPC_OUTER_VID,
- NPC_TOS,
- NPC_SIP_IPV4,
- NPC_DIP_IPV4,
- NPC_SIP_IPV6,
- NPC_DIP_IPV6,
- NPC_SPORT_TCP,
- NPC_DPORT_TCP,
- NPC_SPORT_UDP,
- NPC_DPORT_UDP,
- NPC_FDSA_VAL,
- NPC_HEADER_FIELDS_MAX,
-};
-
-struct flow_msg {
- unsigned char __otx2_io dmac[6];
- unsigned char __otx2_io smac[6];
- uint16_t __otx2_io etype;
- uint16_t __otx2_io vlan_etype;
- uint16_t __otx2_io vlan_tci;
- union {
- uint32_t __otx2_io ip4src;
- uint32_t __otx2_io ip6src[4];
- };
- union {
- uint32_t __otx2_io ip4dst;
- uint32_t __otx2_io ip6dst[4];
- };
- uint8_t __otx2_io tos;
- uint8_t __otx2_io ip_ver;
- uint8_t __otx2_io ip_proto;
- uint8_t __otx2_io tc;
- uint16_t __otx2_io sport;
- uint16_t __otx2_io dport;
-};
-
-struct npc_install_flow_req {
- struct mbox_msghdr hdr;
- struct flow_msg packet;
- struct flow_msg mask;
- uint64_t __otx2_io features;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io channel;
- uint8_t __otx2_io intf;
- uint8_t __otx2_io set_cntr;
- uint8_t __otx2_io default_rule;
- /* Overwrite(0) or append(1) flow to default rule? */
- uint8_t __otx2_io append;
- uint16_t __otx2_io vf;
- /* action */
- uint32_t __otx2_io index;
- uint16_t __otx2_io match_id;
- uint8_t __otx2_io flow_key_alg;
- uint8_t __otx2_io op;
- /* vtag action */
- uint8_t __otx2_io vtag0_type;
- uint8_t __otx2_io vtag0_valid;
- uint8_t __otx2_io vtag1_type;
- uint8_t __otx2_io vtag1_valid;
-
- /* vtag tx action */
- uint16_t __otx2_io vtag0_def;
- uint8_t __otx2_io vtag0_op;
- uint16_t __otx2_io vtag1_def;
- uint8_t __otx2_io vtag1_op;
-};
-
-struct npc_install_flow_rsp {
- struct mbox_msghdr hdr;
- /* Negative if no counter, else the counter number */
- int __otx2_io counter;
-};
-
-struct npc_delete_flow_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io entry;
- uint16_t __otx2_io start; /* Disable range of entries */
- uint16_t __otx2_io end;
- uint8_t __otx2_io all; /* PF + VFs */
-};
-
-struct npc_mcam_read_entry_req {
- struct mbox_msghdr hdr;
- /* MCAM entry to read */
- uint16_t __otx2_io entry;
-};
-
-struct npc_mcam_read_entry_rsp {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
- uint8_t __otx2_io intf;
- uint8_t __otx2_io enable;
-};
-
-struct npc_mcam_read_base_rule_rsp {
- struct mbox_msghdr hdr;
- struct mcam_entry entry_data;
-};
-
-/* TIM mailbox error codes
- * Range 801 - 900.
- */
-enum tim_af_status {
- TIM_AF_NO_RINGS_LEFT = -801,
- TIM_AF_INVALID_NPA_PF_FUNC = -802,
- TIM_AF_INVALID_SSO_PF_FUNC = -803,
- TIM_AF_RING_STILL_RUNNING = -804,
- TIM_AF_LF_INVALID = -805,
- TIM_AF_CSIZE_NOT_ALIGNED = -806,
- TIM_AF_CSIZE_TOO_SMALL = -807,
- TIM_AF_CSIZE_TOO_BIG = -808,
- TIM_AF_INTERVAL_TOO_SMALL = -809,
- TIM_AF_INVALID_BIG_ENDIAN_VALUE = -810,
- TIM_AF_INVALID_CLOCK_SOURCE = -811,
- TIM_AF_GPIO_CLK_SRC_NOT_ENABLED = -812,
- TIM_AF_INVALID_BSIZE = -813,
- TIM_AF_INVALID_ENABLE_PERIODIC = -814,
- TIM_AF_INVALID_ENABLE_DONTFREE = -815,
- TIM_AF_ENA_DONTFRE_NSET_PERIODIC = -816,
- TIM_AF_RING_ALREADY_DISABLED = -817,
-};
-
-enum tim_clk_srcs {
- TIM_CLK_SRCS_TENNS = 0,
- TIM_CLK_SRCS_GPIO = 1,
- TIM_CLK_SRCS_GTI = 2,
- TIM_CLK_SRCS_PTP = 3,
- TIM_CLK_SRSC_INVALID,
-};
-
-enum tim_gpio_edge {
- TIM_GPIO_NO_EDGE = 0,
- TIM_GPIO_LTOH_TRANS = 1,
- TIM_GPIO_HTOL_TRANS = 2,
- TIM_GPIO_BOTH_TRANS = 3,
- TIM_GPIO_INVALID,
-};
-
-enum ptp_op {
- PTP_OP_ADJFINE = 0, /* adjfine(req.scaled_ppm); */
- PTP_OP_GET_CLOCK = 1, /* rsp.clk = get_clock() */
-};
-
-struct ptp_req {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io op;
- int64_t __otx2_io scaled_ppm;
- uint8_t __otx2_io is_pmu;
-};
-
-struct ptp_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io clk;
- uint64_t __otx2_io tsc;
-};
-
-struct get_hw_cap_rsp {
- struct mbox_msghdr hdr;
- /* Schq mapping fixed or flexible */
- uint8_t __otx2_io nix_fixed_txschq_mapping;
- uint8_t __otx2_io nix_shaping; /* Is shaping and coloring supported */
-};
-
-struct ndc_sync_op {
- struct mbox_msghdr hdr;
- uint8_t __otx2_io nix_lf_tx_sync;
- uint8_t __otx2_io nix_lf_rx_sync;
- uint8_t __otx2_io npa_lf_sync;
-};
-
-struct tim_lf_alloc_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
- uint16_t __otx2_io npa_pf_func;
- uint16_t __otx2_io sso_pf_func;
-};
-
-struct tim_ring_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
-};
-
-struct tim_config_req {
- struct mbox_msghdr hdr;
- uint16_t __otx2_io ring;
- uint8_t __otx2_io bigendian;
- uint8_t __otx2_io clocksource;
- uint8_t __otx2_io enableperiodic;
- uint8_t __otx2_io enabledontfreebuffer;
- uint32_t __otx2_io bucketsize;
- uint32_t __otx2_io chunksize;
- uint32_t __otx2_io interval;
-};
-
-struct tim_lf_alloc_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io tenns_clk;
-};
-
-struct tim_enable_rsp {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io timestarted;
- uint32_t __otx2_io currentbucket;
-};
-
-/* REE mailbox error codes
- * Range 1001 - 1100.
- */
-enum ree_af_status {
- REE_AF_ERR_RULE_UNKNOWN_VALUE = -1001,
- REE_AF_ERR_LF_NO_MORE_RESOURCES = -1002,
- REE_AF_ERR_LF_INVALID = -1003,
- REE_AF_ERR_ACCESS_DENIED = -1004,
- REE_AF_ERR_RULE_DB_PARTIAL = -1005,
- REE_AF_ERR_RULE_DB_EQ_BAD_VALUE = -1006,
- REE_AF_ERR_RULE_DB_BLOCK_ALLOC_FAILED = -1007,
- REE_AF_ERR_BLOCK_NOT_IMPLEMENTED = -1008,
- REE_AF_ERR_RULE_DB_INC_OFFSET_TOO_BIG = -1009,
- REE_AF_ERR_RULE_DB_OFFSET_TOO_BIG = -1010,
- REE_AF_ERR_Q_IS_GRACEFUL_DIS = -1011,
- REE_AF_ERR_Q_NOT_GRACEFUL_DIS = -1012,
- REE_AF_ERR_RULE_DB_ALLOC_FAILED = -1013,
- REE_AF_ERR_RULE_DB_TOO_BIG = -1014,
- REE_AF_ERR_RULE_DB_GEQ_BAD_VALUE = -1015,
- REE_AF_ERR_RULE_DB_LEQ_BAD_VALUE = -1016,
- REE_AF_ERR_RULE_DB_WRONG_LENGTH = -1017,
- REE_AF_ERR_RULE_DB_WRONG_OFFSET = -1018,
- REE_AF_ERR_RULE_DB_BLOCK_TOO_BIG = -1019,
- REE_AF_ERR_RULE_DB_SHOULD_FILL_REQUEST = -1020,
- REE_AF_ERR_RULE_DBI_ALLOC_FAILED = -1021,
- REE_AF_ERR_LF_WRONG_PRIORITY = -1022,
- REE_AF_ERR_LF_SIZE_TOO_BIG = -1023,
-};
-
-/* REE mbox message formats */
-
-struct ree_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
-};
-
-struct ree_lf_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io size;
- uint8_t __otx2_io lf;
- uint8_t __otx2_io pri;
-};
-
-struct ree_rule_db_prog_req_msg {
- struct mbox_msghdr hdr;
-#define REE_RULE_DB_REQ_BLOCK_SIZE (MBOX_SIZE >> 1)
- uint8_t __otx2_io rule_db[REE_RULE_DB_REQ_BLOCK_SIZE];
- uint32_t __otx2_io blkaddr; /* REE0 or REE1 */
- uint32_t __otx2_io total_len; /* total len of rule db */
- uint32_t __otx2_io offset; /* offset of current rule db block */
- uint16_t __otx2_io len; /* length of rule db block */
- uint8_t __otx2_io is_last; /* is this the last block */
- uint8_t __otx2_io is_incremental; /* is incremental flow */
- uint8_t __otx2_io is_dbi; /* is rule db incremental */
-};
-
-struct ree_rule_db_get_req_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io offset; /* retrieve db from this offset */
- uint8_t __otx2_io is_dbi; /* is request for rule db incremental */
-};
-
-struct ree_rd_wr_reg_msg {
- struct mbox_msghdr hdr;
- uint64_t __otx2_io reg_offset;
- uint64_t __otx2_io *ret_val;
- uint64_t __otx2_io val;
- uint32_t __otx2_io blkaddr;
- uint8_t __otx2_io is_write;
-};
-
-struct ree_rule_db_len_rsp_msg {
- struct mbox_msghdr hdr;
- uint32_t __otx2_io blkaddr;
- uint32_t __otx2_io len;
- uint32_t __otx2_io inc_len;
-};
-
-struct ree_rule_db_get_rsp_msg {
- struct mbox_msghdr hdr;
-#define REE_RULE_DB_RSP_BLOCK_SIZE (MBOX_DOWN_TX_SIZE - SZ_1K)
- uint8_t __otx2_io rule_db[REE_RULE_DB_RSP_BLOCK_SIZE];
- uint32_t __otx2_io total_len; /* total len of rule db */
- uint32_t __otx2_io offset; /* offset of current rule db block */
- uint16_t __otx2_io len; /* length of rule db block */
- uint8_t __otx2_io is_last; /* is this the last block */
-};
-
-__rte_internal
-const char *otx2_mbox_id2name(uint16_t id);
-int otx2_mbox_id2size(uint16_t id);
-void otx2_mbox_reset(struct otx2_mbox *mbox, int devid);
-int otx2_mbox_init(struct otx2_mbox *mbox, uintptr_t hwbase, uintptr_t reg_base,
- int direction, int ndevs, uint64_t intr_offset);
-void otx2_mbox_fini(struct otx2_mbox *mbox);
-__rte_internal
-void otx2_mbox_msg_send(struct otx2_mbox *mbox, int devid);
-__rte_internal
-int otx2_mbox_wait_for_rsp(struct otx2_mbox *mbox, int devid);
-int otx2_mbox_wait_for_rsp_tmo(struct otx2_mbox *mbox, int devid, uint32_t tmo);
-__rte_internal
-int otx2_mbox_get_rsp(struct otx2_mbox *mbox, int devid, void **msg);
-__rte_internal
-int otx2_mbox_get_rsp_tmo(struct otx2_mbox *mbox, int devid, void **msg,
- uint32_t tmo);
-int otx2_mbox_get_availmem(struct otx2_mbox *mbox, int devid);
-__rte_internal
-struct mbox_msghdr *otx2_mbox_alloc_msg_rsp(struct otx2_mbox *mbox, int devid,
- int size, int size_rsp);
-
-static inline struct mbox_msghdr *
-otx2_mbox_alloc_msg(struct otx2_mbox *mbox, int devid, int size)
-{
- return otx2_mbox_alloc_msg_rsp(mbox, devid, size, 0);
-}
-
-static inline void
-otx2_mbox_req_init(uint16_t mbox_id, void *msghdr)
-{
- struct mbox_msghdr *hdr = msghdr;
-
- hdr->sig = OTX2_MBOX_REQ_SIG;
- hdr->ver = OTX2_MBOX_VERSION;
- hdr->id = mbox_id;
- hdr->pcifunc = 0;
-}
-
-static inline void
-otx2_mbox_rsp_init(uint16_t mbox_id, void *msghdr)
-{
- struct mbox_msghdr *hdr = msghdr;
-
- hdr->sig = OTX2_MBOX_RSP_SIG;
- hdr->rc = -ETIMEDOUT;
- hdr->id = mbox_id;
-}
-
-static inline bool
-otx2_mbox_nonempty(struct otx2_mbox *mbox, int devid)
-{
- struct otx2_mbox_dev *mdev = &mbox->dev[devid];
- bool ret;
-
- rte_spinlock_lock(&mdev->mbox_lock);
- ret = mdev->num_msgs != 0;
- rte_spinlock_unlock(&mdev->mbox_lock);
-
- return ret;
-}
-
-static inline int
-otx2_mbox_process(struct otx2_mbox *mbox)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp(mbox, 0, NULL);
-}
-
-static inline int
-otx2_mbox_process_msg(struct otx2_mbox *mbox, void **msg)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp(mbox, 0, msg);
-}
-
-static inline int
-otx2_mbox_process_tmo(struct otx2_mbox *mbox, uint32_t tmo)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp_tmo(mbox, 0, NULL, tmo);
-}
-
-static inline int
-otx2_mbox_process_msg_tmo(struct otx2_mbox *mbox, void **msg, uint32_t tmo)
-{
- otx2_mbox_msg_send(mbox, 0);
- return otx2_mbox_get_rsp_tmo(mbox, 0, msg, tmo);
-}
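
The process helpers above wrap the common "send on devid 0, then wait for the
response" sequence. A minimal caller sketch, assuming a request/response pair
produced by the generated otx2_mbox_alloc_msg_* helpers below (the READY
message is used here only as an illustration):

    struct ready_msg_rsp *rsp;
    int rc;

    /* Allocate and queue the request (helper generated by the M() macro) */
    if (otx2_mbox_alloc_msg_ready(mbox) == NULL)
        return -ENOMEM;

    /* Send it and block until the AF answers (or the mbox times out) */
    rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
    if (rc)
        return rc;
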
-
-int otx2_send_ready_msg(struct otx2_mbox *mbox, uint16_t *pf_func /* out */);
-int otx2_reply_invalid_msg(struct otx2_mbox *mbox, int devid, uint16_t pf_func,
- uint16_t id);
-
-#define M(_name, _id, _fn_name, _req_type, _rsp_type) \
-static inline struct _req_type \
-*otx2_mbox_alloc_msg_ ## _fn_name(struct otx2_mbox *mbox) \
-{ \
- struct _req_type *req; \
- \
- req = (struct _req_type *)otx2_mbox_alloc_msg_rsp( \
- mbox, 0, sizeof(struct _req_type), \
- sizeof(struct _rsp_type)); \
- if (!req) \
- return NULL; \
- \
- req->hdr.sig = OTX2_MBOX_REQ_SIG; \
- req->hdr.id = _id; \
- otx2_mbox_dbg("id=0x%x (%s)", \
- req->hdr.id, otx2_mbox_id2name(req->hdr.id)); \
- return req; \
-}
-
-MBOX_MESSAGES
-#undef M
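
MBOX_MESSAGES is an X-macro list: it expands M(...) once per mailbox message
type, so the definition above stamps out one typed allocation helper per
message. As a sketch, the READY entry, M(READY, 0x001, ready, msg_req,
ready_msg_rsp), expands to roughly:

    static inline struct msg_req *
    otx2_mbox_alloc_msg_ready(struct otx2_mbox *mbox)
    {
        struct msg_req *req;

        /* Reserve room for both the request and the AF's response */
        req = (struct msg_req *)otx2_mbox_alloc_msg_rsp(
                mbox, 0, sizeof(struct msg_req),
                sizeof(struct ready_msg_rsp));
        if (!req)
            return NULL;

        req->hdr.sig = OTX2_MBOX_REQ_SIG;
        req->hdr.id = 0x001;
        return req;
    }
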
-
-/* Required for copy operations from device memory, which do not work on
- * addresses that are not aligned to 16B, due to specific optimizations
- * in libc memcpy.
- */
-static inline volatile void *
-otx2_mbox_memcpy(volatile void *d, const volatile void *s, size_t l)
-{
- const volatile uint8_t *sb;
- volatile uint8_t *db;
- size_t i;
-
- if (!d || !s)
- return NULL;
- db = (volatile uint8_t *)d;
- sb = (const volatile uint8_t *)s;
- for (i = 0; i < l; i++)
- db[i] = sb[i];
- return d;
-}
-
-/* Required for memset operations on device memory, which do not work on
- * addresses that are not aligned to 16B, due to specific optimizations
- * in libc memset.
- */
-static inline void
-otx2_mbox_memset(volatile void *d, uint8_t val, size_t l)
-{
- volatile uint8_t *db;
- size_t i = 0;
-
- if (!d || !l)
- return;
- db = (volatile uint8_t *)d;
- for (i = 0; i < l; i++)
- db[i] = val;
-}
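
Callers copying payloads in or out of the mailbox region must therefore use
these helpers instead of memcpy()/memset(). A sketch, reusing the CPT
capabilities response type that appears in the crypto PMD later in this
patch:

    struct cpt_caps_rsp_msg local;

    /* Byte-wise copy: safe on the 16B-alignment-sensitive mbox memory */
    otx2_mbox_memcpy(&local, rsp, sizeof(local));
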
-
-#endif /* __OTX2_MBOX_H__ */
diff --git a/drivers/common/octeontx2/otx2_sec_idev.c b/drivers/common/octeontx2/otx2_sec_idev.c
deleted file mode 100644
index b561b67174..0000000000
--- a/drivers/common/octeontx2/otx2_sec_idev.c
+++ /dev/null
@@ -1,183 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_bus_pci.h>
-#include <ethdev_driver.h>
-#include <rte_spinlock.h>
-
-#include "otx2_common.h"
-#include "otx2_sec_idev.h"
-
-static struct otx2_sec_idev_cfg sec_cfg[OTX2_MAX_INLINE_PORTS];
-
-/**
- * @internal
- * Check whether the rte_eth_dev is a security-offload-capable otx2_eth_dev
- */
-uint8_t
-otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev;
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_PF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_VF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_AF_VF)
- return 1;
-
- return 0;
-}
-
-int
-otx2_sec_idev_cfg_init(int port_id)
-{
- struct otx2_sec_idev_cfg *cfg;
- int i;
-
- cfg = &sec_cfg[port_id];
- cfg->tx_cpt_idx = 0;
- rte_spinlock_init(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- cfg->tx_cpt[i].qp = NULL;
- rte_atomic16_set(&cfg->tx_cpt[i].ref_cnt, 0);
- }
-
- return 0;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- int i, ret;
-
- if (qp == NULL || port_id >= OTX2_MAX_INLINE_PORTS)
- return -EINVAL;
-
- cfg = &sec_cfg[port_id];
-
- /* Find a free slot to save CPT LF */
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp == NULL) {
- cfg->tx_cpt[i].qp = qp;
- ret = 0;
- goto unlock;
- }
- }
-
- ret = -EINVAL;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t port_id;
- int i, ret;
-
- if (qp == NULL)
- return -EINVAL;
-
- for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) {
- cfg = &sec_cfg[port_id];
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp != qp)
- continue;
-
- /* Don't free if the QP is in use by any sec session */
- if (rte_atomic16_read(&cfg->tx_cpt[i].ref_cnt)) {
- ret = -EBUSY;
- } else {
- cfg->tx_cpt[i].qp = NULL;
- ret = 0;
- }
-
- goto unlock;
- }
-
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- }
-
- return -ENOENT;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t index;
- int i, ret;
-
- if (port_id >= OTX2_MAX_INLINE_PORTS || qp == NULL)
- return -EINVAL;
-
- cfg = &sec_cfg[port_id];
-
- rte_spinlock_lock(&cfg->tx_cpt_lock);
-
- index = cfg->tx_cpt_idx;
-
- /* Get the next index with valid data */
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[index].qp != NULL)
- break;
- index = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT;
- }
-
- if (i >= OTX2_MAX_CPT_QP_PER_PORT) {
- ret = -EINVAL;
- goto unlock;
- }
-
- *qp = cfg->tx_cpt[index].qp;
- rte_atomic16_inc(&cfg->tx_cpt[index].ref_cnt);
-
- cfg->tx_cpt_idx = (index + 1) % OTX2_MAX_CPT_QP_PER_PORT;
-
- ret = 0;
-
-unlock:
- rte_spinlock_unlock(&cfg->tx_cpt_lock);
- return ret;
-}
-
-int
-otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp)
-{
- struct otx2_sec_idev_cfg *cfg;
- uint16_t port_id;
- int i;
-
- if (qp == NULL)
- return -EINVAL;
-
- for (port_id = 0; port_id < OTX2_MAX_INLINE_PORTS; port_id++) {
- cfg = &sec_cfg[port_id];
- for (i = 0; i < OTX2_MAX_CPT_QP_PER_PORT; i++) {
- if (cfg->tx_cpt[i].qp == qp) {
- rte_atomic16_dec(&cfg->tx_cpt[i].ref_cnt);
- return 0;
- }
- }
- }
-
- return -EINVAL;
-}
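
Together, the get/put pair implements a reference-counted, round-robin lease
of CPT queue pairs to inline-security users: _get() hands out the next
populated slot and bumps its ref_cnt, _put() drops the reference, and
_remove() refuses with -EBUSY while any reference is still held. A sketch of
the intended call pattern:

    struct otx2_cpt_qp *qp;

    if (otx2_sec_idev_tx_cpt_qp_get(port_id, &qp))   /* takes a reference */
        return -ENOENT;

    /* ... use qp for the security session's lifetime, then release it: */

    otx2_sec_idev_tx_cpt_qp_put(qp);                 /* drops the reference */
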
diff --git a/drivers/common/octeontx2/otx2_sec_idev.h b/drivers/common/octeontx2/otx2_sec_idev.h
deleted file mode 100644
index 89cdaf66ab..0000000000
--- a/drivers/common/octeontx2/otx2_sec_idev.h
+++ /dev/null
@@ -1,43 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_SEC_IDEV_H_
-#define _OTX2_SEC_IDEV_H_
-
-#include <rte_ethdev.h>
-
-#define OTX2_MAX_CPT_QP_PER_PORT 64
-#define OTX2_MAX_INLINE_PORTS 64
-
-struct otx2_cpt_qp;
-
-struct otx2_sec_idev_cfg {
- struct {
- struct otx2_cpt_qp *qp;
- rte_atomic16_t ref_cnt;
- } tx_cpt[OTX2_MAX_CPT_QP_PER_PORT];
-
- uint16_t tx_cpt_idx;
- rte_spinlock_t tx_cpt_lock;
-};
-
-__rte_internal
-uint8_t otx2_eth_dev_is_sec_capable(struct rte_eth_dev *eth_dev);
-
-__rte_internal
-int otx2_sec_idev_cfg_init(int port_id);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_add(uint16_t port_id, struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_remove(struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_put(struct otx2_cpt_qp *qp);
-
-__rte_internal
-int otx2_sec_idev_tx_cpt_qp_get(uint16_t port_id, struct otx2_cpt_qp **qp);
-
-#endif /* _OTX2_SEC_IDEV_H_ */
diff --git a/drivers/common/octeontx2/version.map b/drivers/common/octeontx2/version.map
deleted file mode 100644
index b58f19ce32..0000000000
--- a/drivers/common/octeontx2/version.map
+++ /dev/null
@@ -1,44 +0,0 @@
-INTERNAL {
- global:
-
- otx2_dev_active_vfs;
- otx2_dev_fini;
- otx2_dev_priv_init;
- otx2_disable_irqs;
- otx2_eth_dev_is_sec_capable;
- otx2_intra_dev_get_cfg;
- otx2_logtype_base;
- otx2_logtype_dpi;
- otx2_logtype_ep;
- otx2_logtype_mbox;
- otx2_logtype_nix;
- otx2_logtype_npa;
- otx2_logtype_npc;
- otx2_logtype_ree;
- otx2_logtype_sso;
- otx2_logtype_tim;
- otx2_logtype_tm;
- otx2_mbox_alloc_msg_rsp;
- otx2_mbox_get_rsp;
- otx2_mbox_get_rsp_tmo;
- otx2_mbox_id2name;
- otx2_mbox_msg_send;
- otx2_mbox_wait_for_rsp;
- otx2_npa_lf_active;
- otx2_npa_lf_obj_get;
- otx2_npa_lf_obj_ref;
- otx2_npa_pf_func_get;
- otx2_npa_set_defaults;
- otx2_parse_common_devargs;
- otx2_register_irq;
- otx2_sec_idev_cfg_init;
- otx2_sec_idev_tx_cpt_qp_add;
- otx2_sec_idev_tx_cpt_qp_get;
- otx2_sec_idev_tx_cpt_qp_put;
- otx2_sec_idev_tx_cpt_qp_remove;
- otx2_sso_pf_func_get;
- otx2_sso_pf_func_set;
- otx2_unregister_irq;
-
- local: *;
-};
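
This map file and the __rte_internal tags in the headers work as a pair: a
symbol is reachable from other DPDK libraries only if it is both annotated
__rte_internal at its declaration and listed under the INTERNAL version node
here; the "local: *;" catch-all hides everything else.
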
diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build
index 59f02ea47c..147b8cf633 100644
--- a/drivers/crypto/meson.build
+++ b/drivers/crypto/meson.build
@@ -16,7 +16,6 @@ drivers = [
'nitrox',
'null',
'octeontx',
- 'octeontx2',
'openssl',
'scheduler',
'virtio',
diff --git a/drivers/crypto/octeontx2/meson.build b/drivers/crypto/octeontx2/meson.build
deleted file mode 100644
index 3b387cc570..0000000000
--- a/drivers/crypto/octeontx2/meson.build
+++ /dev/null
@@ -1,30 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright (C) 2019 Marvell International Ltd.
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-deps += ['bus_pci']
-deps += ['common_cpt']
-deps += ['common_octeontx2']
-deps += ['ethdev']
-deps += ['eventdev']
-deps += ['security']
-
-sources = files(
- 'otx2_cryptodev.c',
- 'otx2_cryptodev_capabilities.c',
- 'otx2_cryptodev_hw_access.c',
- 'otx2_cryptodev_mbox.c',
- 'otx2_cryptodev_ops.c',
- 'otx2_cryptodev_sec.c',
-)
-
-includes += include_directories('../../common/cpt')
-includes += include_directories('../../common/octeontx2')
-includes += include_directories('../../crypto/octeontx2')
-includes += include_directories('../../mempool/octeontx2')
-includes += include_directories('../../net/octeontx2')
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.c b/drivers/crypto/octeontx2/otx2_cryptodev.c
deleted file mode 100644
index fc7ad05366..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev.c
+++ /dev/null
@@ -1,188 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_crypto.h>
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_dev.h>
-#include <rte_errno.h>
-#include <rte_mempool.h>
-#include <rte_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_sec.h"
-#include "otx2_dev.h"
-
-/* CPT common headers */
-#include "cpt_common.h"
-#include "cpt_pmd_logs.h"
-
-uint8_t otx2_cryptodev_driver_id;
-
-static struct rte_pci_id pci_id_cpt_table[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_CPT_VF)
- },
- /* sentinel */
- {
- .device_id = 0
- },
-};
-
-uint64_t
-otx2_cpt_default_ff_get(void)
-{
- return RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_HW_ACCELERATED |
- RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING |
- RTE_CRYPTODEV_FF_IN_PLACE_SGL |
- RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_LB_OUT |
- RTE_CRYPTODEV_FF_OOP_SGL_IN_SGL_OUT |
- RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO |
- RTE_CRYPTODEV_FF_RSA_PRIV_OP_KEY_QT |
- RTE_CRYPTODEV_FF_SYM_SESSIONLESS |
- RTE_CRYPTODEV_FF_SECURITY |
- RTE_CRYPTODEV_FF_DIGEST_ENCRYPTED;
-}
-
-static int
-otx2_cpt_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
- struct rte_pci_device *pci_dev)
-{
- struct rte_cryptodev_pmd_init_params init_params = {
- .name = "",
- .socket_id = rte_socket_id(),
- .private_data_size = sizeof(struct otx2_cpt_vf)
- };
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- struct rte_cryptodev *dev;
- struct otx2_dev *otx2_dev;
- struct otx2_cpt_vf *vf;
- uint16_t nb_queues;
- int ret;
-
- rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
-
- dev = rte_cryptodev_pmd_create(name, &pci_dev->device, &init_params);
- if (dev == NULL) {
- ret = -ENODEV;
- goto exit;
- }
-
- dev->dev_ops = &otx2_cpt_ops;
-
- dev->driver_id = otx2_cryptodev_driver_id;
-
- /* Get private data space allocated */
- vf = dev->data->dev_private;
-
- otx2_dev = &vf->otx2_dev;
-
- if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
- /* Initialize the base otx2_dev object */
- ret = otx2_dev_init(pci_dev, otx2_dev);
- if (ret) {
- CPT_LOG_ERR("Could not initialize otx2_dev");
- goto pmd_destroy;
- }
-
- /* Get number of queues available on the device */
- ret = otx2_cpt_available_queues_get(dev, &nb_queues);
- if (ret) {
- CPT_LOG_ERR("Could not determine the number of queues available");
- goto otx2_dev_fini;
- }
-
- /* Don't exceed the limits set per VF */
- nb_queues = RTE_MIN(nb_queues, OTX2_CPT_MAX_QUEUES_PER_VF);
-
- if (nb_queues == 0) {
- CPT_LOG_ERR("No free queues available on the device");
- goto otx2_dev_fini;
- }
-
- vf->max_queues = nb_queues;
-
- CPT_LOG_INFO("Max queues supported by device: %d",
- vf->max_queues);
-
- ret = otx2_cpt_hardware_caps_get(dev, vf->hw_caps);
- if (ret) {
- CPT_LOG_ERR("Could not determine hardware capabilities");
- goto otx2_dev_fini;
- }
- }
-
- otx2_crypto_capabilities_init(vf->hw_caps);
- otx2_crypto_sec_capabilities_init(vf->hw_caps);
-
- /* Create security ctx */
- ret = otx2_crypto_sec_ctx_create(dev);
- if (ret)
- goto otx2_dev_fini;
-
- dev->feature_flags = otx2_cpt_default_ff_get();
-
- if (rte_eal_process_type() == RTE_PROC_SECONDARY)
- otx2_cpt_set_enqdeq_fns(dev);
-
- rte_cryptodev_pmd_probing_finish(dev);
-
- return 0;
-
-otx2_dev_fini:
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- otx2_dev_fini(pci_dev, otx2_dev);
-pmd_destroy:
- rte_cryptodev_pmd_destroy(dev);
-exit:
- CPT_LOG_ERR("Could not create device (vendor_id: 0x%x device_id: 0x%x)",
- pci_dev->id.vendor_id, pci_dev->id.device_id);
- return ret;
-}
-
-static int
-otx2_cpt_pci_remove(struct rte_pci_device *pci_dev)
-{
- char name[RTE_CRYPTODEV_NAME_MAX_LEN];
- struct rte_cryptodev *dev;
-
- if (pci_dev == NULL)
- return -EINVAL;
-
- rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
-
- dev = rte_cryptodev_pmd_get_named_dev(name);
- if (dev == NULL)
- return -ENODEV;
-
- /* Destroy security ctx */
- otx2_crypto_sec_ctx_destroy(dev);
-
- return rte_cryptodev_pmd_destroy(dev);
-}
-
-static struct rte_pci_driver otx2_cryptodev_pmd = {
- .id_table = pci_id_cpt_table,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
- .probe = otx2_cpt_pci_probe,
- .remove = otx2_cpt_pci_remove,
-};
-
-static struct cryptodev_driver otx2_cryptodev_drv;
-
-RTE_PMD_REGISTER_PCI(CRYPTODEV_NAME_OCTEONTX2_PMD, otx2_cryptodev_pmd);
-RTE_PMD_REGISTER_PCI_TABLE(CRYPTODEV_NAME_OCTEONTX2_PMD, pci_id_cpt_table);
-RTE_PMD_REGISTER_KMOD_DEP(CRYPTODEV_NAME_OCTEONTX2_PMD, "vfio-pci");
-RTE_PMD_REGISTER_CRYPTO_DRIVER(otx2_cryptodev_drv, otx2_cryptodev_pmd.driver,
- otx2_cryptodev_driver_id);
-RTE_LOG_REGISTER_DEFAULT(otx2_cpt_logtype, NOTICE);
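
The registration macros above are the standard cryptodev PMD boilerplate:
RTE_PMD_REGISTER_PCI() hooks the driver into the PCI bus scan,
RTE_PMD_REGISTER_KMOD_DEP() records that the device must be bound to
vfio-pci, RTE_PMD_REGISTER_CRYPTO_DRIVER() allocates the driver id used to
index per-driver session private data, and RTE_LOG_REGISTER_DEFAULT()
creates otx2_cpt_logtype with a default level of NOTICE.
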
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev.h b/drivers/crypto/octeontx2/otx2_cryptodev.h
deleted file mode 100644
index 15ecfe45b6..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev.h
+++ /dev/null
@@ -1,63 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_H_
-#define _OTX2_CRYPTODEV_H_
-
-#include "cpt_common.h"
-#include "cpt_hw_types.h"
-
-#include "otx2_dev.h"
-
-/* Marvell OCTEON TX2 Crypto PMD device name */
-#define CRYPTODEV_NAME_OCTEONTX2_PMD crypto_octeontx2
-
-#define OTX2_CPT_MAX_LFS 128
-#define OTX2_CPT_MAX_QUEUES_PER_VF 64
-#define OTX2_CPT_MAX_BLKS 2
-#define OTX2_CPT_PMD_VERSION 3
-#define OTX2_CPT_REVISION_ID_3 3
-
-/**
- * Device private data
- */
-struct otx2_cpt_vf {
- struct otx2_dev otx2_dev;
- /**< Base class */
- uint16_t max_queues;
- /**< Max queues supported */
- uint8_t nb_queues;
- /**< Number of crypto queues attached */
- uint16_t lf_msixoff[OTX2_CPT_MAX_LFS];
- /**< MSI-X offsets */
- uint8_t lf_blkaddr[OTX2_CPT_MAX_LFS];
- /**< CPT0/1 BLKADDR of LFs */
- uint8_t cpt_revision;
- /**< CPT revision */
- uint8_t err_intr_registered:1;
- /**< Are error interrupts registered? */
- union cpt_eng_caps hw_caps[CPT_MAX_ENG_TYPES];
- /**< CPT device capabilities */
-};
-
-struct cpt_meta_info {
- uint64_t deq_op_info[5];
- uint64_t comp_code_sz;
- union cpt_res_s cpt_res __rte_aligned(16);
- struct cpt_request_info cpt_req;
-};
-
-#define CPT_LOGTYPE otx2_cpt_logtype
-
-extern int otx2_cpt_logtype;
-
-/*
- * Crypto device driver ID
- */
-extern uint8_t otx2_cryptodev_driver_id;
-
-uint64_t otx2_cpt_default_ff_get(void);
-void otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev);
-
-#endif /* _OTX2_CRYPTODEV_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
deleted file mode 100644
index ba3fbbbe22..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.c
+++ /dev/null
@@ -1,924 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_security.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_mbox.h"
-
-#define CPT_EGRP_GET(hw_caps, name, egrp) do { \
- if ((hw_caps[CPT_ENG_TYPE_SE].name) && \
- (hw_caps[CPT_ENG_TYPE_IE].name)) \
- *egrp = OTX2_CPT_EGRP_SE_IE; \
- else if (hw_caps[CPT_ENG_TYPE_SE].name) \
- *egrp = OTX2_CPT_EGRP_SE; \
- else if (hw_caps[CPT_ENG_TYPE_AE].name) \
- *egrp = OTX2_CPT_EGRP_AE; \
- else \
- *egrp = OTX2_CPT_EGRP_MAX; \
-} while (0)
-
-#define CPT_CAPS_ADD(hw_caps, name) do { \
- enum otx2_cpt_egrp egrp; \
- CPT_EGRP_GET(hw_caps, name, &egrp); \
- if (egrp < OTX2_CPT_EGRP_MAX) \
- cpt_caps_add(caps_##name, RTE_DIM(caps_##name)); \
-} while (0)
-
-#define SEC_CAPS_ADD(hw_caps, name) do { \
- enum otx2_cpt_egrp egrp; \
- CPT_EGRP_GET(hw_caps, name, &egrp); \
- if (egrp < OTX2_CPT_EGRP_MAX) \
- sec_caps_add(sec_caps_##name, RTE_DIM(sec_caps_##name));\
-} while (0)
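
Both helper macros rely on token pasting: the name argument must match a
field of the hw_caps union as well as a caps_<name>/sec_caps_<name> array
defined below. For example, CPT_CAPS_ADD(hw_caps, aes) expands to roughly:

    do {
        enum otx2_cpt_egrp egrp;
        CPT_EGRP_GET(hw_caps, aes, &egrp);
        if (egrp < OTX2_CPT_EGRP_MAX)
            cpt_caps_add(caps_aes, RTE_DIM(caps_aes));
    } while (0);
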
-
-#define OTX2_CPT_MAX_CAPS 34
-#define OTX2_SEC_MAX_CAPS 4
-
-static struct rte_cryptodev_capabilities otx2_cpt_caps[OTX2_CPT_MAX_CAPS];
-static struct rte_cryptodev_capabilities otx2_cpt_sec_caps[OTX2_SEC_MAX_CAPS];
-
-static const struct rte_cryptodev_capabilities caps_mul[] = {
- { /* RSA */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_RSA,
- .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
- (1 << RTE_CRYPTO_ASYM_OP_VERIFY) |
- (1 << RTE_CRYPTO_ASYM_OP_ENCRYPT) |
- (1 << RTE_CRYPTO_ASYM_OP_DECRYPT)),
- {.modlen = {
- .min = 17,
- .max = 1024,
- .increment = 1
- }, }
- }
- }, }
- },
- { /* MOD_EXP */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_MODEX,
- .op_types = 0,
- {.modlen = {
- .min = 17,
- .max = 1024,
- .increment = 1
- }, }
- }
- }, }
- },
- { /* ECDSA */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA,
- .op_types = ((1 << RTE_CRYPTO_ASYM_OP_SIGN) |
- (1 << RTE_CRYPTO_ASYM_OP_VERIFY)),
- }
- },
- }
- },
- { /* ECPM */
- .op = RTE_CRYPTO_OP_TYPE_ASYMMETRIC,
- {.asym = {
- .xform_capa = {
- .xform_type = RTE_CRYPTO_ASYM_XFORM_ECPM,
- .op_types = 0
- }
- },
- }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_sha1_sha2[] = {
- { /* SHA1 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 20,
- .max = 20,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 20,
- .increment = 8
- },
- }, }
- }, }
- },
- { /* SHA224 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA224,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 28,
- .max = 28,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA224 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA224_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 28,
- .max = 28,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA256 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA256 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 16,
- .max = 32,
- .increment = 16
- },
- }, }
- }, }
- },
- { /* SHA384 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA384,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 48,
- .max = 48,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA384 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA384_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 24,
- .max = 48,
- .increment = 24
- },
- }, }
- }, }
- },
- { /* SHA512 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA512,
- .block_size = 128,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 64,
- .max = 64,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* SHA512 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA512_HMAC,
- .block_size = 128,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 32,
- .max = 64,
- .increment = 32
- },
- }, }
- }, }
- },
- { /* MD5 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_MD5,
- .block_size = 64,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- }, }
- }, }
- },
- { /* MD5 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_MD5_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 8,
- .max = 64,
- .increment = 8
- },
- .digest_size = {
- .min = 12,
- .max = 16,
- .increment = 4
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_chacha20[] = {
- { /* Chacha20-Poly1305 */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
- .block_size = 64,
- .key_size = {
- .min = 32,
- .max = 32,
- .increment = 0
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 0,
- .max = 1024,
- .increment = 1
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- },
- }, }
- }, }
- }
-};
-
-static const struct rte_cryptodev_capabilities caps_zuc_snow3g[] = {
- { /* SNOW 3G (UEA2) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_SNOW3G_UEA2,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* ZUC (EEA3) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_ZUC_EEA3,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* SNOW 3G (UIA2) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SNOW3G_UIA2,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* ZUC (EIA3) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_ZUC_EIA3,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_aes[] = {
- { /* AES GMAC (AUTH) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_AES_GMAC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 8,
- .max = 16,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CTR */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CTR,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 12,
- .max = 16,
- .increment = 4
- }
- }, }
- }, }
- },
- { /* AES XTS */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_XTS,
- .block_size = 16,
- .key_size = {
- .min = 32,
- .max = 64,
- .increment = 0
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 4,
- .max = 16,
- .increment = 1
- },
- .aad_size = {
- .min = 0,
- .max = 1024,
- .increment = 1
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_kasumi[] = {
- { /* KASUMI (F8) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_KASUMI_F8,
- .block_size = 8,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* KASUMI (F9) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_KASUMI_F9,
- .block_size = 8,
- .key_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .digest_size = {
- .min = 4,
- .max = 4,
- .increment = 0
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_des[] = {
- { /* 3DES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_3DES_CBC,
- .block_size = 8,
- .key_size = {
- .min = 24,
- .max = 24,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 16,
- .increment = 8
- }
- }, }
- }, }
- },
- { /* 3DES ECB */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_3DES_ECB,
- .block_size = 8,
- .key_size = {
- .min = 24,
- .max = 24,
- .increment = 0
- },
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* DES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_DES_CBC,
- .block_size = 8,
- .key_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- },
- .iv_size = {
- .min = 8,
- .max = 8,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_null[] = {
- { /* NULL (AUTH) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_NULL,
- .block_size = 1,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .digest_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- }, },
- }, },
- },
- { /* NULL (CIPHER) */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_NULL,
- .block_size = 1,
- .key_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- },
- .iv_size = {
- .min = 0,
- .max = 0,
- .increment = 0
- }
- }, },
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities caps_end[] = {
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
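
Applications are not expected to read these tables directly; they reach them
through the generic cryptodev capability API once the PMD has published
them. A minimal query sketch using standard DPDK calls (dev_id is assumed to
be an already-probed crypto device):

    struct rte_cryptodev_sym_capability_idx idx = {
        .type = RTE_CRYPTO_SYM_XFORM_AUTH,
        .algo.auth = RTE_CRYPTO_AUTH_SHA1_HMAC,
    };
    const struct rte_cryptodev_symmetric_capability *cap;

    cap = rte_cryptodev_sym_capability_get(dev_id, &idx);
    if (cap != NULL &&
        rte_cryptodev_sym_capability_check_auth(cap, 20, 20, 0) == 0) {
        /* SHA1-HMAC with a 20B key and 20B digest is supported */
    }
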
-
-static const struct rte_cryptodev_capabilities sec_caps_aes[] = {
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 8,
- .max = 12,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
-};
-
-static const struct rte_cryptodev_capabilities sec_caps_sha1_sha2[] = {
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 20,
- .increment = 8
- },
- }, }
- }, }
- },
- { /* SHA256 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA256_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 1,
- .max = 1024,
- .increment = 1
- },
- .digest_size = {
- .min = 16,
- .max = 32,
- .increment = 16
- },
- }, }
- }, }
- },
-};
-
-static const struct rte_security_capability
-otx2_crypto_sec_capabilities[] = {
- { /* IPsec Lookaside Protocol ESP Tunnel Ingress */
- .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_cpt_sec_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- { /* IPsec Lookaside Protocol ESP Tunnel Egress */
- .action = RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_cpt_sec_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- {
- .action = RTE_SECURITY_ACTION_TYPE_NONE
- }
-};
-
-static void
-cpt_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
- static int cur_pos;
-
- if (cur_pos + nb_caps > OTX2_CPT_MAX_CAPS)
- return;
-
- memcpy(&otx2_cpt_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
- cur_pos += nb_caps;
-}
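
Note the static cur_pos: the destination table is filled cumulatively across
all cpt_caps_add() calls in the process, so otx2_crypto_capabilities_init()
is effectively single-shot and silently stops adding entries once
OTX2_CPT_MAX_CAPS would be exceeded.
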
-
-void
-otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps)
-{
- CPT_CAPS_ADD(hw_caps, mul);
- CPT_CAPS_ADD(hw_caps, sha1_sha2);
- CPT_CAPS_ADD(hw_caps, chacha20);
- CPT_CAPS_ADD(hw_caps, zuc_snow3g);
- CPT_CAPS_ADD(hw_caps, aes);
- CPT_CAPS_ADD(hw_caps, kasumi);
- CPT_CAPS_ADD(hw_caps, des);
-
- cpt_caps_add(caps_null, RTE_DIM(caps_null));
- cpt_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void)
-{
- return otx2_cpt_caps;
-}
-
-static void
-sec_caps_add(const struct rte_cryptodev_capabilities *caps, int nb_caps)
-{
- static int cur_pos;
-
- if (cur_pos + nb_caps > OTX2_SEC_MAX_CAPS)
- return;
-
- memcpy(&otx2_cpt_sec_caps[cur_pos], caps, nb_caps * sizeof(caps[0]));
- cur_pos += nb_caps;
-}
-
-void
-otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps)
-{
- SEC_CAPS_ADD(hw_caps, aes);
- SEC_CAPS_ADD(hw_caps, sha1_sha2);
-
- sec_caps_add(caps_end, RTE_DIM(caps_end));
-}
-
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused)
-{
- return otx2_crypto_sec_capabilities;
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h b/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
deleted file mode 100644
index c1e0001190..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_capabilities.h
+++ /dev/null
@@ -1,45 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_CAPABILITIES_H_
-#define _OTX2_CRYPTODEV_CAPABILITIES_H_
-
-#include <rte_cryptodev.h>
-
-#include "otx2_mbox.h"
-
-enum otx2_cpt_egrp {
- OTX2_CPT_EGRP_SE = 0,
- OTX2_CPT_EGRP_SE_IE = 1,
- OTX2_CPT_EGRP_AE = 2,
- OTX2_CPT_EGRP_MAX,
-};
-
-/*
- * Initialize crypto capabilities for the device
- */
-void otx2_crypto_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get capabilities list for the device
- */
-const struct rte_cryptodev_capabilities *
-otx2_cpt_capabilities_get(void);
-
-/*
- * Initialize security capabilities for the device
- */
-void otx2_crypto_sec_capabilities_init(union cpt_eng_caps *hw_caps);
-
-/*
- * Get security capabilities list for the device
- */
-const struct rte_security_capability *
-otx2_crypto_sec_capabilities_get(void *device __rte_unused);
-
-#endif /* _OTX2_CRYPTODEV_CAPABILITIES_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
deleted file mode 100644
index d5d6b5bad7..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.c
+++ /dev/null
@@ -1,225 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include <rte_cryptodev.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_dev.h"
-
-#include "cpt_pmd_logs.h"
-
-static void
-otx2_cpt_lf_err_intr_handler(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t lf_id;
- uint64_t intr;
-
- lf_id = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + OTX2_CPT_LF_MISC_INT);
- if (intr == 0)
- return;
-
- CPT_LOG_ERR("LF %d MISC_INT: 0x%" PRIx64 "", lf_id, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + OTX2_CPT_LF_MISC_INT);
-}
-
-static void
-otx2_cpt_lf_err_intr_unregister(const struct rte_cryptodev *dev,
- uint16_t msix_off, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
-
- otx2_unregister_irq(handle, otx2_cpt_lf_err_intr_handler, (void *)base,
- msix_off);
-}
-
-void
-otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uintptr_t base;
- uint32_t i;
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i);
- otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[i], base);
- }
-
- vf->err_intr_registered = 0;
-}
-
-static int
-otx2_cpt_lf_err_intr_register(const struct rte_cryptodev *dev,
- uint16_t msix_off, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(dev->device);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int ret;
-
- /* Disable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1C);
-
- /* Register error interrupt handler */
- ret = otx2_register_irq(handle, otx2_cpt_lf_err_intr_handler,
- (void *)base, msix_off);
- if (ret)
- return ret;
-
- /* Enable error interrupts */
- otx2_write64(~0ull, base + OTX2_CPT_LF_MISC_INT_ENA_W1S);
-
- return 0;
-}
-
-int
-otx2_cpt_err_intr_register(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uint32_t i, j, ret;
- uintptr_t base;
-
- for (i = 0; i < vf->nb_queues; i++) {
- if (vf->lf_msixoff[i] == MSIX_VECTOR_INVALID) {
- CPT_LOG_ERR("Invalid CPT LF MSI-X offset: 0x%x",
- vf->lf_msixoff[i]);
- return -EINVAL;
- }
- }
-
- for (i = 0; i < vf->nb_queues; i++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[i], i);
- ret = otx2_cpt_lf_err_intr_register(dev, vf->lf_msixoff[i],
- base);
- if (ret)
- goto intr_unregister;
- }
-
- vf->err_intr_registered = 1;
- return 0;
-
-intr_unregister:
- /* Unregister the ones already registered */
- for (j = 0; j < i; j++) {
- base = OTX2_CPT_LF_BAR2(vf, vf->lf_blkaddr[j], j);
- otx2_cpt_lf_err_intr_unregister(dev, vf->lf_msixoff[j], base);
- }
-
- /*
- * Failed to register the error interrupt. Not returning an error here,
- * as that would prevent the application from enabling a larger number
- * of devices.
- *
- * This failure is a known issue: otx2_dev_init() initializes interrupts
- * based on static values from ATF, while the actual number of interrupts
- * needed (which depends on the LFs) can be determined only after
- * otx2_dev_init() has set up the interrupts, including the mbox
- * interrupts.
- */
- return 0;
-}
-
-int
-otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
- const struct otx2_cpt_qp *qp, uint8_t grp_mask, uint8_t pri,
- uint32_t size_div40)
-{
- union otx2_cpt_af_lf_ctl af_lf_ctl;
- union otx2_cpt_lf_inprog inprog;
- union otx2_cpt_lf_q_base base;
- union otx2_cpt_lf_q_size size;
- union otx2_cpt_lf_ctl lf_ctl;
- int ret;
-
- /* Set engine group mask and priority */
-
- ret = otx2_cpt_af_reg_read(dev, OTX2_CPT_AF_LF_CTL(qp->id),
- qp->blkaddr, &af_lf_ctl.u);
- if (ret)
- return ret;
- af_lf_ctl.s.grp = grp_mask;
- af_lf_ctl.s.pri = pri ? 1 : 0;
- ret = otx2_cpt_af_reg_write(dev, OTX2_CPT_AF_LF_CTL(qp->id),
- qp->blkaddr, af_lf_ctl.u);
- if (ret)
- return ret;
-
- /* Set instruction queue base address */
-
- base.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_BASE);
- base.s.fault = 0;
- base.s.stopped = 0;
- base.s.addr = qp->iq_dma_addr >> 7;
- otx2_write64(base.u, qp->base + OTX2_CPT_LF_Q_BASE);
-
- /* Set instruction queue size */
-
- size.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_SIZE);
- size.s.size_div40 = size_div40;
- otx2_write64(size.u, qp->base + OTX2_CPT_LF_Q_SIZE);
-
- /* Enable instruction queue */
-
- lf_ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL);
- lf_ctl.s.ena = 1;
- otx2_write64(lf_ctl.u, qp->base + OTX2_CPT_LF_CTL);
-
- /* Start instruction execution */
-
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- inprog.s.eena = 1;
- otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG);
-
- return 0;
-}
-
-void
-otx2_cpt_iq_disable(struct otx2_cpt_qp *qp)
-{
- union otx2_cpt_lf_q_grp_ptr grp_ptr;
- union otx2_cpt_lf_inprog inprog;
- union otx2_cpt_lf_ctl ctl;
- int cnt;
-
- /* Stop instruction execution */
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- inprog.s.eena = 0x0;
- otx2_write64(inprog.u, qp->base + OTX2_CPT_LF_INPROG);
-
- /* Disable instructions enqueuing */
- ctl.u = otx2_read64(qp->base + OTX2_CPT_LF_CTL);
- ctl.s.ena = 0;
- otx2_write64(ctl.u, qp->base + OTX2_CPT_LF_CTL);
-
- /* Wait for instruction queue to become empty */
- cnt = 0;
- do {
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- if (inprog.s.grb_partial)
- cnt = 0;
- else
- cnt++;
- grp_ptr.u = otx2_read64(qp->base + OTX2_CPT_LF_Q_GRP_PTR);
- } while ((cnt < 10) && (grp_ptr.s.nq_ptr != grp_ptr.s.dq_ptr));
-
- cnt = 0;
- do {
- inprog.u = otx2_read64(qp->base + OTX2_CPT_LF_INPROG);
- if ((inprog.s.inflight == 0) &&
- (inprog.s.gwb_cnt < 40) &&
- ((inprog.s.grb_cnt == 0) || (inprog.s.grb_cnt == 40)))
- cnt++;
- else
- cnt = 0;
- } while (cnt < 10);
-}
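
Both loops above are debounced polls: the exit condition must hold for 10
consecutive reads before the queue is considered quiesced, since a single
read can race with in-flight instructions. The generic shape, with
condition_holds() standing in for the register checks (hypothetical helper):

    int stable = 0;

    while (stable < 10)
        stable = condition_holds() ? stable + 1 : 0;
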
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h b/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
deleted file mode 100644
index 90a338e05a..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_hw_access.h
+++ /dev/null
@@ -1,161 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_HW_ACCESS_H_
-#define _OTX2_CRYPTODEV_HW_ACCESS_H_
-
-#include <stdint.h>
-
-#include <rte_cryptodev.h>
-#include <rte_memory.h>
-
-#include "cpt_common.h"
-#include "cpt_hw_types.h"
-#include "cpt_mcode_defines.h"
-
-#include "otx2_dev.h"
-#include "otx2_cryptodev_qp.h"
-
-/* CPT instruction queue length.
- * A power-of-2 queue size simplifies the pending queue calculations.
- */
-#define OTX2_CPT_DEFAULT_CMD_QLEN 8192
-
-/* Mask which selects all engine groups */
-#define OTX2_CPT_ENG_GRPS_MASK 0xFF
-
-/* Register offsets */
-
-/* LMT LF registers */
-#define OTX2_LMT_LF_LMTLINE(a) (0x0ull | (uint64_t)(a) << 3)
-
-/* CPT LF registers */
-#define OTX2_CPT_LF_CTL 0x10ull
-#define OTX2_CPT_LF_INPROG 0x40ull
-#define OTX2_CPT_LF_MISC_INT 0xb0ull
-#define OTX2_CPT_LF_MISC_INT_ENA_W1S 0xd0ull
-#define OTX2_CPT_LF_MISC_INT_ENA_W1C 0xe0ull
-#define OTX2_CPT_LF_Q_BASE 0xf0ull
-#define OTX2_CPT_LF_Q_SIZE 0x100ull
-#define OTX2_CPT_LF_Q_GRP_PTR 0x120ull
-#define OTX2_CPT_LF_NQ(a) (0x400ull | (uint64_t)(a) << 3)
-
-#define OTX2_CPT_AF_LF_CTL(a) (0x27000ull | (uint64_t)(a) << 3)
-#define OTX2_CPT_AF_LF_CTL2(a) (0x29000ull | (uint64_t)(a) << 3)
-
-#define OTX2_CPT_LF_BAR2(vf, blk_addr, q_id) \
- ((vf)->otx2_dev.bar2 + \
- ((blk_addr << 20) | ((q_id) << 12)))
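
OTX2_CPT_LF_BAR2() encodes the RVU BAR2 layout: the block address selects a
1 MB window (bits 20 and up) and the queue id a 4 KB page within it, which
is also why the LF error interrupt handler earlier in this patch recovers
the LF id as (base >> 12) & 0xFF. An illustration:

    /* LF base for queue 3 on the CPT1 block */
    uintptr_t base = OTX2_CPT_LF_BAR2(vf, RVU_BLOCK_ADDR_CPT1, 3);
    /* == vf->otx2_dev.bar2 + (RVU_BLOCK_ADDR_CPT1 << 20) + (3 << 12) */
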
-
-#define OTX2_CPT_QUEUE_HI_PRIO 0x1
-
-union otx2_cpt_lf_ctl {
- uint64_t u;
- struct {
- uint64_t ena : 1;
- uint64_t fc_ena : 1;
- uint64_t fc_up_crossing : 1;
- uint64_t reserved_3_3 : 1;
- uint64_t fc_hyst_bits : 4;
- uint64_t reserved_8_63 : 56;
- } s;
-};
-
-union otx2_cpt_lf_inprog {
- uint64_t u;
- struct {
- uint64_t inflight : 9;
- uint64_t reserved_9_15 : 7;
- uint64_t eena : 1;
- uint64_t grp_drp : 1;
- uint64_t reserved_18_30 : 13;
- uint64_t grb_partial : 1;
- uint64_t grb_cnt : 8;
- uint64_t gwb_cnt : 8;
- uint64_t reserved_48_63 : 16;
- } s;
-};
-
-union otx2_cpt_lf_q_base {
- uint64_t u;
- struct {
- uint64_t fault : 1;
- uint64_t stopped : 1;
- uint64_t reserved_2_6 : 5;
- uint64_t addr : 46;
- uint64_t reserved_53_63 : 11;
- } s;
-};
-
-union otx2_cpt_lf_q_size {
- uint64_t u;
- struct {
- uint64_t size_div40 : 15;
- uint64_t reserved_15_63 : 49;
- } s;
-};
-
-union otx2_cpt_af_lf_ctl {
- uint64_t u;
- struct {
- uint64_t pri : 1;
- uint64_t reserved_1_8 : 8;
- uint64_t pf_func_inst : 1;
- uint64_t cont_err : 1;
- uint64_t reserved_11_15 : 5;
- uint64_t nixtx_en : 1;
- uint64_t reserved_17_47 : 31;
- uint64_t grp : 8;
- uint64_t reserved_56_63 : 8;
- } s;
-};
-
-union otx2_cpt_af_lf_ctl2 {
- uint64_t u;
- struct {
- uint64_t exe_no_swap : 1;
- uint64_t exe_ldwb : 1;
- uint64_t reserved_2_31 : 30;
- uint64_t sso_pf_func : 16;
- uint64_t nix_pf_func : 16;
- } s;
-};
-
-union otx2_cpt_lf_q_grp_ptr {
- uint64_t u;
- struct {
- uint64_t dq_ptr : 15;
- uint64_t reserved_31_15 : 17;
- uint64_t nq_ptr : 15;
- uint64_t reserved_47_62 : 16;
- uint64_t xq_xor : 1;
- } s;
-};
-
-/*
- * Enumeration cpt_9x_comp_e
- *
- * CPT 9X Completion Enumeration
- * Enumerates the values of CPT_RES_S[COMPCODE].
- */
-enum cpt_9x_comp_e {
- CPT_9X_COMP_E_NOTDONE = 0x00,
- CPT_9X_COMP_E_GOOD = 0x01,
- CPT_9X_COMP_E_FAULT = 0x02,
- CPT_9X_COMP_E_HWERR = 0x04,
- CPT_9X_COMP_E_INSTERR = 0x05,
- CPT_9X_COMP_E_LAST_ENTRY = 0x06
-};
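
Dequeue paths typically fold these hardware completion codes into the
generic rte_crypto op status. A hypothetical helper (not part of the driver)
sketching that mapping:

    static inline enum rte_crypto_op_status
    cpt_9x_compcode_to_status(uint8_t compcode)
    {
        switch (compcode) {
        case CPT_9X_COMP_E_GOOD:
            return RTE_CRYPTO_OP_STATUS_SUCCESS;
        case CPT_9X_COMP_E_NOTDONE:
            return RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
        default:    /* FAULT / HWERR / INSTERR */
            return RTE_CRYPTO_OP_STATUS_ERROR;
        }
    }
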
-
-void otx2_cpt_err_intr_unregister(const struct rte_cryptodev *dev);
-
-int otx2_cpt_err_intr_register(const struct rte_cryptodev *dev);
-
-int otx2_cpt_iq_enable(const struct rte_cryptodev *dev,
- const struct otx2_cpt_qp *qp, uint8_t grp_mask,
- uint8_t pri, uint32_t size_div40);
-
-void otx2_cpt_iq_disable(struct otx2_cpt_qp *qp);
-
-#endif /* _OTX2_CRYPTODEV_HW_ACCESS_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
deleted file mode 100644
index f9e7b0b474..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.c
+++ /dev/null
@@ -1,285 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-#include <cryptodev_pmd.h>
-#include <rte_ethdev.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_sec_idev.h"
-#include "otx2_mbox.h"
-
-#include "cpt_pmd_logs.h"
-
-int
-otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
- union cpt_eng_caps *hw_caps)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_dev *otx2_dev = &vf->otx2_dev;
- struct cpt_caps_rsp_msg *rsp;
- int ret;
-
- otx2_mbox_alloc_msg_cpt_caps_get(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- if (rsp->cpt_pf_drv_version != OTX2_CPT_PMD_VERSION) {
- otx2_err("Incompatible CPT PMD version"
- "(Kernel: 0x%04x DPDK: 0x%04x)",
- rsp->cpt_pf_drv_version, OTX2_CPT_PMD_VERSION);
- return -EPIPE;
- }
-
- vf->cpt_revision = rsp->cpt_revision;
- otx2_mbox_memcpy(hw_caps, rsp->eng_caps,
- sizeof(union cpt_eng_caps) * CPT_MAX_ENG_TYPES);
-
- return 0;
-}
-
-int
-otx2_cpt_available_queues_get(const struct rte_cryptodev *dev,
- uint16_t *nb_queues)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_dev *otx2_dev = &vf->otx2_dev;
- struct free_rsrcs_rsp *rsp;
- int ret;
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(otx2_dev->mbox);
-
- ret = otx2_mbox_process_msg(otx2_dev->mbox, (void *)&rsp);
- if (ret)
- return -EIO;
-
- *nb_queues = rsp->cpt + rsp->cpt1;
- return 0;
-}
-
-int
-otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int blkaddr[OTX2_CPT_MAX_BLKS];
- struct rsrc_attach_req *req;
- int blknum = 0;
- int i, ret;
-
- blkaddr[0] = RVU_BLOCK_ADDR_CPT0;
- blkaddr[1] = RVU_BLOCK_ADDR_CPT1;
-
- /* Ask AF to attach required LFs */
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
-
- if ((vf->cpt_revision == OTX2_CPT_REVISION_ID_3) &&
- (vf->otx2_dev.pf_func & 0x1))
- blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS;
-
- /* 1 LF = 1 queue */
- req->cptlfs = nb_queues;
- req->cpt_blkaddr = blkaddr[blknum];
-
- ret = otx2_mbox_process(mbox);
- if (ret == -ENOSPC) {
- if (vf->cpt_revision == OTX2_CPT_REVISION_ID_3) {
- blknum = (blknum + 1) % OTX2_CPT_MAX_BLKS;
- req->cpt_blkaddr = blkaddr[blknum];
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- } else {
- return -EIO;
- }
- } else if (ret < 0) {
- return -EIO;
- }
-
- /* Update number of attached queues */
- vf->nb_queues = nb_queues;
- for (i = 0; i < nb_queues; i++)
- vf->lf_blkaddr[i] = req->cpt_blkaddr;
-
- return 0;
-}
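
The retry logic above follows from the hardware layout the code itself
encodes: CPT revision 3 parts expose two CPT blocks (CPT0/CPT1), so the
driver spreads VFs across them via the pf_func parity heuristic and, on
-ENOSPC, retries the attach against the other block before giving up.
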
-
-int
-otx2_cpt_queues_detach(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->cptlfs = true;
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
-
- /* Queues have been detached */
- vf->nb_queues = 0;
-
- return 0;
-}
-
-int
-otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct msix_offset_rsp *rsp;
- uint32_t i, ret;
-
- /* Get CPT MSI-X vector offsets */
-
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret)
- return ret;
-
- for (i = 0; i < vf->nb_queues; i++)
- vf->lf_msixoff[i] = (vf->lf_blkaddr[i] == RVU_BLOCK_ADDR_CPT1) ?
- rsp->cpt1_lf_msixoff[i] : rsp->cptlf_msixoff[i];
-
- return 0;
-}
-
-static int
-otx2_cpt_send_mbox_msg(struct otx2_cpt_vf *vf)
-{
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- int ret;
-
- otx2_mbox_msg_send(mbox, 0);
-
- ret = otx2_mbox_wait_for_rsp(mbox, 0);
- if (ret < 0) {
- CPT_LOG_ERR("Could not get mailbox response");
- return ret;
- }
-
- return 0;
-}
-
-int
-otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t *val)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct cpt_rd_wr_reg_msg *msg;
- int ret, off;
-
- msg = (struct cpt_rd_wr_reg_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg),
- sizeof(*msg));
- if (msg == NULL) {
- CPT_LOG_ERR("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 0;
- msg->reg_offset = reg;
- msg->ret_val = val;
- msg->blkaddr = blkaddr;
-
- ret = otx2_cpt_send_mbox_msg(vf);
- if (ret < 0)
- return ret;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- msg = (struct cpt_rd_wr_reg_msg *) ((uintptr_t)mdev->mbase + off);
-
- *val = msg->val;
-
- return 0;
-}
-
-int
-otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t val)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_rd_wr_reg_msg *msg;
-
- msg = (struct cpt_rd_wr_reg_msg *)
- otx2_mbox_alloc_msg_rsp(mbox, 0, sizeof(*msg),
- sizeof(*msg));
- if (msg == NULL) {
- CPT_LOG_ERR("Could not allocate mailbox message");
- return -EFAULT;
- }
-
- msg->hdr.id = MBOX_MSG_CPT_RD_WR_REGISTER;
- msg->hdr.sig = OTX2_MBOX_REQ_SIG;
- msg->hdr.pcifunc = vf->otx2_dev.pf_func;
- msg->is_write = 1;
- msg->reg_offset = reg;
- msg->val = val;
- msg->blkaddr = blkaddr;
-
- return otx2_cpt_send_mbox_msg(vf);
-}
-
-int
-otx2_cpt_inline_init(const struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_rx_inline_lf_cfg_msg *msg;
- int ret;
-
- msg = otx2_mbox_alloc_msg_cpt_rx_inline_lf_cfg(mbox);
- msg->sso_pf_func = otx2_sso_pf_func_get();
-
- otx2_mbox_msg_send(mbox, 0);
- ret = otx2_mbox_process(mbox);
- if (ret < 0)
- return -EIO;
-
- return 0;
-}
-
-int
-otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp,
- uint16_t port_id)
-{
- struct rte_eth_dev *eth_dev = &rte_eth_devices[port_id];
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- struct otx2_mbox *mbox = vf->otx2_dev.mbox;
- struct cpt_inline_ipsec_cfg_msg *msg;
- struct otx2_eth_dev *otx2_eth_dev;
- int ret;
-
- if (!otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id]))
- return -EINVAL;
-
- otx2_eth_dev = otx2_eth_pmd_priv(eth_dev);
-
- msg = otx2_mbox_alloc_msg_cpt_inline_ipsec_cfg(mbox);
- msg->dir = CPT_INLINE_OUTBOUND;
- msg->enable = 1;
- msg->slot = qp->id;
-
- msg->nix_pf_func = otx2_eth_dev->pf_func;
-
- otx2_mbox_msg_send(mbox, 0);
- ret = otx2_mbox_process(mbox);
- if (ret < 0)
- return -EIO;
-
- return 0;
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h b/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
deleted file mode 100644
index 03323e418c..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_mbox.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_MBOX_H_
-#define _OTX2_CRYPTODEV_MBOX_H_
-
-#include <rte_cryptodev.h>
-
-#include "otx2_cryptodev_hw_access.h"
-
-int otx2_cpt_hardware_caps_get(const struct rte_cryptodev *dev,
- union cpt_eng_caps *hw_caps);
-
-int otx2_cpt_available_queues_get(const struct rte_cryptodev *dev,
- uint16_t *nb_queues);
-
-int otx2_cpt_queues_attach(const struct rte_cryptodev *dev, uint8_t nb_queues);
-
-int otx2_cpt_queues_detach(const struct rte_cryptodev *dev);
-
-int otx2_cpt_msix_offsets_get(const struct rte_cryptodev *dev);
-
-__rte_internal
-int otx2_cpt_af_reg_read(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t *val);
-
-__rte_internal
-int otx2_cpt_af_reg_write(const struct rte_cryptodev *dev, uint64_t reg,
- uint8_t blkaddr, uint64_t val);
-
-int otx2_cpt_qp_ethdev_bind(const struct rte_cryptodev *dev,
- struct otx2_cpt_qp *qp, uint16_t port_id);
-
-int otx2_cpt_inline_init(const struct rte_cryptodev *dev);
-
-#endif /* _OTX2_CRYPTODEV_MBOX_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
deleted file mode 100644
index 339b82f33e..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c
+++ /dev/null
@@ -1,1438 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#include <unistd.h>
-
-#include <cryptodev_pmd.h>
-#include <rte_errno.h>
-#include <ethdev_driver.h>
-#include <rte_event_crypto_adapter.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_ops_helper.h"
-#include "otx2_ipsec_anti_replay.h"
-#include "otx2_ipsec_po_ops.h"
-#include "otx2_mbox.h"
-#include "otx2_sec_idev.h"
-#include "otx2_security.h"
-
-#include "cpt_hw_types.h"
-#include "cpt_pmd_logs.h"
-#include "cpt_pmd_ops_helper.h"
-#include "cpt_ucode.h"
-#include "cpt_ucode_asym.h"
-
-#define METABUF_POOL_CACHE_SIZE 512
-
-static uint64_t otx2_fpm_iova[CPT_EC_ID_PMAX];
-
-/* Forward declarations */
-
-static int
-otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id);
-
-static void
-qp_memzone_name_get(char *name, int size, int dev_id, int qp_id)
-{
- snprintf(name, size, "otx2_cpt_lf_mem_%u:%u", dev_id, qp_id);
-}
-
-static int
-otx2_cpt_metabuf_mempool_create(const struct rte_cryptodev *dev,
- struct otx2_cpt_qp *qp, uint8_t qp_id,
- unsigned int nb_elements)
-{
- char mempool_name[RTE_MEMPOOL_NAMESIZE];
- struct cpt_qp_meta_info *meta_info;
- int lcore_cnt = rte_lcore_count();
- int ret, max_mlen, mb_pool_sz;
- struct rte_mempool *pool;
- int asym_mlen = 0;
- int lb_mlen = 0;
- int sg_mlen = 0;
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO) {
-
- /* Get meta len for scatter gather mode */
- sg_mlen = cpt_pmd_ops_helper_get_mlen_sg_mode();
-
- /* Extra 32B saved for future considerations */
- sg_mlen += 4 * sizeof(uint64_t);
-
- /* Get meta len for linear buffer (direct) mode */
- lb_mlen = cpt_pmd_ops_helper_get_mlen_direct_mode();
-
- /* Extra 32B saved for future considerations */
- lb_mlen += 4 * sizeof(uint64_t);
- }
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) {
-
- /* Get meta len required for asymmetric operations */
- asym_mlen = cpt_pmd_ops_helper_asym_get_mlen();
- }
-
- /*
- * Check max requirement for meta buffer to
- * support crypto op of any type (sym/asym).
- */
- max_mlen = RTE_MAX(RTE_MAX(lb_mlen, sg_mlen), asym_mlen);
-
- /* Allocate mempool */
-
- snprintf(mempool_name, RTE_MEMPOOL_NAMESIZE, "otx2_cpt_mb_%u:%u",
- dev->data->dev_id, qp_id);
-
- mb_pool_sz = nb_elements;
-
- /* For poll mode, core that enqueues and core that dequeues can be
- * different. For event mode, all cores are allowed to use same crypto
- * queue pair.
- */
- mb_pool_sz += (RTE_MAX(2, lcore_cnt) * METABUF_POOL_CACHE_SIZE);
-
- pool = rte_mempool_create_empty(mempool_name, mb_pool_sz, max_mlen,
- METABUF_POOL_CACHE_SIZE, 0,
- rte_socket_id(), 0);
-
- if (pool == NULL) {
- CPT_LOG_ERR("Could not create mempool for metabuf");
- return rte_errno;
- }
-
- ret = rte_mempool_set_ops_byname(pool, RTE_MBUF_DEFAULT_MEMPOOL_OPS,
- NULL);
- if (ret) {
- CPT_LOG_ERR("Could not set mempool ops");
- goto mempool_free;
- }
-
- ret = rte_mempool_populate_default(pool);
- if (ret <= 0) {
- CPT_LOG_ERR("Could not populate metabuf pool");
- goto mempool_free;
- }
-
- meta_info = &qp->meta_info;
-
- meta_info->pool = pool;
- meta_info->lb_mlen = lb_mlen;
- meta_info->sg_mlen = sg_mlen;
-
- return 0;
-
-mempool_free:
- rte_mempool_free(pool);
- return ret;
-}
-
-static void
-otx2_cpt_metabuf_mempool_destroy(struct otx2_cpt_qp *qp)
-{
- struct cpt_qp_meta_info *meta_info = &qp->meta_info;
-
- rte_mempool_free(meta_info->pool);
-
- meta_info->pool = NULL;
- meta_info->lb_mlen = 0;
- meta_info->sg_mlen = 0;
-}
-
-static int
-otx2_cpt_qp_inline_cfg(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp)
-{
- static rte_atomic16_t port_offset = RTE_ATOMIC16_INIT(-1);
- uint16_t port_id, nb_ethport = rte_eth_dev_count_avail();
- int i, ret;
-
- for (i = 0; i < nb_ethport; i++) {
- port_id = rte_atomic16_add_return(&port_offset, 1) % nb_ethport;
- if (otx2_eth_dev_is_sec_capable(&rte_eth_devices[port_id]))
- break;
- }
-
- if (i >= nb_ethport)
- return 0;
-
- ret = otx2_cpt_qp_ethdev_bind(dev, qp, port_id);
- if (ret)
- return ret;
-
- /* Publish inline Tx QP to eth dev security */
- ret = otx2_sec_idev_tx_cpt_qp_add(port_id, qp);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static struct otx2_cpt_qp *
-otx2_cpt_qp_create(const struct rte_cryptodev *dev, uint16_t qp_id,
- uint8_t group)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- uint64_t pg_sz = sysconf(_SC_PAGESIZE);
- const struct rte_memzone *lf_mem;
- uint32_t len, iq_len, size_div40;
- char name[RTE_MEMZONE_NAMESIZE];
- uint64_t used_len, iova;
- struct otx2_cpt_qp *qp;
- uint64_t lmtline;
- uint8_t *va;
- int ret;
-
- /* Allocate queue pair */
- qp = rte_zmalloc_socket("OCTEON TX2 Crypto PMD Queue Pair", sizeof(*qp),
- OTX2_ALIGN, 0);
- if (qp == NULL) {
- CPT_LOG_ERR("Could not allocate queue pair");
- return NULL;
- }
-
- /*
- * Pending queue updates make assumption that queue size is a power
- * of 2.
- */
- RTE_BUILD_BUG_ON(!RTE_IS_POWER_OF_2(OTX2_CPT_DEFAULT_CMD_QLEN));
-
- iq_len = OTX2_CPT_DEFAULT_CMD_QLEN;
-
- /*
- * Queue size must be a multiple of 40 and effective queue size to
- * software is (size_div40 - 1) * 40
- */
- size_div40 = (iq_len + 40 - 1) / 40 + 1;
-
- /* For pending queue */
- len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8);
-
- /* Space for instruction group memory */
- len += size_div40 * 16;
-
- /* So that instruction queues start as pg size aligned */
- len = RTE_ALIGN(len, pg_sz);
-
- /* For instruction queues */
- len += OTX2_CPT_DEFAULT_CMD_QLEN * sizeof(union cpt_inst_s);
-
- /* Wastage after instruction queues */
- len = RTE_ALIGN(len, pg_sz);
-
- qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
- qp_id);
-
- lf_mem = rte_memzone_reserve_aligned(name, len, vf->otx2_dev.node,
- RTE_MEMZONE_SIZE_HINT_ONLY | RTE_MEMZONE_256MB,
- RTE_CACHE_LINE_SIZE);
- if (lf_mem == NULL) {
- CPT_LOG_ERR("Could not allocate reserved memzone");
- goto qp_free;
- }
-
- va = lf_mem->addr;
- iova = lf_mem->iova;
-
- memset(va, 0, len);
-
- ret = otx2_cpt_metabuf_mempool_create(dev, qp, qp_id, iq_len);
- if (ret) {
- CPT_LOG_ERR("Could not create mempool for metabuf");
- goto lf_mem_free;
- }
-
- /* Initialize pending queue */
- qp->pend_q.rid_queue = (void **)va;
- qp->pend_q.tail = 0;
- qp->pend_q.head = 0;
-
- used_len = iq_len * RTE_ALIGN(sizeof(qp->pend_q.rid_queue[0]), 8);
- used_len += size_div40 * 16;
- used_len = RTE_ALIGN(used_len, pg_sz);
- iova += used_len;
-
- qp->iq_dma_addr = iova;
- qp->id = qp_id;
- qp->blkaddr = vf->lf_blkaddr[qp_id];
- qp->base = OTX2_CPT_LF_BAR2(vf, qp->blkaddr, qp_id);
-
- lmtline = vf->otx2_dev.bar2 +
- (RVU_BLOCK_ADDR_LMT << 20 | qp_id << 12) +
- OTX2_LMT_LF_LMTLINE(0);
-
- qp->lmtline = (void *)lmtline;
-
- qp->lf_nq_reg = qp->base + OTX2_CPT_LF_NQ(0);
-
- ret = otx2_sec_idev_tx_cpt_qp_remove(qp);
- if (ret && (ret != -ENOENT)) {
- CPT_LOG_ERR("Could not delete inline configuration");
- goto mempool_destroy;
- }
-
- otx2_cpt_iq_disable(qp);
-
- ret = otx2_cpt_qp_inline_cfg(dev, qp);
- if (ret) {
- CPT_LOG_ERR("Could not configure queue for inline IPsec");
- goto mempool_destroy;
- }
-
- ret = otx2_cpt_iq_enable(dev, qp, group, OTX2_CPT_QUEUE_HI_PRIO,
- size_div40);
- if (ret) {
- CPT_LOG_ERR("Could not enable instruction queue");
- goto mempool_destroy;
- }
-
- return qp;
-
-mempool_destroy:
- otx2_cpt_metabuf_mempool_destroy(qp);
-lf_mem_free:
- rte_memzone_free(lf_mem);
-qp_free:
- rte_free(qp);
- return NULL;
-}
-
-static int
-otx2_cpt_qp_destroy(const struct rte_cryptodev *dev, struct otx2_cpt_qp *qp)
-{
- const struct rte_memzone *lf_mem;
- char name[RTE_MEMZONE_NAMESIZE];
- int ret;
-
- ret = otx2_sec_idev_tx_cpt_qp_remove(qp);
- if (ret && (ret != -ENOENT)) {
- CPT_LOG_ERR("Could not delete inline configuration");
- return ret;
- }
-
- otx2_cpt_iq_disable(qp);
-
- otx2_cpt_metabuf_mempool_destroy(qp);
-
- qp_memzone_name_get(name, RTE_MEMZONE_NAMESIZE, dev->data->dev_id,
- qp->id);
-
- lf_mem = rte_memzone_lookup(name);
-
- ret = rte_memzone_free(lf_mem);
- if (ret)
- return ret;
-
- rte_free(qp);
-
- return 0;
-}
-
-static int
-sym_xform_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->next) {
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->next->cipher.op == RTE_CRYPTO_CIPHER_OP_ENCRYPT &&
- (xform->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC ||
- xform->next->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC))
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->cipher.op == RTE_CRYPTO_CIPHER_OP_DECRYPT &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- (xform->cipher.algo != RTE_CRYPTO_CIPHER_AES_CBC ||
- xform->next->auth.algo != RTE_CRYPTO_AUTH_SHA1_HMAC))
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->next->auth.algo == RTE_CRYPTO_AUTH_SHA1)
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->auth.algo == RTE_CRYPTO_AUTH_SHA1 &&
- xform->next->type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
- xform->next->cipher.algo == RTE_CRYPTO_CIPHER_3DES_CBC)
- return -ENOTSUP;
-
- } else {
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH &&
- xform->auth.algo == RTE_CRYPTO_AUTH_NULL &&
- xform->auth.op == RTE_CRYPTO_AUTH_OP_VERIFY)
- return -ENOTSUP;
- }
- return 0;
-}
-
-static int
-sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform,
- struct rte_cryptodev_sym_session *sess,
- struct rte_mempool *pool)
-{
- struct rte_crypto_sym_xform *temp_xform = xform;
- struct cpt_sess_misc *misc;
- vq_cmd_word3_t vq_cmd_w3;
- void *priv;
- int ret;
-
- ret = sym_xform_verify(xform);
- if (unlikely(ret))
- return ret;
-
- if (unlikely(rte_mempool_get(pool, &priv))) {
- CPT_LOG_ERR("Could not allocate session private data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_sess_misc) +
- offsetof(struct cpt_ctx, mc_ctx));
-
- misc = priv;
-
- for ( ; xform != NULL; xform = xform->next) {
- switch (xform->type) {
- case RTE_CRYPTO_SYM_XFORM_AEAD:
- ret = fill_sess_aead(xform, misc);
- break;
- case RTE_CRYPTO_SYM_XFORM_CIPHER:
- ret = fill_sess_cipher(xform, misc);
- break;
- case RTE_CRYPTO_SYM_XFORM_AUTH:
- if (xform->auth.algo == RTE_CRYPTO_AUTH_AES_GMAC)
- ret = fill_sess_gmac(xform, misc);
- else
- ret = fill_sess_auth(xform, misc);
- break;
- default:
- ret = -1;
- }
-
- if (ret)
- goto priv_put;
- }
-
- if ((GET_SESS_FC_TYPE(misc) == HASH_HMAC) &&
- cpt_mac_len_verify(&temp_xform->auth)) {
- CPT_LOG_ERR("MAC length is not supported");
- struct cpt_ctx *ctx = SESS_PRIV(misc);
- if (ctx->auth_key != NULL) {
- rte_free(ctx->auth_key);
- ctx->auth_key = NULL;
- }
- ret = -ENOTSUP;
- goto priv_put;
- }
-
- set_sym_session_private_data(sess, driver_id, misc);
-
- misc->ctx_dma_addr = rte_mempool_virt2iova(misc) +
- sizeof(struct cpt_sess_misc);
-
- vq_cmd_w3.u64 = 0;
- vq_cmd_w3.s.cptr = misc->ctx_dma_addr + offsetof(struct cpt_ctx,
- mc_ctx);
-
- /*
- * IE engines support IPsec operations
- * SE engines support IPsec operations, Chacha-Poly and
- * Air-Crypto operations
- */
- if (misc->zsk_flag || misc->chacha_poly)
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE;
- else
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_SE_IE;
-
- misc->cpt_inst_w7 = vq_cmd_w3.u64;
-
- return 0;
-
-priv_put:
- rte_mempool_put(pool, priv);
-
- return -ENOTSUP;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp,
- struct cpt_request_info *req,
- void *lmtline,
- struct rte_crypto_op *op,
- uint64_t cpt_inst_w7)
-{
- union rte_event_crypto_metadata *m_data;
- union cpt_inst_s inst;
- uint64_t lmt_status;
-
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- m_data = rte_cryptodev_sym_session_get_user_data(
- op->sym->session);
- if (m_data == NULL) {
- rte_pktmbuf_free(op->sym->m_src);
- rte_crypto_op_free(op);
- rte_errno = EINVAL;
- return -EINVAL;
- }
- } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
- op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
- ((uint8_t *)op +
- op->private_data_offset);
- } else {
- return -EINVAL;
- }
-
- inst.u[0] = 0;
- inst.s9x.res_addr = req->comp_baddr;
- inst.u[2] = 0;
- inst.u[3] = 0;
-
- inst.s9x.ei0 = req->ist.ei0;
- inst.s9x.ei1 = req->ist.ei1;
- inst.s9x.ei2 = req->ist.ei2;
- inst.s9x.ei3 = cpt_inst_w7;
-
- inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) |
- m_data->response_info.flow_id) |
- ((uint64_t)m_data->response_info.sched_type << 32) |
- ((uint64_t)m_data->response_info.queue_id << 34));
- inst.u[3] = 1 | (((uint64_t)req >> 3) << 3);
- req->qp = qp;
-
- do {
- /* Copy CPT command to LMTLINE */
- memcpy(lmtline, &inst, sizeof(inst));
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- return 0;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp,
- struct pending_queue *pend_q,
- struct cpt_request_info *req,
- struct rte_crypto_op *op,
- uint64_t cpt_inst_w7,
- unsigned int burst_index)
-{
- void *lmtline = qp->lmtline;
- union cpt_inst_s inst;
- uint64_t lmt_status;
-
- if (qp->ca_enable)
- return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7);
-
- inst.u[0] = 0;
- inst.s9x.res_addr = req->comp_baddr;
- inst.u[2] = 0;
- inst.u[3] = 0;
-
- inst.s9x.ei0 = req->ist.ei0;
- inst.s9x.ei1 = req->ist.ei1;
- inst.s9x.ei2 = req->ist.ei2;
- inst.s9x.ei3 = cpt_inst_w7;
-
- req->time_out = rte_get_timer_cycles() +
- DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
-
- do {
- /* Copy CPT command to LMTLINE */
- memcpy(lmtline, &inst, sizeof(inst));
-
- /*
- * Make sure compiler does not reorder memcpy and ldeor.
- * LMTST transactions are always flushed from the write
- * buffer immediately, a DMB is not required to push out
- * LMTSTs.
- */
- rte_io_wmb();
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- pending_queue_push(pend_q, req, burst_index, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- return 0;
-}
-
-static __rte_always_inline int32_t __rte_hot
-otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp,
- struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- unsigned int burst_index)
-{
- struct cpt_qp_meta_info *minfo = &qp->meta_info;
- struct rte_crypto_asym_op *asym_op = op->asym;
- struct asym_op_params params = {0};
- struct cpt_asym_sess_misc *sess;
- uintptr_t *cop;
- void *mdata;
- int ret;
-
- if (unlikely(rte_mempool_get(minfo->pool, &mdata) < 0)) {
- CPT_LOG_ERR("Could not allocate meta buffer for request");
- return -ENOMEM;
- }
-
- sess = get_asym_session_private_data(asym_op->session,
- otx2_cryptodev_driver_id);
-
- /* Store IO address of the mdata to meta_buf */
- params.meta_buf = rte_mempool_virt2iova(mdata);
-
- cop = mdata;
- cop[0] = (uintptr_t)mdata;
- cop[1] = (uintptr_t)op;
- cop[2] = cop[3] = 0ULL;
-
- params.req = RTE_PTR_ADD(cop, 4 * sizeof(uintptr_t));
- params.req->op = cop;
-
- /* Adjust meta_buf to point to end of cpt_request_info structure */
- params.meta_buf += (4 * sizeof(uintptr_t)) +
- sizeof(struct cpt_request_info);
- switch (sess->xfrm_type) {
- case RTE_CRYPTO_ASYM_XFORM_MODEX:
- ret = cpt_modex_prep(&params, &sess->mod_ctx);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_RSA:
- ret = cpt_enqueue_rsa_op(op, &params, sess);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECDSA:
- ret = cpt_enqueue_ecdsa_op(op, &params, sess, otx2_fpm_iova);
- if (unlikely(ret))
- goto req_fail;
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECPM:
- ret = cpt_ecpm_prep(&asym_op->ecpm, &params,
- sess->ec_ctx.curveid);
- if (unlikely(ret))
- goto req_fail;
- break;
- default:
- op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- ret = -EINVAL;
- goto req_fail;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op,
- sess->cpt_inst_w7, burst_index);
- if (unlikely(ret)) {
- CPT_LOG_DP_ERR("Could not enqueue crypto req");
- goto req_fail;
- }
-
- return 0;
-
-req_fail:
- free_op_meta(mdata, minfo->pool);
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q, unsigned int burst_index)
-{
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct cpt_request_info *req;
- struct cpt_sess_misc *sess;
- uint64_t cpt_op;
- void *mdata;
- int ret;
-
- sess = get_sym_session_private_data(sym_op->session,
- otx2_cryptodev_driver_id);
-
- cpt_op = sess->cpt_op;
-
- if (cpt_op & CPT_OP_CIPHER_MASK)
- ret = fill_fc_params(op, sess, &qp->meta_info, &mdata,
- (void **)&req);
- else
- ret = fill_digest_params(op, sess, &qp->meta_info, &mdata,
- (void **)&req);
-
- if (unlikely(ret)) {
- CPT_LOG_DP_ERR("Crypto req : op %p, cpt_op 0x%x ret 0x%x",
- op, (unsigned int)cpt_op, ret);
- return ret;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7,
- burst_index);
- if (unlikely(ret)) {
- /* Free buffer allocated by fill params routines */
- free_op_meta(mdata, qp->meta_info.pool);
- }
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- const unsigned int burst_index)
-{
- uint32_t winsz, esn_low = 0, esn_hi = 0, seql = 0, seqh = 0;
- struct rte_mbuf *m_src = op->sym->m_src;
- struct otx2_sec_session_ipsec_lp *sess;
- struct otx2_ipsec_po_sa_ctl *ctl_wrd;
- struct otx2_ipsec_po_in_sa *sa;
- struct otx2_sec_session *priv;
- struct cpt_request_info *req;
- uint64_t seq_in_sa, seq = 0;
- uint8_t esn;
- int ret;
-
- priv = get_sec_session_private_data(op->sym->sec_session);
- sess = &priv->ipsec.lp;
- sa = &sess->in_sa;
-
- ctl_wrd = &sa->ctl;
- esn = ctl_wrd->esn_en;
- winsz = sa->replay_win_sz;
-
- if (ctl_wrd->direction == OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND)
- ret = process_outb_sa(op, sess, &qp->meta_info, (void **)&req);
- else {
- if (winsz) {
- esn_low = rte_be_to_cpu_32(sa->esn_low);
- esn_hi = rte_be_to_cpu_32(sa->esn_hi);
- seql = *rte_pktmbuf_mtod_offset(m_src, uint32_t *,
- sizeof(struct rte_ipv4_hdr) + 4);
- seql = rte_be_to_cpu_32(seql);
-
- if (!esn)
- seq = (uint64_t)seql;
- else {
- seqh = anti_replay_get_seqh(winsz, seql, esn_hi,
- esn_low);
- seq = ((uint64_t)seqh << 32) | seql;
- }
-
- if (unlikely(seq == 0))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- ret = anti_replay_check(sa->replay, seq, winsz);
- if (unlikely(ret)) {
- otx2_err("Anti replay check failed");
- return IPSEC_ANTI_REPLAY_FAILED;
- }
-
- if (esn) {
- seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low;
- if (seq > seq_in_sa) {
- sa->esn_low = rte_cpu_to_be_32(seql);
- sa->esn_hi = rte_cpu_to_be_32(seqh);
- }
- }
- }
-
- ret = process_inb_sa(op, sess, &qp->meta_info, (void **)&req);
- }
-
- if (unlikely(ret)) {
- otx2_err("Crypto req : op %p, ret 0x%x", op, ret);
- return ret;
- }
-
- ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7,
- burst_index);
-
- return ret;
-}
-
-static __rte_always_inline int __rte_hot
-otx2_cpt_enqueue_sym_sessless(struct otx2_cpt_qp *qp, struct rte_crypto_op *op,
- struct pending_queue *pend_q,
- unsigned int burst_index)
-{
- const int driver_id = otx2_cryptodev_driver_id;
- struct rte_crypto_sym_op *sym_op = op->sym;
- struct rte_cryptodev_sym_session *sess;
- int ret;
-
- /* Create temporary session */
- sess = rte_cryptodev_sym_session_create(qp->sess_mp);
- if (sess == NULL)
- return -ENOMEM;
-
- ret = sym_session_configure(driver_id, sym_op->xform, sess,
- qp->sess_mp_priv);
- if (ret)
- goto sess_put;
-
- sym_op->session = sess;
-
- ret = otx2_cpt_enqueue_sym(qp, op, pend_q, burst_index);
-
- if (unlikely(ret))
- goto priv_put;
-
- return 0;
-
-priv_put:
- sym_session_clear(driver_id, sess);
-sess_put:
- rte_mempool_put(qp->sess_mp, sess);
- return ret;
-}
-
-static uint16_t
-otx2_cpt_enqueue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- uint16_t nb_allowed, count = 0;
- struct otx2_cpt_qp *qp = qptr;
- struct pending_queue *pend_q;
- struct rte_crypto_op *op;
- int ret;
-
- pend_q = &qp->pend_q;
-
- nb_allowed = pending_queue_free_slots(pend_q,
- OTX2_CPT_DEFAULT_CMD_QLEN, 0);
- nb_ops = RTE_MIN(nb_ops, nb_allowed);
-
- for (count = 0; count < nb_ops; count++) {
- op = ops[count];
- if (op->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION)
- ret = otx2_cpt_enqueue_sec(qp, op, pend_q,
- count);
- else if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
- ret = otx2_cpt_enqueue_sym(qp, op, pend_q,
- count);
- else
- ret = otx2_cpt_enqueue_sym_sessless(qp, op,
- pend_q, count);
- } else if (op->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
- if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION)
- ret = otx2_cpt_enqueue_asym(qp, op, pend_q,
- count);
- else
- break;
- } else
- break;
-
- if (unlikely(ret))
- break;
- }
-
- if (unlikely(!qp->ca_enable))
- pending_queue_commit(pend_q, count, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- return count;
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_rsa_op(struct rte_crypto_op *cop, struct cpt_request_info *req,
- struct rte_crypto_rsa_xform *rsa_ctx)
-{
- struct rte_crypto_rsa_op_param *rsa = &cop->asym->rsa;
-
- switch (rsa->op_type) {
- case RTE_CRYPTO_ASYM_OP_ENCRYPT:
- rsa->cipher.length = rsa_ctx->n.length;
- memcpy(rsa->cipher.data, req->rptr, rsa->cipher.length);
- break;
- case RTE_CRYPTO_ASYM_OP_DECRYPT:
- if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) {
- rsa->message.length = rsa_ctx->n.length;
- memcpy(rsa->message.data, req->rptr,
- rsa->message.length);
- } else {
- /* Get length of decrypted output */
- rsa->message.length = rte_cpu_to_be_16
- (*((uint16_t *)req->rptr));
- /*
- * Offset output data pointer by length field
- * (2 bytes) and copy decrypted data.
- */
- memcpy(rsa->message.data, req->rptr + 2,
- rsa->message.length);
- }
- break;
- case RTE_CRYPTO_ASYM_OP_SIGN:
- rsa->sign.length = rsa_ctx->n.length;
- memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
- break;
- case RTE_CRYPTO_ASYM_OP_VERIFY:
- if (rsa->pad == RTE_CRYPTO_RSA_PADDING_NONE) {
- rsa->sign.length = rsa_ctx->n.length;
- memcpy(rsa->sign.data, req->rptr, rsa->sign.length);
- } else {
- /* Get length of signed output */
- rsa->sign.length = rte_cpu_to_be_16
- (*((uint16_t *)req->rptr));
- /*
- * Offset output data pointer by length field
- * (2 bytes) and copy signed data.
- */
- memcpy(rsa->sign.data, req->rptr + 2,
- rsa->sign.length);
- }
- if (memcmp(rsa->sign.data, rsa->message.data,
- rsa->message.length)) {
- CPT_LOG_DP_ERR("RSA verification failed");
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
- break;
- default:
- CPT_LOG_DP_DEBUG("Invalid RSA operation type");
- cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_dequeue_ecdsa_op(struct rte_crypto_ecdsa_op_param *ecdsa,
- struct cpt_request_info *req,
- struct cpt_asym_ec_ctx *ec)
-{
- int prime_len = ec_grp[ec->curveid].prime.length;
-
- if (ecdsa->op_type == RTE_CRYPTO_ASYM_OP_VERIFY)
- return;
-
- /* Separate out sign r and s components */
- memcpy(ecdsa->r.data, req->rptr, prime_len);
- memcpy(ecdsa->s.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8),
- prime_len);
- ecdsa->r.length = prime_len;
- ecdsa->s.length = prime_len;
-}
-
-static __rte_always_inline void
-otx2_cpt_asym_dequeue_ecpm_op(struct rte_crypto_ecpm_op_param *ecpm,
- struct cpt_request_info *req,
- struct cpt_asym_ec_ctx *ec)
-{
- int prime_len = ec_grp[ec->curveid].prime.length;
-
- memcpy(ecpm->r.x.data, req->rptr, prime_len);
- memcpy(ecpm->r.y.data, req->rptr + RTE_ALIGN_CEIL(prime_len, 8),
- prime_len);
- ecpm->r.x.length = prime_len;
- ecpm->r.y.length = prime_len;
-}
-
-static void
-otx2_cpt_asym_post_process(struct rte_crypto_op *cop,
- struct cpt_request_info *req)
-{
- struct rte_crypto_asym_op *op = cop->asym;
- struct cpt_asym_sess_misc *sess;
-
- sess = get_asym_session_private_data(op->session,
- otx2_cryptodev_driver_id);
-
- switch (sess->xfrm_type) {
- case RTE_CRYPTO_ASYM_XFORM_RSA:
- otx2_cpt_asym_rsa_op(cop, req, &sess->rsa_ctx);
- break;
- case RTE_CRYPTO_ASYM_XFORM_MODEX:
- op->modex.result.length = sess->mod_ctx.modulus.length;
- memcpy(op->modex.result.data, req->rptr,
- op->modex.result.length);
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECDSA:
- otx2_cpt_asym_dequeue_ecdsa_op(&op->ecdsa, req, &sess->ec_ctx);
- break;
- case RTE_CRYPTO_ASYM_XFORM_ECPM:
- otx2_cpt_asym_dequeue_ecpm_op(&op->ecpm, req, &sess->ec_ctx);
- break;
- default:
- CPT_LOG_DP_DEBUG("Invalid crypto xform type");
- cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- break;
- }
-}
-
-static void
-otx2_cpt_sec_post_process(struct rte_crypto_op *cop, uintptr_t *rsp)
-{
- struct cpt_request_info *req = (struct cpt_request_info *)rsp[2];
- vq_cmd_word0_t *word0 = (vq_cmd_word0_t *)&req->ist.ei0;
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m = sym_op->m_src;
- struct rte_ipv6_hdr *ip6;
- struct rte_ipv4_hdr *ip;
- uint16_t m_len = 0;
- int mdata_len;
- char *data;
-
- mdata_len = (int)rsp[3];
- rte_pktmbuf_trim(m, mdata_len);
-
- if (word0->s.opcode.major == OTX2_IPSEC_PO_PROCESS_IPSEC_INB) {
- data = rte_pktmbuf_mtod(m, char *);
- ip = (struct rte_ipv4_hdr *)(data +
- OTX2_IPSEC_PO_INB_RPTR_HDR);
-
- if ((ip->version_ihl >> 4) == 4) {
- m_len = rte_be_to_cpu_16(ip->total_length);
- } else {
- ip6 = (struct rte_ipv6_hdr *)(data +
- OTX2_IPSEC_PO_INB_RPTR_HDR);
- m_len = rte_be_to_cpu_16(ip6->payload_len) +
- sizeof(struct rte_ipv6_hdr);
- }
-
- m->data_len = m_len;
- m->pkt_len = m_len;
- m->data_off += OTX2_IPSEC_PO_INB_RPTR_HDR;
- }
-}
-
-static inline void
-otx2_cpt_dequeue_post_process(struct otx2_cpt_qp *qp, struct rte_crypto_op *cop,
- uintptr_t *rsp, uint8_t cc)
-{
- unsigned int sz;
-
- if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (cop->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
- if (likely(cc == OTX2_IPSEC_PO_CC_SUCCESS)) {
- otx2_cpt_sec_post_process(cop, rsp);
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
-
- return;
- }
-
- if (likely(cc == NO_ERR)) {
- /* Verify authentication data if required */
- if (unlikely(rsp[2]))
- compl_auth_verify(cop, (uint8_t *)rsp[2],
- rsp[3]);
- else
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else {
- if (cc == ERR_GC_ICV_MISCOMPARE)
- cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-
- if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
- sym_session_clear(otx2_cryptodev_driver_id,
- cop->sym->session);
- sz = rte_cryptodev_sym_get_existing_header_session_size(
- cop->sym->session);
- memset(cop->sym->session, 0, sz);
- rte_mempool_put(qp->sess_mp, cop->sym->session);
- cop->sym->session = NULL;
- }
- }
-
- if (cop->type == RTE_CRYPTO_OP_TYPE_ASYMMETRIC) {
- if (likely(cc == NO_ERR)) {
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- /*
- * Pass cpt_req_info stored in metabuf during
- * enqueue.
- */
- rsp = RTE_PTR_ADD(rsp, 4 * sizeof(uintptr_t));
- otx2_cpt_asym_post_process(cop,
- (struct cpt_request_info *)rsp);
- } else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-}
-
-static uint16_t
-otx2_cpt_dequeue_burst(void *qptr, struct rte_crypto_op **ops, uint16_t nb_ops)
-{
- int i, nb_pending, nb_completed;
- struct otx2_cpt_qp *qp = qptr;
- struct pending_queue *pend_q;
- struct cpt_request_info *req;
- struct rte_crypto_op *cop;
- uint8_t cc[nb_ops];
- uintptr_t *rsp;
- void *metabuf;
-
- pend_q = &qp->pend_q;
-
- nb_pending = pending_queue_level(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN);
-
- /* Ensure pcount isn't read before data lands */
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
-
- nb_ops = RTE_MIN(nb_ops, nb_pending);
-
- for (i = 0; i < nb_ops; i++) {
- pending_queue_peek(pend_q, (void **)&req,
- OTX2_CPT_DEFAULT_CMD_QLEN, 0);
-
- cc[i] = otx2_cpt_compcode_get(req);
-
- if (unlikely(cc[i] == ERR_REQ_PENDING))
- break;
-
- ops[i] = req->op;
-
- pending_queue_pop(pend_q, OTX2_CPT_DEFAULT_CMD_QLEN);
- }
-
- nb_completed = i;
-
- for (i = 0; i < nb_completed; i++) {
- rsp = (void *)ops[i];
-
- metabuf = (void *)rsp[0];
- cop = (void *)rsp[1];
-
- ops[i] = cop;
-
- otx2_cpt_dequeue_post_process(qp, cop, rsp, cc[i]);
-
- free_op_meta(metabuf, qp->meta_info.pool);
- }
-
- return nb_completed;
-}
-
-void
-otx2_cpt_set_enqdeq_fns(struct rte_cryptodev *dev)
-{
- dev->enqueue_burst = otx2_cpt_enqueue_burst;
- dev->dequeue_burst = otx2_cpt_dequeue_burst;
-
- rte_mb();
-}
-
-/* PMD ops */
-
-static int
-otx2_cpt_dev_config(struct rte_cryptodev *dev,
- struct rte_cryptodev_config *conf)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- int ret;
-
- if (conf->nb_queue_pairs > vf->max_queues) {
- CPT_LOG_ERR("Invalid number of queue pairs requested");
- return -EINVAL;
- }
-
- dev->feature_flags = otx2_cpt_default_ff_get() & ~conf->ff_disable;
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO) {
- /* Initialize shared FPM table */
- ret = cpt_fpm_init(otx2_fpm_iova);
- if (ret)
- return ret;
- }
-
- /* Unregister error interrupts */
- if (vf->err_intr_registered)
- otx2_cpt_err_intr_unregister(dev);
-
- /* Detach queues */
- if (vf->nb_queues) {
- ret = otx2_cpt_queues_detach(dev);
- if (ret) {
- CPT_LOG_ERR("Could not detach CPT queues");
- return ret;
- }
- }
-
- /* Attach queues */
- ret = otx2_cpt_queues_attach(dev, conf->nb_queue_pairs);
- if (ret) {
- CPT_LOG_ERR("Could not attach CPT queues");
- return -ENODEV;
- }
-
- ret = otx2_cpt_msix_offsets_get(dev);
- if (ret) {
- CPT_LOG_ERR("Could not get MSI-X offsets");
- goto queues_detach;
- }
-
- /* Register error interrupts */
- ret = otx2_cpt_err_intr_register(dev);
- if (ret) {
- CPT_LOG_ERR("Could not register error interrupts");
- goto queues_detach;
- }
-
- ret = otx2_cpt_inline_init(dev);
- if (ret) {
- CPT_LOG_ERR("Could not enable inline IPsec");
- goto intr_unregister;
- }
-
- otx2_cpt_set_enqdeq_fns(dev);
-
- return 0;
-
-intr_unregister:
- otx2_cpt_err_intr_unregister(dev);
-queues_detach:
- otx2_cpt_queues_detach(dev);
- return ret;
-}
-
-static int
-otx2_cpt_dev_start(struct rte_cryptodev *dev)
-{
- RTE_SET_USED(dev);
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- return 0;
-}
-
-static void
-otx2_cpt_dev_stop(struct rte_cryptodev *dev)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (dev->feature_flags & RTE_CRYPTODEV_FF_ASYMMETRIC_CRYPTO)
- cpt_fpm_clear();
-}
-
-static int
-otx2_cpt_dev_close(struct rte_cryptodev *dev)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
- int i, ret = 0;
-
- for (i = 0; i < dev->data->nb_queue_pairs; i++) {
- ret = otx2_cpt_queue_pair_release(dev, i);
- if (ret)
- return ret;
- }
-
- /* Unregister error interrupts */
- if (vf->err_intr_registered)
- otx2_cpt_err_intr_unregister(dev);
-
- /* Detach queues */
- if (vf->nb_queues) {
- ret = otx2_cpt_queues_detach(dev);
- if (ret)
- CPT_LOG_ERR("Could not detach CPT queues");
- }
-
- return ret;
-}
-
-static void
-otx2_cpt_dev_info_get(struct rte_cryptodev *dev,
- struct rte_cryptodev_info *info)
-{
- struct otx2_cpt_vf *vf = dev->data->dev_private;
-
- if (info != NULL) {
- info->max_nb_queue_pairs = vf->max_queues;
- info->feature_flags = otx2_cpt_default_ff_get();
- info->capabilities = otx2_cpt_capabilities_get();
- info->sym.max_nb_sessions = 0;
- info->driver_id = otx2_cryptodev_driver_id;
- info->min_mbuf_headroom_req = OTX2_CPT_MIN_HEADROOM_REQ;
- info->min_mbuf_tailroom_req = OTX2_CPT_MIN_TAILROOM_REQ;
- }
-}
-
-static int
-otx2_cpt_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
- const struct rte_cryptodev_qp_conf *conf,
- int socket_id __rte_unused)
-{
- uint8_t grp_mask = OTX2_CPT_ENG_GRPS_MASK;
- struct rte_pci_device *pci_dev;
- struct otx2_cpt_qp *qp;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (dev->data->queue_pairs[qp_id] != NULL)
- otx2_cpt_queue_pair_release(dev, qp_id);
-
- if (conf->nb_descriptors > OTX2_CPT_DEFAULT_CMD_QLEN) {
- CPT_LOG_ERR("Could not setup queue pair for %u descriptors",
- conf->nb_descriptors);
- return -EINVAL;
- }
-
- pci_dev = RTE_DEV_TO_PCI(dev->device);
-
- if (pci_dev->mem_resource[2].addr == NULL) {
- CPT_LOG_ERR("Invalid PCI mem address");
- return -EIO;
- }
-
- qp = otx2_cpt_qp_create(dev, qp_id, grp_mask);
- if (qp == NULL) {
- CPT_LOG_ERR("Could not create queue pair %d", qp_id);
- return -ENOMEM;
- }
-
- qp->sess_mp = conf->mp_session;
- qp->sess_mp_priv = conf->mp_session_private;
- dev->data->queue_pairs[qp_id] = qp;
-
- return 0;
-}
-
-static int
-otx2_cpt_queue_pair_release(struct rte_cryptodev *dev, uint16_t qp_id)
-{
- struct otx2_cpt_qp *qp = dev->data->queue_pairs[qp_id];
- int ret;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (qp == NULL)
- return -EINVAL;
-
- CPT_LOG_INFO("Releasing queue pair %d", qp_id);
-
- ret = otx2_cpt_qp_destroy(dev, qp);
- if (ret) {
- CPT_LOG_ERR("Could not destroy queue pair %d", qp_id);
- return ret;
- }
-
- dev->data->queue_pairs[qp_id] = NULL;
-
- return 0;
-}
-
-static unsigned int
-otx2_cpt_sym_session_get_size(struct rte_cryptodev *dev __rte_unused)
-{
- return cpt_get_session_size();
-}
-
-static int
-otx2_cpt_sym_session_configure(struct rte_cryptodev *dev,
- struct rte_crypto_sym_xform *xform,
- struct rte_cryptodev_sym_session *sess,
- struct rte_mempool *pool)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- return sym_session_configure(dev->driver_id, xform, sess, pool);
-}
-
-static void
-otx2_cpt_sym_session_clear(struct rte_cryptodev *dev,
- struct rte_cryptodev_sym_session *sess)
-{
- CPT_PMD_INIT_FUNC_TRACE();
-
- return sym_session_clear(dev->driver_id, sess);
-}
-
-static unsigned int
-otx2_cpt_asym_session_size_get(struct rte_cryptodev *dev __rte_unused)
-{
- return sizeof(struct cpt_asym_sess_misc);
-}
-
-static int
-otx2_cpt_asym_session_cfg(struct rte_cryptodev *dev,
- struct rte_crypto_asym_xform *xform,
- struct rte_cryptodev_asym_session *sess,
- struct rte_mempool *pool)
-{
- struct cpt_asym_sess_misc *priv;
- vq_cmd_word3_t vq_cmd_w3;
- int ret;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- if (rte_mempool_get(pool, (void **)&priv)) {
- CPT_LOG_ERR("Could not allocate session_private_data");
- return -ENOMEM;
- }
-
- memset(priv, 0, sizeof(struct cpt_asym_sess_misc));
-
- ret = cpt_fill_asym_session_parameters(priv, xform);
- if (ret) {
- CPT_LOG_ERR("Could not configure session parameters");
-
- /* Return session to mempool */
- rte_mempool_put(pool, priv);
- return ret;
- }
-
- vq_cmd_w3.u64 = 0;
- vq_cmd_w3.s.grp = OTX2_CPT_EGRP_AE;
- priv->cpt_inst_w7 = vq_cmd_w3.u64;
-
- set_asym_session_private_data(sess, dev->driver_id, priv);
-
- return 0;
-}
-
-static void
-otx2_cpt_asym_session_clear(struct rte_cryptodev *dev,
- struct rte_cryptodev_asym_session *sess)
-{
- struct cpt_asym_sess_misc *priv;
- struct rte_mempool *sess_mp;
-
- CPT_PMD_INIT_FUNC_TRACE();
-
- priv = get_asym_session_private_data(sess, dev->driver_id);
- if (priv == NULL)
- return;
-
- /* Free resources allocated in session_cfg */
- cpt_free_asym_session_parameters(priv);
-
- /* Reset and free object back to pool */
- memset(priv, 0, otx2_cpt_asym_session_size_get(dev));
- sess_mp = rte_mempool_from_obj(priv);
- set_asym_session_private_data(sess, dev->driver_id, NULL);
- rte_mempool_put(sess_mp, priv);
-}
-
-struct rte_cryptodev_ops otx2_cpt_ops = {
- /* Device control ops */
- .dev_configure = otx2_cpt_dev_config,
- .dev_start = otx2_cpt_dev_start,
- .dev_stop = otx2_cpt_dev_stop,
- .dev_close = otx2_cpt_dev_close,
- .dev_infos_get = otx2_cpt_dev_info_get,
-
- .stats_get = NULL,
- .stats_reset = NULL,
- .queue_pair_setup = otx2_cpt_queue_pair_setup,
- .queue_pair_release = otx2_cpt_queue_pair_release,
-
- /* Symmetric crypto ops */
- .sym_session_get_size = otx2_cpt_sym_session_get_size,
- .sym_session_configure = otx2_cpt_sym_session_configure,
- .sym_session_clear = otx2_cpt_sym_session_clear,
-
- /* Asymmetric crypto ops */
- .asym_session_get_size = otx2_cpt_asym_session_size_get,
- .asym_session_configure = otx2_cpt_asym_session_cfg,
- .asym_session_clear = otx2_cpt_asym_session_clear,
-
-};
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
deleted file mode 100644
index 7faf7ad034..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.h
+++ /dev/null
@@ -1,15 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2019 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_OPS_H_
-#define _OTX2_CRYPTODEV_OPS_H_
-
-#include <cryptodev_pmd.h>
-
-#define OTX2_CPT_MIN_HEADROOM_REQ 48
-#define OTX2_CPT_MIN_TAILROOM_REQ 208
-
-extern struct rte_cryptodev_ops otx2_cpt_ops;
-
-#endif /* _OTX2_CRYPTODEV_OPS_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h b/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
deleted file mode 100644
index 01c081a216..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_ops_helper.h
+++ /dev/null
@@ -1,82 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_CRYPTODEV_OPS_HELPER_H_
-#define _OTX2_CRYPTODEV_OPS_HELPER_H_
-
-#include "cpt_pmd_logs.h"
-
-static void
-sym_session_clear(int driver_id, struct rte_cryptodev_sym_session *sess)
-{
- void *priv = get_sym_session_private_data(sess, driver_id);
- struct cpt_sess_misc *misc;
- struct rte_mempool *pool;
- struct cpt_ctx *ctx;
-
- if (priv == NULL)
- return;
-
- misc = priv;
- ctx = SESS_PRIV(misc);
-
- if (ctx->auth_key != NULL)
- rte_free(ctx->auth_key);
-
- memset(priv, 0, cpt_get_session_size());
-
- pool = rte_mempool_from_obj(priv);
-
- set_sym_session_private_data(sess, driver_id, NULL);
-
- rte_mempool_put(pool, priv);
-}
-
-static __rte_always_inline uint8_t
-otx2_cpt_compcode_get(struct cpt_request_info *req)
-{
- volatile struct cpt_res_s_9s *res;
- uint8_t ret;
-
- res = (volatile struct cpt_res_s_9s *)req->completion_addr;
-
- if (unlikely(res->compcode == CPT_9X_COMP_E_NOTDONE)) {
- if (rte_get_timer_cycles() < req->time_out)
- return ERR_REQ_PENDING;
-
- CPT_LOG_DP_ERR("Request timed out");
- return ERR_REQ_TIMEOUT;
- }
-
- if (likely(res->compcode == CPT_9X_COMP_E_GOOD)) {
- ret = NO_ERR;
- if (unlikely(res->uc_compcode)) {
- ret = res->uc_compcode;
- CPT_LOG_DP_DEBUG("Request failed with microcode error");
- CPT_LOG_DP_DEBUG("MC completion code 0x%x",
- res->uc_compcode);
- }
- } else {
- CPT_LOG_DP_DEBUG("HW completion code 0x%x", res->compcode);
-
- ret = res->compcode;
- switch (res->compcode) {
- case CPT_9X_COMP_E_INSTERR:
- CPT_LOG_DP_ERR("Request failed with instruction error");
- break;
- case CPT_9X_COMP_E_FAULT:
- CPT_LOG_DP_ERR("Request failed with DMA fault");
- break;
- case CPT_9X_COMP_E_HWERR:
- CPT_LOG_DP_ERR("Request failed with hardware error");
- break;
- default:
- CPT_LOG_DP_ERR("Request failed with unknown completion code");
- }
- }
-
- return ret;
-}
-
-#endif /* _OTX2_CRYPTODEV_OPS_HELPER_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h b/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
deleted file mode 100644
index 95bce3621a..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_qp.h
+++ /dev/null
@@ -1,46 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020-2021 Marvell.
- */
-
-#ifndef _OTX2_CRYPTODEV_QP_H_
-#define _OTX2_CRYPTODEV_QP_H_
-
-#include <rte_common.h>
-#include <rte_eventdev.h>
-#include <rte_mempool.h>
-#include <rte_spinlock.h>
-
-#include "cpt_common.h"
-
-struct otx2_cpt_qp {
- uint32_t id;
- /**< Queue pair id */
- uint8_t blkaddr;
- /**< CPT0/1 BLKADDR of LF */
- uintptr_t base;
- /**< Base address where BAR is mapped */
- void *lmtline;
- /**< Address of LMTLINE */
- rte_iova_t lf_nq_reg;
- /**< LF enqueue register address */
- struct pending_queue pend_q;
- /**< Pending queue */
- struct rte_mempool *sess_mp;
- /**< Session mempool */
- struct rte_mempool *sess_mp_priv;
- /**< Session private data mempool */
- struct cpt_qp_meta_info meta_info;
- /**< Metabuf info required to support operations on the queue pair */
- rte_iova_t iq_dma_addr;
- /**< Instruction queue address */
- struct rte_event ev;
- /**< Event information required for binding cryptodev queue to
- * eventdev queue. Used by crypto adapter.
- */
- uint8_t ca_enable;
- /**< Set when queue pair is added to crypto adapter */
- uint8_t qp_ev_bind;
- /**< Set when queue pair is bound to event queue */
-};
-
-#endif /* _OTX2_CRYPTODEV_QP_H_ */
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c b/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
deleted file mode 100644
index 9a4f84f8d8..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.c
+++ /dev/null
@@ -1,655 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_esp.h>
-#include <rte_ethdev.h>
-#include <rte_ip.h>
-#include <rte_malloc.h>
-#include <rte_security.h>
-#include <rte_security_driver.h>
-#include <rte_udp.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_capabilities.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_ops.h"
-#include "otx2_cryptodev_sec.h"
-#include "otx2_security.h"
-
-static int
-ipsec_lp_len_precalc(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_sec_session_ipsec_lp *lp)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
-
- lp->partial_len = 0;
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- lp->partial_len = sizeof(struct rte_ipv4_hdr);
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- lp->partial_len = sizeof(struct rte_ipv6_hdr);
- else
- return -EINVAL;
- }
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
- lp->partial_len += sizeof(struct rte_esp_hdr);
- lp->roundup_len = sizeof(struct rte_esp_tail);
- } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) {
- lp->partial_len += OTX2_SEC_AH_HDR_LEN;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->options.udp_encap)
- lp->partial_len += sizeof(struct rte_udp_hdr);
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- lp->partial_len += OTX2_SEC_AES_GCM_IV_LEN;
- lp->partial_len += OTX2_SEC_AES_GCM_MAC_LEN;
- lp->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN;
- return 0;
- } else {
- return -EINVAL;
- }
- }
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- lp->partial_len += OTX2_SEC_AES_CBC_IV_LEN;
- lp->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN;
- } else {
- return -EINVAL;
- }
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- lp->partial_len += OTX2_SEC_SHA1_HMAC_LEN;
- else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
- lp->partial_len += OTX2_SEC_SHA2_HMAC_LEN;
- else
- return -EINVAL;
-
- return 0;
-}
-
-static int
-otx2_cpt_enq_sa_write(struct otx2_sec_session_ipsec_lp *lp,
- struct otx2_cpt_qp *qptr, uint8_t opcode)
-{
- uint64_t lmt_status, time_out;
- void *lmtline = qptr->lmtline;
- struct otx2_cpt_inst_s inst;
- struct otx2_cpt_res *res;
- uint64_t *mdata;
- int ret = 0;
-
- if (unlikely(rte_mempool_get(qptr->meta_info.pool,
- (void **)&mdata) < 0))
- return -ENOMEM;
-
- res = (struct otx2_cpt_res *)RTE_PTR_ALIGN(mdata, 16);
- res->compcode = CPT_9X_COMP_E_NOTDONE;
-
- inst.opcode = opcode | (lp->ctx_len << 8);
- inst.param1 = 0;
- inst.param2 = 0;
- inst.dlen = lp->ctx_len << 3;
- inst.dptr = rte_mempool_virt2iova(lp);
- inst.rptr = 0;
- inst.cptr = rte_mempool_virt2iova(lp);
- inst.egrp = OTX2_CPT_EGRP_SE;
-
- inst.u64[0] = 0;
- inst.u64[2] = 0;
- inst.u64[3] = 0;
- inst.res_addr = rte_mempool_virt2iova(res);
-
- rte_io_wmb();
-
- do {
- /* Copy CPT command to LMTLINE */
- otx2_lmt_mov(lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(qptr->lf_nq_reg);
- } while (lmt_status == 0);
-
- time_out = rte_get_timer_cycles() +
- DEFAULT_COMMAND_TIMEOUT * rte_get_timer_hz();
-
- while (res->compcode == CPT_9X_COMP_E_NOTDONE) {
- if (rte_get_timer_cycles() > time_out) {
- rte_mempool_put(qptr->meta_info.pool, mdata);
- otx2_err("Request timed out");
- return -ETIMEDOUT;
- }
- rte_io_rmb();
- }
-
- if (unlikely(res->compcode != CPT_9X_COMP_E_GOOD)) {
- ret = res->compcode;
- switch (ret) {
- case CPT_9X_COMP_E_INSTERR:
- otx2_err("Request failed with instruction error");
- break;
- case CPT_9X_COMP_E_FAULT:
- otx2_err("Request failed with DMA fault");
- break;
- case CPT_9X_COMP_E_HWERR:
- otx2_err("Request failed with hardware error");
- break;
- default:
- otx2_err("Request failed with unknown hardware "
- "completion code : 0x%x", ret);
- }
- goto mempool_put;
- }
-
- if (unlikely(res->uc_compcode != OTX2_IPSEC_PO_CC_SUCCESS)) {
- ret = res->uc_compcode;
- switch (ret) {
- case OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED:
- otx2_err("Invalid auth type");
- break;
- case OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED:
- otx2_err("Invalid encrypt type");
- break;
- default:
- otx2_err("Request failed with unknown microcode "
- "completion code : 0x%x", ret);
- }
- }
-
-mempool_put:
- rte_mempool_put(qptr->meta_info.pool, mdata);
- return ret;
-}
-
-static void
-set_session_misc_attributes(struct otx2_sec_session_ipsec_lp *sess,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_crypto_sym_xform *auth_xform,
- struct rte_crypto_sym_xform *cipher_xform)
-{
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- sess->iv_offset = crypto_xform->aead.iv.offset;
- sess->iv_length = crypto_xform->aead.iv.length;
- sess->aad_length = crypto_xform->aead.aad_length;
- sess->mac_len = crypto_xform->aead.digest_length;
- } else {
- sess->iv_offset = cipher_xform->cipher.iv.offset;
- sess->iv_length = cipher_xform->cipher.iv.length;
- sess->auth_iv_offset = auth_xform->auth.iv.offset;
- sess->auth_iv_length = auth_xform->auth.iv.length;
- sess->mac_len = auth_xform->auth.digest_length;
- }
-}
-
-static int
-crypto_sec_ipsec_outb_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_ipsec_po_ip_template *template = NULL;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_sec_session_ipsec_lp *lp;
- struct otx2_ipsec_po_sa_ctl *ctl;
- int cipher_key_len, auth_key_len;
- struct otx2_ipsec_po_out_sa *sa;
- struct otx2_sec_session *sess;
- struct otx2_cpt_inst_s inst;
- struct rte_ipv6_hdr *ip6;
- struct rte_ipv4_hdr *ip;
- int ret, ctx_len;
-
- sess = get_sec_session_private_data(sec_sess);
- sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
- lp = &sess->ipsec.lp;
-
- sa = &lp->out_sa;
- ctl = &sa->ctl;
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_po_out_sa));
-
- /* Initialize lookaside ipsec private data */
- lp->ip_id = 0;
- lp->seq_lo = 1;
- lp->seq_hi = 0;
-
- ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- return ret;
-
- ret = ipsec_lp_len_precalc(ipsec, crypto_xform, lp);
- if (ret)
- return ret;
-
- /* Start ip id from 1 */
- lp->ip_id = 1;
-
- if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) {
- template = &sa->aes_gcm.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- aes_gcm.template) + sizeof(
- sa->aes_gcm.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA1) {
- template = &sa->sha1.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha1.template) + sizeof(
- sa->sha1.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256) {
- template = &sa->sha2.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha2.template) + sizeof(
- sa->sha2.template.ip4);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else {
- return -EINVAL;
- }
- ip = &template->ip4.ipv4_hdr;
- if (ipsec->options.udp_encap) {
- ip->next_proto_id = IPPROTO_UDP;
- template->ip4.udp_src = rte_be_to_cpu_16(4500);
- template->ip4.udp_dst = rte_be_to_cpu_16(4500);
- } else {
- ip->next_proto_id = IPPROTO_ESP;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- ip->version_ihl = RTE_IPV4_VHL_DEF;
- ip->time_to_live = ipsec->tunnel.ipv4.ttl;
- ip->type_of_service |= (ipsec->tunnel.ipv4.dscp << 2);
- if (ipsec->tunnel.ipv4.df)
- ip->fragment_offset = BIT(14);
- memcpy(&ip->src_addr, &ipsec->tunnel.ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&ip->dst_addr, &ipsec->tunnel.ipv4.dst_ip,
- sizeof(struct in_addr));
- } else if (ipsec->tunnel.type ==
- RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
-
- if (ctl->enc_type == OTX2_IPSEC_PO_SA_ENC_AES_GCM) {
- template = &sa->aes_gcm.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- aes_gcm.template) + sizeof(
- sa->aes_gcm.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA1) {
- template = &sa->sha1.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha1.template) + sizeof(
- sa->sha1.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else if (ctl->auth_type ==
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256) {
- template = &sa->sha2.template;
- ctx_len = offsetof(struct otx2_ipsec_po_out_sa,
- sha2.template) + sizeof(
- sa->sha2.template.ip6);
- ctx_len = RTE_ALIGN_CEIL(ctx_len, 8);
- lp->ctx_len = ctx_len >> 3;
- } else {
- return -EINVAL;
- }
-
- ip6 = &template->ip6.ipv6_hdr;
- if (ipsec->options.udp_encap) {
- ip6->proto = IPPROTO_UDP;
- template->ip6.udp_src = rte_be_to_cpu_16(4500);
- template->ip6.udp_dst = rte_be_to_cpu_16(4500);
- } else {
- ip6->proto = (ipsec->proto ==
- RTE_SECURITY_IPSEC_SA_PROTO_ESP) ?
- IPPROTO_ESP : IPPROTO_AH;
- }
- ip6->vtc_flow = rte_cpu_to_be_32(0x60000000 |
- ((ipsec->tunnel.ipv6.dscp <<
- RTE_IPV6_HDR_TC_SHIFT) &
- RTE_IPV6_HDR_TC_MASK) |
- ((ipsec->tunnel.ipv6.flabel <<
- RTE_IPV6_HDR_FL_SHIFT) &
- RTE_IPV6_HDR_FL_MASK));
- ip6->hop_limits = ipsec->tunnel.ipv6.hlimit;
- memcpy(&ip6->src_addr, &ipsec->tunnel.ipv6.src_addr,
- sizeof(struct in6_addr));
- memcpy(&ip6->dst_addr, &ipsec->tunnel.ipv6.dst_addr,
- sizeof(struct in6_addr));
- }
- }
-
- cipher_xform = crypto_xform;
- auth_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- memcpy(sa->sha1.hmac_key, auth_key, auth_key_len);
- else if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC)
- memcpy(sa->sha2.hmac_key, auth_key, auth_key_len);
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_SE;
- inst.cptr = rte_mempool_virt2iova(sa);
-
- lp->cpt_inst_w7 = inst.u64[7];
- lp->ucmd_opcode = (lp->ctx_len << 8) |
- (OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB);
-
- /* Set per packet IV and IKEv2 bits */
- lp->ucmd_param1 = BIT(11) | BIT(9);
- lp->ucmd_param2 = 0;
-
- set_session_misc_attributes(lp, crypto_xform,
- auth_xform, cipher_xform);
-
- return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0],
- OTX2_IPSEC_PO_WRITE_IPSEC_OUTB);
-}
-
-static int
-crypto_sec_ipsec_inb_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_sec_session_ipsec_lp *lp;
- struct otx2_ipsec_po_sa_ctl *ctl;
- int cipher_key_len, auth_key_len;
- struct otx2_ipsec_po_in_sa *sa;
- struct otx2_sec_session *sess;
- struct otx2_cpt_inst_s inst;
- int ret;
-
- sess = get_sec_session_private_data(sec_sess);
- sess->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
- lp = &sess->ipsec.lp;
-
- sa = &lp->in_sa;
- ctl = &sa->ctl;
-
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_po_in_sa));
- sa->replay_win_sz = ipsec->replay_win_sz;
-
- ret = ipsec_po_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- return ret;
-
- auth_xform = crypto_xform;
- cipher_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->iv.gcm.nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
-
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- aes_gcm.hmac_key[0]) >> 3;
- RTE_ASSERT(lp->ctx_len == OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN);
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- memcpy(sa->aes_gcm.hmac_key, auth_key, auth_key_len);
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- aes_gcm.selector) >> 3;
- } else if (auth_xform->auth.algo ==
- RTE_CRYPTO_AUTH_SHA256_HMAC) {
- memcpy(sa->sha2.hmac_key, auth_key, auth_key_len);
- lp->ctx_len = offsetof(struct otx2_ipsec_po_in_sa,
- sha2.selector) >> 3;
- }
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_SE;
- inst.cptr = rte_mempool_virt2iova(sa);
-
- lp->cpt_inst_w7 = inst.u64[7];
- lp->ucmd_opcode = (lp->ctx_len << 8) |
- (OTX2_IPSEC_PO_PROCESS_IPSEC_INB);
- lp->ucmd_param1 = 0;
-
- /* Set IKEv2 bit */
- lp->ucmd_param2 = BIT(12);
-
- set_session_misc_attributes(lp, crypto_xform,
- auth_xform, cipher_xform);
-
- if (sa->replay_win_sz) {
- if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) {
- otx2_err("Replay window size is not supported");
- return -ENOTSUP;
- }
- sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay),
- 0);
- if (sa->replay == NULL)
- return -ENOMEM;
-
- /* Set window bottom to 1, base and top to size of window */
- sa->replay->winb = 1;
- sa->replay->wint = sa->replay_win_sz;
- sa->replay->base = sa->replay_win_sz;
- sa->esn_low = 0;
- sa->esn_hi = 0;
- }
-
- return otx2_cpt_enq_sa_write(lp, crypto_dev->data->queue_pairs[0],
- OTX2_IPSEC_PO_WRITE_IPSEC_INB);
-}
-
-static int
-crypto_sec_ipsec_session_create(struct rte_cryptodev *crypto_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sess)
-{
- int ret;
-
- if (crypto_dev->data->queue_pairs[0] == NULL) {
- otx2_err("Setup cpt queue pair before creating sec session");
- return -EPERM;
- }
-
- ret = ipsec_po_xform_verify(ipsec, crypto_xform);
- if (ret)
- return ret;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
- return crypto_sec_ipsec_inb_session_create(crypto_dev, ipsec,
- crypto_xform, sess);
- else
- return crypto_sec_ipsec_outb_session_create(crypto_dev, ipsec,
- crypto_xform, sess);
-}
-
-static int
-otx2_crypto_sec_session_create(void *device,
- struct rte_security_session_conf *conf,
- struct rte_security_session *sess,
- struct rte_mempool *mempool)
-{
- struct otx2_sec_session *priv;
- int ret;
-
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_LOOKASIDE_PROTOCOL)
- return -ENOTSUP;
-
- if (rte_security_dynfield_register() < 0)
- return -rte_errno;
-
- if (rte_mempool_get(mempool, (void **)&priv)) {
- otx2_err("Could not allocate security session private data");
- return -ENOMEM;
- }
-
- set_sec_session_private_data(sess, priv);
-
- priv->userdata = conf->userdata;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
- ret = crypto_sec_ipsec_session_create(device, &conf->ipsec,
- conf->crypto_xform,
- sess);
- else
- ret = -ENOTSUP;
-
- if (ret)
- goto mempool_put;
-
- return 0;
-
-mempool_put:
- rte_mempool_put(mempool, priv);
- set_sec_session_private_data(sess, NULL);
- return ret;
-}
-
-static int
-otx2_crypto_sec_session_destroy(void *device __rte_unused,
- struct rte_security_session *sess)
-{
- struct otx2_sec_session *priv;
- struct rte_mempool *sess_mp;
-
- priv = get_sec_session_private_data(sess);
-
- if (priv == NULL)
- return 0;
-
- sess_mp = rte_mempool_from_obj(priv);
-
- memset(priv, 0, sizeof(*priv));
-
- set_sec_session_private_data(sess, NULL);
- rte_mempool_put(sess_mp, priv);
-
- return 0;
-}
-
-static unsigned int
-otx2_crypto_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct otx2_sec_session);
-}
-
-static int
-otx2_crypto_sec_set_pkt_mdata(void *device __rte_unused,
- struct rte_security_session *session,
- struct rte_mbuf *m, void *params __rte_unused)
-{
- /* Set security session as the pkt metadata */
- *rte_security_dynfield(m) = (rte_security_dynfield_t)session;
-
- return 0;
-}
-
-static int
-otx2_crypto_sec_get_userdata(void *device __rte_unused, uint64_t md,
- void **userdata)
-{
- /* Retrieve userdata */
- *userdata = (void *)md;
-
- return 0;
-}
-
-static struct rte_security_ops otx2_crypto_sec_ops = {
- .session_create = otx2_crypto_sec_session_create,
- .session_destroy = otx2_crypto_sec_session_destroy,
- .session_get_size = otx2_crypto_sec_session_get_size,
- .set_pkt_metadata = otx2_crypto_sec_set_pkt_mdata,
- .get_userdata = otx2_crypto_sec_get_userdata,
- .capabilities_get = otx2_crypto_sec_capabilities_get
-};
-
-int
-otx2_crypto_sec_ctx_create(struct rte_cryptodev *cdev)
-{
- struct rte_security_ctx *ctx;
-
- ctx = rte_malloc("otx2_cpt_dev_sec_ctx",
- sizeof(struct rte_security_ctx), 0);
-
- if (ctx == NULL)
- return -ENOMEM;
-
- /* Populate ctx */
- ctx->device = cdev;
- ctx->ops = &otx2_crypto_sec_ops;
- ctx->sess_cnt = 0;
-
- cdev->security_ctx = ctx;
-
- return 0;
-}
-
-void
-otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *cdev)
-{
- rte_free(cdev->security_ctx);
-}
diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h b/drivers/crypto/octeontx2/otx2_cryptodev_sec.h
deleted file mode 100644
index ff3329c9c1..0000000000
--- a/drivers/crypto/octeontx2/otx2_cryptodev_sec.h
+++ /dev/null
@@ -1,64 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_CRYPTODEV_SEC_H__
-#define __OTX2_CRYPTODEV_SEC_H__
-
-#include <rte_cryptodev.h>
-
-#include "otx2_ipsec_po.h"
-
-struct otx2_sec_session_ipsec_lp {
- RTE_STD_C11
- union {
- /* Inbound SA */
- struct otx2_ipsec_po_in_sa in_sa;
- /* Outbound SA */
- struct otx2_ipsec_po_out_sa out_sa;
- };
-
- uint64_t cpt_inst_w7;
- union {
- uint64_t ucmd_w0;
- struct {
- uint16_t ucmd_dlen;
- uint16_t ucmd_param2;
- uint16_t ucmd_param1;
- uint16_t ucmd_opcode;
- };
- };
-
- uint8_t partial_len;
- uint8_t roundup_len;
- uint8_t roundup_byte;
- uint16_t ip_id;
- union {
- uint64_t esn;
- struct {
- uint32_t seq_lo;
- uint32_t seq_hi;
- };
- };
-
- /** Context length in 8-byte words */
- size_t ctx_len;
- /** Auth IV offset in bytes */
- uint16_t auth_iv_offset;
- /** IV offset in bytes */
- uint16_t iv_offset;
- /** AAD length */
- uint16_t aad_length;
- /** MAC len in bytes */
- uint8_t mac_len;
- /** IV length in bytes */
- uint8_t iv_length;
- /** Auth IV length in bytes */
- uint8_t auth_iv_length;
-};
-
-int otx2_crypto_sec_ctx_create(struct rte_cryptodev *crypto_dev);
-
-void otx2_crypto_sec_ctx_destroy(struct rte_cryptodev *crypto_dev);
-
-#endif /* __OTX2_CRYPTODEV_SEC_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h b/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
deleted file mode 100644
index 089a3d073a..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_anti_replay.h
+++ /dev/null
@@ -1,227 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_ANTI_REPLAY_H__
-#define __OTX2_IPSEC_ANTI_REPLAY_H__
-
-#include <rte_mbuf.h>
-
-#include "otx2_ipsec_fp.h"
-
-#define WORD_SHIFT 6
-#define WORD_SIZE (1 << WORD_SHIFT)
-#define WORD_MASK (WORD_SIZE - 1)
-
-#define IPSEC_ANTI_REPLAY_FAILED (-1)
-
-static inline int
-anti_replay_check(struct otx2_ipsec_replay *replay, uint64_t seq,
- uint64_t winsz)
-{
- uint64_t *window = &replay->window[0];
- uint64_t ex_winsz = winsz + WORD_SIZE;
- uint64_t winwords = ex_winsz >> WORD_SHIFT;
- uint64_t base = replay->base;
- uint32_t winb = replay->winb;
- uint32_t wint = replay->wint;
- uint64_t seqword, shiftwords;
- uint64_t bit_pos;
- uint64_t shift;
- uint64_t *wptr;
- uint64_t tmp;
-
- if (winsz > 64)
- goto slow_shift;
- /* Check if the seq is the biggest one yet */
- if (likely(seq > base)) {
- shift = seq - base;
- if (shift < winsz) { /* In window */
-			/*
-			 * Window of at most 64 bits: slide it left
-			 * by the gap and mark the new base as seen
-			 */
- wptr = window + (shift >> WORD_SHIFT);
- *wptr <<= shift;
- *wptr |= 1ull;
- } else {
-			/* Incoming seq leaves the whole window behind */
- wptr = window + ((winsz - 1) >> WORD_SHIFT);
- /*
- * Zero out the whole window (especially for
- * bigger than 64b window) till the last 64b word
- * as the incoming sequence number minus
- * base sequence is more than the window size.
- */
- while (window != wptr)
- *window++ = 0ull;
-			/*
-			 * Set the last bit (of the window) to 1,
-			 * as that corresponds to the base sequence number.
-			 * Any incoming sequence number greater than
-			 * (base - window size) will now pass the
-			 * anti-replay check
-			 */
- *wptr = 1ull;
- }
-		/*
-		 * Set the base to the incoming sequence number,
-		 * as it is the biggest sequence number seen yet
-		 */
- replay->base = seq;
- return 0;
- }
-
- bit_pos = base - seq;
-
- /* If seq falls behind the window, return failure */
- if (bit_pos >= winsz)
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* seq is within anti-replay window */
- wptr = window + ((winsz - bit_pos - 1) >> WORD_SHIFT);
- bit_pos &= WORD_MASK;
-
- /* Check if this is a replayed packet */
- if (*wptr & ((1ull) << bit_pos))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* mark as seen */
- *wptr |= ((1ull) << bit_pos);
- return 0;
-
-slow_shift:
- if (likely(seq > base)) {
- uint32_t i;
-
- shift = seq - base;
- if (unlikely(shift >= winsz)) {
- /*
- * shift is bigger than the window,
- * so just zero out everything
- */
- for (i = 0; i < winwords; i++)
- window[i] = 0;
-winupdate:
- /* Find out the word */
- seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
-
- /* Find out the bit in the word */
- bit_pos = (seq - 1) & WORD_MASK;
-
- /*
- * Set the bit corresponding to sequence number
- * in window to mark it as received
- */
- window[seqword] |= (1ull << (63 - bit_pos));
-
- /* wint and winb range from 1 to ex_winsz */
- replay->wint = ((wint + shift - 1) % ex_winsz) + 1;
- replay->winb = ((winb + shift - 1) % ex_winsz) + 1;
-
- replay->base = seq;
- return 0;
- }
-
- /*
- * New sequence number is bigger than the base but
- * it's not bigger than base + window size
- */
-
- shiftwords = ((wint + shift - 1) >> WORD_SHIFT) -
- ((wint - 1) >> WORD_SHIFT);
- if (unlikely(shiftwords)) {
- tmp = (wint + WORD_SIZE - 1) / WORD_SIZE;
- for (i = 0; i < shiftwords; i++) {
- tmp %= winwords;
- window[tmp++] = 0;
- }
- }
-
- goto winupdate;
- }
-
- /* Sequence number is before the window */
- if (unlikely((seq + winsz) <= base))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /* Sequence number is within the window */
-
- /* Find out the word */
- seqword = ((seq - 1) % ex_winsz) >> WORD_SHIFT;
-
- /* Find out the bit in the word */
- bit_pos = (seq - 1) & WORD_MASK;
-
- /* Check if this is a replayed packet */
- if (window[seqword] & (1ull << (63 - bit_pos)))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- /*
- * Set the bit corresponding to sequence number
- * in window to mark it as received
- */
- window[seqword] |= (1ull << (63 - bit_pos));
-
- return 0;
-}
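
For window sizes of at most 64 the entire window fits in one 64-bit word,
which is the fast path above: advancing the window is a left shift and
membership tests are single bit probes. A self-contained sketch of that idea,
using its own bit convention (bit i set means sequence base - i was seen)
rather than the driver's exact layout:

	#include <stdint.h>

	struct tiny_replay {
		uint64_t base;	 /* highest sequence number seen */
		uint64_t window; /* bit i set => (base - i) was seen */
	};

	/* Return 0 if seq is new, -1 if replayed or too old (winsz <= 64). */
	static int
	tiny_replay_check(struct tiny_replay *r, uint64_t seq, uint64_t winsz)
	{
		if (seq > r->base) {
			uint64_t shift = seq - r->base;

			/* Slide the window; a big jump clears it entirely */
			r->window = shift < winsz ? r->window << shift : 0;
			r->window |= 1ull;	/* mark the new base as seen */
			r->base = seq;
			return 0;
		}
		if (r->base - seq >= winsz)
			return -1;	/* fell off the back of the window */
		if (r->window & (1ull << (r->base - seq)))
			return -1;	/* replay */
		r->window |= 1ull << (r->base - seq);
		return 0;
	}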
-
-static inline int
-cpt_ipsec_ip_antireplay_check(struct otx2_ipsec_fp_in_sa *sa, void *l3_ptr)
-{
- struct otx2_ipsec_fp_res_hdr *hdr = l3_ptr;
- uint64_t seq_in_sa;
- uint32_t seqh = 0;
- uint32_t seql;
- uint64_t seq;
- uint8_t esn;
- int ret;
-
- esn = sa->ctl.esn_en;
- seql = rte_be_to_cpu_32(hdr->seq_no_lo);
-
- if (!esn)
- seq = (uint64_t)seql;
- else {
- seqh = rte_be_to_cpu_32(hdr->seq_no_hi);
- seq = ((uint64_t)seqh << 32) | seql;
- }
-
- if (unlikely(seq == 0))
- return IPSEC_ANTI_REPLAY_FAILED;
-
- rte_spinlock_lock(&sa->replay->lock);
- ret = anti_replay_check(sa->replay, seq, sa->replay_win_sz);
- if (esn && (ret == 0)) {
- seq_in_sa = ((uint64_t)rte_be_to_cpu_32(sa->esn_hi) << 32) |
- rte_be_to_cpu_32(sa->esn_low);
- if (seq > seq_in_sa) {
- sa->esn_low = rte_cpu_to_be_32(seql);
- sa->esn_hi = rte_cpu_to_be_32(seqh);
- }
- }
- rte_spinlock_unlock(&sa->replay->lock);
-
- return ret;
-}
-
-static inline uint32_t
-anti_replay_get_seqh(uint32_t winsz, uint32_t seql,
- uint32_t esn_hi, uint32_t esn_low)
-{
- uint32_t win_low = esn_low - winsz + 1;
-
- if (esn_low > winsz - 1) {
- /* Window is in one sequence number subspace */
- if (seql > win_low)
- return esn_hi;
- else
- return esn_hi + 1;
- } else {
- /* Window is split across two sequence number subspaces */
- if (seql > win_low)
- return esn_hi - 1;
- else
- return esn_hi;
- }
-}
-#endif /* __OTX2_IPSEC_ANTI_REPLAY_H__ */
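
anti_replay_get_seqh above recovers the ESN high word for a packet that
carries only the low 32 sequence bits: depending on whether the receive window
straddles a 32-bit wrap-around, the candidate is esn_hi - 1, esn_hi or
esn_hi + 1. A worked example with assumed values: with winsz = 64 and
esn_low = 5 the window has wrapped, so win_low = 5 - 64 + 1 = 0xffffffc6
(mod 2^32); a packet with seql = 0xfffffff0 then belongs to the previous
subspace and resolves to esn_hi - 1, while seql = 3 stays in the current
subspace and resolves to esn_hi.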
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_fp.h b/drivers/crypto/octeontx2/otx2_ipsec_fp.h
deleted file mode 100644
index 2461e7462b..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_fp.h
+++ /dev/null
@@ -1,371 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_FP_H__
-#define __OTX2_IPSEC_FP_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_security.h>
-
-/* Macros for anti replay and ESN */
-#define OTX2_IPSEC_MAX_REPLAY_WIN_SZ 1024
-
-struct otx2_ipsec_fp_res_hdr {
- uint32_t spi;
- uint32_t seq_no_lo;
- uint32_t seq_no_hi;
- uint32_t rsvd;
-};
-
-enum {
- OTX2_IPSEC_FP_SA_DIRECTION_INBOUND = 0,
- OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_IP_VERSION_4 = 0,
- OTX2_IPSEC_FP_SA_IP_VERSION_6 = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_MODE_TRANSPORT = 0,
- OTX2_IPSEC_FP_SA_MODE_TUNNEL = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_PROTOCOL_AH = 0,
- OTX2_IPSEC_FP_SA_PROTOCOL_ESP = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_128 = 1,
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_192 = 2,
- OTX2_IPSEC_FP_SA_AES_KEY_LEN_256 = 3,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_ENC_NULL = 0,
- OTX2_IPSEC_FP_SA_ENC_DES_CBC = 1,
- OTX2_IPSEC_FP_SA_ENC_3DES_CBC = 2,
- OTX2_IPSEC_FP_SA_ENC_AES_CBC = 3,
- OTX2_IPSEC_FP_SA_ENC_AES_CTR = 4,
- OTX2_IPSEC_FP_SA_ENC_AES_GCM = 5,
- OTX2_IPSEC_FP_SA_ENC_AES_CCM = 6,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_AUTH_NULL = 0,
- OTX2_IPSEC_FP_SA_AUTH_MD5 = 1,
- OTX2_IPSEC_FP_SA_AUTH_SHA1 = 2,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_224 = 3,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_256 = 4,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_384 = 5,
- OTX2_IPSEC_FP_SA_AUTH_SHA2_512 = 6,
- OTX2_IPSEC_FP_SA_AUTH_AES_GMAC = 7,
- OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128 = 8,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_FRAG_POST = 0,
- OTX2_IPSEC_FP_SA_FRAG_PRE = 1,
-};
-
-enum {
- OTX2_IPSEC_FP_SA_ENCAP_NONE = 0,
- OTX2_IPSEC_FP_SA_ENCAP_UDP = 1,
-};
-
-struct otx2_ipsec_fp_sa_ctl {
- rte_be32_t spi : 32;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_42_40 : 3;
- uint64_t esn_en : 1;
- uint64_t rsvd_45_44 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct otx2_ipsec_fp_out_sa {
- /* w0 */
- struct otx2_ipsec_fp_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4];
- uint16_t udp_src;
- uint16_t udp_dst;
-
- /* w2 */
- uint32_t ip_src;
- uint32_t ip_dst;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-};
-
-struct otx2_ipsec_replay {
- rte_spinlock_t lock;
- uint32_t winb;
- uint32_t wint;
- uint64_t base; /**< base of the anti-replay window */
- uint64_t window[17]; /**< anti-replay window */
-};
-
-struct otx2_ipsec_fp_in_sa {
- /* w0 */
- struct otx2_ipsec_fp_sa_ctl ctl;
-
- /* w1 */
- uint8_t nonce[4]; /* Only for AES-GCM */
- uint32_t unused;
-
- /* w2 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w3-w6 */
- uint8_t cipher_key[32];
-
- /* w7-w12 */
- uint8_t hmac_key[48];
-
- RTE_STD_C11
- union {
- void *userdata;
- uint64_t udata64;
- };
- union {
- struct otx2_ipsec_replay *replay;
- uint64_t replay64;
- };
- uint32_t replay_win_sz;
-
- uint32_t reserved1;
-};
-
-static inline int
-ipsec_fp_xform_cipher_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- switch (xform->cipher.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -ENOTSUP;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_auth_verify(struct rte_crypto_sym_xform *xform)
-{
- uint16_t keylen = xform->auth.key.length;
-
- if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- if (keylen >= 20 && keylen <= 64)
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_aead_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
- return -EINVAL;
-
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- switch (xform->aead.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -EINVAL;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_fp_xform_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- int ret;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- return ipsec_fp_xform_aead_verify(ipsec, xform);
-
- if (xform->next == NULL)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- /* Ingress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- /* Egress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- cipher_xform = xform;
- auth_xform = xform->next;
- }
-
- ret = ipsec_fp_xform_cipher_verify(cipher_xform);
- if (ret)
- return ret;
-
- ret = ipsec_fp_xform_auth_verify(auth_xform);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static inline int
-ipsec_fp_sa_ctl_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_ipsec_fp_sa_ctl *ctl)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
- int aes_key_len;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_OUTBOUND;
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- ctl->direction = OTX2_IPSEC_FP_SA_DIRECTION_INBOUND;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4;
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- ctl->outer_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_6;
- else
- return -EINVAL;
- }
-
- ctl->inner_ip_ver = OTX2_IPSEC_FP_SA_IP_VERSION_4;
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT)
- ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TRANSPORT;
- else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- ctl->ipsec_mode = OTX2_IPSEC_FP_SA_MODE_TUNNEL;
- else
- return -EINVAL;
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
- ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_AH;
- else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
- ctl->ipsec_proto = OTX2_IPSEC_FP_SA_PROTOCOL_ESP;
- else
- return -EINVAL;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_GCM;
- aes_key_len = xform->aead.key.length;
- } else {
- return -ENOTSUP;
- }
- } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- ctl->enc_type = OTX2_IPSEC_FP_SA_ENC_AES_CBC;
- aes_key_len = cipher_xform->cipher.key.length;
- } else {
- return -ENOTSUP;
- }
-
- switch (aes_key_len) {
- case 16:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_128;
- break;
- case 24:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_192;
- break;
- case 32:
- ctl->aes_key_len = OTX2_IPSEC_FP_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) {
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_NULL:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_NULL;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_MD5;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA1;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_224;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_256;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_384;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_SHA2_512;
- break;
- case RTE_CRYPTO_AUTH_AES_GMAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_GMAC;
- break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
- ctl->auth_type = OTX2_IPSEC_FP_SA_AUTH_AES_XCBC_128;
- break;
- default:
- return -ENOTSUP;
- }
- }
-
- if (ipsec->options.esn == 1)
- ctl->esn_en = 1;
-
- ctl->spi = rte_cpu_to_be_32(ipsec->spi);
-
- return 0;
-}
-
-#endif /* __OTX2_IPSEC_FP_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po.h b/drivers/crypto/octeontx2/otx2_ipsec_po.h
deleted file mode 100644
index 695f552644..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_po.h
+++ /dev/null
@@ -1,447 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_PO_H__
-#define __OTX2_IPSEC_PO_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_ip.h>
-#include <rte_security.h>
-
-#define OTX2_IPSEC_PO_AES_GCM_INB_CTX_LEN 0x09
-
-#define OTX2_IPSEC_PO_WRITE_IPSEC_OUTB 0x20
-#define OTX2_IPSEC_PO_WRITE_IPSEC_INB 0x21
-#define OTX2_IPSEC_PO_PROCESS_IPSEC_OUTB 0x23
-#define OTX2_IPSEC_PO_PROCESS_IPSEC_INB 0x24
-
-#define OTX2_IPSEC_PO_INB_RPTR_HDR 0x8
-
-enum otx2_ipsec_po_comp_e {
- OTX2_IPSEC_PO_CC_SUCCESS = 0x00,
- OTX2_IPSEC_PO_CC_AUTH_UNSUPPORTED = 0xB0,
- OTX2_IPSEC_PO_CC_ENCRYPT_UNSUPPORTED = 0xB1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_DIRECTION_INBOUND = 0,
- OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_IP_VERSION_4 = 0,
- OTX2_IPSEC_PO_SA_IP_VERSION_6 = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_MODE_TRANSPORT = 0,
- OTX2_IPSEC_PO_SA_MODE_TUNNEL = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_PROTOCOL_AH = 0,
- OTX2_IPSEC_PO_SA_PROTOCOL_ESP = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_128 = 1,
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_192 = 2,
- OTX2_IPSEC_PO_SA_AES_KEY_LEN_256 = 3,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_ENC_NULL = 0,
- OTX2_IPSEC_PO_SA_ENC_DES_CBC = 1,
- OTX2_IPSEC_PO_SA_ENC_3DES_CBC = 2,
- OTX2_IPSEC_PO_SA_ENC_AES_CBC = 3,
- OTX2_IPSEC_PO_SA_ENC_AES_CTR = 4,
- OTX2_IPSEC_PO_SA_ENC_AES_GCM = 5,
- OTX2_IPSEC_PO_SA_ENC_AES_CCM = 6,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_AUTH_NULL = 0,
- OTX2_IPSEC_PO_SA_AUTH_MD5 = 1,
- OTX2_IPSEC_PO_SA_AUTH_SHA1 = 2,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_224 = 3,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_256 = 4,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_384 = 5,
- OTX2_IPSEC_PO_SA_AUTH_SHA2_512 = 6,
- OTX2_IPSEC_PO_SA_AUTH_AES_GMAC = 7,
- OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128 = 8,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_FRAG_POST = 0,
- OTX2_IPSEC_PO_SA_FRAG_PRE = 1,
-};
-
-enum {
- OTX2_IPSEC_PO_SA_ENCAP_NONE = 0,
- OTX2_IPSEC_PO_SA_ENCAP_UDP = 1,
-};
-
-struct otx2_ipsec_po_out_hdr {
- uint32_t ip_id;
- uint32_t seq;
- uint8_t iv[16];
-};
-
-union otx2_ipsec_po_bit_perfect_iv {
- uint8_t aes_iv[16];
- uint8_t des_iv[8];
- struct {
- uint8_t nonce[4];
- uint8_t iv[8];
- uint8_t counter[4];
- } gcm;
-};
-
-struct otx2_ipsec_po_traffic_selector {
- rte_be16_t src_port[2];
- rte_be16_t dst_port[2];
- RTE_STD_C11
- union {
- struct {
- rte_be32_t src_addr[2];
- rte_be32_t dst_addr[2];
- } ipv4;
- struct {
- uint8_t src_addr[32];
- uint8_t dst_addr[32];
- } ipv6;
- };
-};
-
-struct otx2_ipsec_po_sa_ctl {
- rte_be32_t spi : 32;
- uint64_t exp_proto_inter_frag : 8;
- uint64_t rsvd_42_40 : 3;
- uint64_t esn_en : 1;
- uint64_t rsvd_45_44 : 2;
- uint64_t encap_type : 2;
- uint64_t enc_type : 3;
- uint64_t rsvd_48 : 1;
- uint64_t auth_type : 4;
- uint64_t valid : 1;
- uint64_t direction : 1;
- uint64_t outer_ip_ver : 1;
- uint64_t inner_ip_ver : 1;
- uint64_t ipsec_mode : 1;
- uint64_t ipsec_proto : 1;
- uint64_t aes_key_len : 2;
-};
-
-struct otx2_ipsec_po_in_sa {
- /* w0 */
- struct otx2_ipsec_po_sa_ctl ctl;
-
- /* w1-w4 */
- uint8_t cipher_key[32];
-
- /* w5-w6 */
- union otx2_ipsec_po_bit_perfect_iv iv;
-
- /* w7 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w8 */
- uint8_t udp_encap[8];
-
- /* w9-w33 */
- union {
- struct {
- uint8_t hmac_key[48];
- struct otx2_ipsec_po_traffic_selector selector;
- } aes_gcm;
- struct {
- uint8_t hmac_key[64];
- uint8_t hmac_iv[64];
- struct otx2_ipsec_po_traffic_selector selector;
- } sha2;
- };
- union {
- struct otx2_ipsec_replay *replay;
- uint64_t replay64;
- };
- uint32_t replay_win_sz;
-};
-
-struct otx2_ipsec_po_ip_template {
- RTE_STD_C11
- union {
- struct {
- struct rte_ipv4_hdr ipv4_hdr;
- uint16_t udp_src;
- uint16_t udp_dst;
- } ip4;
- struct {
- struct rte_ipv6_hdr ipv6_hdr;
- uint16_t udp_src;
- uint16_t udp_dst;
- } ip6;
- };
-};
-
-struct otx2_ipsec_po_out_sa {
- /* w0 */
- struct otx2_ipsec_po_sa_ctl ctl;
-
- /* w1-w4 */
- uint8_t cipher_key[32];
-
- /* w5-w6 */
- union otx2_ipsec_po_bit_perfect_iv iv;
-
- /* w7 */
- uint32_t esn_hi;
- uint32_t esn_low;
-
- /* w8-w55 */
- union {
- struct {
- struct otx2_ipsec_po_ip_template template;
- } aes_gcm;
- struct {
- uint8_t hmac_key[24];
- uint8_t unused[24];
- struct otx2_ipsec_po_ip_template template;
- } sha1;
- struct {
- uint8_t hmac_key[64];
- uint8_t hmac_iv[64];
- struct otx2_ipsec_po_ip_template template;
- } sha2;
- };
-};
-
-static inline int
-ipsec_po_xform_cipher_verify(struct rte_crypto_sym_xform *xform)
-{
- if (xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- switch (xform->cipher.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -ENOTSUP;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_auth_verify(struct rte_crypto_sym_xform *xform)
-{
- uint16_t keylen = xform->auth.key.length;
-
- if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC) {
- if (keylen >= 20 && keylen <= 64)
- return 0;
- } else if (xform->auth.algo == RTE_CRYPTO_AUTH_SHA256_HMAC) {
- if (keylen >= 32 && keylen <= 64)
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_aead_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_ENCRYPT)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS &&
- xform->aead.op != RTE_CRYPTO_AEAD_OP_DECRYPT)
- return -EINVAL;
-
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- switch (xform->aead.key.length) {
- case 16:
- case 24:
- case 32:
- break;
- default:
- return -EINVAL;
- }
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline int
-ipsec_po_xform_verify(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- int ret;
-
- if (ipsec->life.bytes_hard_limit != 0 ||
- ipsec->life.bytes_soft_limit != 0 ||
- ipsec->life.packets_hard_limit != 0 ||
- ipsec->life.packets_soft_limit != 0)
- return -ENOTSUP;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- return ipsec_po_xform_aead_verify(ipsec, xform);
-
- if (xform->next == NULL)
- return -EINVAL;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- /* Ingress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AUTH ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_CIPHER)
- return -EINVAL;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- /* Egress */
- if (xform->type != RTE_CRYPTO_SYM_XFORM_CIPHER ||
- xform->next->type != RTE_CRYPTO_SYM_XFORM_AUTH)
- return -EINVAL;
- cipher_xform = xform;
- auth_xform = xform->next;
- }
-
- ret = ipsec_po_xform_cipher_verify(cipher_xform);
- if (ret)
- return ret;
-
- ret = ipsec_po_xform_auth_verify(auth_xform);
- if (ret)
- return ret;
-
- return 0;
-}
-
-static inline int
-ipsec_po_sa_ctl_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_ipsec_po_sa_ctl *ctl)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
- int aes_key_len;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_OUTBOUND;
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- ctl->direction = OTX2_IPSEC_PO_SA_DIRECTION_INBOUND;
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4)
- ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_4;
- else if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV6)
- ctl->outer_ip_ver = OTX2_IPSEC_PO_SA_IP_VERSION_6;
- else
- return -EINVAL;
- }
-
- ctl->inner_ip_ver = ctl->outer_ip_ver;
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT)
- ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TRANSPORT;
- else if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL)
- ctl->ipsec_mode = OTX2_IPSEC_PO_SA_MODE_TUNNEL;
- else
- return -EINVAL;
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH)
- ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_AH;
- else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP)
- ctl->ipsec_proto = OTX2_IPSEC_PO_SA_PROTOCOL_ESP;
- else
- return -EINVAL;
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_GCM;
- aes_key_len = xform->aead.key.length;
- } else {
- return -ENOTSUP;
- }
- } else if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- ctl->enc_type = OTX2_IPSEC_PO_SA_ENC_AES_CBC;
- aes_key_len = cipher_xform->cipher.key.length;
- } else {
- return -ENOTSUP;
- }
-
- switch (aes_key_len) {
- case 16:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_128;
- break;
- case 24:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_192;
- break;
- case 32:
- ctl->aes_key_len = OTX2_IPSEC_PO_SA_AES_KEY_LEN_256;
- break;
- default:
- return -EINVAL;
- }
-
- if (xform->type != RTE_CRYPTO_SYM_XFORM_AEAD) {
- switch (auth_xform->auth.algo) {
- case RTE_CRYPTO_AUTH_NULL:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_NULL;
- break;
- case RTE_CRYPTO_AUTH_MD5_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_MD5;
- break;
- case RTE_CRYPTO_AUTH_SHA1_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA1;
- break;
- case RTE_CRYPTO_AUTH_SHA224_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_224;
- break;
- case RTE_CRYPTO_AUTH_SHA256_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_256;
- break;
- case RTE_CRYPTO_AUTH_SHA384_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_384;
- break;
- case RTE_CRYPTO_AUTH_SHA512_HMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_SHA2_512;
- break;
- case RTE_CRYPTO_AUTH_AES_GMAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_GMAC;
- break;
- case RTE_CRYPTO_AUTH_AES_XCBC_MAC:
- ctl->auth_type = OTX2_IPSEC_PO_SA_AUTH_AES_XCBC_128;
- break;
- default:
- return -ENOTSUP;
- }
- }
-
- if (ipsec->options.esn)
- ctl->esn_en = 1;
-
- if (ipsec->options.udp_encap == 1)
- ctl->encap_type = OTX2_IPSEC_PO_SA_ENCAP_UDP;
-
- ctl->spi = rte_cpu_to_be_32(ipsec->spi);
- ctl->valid = 1;
-
- return 0;
-}
-
-#endif /* __OTX2_IPSEC_PO_H__ */
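
The verify helpers in this header accept either a single AEAD transform or a
two-element chain: AUTH then CIPHER for ingress, CIPHER then AUTH for egress.
A minimal sketch of an ingress chain that would pass ipsec_po_xform_verify,
built with the public rte_crypto types (key buffers assumed, fields the
helpers do not inspect omitted):

	#include <rte_crypto_sym.h>

	static void
	build_inb_sha1_aes_chain(struct rte_crypto_sym_xform *auth,
				 struct rte_crypto_sym_xform *cipher,
				 uint8_t *auth_key, uint8_t *cipher_key)
	{
		/* Ingress order: AUTH first, CIPHER chained after it */
		auth->type = RTE_CRYPTO_SYM_XFORM_AUTH;
		auth->auth.algo = RTE_CRYPTO_AUTH_SHA1_HMAC;
		auth->auth.op = RTE_CRYPTO_AUTH_OP_VERIFY;
		auth->auth.key.data = auth_key;
		auth->auth.key.length = 20;	/* 20..64 bytes accepted */
		auth->next = cipher;

		cipher->type = RTE_CRYPTO_SYM_XFORM_CIPHER;
		cipher->cipher.algo = RTE_CRYPTO_CIPHER_AES_CBC;
		cipher->cipher.op = RTE_CRYPTO_CIPHER_OP_DECRYPT;
		cipher->cipher.key.data = cipher_key;
		cipher->cipher.key.length = 16;	/* 16/24/32 accepted */
		cipher->next = NULL;
	}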
diff --git a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h b/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
deleted file mode 100644
index c3abf02187..0000000000
--- a/drivers/crypto/octeontx2/otx2_ipsec_po_ops.h
+++ /dev/null
@@ -1,167 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_IPSEC_PO_OPS_H__
-#define __OTX2_IPSEC_PO_OPS_H__
-
-#include <rte_crypto_sym.h>
-#include <rte_security.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_security.h"
-
-static __rte_always_inline int32_t
-otx2_ipsec_po_out_rlen_get(struct otx2_sec_session_ipsec_lp *sess,
- uint32_t plen)
-{
- uint32_t enc_payload_len;
-
- enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len,
- sess->roundup_byte);
-
- return sess->partial_len + enc_payload_len;
-}
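
The computation above is: padded payload plus fixed per-SA overhead, where
roundup_byte is the cipher block granularity, roundup_len covers trailer bytes
padded together with the payload, and partial_len is the constant header and
ICV overhead. With assumed AES-CBC parameters (roundup_byte = 16,
roundup_len = 2, partial_len = 40), a 100-byte payload gives
RTE_ALIGN_CEIL(100 + 2, 16) = 112 encrypted bytes and 152 bytes in total;
the exact constants per algorithm are set elsewhere at session creation.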
-
-static __rte_always_inline struct cpt_request_info *
-alloc_request_struct(char *maddr, void *cop, int mdata_len)
-{
- struct cpt_request_info *req;
- struct cpt_meta_info *meta;
- uint8_t *resp_addr;
- uintptr_t *op;
-
- meta = (void *)RTE_PTR_ALIGN((uint8_t *)maddr, 16);
-
- op = (uintptr_t *)meta->deq_op_info;
- req = &meta->cpt_req;
- resp_addr = (uint8_t *)&meta->cpt_res;
-
- req->completion_addr = (uint64_t *)((uint8_t *)resp_addr);
- *req->completion_addr = COMPLETION_CODE_INIT;
- req->comp_baddr = rte_mem_virt2iova(resp_addr);
- req->op = op;
-
- op[0] = (uintptr_t)((uint64_t)meta | 1ull);
- op[1] = (uintptr_t)cop;
- op[2] = (uintptr_t)req;
- op[3] = mdata_len;
-
- return req;
-}
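
alloc_request_struct carves the request, completion word and dequeue
bookkeeping out of a single metadata buffer, then records context in the op[]
array; op[0] stores the metadata pointer with its low bit set as a tag, which
is safe to recover later because the buffer is 16-byte aligned. A minimal
sketch of that pointer-tagging idiom (illustrative names; what the flag means
is up to the dequeue-side consumer):

	#include <assert.h>
	#include <stdint.h>

	static inline uintptr_t
	tag_ptr(void *p)
	{
		assert(((uintptr_t)p & 1) == 0); /* needs 2-byte alignment */
		return (uintptr_t)p | 1;	 /* low bit used as a flag */
	}

	static inline void *
	untag_ptr(uintptr_t v)
	{
		return (void *)(v & ~(uintptr_t)1);
	}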
-
-static __rte_always_inline int
-process_outb_sa(struct rte_crypto_op *cop,
- struct otx2_sec_session_ipsec_lp *sess,
- struct cpt_qp_meta_info *m_info, void **prep_req)
-{
- uint32_t dlen, rlen, extend_head, extend_tail;
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- struct cpt_request_info *req = NULL;
- struct otx2_ipsec_po_out_hdr *hdr;
- struct otx2_ipsec_po_out_sa *sa;
- int hdr_len, mdata_len, ret = 0;
- vq_cmd_word0_t word0;
- char *mdata, *data;
-
- sa = &sess->out_sa;
- hdr_len = sizeof(*hdr);
-
- dlen = rte_pktmbuf_pkt_len(m_src) + hdr_len;
- rlen = otx2_ipsec_po_out_rlen_get(sess, dlen - hdr_len);
-
- extend_head = hdr_len + RTE_ETHER_HDR_LEN;
- extend_tail = rlen - dlen;
- mdata_len = m_info->lb_mlen + 8;
-
- mdata = rte_pktmbuf_append(m_src, extend_tail + mdata_len);
- if (unlikely(mdata == NULL)) {
-		otx2_err("Not enough tail room");
- ret = -ENOMEM;
- goto exit;
- }
-
- mdata += extend_tail; /* mdata follows encrypted data */
- req = alloc_request_struct(mdata, (void *)cop, mdata_len);
-
- data = rte_pktmbuf_prepend(m_src, extend_head);
- if (unlikely(data == NULL)) {
-		otx2_err("Not enough head room");
- ret = -ENOMEM;
- goto exit;
- }
-
-	/*
-	 * Move the Ethernet header to make room for the
-	 * otx2_ipsec_po_out_hdr before the IP header
-	 */
- memcpy(data, data + hdr_len, RTE_ETHER_HDR_LEN);
-
- hdr = (struct otx2_ipsec_po_out_hdr *)rte_pktmbuf_adj(m_src,
- RTE_ETHER_HDR_LEN);
-
- memcpy(&hdr->iv[0], rte_crypto_op_ctod_offset(cop, uint8_t *,
- sess->iv_offset), sess->iv_length);
-
- /* Prepare CPT instruction */
- word0.u64 = sess->ucmd_w0;
- word0.s.dlen = dlen;
-
- req->ist.ei0 = word0.u64;
- req->ist.ei1 = rte_pktmbuf_iova(m_src);
- req->ist.ei2 = req->ist.ei1;
-
- sa->esn_hi = sess->seq_hi;
-
- hdr->seq = rte_cpu_to_be_32(sess->seq_lo);
- hdr->ip_id = rte_cpu_to_be_32(sess->ip_id);
-
- sess->ip_id++;
- sess->esn++;
-
-exit:
- *prep_req = req;
-
- return ret;
-}
-
-static __rte_always_inline int
-process_inb_sa(struct rte_crypto_op *cop,
- struct otx2_sec_session_ipsec_lp *sess,
- struct cpt_qp_meta_info *m_info, void **prep_req)
-{
- struct rte_crypto_sym_op *sym_op = cop->sym;
- struct rte_mbuf *m_src = sym_op->m_src;
- struct cpt_request_info *req = NULL;
- int mdata_len, ret = 0;
- vq_cmd_word0_t word0;
- uint32_t dlen;
- char *mdata;
-
- dlen = rte_pktmbuf_pkt_len(m_src);
- mdata_len = m_info->lb_mlen + 8;
-
- mdata = rte_pktmbuf_append(m_src, mdata_len);
- if (unlikely(mdata == NULL)) {
-		otx2_err("Not enough tail room");
- ret = -ENOMEM;
- goto exit;
- }
-
- req = alloc_request_struct(mdata, (void *)cop, mdata_len);
-
- /* Prepare CPT instruction */
- word0.u64 = sess->ucmd_w0;
- word0.s.dlen = dlen;
-
- req->ist.ei0 = word0.u64;
- req->ist.ei1 = rte_pktmbuf_iova(m_src);
- req->ist.ei2 = req->ist.ei1;
-
-exit:
- *prep_req = req;
- return ret;
-}
-#endif /* __OTX2_IPSEC_PO_OPS_H__ */
diff --git a/drivers/crypto/octeontx2/otx2_security.h b/drivers/crypto/octeontx2/otx2_security.h
deleted file mode 100644
index 29c8fc351b..0000000000
--- a/drivers/crypto/octeontx2/otx2_security.h
+++ /dev/null
@@ -1,37 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_SECURITY_H__
-#define __OTX2_SECURITY_H__
-
-#include <rte_security.h>
-
-#include "otx2_cryptodev_sec.h"
-#include "otx2_ethdev_sec.h"
-
-#define OTX2_SEC_AH_HDR_LEN 12
-#define OTX2_SEC_AES_GCM_IV_LEN 8
-#define OTX2_SEC_AES_GCM_MAC_LEN 16
-#define OTX2_SEC_AES_CBC_IV_LEN 16
-#define OTX2_SEC_SHA1_HMAC_LEN 12
-#define OTX2_SEC_SHA2_HMAC_LEN 16
-
-#define OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN 4
-#define OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN 16
-
-struct otx2_sec_session_ipsec {
- union {
- struct otx2_sec_session_ipsec_ip ip;
- struct otx2_sec_session_ipsec_lp lp;
- };
- enum rte_security_ipsec_sa_direction dir;
-};
-
-struct otx2_sec_session {
- struct otx2_sec_session_ipsec ipsec;
- void *userdata;
- /**< Userdata registered by the application */
-} __rte_cache_aligned;
-
-#endif /* __OTX2_SECURITY_H__ */
diff --git a/drivers/crypto/octeontx2/version.map b/drivers/crypto/octeontx2/version.map
deleted file mode 100644
index d36663132a..0000000000
--- a/drivers/crypto/octeontx2/version.map
+++ /dev/null
@@ -1,13 +0,0 @@
-DPDK_22 {
- local: *;
-};
-
-INTERNAL {
- global:
-
- otx2_cryptodev_driver_id;
- otx2_cpt_af_reg_read;
- otx2_cpt_af_reg_write;
-
- local: *;
-};
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index b68ce6c0a4..8db9775d7b 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -1127,6 +1127,16 @@ cn9k_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id cn9k_pci_sso_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_SSO_TIM_VF),
{
.vendor_id = 0,
},
diff --git a/drivers/event/meson.build b/drivers/event/meson.build
index 63d6b410b2..d6706b57f7 100644
--- a/drivers/event/meson.build
+++ b/drivers/event/meson.build
@@ -11,7 +11,6 @@ drivers = [
'dpaa',
'dpaa2',
'dsw',
- 'octeontx2',
'opdl',
'skeleton',
'sw',
diff --git a/drivers/event/octeontx2/meson.build b/drivers/event/octeontx2/meson.build
deleted file mode 100644
index ce360af5f8..0000000000
--- a/drivers/event/octeontx2/meson.build
+++ /dev/null
@@ -1,26 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_worker.c',
- 'otx2_worker_dual.c',
- 'otx2_evdev.c',
- 'otx2_evdev_adptr.c',
- 'otx2_evdev_crypto_adptr.c',
- 'otx2_evdev_irq.c',
- 'otx2_evdev_selftest.c',
- 'otx2_tim_evdev.c',
- 'otx2_tim_worker.c',
-)
-
-deps += ['bus_pci', 'common_octeontx2', 'crypto_octeontx2', 'mempool_octeontx2', 'net_octeontx2']
-
-includes += include_directories('../../crypto/octeontx2')
-includes += include_directories('../../common/cpt')
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
deleted file mode 100644
index ccf28b678b..0000000000
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ /dev/null
@@ -1,1900 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <eventdev_pmd_pci.h>
-#include <rte_kvargs.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_pci.h>
-
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_tx.h"
-#include "otx2_evdev_stats.h"
-#include "otx2_irq.h"
-#include "otx2_tim_evdev.h"
-
-static inline int
-sso_get_msix_offsets(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int i, rc;
-
- /* Get SSO and SSOW MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- for (i = 0; i < nb_ports; i++)
- dev->ssow_msixoff[i] = msix_rsp->ssow_msixoff[i];
-
- for (i = 0; i < dev->nb_event_queues; i++)
- dev->sso_msixoff[i] = msix_rsp->sso_msixoff[i];
-
- return rc;
-}
-
-void
-sso_fastpath_fns_set(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- /* Single WS modes */
- const event_dequeue_t ssogws_deq[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t ssogws_deq_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_seg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_seg_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_deq_seg_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_deq_seg_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_deq_seg_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
-
- /* Dual WS modes */
- const event_dequeue_t ssogws_dual_deq[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_dual_deq_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t ssogws_dual_deq_seg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_seg_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_t
- ssogws_dual_deq_seg_timeout[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_timeout_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- const event_dequeue_burst_t
- ssogws_dual_deq_seg_timeout_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_deq_seg_timeout_burst_ ##name,
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
- };
-
- /* Tx modes */
- const event_tx_adapter_enqueue_t
- ssogws_tx_adptr_enq[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_tx_adptr_enq_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_tx_adptr_enq_seg_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_tx_adptr_enq_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- const event_tx_adapter_enqueue_t
- ssogws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = \
- otx2_ssogws_dual_tx_adptr_enq_seg_ ## name,
- SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
- };
-
- event_dev->enqueue = otx2_ssogws_enq;
- event_dev->enqueue_burst = otx2_ssogws_enq_burst;
- event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst;
- event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst;
- if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
- event_dev->dequeue = ssogws_deq_seg
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_deq_seg_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue = ssogws_deq_seg_timeout
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_deq_seg_timeout_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- }
- } else {
- event_dev->dequeue = ssogws_deq
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_deq_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue = ssogws_deq_timeout
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_deq_timeout_burst
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- }
- }
-
- if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
-		/* [SEC] [TSO] [TSTAMP] [MBUF_NOFF] [VLAN_QINQ] [OL3_OL4_CSUM] [L3_L4_CSUM] */
- event_dev->txa_enqueue = ssogws_tx_adptr_enq_seg
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- } else {
- event_dev->txa_enqueue = ssogws_tx_adptr_enq
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- }
- event_dev->ca_enqueue = otx2_ssogws_ca_enq;
-
- if (dev->dual_ws) {
- event_dev->enqueue = otx2_ssogws_dual_enq;
- event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst;
- event_dev->enqueue_new_burst =
- otx2_ssogws_dual_enq_new_burst;
- event_dev->enqueue_forward_burst =
- otx2_ssogws_dual_enq_fwd_burst;
-
- if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
- event_dev->dequeue = ssogws_dual_deq_seg
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_dual_deq_seg_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue =
- ssogws_dual_deq_seg_timeout
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_dual_deq_seg_timeout_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- }
- } else {
- event_dev->dequeue = ssogws_dual_deq
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst = ssogws_dual_deq_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)];
- if (dev->is_timeout_deq) {
- event_dev->dequeue =
- ssogws_dual_deq_timeout
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- event_dev->dequeue_burst =
- ssogws_dual_deq_timeout_burst
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offloads &
- NIX_RX_OFFLOAD_RSS_F)];
- }
- }
-
- if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) {
-			/* [SEC] [TSO] [TSTAMP] [MBUF_NOFF] [VLAN_QINQ] [OL3_OL4_CSUM] [L3_L4_CSUM] */
- event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq_seg
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- } else {
- event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offloads &
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
- }
- event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq;
- }
-
- event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
- rte_mb();
-}
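
sso_fastpath_fns_set selects one specialized handler out of a 2^7-entry table
per mode, with each Rx/Tx offload flag normalized to 0 or 1 by !! and used as
an array index. The cost is one compiled variant per flag combination; the
gain is a fast path with no per-packet flag branches. A reduced sketch of the
pattern with two flags and hypothetical handler names:

	#include <stdint.h>

	#define FLAG_A (1 << 0)
	#define FLAG_B (1 << 1)

	typedef int (*handler_t)(void *pkt);

	static int h_plain(void *pkt) { (void)pkt; return 0; }
	static int h_a(void *pkt)     { (void)pkt; return 1; }
	static int h_b(void *pkt)     { (void)pkt; return 2; }
	static int h_ab(void *pkt)    { (void)pkt; return 3; }

	static handler_t
	pick_handler(uint64_t flags)
	{
		/* Indexed as [B][A]; each index is a flag squashed by !! */
		static const handler_t tbl[2][2] = {
			{ h_plain, h_a },
			{ h_b, h_ab },
		};

		return tbl[!!(flags & FLAG_B)][!!(flags & FLAG_A)];
	}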
-
-static void
-otx2_sso_info_get(struct rte_eventdev *event_dev,
- struct rte_event_dev_info *dev_info)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
- dev_info->driver_name = RTE_STR(EVENTDEV_NAME_OCTEONTX2_PMD);
- dev_info->min_dequeue_timeout_ns = dev->min_dequeue_timeout_ns;
- dev_info->max_dequeue_timeout_ns = dev->max_dequeue_timeout_ns;
- dev_info->max_event_queues = dev->max_event_queues;
- dev_info->max_event_queue_flows = (1ULL << 20);
- dev_info->max_event_queue_priority_levels = 8;
- dev_info->max_event_priority_levels = 1;
- dev_info->max_event_ports = dev->max_event_ports;
- dev_info->max_event_port_dequeue_depth = 1;
- dev_info->max_event_port_enqueue_depth = 1;
- dev_info->max_num_events = dev->max_num_events;
- dev_info->event_dev_cap = RTE_EVENT_DEV_CAP_QUEUE_QOS |
- RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
- RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES |
- RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK |
- RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT |
- RTE_EVENT_DEV_CAP_NONSEQ_MODE |
- RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
- RTE_EVENT_DEV_CAP_MAINTENANCE_FREE;
-}
-
-static void
-sso_port_link_modify(struct otx2_ssogws *ws, uint8_t queue, uint8_t enable)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
- uint64_t val;
-
- val = queue;
- val |= 0ULL << 12; /* SET 0 */
-	val |= 0x8000800080000000; /* Don't modify the rest of the masks */
- val |= (uint64_t)enable << 14; /* Enable/Disable Membership. */
-
- otx2_write64(val, base + SSOW_LF_GWS_GRPMSK_CHG);
-}
-
-static int
-otx2_sso_port_link(struct rte_eventdev *event_dev, void *port,
- const uint8_t queues[], const uint8_t priorities[],
- uint16_t nb_links)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t port_id = 0;
- uint16_t link;
-
- RTE_SET_USED(priorities);
- for (link = 0; link < nb_links; link++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], queues[link], true);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], queues[link], true);
- } else {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[link], true);
- }
- }
- sso_func_trace("Port=%d nb_links=%d", port_id, nb_links);
-
- return (int)nb_links;
-}
-
-static int
-otx2_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
- uint8_t queues[], uint16_t nb_unlinks)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t port_id = 0;
- uint16_t unlink;
-
- for (unlink = 0; unlink < nb_unlinks; unlink++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], queues[unlink],
- false);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], queues[unlink],
- false);
- } else {
- struct otx2_ssogws *ws = port;
-
- port_id = ws->port;
- sso_port_link_modify(ws, queues[unlink], false);
- }
- }
- sso_func_trace("Port=%d nb_unlinks=%d", port_id, nb_unlinks);
-
- return (int)nb_unlinks;
-}
-
-static int
-sso_hw_lf_cfg(struct otx2_mbox *mbox, enum otx2_sso_lf_type type,
- uint16_t nb_lf, uint8_t attach)
-{
- if (attach) {
- struct rsrc_attach_req *req;
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- switch (type) {
- case SSO_LF_GGRP:
- req->sso = nb_lf;
- break;
- case SSO_LF_GWS:
- req->ssow = nb_lf;
- break;
- default:
- return -EINVAL;
- }
- req->modify = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- } else {
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- switch (type) {
- case SSO_LF_GGRP:
- req->sso = true;
- break;
- case SSO_LF_GWS:
- req->ssow = true;
- break;
- default:
- return -EINVAL;
- }
- req->partial = true;
- if (otx2_mbox_process(mbox) < 0)
- return -EIO;
- }
-
- return 0;
-}
-
-static int
-sso_lf_cfg(struct otx2_sso_evdev *dev, struct otx2_mbox *mbox,
- enum otx2_sso_lf_type type, uint16_t nb_lf, uint8_t alloc)
-{
- void *rsp;
- int rc;
-
- if (alloc) {
- switch (type) {
- case SSO_LF_GGRP:
- {
- struct sso_lf_alloc_req *req_ggrp;
- req_ggrp = otx2_mbox_alloc_msg_sso_lf_alloc(mbox);
- req_ggrp->hwgrps = nb_lf;
- }
- break;
- case SSO_LF_GWS:
- {
- struct ssow_lf_alloc_req *req_hws;
- req_hws = otx2_mbox_alloc_msg_ssow_lf_alloc(mbox);
- req_hws->hws = nb_lf;
- }
- break;
- default:
- return -EINVAL;
- }
- } else {
- switch (type) {
- case SSO_LF_GGRP:
- {
- struct sso_lf_free_req *req_ggrp;
- req_ggrp = otx2_mbox_alloc_msg_sso_lf_free(mbox);
- req_ggrp->hwgrps = nb_lf;
- }
- break;
- case SSO_LF_GWS:
- {
- struct ssow_lf_free_req *req_hws;
- req_hws = otx2_mbox_alloc_msg_ssow_lf_free(mbox);
- req_hws->hws = nb_lf;
- }
- break;
- default:
- return -EINVAL;
- }
- }
-
- rc = otx2_mbox_process_msg_tmo(mbox, (void **)&rsp, ~0);
- if (rc < 0)
- return rc;
-
- if (alloc && type == SSO_LF_GGRP) {
- struct sso_lf_alloc_rsp *rsp_ggrp = rsp;
-
- dev->xaq_buf_size = rsp_ggrp->xaq_buf_size;
- dev->xae_waes = rsp_ggrp->xaq_wq_entries;
- dev->iue = rsp_ggrp->in_unit_entries;
- }
-
- return 0;
-}
-
-static void
-otx2_sso_port_release(void *port)
-{
- struct otx2_ssogws_cookie *gws_cookie = ssogws_get_cookie(port);
- struct otx2_sso_evdev *dev;
- int i;
-
- if (!gws_cookie->configured)
- goto free;
-
- dev = sso_pmd_priv(gws_cookie->event_dev);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = port;
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], i, false);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], i, false);
- }
- memset(ws, 0, sizeof(*ws));
- } else {
- struct otx2_ssogws *ws = port;
-
- for (i = 0; i < dev->nb_event_queues; i++)
- sso_port_link_modify(ws, i, false);
- memset(ws, 0, sizeof(*ws));
- }
-
- memset(gws_cookie, 0, sizeof(*gws_cookie));
-
-free:
- rte_free(gws_cookie);
-}
-
-static void
-otx2_sso_queue_release(struct rte_eventdev *event_dev, uint8_t queue_id)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(queue_id);
-}
-
-static void
-sso_restore_links(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t *links_map;
- int i, j;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- links_map = event_dev->data->links_map;
- /* Point links_map to this port specific area */
- links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws;
-
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[0], j, true);
- sso_port_link_modify((struct otx2_ssogws *)
- &ws->ws_state[1], j, true);
- sso_func_trace("Restoring port %d queue %d "
- "link", i, j);
- }
- } else {
- struct otx2_ssogws *ws;
-
- ws = event_dev->data->ports[i];
- for (j = 0; j < dev->nb_event_queues; j++) {
- if (links_map[j] == 0xdead)
- continue;
- sso_port_link_modify(ws, j, true);
- sso_func_trace("Restoring port %d queue %d "
- "link", i, j);
- }
- }
- }
-}
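
The 0xdead value checked above appears to match the eventdev layer's internal "unlinked" sentinel for links_map slots, so only queue links the application actually established get re-applied. A minimal sketch of the application-side call that populates this map, assuming a configured device dev_id and port 0:

	uint8_t queue = 0;
	uint8_t prio = RTE_EVENT_DEV_PRIORITY_NORMAL;

	/* Link queue 0 to port 0; on reconfigure the PMD re-applies
	 * this mapping in sso_restore_links().
	 */
	if (rte_event_port_link(dev_id, 0, &queue, &prio, 1) != 1)
		rte_panic("Failed to link queue 0\n");
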
-
-static void
-sso_set_port_ops(struct otx2_ssogws *ws, uintptr_t base)
-{
- ws->tag_op = base + SSOW_LF_GWS_TAG;
- ws->wqp_op = base + SSOW_LF_GWS_WQP;
- ws->getwrk_op = base + SSOW_LF_GWS_OP_GET_WORK;
- ws->swtag_flush_op = base + SSOW_LF_GWS_OP_SWTAG_FLUSH;
- ws->swtag_norm_op = base + SSOW_LF_GWS_OP_SWTAG_NORM;
- ws->swtag_desched_op = base + SSOW_LF_GWS_OP_SWTAG_DESCHED;
-}
-
-static int
-sso_configure_dual_ports(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t vws = 0;
- uint8_t nb_lf;
- int i, rc;
-
- otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
- nb_lf = dev->nb_event_ports * 2;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GWS LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- otx2_err("Failed to init SSO GWS LF");
- return -ENODEV;
- }
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- struct otx2_ssogws_cookie *gws_cookie;
- struct otx2_ssogws_dual *ws;
- uintptr_t base;
-
- if (event_dev->data->ports[i] != NULL) {
- ws = event_dev->data->ports[i];
- } else {
- /* Allocate event port memory */
- ws = rte_zmalloc_socket("otx2_sso_ws",
- sizeof(struct otx2_ssogws_dual) +
- RTE_CACHE_LINE_SIZE,
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL) {
- otx2_err("Failed to alloc memory for port=%d",
- i);
- rc = -ENOMEM;
- break;
- }
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws_dual *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
- }
-
- ws->port = i;
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
- sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[0], base);
- ws->base[0] = base;
- vws++;
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | vws << 12);
- sso_set_port_ops((struct otx2_ssogws *)&ws->ws_state[1], base);
- ws->base[1] = base;
- vws++;
-
- gws_cookie = ssogws_get_cookie(ws);
- gws_cookie->event_dev = event_dev;
- gws_cookie->configured = 1;
-
- event_dev->data->ports[i] = ws;
- }
-
- if (rc < 0) {
- sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- }
-
- return rc;
-}
-
-static int
-sso_configure_ports(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t nb_lf;
- int i, rc;
-
- otx2_sso_dbg("Configuring event ports %d", dev->nb_event_ports);
-
- nb_lf = dev->nb_event_ports;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GWS LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- otx2_err("Failed to init SSO GWS LF");
- return -ENODEV;
- }
-
- for (i = 0; i < nb_lf; i++) {
- struct otx2_ssogws_cookie *gws_cookie;
- struct otx2_ssogws *ws;
- uintptr_t base;
-
- if (event_dev->data->ports[i] != NULL) {
- ws = event_dev->data->ports[i];
- } else {
- /* Allocate event port memory */
- ws = rte_zmalloc_socket("otx2_sso_ws",
- sizeof(struct otx2_ssogws) +
- RTE_CACHE_LINE_SIZE,
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL) {
- otx2_err("Failed to alloc memory for port=%d",
- i);
- rc = -ENOMEM;
- break;
- }
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
- }
-
- ws->port = i;
- base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | i << 12);
- sso_set_port_ops(ws, base);
- ws->base = base;
-
- gws_cookie = ssogws_get_cookie(ws);
- gws_cookie->event_dev = event_dev;
- gws_cookie->configured = 1;
-
- event_dev->data->ports[i] = ws;
- }
-
- if (rc < 0) {
- sso_lf_cfg(dev, mbox, SSO_LF_GWS, nb_lf, false);
- sso_hw_lf_cfg(mbox, SSO_LF_GWS, nb_lf, false);
- }
-
- return rc;
-}
-
-static int
-sso_configure_queues(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- uint8_t nb_lf;
- int rc;
-
- otx2_sso_dbg("Configuring event queues %d", dev->nb_event_queues);
-
- nb_lf = dev->nb_event_queues;
- /* Ask AF to attach required LFs. */
- rc = sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, true);
- if (rc < 0) {
- otx2_err("Failed to attach SSO GGRP LF");
- return -ENODEV;
- }
-
- if (sso_lf_cfg(dev, mbox, SSO_LF_GGRP, nb_lf, true) < 0) {
- sso_hw_lf_cfg(mbox, SSO_LF_GGRP, nb_lf, false);
- otx2_err("Failed to init SSO GGRP LF");
- return -ENODEV;
- }
-
- return rc;
-}
-
-static int
-sso_xaq_allocate(struct otx2_sso_evdev *dev)
-{
- const struct rte_memzone *mz;
- struct npa_aura_s *aura;
- static int reconfig_cnt;
- char pool_name[RTE_MEMZONE_NAMESIZE];
- uint32_t xaq_cnt;
- int rc;
-
- if (dev->xaq_pool)
- rte_mempool_free(dev->xaq_pool);
-
- /*
- * Allocate memory for Add work backpressure.
- */
- mz = rte_memzone_lookup(OTX2_SSO_FC_NAME);
- if (mz == NULL)
- mz = rte_memzone_reserve_aligned(OTX2_SSO_FC_NAME,
- OTX2_ALIGN +
- sizeof(struct npa_aura_s),
- rte_socket_id(),
- RTE_MEMZONE_IOVA_CONTIG,
- OTX2_ALIGN);
- if (mz == NULL) {
- otx2_err("Failed to allocate mem for fcmem");
- return -ENOMEM;
- }
-
- dev->fc_iova = mz->iova;
- dev->fc_mem = mz->addr;
- *dev->fc_mem = 0;
- aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem + OTX2_ALIGN);
- memset(aura, 0, sizeof(struct npa_aura_s));
-
- aura->fc_ena = 1;
- aura->fc_addr = dev->fc_iova;
- aura->fc_hyst_bits = 0; /* Store count on all updates */
-
- /* Taken from HRM 14.3.3(4) */
- xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
- if (dev->xae_cnt)
- xaq_cnt += dev->xae_cnt / dev->xae_waes;
- else if (dev->adptr_xae_cnt)
- xaq_cnt += (dev->adptr_xae_cnt / dev->xae_waes) +
- (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
- else
- xaq_cnt += (dev->iue / dev->xae_waes) +
- (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
-
- otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
-	/* Set up XAQ sizing based on the number of queues. */
-	snprintf(pool_name, sizeof(pool_name), "otx2_xaq_buf_pool_%d",
-		 reconfig_cnt);
- dev->xaq_pool = (void *)rte_mempool_create_empty(pool_name,
- xaq_cnt, dev->xaq_buf_size, 0, 0,
- rte_socket_id(), 0);
-
- if (dev->xaq_pool == NULL) {
- otx2_err("Unable to create empty mempool.");
- rte_memzone_free(mz);
- return -ENOMEM;
- }
-
- rc = rte_mempool_set_ops_byname(dev->xaq_pool,
- rte_mbuf_platform_mempool_ops(), aura);
- if (rc != 0) {
- otx2_err("Unable to set xaqpool ops.");
- goto alloc_fail;
- }
-
- rc = rte_mempool_populate_default(dev->xaq_pool);
- if (rc < 0) {
- otx2_err("Unable to set populate xaqpool.");
- goto alloc_fail;
- }
- reconfig_cnt++;
-	/* When SW does addwork (enqueue), check whether there is space in the
-	 * XAQ by comparing the fc_addr count above against the xaq_lmt
-	 * calculated below. A minimum headroom (OTX2_SSO_XAQ_SLACK / 2) is
-	 * kept so that SSO can request and cache XAQs even before enqueue is
-	 * called.
-	 */
- dev->xaq_lmt = xaq_cnt - (OTX2_SSO_XAQ_SLACK / 2 *
- dev->nb_event_queues);
- dev->nb_xaq_cfg = xaq_cnt;
-
- return 0;
-alloc_fail:
- rte_mempool_free(dev->xaq_pool);
- rte_memzone_free(mz);
- return rc;
-}
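
The fc_mem/xaq_lmt pair computed above is what add-work producers consult. A minimal sketch of the guard described in the comment, assuming ws is an otx2_ssogws whose fc_mem and xaq_lmt were copied from the device in otx2_sso_port_setup():

	/* Reject new events once the in-flight XAQ count (maintained by
	 * NPA at fc_mem via the aura's fc_addr) reaches the limit.
	 */
	if (ws->xaq_lmt <= *ws->fc_mem)
		return 0;	/* No XAQ space; caller should retry */
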
-
-static int
-sso_ggrp_alloc_xaq(struct otx2_sso_evdev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_hw_setconfig *req;
-
- otx2_sso_dbg("Configuring XAQ for GGRPs");
- req = otx2_mbox_alloc_msg_sso_hw_setconfig(mbox);
- req->npa_pf_func = otx2_npa_pf_func_get();
- req->npa_aura_id = npa_lf_aura_handle_to_aura(dev->xaq_pool->pool_id);
- req->hwgrps = dev->nb_event_queues;
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-sso_ggrp_free_xaq(struct otx2_sso_evdev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_release_xaq *req;
-
- otx2_sso_dbg("Freeing XAQ for GGRPs");
- req = otx2_mbox_alloc_msg_sso_hw_release_xaq_aura(mbox);
- req->hwgrps = dev->nb_event_queues;
-
- return otx2_mbox_process(mbox);
-}
-
-static void
-sso_lf_teardown(struct otx2_sso_evdev *dev,
- enum otx2_sso_lf_type lf_type)
-{
- uint8_t nb_lf;
-
- switch (lf_type) {
- case SSO_LF_GGRP:
- nb_lf = dev->nb_event_queues;
- break;
- case SSO_LF_GWS:
- nb_lf = dev->nb_event_ports;
- nb_lf *= dev->dual_ws ? 2 : 1;
- break;
- default:
- return;
- }
-
- sso_lf_cfg(dev, dev->mbox, lf_type, nb_lf, false);
- sso_hw_lf_cfg(dev->mbox, lf_type, nb_lf, false);
-}
-
-static int
-otx2_sso_configure(const struct rte_eventdev *event_dev)
-{
- struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint32_t deq_tmo_ns;
- int rc;
-
- sso_func_trace();
- deq_tmo_ns = conf->dequeue_timeout_ns;
-
- if (deq_tmo_ns == 0)
- deq_tmo_ns = dev->min_dequeue_timeout_ns;
-
- if (deq_tmo_ns < dev->min_dequeue_timeout_ns ||
- deq_tmo_ns > dev->max_dequeue_timeout_ns) {
- otx2_err("Unsupported dequeue timeout requested");
- return -EINVAL;
- }
-
- if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
- dev->is_timeout_deq = 1;
-
- dev->deq_tmo_ns = deq_tmo_ns;
-
- if (conf->nb_event_ports > dev->max_event_ports ||
- conf->nb_event_queues > dev->max_event_queues) {
- otx2_err("Unsupported event queues/ports requested");
- return -EINVAL;
- }
-
- if (conf->nb_event_port_dequeue_depth > 1) {
- otx2_err("Unsupported event port deq depth requested");
- return -EINVAL;
- }
-
- if (conf->nb_event_port_enqueue_depth > 1) {
- otx2_err("Unsupported event port enq depth requested");
- return -EINVAL;
- }
-
- if (dev->configured)
- sso_unregister_irqs(event_dev);
-
- if (dev->nb_event_queues) {
-		/* Tear down any previously configured queues. */
- sso_lf_teardown(dev, SSO_LF_GGRP);
- }
- if (dev->nb_event_ports) {
-		/* Tear down any previously configured ports. */
- sso_lf_teardown(dev, SSO_LF_GWS);
- }
-
- dev->nb_event_queues = conf->nb_event_queues;
- dev->nb_event_ports = conf->nb_event_ports;
-
- if (dev->dual_ws)
- rc = sso_configure_dual_ports(event_dev);
- else
- rc = sso_configure_ports(event_dev);
-
- if (rc < 0) {
- otx2_err("Failed to configure event ports");
- return -ENODEV;
- }
-
- if (sso_configure_queues(event_dev) < 0) {
- otx2_err("Failed to configure event queues");
- rc = -ENODEV;
- goto teardown_hws;
- }
-
- if (sso_xaq_allocate(dev) < 0) {
- rc = -ENOMEM;
- goto teardown_hwggrp;
- }
-
- /* Restore any prior port-queue mapping. */
- sso_restore_links(event_dev);
- rc = sso_ggrp_alloc_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq to ggrp %d", rc);
- goto teardown_hwggrp;
- }
-
- rc = sso_get_msix_offsets(event_dev);
- if (rc < 0) {
- otx2_err("Failed to get msix offsets %d", rc);
- goto teardown_hwggrp;
- }
-
- rc = sso_register_irqs(event_dev);
- if (rc < 0) {
- otx2_err("Failed to register irq %d", rc);
- goto teardown_hwggrp;
- }
-
- dev->configured = 1;
- rte_mb();
-
- return 0;
-teardown_hwggrp:
- sso_lf_teardown(dev, SSO_LF_GGRP);
-teardown_hws:
- sso_lf_teardown(dev, SSO_LF_GWS);
- dev->nb_event_queues = 0;
- dev->nb_event_ports = 0;
- dev->configured = 0;
- return rc;
-}
-
-static void
-otx2_sso_queue_def_conf(struct rte_eventdev *event_dev, uint8_t queue_id,
- struct rte_event_queue_conf *queue_conf)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(queue_id);
-
- queue_conf->nb_atomic_flows = (1ULL << 20);
- queue_conf->nb_atomic_order_sequences = (1ULL << 20);
- queue_conf->event_queue_cfg = RTE_EVENT_QUEUE_CFG_ALL_TYPES;
- queue_conf->priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
-}
-
-static int
-otx2_sso_queue_setup(struct rte_eventdev *event_dev, uint8_t queue_id,
- const struct rte_event_queue_conf *queue_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct sso_grp_priority *req;
- int rc;
-
- sso_func_trace("Queue=%d prio=%d", queue_id, queue_conf->priority);
-
- req = otx2_mbox_alloc_msg_sso_grp_set_priority(dev->mbox);
- req->grp = queue_id;
- req->weight = 0xFF;
- req->affinity = 0xFF;
- /* Normalize <0-255> to <0-7> */
- req->priority = queue_conf->priority / 32;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to set priority queue=%d", queue_id);
- return rc;
- }
-
- return 0;
-}
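
For reference, the divide-by-32 normalization above maps the 0-255 eventdev priority space onto the eight hardware group priority levels, e.g.:

	req->priority = RTE_EVENT_DEV_PRIORITY_NORMAL / 32;	/* 128 / 32 = 4 */
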
-
-static void
-otx2_sso_port_def_conf(struct rte_eventdev *event_dev, uint8_t port_id,
- struct rte_event_port_conf *port_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
-
- RTE_SET_USED(port_id);
- port_conf->new_event_threshold = dev->max_num_events;
- port_conf->dequeue_depth = 1;
- port_conf->enqueue_depth = 1;
-}
-
-static int
-otx2_sso_port_setup(struct rte_eventdev *event_dev, uint8_t port_id,
- const struct rte_event_port_conf *port_conf)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP] = {0};
- uint64_t val;
- uint16_t q;
-
- sso_func_trace("Port=%d", port_id);
- RTE_SET_USED(port_conf);
-
- if (event_dev->data->ports[port_id] == NULL) {
- otx2_err("Invalid port Id %d", port_id);
- return -EINVAL;
- }
-
- for (q = 0; q < dev->nb_event_queues; q++) {
- grps_base[q] = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 | q << 12);
- if (grps_base[q] == 0) {
- otx2_err("Failed to get grp[%d] base addr", q);
- return -EINVAL;
- }
- }
-
- /* Set get_work timeout for HWS */
- val = NSEC2USEC(dev->deq_tmo_ns) - 1;
-
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[port_id];
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- ws->tstamp = dev->tstamp;
- otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
- ws->ws_state[0].getwrk_op) + SSOW_LF_GWS_NW_TIM);
- otx2_write64(val, OTX2_SSOW_GET_BASE_ADDR(
- ws->ws_state[1].getwrk_op) + SSOW_LF_GWS_NW_TIM);
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[port_id];
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- rte_memcpy(ws->grps_base, grps_base,
- sizeof(uintptr_t) * OTX2_SSO_MAX_VHGRP);
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- ws->tstamp = dev->tstamp;
- otx2_write64(val, base + SSOW_LF_GWS_NW_TIM);
- }
-
- otx2_sso_dbg("Port=%d ws=%p", port_id, event_dev->data->ports[port_id]);
-
- return 0;
-}
-
-static int
-otx2_sso_timeout_ticks(struct rte_eventdev *event_dev, uint64_t ns,
- uint64_t *tmo_ticks)
-{
- RTE_SET_USED(event_dev);
- *tmo_ticks = NSEC2TICK(ns, rte_get_timer_hz());
-
- return 0;
-}
-
-static void
-ssogws_dump(struct otx2_ssogws *ws, FILE *f)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
-
- fprintf(f, "SSOW_LF_GWS Base addr 0x%" PRIx64 "\n", (uint64_t)base);
- fprintf(f, "SSOW_LF_GWS_LINKS 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_LINKS));
- fprintf(f, "SSOW_LF_GWS_PENDWQP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDWQP));
- fprintf(f, "SSOW_LF_GWS_PENDSTATE 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDSTATE));
- fprintf(f, "SSOW_LF_GWS_NW_TIM 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_NW_TIM));
- fprintf(f, "SSOW_LF_GWS_TAG 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_TAG));
- fprintf(f, "SSOW_LF_GWS_WQP 0x%" PRIx64 "\n",
-		otx2_read64(base + SSOW_LF_GWS_WQP));
- fprintf(f, "SSOW_LF_GWS_SWTP 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_SWTP));
- fprintf(f, "SSOW_LF_GWS_PENDTAG 0x%" PRIx64 "\n",
- otx2_read64(base + SSOW_LF_GWS_PENDTAG));
-}
-
-static void
-ssoggrp_dump(uintptr_t base, FILE *f)
-{
- fprintf(f, "SSO_LF_GGRP Base addr 0x%" PRIx64 "\n", (uint64_t)base);
- fprintf(f, "SSO_LF_GGRP_QCTL 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_QCTL));
- fprintf(f, "SSO_LF_GGRP_XAQ_CNT 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_XAQ_CNT));
- fprintf(f, "SSO_LF_GGRP_INT_THR 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_INT_THR));
- fprintf(f, "SSO_LF_GGRP_INT_CNT 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_INT_CNT));
- fprintf(f, "SSO_LF_GGRP_AQ_CNT 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_AQ_CNT));
- fprintf(f, "SSO_LF_GGRP_AQ_THR 0x%" PRIX64 "\n",
- otx2_read64(base + SSO_LF_GGRP_AQ_THR));
- fprintf(f, "SSO_LF_GGRP_MISC_CNT 0x%" PRIx64 "\n",
- otx2_read64(base + SSO_LF_GGRP_MISC_CNT));
-}
-
-static void
-otx2_sso_dump(struct rte_eventdev *event_dev, FILE *f)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t queue;
- uint8_t port;
-
- fprintf(f, "[%s] SSO running in [%s] mode\n", __func__, dev->dual_ws ?
- "dual_ws" : "single_ws");
- /* Dump SSOW registers */
- for (port = 0; port < dev->nb_event_ports; port++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws =
- event_dev->data->ports[port];
-
- fprintf(f, "[%s] SSO dual workslot[%d] vws[%d] dump\n",
- __func__, port, 0);
- ssogws_dump((struct otx2_ssogws *)&ws->ws_state[0], f);
- fprintf(f, "[%s]SSO dual workslot[%d] vws[%d] dump\n",
- __func__, port, 1);
- ssogws_dump((struct otx2_ssogws *)&ws->ws_state[1], f);
- } else {
- fprintf(f, "[%s]SSO single workslot[%d] dump\n",
- __func__, port);
- ssogws_dump(event_dev->data->ports[port], f);
- }
- }
-
- /* Dump SSO registers */
- for (queue = 0; queue < dev->nb_event_queues; queue++) {
- fprintf(f, "[%s]SSO group[%d] dump\n", __func__, queue);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[0];
- ssoggrp_dump(ws->grps_base[queue], f);
- }
- }
-}
-
-static void
-otx2_handle_event(void *arg, struct rte_event event)
-{
- struct rte_eventdev *event_dev = arg;
-
- if (event_dev->dev_ops->dev_stop_flush != NULL)
- event_dev->dev_ops->dev_stop_flush(event_dev->data->dev_id,
- event, event_dev->data->dev_stop_flush_arg);
-}
-
-static void
-sso_qos_cfg(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct sso_grp_qos_cfg *req;
- uint16_t i;
-
- for (i = 0; i < dev->qos_queue_cnt; i++) {
- uint8_t xaq_prcnt = dev->qos_parse_data[i].xaq_prcnt;
- uint8_t iaq_prcnt = dev->qos_parse_data[i].iaq_prcnt;
- uint8_t taq_prcnt = dev->qos_parse_data[i].taq_prcnt;
-
- if (dev->qos_parse_data[i].queue >= dev->nb_event_queues)
- continue;
-
- req = otx2_mbox_alloc_msg_sso_grp_qos_config(dev->mbox);
-		req->xaq_limit = (dev->nb_xaq_cfg *
-				  (xaq_prcnt ? xaq_prcnt : 100)) / 100;
-		req->taq_thr = (SSO_HWGRP_TAQ_MAX_THR_MASK *
-				(taq_prcnt ? taq_prcnt : 100)) / 100;
-		req->iaq_thr = (SSO_HWGRP_IAQ_MAX_THR_MASK *
-				(iaq_prcnt ? iaq_prcnt : 100)) / 100;
- }
-
- if (dev->qos_queue_cnt)
- otx2_mbox_process(dev->mbox);
-}
-
-static void
-sso_cleanup(struct rte_eventdev *event_dev, uint8_t enable)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t i;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws;
-
- ws = event_dev->data->ports[i];
- ssogws_reset((struct otx2_ssogws *)&ws->ws_state[0]);
- ssogws_reset((struct otx2_ssogws *)&ws->ws_state[1]);
- ws->swtag_req = 0;
- ws->vws = 0;
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- } else {
- struct otx2_ssogws *ws;
-
- ws = event_dev->data->ports[i];
- ssogws_reset(ws);
- ws->swtag_req = 0;
- ws->fc_mem = dev->fc_mem;
- ws->xaq_lmt = dev->xaq_lmt;
- }
- }
-
- rte_mb();
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[0];
- struct otx2_ssogws temp_ws;
-
- memcpy(&temp_ws, &ws->ws_state[0],
- sizeof(struct otx2_ssogws_state));
- for (i = 0; i < dev->nb_event_queues; i++) {
- /* Consume all the events through HWS0 */
- ssogws_flush_events(&temp_ws, i, ws->grps_base[i],
- otx2_handle_event, event_dev);
- /* Enable/Disable SSO GGRP */
- otx2_write64(enable, ws->grps_base[i] +
- SSO_LF_GGRP_QCTL);
- }
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[0];
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- /* Consume all the events through HWS0 */
- ssogws_flush_events(ws, i, ws->grps_base[i],
- otx2_handle_event, event_dev);
- /* Enable/Disable SSO GGRP */
- otx2_write64(enable, ws->grps_base[i] +
- SSO_LF_GGRP_QCTL);
- }
- }
-
- /* reset SSO GWS cache */
- otx2_mbox_alloc_msg_sso_ws_cache_inv(dev->mbox);
- otx2_mbox_process(dev->mbox);
-}
-
-int
-sso_xae_reconfigure(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int rc = 0;
-
- if (event_dev->data->dev_started)
- sso_cleanup(event_dev, 0);
-
- rc = sso_ggrp_free_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to free XAQ\n");
- return rc;
- }
-
- rte_mempool_free(dev->xaq_pool);
- dev->xaq_pool = NULL;
- rc = sso_xaq_allocate(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq pool %d", rc);
- return rc;
- }
- rc = sso_ggrp_alloc_xaq(dev);
- if (rc < 0) {
- otx2_err("Failed to alloc xaq to ggrp %d", rc);
- return rc;
- }
-
- rte_mb();
- if (event_dev->data->dev_started)
- sso_cleanup(event_dev, 1);
-
- return 0;
-}
-
-static int
-otx2_sso_start(struct rte_eventdev *event_dev)
-{
- sso_func_trace();
- sso_qos_cfg(event_dev);
- sso_cleanup(event_dev, 1);
- sso_fastpath_fns_set(event_dev);
-
- return 0;
-}
-
-static void
-otx2_sso_stop(struct rte_eventdev *event_dev)
-{
- sso_func_trace();
- sso_cleanup(event_dev, 0);
- rte_mb();
-}
-
-static int
-otx2_sso_close(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
- uint16_t i;
-
- if (!dev->configured)
- return 0;
-
- sso_unregister_irqs(event_dev);
-
- for (i = 0; i < dev->nb_event_queues; i++)
- all_queues[i] = i;
-
- for (i = 0; i < dev->nb_event_ports; i++)
- otx2_sso_port_unlink(event_dev, event_dev->data->ports[i],
- all_queues, dev->nb_event_queues);
-
- sso_lf_teardown(dev, SSO_LF_GGRP);
- sso_lf_teardown(dev, SSO_LF_GWS);
- dev->nb_event_ports = 0;
- dev->nb_event_queues = 0;
- rte_mempool_free(dev->xaq_pool);
- rte_memzone_free(rte_memzone_lookup(OTX2_SSO_FC_NAME));
-
- return 0;
-}
-
-/* Initialize and register the event driver with the DPDK application */
-static struct eventdev_ops otx2_sso_ops = {
- .dev_infos_get = otx2_sso_info_get,
- .dev_configure = otx2_sso_configure,
- .queue_def_conf = otx2_sso_queue_def_conf,
- .queue_setup = otx2_sso_queue_setup,
- .queue_release = otx2_sso_queue_release,
- .port_def_conf = otx2_sso_port_def_conf,
- .port_setup = otx2_sso_port_setup,
- .port_release = otx2_sso_port_release,
- .port_link = otx2_sso_port_link,
- .port_unlink = otx2_sso_port_unlink,
- .timeout_ticks = otx2_sso_timeout_ticks,
-
- .eth_rx_adapter_caps_get = otx2_sso_rx_adapter_caps_get,
- .eth_rx_adapter_queue_add = otx2_sso_rx_adapter_queue_add,
- .eth_rx_adapter_queue_del = otx2_sso_rx_adapter_queue_del,
- .eth_rx_adapter_start = otx2_sso_rx_adapter_start,
- .eth_rx_adapter_stop = otx2_sso_rx_adapter_stop,
-
- .eth_tx_adapter_caps_get = otx2_sso_tx_adapter_caps_get,
- .eth_tx_adapter_queue_add = otx2_sso_tx_adapter_queue_add,
- .eth_tx_adapter_queue_del = otx2_sso_tx_adapter_queue_del,
-
- .timer_adapter_caps_get = otx2_tim_caps_get,
-
- .crypto_adapter_caps_get = otx2_ca_caps_get,
- .crypto_adapter_queue_pair_add = otx2_ca_qp_add,
- .crypto_adapter_queue_pair_del = otx2_ca_qp_del,
-
- .xstats_get = otx2_sso_xstats_get,
- .xstats_reset = otx2_sso_xstats_reset,
- .xstats_get_names = otx2_sso_xstats_get_names,
-
- .dump = otx2_sso_dump,
- .dev_start = otx2_sso_start,
- .dev_stop = otx2_sso_stop,
- .dev_close = otx2_sso_close,
- .dev_selftest = otx2_sso_selftest,
-};
-
-#define OTX2_SSO_XAE_CNT "xae_cnt"
-#define OTX2_SSO_SINGLE_WS "single_ws"
-#define OTX2_SSO_GGRP_QOS "qos"
-#define OTX2_SSO_FORCE_BP "force_rx_bp"
-
-static void
-parse_queue_param(char *value, void *opaque)
-{
- struct otx2_sso_qos queue_qos = {0};
- uint8_t *val = (uint8_t *)&queue_qos;
- struct otx2_sso_evdev *dev = opaque;
- char *tok = strtok(value, "-");
- struct otx2_sso_qos *old_ptr;
-
- if (!strlen(value))
- return;
-
- while (tok != NULL) {
- *val = atoi(tok);
- tok = strtok(NULL, "-");
- val++;
- }
-
- if (val != (&queue_qos.iaq_prcnt + 1)) {
- otx2_err("Invalid QoS parameter expected [Qx-XAQ-TAQ-IAQ]");
- return;
- }
-
- dev->qos_queue_cnt++;
- old_ptr = dev->qos_parse_data;
- dev->qos_parse_data = rte_realloc(dev->qos_parse_data,
- sizeof(struct otx2_sso_qos) *
- dev->qos_queue_cnt, 0);
- if (dev->qos_parse_data == NULL) {
- dev->qos_parse_data = old_ptr;
- dev->qos_queue_cnt--;
- return;
- }
- dev->qos_parse_data[dev->qos_queue_cnt - 1] = queue_qos;
-}
-
-static void
-parse_qos_list(const char *value, void *opaque)
-{
- char *s = strdup(value);
- char *start = NULL;
- char *end = NULL;
- char *f = s;
-
- while (*s) {
- if (*s == '[')
- start = s;
- else if (*s == ']')
- end = s;
-
- if (start && start < end) {
- *end = 0;
- parse_queue_param(start + 1, opaque);
- s = end;
- start = end;
- }
- s++;
- }
-
- free(f);
-}
-
-static int
-parse_sso_kvargs_dict(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
-	/* Dict format [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ]; '-' is used as the
-	 * separator because ',' isn't allowed in kvargs values. Everything is
-	 * expressed in percentages, and 0 selects the default.
-	 */
- parse_qos_list(value, opaque);
-
- return 0;
-}
-
-static void
-sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
-{
- struct rte_kvargs *kvlist;
- uint8_t single_ws = 0;
-
- if (devargs == NULL)
- return;
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_SSO_XAE_CNT, &parse_kvargs_value,
- &dev->xae_cnt);
- rte_kvargs_process(kvlist, OTX2_SSO_SINGLE_WS, &parse_kvargs_flag,
- &single_ws);
- rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
- dev);
- rte_kvargs_process(kvlist, OTX2_SSO_FORCE_BP, &parse_kvargs_flag,
- &dev->force_rx_bp);
- otx2_parse_common_devargs(kvlist);
- dev->dual_ws = !single_ws;
- rte_kvargs_free(kvlist);
-}
-
-static int
-otx2_sso_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- return rte_event_pmd_pci_probe(pci_drv, pci_dev,
- sizeof(struct otx2_sso_evdev),
- otx2_sso_init);
-}
-
-static int
-otx2_sso_remove(struct rte_pci_device *pci_dev)
-{
- return rte_event_pmd_pci_remove(pci_dev, otx2_sso_fini);
-}
-
-static const struct rte_pci_id pci_sso_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SSO_TIM_PF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_sso = {
- .id_table = pci_sso_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
- .probe = otx2_sso_probe,
- .remove = otx2_sso_remove,
-};
-
-int
-otx2_sso_init(struct rte_eventdev *event_dev)
-{
- struct free_rsrcs_rsp *rsrc_cnt;
- struct rte_pci_device *pci_dev;
- struct otx2_sso_evdev *dev;
- int rc;
-
- event_dev->dev_ops = &otx2_sso_ops;
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- sso_fastpath_fns_set(event_dev);
- return 0;
- }
-
- dev = sso_pmd_priv(event_dev);
-
- pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
-
- /* Initialize the base otx2_dev object */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc < 0) {
- otx2_err("Failed to initialize otx2_dev rc=%d", rc);
- goto error;
- }
-
- /* Get SSO and SSOW MSIX rsrc cnt */
- otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
- rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
- if (rc < 0) {
- otx2_err("Unable to get free rsrc count");
- goto otx2_dev_uninit;
- }
- otx2_sso_dbg("SSO %d SSOW %d NPA %d provisioned", rsrc_cnt->sso,
- rsrc_cnt->ssow, rsrc_cnt->npa);
-
- dev->max_event_ports = RTE_MIN(rsrc_cnt->ssow, OTX2_SSO_MAX_VHWS);
- dev->max_event_queues = RTE_MIN(rsrc_cnt->sso, OTX2_SSO_MAX_VHGRP);
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc < 0) {
- otx2_err("Unable to init NPA lf. It might not be provisioned");
- goto otx2_dev_uninit;
- }
-
- dev->drv_inited = true;
- dev->is_timeout_deq = 0;
- dev->min_dequeue_timeout_ns = USEC2NSEC(1);
- dev->max_dequeue_timeout_ns = USEC2NSEC(0x3FF);
- dev->max_num_events = -1;
- dev->nb_event_queues = 0;
- dev->nb_event_ports = 0;
-
- if (!dev->max_event_ports || !dev->max_event_queues) {
- otx2_err("Not enough eventdev resource queues=%d ports=%d",
- dev->max_event_queues, dev->max_event_ports);
- rc = -ENODEV;
- goto otx2_npa_lf_uninit;
- }
-
- dev->dual_ws = 1;
- sso_parse_devargs(dev, pci_dev->device.devargs);
- if (dev->dual_ws) {
- otx2_sso_dbg("Using dual workslot mode");
- dev->max_event_ports = dev->max_event_ports / 2;
- } else {
- otx2_sso_dbg("Using single workslot mode");
- }
-
- otx2_sso_pf_func_set(dev->pf_func);
- otx2_sso_dbg("Initializing %s max_queues=%d max_ports=%d",
- event_dev->data->name, dev->max_event_queues,
- dev->max_event_ports);
-
- otx2_tim_init(pci_dev, (struct otx2_dev *)dev);
-
- return 0;
-
-otx2_npa_lf_uninit:
- otx2_npa_lf_fini();
-otx2_dev_uninit:
- otx2_dev_fini(pci_dev, dev);
-error:
- return rc;
-}
-
-int
-otx2_sso_fini(struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct rte_pci_device *pci_dev;
-
- /* For secondary processes, nothing to be done */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- pci_dev = container_of(event_dev->dev, struct rte_pci_device, device);
-
- if (!dev->drv_inited)
- goto dev_fini;
-
- dev->drv_inited = false;
- otx2_npa_lf_fini();
-
-dev_fini:
- if (otx2_npa_lf_active(dev)) {
- otx2_info("Common resource in use by other devices");
- return -EAGAIN;
- }
-
- otx2_tim_fini();
- otx2_dev_fini(pci_dev, dev);
-
- return 0;
-}
-
-RTE_PMD_REGISTER_PCI(event_octeontx2, pci_sso);
-RTE_PMD_REGISTER_PCI_TABLE(event_octeontx2, pci_sso_map);
-RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
- OTX2_SSO_SINGLE_WS "=1"
- OTX2_SSO_GGRP_QOS "=<string>"
- OTX2_SSO_FORCE_BP "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
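
Putting the registered parameters together, a typical EAL device argument for this PMD would have looked roughly like the following (the PCI address and values are illustrative; remaining arguments are application specific):

	dpdk-test-eventdev -a 0002:0e:00.0,xae_cnt=16384,qos=[1-50-50-50] ...
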
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
deleted file mode 100644
index a5d34b7df7..0000000000
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ /dev/null
@@ -1,430 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_H__
-#define __OTX2_EVDEV_H__
-
-#include <rte_eventdev.h>
-#include <eventdev_pmd.h>
-#include <rte_event_eth_rx_adapter.h>
-#include <rte_event_eth_tx_adapter.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-#include "otx2_mempool.h"
-#include "otx2_tim_evdev.h"
-
-#define EVENTDEV_NAME_OCTEONTX2_PMD event_octeontx2
-
-#define sso_func_trace otx2_sso_dbg
-
-#define OTX2_SSO_MAX_VHGRP RTE_EVENT_MAX_QUEUES_PER_DEV
-#define OTX2_SSO_MAX_VHWS (UINT8_MAX)
-#define OTX2_SSO_FC_NAME "otx2_evdev_xaq_fc"
-#define OTX2_SSO_SQB_LIMIT (0x180)
-#define OTX2_SSO_XAQ_SLACK (8)
-#define OTX2_SSO_XAQ_CACHE_CNT (0x7)
-#define OTX2_SSO_WQE_SG_PTR (9)
-
-/* SSO LF register offsets (BAR2) */
-#define SSO_LF_GGRP_OP_ADD_WORK0 (0x0ull)
-#define SSO_LF_GGRP_OP_ADD_WORK1 (0x8ull)
-
-#define SSO_LF_GGRP_QCTL (0x20ull)
-#define SSO_LF_GGRP_EXE_DIS (0x80ull)
-#define SSO_LF_GGRP_INT (0x100ull)
-#define SSO_LF_GGRP_INT_W1S (0x108ull)
-#define SSO_LF_GGRP_INT_ENA_W1S (0x110ull)
-#define SSO_LF_GGRP_INT_ENA_W1C (0x118ull)
-#define SSO_LF_GGRP_INT_THR (0x140ull)
-#define SSO_LF_GGRP_INT_CNT (0x180ull)
-#define SSO_LF_GGRP_XAQ_CNT (0x1b0ull)
-#define SSO_LF_GGRP_AQ_CNT (0x1c0ull)
-#define SSO_LF_GGRP_AQ_THR (0x1e0ull)
-#define SSO_LF_GGRP_MISC_CNT (0x200ull)
-
-/* SSOW LF register offsets (BAR2) */
-#define SSOW_LF_GWS_LINKS (0x10ull)
-#define SSOW_LF_GWS_PENDWQP (0x40ull)
-#define SSOW_LF_GWS_PENDSTATE (0x50ull)
-#define SSOW_LF_GWS_NW_TIM (0x70ull)
-#define SSOW_LF_GWS_GRPMSK_CHG (0x80ull)
-#define SSOW_LF_GWS_INT (0x100ull)
-#define SSOW_LF_GWS_INT_W1S (0x108ull)
-#define SSOW_LF_GWS_INT_ENA_W1S (0x110ull)
-#define SSOW_LF_GWS_INT_ENA_W1C (0x118ull)
-#define SSOW_LF_GWS_TAG (0x200ull)
-#define SSOW_LF_GWS_WQP (0x210ull)
-#define SSOW_LF_GWS_SWTP (0x220ull)
-#define SSOW_LF_GWS_PENDTAG (0x230ull)
-#define SSOW_LF_GWS_OP_ALLOC_WE (0x400ull)
-#define SSOW_LF_GWS_OP_GET_WORK (0x600ull)
-#define SSOW_LF_GWS_OP_SWTAG_FLUSH (0x800ull)
-#define SSOW_LF_GWS_OP_SWTAG_UNTAG (0x810ull)
-#define SSOW_LF_GWS_OP_SWTP_CLR (0x820ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP0 (0x830ull)
-#define SSOW_LF_GWS_OP_UPD_WQP_GRP1 (0x838ull)
-#define SSOW_LF_GWS_OP_DESCHED (0x880ull)
-#define SSOW_LF_GWS_OP_DESCHED_NOSCH (0x8c0ull)
-#define SSOW_LF_GWS_OP_SWTAG_DESCHED (0x980ull)
-#define SSOW_LF_GWS_OP_SWTAG_NOSCHED (0x9c0ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED0 (0xa00ull)
-#define SSOW_LF_GWS_OP_CLR_NSCHED1 (0xa08ull)
-#define SSOW_LF_GWS_OP_SWTP_SET (0xc00ull)
-#define SSOW_LF_GWS_OP_SWTAG_NORM (0xc10ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL0 (0xc20ull)
-#define SSOW_LF_GWS_OP_SWTAG_FULL1 (0xc28ull)
-#define SSOW_LF_GWS_OP_GWC_INVAL (0xe00ull)
-
-#define OTX2_SSOW_GET_BASE_ADDR(_GW) ((_GW) - SSOW_LF_GWS_OP_GET_WORK)
-#define OTX2_SSOW_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY)
-#define OTX2_SSOW_GRP_FROM_TAG(x) (((x) >> 36) & 0x3ff)
-
-#define NSEC2USEC(__ns) ((__ns) / 1E3)
-#define USEC2NSEC(__us) ((__us) * 1E3)
-#define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
-#define TICK2NSEC(__tck, __freq) (((__tck) * 1E9) / (__freq))
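
A worked example of the conversion macros above, assuming a 1 GHz timer source (rte_get_timer_hz() returning 1E9), as used by otx2_sso_timeout_ticks():

	uint64_t ticks = NSEC2TICK(10 * 1000, 1E9);	/* 10 us -> 10000 ticks */
	uint64_t ns = TICK2NSEC(ticks, 1E9);		/* back to 10000 ns */
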
-
-enum otx2_sso_lf_type {
- SSO_LF_GGRP,
- SSO_LF_GWS
-};
-
-union otx2_sso_event {
- uint64_t get_work0;
- struct {
- uint32_t flow_id:20;
- uint32_t sub_event_type:8;
- uint32_t event_type:4;
- uint8_t op:2;
- uint8_t rsvd:4;
- uint8_t sched_type:2;
- uint8_t queue_id;
- uint8_t priority;
- uint8_t impl_opaque;
- };
-} __rte_aligned(64);
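
Since this bitfield deliberately mirrors the first 64-bit word of struct rte_event, an SSO event word can be formed from an application event in a single store, e.g. (assuming ev is a struct rte_event *):

	union otx2_sso_event sso_ev = { .get_work0 = ev->event };
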
-
-enum {
- SSO_SYNC_ORDERED,
- SSO_SYNC_ATOMIC,
- SSO_SYNC_UNTAGGED,
- SSO_SYNC_EMPTY
-};
-
-struct otx2_sso_qos {
- uint8_t queue;
- uint8_t xaq_prcnt;
- uint8_t taq_prcnt;
- uint8_t iaq_prcnt;
-};
-
-struct otx2_sso_evdev {
- OTX2_DEV; /* Base class */
- uint8_t max_event_queues;
- uint8_t max_event_ports;
- uint8_t is_timeout_deq;
- uint8_t nb_event_queues;
- uint8_t nb_event_ports;
- uint8_t configured;
- uint32_t deq_tmo_ns;
- uint32_t min_dequeue_timeout_ns;
- uint32_t max_dequeue_timeout_ns;
- int32_t max_num_events;
- uint64_t *fc_mem;
- uint64_t xaq_lmt;
- uint64_t nb_xaq_cfg;
- rte_iova_t fc_iova;
- struct rte_mempool *xaq_pool;
- uint64_t rx_offloads;
- uint64_t tx_offloads;
- uint64_t adptr_xae_cnt;
- uint16_t rx_adptr_pool_cnt;
- uint64_t *rx_adptr_pools;
- uint16_t max_port_id;
- uint16_t tim_adptr_ring_cnt;
- uint16_t *timer_adptr_rings;
- uint64_t *timer_adptr_sz;
- /* Dev args */
- uint8_t dual_ws;
- uint32_t xae_cnt;
- uint8_t qos_queue_cnt;
- uint8_t force_rx_bp;
- struct otx2_sso_qos *qos_parse_data;
- /* HW const */
- uint32_t xae_waes;
- uint32_t xaq_buf_size;
- uint32_t iue;
- /* MSIX offsets */
- uint16_t sso_msixoff[OTX2_SSO_MAX_VHGRP];
- uint16_t ssow_msixoff[OTX2_SSO_MAX_VHWS];
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
-} __rte_cache_aligned;
-
-#define OTX2_SSOGWS_OPS \
- /* WS ops */ \
- uintptr_t getwrk_op; \
- uintptr_t tag_op; \
- uintptr_t wqp_op; \
- uintptr_t swtag_flush_op; \
- uintptr_t swtag_norm_op; \
- uintptr_t swtag_desched_op;
-
-/* Event port aka GWS */
-struct otx2_ssogws {
- /* Get Work Fastpath data */
- OTX2_SSOGWS_OPS;
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
- void *lookup_mem;
- uint8_t swtag_req;
- uint8_t port;
- /* Add Work Fastpath data */
- uint64_t xaq_lmt __rte_cache_aligned;
- uint64_t *fc_mem;
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
- /* Tx Fastpath data */
- uint64_t base __rte_cache_aligned;
- uint8_t tx_adptr_data[];
-} __rte_cache_aligned;
-
-struct otx2_ssogws_state {
- OTX2_SSOGWS_OPS;
-};
-
-struct otx2_ssogws_dual {
- /* Get Work Fastpath data */
- struct otx2_ssogws_state ws_state[2]; /* Ping and Pong */
- /* PTP timestamp */
- struct otx2_timesync_info *tstamp;
- void *lookup_mem;
- uint8_t swtag_req;
- uint8_t vws; /* Ping pong bit */
- uint8_t port;
- /* Add Work Fastpath data */
- uint64_t xaq_lmt __rte_cache_aligned;
- uint64_t *fc_mem;
- uintptr_t grps_base[OTX2_SSO_MAX_VHGRP];
- /* Tx Fastpath data */
- uint64_t base[2] __rte_cache_aligned;
- uint8_t tx_adptr_data[];
-} __rte_cache_aligned;
-
-static inline struct otx2_sso_evdev *
-sso_pmd_priv(const struct rte_eventdev *event_dev)
-{
- return event_dev->data->dev_private;
-}
-
-struct otx2_ssogws_cookie {
- const struct rte_eventdev *event_dev;
- bool configured;
-};
-
-static inline struct otx2_ssogws_cookie *
-ssogws_get_cookie(void *ws)
-{
- return (struct otx2_ssogws_cookie *)
- ((uint8_t *)ws - RTE_CACHE_LINE_SIZE);
-}
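
This lookup is the inverse of the allocation done in sso_configure_ports() and sso_configure_dual_ports(), where one extra cache line is reserved in front of the workslot. As a sketch (socket_id assumed):

	void *mem = rte_zmalloc_socket("otx2_sso_ws",
				       sizeof(struct otx2_ssogws) +
				       RTE_CACHE_LINE_SIZE,
				       RTE_CACHE_LINE_SIZE, socket_id);
	/* Hand out the pointer just past the cookie line... */
	struct otx2_ssogws *ws = (struct otx2_ssogws *)
				 ((uint8_t *)mem + RTE_CACHE_LINE_SIZE);
	/* ...so the cookie can be recovered from any ws pointer. */
	struct otx2_ssogws_cookie *cookie = ssogws_get_cookie(ws);
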
-
-static const union mbuf_initializer mbuf_init = {
- .fields = {
- .data_off = RTE_PKTMBUF_HEADROOM,
- .refcnt = 1,
- .nb_segs = 1,
- .port = 0
- }
-};
-
-static __rte_always_inline void
-otx2_wqe_to_mbuf(uint64_t get_work1, const uint64_t mbuf, uint8_t port_id,
- const uint32_t tag, const uint32_t flags,
- const void * const lookup_mem)
-{
- struct nix_wqe_hdr_s *wqe = (struct nix_wqe_hdr_s *)get_work1;
- uint64_t val = mbuf_init.value | (uint64_t)port_id << 48;
-
- if (flags & NIX_RX_OFFLOAD_TSTAMP_F)
- val |= NIX_TIMESYNC_RX_OFFSET;
-
- otx2_nix_cqe_to_mbuf((struct nix_cqe_hdr_s *)wqe, tag,
- (struct rte_mbuf *)mbuf, lookup_mem,
- val, flags);
-
-}
-
-static inline int
-parse_kvargs_flag(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- *(uint8_t *)opaque = !!atoi(value);
- return 0;
-}
-
-static inline int
-parse_kvargs_value(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- *(uint32_t *)opaque = (uint32_t)atoi(value);
- return 0;
-}
-
-#define SSO_RX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_FASTPATH_MODES
-#define SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_TX_FASTPATH_MODES
-
-/* Single WS APIs */
-uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev);
-uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-
-/* Dual WS APIs */
-uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev);
-uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events);
-
-/* Auto-generated APIs */
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
- \
-uint16_t otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks); \
-uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks);\
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
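
Each R() entry supplied by NIX_RX_FASTPATH_MODES thus expands to sixteen dequeue prototypes: eight single-workslot and eight dual-workslot variants covering the plain/burst, timeout and segmented combinations. For a hypothetical mode named foo, the first expansion would read:

	uint16_t otx2_ssogws_deq_foo(void *port, struct rte_event *ev,
				     uint64_t timeout_ticks);
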
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[],\
- uint16_t nb_events); \
-uint16_t otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-uint16_t otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-uint16_t otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events); \
-
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-void sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data,
- uint32_t event_type);
-int sso_xae_reconfigure(struct rte_eventdev *event_dev);
-void sso_fastpath_fns_set(struct rte_eventdev *event_dev);
-
-int otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- uint32_t *caps);
-int otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id,
- const struct rte_event_eth_rx_adapter_queue_conf *queue_conf);
-int otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id);
-int otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev);
-int otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev);
-int otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
- const struct rte_eth_dev *eth_dev,
- uint32_t *caps);
-int otx2_sso_tx_adapter_queue_add(uint8_t id,
- const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id);
-
-int otx2_sso_tx_adapter_queue_del(uint8_t id,
- const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id);
-
-/* Event crypto adapter APIs */
-int otx2_ca_caps_get(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, uint32_t *caps);
-
-int otx2_ca_qp_add(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, int32_t queue_pair_id,
- const struct rte_event *event);
-
-int otx2_ca_qp_del(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, int32_t queue_pair_id);
-
-/* Cleanup APIs */
-typedef void (*otx2_handle_event_t)(void *arg, struct rte_event ev);
-void ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id,
- uintptr_t base, otx2_handle_event_t fn, void *arg);
-void ssogws_reset(struct otx2_ssogws *ws);
-/* Selftest */
-int otx2_sso_selftest(void);
-/* Init and fini APIs */
-int otx2_sso_init(struct rte_eventdev *event_dev);
-int otx2_sso_fini(struct rte_eventdev *event_dev);
-/* IRQ handlers */
-int sso_register_irqs(const struct rte_eventdev *event_dev);
-void sso_unregister_irqs(const struct rte_eventdev *event_dev);
-
-#endif /* __OTX2_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
deleted file mode 100644
index a91f784b1e..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ /dev/null
@@ -1,656 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019-2021 Marvell.
- */
-
-#include "otx2_evdev.h"
-
-#define NIX_RQ_AURA_THRESH(x) (((x)*95) / 100)
-
-int
-otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev, uint32_t *caps)
-{
- int rc;
-
- RTE_SET_USED(event_dev);
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- *caps = RTE_EVENT_ETH_RX_ADAPTER_SW_CAP;
- else
- *caps = RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT |
- RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ;
-
- return 0;
-}
-
-static inline int
-sso_rxq_enable(struct otx2_eth_dev *dev, uint16_t qid, uint8_t tt, uint8_t ggrp,
- uint16_t eth_port_id)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 0;
- aq->cq.caching = 0;
-
- otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
- aq->cq_mask.caching = ~(aq->cq_mask.caching);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to disable cq context");
- goto fail;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.sso_ena = 1;
- aq->rq.sso_tt = tt;
- aq->rq.sso_grp = ggrp;
- aq->rq.ena_wqwd = 1;
-	/* Mbuf header generation:
-	 * > FIRST_SKIP is a superset of WQE_SKIP; don't modify FIRST_SKIP as
-	 *   it already encodes the mbuf size, headroom and private area.
-	 * > Using WQE_SKIP we can directly assign
-	 *   mbuf = wqe - sizeof(struct mbuf);
-	 *   so that the mbuf header will not hold unpredictable values, while
-	 *   headroom and private data start at the beginning of wqe_data.
-	 */
- aq->rq.wqe_skip = 1;
- aq->rq.wqe_caching = 1;
- aq->rq.spb_ena = 0;
- aq->rq.flow_tagw = 20; /* 20-bits */
-
-	/* Flow tag calculation:
- *
- * rq_tag <31:24> = good/bad_tag<8:0>;
- * rq_tag <23:0> = [ltag]
- *
- * flow_tag_mask<31:0> = (1 << flow_tagw) - 1; <31:20>
- * tag<31:0> = (~flow_tag_mask & rq_tag) | (flow_tag_mask & flow_tag);
- *
- * Setup :
- * ltag<23:0> = (eth_port_id & 0xF) << 20;
- * good/bad_tag<8:0> =
- * ((eth_port_id >> 4) & 0xF) | (RTE_EVENT_TYPE_ETHDEV << 4);
- *
- * TAG<31:0> on getwork = <31:28>(RTE_EVENT_TYPE_ETHDEV) |
- * <27:20> (eth_port_id) | <20:0> [TAG]
- */
-
- aq->rq.ltag = (eth_port_id & 0xF) << 20;
- aq->rq.good_utag = ((eth_port_id >> 4) & 0xF) |
- (RTE_EVENT_TYPE_ETHDEV << 4);
- aq->rq.bad_utag = aq->rq.good_utag;
-
- aq->rq.ena = 0; /* Don't enable RQ yet */
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
-
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
- /* mask the bits to write. */
- aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
- aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
- aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
- aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
- aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
- aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
- aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
- aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
- aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
- aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
- aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
- aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
- aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to init rx adapter context");
- goto fail;
- }
-
- return 0;
-fail:
- return rc;
-}
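
A sketch of the mbuf recovery that the WQE_SKIP setting above enables on the getwork side (wqe assumed to point at the received WQE header; see otx2_wqe_to_mbuf() in otx2_evdev.h):

	/* With wqe_skip = 1, the WQE starts right after the mbuf
	 * header, so the mbuf is recovered with plain pointer math.
	 */
	struct rte_mbuf *mbuf = (struct rte_mbuf *)
		((uintptr_t)wqe - sizeof(struct rte_mbuf));
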
-
-static inline int
-sso_rxq_disable(struct otx2_eth_dev *dev, uint16_t qid)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 1;
- aq->cq.caching = 1;
-
- otx2_mbox_memset(&aq->cq_mask, 0, sizeof(struct nix_cq_ctx_s));
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
- aq->cq_mask.caching = ~(aq->cq_mask.caching);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to enable cq context");
- goto fail;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.sso_ena = 0;
- aq->rq.sso_tt = SSO_TT_UNTAGGED;
- aq->rq.sso_grp = 0;
- aq->rq.ena_wqwd = 0;
- aq->rq.wqe_caching = 0;
- aq->rq.wqe_skip = 0;
- aq->rq.spb_ena = 0;
- aq->rq.flow_tagw = 0x20;
- aq->rq.ltag = 0;
- aq->rq.good_utag = 0;
- aq->rq.bad_utag = 0;
- aq->rq.ena = 1;
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
-
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(struct nix_rq_ctx_s));
- /* mask the bits to write. */
- aq->rq_mask.sso_ena = ~(aq->rq_mask.sso_ena);
- aq->rq_mask.sso_tt = ~(aq->rq_mask.sso_tt);
- aq->rq_mask.sso_grp = ~(aq->rq_mask.sso_grp);
- aq->rq_mask.ena_wqwd = ~(aq->rq_mask.ena_wqwd);
- aq->rq_mask.wqe_caching = ~(aq->rq_mask.wqe_caching);
- aq->rq_mask.wqe_skip = ~(aq->rq_mask.wqe_skip);
- aq->rq_mask.spb_ena = ~(aq->rq_mask.spb_ena);
- aq->rq_mask.flow_tagw = ~(aq->rq_mask.flow_tagw);
- aq->rq_mask.ltag = ~(aq->rq_mask.ltag);
- aq->rq_mask.good_utag = ~(aq->rq_mask.good_utag);
- aq->rq_mask.bad_utag = ~(aq->rq_mask.bad_utag);
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
- aq->rq_mask.pb_caching = ~(aq->rq_mask.pb_caching);
- aq->rq_mask.xqe_imm_size = ~(aq->rq_mask.xqe_imm_size);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to clear rx adapter context");
- goto fail;
- }
-
- return 0;
-fail:
- return rc;
-}
-
-void
-sso_updt_xae_cnt(struct otx2_sso_evdev *dev, void *data, uint32_t event_type)
-{
- int i;
-
- switch (event_type) {
- case RTE_EVENT_TYPE_ETHDEV:
- {
- struct otx2_eth_rxq *rxq = data;
- uint64_t *old_ptr;
-
- for (i = 0; i < dev->rx_adptr_pool_cnt; i++) {
- if ((uint64_t)rxq->pool == dev->rx_adptr_pools[i])
- return;
- }
-
- dev->rx_adptr_pool_cnt++;
- old_ptr = dev->rx_adptr_pools;
- dev->rx_adptr_pools = rte_realloc(dev->rx_adptr_pools,
- sizeof(uint64_t) *
- dev->rx_adptr_pool_cnt, 0);
- if (dev->rx_adptr_pools == NULL) {
- dev->adptr_xae_cnt += rxq->pool->size;
- dev->rx_adptr_pools = old_ptr;
- dev->rx_adptr_pool_cnt--;
- return;
- }
- dev->rx_adptr_pools[dev->rx_adptr_pool_cnt - 1] =
- (uint64_t)rxq->pool;
-
- dev->adptr_xae_cnt += rxq->pool->size;
- break;
- }
- case RTE_EVENT_TYPE_TIMER:
- {
- struct otx2_tim_ring *timr = data;
- uint16_t *old_ring_ptr;
- uint64_t *old_sz_ptr;
-
- for (i = 0; i < dev->tim_adptr_ring_cnt; i++) {
- if (timr->ring_id != dev->timer_adptr_rings[i])
- continue;
- if (timr->nb_timers == dev->timer_adptr_sz[i])
- return;
- dev->adptr_xae_cnt -= dev->timer_adptr_sz[i];
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_sz[i] = timr->nb_timers;
-
- return;
- }
-
- dev->tim_adptr_ring_cnt++;
- old_ring_ptr = dev->timer_adptr_rings;
- old_sz_ptr = dev->timer_adptr_sz;
-
- dev->timer_adptr_rings = rte_realloc(dev->timer_adptr_rings,
- sizeof(uint16_t) *
- dev->tim_adptr_ring_cnt,
- 0);
- if (dev->timer_adptr_rings == NULL) {
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_rings = old_ring_ptr;
- dev->tim_adptr_ring_cnt--;
- return;
- }
-
- dev->timer_adptr_sz = rte_realloc(dev->timer_adptr_sz,
- sizeof(uint64_t) *
- dev->tim_adptr_ring_cnt,
- 0);
-
- if (dev->timer_adptr_sz == NULL) {
- dev->adptr_xae_cnt += timr->nb_timers;
- dev->timer_adptr_sz = old_sz_ptr;
- dev->tim_adptr_ring_cnt--;
- return;
- }
-
- dev->timer_adptr_rings[dev->tim_adptr_ring_cnt - 1] =
- timr->ring_id;
- dev->timer_adptr_sz[dev->tim_adptr_ring_cnt - 1] =
- timr->nb_timers;
-
- dev->adptr_xae_cnt += timr->nb_timers;
- break;
- }
- default:
- break;
- }
-}
-
-static inline void
-sso_updt_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i;
-
- for (i = 0; i < dev->nb_event_ports; i++) {
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *ws = event_dev->data->ports[i];
-
- ws->lookup_mem = lookup_mem;
- } else {
- struct otx2_ssogws *ws = event_dev->data->ports[i];
-
- ws->lookup_mem = lookup_mem;
- }
- }
-}
-
-static inline void
-sso_cfg_nix_mp_bpid(struct otx2_sso_evdev *dev,
- struct otx2_eth_dev *otx2_eth_dev, struct otx2_eth_rxq *rxq,
- uint8_t ena)
-{
- struct otx2_fc_info *fc = &otx2_eth_dev->fc_info;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- struct otx2_npa_lf *lf;
- struct otx2_mbox *mbox;
- uint32_t limit;
- int rc;
-
- if (otx2_dev_is_sdp(otx2_eth_dev))
- return;
-
- lf = otx2_npa_lf_obj_get();
- if (!lf)
- return;
- mbox = lf->mbox;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return;
-
- limit = rsp->aura.limit;
- /* BP is already enabled. */
- if (rsp->aura.bp_ena) {
- /* If BP ids don't match disable BP. */
- if ((rsp->aura.nix0_bpid != fc->bpid[0]) && !dev->force_rx_bp) {
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id =
- npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-
- req->aura.bp_ena = 0;
- req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
-
- otx2_mbox_process(mbox);
- }
- return;
- }
-
-	/* BP was previously enabled but is now disabled; skip. */
- if (rsp->aura.bp)
- return;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (req == NULL)
- return;
-
- req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
-
- if (ena) {
- req->aura.nix0_bpid = fc->bpid[0];
- req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
- req->aura.bp = NIX_RQ_AURA_THRESH(
-			limit > 128 ? 256 : limit); /* 95% of size */
- req->aura_mask.bp = ~(req->aura_mask.bp);
- }
-
- req->aura.bp_ena = !!ena;
- req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
-
- otx2_mbox_process(mbox);
-}
-
-int
-otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id,
- const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint16_t port = eth_dev->data->port_id;
- struct otx2_eth_rxq *rxq;
- int i, rc;
-
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- return -EINVAL;
-
- if (rx_queue_id < 0) {
-		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
- sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
- rc = sso_xae_reconfigure(
- (struct rte_eventdev *)(uintptr_t)event_dev);
- rc |= sso_rxq_enable(otx2_eth_dev, i,
- queue_conf->ev.sched_type,
- queue_conf->ev.queue_id, port);
- }
- rxq = eth_dev->data->rx_queues[0];
- sso_updt_lookup_mem(event_dev, rxq->lookup_mem);
- } else {
- rxq = eth_dev->data->rx_queues[rx_queue_id];
- sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
- rc = sso_xae_reconfigure((struct rte_eventdev *)
- (uintptr_t)event_dev);
- rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id,
- queue_conf->ev.sched_type,
- queue_conf->ev.queue_id, port);
- sso_updt_lookup_mem(event_dev, rxq->lookup_mem);
- }
-
- if (rc < 0) {
- otx2_err("Failed to configure Rx adapter port=%d, q=%d", port,
- queue_conf->ev.queue_id);
- return rc;
- }
-
- dev->rx_offloads |= otx2_eth_dev->rx_offload_flags;
- dev->tstamp = &otx2_eth_dev->tstamp;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
-
- return 0;
-}
-
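For context, this PMD callback is reached through the generic Rx adapter API. A hedged application-side sketch (the adapter id, port number, and event fields are illustrative placeholders, not values taken from this patch):

#include <string.h>

#include <rte_eventdev.h>
#include <rte_event_eth_rx_adapter.h>

static int
rx_adapter_attach_sketch(uint8_t adapter_id, uint16_t eth_port)
{
	struct rte_event_eth_rx_adapter_queue_conf conf;

	memset(&conf, 0, sizeof(conf));
	conf.ev.queue_id = 0;
	conf.ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
	conf.ev.priority = RTE_EVENT_DEV_PRIORITY_NORMAL;

	/* rx_queue_id of -1 adds every Rx queue of the port and lands in
	 * the rx_queue_id < 0 loop of the callback above.
	 */
	return rte_event_eth_rx_adapter_queue_add(adapter_id, eth_port, -1,
						  &conf);
}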
-int
-otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t rx_queue_id)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i, rc;
-
- rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (rc)
- return -EINVAL;
-
- if (rx_queue_id < 0) {
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = sso_rxq_disable(otx2_eth_dev, i);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
- eth_dev->data->rx_queues[i], false);
- }
- } else {
- rc = sso_rxq_disable(otx2_eth_dev, (uint16_t)rx_queue_id);
- sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
- eth_dev->data->rx_queues[rx_queue_id],
- false);
- }
-
- if (rc < 0)
- otx2_err("Failed to clear Rx adapter config port=%d, q=%d",
- eth_dev->data->port_id, rx_queue_id);
-
- return rc;
-}
-
-int
-otx2_sso_rx_adapter_start(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(eth_dev);
-
- return 0;
-}
-
-int
-otx2_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(event_dev);
- RTE_SET_USED(eth_dev);
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
- const struct rte_eth_dev *eth_dev, uint32_t *caps)
-{
- int ret;
-
- RTE_SET_USED(dev);
- ret = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
- if (ret)
- *caps = 0;
- else
- *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT;
-
- return 0;
-}
-
-static int
-sso_sqb_aura_limit_edit(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *aura_req;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
-
- aura_req->aura.limit = nb_sqb_bufs;
- aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
-
- return otx2_mbox_process(npa_lf->mbox);
-}
-
-static int
-sso_add_tx_queue_data(const struct rte_eventdev *event_dev,
- uint16_t eth_port_id, uint16_t tx_queue_id,
- struct otx2_eth_txq *txq)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i;
-
- for (i = 0; i < event_dev->data->nb_ports; i++) {
- dev->max_port_id = RTE_MAX(dev->max_port_id, eth_port_id);
- if (dev->dual_ws) {
- struct otx2_ssogws_dual *old_dws;
- struct otx2_ssogws_dual *dws;
-
- old_dws = event_dev->data->ports[i];
- dws = rte_realloc_socket(ssogws_get_cookie(old_dws),
- sizeof(struct otx2_ssogws_dual)
- + RTE_CACHE_LINE_SIZE +
- (sizeof(uint64_t) *
- (dev->max_port_id + 1) *
- RTE_MAX_QUEUES_PER_PORT),
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (dws == NULL)
- return -ENOMEM;
-
- /* First cache line is reserved for cookie */
- dws = (struct otx2_ssogws_dual *)
- ((uint8_t *)dws + RTE_CACHE_LINE_SIZE);
-
- ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT]
- )&dws->tx_adptr_data)[eth_port_id][tx_queue_id] =
- (uint64_t)txq;
- event_dev->data->ports[i] = dws;
- } else {
- struct otx2_ssogws *old_ws;
- struct otx2_ssogws *ws;
-
- old_ws = event_dev->data->ports[i];
- ws = rte_realloc_socket(ssogws_get_cookie(old_ws),
- sizeof(struct otx2_ssogws) +
- RTE_CACHE_LINE_SIZE +
- (sizeof(uint64_t) *
- (dev->max_port_id + 1) *
- RTE_MAX_QUEUES_PER_PORT),
- RTE_CACHE_LINE_SIZE,
- event_dev->data->socket_id);
- if (ws == NULL)
- return -ENOMEM;
-
- /* First cache line is reserved for cookie */
- ws = (struct otx2_ssogws *)
- ((uint8_t *)ws + RTE_CACHE_LINE_SIZE);
-
- ((uint64_t (*)[RTE_MAX_QUEUES_PER_PORT]
- )&ws->tx_adptr_data)[eth_port_id][tx_queue_id] =
- (uint64_t)txq;
- event_dev->data->ports[i] = ws;
- }
- }
-
- return 0;
-}
-
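The array-of-arrays casts above are dense; read as plain pointer arithmetic, each assignment is equivalent to the sketch below, where tx_adptr_data is treated as a flattened [eth_port][tx_queue] table of uint64_t entries (the helper name is hypothetical):

static inline void
tx_adptr_data_set_sketch(uint64_t *tbl, uint16_t eth_port_id,
			 uint16_t tx_queue_id, void *txq)
{
	/* Row = Ethernet port, column = Tx queue. */
	tbl[(size_t)eth_port_id * RTE_MAX_QUEUES_PER_PORT + tx_queue_id] =
		(uint64_t)txq;
}

It would be invoked as tx_adptr_data_set_sketch((uint64_t *)&ws->tx_adptr_data, eth_port_id, tx_queue_id, txq), matching the cast expressions above.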
-int
-otx2_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id)
-{
- struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_eth_txq *txq;
- int i, ret;
-
- RTE_SET_USED(id);
- if (tx_queue_id < 0) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- sso_sqb_aura_limit_edit(txq->sqb_pool,
- OTX2_SSO_SQB_LIMIT);
- ret = sso_add_tx_queue_data(event_dev,
- eth_dev->data->port_id, i,
- txq);
- if (ret < 0)
- return ret;
- }
- } else {
- txq = eth_dev->data->tx_queues[tx_queue_id];
- sso_sqb_aura_limit_edit(txq->sqb_pool, OTX2_SSO_SQB_LIMIT);
- ret = sso_add_tx_queue_data(event_dev, eth_dev->data->port_id,
- tx_queue_id, txq);
- if (ret < 0)
- return ret;
- }
-
- dev->tx_offloads |= otx2_eth_dev->tx_offload_flags;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
-
- return 0;
-}
-
-int
-otx2_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
- const struct rte_eth_dev *eth_dev,
- int32_t tx_queue_id)
-{
- struct otx2_eth_txq *txq;
- int i;
-
- RTE_SET_USED(id);
- RTE_SET_USED(eth_dev);
- RTE_SET_USED(event_dev);
- if (tx_queue_id < 0) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- txq = eth_dev->data->tx_queues[i];
- sso_sqb_aura_limit_edit(txq->sqb_pool,
- txq->nb_sqb_bufs);
- }
- } else {
- txq = eth_dev->data->tx_queues[tx_queue_id];
- sso_sqb_aura_limit_edit(txq->sqb_pool, txq->nb_sqb_bufs);
- }
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
deleted file mode 100644
index d59d6c53f6..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c
+++ /dev/null
@@ -1,132 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020-2021 Marvell.
- */
-
-#include <cryptodev_pmd.h>
-#include <rte_eventdev.h>
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_qp.h"
-#include "otx2_cryptodev_mbox.h"
-#include "otx2_evdev.h"
-
-int
-otx2_ca_caps_get(const struct rte_eventdev *dev,
- const struct rte_cryptodev *cdev, uint32_t *caps)
-{
- RTE_SET_USED(dev);
- RTE_SET_USED(cdev);
-
- *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND |
- RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW |
- RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD;
-
- return 0;
-}
-
-static int
-otx2_ca_qp_sso_link(const struct rte_cryptodev *cdev, struct otx2_cpt_qp *qp,
- uint16_t sso_pf_func)
-{
- union otx2_cpt_af_lf_ctl2 af_lf_ctl2;
- int ret;
-
- ret = otx2_cpt_af_reg_read(cdev, OTX2_CPT_AF_LF_CTL2(qp->id),
- qp->blkaddr, &af_lf_ctl2.u);
- if (ret)
- return ret;
-
- af_lf_ctl2.s.sso_pf_func = sso_pf_func;
- ret = otx2_cpt_af_reg_write(cdev, OTX2_CPT_AF_LF_CTL2(qp->id),
- qp->blkaddr, af_lf_ctl2.u);
- return ret;
-}
-
-static void
-otx2_ca_qp_init(struct otx2_cpt_qp *qp, const struct rte_event *event)
-{
- if (event) {
- qp->qp_ev_bind = 1;
- rte_memcpy(&qp->ev, event, sizeof(struct rte_event));
- } else {
- qp->qp_ev_bind = 0;
- }
- qp->ca_enable = 1;
-}
-
-int
-otx2_ca_qp_add(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev,
- int32_t queue_pair_id, const struct rte_event *event)
-{
- struct otx2_sso_evdev *sso_evdev = sso_pmd_priv(dev);
- struct otx2_cpt_vf *vf = cdev->data->dev_private;
- uint16_t sso_pf_func = otx2_sso_pf_func_get();
- struct otx2_cpt_qp *qp;
- uint8_t qp_id;
- int ret;
-
- if (queue_pair_id == -1) {
- for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) {
- qp = cdev->data->queue_pairs[qp_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func);
- if (ret) {
- uint8_t qp_tmp;
- for (qp_tmp = 0; qp_tmp < qp_id; qp_tmp++)
- otx2_ca_qp_del(dev, cdev, qp_tmp);
- return ret;
- }
- otx2_ca_qp_init(qp, event);
- }
- } else {
- qp = cdev->data->queue_pairs[queue_pair_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, sso_pf_func);
- if (ret)
- return ret;
- otx2_ca_qp_init(qp, event);
- }
-
- sso_evdev->rx_offloads |= NIX_RX_OFFLOAD_SECURITY_F;
- sso_fastpath_fns_set((struct rte_eventdev *)(uintptr_t)dev);
-
- /* Update crypto adapter xae count */
- if (queue_pair_id == -1)
- sso_evdev->adptr_xae_cnt +=
- vf->nb_queues * OTX2_CPT_DEFAULT_CMD_QLEN;
- else
- sso_evdev->adptr_xae_cnt += OTX2_CPT_DEFAULT_CMD_QLEN;
- sso_xae_reconfigure((struct rte_eventdev *)(uintptr_t)dev);
-
- return 0;
-}
-
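Applications reach this callback through the crypto adapter API; a hedged sketch against the 22.03-era adapter signature (the adapter and device ids are placeholders):

#include <rte_event_crypto_adapter.h>

static int
ca_attach_all_qps_sketch(uint8_t adapter_id, uint8_t cdev_id)
{
	/* queue_pair_id of -1 binds every queue pair, matching the
	 * queue_pair_id == -1 loop above; a NULL event leaves
	 * qp_ev_bind unset, as in otx2_ca_qp_init().
	 */
	return rte_event_crypto_adapter_queue_pair_add(adapter_id, cdev_id,
						       -1, NULL);
}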
-int
-otx2_ca_qp_del(const struct rte_eventdev *dev, const struct rte_cryptodev *cdev,
- int32_t queue_pair_id)
-{
- struct otx2_cpt_vf *vf = cdev->data->dev_private;
- struct otx2_cpt_qp *qp;
- uint8_t qp_id;
- int ret;
-
- RTE_SET_USED(dev);
-
- ret = 0;
- if (queue_pair_id == -1) {
- for (qp_id = 0; qp_id < vf->nb_queues; qp_id++) {
- qp = cdev->data->queue_pairs[qp_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, 0);
- if (ret)
- return ret;
- qp->ca_enable = 0;
- }
- } else {
- qp = cdev->data->queue_pairs[queue_pair_id];
- ret = otx2_ca_qp_sso_link(cdev, qp, 0);
- if (ret)
- return ret;
- qp->ca_enable = 0;
- }
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
deleted file mode 100644
index b33cb7e139..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h
+++ /dev/null
@@ -1,77 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_
-
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_eventdev.h>
-
-#include "cpt_pmd_logs.h"
-#include "cpt_ucode.h"
-
-#include "otx2_cryptodev.h"
-#include "otx2_cryptodev_hw_access.h"
-#include "otx2_cryptodev_ops_helper.h"
-#include "otx2_cryptodev_qp.h"
-
-static inline void
-otx2_ca_deq_post_process(const struct otx2_cpt_qp *qp,
- struct rte_crypto_op *cop, uintptr_t *rsp,
- uint8_t cc)
-{
- if (cop->type == RTE_CRYPTO_OP_TYPE_SYMMETRIC) {
- if (likely(cc == NO_ERR)) {
- /* Verify authentication data if required */
- if (unlikely(rsp[2]))
- compl_auth_verify(cop, (uint8_t *)rsp[2],
- rsp[3]);
- else
- cop->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
- } else {
- if (cc == ERR_GC_ICV_MISCOMPARE)
- cop->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
- else
- cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
- }
-
- if (unlikely(cop->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) {
- sym_session_clear(otx2_cryptodev_driver_id,
- cop->sym->session);
- memset(cop->sym->session, 0,
- rte_cryptodev_sym_get_existing_header_session_size(
- cop->sym->session));
- rte_mempool_put(qp->sess_mp, cop->sym->session);
- cop->sym->session = NULL;
- }
- }
-
-}
-
-static inline uint64_t
-otx2_handle_crypto_event(uint64_t get_work1)
-{
- struct cpt_request_info *req;
- const struct otx2_cpt_qp *qp;
- struct rte_crypto_op *cop;
- uintptr_t *rsp;
- void *metabuf;
- uint8_t cc;
-
- req = (struct cpt_request_info *)(get_work1);
- cc = otx2_cpt_compcode_get(req);
- qp = req->qp;
-
- rsp = req->op;
- metabuf = (void *)rsp[0];
- cop = (void *)rsp[1];
-
- otx2_ca_deq_post_process(qp, cop, rsp, cc);
-
- rte_mempool_put(qp->meta_info.pool, metabuf);
-
- return (uint64_t)(cop);
-}
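For readability, the layout of the rsp words dereferenced above (as recovered from this code, not from a hardware reference):

/*
 * rsp[0] - metabuf returned to qp->meta_info.pool
 * rsp[1] - the completed rte_crypto_op
 * rsp[2] - pointer to authentication data to verify, 0 if none
 * rsp[3] - length of that authentication data
 */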
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */
diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
deleted file mode 100644
index 1fc56f903b..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h
+++ /dev/null
@@ -1,83 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2021 Marvell International Ltd.
- */
-
-#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
-#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_
-
-#include <rte_cryptodev.h>
-#include <cryptodev_pmd.h>
-#include <rte_event_crypto_adapter.h>
-#include <rte_eventdev.h>
-
-#include <otx2_cryptodev_qp.h>
-#include <otx2_worker.h>
-
-static inline uint16_t
-otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev)
-{
- union rte_event_crypto_metadata *m_data;
- struct rte_crypto_op *crypto_op;
- struct rte_cryptodev *cdev;
- struct otx2_cpt_qp *qp;
- uint8_t cdev_id;
- uint16_t qp_id;
-
- crypto_op = ev->event_ptr;
- if (crypto_op == NULL)
- return 0;
-
- if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
- m_data = rte_cryptodev_sym_session_get_user_data(
- crypto_op->sym->session);
- if (m_data == NULL)
- goto free_op;
-
- cdev_id = m_data->request_info.cdev_id;
- qp_id = m_data->request_info.queue_pair_id;
- } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
- crypto_op->private_data_offset) {
- m_data = (union rte_event_crypto_metadata *)
- ((uint8_t *)crypto_op +
- crypto_op->private_data_offset);
- cdev_id = m_data->request_info.cdev_id;
- qp_id = m_data->request_info.queue_pair_id;
- } else {
- goto free_op;
- }
-
- cdev = &rte_cryptodevs[cdev_id];
- qp = cdev->data->queue_pairs[qp_id];
-
- if (!ev->sched_type)
- otx2_ssogws_head_wait(tag_op);
- if (qp->ca_enable)
- return cdev->enqueue_burst(qp, &crypto_op, 1);
-
-free_op:
- rte_pktmbuf_free(crypto_op->sym->m_src);
- rte_crypto_op_free(crypto_op);
- rte_errno = EINVAL;
- return 0;
-}
-
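The metadata consumed above is attached by the application beforehand; for the session-based path, a hedged sketch of how that is typically done (the helper name and id values are placeholders):

#include <string.h>

#include <rte_cryptodev.h>
#include <rte_event_crypto_adapter.h>

static int
attach_ca_metadata_sketch(struct rte_cryptodev_sym_session *sess,
			  uint8_t cdev_id, uint16_t qp_id)
{
	union rte_event_crypto_metadata m_data;

	memset(&m_data, 0, sizeof(m_data));
	m_data.request_info.cdev_id = cdev_id;
	m_data.request_info.queue_pair_id = qp_id;

	/* otx2_ca_enq() above reads this back with
	 * rte_cryptodev_sym_session_get_user_data().
	 */
	return rte_cryptodev_sym_session_set_user_data(sess, &m_data,
						       sizeof(m_data));
}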
-static uint16_t __rte_hot
-otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
-
- RTE_SET_USED(nb_events);
-
- return otx2_ca_enq(ws->tag_op, ev);
-}
-
-static uint16_t __rte_hot
-otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
-
- RTE_SET_USED(nb_events);
-
- return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev);
-}
-#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */
diff --git a/drivers/event/octeontx2/otx2_evdev_irq.c b/drivers/event/octeontx2/otx2_evdev_irq.c
deleted file mode 100644
index 9b7ad27b04..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_irq.c
+++ /dev/null
@@ -1,272 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_evdev.h"
-#include "otx2_tim_evdev.h"
-
-static void
-sso_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint64_t intr;
- uint8_t ggrp;
-
- ggrp = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + SSO_LF_GGRP_INT);
- if (intr == 0)
- return;
-
- otx2_err("GGRP %d GGRP_INT=0x%" PRIx64 "", ggrp, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + SSO_LF_GGRP_INT);
-}
-
-static int
-sso_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t ggrp_msixoff,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, sso_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-ssow_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint8_t gws = (base >> 12) & 0xFF;
- uint64_t intr;
-
- intr = otx2_read64(base + SSOW_LF_GWS_INT);
- if (intr == 0)
- return;
-
- otx2_err("GWS %d GWS_INT=0x%" PRIx64 "", gws, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + SSOW_LF_GWS_INT);
-}
-
-static int
-ssow_lf_register_irq(const struct rte_eventdev *event_dev, uint16_t gws_msixoff,
- uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, ssow_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-sso_lf_unregister_irq(const struct rte_eventdev *event_dev,
- uint16_t ggrp_msixoff, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = ggrp_msixoff + SSO_LF_INT_VEC_GRP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSO_LF_GGRP_INT_ENA_W1C);
- otx2_unregister_irq(handle, sso_lf_irq, (void *)base, vec);
-}
-
-static void
-ssow_lf_unregister_irq(const struct rte_eventdev *event_dev,
- uint16_t gws_msixoff, uintptr_t base)
-{
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(event_dev->dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = gws_msixoff + SSOW_LF_INT_VEC_IOP;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + SSOW_LF_GWS_INT_ENA_W1C);
- otx2_unregister_irq(handle, ssow_lf_irq, (void *)base, vec);
-}
-
-int
-sso_register_irqs(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- int i, rc = -EINVAL;
- uint8_t nb_ports;
-
- nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- if (dev->sso_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid SSOLF MSIX offset[%d] vector: 0x%x",
- i, dev->sso_msixoff[i]);
- goto fail;
- }
- }
-
- for (i = 0; i < nb_ports; i++) {
- if (dev->ssow_msixoff[i] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid SSOWLF MSIX offset[%d] vector: 0x%x",
- i, dev->ssow_msixoff[i]);
- goto fail;
- }
- }
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
- i << 12);
- rc = sso_lf_register_irq(event_dev, dev->sso_msixoff[i], base);
- }
-
- for (i = 0; i < nb_ports; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
- i << 12);
- rc = ssow_lf_register_irq(event_dev, dev->ssow_msixoff[i],
- base);
- }
-
-fail:
- return rc;
-}
-
-void
-sso_unregister_irqs(const struct rte_eventdev *event_dev)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint8_t nb_ports;
- int i;
-
- nb_ports = dev->nb_event_ports * (dev->dual_ws ? 2 : 1);
-
- for (i = 0; i < dev->nb_event_queues; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSO << 20 |
- i << 12);
- sso_lf_unregister_irq(event_dev, dev->sso_msixoff[i], base);
- }
-
- for (i = 0; i < nb_ports; i++) {
- uintptr_t base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 |
- i << 12);
- ssow_lf_unregister_irq(event_dev, dev->ssow_msixoff[i], base);
- }
-}
-
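Both the register and unregister loops compose the LF register base the same way; the layout implied by the code (RVU block select from bit 20 upward, LF slot in bits 19:12) is also what lets the IRQ handlers recover the group/GWS index with (base >> 12) & 0xFF. A sketch of that composition (layout as implied by this code, not quoted from a hardware manual):

static inline uintptr_t
rvu_lf_base_sketch(uintptr_t bar2, uint64_t block_addr, uint64_t lf)
{
	/* Block select in bits [20+], LF slot in bits [19:12]. */
	return bar2 + (block_addr << 20 | lf << 12);
}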
-static void
-tim_lf_irq(void *param)
-{
- uintptr_t base = (uintptr_t)param;
- uint64_t intr;
- uint8_t ring;
-
- ring = (base >> 12) & 0xFF;
-
- intr = otx2_read64(base + TIM_LF_NRSPERR_INT);
- otx2_err("TIM RING %d TIM_LF_NRSPERR_INT=0x%" PRIx64 "", ring, intr);
- intr = otx2_read64(base + TIM_LF_RAS_INT);
- otx2_err("TIM RING %d TIM_LF_RAS_INT=0x%" PRIx64 "", ring, intr);
-
- /* Clear interrupt */
- otx2_write64(intr, base + TIM_LF_NRSPERR_INT);
- otx2_write64(intr, base + TIM_LF_RAS_INT);
-}
-
-static int
-tim_lf_register_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
- uintptr_t base)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int rc, vec;
-
- vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1S);
-
- vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, tim_lf_irq, (void *)base, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-tim_lf_unregister_irq(struct rte_pci_device *pci_dev, uint16_t tim_msixoff,
- uintptr_t base)
-{
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- int vec;
-
- vec = tim_msixoff + TIM_LF_INT_VEC_NRSPERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_NRSPERR_INT_ENA_W1C);
- otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
-
- vec = tim_msixoff + TIM_LF_INT_VEC_RAS_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, base + TIM_LF_RAS_INT_ENA_W1C);
- otx2_unregister_irq(handle, tim_lf_irq, (void *)base, vec);
-}
-
-int
-tim_register_irq(uint16_t ring_id)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- int rc = -EINVAL;
- uintptr_t base;
-
- if (dev->tim_msixoff[ring_id] == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid TIMLF MSIX offset[%d] vector: 0x%x",
- ring_id, dev->tim_msixoff[ring_id]);
- goto fail;
- }
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
- rc = tim_lf_register_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
-fail:
- return rc;
-}
-
-void
-tim_unregister_irq(uint16_t ring_id)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- uintptr_t base;
-
- base = dev->bar2 + (RVU_BLOCK_ADDR_TIM << 20 | ring_id << 12);
- tim_lf_unregister_irq(dev->pci_dev, dev->tim_msixoff[ring_id], base);
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_selftest.c b/drivers/event/octeontx2/otx2_evdev_selftest.c
deleted file mode 100644
index 48bfaf893d..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_selftest.c
+++ /dev/null
@@ -1,1517 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_common.h>
-#include <rte_cycles.h>
-#include <rte_debug.h>
-#include <rte_eal.h>
-#include <rte_ethdev.h>
-#include <rte_eventdev.h>
-#include <rte_hexdump.h>
-#include <rte_launch.h>
-#include <rte_lcore.h>
-#include <rte_mbuf.h>
-#include <rte_malloc.h>
-#include <rte_memcpy.h>
-#include <rte_per_lcore.h>
-#include <rte_random.h>
-#include <rte_test.h>
-
-#include "otx2_evdev.h"
-
-#define NUM_PACKETS (1024)
-#define MAX_EVENTS (1024)
-
-#define OCTEONTX2_TEST_RUN(setup, teardown, test) \
- octeontx_test_run(setup, teardown, test, #test)
-
-static int total;
-static int passed;
-static int failed;
-static int unsupported;
-
-static int evdev;
-static struct rte_mempool *eventdev_test_mempool;
-
-struct event_attr {
- uint32_t flow_id;
- uint8_t event_type;
- uint8_t sub_event_type;
- uint8_t sched_type;
- uint8_t queue;
- uint8_t port;
-};
-
-static uint32_t seqn_list_index;
-static int seqn_list[NUM_PACKETS];
-
-static inline void
-seqn_list_init(void)
-{
- RTE_BUILD_BUG_ON(NUM_PACKETS < MAX_EVENTS);
- memset(seqn_list, 0, sizeof(seqn_list));
- seqn_list_index = 0;
-}
-
-static inline int
-seqn_list_update(int val)
-{
- if (seqn_list_index >= NUM_PACKETS)
- return -1;
-
- seqn_list[seqn_list_index++] = val;
- rte_smp_wmb();
- return 0;
-}
-
-static inline int
-seqn_list_check(int limit)
-{
- int i;
-
- for (i = 0; i < limit; i++) {
- if (seqn_list[i] != i) {
- otx2_err("Seqn mismatch %d %d", seqn_list[i], i);
- return -1;
- }
- }
- return 0;
-}
-
-struct test_core_param {
- rte_atomic32_t *total_events;
- uint64_t dequeue_tmo_ticks;
- uint8_t port;
- uint8_t sched_type;
-};
-
-static int
-testsuite_setup(void)
-{
- const char *eventdev_name = "event_octeontx2";
-
- evdev = rte_event_dev_get_dev_id(eventdev_name);
- if (evdev < 0) {
- otx2_err("%d: Eventdev %s not found", __LINE__, eventdev_name);
- return -1;
- }
- return 0;
-}
-
-static void
-testsuite_teardown(void)
-{
- rte_event_dev_close(evdev);
-}
-
-static inline void
-devconf_set_default_sane_values(struct rte_event_dev_config *dev_conf,
- struct rte_event_dev_info *info)
-{
- memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
- dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
- dev_conf->nb_event_ports = info->max_event_ports;
- dev_conf->nb_event_queues = info->max_event_queues;
- dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
- dev_conf->nb_event_port_dequeue_depth =
- info->max_event_port_dequeue_depth;
- dev_conf->nb_event_port_enqueue_depth =
- info->max_event_port_enqueue_depth;
- dev_conf->nb_events_limit =
- info->max_num_events;
-}
-
-enum {
- TEST_EVENTDEV_SETUP_DEFAULT,
- TEST_EVENTDEV_SETUP_PRIORITY,
- TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT,
-};
-
-static inline int
-_eventdev_setup(int mode)
-{
- const char *pool_name = "evdev_octeontx_test_pool";
- struct rte_event_dev_config dev_conf;
- struct rte_event_dev_info info;
- int i, ret;
-
- /* Create and destroy the pool for each test case to make it standalone */
- eventdev_test_mempool = rte_pktmbuf_pool_create(pool_name, MAX_EVENTS,
- 0, 0, 512,
- rte_socket_id());
- if (!eventdev_test_mempool) {
- otx2_err("ERROR creating mempool");
- return -1;
- }
-
- ret = rte_event_dev_info_get(evdev, &info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
-
- devconf_set_default_sane_values(&dev_conf, &info);
- if (mode == TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT)
- dev_conf.event_dev_cfg |= RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT;
-
- ret = rte_event_dev_configure(evdev, &dev_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev");
-
- uint32_t queue_count;
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- if (mode == TEST_EVENTDEV_SETUP_PRIORITY) {
- if (queue_count > 8)
- queue_count = 8;
-
- /* Configure event queues (0 to n) with
- * RTE_EVENT_DEV_PRIORITY_HIGHEST to
- * RTE_EVENT_DEV_PRIORITY_LOWEST
- */
- uint8_t step = (RTE_EVENT_DEV_PRIORITY_LOWEST + 1) /
- queue_count;
- for (i = 0; i < (int)queue_count; i++) {
- struct rte_event_queue_conf queue_conf;
-
- ret = rte_event_queue_default_conf_get(evdev, i,
- &queue_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get def_conf%d",
- i);
- queue_conf.priority = i * step;
- ret = rte_event_queue_setup(evdev, i, &queue_conf);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
- i);
- }
-
- } else {
- /* Configure event queues with default priority */
- for (i = 0; i < (int)queue_count; i++) {
- ret = rte_event_queue_setup(evdev, i, NULL);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d",
- i);
- }
- }
- /* Configure event ports */
- uint32_t port_count;
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
- "Port count get failed");
- for (i = 0; i < (int)port_count; i++) {
- ret = rte_event_port_setup(evdev, i, NULL);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d", i);
- ret = rte_event_port_link(evdev, i, NULL, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0, "Failed to link all queues port=%d",
- i);
- }
-
- ret = rte_event_dev_start(evdev);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to start device");
-
- return 0;
-}
-
-static inline int
-eventdev_setup(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_DEFAULT);
-}
-
-static inline int
-eventdev_setup_priority(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_PRIORITY);
-}
-
-static inline int
-eventdev_setup_dequeue_timeout(void)
-{
- return _eventdev_setup(TEST_EVENTDEV_SETUP_DEQUEUE_TIMEOUT);
-}
-
-static inline void
-eventdev_teardown(void)
-{
- rte_event_dev_stop(evdev);
- rte_mempool_free(eventdev_test_mempool);
-}
-
-static inline void
-update_event_and_validation_attr(struct rte_mbuf *m, struct rte_event *ev,
- uint32_t flow_id, uint8_t event_type,
- uint8_t sub_event_type, uint8_t sched_type,
- uint8_t queue, uint8_t port)
-{
- struct event_attr *attr;
-
- /* Store the event attributes in mbuf for future reference */
- attr = rte_pktmbuf_mtod(m, struct event_attr *);
- attr->flow_id = flow_id;
- attr->event_type = event_type;
- attr->sub_event_type = sub_event_type;
- attr->sched_type = sched_type;
- attr->queue = queue;
- attr->port = port;
-
- ev->flow_id = flow_id;
- ev->sub_event_type = sub_event_type;
- ev->event_type = event_type;
- /* Inject the new event */
- ev->op = RTE_EVENT_OP_NEW;
- ev->sched_type = sched_type;
- ev->queue_id = queue;
- ev->mbuf = m;
-}
-
-static inline int
-inject_events(uint32_t flow_id, uint8_t event_type, uint8_t sub_event_type,
- uint8_t sched_type, uint8_t queue, uint8_t port,
- unsigned int events)
-{
- struct rte_mbuf *m;
- unsigned int i;
-
- for (i = 0; i < events; i++) {
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
- *rte_event_pmd_selftest_seqn(m) = i;
- update_event_and_validation_attr(m, &ev, flow_id, event_type,
- sub_event_type, sched_type,
- queue, port);
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- return 0;
-}
-
-static inline int
-check_excess_events(uint8_t port)
-{
- uint16_t valid_event;
- struct rte_event ev;
- int i;
-
- /* Check for excess events; try a few times and exit */
- for (i = 0; i < 32; i++) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
-
- RTE_TEST_ASSERT_SUCCESS(valid_event,
- "Unexpected valid event=%d",
- *rte_event_pmd_selftest_seqn(ev.mbuf));
- }
- return 0;
-}
-
-static inline int
-generate_random_events(const unsigned int total_events)
-{
- struct rte_event_dev_info info;
- uint32_t queue_count;
- unsigned int i;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- ret = rte_event_dev_info_get(evdev, &info);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info");
- for (i = 0; i < total_events; i++) {
- ret = inject_events(
- rte_rand() % info.max_event_queue_flows /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- rte_rand() % queue_count /* queue */,
- 0 /* port */,
- 1 /* events */);
- if (ret)
- return -1;
- }
- return ret;
-}
-
-static inline int
-validate_event(struct rte_event *ev)
-{
- struct event_attr *attr;
-
- attr = rte_pktmbuf_mtod(ev->mbuf, struct event_attr *);
- RTE_TEST_ASSERT_EQUAL(attr->flow_id, ev->flow_id,
- "flow_id mismatch enq=%d deq =%d",
- attr->flow_id, ev->flow_id);
- RTE_TEST_ASSERT_EQUAL(attr->event_type, ev->event_type,
- "event_type mismatch enq=%d deq =%d",
- attr->event_type, ev->event_type);
- RTE_TEST_ASSERT_EQUAL(attr->sub_event_type, ev->sub_event_type,
- "sub_event_type mismatch enq=%d deq =%d",
- attr->sub_event_type, ev->sub_event_type);
- RTE_TEST_ASSERT_EQUAL(attr->sched_type, ev->sched_type,
- "sched_type mismatch enq=%d deq =%d",
- attr->sched_type, ev->sched_type);
- RTE_TEST_ASSERT_EQUAL(attr->queue, ev->queue_id,
- "queue mismatch enq=%d deq =%d",
- attr->queue, ev->queue_id);
- return 0;
-}
-
-typedef int (*validate_event_cb)(uint32_t index, uint8_t port,
- struct rte_event *ev);
-
-static inline int
-consume_events(uint8_t port, const uint32_t total_events, validate_event_cb fn)
-{
- uint32_t events = 0, forward_progress_cnt = 0, index = 0;
- uint16_t valid_event;
- struct rte_event ev;
- int ret;
-
- while (1) {
- if (++forward_progress_cnt > UINT16_MAX) {
- otx2_err("Detected deadlock");
- return -1;
- }
-
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- forward_progress_cnt = 0;
- ret = validate_event(&ev);
- if (ret)
- return -1;
-
- if (fn != NULL) {
- ret = fn(index, port, &ev);
- RTE_TEST_ASSERT_SUCCESS(ret,
- "Failed to validate test specific event");
- }
-
- ++index;
-
- rte_pktmbuf_free(ev.mbuf);
- if (++events >= total_events)
- break;
- }
-
- return check_excess_events(port);
-}
-
-static int
-validate_simple_enqdeq(uint32_t index, uint8_t port, struct rte_event *ev)
-{
- RTE_SET_USED(port);
- RTE_TEST_ASSERT_EQUAL(index, *rte_event_pmd_selftest_seqn(ev->mbuf),
- "index=%d != seqn=%d",
- index, *rte_event_pmd_selftest_seqn(ev->mbuf));
- return 0;
-}
-
-static inline int
-test_simple_enqdeq(uint8_t sched_type)
-{
- int ret;
-
- ret = inject_events(0 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type */,
- sched_type,
- 0 /* queue */,
- 0 /* port */,
- MAX_EVENTS);
- if (ret)
- return -1;
-
- return consume_events(0 /* port */, MAX_EVENTS, validate_simple_enqdeq);
-}
-
-static int
-test_simple_enqdeq_ordered(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_simple_enqdeq_atomic(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_simple_enqdeq_parallel(void)
-{
- return test_simple_enqdeq(RTE_SCHED_TYPE_PARALLEL);
-}
-
-/*
- * Generate a prescribed number of events and spread them across the available
- * queues. On dequeue through a single event port (port 0), verify the
- * enqueued event attributes.
- */
-static int
-test_multi_queue_enq_single_port_deq(void)
-{
- int ret;
-
- ret = generate_random_events(MAX_EVENTS);
- if (ret)
- return -1;
-
- return consume_events(0 /* port */, MAX_EVENTS, NULL);
-}
-
-/*
- * Inject 0..MAX_EVENTS events over queues 0..queue_count-1, assigning
- * each event to a queue by modulus.
- *
- * For example, inject 32 events over queues 0..7:
- * enqueue events 0, 8, 16, 24 in queue 0
- * enqueue events 1, 9, 17, 25 in queue 1
- * ..
- * enqueue events 7, 15, 23, 31 in queue 7
- *
- * On dequeue, validate that the events arrive in
- * 0,8,16,24,1,9,17,25,...,7,15,23,31 order, from queue 0 (highest
- * priority) to queue 7 (lowest priority).
- */
-static int
-validate_queue_priority(uint32_t index, uint8_t port, struct rte_event *ev)
-{
- uint32_t queue_count;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count > 8)
- queue_count = 8;
- uint32_t range = MAX_EVENTS / queue_count;
- uint32_t expected_val = (index % range) * queue_count;
-
- expected_val += ev->queue_id;
- RTE_SET_USED(port);
- RTE_TEST_ASSERT_EQUAL(
- *rte_event_pmd_selftest_seqn(ev->mbuf), expected_val,
- "seqn=%d index=%d expected=%d range=%d nb_queues=%d max_event=%d",
- *rte_event_pmd_selftest_seqn(ev->mbuf), index, expected_val,
- range, queue_count, MAX_EVENTS);
- return 0;
-}
-
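Concretely, with the 32-events-over-8-queues example from the comment above (range = 32 / 8 = 4):

/*
 * The event dequeued at index 5 comes from queue 1, so its expected
 * sequence number is (5 % 4) * 8 + 1 = 9, consistent with the
 * 0,8,16,24,1,9,... dequeue order.
 */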
-static int
-test_multi_queue_priority(void)
-{
- int i, max_evts_roundoff;
- /* See validate_queue_priority() comments for priority validate logic */
- uint32_t queue_count;
- struct rte_mbuf *m;
- uint8_t queue;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count > 8)
- queue_count = 8;
- max_evts_roundoff = MAX_EVENTS / queue_count;
- max_evts_roundoff *= queue_count;
-
- for (i = 0; i < max_evts_roundoff; i++) {
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- RTE_TEST_ASSERT_NOT_NULL(m, "mempool alloc failed");
-
- *rte_event_pmd_selftest_seqn(m) = i;
- queue = i % queue_count;
- update_event_and_validation_attr(m, &ev, 0, RTE_EVENT_TYPE_CPU,
- 0, RTE_SCHED_TYPE_PARALLEL,
- queue, 0);
- rte_event_enqueue_burst(evdev, 0, &ev, 1);
- }
-
- return consume_events(0, max_evts_roundoff, validate_queue_priority);
-}
-
-static int
-worker_multi_port_fn(void *arg)
-{
- struct test_core_param *param = arg;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
- int ret;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- ret = validate_event(&ev);
- RTE_TEST_ASSERT_SUCCESS(ret, "Failed to validate event");
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- }
-
- return 0;
-}
-
-static inline int
-wait_workers_to_join(const rte_atomic32_t *count)
-{
- uint64_t cycles, print_cycles;
-
- cycles = rte_get_timer_cycles();
- print_cycles = cycles;
- while (rte_atomic32_read(count)) {
- uint64_t new_cycles = rte_get_timer_cycles();
-
- if (new_cycles - print_cycles > rte_get_timer_hz()) {
- otx2_err("Events %d", rte_atomic32_read(count));
- print_cycles = new_cycles;
- }
- if (new_cycles - cycles > rte_get_timer_hz() * 10) {
- otx2_err("No schedules for 10 seconds, deadlock (%d)",
- rte_atomic32_read(count));
- rte_event_dev_dump(evdev, stdout);
- cycles = new_cycles;
- return -1;
- }
- }
- rte_eal_mp_wait_lcore();
-
- return 0;
-}
-
-static inline int
-launch_workers_and_wait(int (*main_thread)(void *),
- int (*worker_thread)(void *), uint32_t total_events,
- uint8_t nb_workers, uint8_t sched_type)
-{
- rte_atomic32_t atomic_total_events;
- struct test_core_param *param;
- uint64_t dequeue_tmo_ticks;
- uint8_t port = 0;
- int w_lcore;
- int ret;
-
- if (!nb_workers)
- return 0;
-
- rte_atomic32_set(&atomic_total_events, total_events);
- seqn_list_init();
-
- param = malloc(sizeof(struct test_core_param) * nb_workers);
- if (!param)
- return -1;
-
- ret = rte_event_dequeue_timeout_ticks(evdev,
- rte_rand() % 10000000 /* 10 ms */,
- &dequeue_tmo_ticks);
- if (ret) {
- free(param);
- return -1;
- }
-
- param[0].total_events = &atomic_total_events;
- param[0].sched_type = sched_type;
- param[0].port = 0;
- param[0].dequeue_tmo_ticks = dequeue_tmo_ticks;
- rte_wmb();
-
- w_lcore = rte_get_next_lcore(
- /* start core */ -1,
- /* skip main */ 1,
- /* wrap */ 0);
- rte_eal_remote_launch(main_thread, &param[0], w_lcore);
-
- for (port = 1; port < nb_workers; port++) {
- param[port].total_events = &atomic_total_events;
- param[port].sched_type = sched_type;
- param[port].port = port;
- param[port].dequeue_tmo_ticks = dequeue_tmo_ticks;
- rte_smp_wmb();
- w_lcore = rte_get_next_lcore(w_lcore, 1, 0);
- rte_eal_remote_launch(worker_thread, &param[port], w_lcore);
- }
-
- rte_smp_wmb();
- ret = wait_workers_to_join(&atomic_total_events);
- free(param);
-
- return ret;
-}
-
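Stripped of the test bookkeeping, the launch logic above is the standard EAL worker pattern; a minimal self-contained sketch (worker_fn and arg are placeholders, and error handling is elided):

#include <rte_launch.h>
#include <rte_lcore.h>

static void
launch_on_all_workers_sketch(int (*worker_fn)(void *), void *arg)
{
	unsigned int lcore;

	/* Skip the main lcore; run worker_fn on every worker lcore. */
	RTE_LCORE_FOREACH_WORKER(lcore)
		rte_eal_remote_launch(worker_fn, arg, lcore);

	/* Block until every launched lcore returns. */
	rte_eal_mp_wait_lcore();
}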
-/*
- * Generate a prescribed number of events and spread them across available
- * queues. Dequeue the events through multiple ports and verify the enqueued
- * event attributes
- */
-static int
-test_multi_queue_enq_multi_port_deq(void)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t nr_ports;
- int ret;
-
- ret = generate_random_events(total_events);
- if (ret)
- return -1;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d", nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- return launch_workers_and_wait(worker_multi_port_fn,
- worker_multi_port_fn, total_events,
- nr_ports, 0xff /* invalid */);
-}
-
-static void
-flush(uint8_t dev_id, struct rte_event event, void *arg)
-{
- unsigned int *count = arg;
-
- RTE_SET_USED(dev_id);
- if (event.event_type == RTE_EVENT_TYPE_CPU)
- *count = *count + 1;
-}
-
-static int
-test_dev_stop_flush(void)
-{
- unsigned int total_events = MAX_EVENTS, count = 0;
- int ret;
-
- ret = generate_random_events(total_events);
- if (ret)
- return -1;
-
- ret = rte_event_dev_stop_flush_callback_register(evdev, flush, &count);
- if (ret)
- return -2;
- rte_event_dev_stop(evdev);
- ret = rte_event_dev_stop_flush_callback_register(evdev, NULL, NULL);
- if (ret)
- return -3;
- RTE_TEST_ASSERT_EQUAL(total_events, count,
- "count mismatch total_events=%d count=%d",
- total_events, count);
-
- return 0;
-}
-
-static int
-validate_queue_to_port_single_link(uint32_t index, uint8_t port,
- struct rte_event *ev)
-{
- RTE_SET_USED(index);
- RTE_TEST_ASSERT_EQUAL(port, ev->queue_id,
- "queue mismatch enq=%d deq =%d",
- port, ev->queue_id);
-
- return 0;
-}
-
-/*
- * Link queue x to port x and check correctness of link by checking
- * queue_id == x on dequeue on the specific port x
- */
-static int
-test_queue_to_port_single_link(void)
-{
- int i, nr_links, ret;
- uint32_t queue_count;
- uint32_t port_count;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &port_count),
- "Port count get failed");
-
- /* Unlink all connections that were created in eventdev_setup */
- for (i = 0; i < (int)port_count; i++) {
- ret = rte_event_port_unlink(evdev, i, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0,
- "Failed to unlink all queues port=%d", i);
- }
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
-
- nr_links = RTE_MIN(port_count, queue_count);
- const unsigned int total_events = MAX_EVENTS / nr_links;
-
- /* Link queue x to port x and inject events to queue x through port x */
- for (i = 0; i < nr_links; i++) {
- uint8_t queue = (uint8_t)i;
-
- ret = rte_event_port_link(evdev, i, &queue, NULL, 1);
- RTE_TEST_ASSERT(ret == 1, "Failed to link queue to port %d", i);
-
- ret = inject_events(0x100 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- queue /* queue */, i /* port */,
- total_events /* events */);
- if (ret)
- return -1;
- }
-
- /* Verify the events generated from correct queue */
- for (i = 0; i < nr_links; i++) {
- ret = consume_events(i /* port */, total_events,
- validate_queue_to_port_single_link);
- if (ret)
- return -1;
- }
-
- return 0;
-}
-
-static int
-validate_queue_to_port_multi_link(uint32_t index, uint8_t port,
- struct rte_event *ev)
-{
- RTE_SET_USED(index);
- RTE_TEST_ASSERT_EQUAL(port, (ev->queue_id & 0x1),
- "queue mismatch enq=%d deq =%d",
- port, ev->queue_id);
-
- return 0;
-}
-
-/*
- * Link all even-numbered queues to port 0 and all odd-numbered queues to
- * port 1, and verify the links on dequeue.
- */
-static int
-test_queue_to_port_multi_link(void)
-{
- int ret, port0_events = 0, port1_events = 0;
- uint32_t nr_queues = 0;
- uint32_t nr_ports = 0;
- uint8_t queue, port;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &nr_queues),
- "Queue count get failed");
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
-
- if (nr_ports < 2) {
- otx2_err("Not enough ports to test ports=%d", nr_ports);
- return 0;
- }
-
- /* Unlink all connections that were created in eventdev_setup */
- for (port = 0; port < nr_ports; port++) {
- ret = rte_event_port_unlink(evdev, port, NULL, 0);
- RTE_TEST_ASSERT(ret >= 0, "Failed to unlink all queues port=%d",
- port);
- }
-
- const unsigned int total_events = MAX_EVENTS / nr_queues;
-
- /* Link all even-numbered queues to port 0 and odd-numbered queues to port 1 */
- for (queue = 0; queue < nr_queues; queue++) {
- port = queue & 0x1;
- ret = rte_event_port_link(evdev, port, &queue, NULL, 1);
- RTE_TEST_ASSERT(ret == 1, "Failed to link queue=%d to port=%d",
- queue, port);
-
- ret = inject_events(0x100 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- rte_rand() % 256 /* sub_event_type */,
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1),
- queue /* queue */, port /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- if (port == 0)
- port0_events += total_events;
- else
- port1_events += total_events;
- }
-
- ret = consume_events(0 /* port */, port0_events,
- validate_queue_to_port_multi_link);
- if (ret)
- return -1;
- ret = consume_events(1 /* port */, port1_events,
- validate_queue_to_port_multi_link);
- if (ret)
- return -1;
-
- return 0;
-}
-
-static int
-worker_flow_based_pipeline(void *arg)
-{
- struct test_core_param *param = arg;
- uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t new_sched_type = param->sched_type;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
- dequeue_tmo_ticks);
- if (!valid_event)
- continue;
-
- /* Events from stage 0 */
- if (ev.sub_event_type == 0) {
- /* Move all events to a single flow to maintain ordering */
- ev.flow_id = 0x2;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sub_event_type = 1; /* stage 1 */
- ev.sched_type = new_sched_type;
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- } else if (ev.sub_event_type == 1) { /* Events from stage 1*/
- uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
- if (seqn_list_update(seqn) == 0) {
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- otx2_err("Failed to update seqn_list");
- return -1;
- }
- } else {
- otx2_err("Invalid ev.sub_event_type = %d",
- ev.sub_event_type);
- return -1;
- }
- }
- return 0;
-}
-
-static int
-test_multiport_flow_sched_type_test(uint8_t in_sched_type,
- uint8_t out_sched_type)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d", nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- /* Inject events with sequence numbers 0..total_events-1 */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- in_sched_type,
- 0 /* queue */,
- 0 /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- rte_mb();
- ret = launch_workers_and_wait(worker_flow_based_pipeline,
- worker_flow_based_pipeline, total_events,
- nr_ports, out_sched_type);
- if (ret)
- return -1;
-
- if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
- out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
- /* Check whether event ordering was maintained */
- return seqn_list_check(total_events);
- }
-
- return 0;
-}
-
-/* Multi port ordered to atomic transaction */
-static int
-test_multi_port_flow_ordered_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_ordered_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_ordered_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_atomic_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_atomic_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_atomic_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_flow_parallel_to_atomic(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_flow_parallel_to_ordered(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_flow_parallel_to_parallel(void)
-{
- return test_multiport_flow_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_group_based_pipeline(void *arg)
-{
- struct test_core_param *param = arg;
- uint64_t dequeue_tmo_ticks = param->dequeue_tmo_ticks;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t new_sched_type = param->sched_type;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1,
- dequeue_tmo_ticks);
- if (!valid_event)
- continue;
-
- /* Events from stage 0 (group 0) */
- if (ev.queue_id == 0) {
- /* Move all events to a single flow to maintain ordering */
- ev.flow_id = 0x2;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sched_type = new_sched_type;
- ev.queue_id = 1; /* Stage 1*/
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- } else if (ev.queue_id == 1) { /* Events from stage 1 (group 1) */
- uint32_t seqn = *rte_event_pmd_selftest_seqn(ev.mbuf);
-
- if (seqn_list_update(seqn) == 0) {
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- otx2_err("Failed to update seqn_list");
- return -1;
- }
- } else {
- otx2_err("Invalid ev.queue_id = %d", ev.queue_id);
- return -1;
- }
- }
-
- return 0;
-}
-
-static int
-test_multiport_queue_sched_type_test(uint8_t in_sched_type,
- uint8_t out_sched_type)
-{
- const unsigned int total_events = MAX_EVENTS;
- uint32_t queue_count;
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
-
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- if (queue_count < 2 || !nr_ports) {
- otx2_err("Not enough queues=%d ports=%d or workers=%d",
- queue_count, nr_ports,
- rte_lcore_count() - 1);
- return 0;
- }
-
- /* Inject events with sequence numbers 0..total_events-1 */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- in_sched_type,
- 0 /* queue */,
- 0 /* port */,
- total_events /* events */);
- if (ret)
- return -1;
-
- ret = launch_workers_and_wait(worker_group_based_pipeline,
- worker_group_based_pipeline, total_events,
- nr_ports, out_sched_type);
- if (ret)
- return -1;
-
- if (in_sched_type != RTE_SCHED_TYPE_PARALLEL &&
- out_sched_type == RTE_SCHED_TYPE_ATOMIC) {
- /* Check whether event ordering was maintained */
- return seqn_list_check(total_events);
- }
-
- return 0;
-}
-
-static int
-test_multi_port_queue_ordered_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_ordered_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_ordered_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ORDERED,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_atomic_to_atomic(void)
-{
- /* Ingress event order test */
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_atomic_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_atomic_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_ATOMIC,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-test_multi_port_queue_parallel_to_atomic(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ATOMIC);
-}
-
-static int
-test_multi_port_queue_parallel_to_ordered(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_ORDERED);
-}
-
-static int
-test_multi_port_queue_parallel_to_parallel(void)
-{
- return test_multiport_queue_sched_type_test(RTE_SCHED_TYPE_PARALLEL,
- RTE_SCHED_TYPE_PARALLEL);
-}
-
-static int
-worker_flow_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- rte_atomic32_t *total_events = param->total_events;
- uint8_t port = param->port;
- uint16_t valid_event;
- struct rte_event ev;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.sub_event_type == 255) { /* last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sub_event_type++;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-static int
-launch_multi_port_max_stages_random_sched_type(int (*fn)(void *))
-{
- uint32_t nr_ports;
- int ret;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (!nr_ports) {
- otx2_err("Not enough ports=%d or workers=%d",
- nr_ports, rte_lcore_count() - 1);
- return 0;
- }
-
- /* Inject events with sequence numbers 0..MAX_EVENTS-1 */
- ret = inject_events(0x1 /*flow_id */,
- RTE_EVENT_TYPE_CPU /* event_type */,
- 0 /* sub_event_type (stage 0) */,
- rte_rand() %
- (RTE_SCHED_TYPE_PARALLEL + 1) /* sched_type */,
- 0 /* queue */,
- 0 /* port */,
- MAX_EVENTS /* events */);
- if (ret)
- return -1;
-
- return launch_workers_and_wait(fn, fn, MAX_EVENTS, nr_ports,
- 0xff /* invalid */);
-}
-
-/* Flow-based pipeline with the maximum number of stages and random sched types */
-static int
-test_multi_port_flow_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_flow_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_queue_based_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- uint32_t queue_count;
- uint16_t valid_event;
- struct rte_event ev;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- uint8_t nr_queues = queue_count;
- rte_atomic32_t *total_events = param->total_events;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.queue_id == nr_queues - 1) { /* last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.queue_id++;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-/* Queue-based pipeline with the maximum number of stages and random sched types */
-static int
-test_multi_port_queue_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_queue_based_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_mixed_pipeline_max_stages_rand_sched_type(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- uint32_t queue_count;
- uint16_t valid_event;
- struct rte_event ev;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count),
- "Queue count get failed");
- uint8_t nr_queues = queue_count;
- rte_atomic32_t *total_events = param->total_events;
-
- while (rte_atomic32_read(total_events) > 0) {
- valid_event = rte_event_dequeue_burst(evdev, port, &ev, 1, 0);
- if (!valid_event)
- continue;
-
- if (ev.queue_id == nr_queues - 1) { /* Last stage */
- rte_pktmbuf_free(ev.mbuf);
- rte_atomic32_sub(total_events, 1);
- } else {
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.queue_id++;
- ev.sub_event_type = rte_rand() % 256;
- ev.sched_type =
- rte_rand() % (RTE_SCHED_TYPE_PARALLEL + 1);
- ev.op = RTE_EVENT_OP_FORWARD;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
- }
-
- return 0;
-}
-
-/* Queue- and flow-based pipeline with the maximum number of stages and random sched types */
-static int
-test_multi_port_mixed_max_stages_random_sched_type(void)
-{
- return launch_multi_port_max_stages_random_sched_type(
- worker_mixed_pipeline_max_stages_rand_sched_type);
-}
-
-static int
-worker_ordered_flow_producer(void *arg)
-{
- struct test_core_param *param = arg;
- uint8_t port = param->port;
- struct rte_mbuf *m;
- int counter = 0;
-
- while (counter < NUM_PACKETS) {
- m = rte_pktmbuf_alloc(eventdev_test_mempool);
- if (m == NULL)
- continue;
-
- *rte_event_pmd_selftest_seqn(m) = counter++;
-
- struct rte_event ev = {.event = 0, .u64 = 0};
-
- ev.flow_id = 0x1; /* Generate a fat flow */
- ev.sub_event_type = 0;
- /* Inject the new event */
- ev.op = RTE_EVENT_OP_NEW;
- ev.event_type = RTE_EVENT_TYPE_CPU;
- ev.sched_type = RTE_SCHED_TYPE_ORDERED;
- ev.queue_id = 0;
- ev.mbuf = m;
- rte_event_enqueue_burst(evdev, port, &ev, 1);
- }
-
- return 0;
-}
-
-static inline int
-test_producer_consumer_ingress_order_test(int (*fn)(void *))
-{
- uint32_t nr_ports;
-
- RTE_TEST_ASSERT_SUCCESS(rte_event_dev_attr_get(evdev,
- RTE_EVENT_DEV_ATTR_PORT_COUNT, &nr_ports),
- "Port count get failed");
- nr_ports = RTE_MIN(nr_ports, rte_lcore_count() - 1);
-
- if (rte_lcore_count() < 3 || nr_ports < 2) {
- otx2_err("### Not enough cores for test.");
- return 0;
- }
-
- launch_workers_and_wait(worker_ordered_flow_producer, fn,
- NUM_PACKETS, nr_ports, RTE_SCHED_TYPE_ATOMIC);
- /* Check whether event ingress order was maintained */
- return seqn_list_check(NUM_PACKETS);
-}
-
-/* Flow based producer consumer ingress order test */
-static int
-test_flow_producer_consumer_ingress_order_test(void)
-{
- return test_producer_consumer_ingress_order_test(
- worker_flow_based_pipeline);
-}
-
-/* Queue based producer consumer ingress order test */
-static int
-test_queue_producer_consumer_ingress_order_test(void)
-{
- return test_producer_consumer_ingress_order_test(
- worker_group_based_pipeline);
-}
-
-static void octeontx_test_run(int (*setup)(void), void (*tdown)(void),
- int (*test)(void), const char *name)
-{
- if (setup() < 0) {
- printf("Error setting up test %s", name);
- unsupported++;
- } else {
- if (test() < 0) {
- failed++;
- printf("+ TestCase [%2d] : %s failed\n", total, name);
- } else {
- passed++;
- printf("+ TestCase [%2d] : %s succeeded\n", total,
- name);
- }
- }
-
- total++;
- tdown();
-}
-
-int
-otx2_sso_selftest(void)
-{
- testsuite_setup();
-
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_simple_enqdeq_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_queue_enq_single_port_deq);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_dev_stop_flush);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_queue_enq_multi_port_deq);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_to_port_single_link);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_to_port_multi_link);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_ordered_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_atomic_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_parallel_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_ordered_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_atomic_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_ordered);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_parallel_to_parallel);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_flow_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_queue_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_multi_port_mixed_max_stages_random_sched_type);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_flow_producer_consumer_ingress_order_test);
- OCTEONTX2_TEST_RUN(eventdev_setup, eventdev_teardown,
- test_queue_producer_consumer_ingress_order_test);
- OCTEONTX2_TEST_RUN(eventdev_setup_priority, eventdev_teardown,
- test_multi_queue_priority);
- OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
- test_multi_port_flow_ordered_to_atomic);
- OCTEONTX2_TEST_RUN(eventdev_setup_dequeue_timeout, eventdev_teardown,
- test_multi_port_queue_ordered_to_atomic);
- printf("Total tests : %d\n", total);
- printf("Passed : %d\n", passed);
- printf("Failed : %d\n", failed);
- printf("Not supported : %d\n", unsupported);
-
- testsuite_teardown();
-
- if (failed)
- return -1;
-
- return 0;
-}
diff --git a/drivers/event/octeontx2/otx2_evdev_stats.h b/drivers/event/octeontx2/otx2_evdev_stats.h
deleted file mode 100644
index 74fcec8a07..0000000000
--- a/drivers/event/octeontx2/otx2_evdev_stats.h
+++ /dev/null
@@ -1,286 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_EVDEV_STATS_H__
-#define __OTX2_EVDEV_STATS_H__
-
-#include "otx2_evdev.h"
-
-struct otx2_sso_xstats_name {
- const char name[RTE_EVENT_DEV_XSTATS_NAME_SIZE];
- const size_t offset;
- const uint64_t mask;
- const uint8_t shift;
- uint64_t reset_snap[OTX2_SSO_MAX_VHGRP];
-};
-
-static struct otx2_sso_xstats_name sso_hws_xstats[] = {
- {"last_grp_serviced", offsetof(struct sso_hws_stats, arbitration),
- 0x3FF, 0, {0} },
- {"affinity_arbitration_credits",
- offsetof(struct sso_hws_stats, arbitration),
- 0xF, 16, {0} },
-};
-
-static struct otx2_sso_xstats_name sso_grp_xstats[] = {
- {"wrk_sched", offsetof(struct sso_grp_stats, ws_pc), ~0x0, 0,
- {0} },
- {"xaq_dram", offsetof(struct sso_grp_stats, ext_pc), ~0x0,
- 0, {0} },
- {"add_wrk", offsetof(struct sso_grp_stats, wa_pc), ~0x0, 0,
- {0} },
- {"tag_switch_req", offsetof(struct sso_grp_stats, ts_pc), ~0x0, 0,
- {0} },
- {"desched_req", offsetof(struct sso_grp_stats, ds_pc), ~0x0, 0,
- {0} },
- {"desched_wrk", offsetof(struct sso_grp_stats, dq_pc), ~0x0, 0,
- {0} },
- {"xaq_cached", offsetof(struct sso_grp_stats, aw_status), 0x3,
- 0, {0} },
- {"work_inflight", offsetof(struct sso_grp_stats, aw_status), 0x3F,
- 16, {0} },
- {"inuse_pages", offsetof(struct sso_grp_stats, page_cnt),
- 0xFFFFFFFF, 0, {0} },
-};
-
-#define OTX2_SSO_NUM_HWS_XSTATS RTE_DIM(sso_hws_xstats)
-#define OTX2_SSO_NUM_GRP_XSTATS RTE_DIM(sso_grp_xstats)
-
-#define OTX2_SSO_NUM_XSTATS (OTX2_SSO_NUM_HWS_XSTATS + OTX2_SSO_NUM_GRP_XSTATS)
-
-static int
-otx2_sso_xstats_get(const struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode, uint8_t queue_port_id,
- const unsigned int ids[], uint64_t values[], unsigned int n)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_sso_xstats_name *xstats;
- struct otx2_sso_xstats_name *xstat;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int i;
- uint64_t value;
- void *req_rsp;
- int rc;
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- return 0;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_hws_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
- 2 * queue_port_id : queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- if (dev->dual_ws) {
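-   /* Dual workslot mode: consume the even workslot's response here
-    * and fetch the odd workslot; the loop below accumulates both
-    * halves into values[].
-    */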
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- values[i] = *(uint64_t *)
- ((char *)req_rsp + xstat->offset);
- values[i] = (values[i] >> xstat->shift) &
- xstat->mask;
- }
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws =
- (2 * queue_port_id) + 1;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
- }
-
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_grp_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- break;
- default:
- otx2_err("Invalid mode received");
- goto invalid_value;
- }
-
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- value = *(uint64_t *)((char *)req_rsp + xstat->offset);
- value = (value >> xstat->shift) & xstat->mask;
-
- if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
- values[i] += value;
- else
- values[i] = value;
-
- values[i] -= xstat->reset_snap[queue_port_id];
- }
-
- return i;
-invalid_value:
- return -EINVAL;
-}
-
-static int
-otx2_sso_xstats_reset(struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode,
- int16_t queue_port_id, const uint32_t ids[], uint32_t n)
-{
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- struct otx2_sso_xstats_name *xstats;
- struct otx2_sso_xstats_name *xstat;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int i;
- uint64_t value;
- void *req_rsp;
- int rc;
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- return 0;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_hws_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws = dev->dual_ws ?
- 2 * queue_port_id : queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- if (dev->dual_ws) {
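-   /* Dual workslot mode: snapshot the even workslot here and fetch
-    * the odd workslot; the loop below adds it into reset_snap.
-    */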
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- xstat->reset_snap[queue_port_id] = *(uint64_t *)
- ((char *)req_rsp + xstat->offset);
- xstat->reset_snap[queue_port_id] =
- (xstat->reset_snap[queue_port_id] >>
- xstat->shift) & xstat->mask;
- }
-
- req_rsp = otx2_mbox_alloc_msg_sso_hws_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->hws =
- (2 * queue_port_id) + 1;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
- }
-
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- goto invalid_value;
-
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- xstats = sso_grp_xstats;
-
- req_rsp = otx2_mbox_alloc_msg_sso_grp_get_stats(mbox);
- ((struct sso_info_req *)req_rsp)->grp = queue_port_id;
- rc = otx2_mbox_process_msg(mbox, (void **)&req_rsp);
- if (rc < 0)
- goto invalid_value;
-
- break;
- default:
- otx2_err("Invalid mode received");
- goto invalid_value;
- }
-
- for (i = 0; i < n && i < xstats_mode_count; i++) {
- xstat = &xstats[ids[i] - start_offset];
- value = *(uint64_t *)((char *)req_rsp + xstat->offset);
- value = (value >> xstat->shift) & xstat->mask;
-
- if ((mode == RTE_EVENT_DEV_XSTATS_PORT) && dev->dual_ws)
- xstat->reset_snap[queue_port_id] += value;
- else
- xstat->reset_snap[queue_port_id] = value;
- }
- return i;
-invalid_value:
- return -EINVAL;
-}
-
-static int
-otx2_sso_xstats_get_names(const struct rte_eventdev *event_dev,
- enum rte_event_dev_xstats_mode mode,
- uint8_t queue_port_id,
- struct rte_event_dev_xstats_name *xstats_names,
- unsigned int *ids, unsigned int size)
-{
- struct rte_event_dev_xstats_name xstats_names_copy[OTX2_SSO_NUM_XSTATS];
- struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
- uint32_t xstats_mode_count = 0;
- uint32_t start_offset = 0;
- unsigned int xidx = 0;
- unsigned int i;
-
- for (i = 0; i < OTX2_SSO_NUM_HWS_XSTATS; i++) {
- snprintf(xstats_names_copy[i].name,
- sizeof(xstats_names_copy[i].name), "%s",
- sso_hws_xstats[i].name);
- }
-
- for (; i < OTX2_SSO_NUM_XSTATS; i++) {
- snprintf(xstats_names_copy[i].name,
- sizeof(xstats_names_copy[i].name), "%s",
- sso_grp_xstats[i - OTX2_SSO_NUM_HWS_XSTATS].name);
- }
-
- switch (mode) {
- case RTE_EVENT_DEV_XSTATS_DEVICE:
- break;
- case RTE_EVENT_DEV_XSTATS_PORT:
- if (queue_port_id >= (signed int)dev->nb_event_ports)
- break;
- xstats_mode_count = OTX2_SSO_NUM_HWS_XSTATS;
- break;
- case RTE_EVENT_DEV_XSTATS_QUEUE:
- if (queue_port_id >= (signed int)dev->nb_event_queues)
- break;
- xstats_mode_count = OTX2_SSO_NUM_GRP_XSTATS;
- start_offset = OTX2_SSO_NUM_HWS_XSTATS;
- break;
- default:
- otx2_err("Invalid mode received");
- return -EINVAL;
- }
-
- if (xstats_mode_count > size || !ids || !xstats_names)
- return xstats_mode_count;
-
- for (i = 0; i < xstats_mode_count; i++) {
- xidx = i + start_offset;
- strncpy(xstats_names[i].name, xstats_names_copy[xidx].name,
- sizeof(xstats_names[i].name));
- ids[i] = xidx;
- }
-
- return i;
-}
-
-#endif
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
deleted file mode 100644
index 6da8b14b78..0000000000
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ /dev/null
@@ -1,735 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <rte_mbuf_pool_ops.h>
-
-#include "otx2_evdev.h"
-#include "otx2_tim_evdev.h"
-
-static struct event_timer_adapter_ops otx2_tim_ops;
-
-static inline int
-tim_get_msix_offsets(void)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int i, rc;
-
- /* Get TIM MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
- if (rc < 0)
-  return rc;
-
- for (i = 0; i < dev->nb_rings; i++)
- dev->tim_msixoff[i] = msix_rsp->timlf_msixoff[i];
-
- return rc;
-}
-
-static void
-tim_set_fp_ops(struct otx2_tim_ring *tim_ring)
-{
- uint8_t prod_flag = !tim_ring->prod_type_sp;
-
- /* Indexed as [STATS][DFB/FB][SP/MP]. */
- const rte_event_timer_arm_burst_t arm_burst[2][2][2] = {
-#define FP(_name, _f3, _f2, _f1, flags) \
- [_f3][_f2][_f1] = otx2_tim_arm_burst_##_name,
- TIM_ARM_FASTPATH_MODES
-#undef FP
- };
-
- const rte_event_timer_arm_tmo_tick_burst_t arm_tmo_burst[2][2] = {
-#define FP(_name, _f2, _f1, flags) \
- [_f2][_f1] = otx2_tim_arm_tmo_tick_burst_##_name,
- TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
- };
-
- otx2_tim_ops.arm_burst =
- arm_burst[tim_ring->enable_stats][tim_ring->ena_dfb][prod_flag];
- otx2_tim_ops.arm_tmo_tick_burst =
- arm_tmo_burst[tim_ring->enable_stats][tim_ring->ena_dfb];
- otx2_tim_ops.cancel_burst = otx2_tim_timer_cancel_burst;
-}
-
-static void
-otx2_tim_ring_info_get(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer_adapter_info *adptr_info)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
-
- adptr_info->max_tmo_ns = tim_ring->max_tout;
- adptr_info->min_resolution_ns = tim_ring->ena_periodic ?
- tim_ring->max_tout : tim_ring->tck_nsec;
- rte_memcpy(&adptr_info->conf, &adptr->data->conf,
- sizeof(struct rte_event_timer_adapter_conf));
-}
-
-static int
-tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
- struct rte_event_timer_adapter_conf *rcfg)
-{
- unsigned int cache_sz = (tim_ring->nb_chunks / 1.5);
- unsigned int mp_flags = 0;
- char pool_name[25];
- int rc;
-
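- /* Derive the per-lcore mempool cache size from the chunk budget and
-  * grow the pool so cached chunks do not starve allocations.
-  */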
- cache_sz /= rte_lcore_count();
- /* Create chunk pool. */
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_SP_PUT) {
- mp_flags = RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET;
- otx2_tim_dbg("Using single producer mode");
- tim_ring->prod_type_sp = true;
- }
-
- snprintf(pool_name, sizeof(pool_name), "otx2_tim_chunk_pool%d",
- tim_ring->ring_id);
-
- if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
- cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;
-
- cache_sz = cache_sz != 0 ? cache_sz : 2;
- tim_ring->nb_chunks += (cache_sz * rte_lcore_count());
- if (!tim_ring->disable_npa) {
- tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
- tim_ring->nb_chunks, tim_ring->chunk_sz,
- cache_sz, 0, rte_socket_id(), mp_flags);
-
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
-
- rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
- rte_mbuf_platform_mempool_ops(),
- NULL);
- if (rc < 0) {
- otx2_err("Unable to set chunkpool ops");
- goto free;
- }
-
- rc = rte_mempool_populate_default(tim_ring->chunk_pool);
- if (rc < 0) {
- otx2_err("Unable to set populate chunkpool.");
- goto free;
- }
- tim_ring->aura = npa_lf_aura_handle_to_aura(
- tim_ring->chunk_pool->pool_id);
- tim_ring->ena_dfb = tim_ring->ena_periodic ? 1 : 0;
- } else {
- tim_ring->chunk_pool = rte_mempool_create(pool_name,
- tim_ring->nb_chunks, tim_ring->chunk_sz,
- cache_sz, 0, NULL, NULL, NULL, NULL,
- rte_socket_id(),
- mp_flags);
- if (tim_ring->chunk_pool == NULL) {
- otx2_err("Unable to create chunkpool.");
- return -ENOMEM;
- }
- tim_ring->ena_dfb = 1;
- }
-
- return 0;
-
-free:
- rte_mempool_free(tim_ring->chunk_pool);
- return rc;
-}
-
-static void
-tim_err_desc(int rc)
-{
- switch (rc) {
- case TIM_AF_NO_RINGS_LEFT:
- otx2_err("Unable to allocat new TIM ring.");
- break;
- case TIM_AF_INVALID_NPA_PF_FUNC:
- otx2_err("Invalid NPA pf func.");
- break;
- case TIM_AF_INVALID_SSO_PF_FUNC:
- otx2_err("Invalid SSO pf func.");
- break;
- case TIM_AF_RING_STILL_RUNNING:
- otx2_tim_dbg("Ring busy.");
- break;
- case TIM_AF_LF_INVALID:
- otx2_err("Invalid Ring id.");
- break;
- case TIM_AF_CSIZE_NOT_ALIGNED:
- otx2_err("Chunk size specified needs to be multiple of 16.");
- break;
- case TIM_AF_CSIZE_TOO_SMALL:
- otx2_err("Chunk size too small.");
- break;
- case TIM_AF_CSIZE_TOO_BIG:
- otx2_err("Chunk size too big.");
- break;
- case TIM_AF_INTERVAL_TOO_SMALL:
- otx2_err("Bucket traversal interval too small.");
- break;
- case TIM_AF_INVALID_BIG_ENDIAN_VALUE:
- otx2_err("Invalid Big endian value.");
- break;
- case TIM_AF_INVALID_CLOCK_SOURCE:
- otx2_err("Invalid Clock source specified.");
- break;
- case TIM_AF_GPIO_CLK_SRC_NOT_ENABLED:
- otx2_err("GPIO clock source not enabled.");
- break;
- case TIM_AF_INVALID_BSIZE:
- otx2_err("Invalid bucket size.");
- break;
- case TIM_AF_INVALID_ENABLE_PERIODIC:
- otx2_err("Invalid bucket size.");
- break;
- case TIM_AF_INVALID_ENABLE_DONTFREE:
- otx2_err("Invalid Don't free value.");
- break;
- case TIM_AF_ENA_DONTFRE_NSET_PERIODIC:
- otx2_err("Don't free bit not set when periodic is enabled.");
- break;
- case TIM_AF_RING_ALREADY_DISABLED:
- otx2_err("Ring already stopped");
- break;
- default:
- otx2_err("Unknown Error.");
- }
-}
-
-static int
-otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
-{
- struct rte_event_timer_adapter_conf *rcfg = &adptr->data->conf;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct otx2_tim_ring *tim_ring;
- struct tim_config_req *cfg_req;
- struct tim_ring_req *free_req;
- struct tim_lf_alloc_req *req;
- struct tim_lf_alloc_rsp *rsp;
- uint8_t is_periodic;
- int i, rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- if (adptr->data->id >= dev->nb_rings)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_lf_alloc(dev->mbox);
- req->npa_pf_func = otx2_npa_pf_func_get();
- req->sso_pf_func = otx2_sso_pf_func_get();
- req->ring = adptr->data->id;
-
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (rc < 0) {
- tim_err_desc(rc);
- return -ENODEV;
- }
-
- if (NSEC2TICK(RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10),
- rsp->tenns_clk) < OTX2_TIM_MIN_TMO_TKS) {
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_ADJUST_RES)
- rcfg->timer_tick_ns = TICK2NSEC(OTX2_TIM_MIN_TMO_TKS,
- rsp->tenns_clk);
- else {
- rc = -ERANGE;
- goto rng_mem_err;
- }
- }
-
- is_periodic = 0;
- if (rcfg->flags & RTE_EVENT_TIMER_ADAPTER_F_PERIODIC) {
- if (rcfg->max_tmo_ns &&
- rcfg->max_tmo_ns != rcfg->timer_tick_ns) {
- rc = -ERANGE;
- goto rng_mem_err;
- }
-
- /* Use 2 buckets to avoid contention */
- rcfg->max_tmo_ns = rcfg->timer_tick_ns;
- rcfg->timer_tick_ns /= 2;
- is_periodic = 1;
- }
-
- tim_ring = rte_zmalloc("otx2_tim_prv", sizeof(struct otx2_tim_ring), 0);
- if (tim_ring == NULL) {
- rc = -ENOMEM;
- goto rng_mem_err;
- }
-
- adptr->data->adapter_priv = tim_ring;
-
- tim_ring->tenns_clk_freq = rsp->tenns_clk;
- tim_ring->clk_src = (int)rcfg->clk_src;
- tim_ring->ring_id = adptr->data->id;
- tim_ring->tck_nsec = RTE_ALIGN_MUL_CEIL(rcfg->timer_tick_ns, 10);
- tim_ring->max_tout = is_periodic ?
- rcfg->timer_tick_ns * 2 : rcfg->max_tmo_ns;
- tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
- tim_ring->chunk_sz = dev->chunk_sz;
- tim_ring->nb_timers = rcfg->nb_timers;
- tim_ring->disable_npa = dev->disable_npa;
- tim_ring->ena_periodic = is_periodic;
- tim_ring->enable_stats = dev->enable_stats;
-
- for (i = 0; i < dev->ring_ctl_cnt; i++) {
- struct otx2_tim_ctl *ring_ctl = &dev->ring_ctl_data[i];
-
- if (ring_ctl->ring == tim_ring->ring_id) {
- tim_ring->chunk_sz = ring_ctl->chunk_slots ?
- ((uint32_t)(ring_ctl->chunk_slots + 1) *
- OTX2_TIM_CHUNK_ALIGNMENT) : tim_ring->chunk_sz;
- tim_ring->enable_stats = ring_ctl->enable_stats;
- tim_ring->disable_npa = ring_ctl->disable_npa;
- }
- }
-
- if (tim_ring->disable_npa) {
- tim_ring->nb_chunks =
- tim_ring->nb_timers /
- OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
- tim_ring->nb_chunks = tim_ring->nb_chunks * tim_ring->nb_bkts;
- } else {
- tim_ring->nb_chunks = tim_ring->nb_timers;
- }
- tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);
- tim_ring->bkt = rte_zmalloc("otx2_tim_bucket", (tim_ring->nb_bkts) *
- sizeof(struct otx2_tim_bkt),
- RTE_CACHE_LINE_SIZE);
- if (tim_ring->bkt == NULL)
- goto bkt_mem_err;
-
- rc = tim_chnk_pool_create(tim_ring, rcfg);
- if (rc < 0)
- goto chnk_mem_err;
-
- cfg_req = otx2_mbox_alloc_msg_tim_config_ring(dev->mbox);
-
- cfg_req->ring = tim_ring->ring_id;
- cfg_req->bigendian = false;
- cfg_req->clocksource = tim_ring->clk_src;
- cfg_req->enableperiodic = tim_ring->ena_periodic;
- cfg_req->enabledontfreebuffer = tim_ring->ena_dfb;
- cfg_req->bucketsize = tim_ring->nb_bkts;
- cfg_req->chunksize = tim_ring->chunk_sz;
- cfg_req->interval = NSEC2TICK(tim_ring->tck_nsec,
- tim_ring->tenns_clk_freq);
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- goto chnk_mem_err;
- }
-
- tim_ring->base = dev->bar2 +
- (RVU_BLOCK_ADDR_TIM << 20 | tim_ring->ring_id << 12);
-
- rc = tim_register_irq(tim_ring->ring_id);
- if (rc < 0)
- goto chnk_mem_err;
-
- otx2_write64((uint64_t)tim_ring->bkt,
- tim_ring->base + TIM_LF_RING_BASE);
- otx2_write64(tim_ring->aura, tim_ring->base + TIM_LF_RING_AURA);
-
- /* Set fastpath ops. */
- tim_set_fp_ops(tim_ring);
-
- /* Update SSO xae count. */
- sso_updt_xae_cnt(sso_pmd_priv(dev->event_dev), (void *)tim_ring,
- RTE_EVENT_TYPE_TIMER);
- sso_xae_reconfigure(dev->event_dev);
-
- otx2_tim_dbg("Total memory used %"PRIu64"MB\n",
- (uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz)
- + (tim_ring->nb_bkts * sizeof(struct otx2_tim_bkt))) /
- BIT_ULL(20)));
-
- return rc;
-
-chnk_mem_err:
- rte_free(tim_ring->bkt);
-bkt_mem_err:
- rte_free(tim_ring);
-rng_mem_err:
- free_req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
- free_req->ring = adptr->data->id;
- otx2_mbox_process(dev->mbox);
- return rc;
-}
-
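-/*
- * Repeatedly re-derive ring_start_cyc from the bucket the hardware reports
- * in TIM_LF_RING_REL and count how often the software-computed bucket
- * mispredicts the hardware one.
- */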
-static void
-otx2_tim_calibrate_start_tsc(struct otx2_tim_ring *tim_ring)
-{
-#define OTX2_TIM_CALIB_ITER 1E6
- uint32_t real_bkt, bucket;
- int icount, ecount = 0;
- uint64_t bkt_cyc;
-
- for (icount = 0; icount < OTX2_TIM_CALIB_ITER; icount++) {
- real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
- bkt_cyc = tim_cntvct();
- bucket = (bkt_cyc - tim_ring->ring_start_cyc) /
- tim_ring->tck_int;
- bucket = bucket % (tim_ring->nb_bkts);
- tim_ring->ring_start_cyc = bkt_cyc - (real_bkt *
- tim_ring->tck_int);
- if (bucket != real_bkt)
- ecount++;
- }
- tim_ring->last_updt_cyc = bkt_cyc;
- otx2_tim_dbg("Bucket mispredict %3.2f distance %d\n",
- 100 - (((double)(icount - ecount) / (double)icount) * 100),
- bucket - real_bkt);
-}
-
-static int
-otx2_tim_ring_start(const struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_enable_rsp *rsp;
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_enable_ring(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (rc < 0) {
- tim_err_desc(rc);
- goto fail;
- }
- tim_ring->ring_start_cyc = rsp->timestarted;
- tim_ring->tck_int = NSEC2TICK(tim_ring->tck_nsec, tim_cntfrq());
- tim_ring->tot_int = tim_ring->tck_int * tim_ring->nb_bkts;
- tim_ring->fast_div = rte_reciprocal_value_u64(tim_ring->tck_int);
- tim_ring->fast_bkt = rte_reciprocal_value_u64(tim_ring->nb_bkts);
-
- otx2_tim_calibrate_start_tsc(tim_ring);
-
-fail:
- return rc;
-}
-
-static int
-otx2_tim_ring_stop(const struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- req = otx2_mbox_alloc_msg_tim_disable_ring(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- rc = -EBUSY;
- }
-
- return rc;
-}
-
-static int
-otx2_tim_ring_free(struct rte_event_timer_adapter *adptr)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct tim_ring_req *req;
- int rc;
-
- if (dev == NULL)
- return -ENODEV;
-
- tim_unregister_irq(tim_ring->ring_id);
-
- req = otx2_mbox_alloc_msg_tim_lf_free(dev->mbox);
- req->ring = tim_ring->ring_id;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- tim_err_desc(rc);
- return -EBUSY;
- }
-
- rte_free(tim_ring->bkt);
- rte_mempool_free(tim_ring->chunk_pool);
- rte_free(adptr->data->adapter_priv);
-
- return 0;
-}
-
-static int
-otx2_tim_stats_get(const struct rte_event_timer_adapter *adapter,
- struct rte_event_timer_adapter_stats *stats)
-{
- struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
- uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc;
-
- stats->evtim_exp_count = __atomic_load_n(&tim_ring->arm_cnt,
- __ATOMIC_RELAXED);
- stats->ev_enq_count = stats->evtim_exp_count;
- stats->adapter_tick_count = rte_reciprocal_divide_u64(bkt_cyc,
- &tim_ring->fast_div);
- return 0;
-}
-
-static int
-otx2_tim_stats_reset(const struct rte_event_timer_adapter *adapter)
-{
- struct otx2_tim_ring *tim_ring = adapter->data->adapter_priv;
-
- __atomic_store_n(&tim_ring->arm_cnt, 0, __ATOMIC_RELAXED);
- return 0;
-}
-
-int
-otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
- uint32_t *caps, const struct event_timer_adapter_ops **ops)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
-
- RTE_SET_USED(flags);
-
- if (dev == NULL)
- return -ENODEV;
-
- otx2_tim_ops.init = otx2_tim_ring_create;
- otx2_tim_ops.uninit = otx2_tim_ring_free;
- otx2_tim_ops.start = otx2_tim_ring_start;
- otx2_tim_ops.stop = otx2_tim_ring_stop;
- otx2_tim_ops.get_info = otx2_tim_ring_info_get;
-
- if (dev->enable_stats) {
- otx2_tim_ops.stats_get = otx2_tim_stats_get;
- otx2_tim_ops.stats_reset = otx2_tim_stats_reset;
- }
-
- /* Store evdev pointer for later use. */
- dev->event_dev = (struct rte_eventdev *)(uintptr_t)evdev;
- *caps = RTE_EVENT_TIMER_ADAPTER_CAP_INTERNAL_PORT |
- RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC;
- *ops = &otx2_tim_ops;
-
- return 0;
-}
-
-#define OTX2_TIM_DISABLE_NPA "tim_disable_npa"
-#define OTX2_TIM_CHNK_SLOTS "tim_chnk_slots"
-#define OTX2_TIM_STATS_ENA "tim_stats_ena"
-#define OTX2_TIM_RINGS_LMT "tim_rings_lmt"
-#define OTX2_TIM_RING_CTL "tim_ring_ctl"
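-
-/* Devargs are passed per PCI device, e.g. (values illustrative only):
- *   -a 0002:0e:00.0,tim_chnk_slots=1023,tim_stats_ena=1
- *   -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
- */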
-
-static void
-tim_parse_ring_param(char *value, void *opaque)
-{
- struct otx2_tim_evdev *dev = opaque;
- struct otx2_tim_ctl ring_ctl = {0};
- char *tok = strtok(value, "-");
- struct otx2_tim_ctl *old_ptr;
- uint16_t *val;
-
- val = (uint16_t *)&ring_ctl;
-
- if (!strlen(value))
- return;
-
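- /* Fill the four uint16_t fields of otx2_tim_ctl positionally from the
-  * '-' separated tokens.
-  */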
- while (tok != NULL) {
- *val = atoi(tok);
- tok = strtok(NULL, "-");
- val++;
- }
-
- if (val != (&ring_ctl.enable_stats + 1)) {
- otx2_err(
- "Invalid ring param expected [ring-chunk_sz-disable_npa-enable_stats]");
- return;
- }
-
- dev->ring_ctl_cnt++;
- old_ptr = dev->ring_ctl_data;
- dev->ring_ctl_data = rte_realloc(dev->ring_ctl_data,
- sizeof(struct otx2_tim_ctl) *
- dev->ring_ctl_cnt, 0);
- if (dev->ring_ctl_data == NULL) {
- dev->ring_ctl_data = old_ptr;
- dev->ring_ctl_cnt--;
- return;
- }
-
- dev->ring_ctl_data[dev->ring_ctl_cnt - 1] = ring_ctl;
-}
-
-static void
-tim_parse_ring_ctl_list(const char *value, void *opaque)
-{
- char *s = strdup(value);
- char *start = NULL;
- char *end = NULL;
- char *f = s;
-
- if (s == NULL)
-  return;
-
- while (*s) {
- if (*s == '[')
- start = s;
- else if (*s == ']')
- end = s;
-
- if (start && start < end) {
- *end = 0;
- tim_parse_ring_param(start + 1, opaque);
- start = end;
- s = end;
- }
- s++;
- }
-
- free(f);
-}
-
-static int
-tim_parse_kvargs_dict(const char *key, const char *value, void *opaque)
-{
- RTE_SET_USED(key);
-
- /* Dict format: [ring-chunk_sz-disable_npa-enable_stats]. '-' is used
-  * as the separator because ',' is not allowed in kvargs values.
-  * A value of 0 selects the default for a field.
-  */
- tim_parse_ring_ctl_list(value, opaque);
-
- return 0;
-}
-
-static void
-tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
-{
- struct rte_kvargs *kvlist;
-
- if (devargs == NULL)
- return;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- return;
-
- rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA,
- &parse_kvargs_flag, &dev->disable_npa);
- rte_kvargs_process(kvlist, OTX2_TIM_CHNK_SLOTS,
- &parse_kvargs_value, &dev->chunk_slots);
- rte_kvargs_process(kvlist, OTX2_TIM_STATS_ENA, &parse_kvargs_flag,
- &dev->enable_stats);
- rte_kvargs_process(kvlist, OTX2_TIM_RINGS_LMT, &parse_kvargs_value,
- &dev->min_ring_cnt);
- rte_kvargs_process(kvlist, OTX2_TIM_RING_CTL,
- &tim_parse_kvargs_dict, dev);
-
- rte_kvargs_free(kvlist);
-}
-
-void
-otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
-{
- struct rsrc_attach_req *atch_req;
- struct rsrc_detach_req *dtch_req;
- struct free_rsrcs_rsp *rsrc_cnt;
- const struct rte_memzone *mz;
- struct otx2_tim_evdev *dev;
- int rc;
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return;
-
- mz = rte_memzone_reserve(RTE_STR(OTX2_TIM_EVDEV_NAME),
- sizeof(struct otx2_tim_evdev),
- rte_socket_id(), 0);
- if (mz == NULL) {
- otx2_tim_dbg("Unable to allocate memory for TIM Event device");
- return;
- }
-
- dev = mz->addr;
- dev->pci_dev = pci_dev;
- dev->mbox = cmn_dev->mbox;
- dev->bar2 = cmn_dev->bar2;
-
- tim_parse_devargs(pci_dev->device.devargs, dev);
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
- rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
- if (rc < 0) {
- otx2_err("Unable to get free rsrc count.");
- goto mz_free;
- }
-
- dev->nb_rings = dev->min_ring_cnt ?
- RTE_MIN(dev->min_ring_cnt, rsrc_cnt->tim) : rsrc_cnt->tim;
-
- if (!dev->nb_rings) {
- otx2_tim_dbg("No TIM Logical functions provisioned.");
- goto mz_free;
- }
-
- atch_req = otx2_mbox_alloc_msg_attach_resources(dev->mbox);
- atch_req->modify = true;
- atch_req->timlfs = dev->nb_rings;
-
- rc = otx2_mbox_process(dev->mbox);
- if (rc < 0) {
- otx2_err("Unable to attach TIM rings.");
- goto mz_free;
- }
-
- rc = tim_get_msix_offsets();
- if (rc < 0) {
- otx2_err("Unable to get MSIX offsets for TIM.");
- goto detach;
- }
-
- if (dev->chunk_slots &&
- dev->chunk_slots <= OTX2_TIM_MAX_CHUNK_SLOTS &&
- dev->chunk_slots >= OTX2_TIM_MIN_CHUNK_SLOTS) {
- dev->chunk_sz = (dev->chunk_slots + 1) *
- OTX2_TIM_CHUNK_ALIGNMENT;
- } else {
- dev->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
- }
-
- return;
-
-detach:
- dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
- dtch_req->partial = true;
- dtch_req->timlfs = true;
-
- otx2_mbox_process(dev->mbox);
-mz_free:
- rte_memzone_free(mz);
-}
-
-void
-otx2_tim_fini(void)
-{
- struct otx2_tim_evdev *dev = tim_priv_get();
- struct rsrc_detach_req *dtch_req;
-
- if (dev == NULL || rte_eal_process_type() != RTE_PROC_PRIMARY)
- return;
-
- dtch_req = otx2_mbox_alloc_msg_detach_resources(dev->mbox);
- dtch_req->partial = true;
- dtch_req->timlfs = true;
-
- otx2_mbox_process(dev->mbox);
- rte_memzone_free(rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME)));
-}
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
deleted file mode 100644
index dac642e0e1..0000000000
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ /dev/null
@@ -1,256 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_EVDEV_H__
-#define __OTX2_TIM_EVDEV_H__
-
-#include <event_timer_adapter_pmd.h>
-#include <rte_event_timer_adapter.h>
-#include <rte_reciprocal.h>
-
-#include "otx2_dev.h"
-
-#define OTX2_TIM_EVDEV_NAME otx2_tim_eventdev
-
-#define otx2_tim_func_trace otx2_tim_dbg
-
-#define TIM_LF_RING_AURA (0x0)
-#define TIM_LF_RING_BASE (0x130)
-#define TIM_LF_NRSPERR_INT (0x200)
-#define TIM_LF_NRSPERR_INT_W1S (0x208)
-#define TIM_LF_NRSPERR_INT_ENA_W1S (0x210)
-#define TIM_LF_NRSPERR_INT_ENA_W1C (0x218)
-#define TIM_LF_RAS_INT (0x300)
-#define TIM_LF_RAS_INT_W1S (0x308)
-#define TIM_LF_RAS_INT_ENA_W1S (0x310)
-#define TIM_LF_RAS_INT_ENA_W1C (0x318)
-#define TIM_LF_RING_REL (0x400)
-
-#define TIM_BUCKET_W1_S_CHUNK_REMAINDER (48)
-#define TIM_BUCKET_W1_M_CHUNK_REMAINDER ((1ULL << (64 - \
- TIM_BUCKET_W1_S_CHUNK_REMAINDER)) - 1)
-#define TIM_BUCKET_W1_S_LOCK (40)
-#define TIM_BUCKET_W1_M_LOCK ((1ULL << \
- (TIM_BUCKET_W1_S_CHUNK_REMAINDER - \
- TIM_BUCKET_W1_S_LOCK)) - 1)
-#define TIM_BUCKET_W1_S_RSVD (35)
-#define TIM_BUCKET_W1_S_BSK (34)
-#define TIM_BUCKET_W1_M_BSK ((1ULL << \
- (TIM_BUCKET_W1_S_RSVD - \
- TIM_BUCKET_W1_S_BSK)) - 1)
-#define TIM_BUCKET_W1_S_HBT (33)
-#define TIM_BUCKET_W1_M_HBT ((1ULL << \
- (TIM_BUCKET_W1_S_BSK - \
- TIM_BUCKET_W1_S_HBT)) - 1)
-#define TIM_BUCKET_W1_S_SBT (32)
-#define TIM_BUCKET_W1_M_SBT ((1ULL << \
- (TIM_BUCKET_W1_S_HBT - \
- TIM_BUCKET_W1_S_SBT)) - 1)
-#define TIM_BUCKET_W1_S_NUM_ENTRIES (0)
-#define TIM_BUCKET_W1_M_NUM_ENTRIES ((1ULL << \
- (TIM_BUCKET_W1_S_SBT - \
- TIM_BUCKET_W1_S_NUM_ENTRIES)) - 1)
-
-#define TIM_BUCKET_SEMA (TIM_BUCKET_CHUNK_REMAIN)
-
-#define TIM_BUCKET_CHUNK_REMAIN \
- (TIM_BUCKET_W1_M_CHUNK_REMAINDER << TIM_BUCKET_W1_S_CHUNK_REMAINDER)
-
-#define TIM_BUCKET_LOCK \
- (TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK)
-
-#define TIM_BUCKET_SEMA_WLOCK \
- (TIM_BUCKET_CHUNK_REMAIN | (1ull << TIM_BUCKET_W1_S_LOCK))
-
-#define OTX2_MAX_TIM_RINGS (256)
-#define OTX2_TIM_MAX_BUCKETS (0xFFFFF)
-#define OTX2_TIM_RING_DEF_CHUNK_SZ (4096)
-#define OTX2_TIM_CHUNK_ALIGNMENT (16)
-#define OTX2_TIM_MAX_BURST (RTE_CACHE_LINE_SIZE / \
- OTX2_TIM_CHUNK_ALIGNMENT)
-#define OTX2_TIM_NB_CHUNK_SLOTS(sz) (((sz) / OTX2_TIM_CHUNK_ALIGNMENT) - 1)
-#define OTX2_TIM_MIN_CHUNK_SLOTS (0x8)
-#define OTX2_TIM_MAX_CHUNK_SLOTS (0x1FFE)
-#define OTX2_TIM_MIN_TMO_TKS (256)
-
-#define OTX2_TIM_SP 0x1
-#define OTX2_TIM_MP 0x2
-#define OTX2_TIM_ENA_FB 0x10
-#define OTX2_TIM_ENA_DFB 0x20
-#define OTX2_TIM_ENA_STATS 0x40
-
-enum otx2_tim_clk_src {
- OTX2_TIM_CLK_SRC_10NS = RTE_EVENT_TIMER_ADAPTER_CPU_CLK,
- OTX2_TIM_CLK_SRC_GPIO = RTE_EVENT_TIMER_ADAPTER_EXT_CLK0,
- OTX2_TIM_CLK_SRC_GTI = RTE_EVENT_TIMER_ADAPTER_EXT_CLK1,
- OTX2_TIM_CLK_SRC_PTP = RTE_EVENT_TIMER_ADAPTER_EXT_CLK2,
-};
-
-struct otx2_tim_bkt {
- uint64_t first_chunk;
- union {
- uint64_t w1;
- struct {
- uint32_t nb_entry;
- uint8_t sbt:1;
- uint8_t hbt:1;
- uint8_t bsk:1;
- uint8_t rsvd:5;
- uint8_t lock;
- int16_t chunk_remainder;
- };
- };
- uint64_t current_chunk;
- uint64_t pad;
-} __rte_packed __rte_aligned(32);
-
-struct otx2_tim_ent {
- uint64_t w0;
- uint64_t wqe;
-} __rte_packed;
-
-struct otx2_tim_ctl {
- uint16_t ring;
- uint16_t chunk_slots;
- uint16_t disable_npa;
- uint16_t enable_stats;
-};
-
-struct otx2_tim_evdev {
- struct rte_pci_device *pci_dev;
- struct rte_eventdev *event_dev;
- struct otx2_mbox *mbox;
- uint16_t nb_rings;
- uint32_t chunk_sz;
- uintptr_t bar2;
- /* Dev args */
- uint8_t disable_npa;
- uint16_t chunk_slots;
- uint16_t min_ring_cnt;
- uint8_t enable_stats;
- uint16_t ring_ctl_cnt;
- struct otx2_tim_ctl *ring_ctl_data;
- /* HW const */
- /* MSIX offsets */
- uint16_t tim_msixoff[OTX2_MAX_TIM_RINGS];
-};
-
-struct otx2_tim_ring {
- uintptr_t base;
- uint16_t nb_chunk_slots;
- uint32_t nb_bkts;
- uint64_t last_updt_cyc;
- uint64_t ring_start_cyc;
- uint64_t tck_int;
- uint64_t tot_int;
- struct otx2_tim_bkt *bkt;
- struct rte_mempool *chunk_pool;
- struct rte_reciprocal_u64 fast_div;
- struct rte_reciprocal_u64 fast_bkt;
- uint64_t arm_cnt;
- uint8_t prod_type_sp;
- uint8_t enable_stats;
- uint8_t disable_npa;
- uint8_t ena_dfb;
- uint8_t ena_periodic;
- uint16_t ring_id;
- uint32_t aura;
- uint64_t nb_timers;
- uint64_t tck_nsec;
- uint64_t max_tout;
- uint64_t nb_chunks;
- uint64_t chunk_sz;
- uint64_t tenns_clk_freq;
- enum otx2_tim_clk_src clk_src;
-} __rte_cache_aligned;
-
-static inline struct otx2_tim_evdev *
-tim_priv_get(void)
-{
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(RTE_STR(OTX2_TIM_EVDEV_NAME));
- if (mz == NULL)
- return NULL;
-
- return mz->addr;
-}
-
-#ifdef RTE_ARCH_ARM64
-static inline uint64_t
-tim_cntvct(void)
-{
- return __rte_arm64_cntvct();
-}
-
-static inline uint64_t
-tim_cntfrq(void)
-{
- return __rte_arm64_cntfrq();
-}
-#else
-static inline uint64_t
-tim_cntvct(void)
-{
- return 0;
-}
-
-static inline uint64_t
-tim_cntfrq(void)
-{
- return 0;
-}
-#endif
-
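-/*
- * X-macro table of arm-burst fastpath variants: each FP(name, ...) entry
- * expands into a dedicated otx2_tim_arm_burst_<name>() specialization whose
- * compile-time constant flags let the compiler drop the unused branches.
- */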
-#define TIM_ARM_FASTPATH_MODES \
- FP(sp, 0, 0, 0, OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
- FP(mp, 0, 0, 1, OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
- FP(fb_sp, 0, 1, 0, OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
- FP(fb_mp, 0, 1, 1, OTX2_TIM_ENA_FB | OTX2_TIM_MP) \
- FP(stats_mod_sp, 1, 0, 0, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_SP) \
- FP(stats_mod_mp, 1, 0, 1, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB | OTX2_TIM_MP) \
- FP(stats_mod_fb_sp, 1, 1, 0, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_SP) \
- FP(stats_mod_fb_mp, 1, 1, 1, \
- OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB | OTX2_TIM_MP)
-
-#define TIM_ARM_TMO_FASTPATH_MODES \
- FP(dfb, 0, 0, OTX2_TIM_ENA_DFB) \
- FP(fb, 0, 1, OTX2_TIM_ENA_FB) \
- FP(stats_dfb, 1, 0, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_DFB) \
- FP(stats_fb, 1, 1, OTX2_TIM_ENA_STATS | OTX2_TIM_ENA_FB)
-
-#define FP(_name, _f3, _f2, _f1, flags) \
- uint16_t otx2_tim_arm_burst_##_name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, const uint16_t nb_timers);
-TIM_ARM_FASTPATH_MODES
-#undef FP
-
-#define FP(_name, _f2, _f1, flags) \
- uint16_t otx2_tim_arm_tmo_tick_burst_##_name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, const uint64_t timeout_tick, \
- const uint16_t nb_timers);
-TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
-
-uint16_t otx2_tim_timer_cancel_burst(
- const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim, const uint16_t nb_timers);
-
-int otx2_tim_caps_get(const struct rte_eventdev *dev, uint64_t flags,
- uint32_t *caps,
- const struct event_timer_adapter_ops **ops);
-
-void otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev);
-void otx2_tim_fini(void);
-
-/* TIM IRQ */
-int tim_register_irq(uint16_t ring_id);
-void tim_unregister_irq(uint16_t ring_id);
-
-#endif /* __OTX2_TIM_EVDEV_H__ */
diff --git a/drivers/event/octeontx2/otx2_tim_worker.c b/drivers/event/octeontx2/otx2_tim_worker.c
deleted file mode 100644
index 9ee07958fd..0000000000
--- a/drivers/event/octeontx2/otx2_tim_worker.c
+++ /dev/null
@@ -1,192 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_tim_evdev.h"
-#include "otx2_tim_worker.h"
-
-static inline int
-tim_arm_checks(const struct otx2_tim_ring * const tim_ring,
- struct rte_event_timer * const tim)
-{
- if (unlikely(tim->state)) {
- tim->state = RTE_EVENT_TIMER_ERROR;
- rte_errno = EALREADY;
- goto fail;
- }
-
- if (unlikely(!tim->timeout_ticks ||
- tim->timeout_ticks >= tim_ring->nb_bkts)) {
- tim->state = tim->timeout_ticks ? RTE_EVENT_TIMER_ERROR_TOOLATE
- : RTE_EVENT_TIMER_ERROR_TOOEARLY;
- rte_errno = EINVAL;
- goto fail;
- }
-
- return 0;
-
-fail:
- return -EINVAL;
-}
-
-static inline void
-tim_format_event(const struct rte_event_timer * const tim,
- struct otx2_tim_ent * const entry)
-{
- entry->w0 = (tim->ev.event & 0xFFC000000000) >> 6 |
- (tim->ev.event & 0xFFFFFFFFF);
- entry->wqe = tim->ev.u64;
-}
-
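-/*
- * Periodically re-anchor ring_start_cyc to the bucket the hardware is
- * currently servicing so software bucket arithmetic does not drift from
- * the hardware traversal.
- */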
-static inline void
-tim_sync_start_cyc(struct otx2_tim_ring *tim_ring)
-{
- uint64_t cur_cyc = tim_cntvct();
- uint32_t real_bkt;
-
- if (cur_cyc - tim_ring->last_updt_cyc > tim_ring->tot_int) {
- real_bkt = otx2_read64(tim_ring->base + TIM_LF_RING_REL) >> 44;
- cur_cyc = tim_cntvct();
-
- tim_ring->ring_start_cyc = cur_cyc -
- (real_bkt * tim_ring->tck_int);
- tim_ring->last_updt_cyc = cur_cyc;
- }
-}
-
-static __rte_always_inline uint16_t
-tim_timer_arm_burst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint16_t nb_timers,
- const uint8_t flags)
-{
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- struct otx2_tim_ent entry;
- uint16_t index;
- int ret;
-
- tim_sync_start_cyc(tim_ring);
- for (index = 0; index < nb_timers; index++) {
- if (tim_arm_checks(tim_ring, tim[index]))
- break;
-
- tim_format_event(tim[index], &entry);
- if (flags & OTX2_TIM_SP)
- ret = tim_add_entry_sp(tim_ring,
- tim[index]->timeout_ticks,
- tim[index], &entry, flags);
- if (flags & OTX2_TIM_MP)
- ret = tim_add_entry_mp(tim_ring,
- tim[index]->timeout_ticks,
- tim[index], &entry, flags);
-
- if (unlikely(ret)) {
- rte_errno = -ret;
- break;
- }
- }
-
- if (flags & OTX2_TIM_ENA_STATS)
- __atomic_fetch_add(&tim_ring->arm_cnt, index, __ATOMIC_RELAXED);
-
- return index;
-}
-
-static __rte_always_inline uint16_t
-tim_timer_arm_tmo_brst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint64_t timeout_tick,
- const uint16_t nb_timers, const uint8_t flags)
-{
- struct otx2_tim_ent entry[OTX2_TIM_MAX_BURST] __rte_cache_aligned;
- struct otx2_tim_ring *tim_ring = adptr->data->adapter_priv;
- uint16_t set_timers = 0;
- uint16_t arr_idx = 0;
- uint16_t idx;
- int ret;
-
- if (unlikely(!timeout_tick || timeout_tick >= tim_ring->nb_bkts)) {
- const enum rte_event_timer_state state = timeout_tick ?
- RTE_EVENT_TIMER_ERROR_TOOLATE :
- RTE_EVENT_TIMER_ERROR_TOOEARLY;
- for (idx = 0; idx < nb_timers; idx++)
- tim[idx]->state = state;
-
- rte_errno = EINVAL;
- return 0;
- }
-
- tim_sync_start_cyc(tim_ring);
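- /* Stage entries in bursts of up to OTX2_TIM_MAX_BURST so each batch
-  * copied into a bucket fits within one cache line.
-  */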
- while (arr_idx < nb_timers) {
- for (idx = 0; idx < OTX2_TIM_MAX_BURST && (arr_idx < nb_timers);
- idx++, arr_idx++) {
- tim_format_event(tim[arr_idx], &entry[idx]);
- }
- ret = tim_add_entry_brst(tim_ring, timeout_tick,
- &tim[set_timers], entry, idx, flags);
- set_timers += ret;
- if (ret != idx)
- break;
- }
- if (flags & OTX2_TIM_ENA_STATS)
- __atomic_fetch_add(&tim_ring->arm_cnt, set_timers,
- __ATOMIC_RELAXED);
-
- return set_timers;
-}
-
-#define FP(_name, _f3, _f2, _f1, _flags) \
-uint16_t __rte_noinline \
-otx2_tim_arm_burst_ ## _name(const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint16_t nb_timers) \
-{ \
- return tim_timer_arm_burst(adptr, tim, nb_timers, _flags); \
-}
-TIM_ARM_FASTPATH_MODES
-#undef FP
-
-#define FP(_name, _f2, _f1, _flags) \
-uint16_t __rte_noinline \
-otx2_tim_arm_tmo_tick_burst_ ## _name( \
- const struct rte_event_timer_adapter *adptr, \
- struct rte_event_timer **tim, \
- const uint64_t timeout_tick, \
- const uint16_t nb_timers) \
-{ \
- return tim_timer_arm_tmo_brst(adptr, tim, timeout_tick, \
- nb_timers, _flags); \
-}
-TIM_ARM_TMO_FASTPATH_MODES
-#undef FP
-
-uint16_t
-otx2_tim_timer_cancel_burst(const struct rte_event_timer_adapter *adptr,
- struct rte_event_timer **tim,
- const uint16_t nb_timers)
-{
- uint16_t index;
- int ret;
-
- RTE_SET_USED(adptr);
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
- for (index = 0; index < nb_timers; index++) {
- if (tim[index]->state == RTE_EVENT_TIMER_CANCELED) {
- rte_errno = EALREADY;
- break;
- }
-
- if (tim[index]->state != RTE_EVENT_TIMER_ARMED) {
- rte_errno = EINVAL;
- break;
- }
- ret = tim_rm_entry(tim[index]);
- if (ret) {
- rte_errno = -ret;
- break;
- }
- }
-
- return index;
-}
diff --git a/drivers/event/octeontx2/otx2_tim_worker.h b/drivers/event/octeontx2/otx2_tim_worker.h
deleted file mode 100644
index efe88a8692..0000000000
--- a/drivers/event/octeontx2/otx2_tim_worker.h
+++ /dev/null
@@ -1,598 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TIM_WORKER_H__
-#define __OTX2_TIM_WORKER_H__
-
-#include "otx2_tim_evdev.h"
-
-static inline uint8_t
-tim_bkt_fetch_lock(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_LOCK) &
- TIM_BUCKET_W1_M_LOCK;
-}
-
-static inline int16_t
-tim_bkt_fetch_rem(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_CHUNK_REMAINDER) &
- TIM_BUCKET_W1_M_CHUNK_REMAINDER;
-}
-
-static inline int16_t
-tim_bkt_get_rem(struct otx2_tim_bkt *bktp)
-{
- return __atomic_load_n(&bktp->chunk_remainder, __ATOMIC_ACQUIRE);
-}
-
-static inline void
-tim_bkt_set_rem(struct otx2_tim_bkt *bktp, uint16_t v)
-{
- __atomic_store_n(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
-}
-
-static inline void
-tim_bkt_sub_rem(struct otx2_tim_bkt *bktp, uint16_t v)
-{
- __atomic_fetch_sub(&bktp->chunk_remainder, v, __ATOMIC_RELAXED);
-}
-
-static inline uint8_t
-tim_bkt_get_hbt(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_HBT) & TIM_BUCKET_W1_M_HBT;
-}
-
-static inline uint8_t
-tim_bkt_get_bsk(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_BSK) & TIM_BUCKET_W1_M_BSK;
-}
-
-static inline uint64_t
-tim_bkt_clr_bsk(struct otx2_tim_bkt *bktp)
-{
- /* Clear everything except lock. */
- const uint64_t v = TIM_BUCKET_W1_M_LOCK << TIM_BUCKET_W1_S_LOCK;
-
- return __atomic_fetch_and(&bktp->w1, v, __ATOMIC_ACQ_REL);
-}
-
-static inline uint64_t
-tim_bkt_fetch_sema_lock(struct otx2_tim_bkt *bktp)
-{
- return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA_WLOCK,
- __ATOMIC_ACQUIRE);
-}
-
-static inline uint64_t
-tim_bkt_fetch_sema(struct otx2_tim_bkt *bktp)
-{
- return __atomic_fetch_add(&bktp->w1, TIM_BUCKET_SEMA, __ATOMIC_RELAXED);
-}
-
-static inline uint64_t
-tim_bkt_inc_lock(struct otx2_tim_bkt *bktp)
-{
- const uint64_t v = 1ull << TIM_BUCKET_W1_S_LOCK;
-
- return __atomic_fetch_add(&bktp->w1, v, __ATOMIC_ACQUIRE);
-}
-
-static inline void
-tim_bkt_dec_lock(struct otx2_tim_bkt *bktp)
-{
- __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELEASE);
-}
-
-static inline void
-tim_bkt_dec_lock_relaxed(struct otx2_tim_bkt *bktp)
-{
- __atomic_fetch_sub(&bktp->lock, 1, __ATOMIC_RELAXED);
-}
-
-static inline uint32_t
-tim_bkt_get_nent(uint64_t w1)
-{
- return (w1 >> TIM_BUCKET_W1_S_NUM_ENTRIES) &
- TIM_BUCKET_W1_M_NUM_ENTRIES;
-}
-
-static inline void
-tim_bkt_inc_nent(struct otx2_tim_bkt *bktp)
-{
- __atomic_add_fetch(&bktp->nb_entry, 1, __ATOMIC_RELAXED);
-}
-
-static inline void
-tim_bkt_add_nent(struct otx2_tim_bkt *bktp, uint32_t v)
-{
- __atomic_add_fetch(&bktp->nb_entry, v, __ATOMIC_RELAXED);
-}
-
-static inline uint64_t
-tim_bkt_clr_nent(struct otx2_tim_bkt *bktp)
-{
- const uint64_t v = ~(TIM_BUCKET_W1_M_NUM_ENTRIES <<
- TIM_BUCKET_W1_S_NUM_ENTRIES);
-
- return __atomic_and_fetch(&bktp->w1, v, __ATOMIC_ACQ_REL);
-}
-
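-/* Compute n % d using the precomputed reciprocal to avoid a hardware
- * divide on the fast path.
- */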
-static inline uint64_t
-tim_bkt_fast_mod(uint64_t n, uint64_t d, struct rte_reciprocal_u64 R)
-{
- return (n - (d * rte_reciprocal_divide_u64(n, &R)));
-}
-
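-/*
- * Map a relative timeout to its target bucket plus a "mirror" bucket half a
- * ring away; software tracks the active chunk pointer in the mirror's
- * current_chunk field.
- */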
-static __rte_always_inline void
-tim_get_target_bucket(struct otx2_tim_ring *const tim_ring,
- const uint32_t rel_bkt, struct otx2_tim_bkt **bkt,
- struct otx2_tim_bkt **mirr_bkt)
-{
- const uint64_t bkt_cyc = tim_cntvct() - tim_ring->ring_start_cyc;
- uint64_t bucket =
- rte_reciprocal_divide_u64(bkt_cyc, &tim_ring->fast_div) +
- rel_bkt;
- uint64_t mirr_bucket = 0;
-
- bucket =
- tim_bkt_fast_mod(bucket, tim_ring->nb_bkts, tim_ring->fast_bkt);
- mirr_bucket = tim_bkt_fast_mod(bucket + (tim_ring->nb_bkts >> 1),
- tim_ring->nb_bkts, tim_ring->fast_bkt);
- *bkt = &tim_ring->bkt[bucket];
- *mirr_bkt = &tim_ring->bkt[mirr_bucket];
-}
-
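-/*
- * Bulk-free every chunk in the bucket's chain except the first one, which
- * is returned to the caller for reuse.
- */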
-static struct otx2_tim_ent *
-tim_clr_bkt(struct otx2_tim_ring * const tim_ring,
- struct otx2_tim_bkt * const bkt)
-{
-#define TIM_MAX_OUTSTANDING_OBJ 64
- void *pend_chunks[TIM_MAX_OUTSTANDING_OBJ];
- struct otx2_tim_ent *chunk;
- struct otx2_tim_ent *pnext;
- uint8_t objs = 0;
-
- chunk = ((struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk);
- chunk = (struct otx2_tim_ent *)(uintptr_t)(chunk +
- tim_ring->nb_chunk_slots)->w0;
- while (chunk) {
- pnext = (struct otx2_tim_ent *)(uintptr_t)
- ((chunk + tim_ring->nb_chunk_slots)->w0);
- if (objs == TIM_MAX_OUTSTANDING_OBJ) {
- rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
- objs);
- objs = 0;
- }
- pend_chunks[objs++] = chunk;
- chunk = pnext;
- }
-
- if (objs)
- rte_mempool_put_bulk(tim_ring->chunk_pool, pend_chunks,
- objs);
-
- return (struct otx2_tim_ent *)(uintptr_t)bkt->first_chunk;
-}
-
-static struct otx2_tim_ent *
-tim_refill_chunk(struct otx2_tim_bkt * const bkt,
- struct otx2_tim_bkt * const mirr_bkt,
- struct otx2_tim_ring * const tim_ring)
-{
- struct otx2_tim_ent *chunk;
-
- if (bkt->nb_entry || !bkt->first_chunk) {
- if (unlikely(rte_mempool_get(tim_ring->chunk_pool,
- (void **)&chunk)))
- return NULL;
- if (bkt->nb_entry) {
- *(uint64_t *)(((struct otx2_tim_ent *)
- mirr_bkt->current_chunk) +
- tim_ring->nb_chunk_slots) =
- (uintptr_t)chunk;
- } else {
- bkt->first_chunk = (uintptr_t)chunk;
- }
- } else {
- chunk = tim_clr_bkt(tim_ring, bkt);
- bkt->first_chunk = (uintptr_t)chunk;
- }
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
-
- return chunk;
-}
-
-static struct otx2_tim_ent *
-tim_insert_chunk(struct otx2_tim_bkt * const bkt,
- struct otx2_tim_bkt * const mirr_bkt,
- struct otx2_tim_ring * const tim_ring)
-{
- struct otx2_tim_ent *chunk;
-
- if (unlikely(rte_mempool_get(tim_ring->chunk_pool, (void **)&chunk)))
- return NULL;
-
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
- if (bkt->nb_entry) {
- *(uint64_t *)(((struct otx2_tim_ent *)(uintptr_t)
- mirr_bkt->current_chunk) +
- tim_ring->nb_chunk_slots) = (uintptr_t)chunk;
- } else {
- bkt->first_chunk = (uintptr_t)chunk;
- }
- return chunk;
-}
-
-static __rte_always_inline int
-tim_add_entry_sp(struct otx2_tim_ring * const tim_ring,
- const uint32_t rel_bkt,
- struct rte_event_timer * const tim,
- const struct otx2_tim_ent * const pent,
- const uint8_t flags)
-{
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_ent *chunk = NULL;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
- int16_t rem;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-
- /* Get bucket semaphore. */
- lock_sema = tim_bkt_fetch_sema_lock(bkt);
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
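-   /* Wait (wfe on arm64, polling otherwise) until hardware
-    * clears the bucket's HBT (busy) bit before deciding
-    * whether to retry.
-    */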
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
- /* Insert the work. */
- rem = tim_bkt_fetch_rem(lock_sema);
-
- if (!rem) {
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- bkt->chunk_remainder = 0;
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim->state = RTE_EVENT_TIMER_ERROR;
- tim_bkt_dec_lock(bkt);
- return -ENOMEM;
- }
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- bkt->chunk_remainder = tim_ring->nb_chunk_slots - 1;
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += tim_ring->nb_chunk_slots - rem;
- }
-
- /* Copy work entry. */
- *chunk = *pent;
-
- tim->impl_opaque[0] = (uintptr_t)chunk;
- tim->impl_opaque[1] = (uintptr_t)bkt;
- __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
- tim_bkt_inc_nent(bkt);
- tim_bkt_dec_lock_relaxed(bkt);
-
- return 0;
-}
-
-static __rte_always_inline int
-tim_add_entry_mp(struct otx2_tim_ring * const tim_ring,
- const uint32_t rel_bkt,
- struct rte_event_timer * const tim,
- const struct otx2_tim_ent * const pent,
- const uint8_t flags)
-{
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_ent *chunk = NULL;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
- int16_t rem;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
- /* Get bucket semaphore. */
- lock_sema = tim_bkt_fetch_sema_lock(bkt);
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
-
- rem = tim_bkt_fetch_rem(lock_sema);
- if (rem < 0) {
- tim_bkt_dec_lock(bkt);
-#ifdef RTE_ARCH_ARM64
- uint64_t w1;
- asm volatile(" ldxr %[w1], [%[crem]] \n"
- " tbz %[w1], 63, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[w1], [%[crem]] \n"
- " tbnz %[w1], 63, rty%= \n"
- "dne%=: \n"
- : [w1] "=&r"(w1)
- : [crem] "r"(&bkt->w1)
- : "memory");
-#else
- while (__atomic_load_n((int64_t *)&bkt->w1, __ATOMIC_RELAXED) <
- 0)
- ;
-#endif
- goto __retry;
- } else if (!rem) {
- /* Only one thread can be here. */
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim->state = RTE_EVENT_TIMER_ERROR;
- tim_bkt_set_rem(bkt, 0);
- tim_bkt_dec_lock(bkt);
- return -ENOMEM;
- }
- *chunk = *pent;
- if (tim_bkt_fetch_lock(lock_sema)) {
- do {
- lock_sema = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (tim_bkt_fetch_lock(lock_sema) - 1);
- rte_atomic_thread_fence(__ATOMIC_ACQUIRE);
- }
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- __atomic_store_n(&bkt->chunk_remainder,
- tim_ring->nb_chunk_slots - 1, __ATOMIC_RELEASE);
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += tim_ring->nb_chunk_slots - rem;
- *chunk = *pent;
- }
-
- tim->impl_opaque[0] = (uintptr_t)chunk;
- tim->impl_opaque[1] = (uintptr_t)bkt;
- __atomic_store_n(&tim->state, RTE_EVENT_TIMER_ARMED, __ATOMIC_RELEASE);
- tim_bkt_inc_nent(bkt);
- tim_bkt_dec_lock_relaxed(bkt);
-
- return 0;
-}
-
-static inline uint16_t
-tim_cpy_wrk(uint16_t index, uint16_t cpy_lmt,
- struct otx2_tim_ent *chunk,
- struct rte_event_timer ** const tim,
- const struct otx2_tim_ent * const ents,
- const struct otx2_tim_bkt * const bkt)
-{
- for (; index < cpy_lmt; index++) {
- *chunk = *(ents + index);
- tim[index]->impl_opaque[0] = (uintptr_t)chunk++;
- tim[index]->impl_opaque[1] = (uintptr_t)bkt;
- tim[index]->state = RTE_EVENT_TIMER_ARMED;
- }
-
- return index;
-}
-
-/* Burst mode functions */
-static inline int
-tim_add_entry_brst(struct otx2_tim_ring * const tim_ring,
- const uint16_t rel_bkt,
- struct rte_event_timer ** const tim,
- const struct otx2_tim_ent *ents,
- const uint16_t nb_timers, const uint8_t flags)
-{
- struct otx2_tim_ent *chunk = NULL;
- struct otx2_tim_bkt *mirr_bkt;
- struct otx2_tim_bkt *bkt;
- uint16_t chunk_remainder;
- uint16_t index = 0;
- uint64_t lock_sema;
- int16_t rem, crem;
- uint8_t lock_cnt;
-
-__retry:
- tim_get_target_bucket(tim_ring, rel_bkt, &bkt, &mirr_bkt);
-
- /* Only one thread beyond this. */
- lock_sema = tim_bkt_inc_lock(bkt);
- lock_cnt = (uint8_t)
- ((lock_sema >> TIM_BUCKET_W1_S_LOCK) & TIM_BUCKET_W1_M_LOCK);
-
- if (lock_cnt) {
- tim_bkt_dec_lock(bkt);
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxrb %w[lock_cnt], [%[lock]] \n"
- " tst %w[lock_cnt], 255 \n"
- " beq dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxrb %w[lock_cnt], [%[lock]] \n"
- " tst %w[lock_cnt], 255 \n"
- " bne rty%= \n"
- "dne%=: \n"
- : [lock_cnt] "=&r"(lock_cnt)
- : [lock] "r"(&bkt->lock)
- : "memory");
-#else
- while (__atomic_load_n(&bkt->lock, __ATOMIC_RELAXED))
- ;
-#endif
- goto __retry;
- }
-
- /* Bucket related checks. */
- if (unlikely(tim_bkt_get_hbt(lock_sema))) {
- if (tim_bkt_get_nent(lock_sema) != 0) {
- uint64_t hbt_state;
-#ifdef RTE_ARCH_ARM64
- asm volatile(" ldxr %[hbt], [%[w1]] \n"
- " tbz %[hbt], 33, dne%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldxr %[hbt], [%[w1]] \n"
- " tbnz %[hbt], 33, rty%= \n"
- "dne%=: \n"
- : [hbt] "=&r"(hbt_state)
- : [w1] "r"((&bkt->w1))
- : "memory");
-#else
- do {
- hbt_state = __atomic_load_n(&bkt->w1,
- __ATOMIC_RELAXED);
- } while (hbt_state & BIT_ULL(33));
-#endif
-
- if (!(hbt_state & BIT_ULL(34))) {
- tim_bkt_dec_lock(bkt);
- goto __retry;
- }
- }
- }
-
- chunk_remainder = tim_bkt_fetch_rem(lock_sema);
- rem = chunk_remainder - nb_timers;
- if (rem < 0) {
- crem = tim_ring->nb_chunk_slots - chunk_remainder;
- if (chunk_remainder && crem) {
- chunk = ((struct otx2_tim_ent *)
- mirr_bkt->current_chunk) + crem;
-
- index = tim_cpy_wrk(index, chunk_remainder, chunk, tim,
- ents, bkt);
- tim_bkt_sub_rem(bkt, chunk_remainder);
- tim_bkt_add_nent(bkt, chunk_remainder);
- }
-
- if (flags & OTX2_TIM_ENA_FB)
- chunk = tim_refill_chunk(bkt, mirr_bkt, tim_ring);
- if (flags & OTX2_TIM_ENA_DFB)
- chunk = tim_insert_chunk(bkt, mirr_bkt, tim_ring);
-
- if (unlikely(chunk == NULL)) {
- tim_bkt_dec_lock(bkt);
- rte_errno = ENOMEM;
- tim[index]->state = RTE_EVENT_TIMER_ERROR;
- return crem;
- }
- *(uint64_t *)(chunk + tim_ring->nb_chunk_slots) = 0;
- mirr_bkt->current_chunk = (uintptr_t)chunk;
- tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
-
- rem = nb_timers - chunk_remainder;
- tim_bkt_set_rem(bkt, tim_ring->nb_chunk_slots - rem);
- tim_bkt_add_nent(bkt, rem);
- } else {
- chunk = (struct otx2_tim_ent *)mirr_bkt->current_chunk;
- chunk += (tim_ring->nb_chunk_slots - chunk_remainder);
-
- tim_cpy_wrk(index, nb_timers, chunk, tim, ents, bkt);
- tim_bkt_sub_rem(bkt, nb_timers);
- tim_bkt_add_nent(bkt, nb_timers);
- }
-
- tim_bkt_dec_lock(bkt);
-
- return nb_timers;
-}
-
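In the burst path above, when nb_timers exceeds the free slots in the
current chunk, the first chunk_remainder entries fill the old chunk and
the rest go into a freshly linked one. A standalone sketch of that split
arithmetic (illustrative names, not driver API):

#include <stdint.h>

struct burst_split {
	uint16_t in_old_chunk;	/* copied into the current chunk */
	uint16_t in_new_chunk;	/* copied into a fresh chunk */
};

static inline struct burst_split
split_burst(uint16_t chunk_remainder, uint16_t nb_timers)
{
	struct burst_split s;

	if (nb_timers <= chunk_remainder) {
		s.in_old_chunk = nb_timers;
		s.in_new_chunk = 0;
	} else {
		s.in_old_chunk = chunk_remainder;
		s.in_new_chunk = nb_timers - chunk_remainder;
	}
	return s;
}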
-static int
-tim_rm_entry(struct rte_event_timer *tim)
-{
- struct otx2_tim_ent *entry;
- struct otx2_tim_bkt *bkt;
- uint64_t lock_sema;
-
- if (tim->impl_opaque[1] == 0 || tim->impl_opaque[0] == 0)
- return -ENOENT;
-
- entry = (struct otx2_tim_ent *)(uintptr_t)tim->impl_opaque[0];
- if (entry->wqe != tim->ev.u64) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- return -ENOENT;
- }
-
- bkt = (struct otx2_tim_bkt *)(uintptr_t)tim->impl_opaque[1];
- lock_sema = tim_bkt_inc_lock(bkt);
- if (tim_bkt_get_hbt(lock_sema) || !tim_bkt_get_nent(lock_sema)) {
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim_bkt_dec_lock(bkt);
- return -ENOENT;
- }
-
- entry->w0 = 0;
- entry->wqe = 0;
- tim->state = RTE_EVENT_TIMER_CANCELED;
- tim->impl_opaque[0] = 0;
- tim->impl_opaque[1] = 0;
- tim_bkt_dec_lock(bkt);
-
- return 0;
-}
-
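tim_rm_entry() is what backs timer cancellation for this adapter, which
applications reach through the generic event timer API. A hedged usage
sketch, assuming 'adapter' and 'evtim' were created and armed elsewhere:

#include <rte_event_timer_adapter.h>
#include <rte_errno.h>

static int
cancel_one(const struct rte_event_timer_adapter *adapter,
	   struct rte_event_timer *evtim)
{
	struct rte_event_timer *evtims[] = { evtim };

	/* Returns the number of timers successfully canceled. */
	if (rte_event_timer_cancel_burst(adapter, evtims, 1) != 1)
		return -rte_errno;	/* e.g. already fired or invalid */

	return 0;
}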
-#endif /* __OTX2_TIM_WORKER_H__ */
diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c
deleted file mode 100644
index 95139d27a3..0000000000
--- a/drivers/event/octeontx2/otx2_worker.c
+++ /dev/null
@@ -1,372 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_worker.h"
-
-static __rte_noinline uint8_t
-otx2_ssogws_new_event(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint64_t event_ptr = ev->u64;
- const uint16_t grp = ev->queue_id;
-
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- otx2_ssogws_add_work(ws, event_ptr, tag, new_tt, grp);
-
- return 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_fwd_swtag(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op));
-
- /* 96XX model
- * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
- *
- * SSO_SYNC_ORDERED norm norm untag
- * SSO_SYNC_ATOMIC norm norm untag
- * SSO_SYNC_UNTAGGED norm norm NOOP
- */
-
- if (new_tt == SSO_SYNC_UNTAGGED) {
- if (cur_tt != SSO_SYNC_UNTAGGED)
- otx2_ssogws_swtag_untag(ws);
- } else {
- otx2_ssogws_swtag_norm(ws, tag, new_tt);
- }
-
- ws->swtag_req = 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_fwd_group(struct otx2_ssogws *ws, const struct rte_event *ev,
- const uint16_t grp)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_UPD_WQP_GRP1);
- rte_smp_wmb();
- otx2_ssogws_swtag_desched(ws, tag, new_tt, grp);
-}
-
-static __rte_always_inline void
-otx2_ssogws_forward_event(struct otx2_ssogws *ws, const struct rte_event *ev)
-{
- const uint8_t grp = ev->queue_id;
-
-	/* Group hasn't changed; use SWTAG to forward the event */
- if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(ws->tag_op)) == grp)
- otx2_ssogws_fwd_swtag(ws, ev);
- else
-		/*
-		 * The group has changed for group-based work pipelining;
-		 * use the deschedule/add_work operation to transfer the
-		 * event to the new group/core.
-		 */
- otx2_ssogws_fwd_group(ws, ev, grp);
-}
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(timeout_ticks); \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return 1; \
- } \
- \
- return otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint16_t ret = 1; \
- uint64_t iter; \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return ret; \
- } \
- \
- ret = otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
- ret = otx2_ssogws_get_work(ws, ev, flags, \
- ws->lookup_mem); \
- \
- return ret; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_timeout_burst_ ##name(void *port, struct rte_event ev[],\
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_timeout_ ##name(port, ev, timeout_ticks);\
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(timeout_ticks); \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return 1; \
- } \
- \
- return otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_seg_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint16_t ret = 1; \
- uint64_t iter; \
- \
- if (ws->swtag_req) { \
- ws->swtag_req = 0; \
- otx2_ssogws_swtag_wait(ws); \
- return ret; \
- } \
- \
- ret = otx2_ssogws_get_work(ws, ev, flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
- for (iter = 1; iter < timeout_ticks && (ret == 0); iter++) \
- ret = otx2_ssogws_get_work(ws, ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem); \
- \
- return ret; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_deq_seg_timeout_ ##name(port, ev, \
- timeout_ticks); \
-}
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
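The R() template above, instantiated through
SSO_RX_ADPTR_ENQ_FASTPATH_FUNC, stamps out one dequeue function per
Rx-offload flag combination, so each variant compiles with its flags as
compile-time constants. A stripped-down sketch of the same X-macro
technique, with hypothetical names:

static int do_dequeue(void *port, int flags);	/* assumed helper */

#define MY_FASTPATH_MODES \
	M(no_offload, 0)  \
	M(checksum, 1)

#define M(name, flags)                          \
static int my_deq_##name(void *port)            \
{                                               \
	/* 'flags' folds to a constant here. */ \
	return do_dequeue(port, (flags));       \
}
MY_FASTPATH_MODES
#undef M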
-uint16_t __rte_hot
-otx2_ssogws_enq(void *port, const struct rte_event *ev)
-{
- struct otx2_ssogws *ws = port;
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- rte_smp_mb();
- return otx2_ssogws_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- otx2_ssogws_forward_event(ws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return otx2_ssogws_enq(port, ev);
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
- uint16_t i, rc = 1;
-
- rte_smp_mb();
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- for (i = 0; i < nb_events && rc; i++)
- rc = otx2_ssogws_new_event(ws, &ev[i]);
-
- return nb_events;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws *ws = port;
-
- RTE_SET_USED(nb_events);
- otx2_ssogws_forward_event(ws, ev);
-
- return 1;
-}
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- struct otx2_ssogws *ws = port; \
- uint64_t cmd[sz]; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \
- (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- flags); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, struct rte_event ev[],\
- uint16_t nb_events) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- struct otx2_ssogws *ws = port; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \
- (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- (flags) | NIX_TX_MULTI_SEG_F); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-void
-ssogws_flush_events(struct otx2_ssogws *ws, uint8_t queue_id, uintptr_t base,
- otx2_handle_event_t fn, void *arg)
-{
- uint64_t cq_ds_cnt = 1;
- uint64_t aq_cnt = 1;
- uint64_t ds_cnt = 1;
- struct rte_event ev;
- uint64_t enable;
- uint64_t val;
-
- enable = otx2_read64(base + SSO_LF_GGRP_QCTL);
- if (!enable)
- return;
-
- val = queue_id; /* GGRP ID */
- val |= BIT_ULL(18); /* Grouped */
- val |= BIT_ULL(16); /* WAIT */
-
- aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
- ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
- cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
- cq_ds_cnt &= 0x3FFF3FFF0000;
-
- while (aq_cnt || cq_ds_cnt || ds_cnt) {
- otx2_write64(val, ws->getwrk_op);
- otx2_ssogws_get_work_empty(ws, &ev, 0);
- if (fn != NULL && ev.u64 != 0)
- fn(arg, ev);
- if (ev.sched_type != SSO_TT_EMPTY)
- otx2_ssogws_swtag_flush(ws->tag_op, ws->swtag_flush_op);
- rte_mb();
- aq_cnt = otx2_read64(base + SSO_LF_GGRP_AQ_CNT);
- ds_cnt = otx2_read64(base + SSO_LF_GGRP_MISC_CNT);
- cq_ds_cnt = otx2_read64(base + SSO_LF_GGRP_INT_CNT);
- /* Extract cq and ds count */
- cq_ds_cnt &= 0x3FFF3FFF0000;
- }
-
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_GWC_INVAL);
- rte_mb();
-}
-
-void
-ssogws_reset(struct otx2_ssogws *ws)
-{
- uintptr_t base = OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op);
- uint64_t pend_state;
- uint8_t pend_tt;
- uint64_t tag;
-
-	/* Wait until getwork/swtp/waitw/desched completes. */
- do {
- pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
- rte_mb();
- } while (pend_state & (BIT_ULL(63) | BIT_ULL(62) | BIT_ULL(58)));
-
- tag = otx2_read64(base + SSOW_LF_GWS_TAG);
- pend_tt = (tag >> 32) & 0x3;
- if (pend_tt != SSO_TT_EMPTY) { /* Work was pending */
- if (pend_tt == SSO_SYNC_ATOMIC || pend_tt == SSO_SYNC_ORDERED)
- otx2_ssogws_swtag_untag(ws);
- otx2_ssogws_desched(ws);
- }
- rte_mb();
-
- /* Wait for desched to complete. */
- do {
- pend_state = otx2_read64(base + SSOW_LF_GWS_PENDSTATE);
- rte_mb();
- } while (pend_state & BIT_ULL(58));
-}
diff --git a/drivers/event/octeontx2/otx2_worker.h b/drivers/event/octeontx2/otx2_worker.h
deleted file mode 100644
index aa766c6602..0000000000
--- a/drivers/event/octeontx2/otx2_worker.h
+++ /dev/null
@@ -1,339 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_WORKER_H__
-#define __OTX2_WORKER_H__
-
-#include <rte_common.h>
-#include <rte_branch_prediction.h>
-
-#include <otx2_common.h>
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_rx.h"
-#include "otx2_ethdev_sec_tx.h"
-
-/* SSO Operations */
-
-static __rte_always_inline uint16_t
-otx2_ssogws_get_work(struct otx2_ssogws *ws, struct rte_event *ev,
- const uint32_t flags, const void * const lookup_mem)
-{
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
- otx2_write64(BIT_ULL(16) | /* wait for work. */
- 1, /* Use Mask set 0. */
- ws->getwrk_op);
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F)
- rte_prefetch_non_temporal(lookup_mem);
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbz %[tag], 63, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8] \n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
-
- get_work1 = otx2_read64(ws->wqp_op);
- rte_prefetch0((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch0((const void *)mbuf);
-#endif
-
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY) {
- if ((flags & NIX_RX_OFFLOAD_SECURITY_F) &&
- (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
- get_work1 = otx2_handle_crypto_event(get_work1);
- } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
- (uint32_t) event.get_work0, flags,
- lookup_mem);
-			/* Extract tstamp if PTP is enabled. */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)
- get_work1) +
- OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf,
- ws->tstamp, flags,
- (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
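The shift-and-mask expression above repacks the raw GWS tag word into
the rte_event layout: the 2-bit tag type moves from bits [33:32] to the
sched_type field at [39:38], the 10-bit group moves from [45:36] up to
bit 40 (queue_id), and the 32-bit tag stays in the low word. An
equivalent, more explicit sketch:

#include <stdint.h>

static inline uint64_t
gws_tag_to_rte_event(uint64_t w)
{
	uint64_t tt  = (w >> 32) & 0x3;		/* tag type */
	uint64_t grp = (w >> 36) & 0x3FF;	/* group id */
	uint64_t tag = w & 0xFFFFFFFF;		/* event tag */

	/* Same result as the masked-shift form in the driver. */
	return (grp << 40) | (tt << 38) | tag;
}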
-/* Used when cleaning up the workslot. */
-static __rte_always_inline uint16_t
-otx2_ssogws_get_work_empty(struct otx2_ssogws *ws, struct rte_event *ev,
- const uint32_t flags)
-{
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbz %[tag], 63, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8] \n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
-
- get_work1 = otx2_read64(ws->wqp_op);
- rte_prefetch_non_temporal((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch_non_temporal((const void *)mbuf);
-#endif
-
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY &&
- event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- otx2_wqe_to_mbuf(get_work1, mbuf, event.sub_event_type,
- (uint32_t) event.get_work0, flags, NULL);
-		/* Extract tstamp if PTP is enabled. */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)get_work1)
- + OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, ws->tstamp,
- flags, (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_add_work(struct otx2_ssogws *ws, const uint64_t event_ptr,
- const uint32_t tag, const uint8_t new_tt,
- const uint16_t grp)
-{
- uint64_t add_work0;
-
- add_work0 = tag | ((uint64_t)(new_tt) << 32);
- otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_desched(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt,
- uint16_t grp)
-{
- uint64_t val;
-
- val = tag | ((uint64_t)(new_tt & 0x3) << 32) | ((uint64_t)grp << 34);
- otx2_write64(val, ws->swtag_desched_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_norm(struct otx2_ssogws *ws, uint32_t tag, uint8_t new_tt)
-{
- uint64_t val;
-
- val = tag | ((uint64_t)(new_tt & 0x3) << 32);
- otx2_write64(val, ws->swtag_norm_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_untag(struct otx2_ssogws *ws)
-{
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_SWTAG_UNTAG);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_flush(uint64_t tag_op, uint64_t flush_op)
-{
- if (OTX2_SSOW_TT_FROM_TAG(otx2_read64(tag_op)) == SSO_TT_EMPTY)
- return;
- otx2_write64(0, flush_op);
-}
-
-static __rte_always_inline void
-otx2_ssogws_desched(struct otx2_ssogws *ws)
-{
- otx2_write64(0, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_DESCHED);
-}
-
-static __rte_always_inline void
-otx2_ssogws_swtag_wait(struct otx2_ssogws *ws)
-{
-#ifdef RTE_ARCH_ARM64
- uint64_t swtp;
-
- asm volatile(" ldr %[swtb], [%[swtp_loc]] \n"
- " tbz %[swtb], 62, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[swtb], [%[swtp_loc]] \n"
- " tbnz %[swtb], 62, rty%= \n"
- "done%=: \n"
- : [swtb] "=&r" (swtp)
- : [swtp_loc] "r" (ws->tag_op));
-#else
- /* Wait for the SWTAG/SWTAG_FULL operation */
- while (otx2_read64(ws->tag_op) & BIT_ULL(62))
- ;
-#endif
-}
-
-static __rte_always_inline void
-otx2_ssogws_head_wait(uint64_t tag_op)
-{
-#ifdef RTE_ARCH_ARM64
- uint64_t tag;
-
- asm volatile (
- " ldr %[tag], [%[tag_op]] \n"
- " tbnz %[tag], 35, done%= \n"
- " sevl \n"
- "rty%=: wfe \n"
- " ldr %[tag], [%[tag_op]] \n"
- " tbz %[tag], 35, rty%= \n"
- "done%=: \n"
- : [tag] "=&r" (tag)
- : [tag_op] "r" (tag_op)
- );
-#else
- /* Wait for the HEAD to be set */
- while (!(otx2_read64(tag_op) & BIT_ULL(35)))
- ;
-#endif
-}
-
-static __rte_always_inline const struct otx2_eth_txq *
-otx2_ssogws_xtract_meta(struct rte_mbuf *m,
- const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT])
-{
- return (const struct otx2_eth_txq *)txq_data[m->port][
- rte_event_eth_tx_adapter_txq_get(m)];
-}
-
-static __rte_always_inline void
-otx2_ssogws_prepare_pkt(const struct otx2_eth_txq *txq, struct rte_mbuf *m,
- uint64_t *cmd, const uint32_t flags)
-{
- otx2_lmt_mov(cmd, txq->cmd, otx2_nix_tx_ext_subs(flags));
- otx2_nix_xmit_prepare(m, cmd, flags, txq->lso_tun_fmt);
-}
-
-static __rte_always_inline uint16_t
-otx2_ssogws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd,
- const uint64_t txq_data[][RTE_MAX_QUEUES_PER_PORT],
- const uint32_t flags)
-{
- struct rte_mbuf *m = ev->mbuf;
- const struct otx2_eth_txq *txq;
- uint16_t ref_cnt = m->refcnt;
-
- if ((flags & NIX_TX_OFFLOAD_SECURITY_F) &&
- (m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
- txq = otx2_ssogws_xtract_meta(m, txq_data);
- return otx2_sec_event_tx(base, ev, m, txq, flags);
- }
-
- /* Perform header writes before barrier for TSO */
- otx2_nix_xmit_prepare_tso(m, flags);
-	/* Let's commit any changes to the packet here, since no
-	 * further changes will be made to the mbuf when fast free is
-	 * set. When fast free is not set, both otx2_nix_prepare_mseg()
-	 * and otx2_nix_xmit_prepare() have a barrier after the refcnt
-	 * update.
-	 */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
- txq = otx2_ssogws_xtract_meta(m, txq_data);
- otx2_ssogws_prepare_pkt(txq, m, cmd, flags);
-
- if (flags & NIX_TX_MULTI_SEG_F) {
- const uint16_t segdw = otx2_nix_prepare_mseg(m, cmd, flags);
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- m->ol_flags, segdw, flags);
- if (!ev->sched_type) {
- otx2_nix_xmit_mseg_prep_lmt(cmd, txq->lmt_addr, segdw);
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
- if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0)
- otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr,
- txq->io_addr, segdw);
- } else {
- otx2_nix_xmit_mseg_one(cmd, txq->lmt_addr,
- txq->io_addr, segdw);
- }
- } else {
-		/* Pass the number of segdws as 4: HDR + EXT + SG + SMEM */
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- m->ol_flags, 4, flags);
-
- if (!ev->sched_type) {
- otx2_nix_xmit_prep_lmt(cmd, txq->lmt_addr, flags);
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
- if (otx2_nix_xmit_submit_lmt(txq->io_addr) == 0)
- otx2_nix_xmit_one(cmd, txq->lmt_addr,
- txq->io_addr, flags);
- } else {
- otx2_nix_xmit_one(cmd, txq->lmt_addr, txq->io_addr,
- flags);
- }
- }
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- if (ref_cnt > 1)
- return 1;
- }
-
- otx2_ssogws_swtag_flush(base + SSOW_LF_GWS_TAG,
- base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
-
- return 1;
-}
-
-#endif
diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c
deleted file mode 100644
index 81af4ca904..0000000000
--- a/drivers/event/octeontx2/otx2_worker_dual.c
+++ /dev/null
@@ -1,345 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_worker_dual.h"
-#include "otx2_worker.h"
-
-static __rte_noinline uint8_t
-otx2_ssogws_dual_new_event(struct otx2_ssogws_dual *ws,
- const struct rte_event *ev)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
- const uint64_t event_ptr = ev->u64;
- const uint16_t grp = ev->queue_id;
-
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- otx2_ssogws_dual_add_work(ws, event_ptr, tag, new_tt, grp);
-
- return 1;
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_fwd_swtag(struct otx2_ssogws_state *ws,
- const struct rte_event *ev)
-{
- const uint8_t cur_tt = OTX2_SSOW_TT_FROM_TAG(otx2_read64(ws->tag_op));
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- /* 96XX model
- * cur_tt/new_tt SSO_SYNC_ORDERED SSO_SYNC_ATOMIC SSO_SYNC_UNTAGGED
- *
- * SSO_SYNC_ORDERED norm norm untag
- * SSO_SYNC_ATOMIC norm norm untag
- * SSO_SYNC_UNTAGGED norm norm NOOP
- */
- if (new_tt == SSO_SYNC_UNTAGGED) {
- if (cur_tt != SSO_SYNC_UNTAGGED)
- otx2_ssogws_swtag_untag((struct otx2_ssogws *)ws);
- } else {
- otx2_ssogws_swtag_norm((struct otx2_ssogws *)ws, tag, new_tt);
- }
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_fwd_group(struct otx2_ssogws_state *ws,
- const struct rte_event *ev, const uint16_t grp)
-{
- const uint32_t tag = (uint32_t)ev->event;
- const uint8_t new_tt = ev->sched_type;
-
- otx2_write64(ev->u64, OTX2_SSOW_GET_BASE_ADDR(ws->getwrk_op) +
- SSOW_LF_GWS_OP_UPD_WQP_GRP1);
- rte_smp_wmb();
- otx2_ssogws_swtag_desched((struct otx2_ssogws *)ws, tag, new_tt, grp);
-}
-
-static __rte_always_inline void
-otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws,
- struct otx2_ssogws_state *vws,
- const struct rte_event *ev)
-{
- const uint8_t grp = ev->queue_id;
-
-	/* Group hasn't changed; use SWTAG to forward the event */
- if (OTX2_SSOW_GRP_FROM_TAG(otx2_read64(vws->tag_op)) == grp) {
- otx2_ssogws_dual_fwd_swtag(vws, ev);
- ws->swtag_req = 1;
- } else {
-		/*
-		 * The group has changed for group-based work pipelining;
-		 * use the deschedule/add_work operation to transfer the
-		 * event to the new group/core.
-		 */
- otx2_ssogws_dual_fwd_group(vws, ev, grp);
- }
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq(void *port, const struct rte_event *ev)
-{
- struct otx2_ssogws_dual *ws = port;
- struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
-
- switch (ev->op) {
- case RTE_EVENT_OP_NEW:
- rte_smp_mb();
- return otx2_ssogws_dual_new_event(ws, ev);
- case RTE_EVENT_OP_FORWARD:
- otx2_ssogws_dual_forward_event(ws, vws, ev);
- break;
- case RTE_EVENT_OP_RELEASE:
- otx2_ssogws_swtag_flush(vws->tag_op, vws->swtag_flush_op);
- break;
- default:
- return 0;
- }
-
- return 1;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- RTE_SET_USED(nb_events);
- return otx2_ssogws_dual_enq(port, ev);
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
- uint16_t i, rc = 1;
-
- rte_smp_mb();
- if (ws->xaq_lmt <= *ws->fc_mem)
- return 0;
-
- for (i = 0; i < nb_events && rc; i++)
- rc = otx2_ssogws_dual_new_event(ws, &ev[i]);
-
- return nb_events;
-}
-
-uint16_t __rte_hot
-otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[],
- uint16_t nb_events)
-{
- struct otx2_ssogws_dual *ws = port;
- struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws];
-
- RTE_SET_USED(nb_events);
- otx2_ssogws_dual_forward_event(ws, vws, ev);
-
- return 1;
-}
-
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint8_t gw; \
- \
- rte_prefetch_non_temporal(ws); \
- RTE_SET_USED(timeout_ticks); \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags, ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_burst_ ##name(void *port, struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_ ##name(port, ev, timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t iter; \
- uint8_t gw; \
- \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags, ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], \
- ev, flags, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- } \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_timeout_ ##name(port, ev, \
- timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint8_t gw; \
- \
- RTE_SET_USED(timeout_ticks); \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_seg_ ##name(port, ev, \
- timeout_ticks); \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \
- struct rte_event *ev, \
- uint64_t timeout_ticks) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t iter; \
- uint8_t gw; \
- \
- if (ws->swtag_req) { \
- otx2_ssogws_swtag_wait((struct otx2_ssogws *) \
- &ws->ws_state[!ws->vws]); \
- ws->swtag_req = 0; \
- return 1; \
- } \
- \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], ev, \
- flags | NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- for (iter = 1; iter < timeout_ticks && (gw == 0); iter++) { \
- gw = otx2_ssogws_dual_get_work(&ws->ws_state[ws->vws], \
- &ws->ws_state[!ws->vws], \
- ev, flags | \
- NIX_RX_MULTI_SEG_F, \
- ws->lookup_mem, \
- ws->tstamp); \
- ws->vws = !ws->vws; \
- } \
- \
- return gw; \
-} \
- \
-uint16_t __rte_hot \
-otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events, \
- uint64_t timeout_ticks) \
-{ \
- RTE_SET_USED(nb_events); \
- \
- return otx2_ssogws_dual_deq_seg_timeout_ ##name(port, ev, \
- timeout_ticks); \
-}
-
-SSO_RX_ADPTR_ENQ_FASTPATH_FUNC
-#undef R
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- struct otx2_ssogws_dual *ws = port; \
- uint64_t cmd[sz]; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \
- cmd, (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, flags); \
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-uint16_t __rte_hot \
-otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \
- struct rte_event ev[], \
- uint16_t nb_events) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- struct otx2_ssogws_dual *ws = port; \
- \
- RTE_SET_USED(nb_events); \
- return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \
- cmd, (const uint64_t \
- (*)[RTE_MAX_QUEUES_PER_PORT]) \
- &ws->tx_adptr_data, \
- (flags) | NIX_TX_MULTI_SEG_F);\
-}
-SSO_TX_ADPTR_ENQ_FASTPATH_FUNC
-#undef T
diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h
deleted file mode 100644
index 36ae4dd88f..0000000000
--- a/drivers/event/octeontx2/otx2_worker_dual.h
+++ /dev/null
@@ -1,110 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_WORKER_DUAL_H__
-#define __OTX2_WORKER_DUAL_H__
-
-#include <rte_branch_prediction.h>
-#include <rte_common.h>
-
-#include <otx2_common.h>
-#include "otx2_evdev.h"
-#include "otx2_evdev_crypto_adptr_rx.h"
-
-/* SSO Operations */
-static __rte_always_inline uint16_t
-otx2_ssogws_dual_get_work(struct otx2_ssogws_state *ws,
- struct otx2_ssogws_state *ws_pair,
- struct rte_event *ev, const uint32_t flags,
- const void * const lookup_mem,
- struct otx2_timesync_info * const tstamp)
-{
- const uint64_t set_gw = BIT_ULL(16) | 1;
- union otx2_sso_event event;
- uint64_t tstamp_ptr;
- uint64_t get_work1;
- uint64_t mbuf;
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F)
- rte_prefetch_non_temporal(lookup_mem);
-#ifdef RTE_ARCH_ARM64
- asm volatile(
- "rty%=: \n"
- " ldr %[tag], [%[tag_loc]] \n"
- " ldr %[wqp], [%[wqp_loc]] \n"
- " tbnz %[tag], 63, rty%= \n"
- "done%=: str %[gw], [%[pong]] \n"
- " dmb ld \n"
- " prfm pldl1keep, [%[wqp], #8]\n"
- " sub %[mbuf], %[wqp], #0x80 \n"
- " prfm pldl1keep, [%[mbuf]] \n"
- : [tag] "=&r" (event.get_work0),
- [wqp] "=&r" (get_work1),
- [mbuf] "=&r" (mbuf)
- : [tag_loc] "r" (ws->tag_op),
- [wqp_loc] "r" (ws->wqp_op),
- [gw] "r" (set_gw),
- [pong] "r" (ws_pair->getwrk_op)
- );
-#else
- event.get_work0 = otx2_read64(ws->tag_op);
- while ((BIT_ULL(63)) & event.get_work0)
- event.get_work0 = otx2_read64(ws->tag_op);
- get_work1 = otx2_read64(ws->wqp_op);
- otx2_write64(set_gw, ws_pair->getwrk_op);
-
- rte_prefetch0((const void *)get_work1);
- mbuf = (uint64_t)((char *)get_work1 - sizeof(struct rte_mbuf));
- rte_prefetch0((const void *)mbuf);
-#endif
- event.get_work0 = (event.get_work0 & (0x3ull << 32)) << 6 |
- (event.get_work0 & (0x3FFull << 36)) << 4 |
- (event.get_work0 & 0xffffffff);
-
- if (event.sched_type != SSO_TT_EMPTY) {
- if ((flags & NIX_RX_OFFLOAD_SECURITY_F) &&
- (event.event_type == RTE_EVENT_TYPE_CRYPTODEV)) {
- get_work1 = otx2_handle_crypto_event(get_work1);
- } else if (event.event_type == RTE_EVENT_TYPE_ETHDEV) {
- uint8_t port = event.sub_event_type;
-
- event.sub_event_type = 0;
- otx2_wqe_to_mbuf(get_work1, mbuf, port,
- event.flow_id, flags, lookup_mem);
-			/* Extract tstamp if PTP is enabled. CGX prepends
-			 * the timestamp at the start of packet data, and
-			 * it can be derived from WQE dword 9, which
-			 * corresponds to the SG iova.
-			 * rte_pktmbuf_mtod_offset() could be used here,
-			 * but it hurts performance because it reads
-			 * mbuf->buf_addr, which is generally not in cache
-			 * on the fast path.
-			 */
- tstamp_ptr = *(uint64_t *)(((struct nix_wqe_hdr_s *)
- get_work1) +
- OTX2_SSO_WQE_SG_PTR);
- otx2_nix_mbuf_to_tstamp((struct rte_mbuf *)mbuf, tstamp,
- flags, (uint64_t *)tstamp_ptr);
- get_work1 = mbuf;
- }
- }
-
- ev->event = event.get_work0;
- ev->u64 = get_work1;
-
- return !!get_work1;
-}
-
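The dual workslot above pipelines scheduling: on arm64 the
"str %[gw], [%[pong]]" posts the next GETWORK to the pair slot while the
current slot's result is consumed, and the C fallback does the same via
otx2_write64(set_gw, ws_pair->getwrk_op). A generic sketch of the
ping-pong (illustrative names, not driver API):

struct dual_ws {
	int vws;	/* index of the slot to poll next */
};

static inline int
dual_poll(struct dual_ws *ws, int (*get_work)(int cur, int pair))
{
	/* Poll the current slot; get_work() kicks the pair slot. */
	int gw = get_work(ws->vws, !ws->vws);

	ws->vws = !ws->vws;	/* next call drains the other slot */
	return gw;
}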
-static __rte_always_inline void
-otx2_ssogws_dual_add_work(struct otx2_ssogws_dual *ws, const uint64_t event_ptr,
- const uint32_t tag, const uint8_t new_tt,
- const uint16_t grp)
-{
- uint64_t add_work0;
-
- add_work0 = tag | ((uint64_t)(new_tt) << 32);
- otx2_store_pair(add_work0, event_ptr, ws->grps_base[grp]);
-}
-
-#endif
diff --git a/drivers/event/octeontx2/version.map b/drivers/event/octeontx2/version.map
deleted file mode 100644
index c2e0723b4c..0000000000
--- a/drivers/event/octeontx2/version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_22 {
- local: *;
-};
diff --git a/drivers/mempool/cnxk/cnxk_mempool.c b/drivers/mempool/cnxk/cnxk_mempool.c
index 57be33b862..ea473552dd 100644
--- a/drivers/mempool/cnxk/cnxk_mempool.c
+++ b/drivers/mempool/cnxk/cnxk_mempool.c
@@ -161,48 +161,20 @@ npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}
static const struct rte_pci_id npa_pci_map[] = {
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_PF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KA,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CN10KAS,
- },
- {
- .class_id = RTE_CLASS_ANY_ID,
- .vendor_id = PCI_VENDOR_ID_CAVIUM,
- .device_id = PCI_DEVID_CNXK_RVU_NPA_VF,
- .subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,
- .subsystem_device_id = PCI_SUBSYSTEM_DEVID_CNF10KA,
- },
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KA, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN10KAS, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_NPA_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_NPA_VF),
{
.vendor_id = 0,
},
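Each CNXK_PCI_ID() row above replaces a six-field table entry. The real
macro is defined in the cnxk driver headers; a plausible reconstruction,
under that assumption and with an illustrative name, looks like:

#define EXAMPLE_CNXK_PCI_ID(subsys_dev, dev)                  \
	{                                                     \
		.class_id = RTE_CLASS_ANY_ID,                 \
		.vendor_id = PCI_VENDOR_ID_CAVIUM,            \
		.device_id = (dev),                           \
		.subsystem_vendor_id = PCI_VENDOR_ID_CAVIUM,  \
		.subsystem_device_id = (subsys_dev),          \
	}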
diff --git a/drivers/mempool/meson.build b/drivers/mempool/meson.build
index d295263b87..dc88812585 100644
--- a/drivers/mempool/meson.build
+++ b/drivers/mempool/meson.build
@@ -7,7 +7,6 @@ drivers = [
'dpaa',
'dpaa2',
'octeontx',
- 'octeontx2',
'ring',
'stack',
]
diff --git a/drivers/mempool/octeontx2/meson.build b/drivers/mempool/octeontx2/meson.build
deleted file mode 100644
index a4bea6d364..0000000000
--- a/drivers/mempool/octeontx2/meson.build
+++ /dev/null
@@ -1,18 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_mempool.c',
- 'otx2_mempool_debug.c',
- 'otx2_mempool_irq.c',
- 'otx2_mempool_ops.c',
-)
-
-deps += ['eal', 'mbuf', 'kvargs', 'bus_pci', 'common_octeontx2', 'mempool']
diff --git a/drivers/mempool/octeontx2/otx2_mempool.c b/drivers/mempool/octeontx2/otx2_mempool.c
deleted file mode 100644
index f63dc06ef2..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.c
+++ /dev/null
@@ -1,457 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_atomic.h>
-#include <rte_bus_pci.h>
-#include <rte_common.h>
-#include <rte_eal.h>
-#include <rte_io.h>
-#include <rte_kvargs.h>
-#include <rte_malloc.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_mempool.h"
-
-#define OTX2_NPA_DEV_NAME RTE_STR(otx2_npa_dev_)
-#define OTX2_NPA_DEV_NAME_LEN (sizeof(OTX2_NPA_DEV_NAME) + PCI_PRI_STR_SIZE)
-
-static inline int
-npa_lf_alloc(struct otx2_npa_lf *lf)
-{
- struct otx2_mbox *mbox = lf->mbox;
- struct npa_lf_alloc_req *req;
- struct npa_lf_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_lf_alloc(mbox);
- req->aura_sz = lf->aura_sz;
- req->nr_pools = lf->nr_pools;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return NPA_LF_ERR_ALLOC;
-
- lf->stack_pg_ptrs = rsp->stack_pg_ptrs;
- lf->stack_pg_bytes = rsp->stack_pg_bytes;
- lf->qints = rsp->qints;
-
- return 0;
-}
-
-static int
-npa_lf_free(struct otx2_mbox *mbox)
-{
- otx2_mbox_alloc_msg_npa_lf_free(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npa_lf_init(struct otx2_npa_lf *lf, uintptr_t base, uint8_t aura_sz,
- uint32_t nr_pools, struct otx2_mbox *mbox)
-{
- uint32_t i, bmp_sz;
- int rc;
-
- /* Sanity checks */
- if (!lf || !base || !mbox || !nr_pools)
- return NPA_LF_ERR_PARAM;
-
- if (base & AURA_ID_MASK)
- return NPA_LF_ERR_BASE_INVALID;
-
- if (aura_sz == NPA_AURA_SZ_0 || aura_sz >= NPA_AURA_SZ_MAX)
- return NPA_LF_ERR_PARAM;
-
- memset(lf, 0x0, sizeof(*lf));
- lf->base = base;
- lf->aura_sz = aura_sz;
- lf->nr_pools = nr_pools;
- lf->mbox = mbox;
-
- rc = npa_lf_alloc(lf);
- if (rc)
- goto exit;
-
- bmp_sz = rte_bitmap_get_memory_footprint(nr_pools);
-
- /* Allocate memory for bitmap */
- lf->npa_bmp_mem = rte_zmalloc("npa_bmp_mem", bmp_sz,
- RTE_CACHE_LINE_SIZE);
- if (lf->npa_bmp_mem == NULL) {
- rc = -ENOMEM;
- goto lf_free;
- }
-
- /* Initialize pool resource bitmap array */
- lf->npa_bmp = rte_bitmap_init(nr_pools, lf->npa_bmp_mem, bmp_sz);
- if (lf->npa_bmp == NULL) {
- rc = -EINVAL;
- goto bmap_mem_free;
- }
-
- /* Mark all pools available */
- for (i = 0; i < nr_pools; i++)
- rte_bitmap_set(lf->npa_bmp, i);
-
- /* Allocate memory for qint context */
- lf->npa_qint_mem = rte_zmalloc("npa_qint_mem",
- sizeof(struct otx2_npa_qint) * nr_pools, 0);
- if (lf->npa_qint_mem == NULL) {
- rc = -ENOMEM;
- goto bmap_free;
- }
-
-	/* Allocate memory for npa_aura_lim */
- lf->aura_lim = rte_zmalloc("npa_aura_lim_mem",
- sizeof(struct npa_aura_lim) * nr_pools, 0);
- if (lf->aura_lim == NULL) {
- rc = -ENOMEM;
- goto qint_free;
- }
-
- /* Init aura start & end limits */
- for (i = 0; i < nr_pools; i++) {
- lf->aura_lim[i].ptr_start = UINT64_MAX;
- lf->aura_lim[i].ptr_end = 0x0ull;
- }
-
- return 0;
-
-qint_free:
- rte_free(lf->npa_qint_mem);
-bmap_free:
- rte_bitmap_free(lf->npa_bmp);
-bmap_mem_free:
- rte_free(lf->npa_bmp_mem);
-lf_free:
- npa_lf_free(lf->mbox);
-exit:
- return rc;
-}
-
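npa_lf_init() above uses the classic goto-unwind style: each failure
label releases exactly what was acquired before it, in reverse order. A
minimal standalone sketch of the same pattern:

#include <stdlib.h>

static int
setup_two(void **a, void **b)
{
	*a = malloc(64);
	if (*a == NULL)
		goto exit;

	*b = malloc(64);
	if (*b == NULL)
		goto free_a;

	return 0;

free_a:
	free(*a);
	*a = NULL;
exit:
	return -1;
}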
-static int
-npa_lf_fini(struct otx2_npa_lf *lf)
-{
- if (!lf)
- return NPA_LF_ERR_PARAM;
-
- rte_free(lf->aura_lim);
- rte_free(lf->npa_qint_mem);
- rte_bitmap_free(lf->npa_bmp);
- rte_free(lf->npa_bmp_mem);
-
- return npa_lf_free(lf->mbox);
-
-}
-
-static inline uint32_t
-otx2_aura_size_to_u32(uint8_t val)
-{
- if (val == NPA_AURA_SZ_0)
- return 128;
- if (val >= NPA_AURA_SZ_MAX)
- return BIT_ULL(20);
-
- return 1 << (val + 6);
-}
-
-static int
-parse_max_pools(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
- if (val < otx2_aura_size_to_u32(NPA_AURA_SZ_128))
- val = 128;
- if (val > otx2_aura_size_to_u32(NPA_AURA_SZ_1M))
- val = BIT_ULL(20);
-
- *(uint8_t *)extra_args = rte_log2_u32(val) - 6;
- return 0;
-}
-
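parse_max_pools() above clamps the devargs value into [128, 2^20] and
encodes it as rte_log2_u32(val) - 6, the inverse of
otx2_aura_size_to_u32(). A small round-trip check, assuming power-of-two
inputs (illustrative helpers; __builtin_ctz stands in for an exact log2):

#include <assert.h>
#include <stdint.h>

static inline uint32_t aura_sz_to_count(uint8_t sz)
{
	return 1u << (sz + 6);
}

static inline uint8_t count_to_aura_sz(uint32_t n)
{
	return (uint8_t)(__builtin_ctz(n) - 6);	/* n must be 2^k */
}

int main(void)
{
	assert(count_to_aura_sz(128) == 1);		/* NPA_AURA_SZ_128 */
	assert(aura_sz_to_count(14) == 1u << 20);	/* max, 1M pools */
	assert(aura_sz_to_count(count_to_aura_sz(4096)) == 4096);
	return 0;
}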
-#define OTX2_MAX_POOLS "max_pools"
-
-static uint8_t
-otx2_parse_aura_size(struct rte_devargs *devargs)
-{
- uint8_t aura_sz = NPA_AURA_SZ_128;
- struct rte_kvargs *kvlist;
-
- if (devargs == NULL)
- goto exit;
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- goto exit;
-
- rte_kvargs_process(kvlist, OTX2_MAX_POOLS, &parse_max_pools, &aura_sz);
- otx2_parse_common_devargs(kvlist);
- rte_kvargs_free(kvlist);
-exit:
- return aura_sz;
-}
-
-static inline int
-npa_lf_attach(struct otx2_mbox *mbox)
-{
- struct rsrc_attach_req *req;
-
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- req->npalf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-npa_lf_detach(struct otx2_mbox *mbox)
-{
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
- req->npalf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-npa_lf_get_msix_offset(struct otx2_mbox *mbox, uint16_t *npa_msixoff)
-{
- struct msix_offset_rsp *msix_rsp;
- int rc;
-
-	/* Get the NPA MSIX vector offset */
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- *npa_msixoff = msix_rsp->npa_msixoff;
-
- return rc;
-}
-
-/**
- * @internal
- * Finalize NPA LF.
- */
-int
-otx2_npa_lf_fini(void)
-{
- struct otx2_idev_cfg *idev;
- int rc = 0;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- if (rte_atomic16_add_return(&idev->npa_refcnt, -1) == 0) {
- otx2_npa_unregister_irqs(idev->npa_lf);
- rc |= npa_lf_fini(idev->npa_lf);
- rc |= npa_lf_detach(idev->npa_lf->mbox);
- otx2_npa_set_defaults(idev);
- }
-
- return rc;
-}
-
-/**
- * @internal
- * Initialize NPA LF.
- */
-int
-otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev)
-{
- struct otx2_dev *dev = otx2_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_npa_lf *lf;
- uint16_t npa_msixoff;
- uint32_t nr_pools;
- uint8_t aura_sz;
- int rc;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
-	/* Is the NPA LF already initialized by another driver? */
- if (rte_atomic16_add_return(&idev->npa_refcnt, 1) == 1) {
-
- rc = npa_lf_attach(dev->mbox);
- if (rc)
- goto fail;
-
- rc = npa_lf_get_msix_offset(dev->mbox, &npa_msixoff);
- if (rc)
- goto npa_detach;
-
- aura_sz = otx2_parse_aura_size(pci_dev->device.devargs);
- nr_pools = otx2_aura_size_to_u32(aura_sz);
-
- lf = &dev->npalf;
- rc = npa_lf_init(lf, dev->bar2 + (RVU_BLOCK_ADDR_NPA << 20),
- aura_sz, nr_pools, dev->mbox);
-
- if (rc)
- goto npa_detach;
-
- lf->pf_func = dev->pf_func;
- lf->npa_msixoff = npa_msixoff;
- lf->intr_handle = pci_dev->intr_handle;
- lf->pci_dev = pci_dev;
-
- idev->npa_pf_func = dev->pf_func;
- idev->npa_lf = lf;
- rte_smp_wmb();
- rc = otx2_npa_register_irqs(lf);
- if (rc)
- goto npa_fini;
-
- rte_mbuf_set_platform_mempool_ops("octeontx2_npa");
- otx2_npa_dbg("npa_lf=%p pools=%d sz=%d pf_func=0x%x msix=0x%x",
- lf, nr_pools, aura_sz, lf->pf_func, npa_msixoff);
- }
-
- return 0;
-
-npa_fini:
- npa_lf_fini(idev->npa_lf);
-npa_detach:
- npa_lf_detach(dev->mbox);
-fail:
- rte_atomic16_dec(&idev->npa_refcnt);
- return rc;
-}
-
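otx2_npa_lf_init()/otx2_npa_lf_fini() share one NPA LF across drivers
with a refcount: the caller that takes the count from 0 to 1 performs
the real init, and the one that drops it back to 0 tears it down. A
generic sketch using C11 atomics (fetch_add returns the old value, so
old == 0 matches the driver's new == 1 test):

#include <stdatomic.h>

static atomic_int refcnt;

static int
shared_get(int (*do_init)(void))
{
	if (atomic_fetch_add(&refcnt, 1) == 0)	/* first user */
		return do_init();
	return 0;
}

static int
shared_put(int (*do_fini)(void))
{
	if (atomic_fetch_sub(&refcnt, 1) == 1)	/* last user */
		return do_fini();
	return 0;
}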
-static inline char*
-otx2_npa_dev_to_name(struct rte_pci_device *pci_dev, char *name)
-{
- snprintf(name, OTX2_NPA_DEV_NAME_LEN,
- OTX2_NPA_DEV_NAME PCI_PRI_FMT,
- pci_dev->addr.domain, pci_dev->addr.bus,
- pci_dev->addr.devid, pci_dev->addr.function);
-
- return name;
-}
-
-static int
-otx2_npa_init(struct rte_pci_device *pci_dev)
-{
- char name[OTX2_NPA_DEV_NAME_LEN];
- const struct rte_memzone *mz;
- struct otx2_dev *dev;
- int rc = -ENOMEM;
-
- mz = rte_memzone_reserve_aligned(otx2_npa_dev_to_name(pci_dev, name),
- sizeof(*dev), SOCKET_ID_ANY,
- 0, OTX2_ALIGN);
- if (mz == NULL)
- goto error;
-
- dev = mz->addr;
-
- /* Initialize the base otx2_dev object */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc)
- goto malloc_fail;
-
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc)
- goto dev_uninit;
-
- dev->drv_inited = true;
- return 0;
-
-dev_uninit:
- otx2_npa_lf_fini();
- otx2_dev_fini(pci_dev, dev);
-malloc_fail:
- rte_memzone_free(mz);
-error:
- otx2_err("Failed to initialize npa device rc=%d", rc);
- return rc;
-}
-
-static int
-otx2_npa_fini(struct rte_pci_device *pci_dev)
-{
- char name[OTX2_NPA_DEV_NAME_LEN];
- const struct rte_memzone *mz;
- struct otx2_dev *dev;
-
- mz = rte_memzone_lookup(otx2_npa_dev_to_name(pci_dev, name));
- if (mz == NULL)
- return -EINVAL;
-
- dev = mz->addr;
- if (!dev->drv_inited)
- goto dev_fini;
-
- dev->drv_inited = false;
- otx2_npa_lf_fini();
-
-dev_fini:
- if (otx2_npa_lf_active(dev)) {
- otx2_info("%s: common resource in use by other devices",
- pci_dev->name);
- return -EAGAIN;
- }
-
- otx2_dev_fini(pci_dev, dev);
- rte_memzone_free(mz);
-
- return 0;
-}
-
-static int
-npa_remove(struct rte_pci_device *pci_dev)
-{
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- return otx2_npa_fini(pci_dev);
-}
-
-static int
-npa_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- RTE_SET_USED(pci_drv);
-
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- return otx2_npa_init(pci_dev);
-}
-
-static const struct rte_pci_id pci_npa_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_NPA_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_NPA_VF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_npa = {
- .id_table = pci_npa_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA,
- .probe = npa_probe,
- .remove = npa_remove,
-};
-
-RTE_PMD_REGISTER_PCI(mempool_octeontx2, pci_npa);
-RTE_PMD_REGISTER_PCI_TABLE(mempool_octeontx2, pci_npa_map);
-RTE_PMD_REGISTER_KMOD_DEP(mempool_octeontx2, "vfio-pci");
-RTE_PMD_REGISTER_PARAM_STRING(mempool_octeontx2,
- OTX2_MAX_POOLS "=<128-1048576>"
- OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/mempool/octeontx2/otx2_mempool.h b/drivers/mempool/octeontx2/otx2_mempool.h
deleted file mode 100644
index 8aa548248d..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool.h
+++ /dev/null
@@ -1,221 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_MEMPOOL_H__
-#define __OTX2_MEMPOOL_H__
-
-#include <rte_bitmap.h>
-#include <rte_bus_pci.h>
-#include <rte_devargs.h>
-#include <rte_mempool.h>
-
-#include "otx2_common.h"
-#include "otx2_mbox.h"
-
-enum npa_lf_status {
- NPA_LF_ERR_PARAM = -512,
- NPA_LF_ERR_ALLOC = -513,
- NPA_LF_ERR_INVALID_BLOCK_SZ = -514,
- NPA_LF_ERR_AURA_ID_ALLOC = -515,
- NPA_LF_ERR_AURA_POOL_INIT = -516,
- NPA_LF_ERR_AURA_POOL_FINI = -517,
- NPA_LF_ERR_BASE_INVALID = -518,
-};
-
-struct otx2_npa_lf;
-struct otx2_npa_qint {
- struct otx2_npa_lf *lf;
- uint8_t qintx;
-};
-
-struct npa_aura_lim {
- uint64_t ptr_start;
- uint64_t ptr_end;
-};
-
-struct otx2_npa_lf {
- uint16_t qints;
- uintptr_t base;
- uint8_t aura_sz;
- uint16_t pf_func;
- uint32_t nr_pools;
- void *npa_bmp_mem;
- void *npa_qint_mem;
- uint16_t npa_msixoff;
- struct otx2_mbox *mbox;
- uint32_t stack_pg_ptrs;
- uint32_t stack_pg_bytes;
- struct rte_bitmap *npa_bmp;
- struct npa_aura_lim *aura_lim;
- struct rte_pci_device *pci_dev;
- struct rte_intr_handle *intr_handle;
-};
-
-#define AURA_ID_MASK (BIT_ULL(16) - 1)
-
-/*
- * Generate a 64-bit handle for optimized aura alloc and free operations.
- * The range 0 - AURA_ID_MASK stores the aura_id;
- * the range AURA_ID_MASK+1 - (2^64 - 1) stores the LF base address.
- * This scheme is valid as long as the OS provides an
- * AURA_ID_MASK-aligned address for the LF base.
- */
-static inline uint64_t
-npa_lf_aura_handle_gen(uint32_t aura_id, uintptr_t addr)
-{
- uint64_t val;
-
- val = aura_id & AURA_ID_MASK;
- return (uint64_t)addr | val;
-}
-
-static inline uint64_t
-npa_lf_aura_handle_to_aura(uint64_t aura_handle)
-{
- return aura_handle & AURA_ID_MASK;
-}
-
-static inline uintptr_t
-npa_lf_aura_handle_to_base(uint64_t aura_handle)
-{
- return (uintptr_t)(aura_handle & ~AURA_ID_MASK);
-}
-
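A worked example of the handle scheme above, with AURA_ID_MASK equal to
BIT_ULL(16) - 1 and a suitably aligned base (values chosen for
illustration):

#include <assert.h>
#include <stdint.h>

#define ID_MASK ((1ULL << 16) - 1)	/* mirrors AURA_ID_MASK */

int main(void)
{
	uint64_t base = 0x840000000000ULL;	/* 64 KB-aligned LF base */
	uint64_t id = 0x2A;
	uint64_t handle = base | (id & ID_MASK);

	assert((handle & ID_MASK) == id);	/* handle_to_aura */
	assert((handle & ~ID_MASK) == base);	/* handle_to_base */
	return 0;
}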
-static inline uint64_t
-npa_lf_aura_op_alloc(uint64_t aura_handle, const int drop)
-{
- uint64_t wdata = npa_lf_aura_handle_to_aura(aura_handle);
-
- if (drop)
- wdata |= BIT_ULL(63); /* DROP */
-
- return otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_ALLOCX(0)));
-}
-
-static inline void
-npa_lf_aura_op_free(uint64_t aura_handle, const int fabs, uint64_t iova)
-{
- uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
-
- if (fabs)
- reg |= BIT_ULL(63); /* FABS */
-
- otx2_store_pair(iova, reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_FREE0);
-}
-
-static inline uint64_t
-npa_lf_aura_op_cnt_get(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_CNT));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
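npa_lf_aura_op_cnt_get() and the OP_LIMIT/OP_AVAILABLE readers below
share one decode convention: the atomic read returns a 64-bit word in
which bit 42 flags OP_ERR and the low 36 bits carry the value. A minimal
sketch of that decode (stand-in names):

#include <stdint.h>

#define OP_ERR_BIT	(1ULL << 42)
#define OP_VAL_MASK	((1ULL << 36) - 1)	/* 0xFFFFFFFFF */

static inline uint64_t
decode_op_result(uint64_t reg)
{
	if (reg & OP_ERR_BIT)
		return 0;	/* treat errored reads as empty */
	return reg & OP_VAL_MASK;
}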
-static inline void
-npa_lf_aura_op_cnt_set(uint64_t aura_handle, const int sign, uint64_t count)
-{
- uint64_t reg = count & (BIT_ULL(36) - 1);
-
- if (sign)
- reg |= BIT_ULL(43); /* CNT_ADD */
-
- reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
-
- otx2_write64(reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_CNT);
-}
-
-static inline uint64_t
-npa_lf_aura_op_limit_get(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_LIMIT));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_limit_set(uint64_t aura_handle, uint64_t limit)
-{
- uint64_t reg = limit & (BIT_ULL(36) - 1);
-
- reg |= (npa_lf_aura_handle_to_aura(aura_handle) << 44);
-
- otx2_write64(reg,
- npa_lf_aura_handle_to_base(aura_handle) + NPA_LF_AURA_OP_LIMIT);
-}
-
-static inline uint64_t
-npa_lf_aura_op_available(uint64_t aura_handle)
-{
- uint64_t wdata;
- uint64_t reg;
-
- wdata = npa_lf_aura_handle_to_aura(aura_handle) << 44;
-
- reg = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(npa_lf_aura_handle_to_base(
- aura_handle) + NPA_LF_POOL_OP_AVAILABLE));
-
- if (reg & BIT_ULL(42) /* OP_ERR */)
- return 0;
- else
- return reg & 0xFFFFFFFFF;
-}
-
-static inline void
-npa_lf_aura_op_range_set(uint64_t aura_handle, uint64_t start_iova,
- uint64_t end_iova)
-{
- uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- struct npa_aura_lim *lim = lf->aura_lim;
-
- lim[reg].ptr_start = RTE_MIN(lim[reg].ptr_start, start_iova);
- lim[reg].ptr_end = RTE_MAX(lim[reg].ptr_end, end_iova);
-
- otx2_store_pair(lim[reg].ptr_start, reg,
- npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_POOL_OP_PTR_START0);
- otx2_store_pair(lim[reg].ptr_end, reg,
- npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_POOL_OP_PTR_END0);
-}
-
-/* NPA LF */
-__rte_internal
-int otx2_npa_lf_init(struct rte_pci_device *pci_dev, void *otx2_dev);
-__rte_internal
-int otx2_npa_lf_fini(void);
-
-/* IRQ */
-int otx2_npa_register_irqs(struct otx2_npa_lf *lf);
-void otx2_npa_unregister_irqs(struct otx2_npa_lf *lf);
-
-/* Debug */
-int otx2_mempool_ctx_dump(struct otx2_npa_lf *lf);
-
-#endif /* __OTX2_MEMPOOL_H__ */
diff --git a/drivers/mempool/octeontx2/otx2_mempool_debug.c b/drivers/mempool/octeontx2/otx2_mempool_debug.c
deleted file mode 100644
index 279ea2e25f..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_debug.c
+++ /dev/null
@@ -1,135 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_mempool.h"
-
-#define npa_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
-
-static inline void
-npa_lf_pool_dump(__otx2_io struct npa_pool_s *pool)
-{
- npa_dump("W0: Stack base\t\t0x%"PRIx64"", pool->stack_base);
- npa_dump("W1: ena \t\t%d\nW1: nat_align \t\t%d\nW1: stack_caching \t%d",
- pool->ena, pool->nat_align, pool->stack_caching);
- npa_dump("W1: stack_way_mask\t%d\nW1: buf_offset\t\t%d",
- pool->stack_way_mask, pool->buf_offset);
- npa_dump("W1: buf_size \t\t%d", pool->buf_size);
-
- npa_dump("W2: stack_max_pages \t%d\nW2: stack_pages\t\t%d",
- pool->stack_max_pages, pool->stack_pages);
-
- npa_dump("W3: op_pc \t\t0x%"PRIx64"", (uint64_t)pool->op_pc);
-
- npa_dump("W4: stack_offset\t%d\nW4: shift\t\t%d\nW4: avg_level\t\t%d",
- pool->stack_offset, pool->shift, pool->avg_level);
- npa_dump("W4: avg_con \t\t%d\nW4: fc_ena\t\t%d\nW4: fc_stype\t\t%d",
- pool->avg_con, pool->fc_ena, pool->fc_stype);
- npa_dump("W4: fc_hyst_bits\t%d\nW4: fc_up_crossing\t%d",
- pool->fc_hyst_bits, pool->fc_up_crossing);
- npa_dump("W4: update_time\t\t%d\n", pool->update_time);
-
- npa_dump("W5: fc_addr\t\t0x%"PRIx64"\n", pool->fc_addr);
-
- npa_dump("W6: ptr_start\t\t0x%"PRIx64"\n", pool->ptr_start);
-
- npa_dump("W7: ptr_end\t\t0x%"PRIx64"\n", pool->ptr_end);
- npa_dump("W8: err_int\t\t%d\nW8: err_int_ena\t\t%d",
- pool->err_int, pool->err_int_ena);
- npa_dump("W8: thresh_int\t\t%d", pool->thresh_int);
-
- npa_dump("W8: thresh_int_ena\t%d\nW8: thresh_up\t\t%d",
- pool->thresh_int_ena, pool->thresh_up);
- npa_dump("W8: thresh_qint_idx\t%d\nW8: err_qint_idx\t%d",
- pool->thresh_qint_idx, pool->err_qint_idx);
-}
-
-static inline void
-npa_lf_aura_dump(__otx2_io struct npa_aura_s *aura)
-{
- npa_dump("W0: Pool addr\t\t0x%"PRIx64"\n", aura->pool_addr);
-
- npa_dump("W1: ena\t\t\t%d\nW1: pool caching\t%d\nW1: pool way mask\t%d",
- aura->ena, aura->pool_caching, aura->pool_way_mask);
- npa_dump("W1: avg con\t\t%d\nW1: pool drop ena\t%d",
- aura->avg_con, aura->pool_drop_ena);
- npa_dump("W1: aura drop ena\t%d", aura->aura_drop_ena);
- npa_dump("W1: bp_ena\t\t%d\nW1: aura drop\t\t%d\nW1: aura shift\t\t%d",
- aura->bp_ena, aura->aura_drop, aura->shift);
- npa_dump("W1: avg_level\t\t%d\n", aura->avg_level);
-
- npa_dump("W2: count\t\t%"PRIx64"\nW2: nix0_bpid\t\t%d",
- (uint64_t)aura->count, aura->nix0_bpid);
- npa_dump("W2: nix1_bpid\t\t%d", aura->nix1_bpid);
-
- npa_dump("W3: limit\t\t%"PRIx64"\nW3: bp\t\t\t%d\nW3: fc_ena\t\t%d\n",
- (uint64_t)aura->limit, aura->bp, aura->fc_ena);
- npa_dump("W3: fc_up_crossing\t%d\nW3: fc_stype\t\t%d",
- aura->fc_up_crossing, aura->fc_stype);
-
- npa_dump("W3: fc_hyst_bits\t%d", aura->fc_hyst_bits);
-
- npa_dump("W4: fc_addr\t\t0x%"PRIx64"\n", aura->fc_addr);
-
- npa_dump("W5: pool_drop\t\t%d\nW5: update_time\t\t%d",
- aura->pool_drop, aura->update_time);
- npa_dump("W5: err_int\t\t%d", aura->err_int);
- npa_dump("W5: err_int_ena\t\t%d\nW5: thresh_int\t\t%d",
- aura->err_int_ena, aura->thresh_int);
- npa_dump("W5: thresh_int_ena\t%d", aura->thresh_int_ena);
-
- npa_dump("W5: thresh_up\t\t%d\nW5: thresh_qint_idx\t%d",
- aura->thresh_up, aura->thresh_qint_idx);
- npa_dump("W5: err_qint_idx\t%d", aura->err_qint_idx);
-
- npa_dump("W6: thresh\t\t%"PRIx64"\n", (uint64_t)aura->thresh);
-}
-
-int
-otx2_mempool_ctx_dump(struct otx2_npa_lf *lf)
-{
- struct npa_aq_enq_req *aq;
- struct npa_aq_enq_rsp *rsp;
- uint32_t q;
- int rc = 0;
-
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled POOL */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
- aq->aura_id = q;
- aq->ctype = NPA_AQ_CTYPE_POOL;
- aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get pool(%d) context", q);
- return rc;
- }
- npa_dump("============== pool=%d ===============\n", q);
- npa_lf_pool_dump(&rsp->pool);
- }
-
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled AURA */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aq = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
- aq->aura_id = q;
- aq->ctype = NPA_AQ_CTYPE_AURA;
- aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get aura(%d) context", q);
- return rc;
- }
- npa_dump("============== aura=%d ===============\n", q);
- npa_lf_aura_dump(&rsp->aura);
- }
-
- return rc;
-}
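The dump loops above skip slots whose bit is still set in lf->npa_bmp: the driver keeps an rte_bitmap in which a set bit means the aura/pool slot is free and a cleared bit means it is in use. A self-contained sketch of that convention using only the public rte_bitmap API (the function name is illustrative):

#include <stdlib.h>
#include <rte_bitmap.h>

/* Create a slot tracker where bit=1 marks a free slot, matching the
 * convention of lf->npa_bmp (set on init/free, cleared on reserve).
 */
static struct rte_bitmap *
slot_tracker_create(uint32_t n_slots)
{
        uint32_t sz = rte_bitmap_get_memory_footprint(n_slots);
        struct rte_bitmap *bmp;
        void *mem = NULL;
        uint32_t i;

        if (posix_memalign(&mem, RTE_CACHE_LINE_SIZE, sz))
                return NULL;

        bmp = rte_bitmap_init(n_slots, mem, sz);
        if (bmp == NULL) {
                free(mem);
                return NULL;
        }
        for (i = 0; i < n_slots; i++)
                rte_bitmap_set(bmp, i); /* every slot starts out free */

        return bmp;
}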
diff --git a/drivers/mempool/octeontx2/otx2_mempool_irq.c b/drivers/mempool/octeontx2/otx2_mempool_irq.c
deleted file mode 100644
index 5fa22b9612..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_irq.c
+++ /dev/null
@@ -1,303 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_common.h>
-#include <rte_bus_pci.h>
-
-#include "otx2_common.h"
-#include "otx2_irq.h"
-#include "otx2_mempool.h"
-
-static void
-npa_lf_err_irq(void *param)
-{
- struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_ERR_INT);
- if (intr == 0)
- return;
-
- otx2_err("Err_intr=0x%" PRIx64 "", intr);
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_ERR_INT);
-}
-
-static int
-npa_lf_register_err_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int rc, vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
- /* Register err interrupt vector */
- rc = otx2_register_irq(handle, npa_lf_err_irq, lf, vec);
-
- /* Enable hw interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1S);
-
- return rc;
-}
-
-static void
-npa_lf_unregister_err_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_ERR_INT_ENA_W1C);
- otx2_unregister_irq(handle, npa_lf_err_irq, lf, vec);
-}
-
-static void
-npa_lf_ras_irq(void *param)
-{
- struct otx2_npa_lf *lf = (struct otx2_npa_lf *)param;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_RAS);
- if (intr == 0)
- return;
-
- otx2_err("Ras_intr=0x%" PRIx64 "", intr);
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_RAS);
-}
-
-static int
-npa_lf_register_ras_irq(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int rc, vec;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, npa_lf_ras_irq, lf, vec);
- /* Enable hw interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1S);
-
- return rc;
-}
-
-static void
-npa_lf_unregister_ras_irq(struct otx2_npa_lf *lf)
-{
- int vec;
- struct rte_intr_handle *handle = lf->intr_handle;
-
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_RAS_ENA_W1C);
- otx2_unregister_irq(handle, npa_lf_ras_irq, lf, vec);
-}
-
-static inline uint8_t
-npa_lf_q_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t q,
- uint32_t off, uint64_t mask)
-{
- uint64_t reg, wdata;
- uint8_t qint;
-
- wdata = (uint64_t)q << 44;
- reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(lf->base + off));
-
- if (reg & BIT_ULL(42) /* OP_ERR */) {
- otx2_err("Failed execute irq get off=0x%x", off);
- return 0;
- }
-
- qint = reg & 0xff;
- wdata &= mask;
- otx2_write64(wdata | qint, lf->base + off);
-
- return qint;
-}
-
-static inline uint8_t
-npa_lf_pool_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t p)
-{
- return npa_lf_q_irq_get_and_clear(lf, p, NPA_LF_POOL_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-npa_lf_aura_irq_get_and_clear(struct otx2_npa_lf *lf, uint32_t a)
-{
- return npa_lf_q_irq_get_and_clear(lf, a, NPA_LF_AURA_OP_INT, ~0xff00);
-}
-
-static void
-npa_lf_q_irq(void *param)
-{
- struct otx2_npa_qint *qint = (struct otx2_npa_qint *)param;
- struct otx2_npa_lf *lf = qint->lf;
- uint8_t irq, qintx = qint->qintx;
- uint32_t q, pool, aura;
- uint64_t intr;
-
- intr = otx2_read64(lf->base + NPA_LF_QINTX_INT(qintx));
- if (intr == 0)
- return;
-
- otx2_err("queue_intr=0x%" PRIx64 " qintx=%d", intr, qintx);
-
- /* Handle pool queue interrupts */
- for (q = 0; q < lf->nr_pools; q++) {
- /* Skip disabled POOL */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- pool = q % lf->qints;
- irq = npa_lf_pool_irq_get_and_clear(lf, pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_OVFLS))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_OVFLS", pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_RANGE))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_RANGE", pool);
-
- if (irq & BIT_ULL(NPA_POOL_ERR_INT_PERR))
- otx2_err("Pool=%d NPA_POOL_ERR_INT_PERR", pool);
- }
-
- /* Handle aura queue interrupts */
- for (q = 0; q < lf->nr_pools; q++) {
-
- /* Skip disabled AURA */
- if (rte_bitmap_get(lf->npa_bmp, q))
- continue;
-
- aura = q % lf->qints;
- irq = npa_lf_aura_irq_get_and_clear(lf, aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_OVER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_OVER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_ADD_UNDER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_ADD_UNDER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_AURA_FREE_UNDER))
- otx2_err("Aura=%d NPA_AURA_ERR_INT_FREE_UNDER", aura);
-
- if (irq & BIT_ULL(NPA_AURA_ERR_INT_POOL_DIS))
- otx2_err("Aura=%d NPA_AURA_ERR_POOL_DIS", aura);
- }
-
- /* Clear interrupt */
- otx2_write64(intr, lf->base + NPA_LF_QINTX_INT(qintx));
- otx2_mempool_ctx_dump(lf);
-}
-
-static int
-npa_lf_register_queue_irqs(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec, q, qs, rc = 0;
-
- /* Figure out max qintx required */
- qs = RTE_MIN(lf->qints, lf->nr_pools);
-
- for (q = 0; q < qs; q++) {
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
-
- struct otx2_npa_qint *qintmem = lf->npa_qint_mem;
- qintmem += q;
-
- qintmem->lf = lf;
- qintmem->qintx = q;
-
- /* Sync qints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, npa_lf_q_irq, qintmem, vec);
- if (rc)
- break;
-
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
- otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q));
- /* Enable QINT interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1S(q));
- }
-
- return rc;
-}
-
-static void
-npa_lf_unregister_queue_irqs(struct otx2_npa_lf *lf)
-{
- struct rte_intr_handle *handle = lf->intr_handle;
- int vec, q, qs;
-
- /* Figure out max qintx required */
- qs = RTE_MIN(lf->qints, lf->nr_pools);
-
- for (q = 0; q < qs; q++) {
- vec = lf->npa_msixoff + NPA_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, lf->base + NPA_LF_QINTX_CNT(q));
- otx2_write64(0, lf->base + NPA_LF_QINTX_INT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, lf->base + NPA_LF_QINTX_ENA_W1C(q));
-
- struct otx2_npa_qint *qintmem = lf->npa_qint_mem;
- qintmem += q;
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, npa_lf_q_irq, qintmem, vec);
-
- qintmem->lf = NULL;
- qintmem->qintx = 0;
- }
-}
-
-int
-otx2_npa_register_irqs(struct otx2_npa_lf *lf)
-{
- int rc;
-
- if (lf->npa_msixoff == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid NPALF MSIX vector offset vector: 0x%x",
- lf->npa_msixoff);
- return -EINVAL;
- }
-
- /* Register lf err interrupt */
- rc = npa_lf_register_err_irq(lf);
- /* Register RAS interrupt */
- rc |= npa_lf_register_ras_irq(lf);
- /* Register queue interrupts */
- rc |= npa_lf_register_queue_irqs(lf);
-
- return rc;
-}
-
-void
-otx2_npa_unregister_irqs(struct otx2_npa_lf *lf)
-{
- npa_lf_unregister_err_irq(lf);
- npa_lf_unregister_ras_irq(lf);
- npa_lf_unregister_queue_irqs(lf);
-}
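All of the (un)registration paths above follow the same W1C/W1S ordering: mask the source by writing 1s to the *_ENA_W1C register, hook or unhook the handler, and only then (on the registration side) unmask via *_ENA_W1S. A generic sketch of that sequence; mmio_write64() and the register arguments are placeholders, not otx2 APIs:

#include <stdint.h>

/* Placeholder MMIO accessor; the otx2 code uses otx2_write64() instead. */
static inline void
mmio_write64(uint64_t val, volatile uint64_t *addr)
{
        *addr = val;
}

/* Enable an interrupt source safely: mask, register, then unmask. */
static int
irq_enable_sequence(volatile uint64_t *ena_w1c, volatile uint64_t *ena_w1s,
                    int (*register_handler)(void))
{
        int rc;

        /* Writing 1s to ENA_W1C masks all sources, so no stale event
         * can fire while the handler is being hooked up.
         */
        mmio_write64(~0ULL, ena_w1c);

        rc = register_handler();
        if (rc)
                return rc;

        /* Unmask via ENA_W1S only once the handler is in place. */
        mmio_write64(~0ULL, ena_w1s);
        return 0;
}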
diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
deleted file mode 100644
index 332e4f1cb2..0000000000
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ /dev/null
@@ -1,901 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_mempool.h>
-#include <rte_vect.h>
-
-#include "otx2_mempool.h"
-
-static int __rte_hot
-otx2_npa_enq(struct rte_mempool *mp, void * const *obj_table, unsigned int n)
-{
- unsigned int index;
- const uint64_t aura_handle = mp->pool_id;
- const uint64_t reg = npa_lf_aura_handle_to_aura(aura_handle);
- const uint64_t addr = npa_lf_aura_handle_to_base(aura_handle) +
- NPA_LF_AURA_OP_FREE0;
-
- /* Ensure mbuf init changes are written before the free pointers
- * are enqueued to the stack.
- */
- rte_io_wmb();
- for (index = 0; index < n; index++)
- otx2_store_pair((uint64_t)obj_table[index], reg, addr);
-
- return 0;
-}
-
-static __rte_noinline int
-npa_lf_aura_op_alloc_one(const int64_t wdata, int64_t * const addr,
- void **obj_table, uint8_t i)
-{
- uint8_t retry = 4;
-
- do {
- obj_table[i] = (void *)otx2_atomic64_add_nosync(wdata, addr);
- if (obj_table[i] != NULL)
- return 0;
-
- } while (retry--);
-
- return -ENOENT;
-}
-
-#if defined(RTE_ARCH_ARM64)
-static __rte_noinline int
-npa_lf_aura_op_search_alloc(const int64_t wdata, int64_t * const addr,
- void **obj_table, unsigned int n)
-{
- uint8_t i;
-
- for (i = 0; i < n; i++) {
- if (obj_table[i] != NULL)
- continue;
- if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, i))
- return -ENOENT;
- }
-
- return 0;
-}
-
-static __rte_noinline int
-npa_lf_aura_op_alloc_bulk(const int64_t wdata, int64_t * const addr,
- unsigned int n, void **obj_table)
-{
- register const uint64_t wdata64 __asm("x26") = wdata;
- register const uint64_t wdata128 __asm("x27") = wdata;
- uint64x2_t failed = vdupq_n_u64(~0);
-
- switch (n) {
- case 32:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x16, x17, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x18, x19, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x20, x21, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x22, x23, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x8\n"
- "fmov v20.D[1], x9\n"
- "fmov d21, x10\n"
- "fmov v21.D[1], x11\n"
- "fmov d22, x12\n"
- "fmov v22.D[1], x13\n"
- "fmov d23, x14\n"
- "fmov v23.D[1], x15\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- "fmov d16, x16\n"
- "fmov v16.D[1], x17\n"
- "fmov d17, x18\n"
- "fmov v17.D[1], x19\n"
- "fmov d18, x20\n"
- "fmov v18.D[1], x21\n"
- "fmov d19, x22\n"
- "fmov v19.D[1], x23\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x0\n"
- "fmov v20.D[1], x1\n"
- "fmov d21, x2\n"
- "fmov v21.D[1], x3\n"
- "fmov d22, x4\n"
- "fmov v22.D[1], x5\n"
- "fmov d23, x6\n"
- "fmov v23.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "x16",
- "x17", "x18", "x19", "x20", "x21", "x22", "x23", "v16", "v17",
- "v18", "v19", "v20", "v21", "v22", "v23"
- );
- break;
- }
- case 16:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x8, x9, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x10, x11, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x12, x13, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x14, x15, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "fmov d20, x8\n"
- "fmov v20.D[1], x9\n"
- "fmov d21, x10\n"
- "fmov v21.D[1], x11\n"
- "fmov d22, x12\n"
- "fmov v22.D[1], x13\n"
- "fmov d23, x14\n"
- "fmov v23.D[1], x15\n"
- "and %[failed].16B, %[failed].16B, v20.16B\n"
- "and %[failed].16B, %[failed].16B, v21.16B\n"
- "and %[failed].16B, %[failed].16B, v22.16B\n"
- "and %[failed].16B, %[failed].16B, v23.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- "st1 { v20.2d, v21.2d, v22.2d, v23.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "x8", "x9", "x10", "x11", "x12", "x13", "x14", "x15", "v16",
- "v17", "v18", "v19", "v20", "v21", "v22", "v23"
- );
- break;
- }
- case 8:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x4, x5, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x6, x7, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "fmov d18, x4\n"
- "fmov v18.D[1], x5\n"
- "fmov d19, x6\n"
- "fmov v19.D[1], x7\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "and %[failed].16B, %[failed].16B, v18.16B\n"
- "and %[failed].16B, %[failed].16B, v19.16B\n"
- "st1 { v16.2d, v17.2d, v18.2d, v19.2d}, [%[dst]], 64\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "x4", "x5", "x6", "x7",
- "v16", "v17", "v18", "v19"
- );
- break;
- }
- case 4:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "casp x2, x3, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "fmov d17, x2\n"
- "fmov v17.D[1], x3\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "and %[failed].16B, %[failed].16B, v17.16B\n"
- "st1 { v16.2d, v17.2d}, [%[dst]], 32\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "x2", "x3", "v16", "v17"
- );
- break;
- }
- case 2:
- {
- asm volatile (
- ".cpu generic+lse\n"
- "casp x0, x1, %[wdata64], %[wdata128], [%[loc]]\n"
- "fmov d16, x0\n"
- "fmov v16.D[1], x1\n"
- "and %[failed].16B, %[failed].16B, v16.16B\n"
- "st1 { v16.2d}, [%[dst]], 16\n"
- : "+Q" (*addr), [failed] "=&w" (failed)
- : [wdata64] "r" (wdata64), [wdata128] "r" (wdata128),
- [dst] "r" (obj_table), [loc] "r" (addr)
- : "memory", "x0", "x1", "v16"
- );
- break;
- }
- case 1:
- return npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0);
- }
-
- if (unlikely(!(vgetq_lane_u64(failed, 0) & vgetq_lane_u64(failed, 1))))
- return npa_lf_aura_op_search_alloc(wdata, addr, (void **)
- ((char *)obj_table - (sizeof(uint64_t) * n)), n);
-
- return 0;
-}
-
-static __rte_noinline void
-otx2_npa_clear_alloc(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- unsigned int i;
-
- for (i = 0; i < n; i++) {
- if (obj_table[i] != NULL) {
- otx2_npa_enq(mp, &obj_table[i], 1);
- obj_table[i] = NULL;
- }
- }
-}
-
-static __rte_noinline int __rte_hot
-otx2_npa_deq_arm64(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
- void **obj_table_bak = obj_table;
- const unsigned int nfree = n;
- unsigned int parts;
-
- int64_t * const addr = (int64_t * const)
- (npa_lf_aura_handle_to_base(mp->pool_id) +
- NPA_LF_AURA_OP_ALLOCX(0));
- while (n) {
- parts = n > 31 ? 32 : rte_align32prevpow2(n);
- n -= parts;
- if (unlikely(npa_lf_aura_op_alloc_bulk(wdata, addr,
- parts, obj_table))) {
- otx2_npa_clear_alloc(mp, obj_table_bak, nfree - n);
- return -ENOENT;
- }
- obj_table += parts;
- }
-
- return 0;
-}
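For illustration of the splitting step above (parts = n > 31 ? 32 : rte_align32prevpow2(n)): a burst is served in the power-of-two chunk sizes the CASP bulk routine implements, so a request for 23 objects becomes 16 + 4 + 2 + 1. A runnable sketch:

#include <stdio.h>
#include <rte_common.h> /* rte_align32prevpow2() */

/* Print how a burst of n objects is split into bulk-op chunk sizes. */
static void
show_chunking(unsigned int n)
{
        while (n) {
                unsigned int parts =
                        n > 31 ? 32 : rte_align32prevpow2(n);

                printf("%u ", parts);
                n -= parts;
        }
        printf("\n"); /* n = 23 prints: 16 4 2 1 */
}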
-
-#else
-
-static inline int __rte_hot
-otx2_npa_deq(struct rte_mempool *mp, void **obj_table, unsigned int n)
-{
- const int64_t wdata = npa_lf_aura_handle_to_aura(mp->pool_id);
- unsigned int index;
-
- int64_t * const addr = (int64_t *)
- (npa_lf_aura_handle_to_base(mp->pool_id) +
- NPA_LF_AURA_OP_ALLOCX(0));
- for (index = 0; index < n; index++, obj_table++) {
- /* npa_lf_aura_op_alloc_one() returns 0 on success and writes
- * the object pointer into obj_table[0] itself, so a non-zero
- * return means the aura ran dry.
- */
- if (npa_lf_aura_op_alloc_one(wdata, addr, obj_table, 0)) {
- /* Put the already allocated objects back */
- for (; index > 0; index--) {
- obj_table--;
- otx2_npa_enq(mp, obj_table, 1);
- }
- return -ENOENT;
- }
- }
-
- return 0;
-}
-
-#endif
-
-static unsigned int
-otx2_npa_get_count(const struct rte_mempool *mp)
-{
- return (unsigned int)npa_lf_aura_op_available(mp->pool_id);
-}
-
-static int
-npa_lf_aura_pool_init(struct otx2_mbox *mbox, uint32_t aura_id,
- struct npa_aura_s *aura, struct npa_pool_s *pool)
-{
- struct npa_aq_enq_req *aura_init_req, *pool_init_req;
- struct npa_aq_enq_rsp *aura_init_rsp, *pool_init_rsp;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct otx2_idev_cfg *idev;
- int rc, off;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -ENOMEM;
-
- aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- aura_init_req->aura_id = aura_id;
- aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_init_req->op = NPA_AQ_INSTOP_INIT;
- otx2_mbox_memcpy(&aura_init_req->aura, aura, sizeof(*aura));
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- pool_init_req->aura_id = aura_id;
- pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_init_req->op = NPA_AQ_INSTOP_INIT;
- otx2_mbox_memcpy(&pool_init_req->pool, pool, sizeof(*pool));
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- aura_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
- off = mbox->rx_start + aura_init_rsp->hdr.next_msgoff;
- pool_init_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- if (rc != 2 || aura_init_rsp->hdr.rc != 0 || pool_init_rsp->hdr.rc != 0)
- return NPA_LF_ERR_AURA_POOL_INIT;
-
- if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
- return 0;
-
- aura_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_init_req->aura_id = aura_id;
- aura_init_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_init_req->op = NPA_AQ_INSTOP_LOCK;
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (!pool_init_req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK AURA context");
- return -ENOMEM;
- }
-
- pool_init_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- if (!pool_init_req) {
- otx2_err("Failed to LOCK POOL context");
- return -ENOMEM;
- }
- }
- pool_init_req->aura_id = aura_id;
- pool_init_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_init_req->op = NPA_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to lock POOL ctx to NDC");
- return -ENOMEM;
- }
-
- return 0;
-}
-
-static int
-npa_lf_aura_pool_fini(struct otx2_mbox *mbox,
- uint32_t aura_id,
- uint64_t aura_handle)
-{
- struct npa_aq_enq_req *aura_req, *pool_req;
- struct npa_aq_enq_rsp *aura_rsp, *pool_rsp;
- struct otx2_mbox_dev *mdev = &mbox->dev[0];
- struct ndc_sync_op *ndc_req;
- struct otx2_idev_cfg *idev;
- int rc, off;
-
- idev = otx2_intra_dev_get_cfg();
- if (idev == NULL)
- return -EINVAL;
-
- /* Procedure for disabling an aura/pool */
- rte_delay_us(10);
- npa_lf_aura_op_alloc(aura_handle, 0);
-
- pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- pool_req->aura_id = aura_id;
- pool_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_req->op = NPA_AQ_INSTOP_WRITE;
- pool_req->pool.ena = 0;
- pool_req->pool_mask.ena = ~pool_req->pool_mask.ena;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_req->aura_id = aura_id;
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
- aura_req->aura.ena = 0;
- aura_req->aura_mask.ena = ~aura_req->aura_mask.ena;
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- off = mbox->rx_start +
- RTE_ALIGN(sizeof(struct mbox_hdr), MBOX_MSG_ALIGN);
- pool_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- off = mbox->rx_start + pool_rsp->hdr.next_msgoff;
- aura_rsp = (struct npa_aq_enq_rsp *)((uintptr_t)mdev->mbase + off);
-
- if (rc != 2 || aura_rsp->hdr.rc != 0 || pool_rsp->hdr.rc != 0)
- return NPA_LF_ERR_AURA_POOL_FINI;
-
- /* Sync NDC-NPA for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->npa_lf_sync = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Error on NDC-NPA LF sync, rc %d", rc);
- return NPA_LF_ERR_AURA_POOL_FINI;
- }
-
- if (!(idev->npa_lock_mask & BIT_ULL(aura_id)))
- return 0;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- aura_req->aura_id = aura_id;
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to unlock AURA ctx to NDC");
- return -EINVAL;
- }
-
- pool_req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
- pool_req->aura_id = aura_id;
- pool_req->ctype = NPA_AQ_CTYPE_POOL;
- pool_req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to unlock POOL ctx to NDC");
- return -EINVAL;
- }
-
- return 0;
-}
-
-static inline char *
-npa_lf_stack_memzone_name(struct otx2_npa_lf *lf, int pool_id, char *name)
-{
- snprintf(name, RTE_MEMZONE_NAMESIZE, "otx2_npa_stack_%x_%d",
- lf->pf_func, pool_id);
-
- return name;
-}
-
-static inline const struct rte_memzone *
-npa_lf_stack_dma_alloc(struct otx2_npa_lf *lf, char *name,
- int pool_id, size_t size)
-{
- return rte_memzone_reserve_aligned(
- npa_lf_stack_memzone_name(lf, pool_id, name), size, 0,
- RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
-}
-
-static inline int
-npa_lf_stack_dma_free(struct otx2_npa_lf *lf, char *name, int pool_id)
-{
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(npa_lf_stack_memzone_name(lf, pool_id, name));
- if (mz == NULL)
- return -EINVAL;
-
- return rte_memzone_free(mz);
-}
-
-static inline int
-bitmap_ctzll(uint64_t slab)
-{
- if (slab == 0)
- return 0;
-
- return __builtin_ctzll(slab);
-}
-
-static int
-npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
- const uint32_t block_count, struct npa_aura_s *aura,
- struct npa_pool_s *pool, uint64_t *aura_handle)
-{
- int rc, aura_id, pool_id, stack_size, alloc_size;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- uint64_t slab;
- uint32_t pos;
-
- /* Sanity check */
- if (!lf || !block_size || !block_count ||
- !pool || !aura || !aura_handle)
- return NPA_LF_ERR_PARAM;
-
- /* Block size should be cache line aligned and in range of 128B-128KB */
- if (block_size % OTX2_ALIGN || block_size < 128 ||
- block_size > 128 * 1024)
- return NPA_LF_ERR_INVALID_BLOCK_SZ;
-
- pos = slab = 0;
- /* Scan from the beginning */
- __rte_bitmap_scan_init(lf->npa_bmp);
- /* Scan bitmap to get the free pool */
- rc = rte_bitmap_scan(lf->npa_bmp, &pos, &slab);
- /* Empty bitmap */
- if (rc == 0) {
- otx2_err("Mempools exhausted, 'max_pools' devargs to increase");
- return -ERANGE;
- }
-
- /* Get aura_id from resource bitmap */
- aura_id = pos + bitmap_ctzll(slab);
- /* Mark pool as reserved */
- rte_bitmap_clear(lf->npa_bmp, aura_id);
-
- /* Each aura is configured with its own pool (aura-pool pair) */
- pool_id = aura_id;
- rc = (aura_id < 0 || pool_id >= (int)lf->nr_pools || aura_id >=
- (int)BIT_ULL(6 + lf->aura_sz)) ? NPA_LF_ERR_AURA_ID_ALLOC : 0;
- if (rc)
- goto exit;
-
- /* Allocate stack memory */
- stack_size = (block_count + lf->stack_pg_ptrs - 1) / lf->stack_pg_ptrs;
- alloc_size = stack_size * lf->stack_pg_bytes;
-
- mz = npa_lf_stack_dma_alloc(lf, name, pool_id, alloc_size);
- if (mz == NULL) {
- rc = -ENOMEM;
- goto aura_res_put;
- }
-
- /* Update aura fields */
- aura->pool_addr = pool_id; /* AF will translate to associated poolctx */
- aura->ena = 1;
- aura->shift = rte_log2_u32(block_count);
- aura->shift = aura->shift < 8 ? 0 : aura->shift - 8;
- aura->limit = block_count;
- aura->pool_caching = 1;
- aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_ADD_UNDER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_AURA_FREE_UNDER);
- aura->err_int_ena |= BIT(NPA_AURA_ERR_INT_POOL_DIS);
- /* Many to one reduction */
- aura->err_qint_idx = aura_id % lf->qints;
-
- /* Update pool fields */
- pool->stack_base = mz->iova;
- pool->ena = 1;
- pool->buf_size = block_size / OTX2_ALIGN;
- pool->stack_max_pages = stack_size;
- pool->shift = rte_log2_u32(block_count);
- pool->shift = pool->shift < 8 ? 0 : pool->shift - 8;
- pool->ptr_start = 0;
- pool->ptr_end = ~0;
- pool->stack_caching = 1;
- pool->err_int_ena = BIT(NPA_POOL_ERR_INT_OVFLS);
- pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_RANGE);
- pool->err_int_ena |= BIT(NPA_POOL_ERR_INT_PERR);
-
- /* Many to one reduction */
- pool->err_qint_idx = pool_id % lf->qints;
-
- /* Issue AURA_INIT and POOL_INIT op */
- rc = npa_lf_aura_pool_init(lf->mbox, aura_id, aura, pool);
- if (rc)
- goto stack_mem_free;
-
- *aura_handle = npa_lf_aura_handle_gen(aura_id, lf->base);
-
- /* Update aura count */
- npa_lf_aura_op_cnt_set(*aura_handle, 0, block_count);
- /* Read it back to make sure aura count is updated */
- npa_lf_aura_op_cnt_get(*aura_handle);
-
- return 0;
-
-stack_mem_free:
- rte_memzone_free(mz);
-aura_res_put:
- rte_bitmap_set(lf->npa_bmp, aura_id);
-exit:
- return rc;
-}
-
-static int
-npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
-{
- char name[RTE_MEMZONE_NAMESIZE];
- int aura_id, pool_id, rc;
-
- if (!lf || !aura_handle)
- return NPA_LF_ERR_PARAM;
-
- aura_id = pool_id = npa_lf_aura_handle_to_aura(aura_handle);
- rc = npa_lf_aura_pool_fini(lf->mbox, aura_id, aura_handle);
- rc |= npa_lf_stack_dma_free(lf, name, pool_id);
-
- rte_bitmap_set(lf->npa_bmp, aura_id);
-
- return rc;
-}
-
-static int
-npa_lf_aura_range_update_check(uint64_t aura_handle)
-{
- uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- struct npa_aura_lim *lim = lf->aura_lim;
- __otx2_io struct npa_pool_s *pool;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
-
- req->aura_id = aura_id;
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
- return rc;
- }
-
- pool = &rsp->pool;
-
- if (lim[aura_id].ptr_start != pool->ptr_start ||
- lim[aura_id].ptr_end != pool->ptr_end) {
- otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
- return -ERANGE;
- }
-
- return 0;
-}
-
-static int
-otx2_npa_alloc(struct rte_mempool *mp)
-{
- uint32_t block_size, block_count;
- uint64_t aura_handle = 0;
- struct otx2_npa_lf *lf;
- struct npa_aura_s aura;
- struct npa_pool_s pool;
- size_t padding;
- int rc;
-
- lf = otx2_npa_lf_obj_get();
- if (lf == NULL) {
- rc = -EINVAL;
- goto error;
- }
-
- block_size = mp->elt_size + mp->header_size + mp->trailer_size;
- /*
- * OCTEON TX2 has 8 sets, 41 ways L1D cache, VA<9:7> bits dictate
- * the set selection.
- * Add additional padding to ensure that the element size always
- * occupies odd number of cachelines to ensure even distribution
- * of elements among L1D cache sets.
- */
- padding = ((block_size / RTE_CACHE_LINE_SIZE) % 2) ? 0 :
- RTE_CACHE_LINE_SIZE;
- mp->trailer_size += padding;
- block_size += padding;
-
- block_count = mp->size;
-
- if (block_size % OTX2_ALIGN != 0) {
- otx2_err("Block size should be multiple of 128B");
- rc = -ERANGE;
- goto error;
- }
-
- memset(&aura, 0, sizeof(struct npa_aura_s));
- memset(&pool, 0, sizeof(struct npa_pool_s));
- pool.nat_align = 1;
- pool.buf_offset = 1;
-
- if ((uint32_t)pool.buf_offset * OTX2_ALIGN != mp->header_size) {
- otx2_err("Unsupported mp->header_size=%d", mp->header_size);
- rc = -EINVAL;
- goto error;
- }
-
- /* Use driver specific mp->pool_config to override aura config */
- if (mp->pool_config != NULL)
- memcpy(&aura, mp->pool_config, sizeof(struct npa_aura_s));
-
- rc = npa_lf_aura_pool_pair_alloc(lf, block_size, block_count,
- &aura, &pool, &aura_handle);
- if (rc) {
- otx2_err("Failed to alloc pool or aura rc=%d", rc);
- goto error;
- }
-
- /* Store aura_handle for future queue operations */
- mp->pool_id = aura_handle;
- otx2_npa_dbg("lf=%p block_sz=%d block_count=%d aura_handle=0x%"PRIx64,
- lf, block_size, block_count, aura_handle);
-
- /* Just hold the reference of the object */
- otx2_npa_lf_obj_ref();
- return 0;
-error:
- return rc;
-}
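The padding rule in otx2_npa_alloc() above is worth spelling out: an element spanning an even number of 128-byte cache lines is grown by one line so that consecutive elements start in different L1D sets. A worked sketch of the arithmetic (the 128-byte line size matches OTX2_ALIGN):

#include <assert.h>
#include <stdint.h>

#define LINE 128 /* cache line size on these SoCs, same as OTX2_ALIGN */

/* Trailer padding so an element spans an odd number of cache lines. */
static inline uint32_t
l1d_spread_padding(uint32_t block_size)
{
        return ((block_size / LINE) % 2) ? 0 : LINE;
}

int
main(void)
{
        assert(l1d_spread_padding(256) == LINE); /* 2 lines -> pad to 3 */
        assert(l1d_spread_padding(384) == 0);    /* 3 lines, already odd */
        return 0;
}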
-
-static void
-otx2_npa_free(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
- int rc = 0;
-
- otx2_npa_dbg("lf=%p aura_handle=0x%"PRIx64, lf, mp->pool_id);
- if (lf != NULL)
- rc = npa_lf_aura_pool_pair_free(lf, mp->pool_id);
-
- if (rc)
- otx2_err("Failed to free pool or aura rc=%d", rc);
-
- /* Release the reference of npalf */
- otx2_npa_lf_fini();
-}
-
-static ssize_t
-otx2_npa_calc_mem_size(const struct rte_mempool *mp, uint32_t obj_num,
- uint32_t pg_shift, size_t *min_chunk_size, size_t *align)
-{
- size_t total_elt_sz;
-
- /* Need space for one more obj on each chunk to fulfill
- * alignment requirements.
- */
- total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
- return rte_mempool_op_calc_mem_size_helper(mp, obj_num, pg_shift,
- total_elt_sz, min_chunk_size,
- align);
-}
-
-static uint8_t
-otx2_npa_l1d_way_set_get(uint64_t iova)
-{
- return (iova >> rte_log2_u32(RTE_CACHE_LINE_SIZE)) & 0x7;
-}
-
-static int
-otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
- rte_iova_t iova, size_t len,
- rte_mempool_populate_obj_cb_t *obj_cb, void *obj_cb_arg)
-{
-#define OTX2_L1D_NB_SETS 8
- uint64_t distribution[OTX2_L1D_NB_SETS];
- rte_iova_t start_iova;
- size_t total_elt_sz;
- uint8_t set;
- size_t off;
- int i;
-
- if (iova == RTE_BAD_IOVA)
- return -EINVAL;
-
- total_elt_sz = mp->header_size + mp->elt_size + mp->trailer_size;
-
- /* Align object start address to a multiple of total_elt_sz */
- off = total_elt_sz - ((((uintptr_t)vaddr - 1) % total_elt_sz) + 1);
-
- if (len < off)
- return -EINVAL;
-
- vaddr = (char *)vaddr + off;
- iova += off;
- len -= off;
-
- memset(distribution, 0, sizeof(uint64_t) * OTX2_L1D_NB_SETS);
- start_iova = iova;
- while (start_iova < iova + len) {
- set = otx2_npa_l1d_way_set_get(start_iova + mp->header_size);
- distribution[set]++;
- start_iova += total_elt_sz;
- }
-
- otx2_npa_dbg("iova %"PRIx64", aligned iova %"PRIx64"", iova - off,
- iova);
- otx2_npa_dbg("length %"PRIu64", aligned length %"PRIu64"",
- (uint64_t)(len + off), (uint64_t)len);
- otx2_npa_dbg("element size %"PRIu64"", (uint64_t)total_elt_sz);
- otx2_npa_dbg("requested objects %"PRIu64", possible objects %"PRIu64"",
- (uint64_t)max_objs, (uint64_t)(len / total_elt_sz));
- otx2_npa_dbg("L1D set distribution :");
- for (i = 0; i < OTX2_L1D_NB_SETS; i++)
- otx2_npa_dbg("set[%d] : objects : %"PRIu64"", i,
- distribution[i]);
-
- npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
-
- if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
- return -EBUSY;
-
- return rte_mempool_op_populate_helper(mp,
- RTE_MEMPOOL_POPULATE_F_ALIGN_OBJ,
- max_objs, vaddr, iova, len,
- obj_cb, obj_cb_arg);
-}
-
-static struct rte_mempool_ops otx2_npa_ops = {
- .name = "octeontx2_npa",
- .alloc = otx2_npa_alloc,
- .free = otx2_npa_free,
- .enqueue = otx2_npa_enq,
- .get_count = otx2_npa_get_count,
- .calc_mem_size = otx2_npa_calc_mem_size,
- .populate = otx2_npa_populate,
-#if defined(RTE_ARCH_ARM64)
- .dequeue = otx2_npa_deq_arm64,
-#else
- .dequeue = otx2_npa_deq,
-#endif
-};
-
-RTE_MEMPOOL_REGISTER_OPS(otx2_npa_ops);
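With RTE_MEMPOOL_REGISTER_OPS() in place, an application picks these handlers either implicitly (when the driver advertises them as the platform mempool ops, as checked in otx2_nix_rx_queue_setup() later in this patch) or explicitly by name. A minimal usage sketch, assuming an initialized EAL:

#include <rte_mbuf.h>

/* Create a pktmbuf pool explicitly backed by the octeontx2_npa handlers. */
static struct rte_mempool *
create_npa_backed_pool(void)
{
        return rte_pktmbuf_pool_create_by_ops("pkt_pool",
                                              8192, /* number of mbufs */
                                              256,  /* per-lcore cache */
                                              0,    /* priv size */
                                              RTE_MBUF_DEFAULT_BUF_SIZE,
                                              SOCKET_ID_ANY,
                                              "octeontx2_npa");
}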
diff --git a/drivers/mempool/octeontx2/version.map b/drivers/mempool/octeontx2/version.map
deleted file mode 100644
index e6887ceb8f..0000000000
--- a/drivers/mempool/octeontx2/version.map
+++ /dev/null
@@ -1,8 +0,0 @@
-INTERNAL {
- global:
-
- otx2_npa_lf_fini;
- otx2_npa_lf_init;
-
- local: *;
-};
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index f8f3d3895e..d34bc6898f 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -579,6 +579,21 @@ cn9k_nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
}

static const struct rte_pci_id cn9k_pci_nix_map[] = {
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_PF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KA, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KB, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KC, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KD, PCI_DEVID_CNXK_RVU_AF_VF),
+ CNXK_PCI_ID(PCI_SUBSYSTEM_DEVID_CN9KE, PCI_DEVID_CNXK_RVU_AF_VF),
{
.vendor_id = 0,
},
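For readers unfamiliar with the CNXK_PCI_ID() helper used in this table: each entry expands to a complete rte_pci_id match. The sketch below shows the shape of one expanded entry; the numeric IDs are illustrative stand-ins, not values taken from this patch:

#include <rte_bus_pci.h>

/* Shape of one expanded CNXK_PCI_ID(subsys, dev) entry (values illustrative). */
static const struct rte_pci_id example_nix_id = {
        .class_id = RTE_CLASS_ANY_ID,
        .vendor_id = 0x177d,           /* Cavium/Marvell */
        .device_id = 0xa063,           /* an RVU PF device id */
        .subsystem_vendor_id = 0x177d,
        .subsystem_device_id = 0xb200, /* a CN9xx board variant */
};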
diff --git a/drivers/net/meson.build b/drivers/net/meson.build
index 2355d1cde8..e35652fe63 100644
--- a/drivers/net/meson.build
+++ b/drivers/net/meson.build
@@ -45,7 +45,6 @@ drivers = [
'ngbe',
'null',
'octeontx',
- 'octeontx2',
'octeontx_ep',
'pcap',
'pfe',
diff --git a/drivers/net/octeontx2/meson.build b/drivers/net/octeontx2/meson.build
deleted file mode 100644
index ab15844cbc..0000000000
--- a/drivers/net/octeontx2/meson.build
+++ /dev/null
@@ -1,47 +0,0 @@
-# SPDX-License-Identifier: BSD-3-Clause
-# Copyright(C) 2019 Marvell International Ltd.
-#
-
-if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
- build = false
- reason = 'only supported on 64-bit Linux'
- subdir_done()
-endif
-
-sources = files(
- 'otx2_rx.c',
- 'otx2_tx.c',
- 'otx2_tm.c',
- 'otx2_rss.c',
- 'otx2_mac.c',
- 'otx2_ptp.c',
- 'otx2_flow.c',
- 'otx2_link.c',
- 'otx2_vlan.c',
- 'otx2_stats.c',
- 'otx2_mcast.c',
- 'otx2_lookup.c',
- 'otx2_ethdev.c',
- 'otx2_flow_ctrl.c',
- 'otx2_flow_dump.c',
- 'otx2_flow_parse.c',
- 'otx2_flow_utils.c',
- 'otx2_ethdev_irq.c',
- 'otx2_ethdev_ops.c',
- 'otx2_ethdev_sec.c',
- 'otx2_ethdev_debug.c',
- 'otx2_ethdev_devargs.c',
-)
-
-deps += ['bus_pci', 'cryptodev', 'eventdev', 'security']
-deps += ['common_octeontx2', 'mempool_octeontx2']
-
-extra_flags = ['-flax-vector-conversions']
-foreach flag: extra_flags
- if cc.has_argument(flag)
- cflags += flag
- endif
-endforeach
-
-includes += include_directories('../../common/cpt')
-includes += include_directories('../../crypto/octeontx2')
diff --git a/drivers/net/octeontx2/otx2_ethdev.c b/drivers/net/octeontx2/otx2_ethdev.c
deleted file mode 100644
index 4f1c0b98de..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev.c
+++ /dev/null
@@ -1,2814 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <ethdev_pci.h>
-#include <rte_io.h>
-#include <rte_malloc.h>
-#include <rte_mbuf.h>
-#include <rte_mbuf_pool_ops.h>
-#include <rte_mempool.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-
-static inline uint64_t
-nix_get_rx_offload_capa(struct otx2_eth_dev *dev)
-{
- uint64_t capa = NIX_RX_OFFLOAD_CAPA;
-
- if (otx2_dev_is_vf(dev) ||
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG)
- capa &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
-
- return capa;
-}
-
-static inline uint64_t
-nix_get_tx_offload_capa(struct otx2_eth_dev *dev)
-{
- uint64_t capa = NIX_TX_OFFLOAD_CAPA;
-
- /* TSO not supported for earlier chip revisions */
- if (otx2_dev_is_96xx_A0(dev) || otx2_dev_is_95xx_Ax(dev))
- capa &= ~(RTE_ETH_TX_OFFLOAD_TCP_TSO |
- RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO);
- return capa;
-}
-
-static const struct otx2_dev_ops otx2_dev_ops = {
- .link_status_update = otx2_eth_dev_link_status_update,
- .ptp_info_update = otx2_eth_dev_ptp_info_update,
- .link_status_get = otx2_eth_dev_link_status_get,
-};
-
-static int
-nix_lf_alloc(struct otx2_eth_dev *dev, uint32_t nb_rxq, uint32_t nb_txq)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lf_alloc_req *req;
- struct nix_lf_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_lf_alloc(mbox);
- req->rq_cnt = nb_rxq;
- req->sq_cnt = nb_txq;
- req->cq_cnt = nb_rxq;
- /* XQE_SZ should be in sync with NIX_CQ_ENTRY_SZ */
- RTE_BUILD_BUG_ON(NIX_CQ_ENTRY_SZ != 128);
- req->xqe_sz = NIX_XQESZ_W16;
- req->rss_sz = dev->rss_info.rss_size;
- req->rss_grps = NIX_RSS_GRPS;
- req->npa_func = otx2_npa_pf_func_get();
- req->sso_func = otx2_sso_pf_func_get();
- req->rx_cfg = BIT_ULL(35 /* DIS_APAD */);
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM)) {
- req->rx_cfg |= BIT_ULL(37 /* CSUM_OL4 */);
- req->rx_cfg |= BIT_ULL(36 /* CSUM_IL4 */);
- }
- req->rx_cfg |= (BIT_ULL(32 /* DROP_RE */) |
- BIT_ULL(33 /* Outer L2 Length */) |
- BIT_ULL(38 /* Inner L4 UDP Length */) |
- BIT_ULL(39 /* Inner L3 Length */) |
- BIT_ULL(40 /* Outer L4 UDP Length */) |
- BIT_ULL(41 /* Outer L3 Length */));
-
- if (dev->rss_tag_as_xor == 0)
- req->flags = NIX_LF_RSS_TAG_LSB_AS_ADDER;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->sqb_size = rsp->sqb_size;
- dev->tx_chan_base = rsp->tx_chan_base;
- dev->rx_chan_base = rsp->rx_chan_base;
- dev->rx_chan_cnt = rsp->rx_chan_cnt;
- dev->tx_chan_cnt = rsp->tx_chan_cnt;
- dev->lso_tsov4_idx = rsp->lso_tsov4_idx;
- dev->lso_tsov6_idx = rsp->lso_tsov6_idx;
- dev->lf_tx_stats = rsp->lf_tx_stats;
- dev->lf_rx_stats = rsp->lf_rx_stats;
- dev->cints = rsp->cints;
- dev->qints = rsp->qints;
- dev->npc_flow.channel = dev->rx_chan_base;
- dev->ptp_en = rsp->hw_rx_tstamp_en;
-
- return 0;
-}
-
-static int
-nix_lf_switch_header_type_enable(struct otx2_eth_dev *dev, bool enable)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct npc_set_pkind *req;
- struct msg_resp *rsp;
- int rc;
-
- if (dev->npc_flow.switch_header_type == 0)
- return 0;
-
- /* Notify AF about higig2 config */
- req = otx2_mbox_alloc_msg_npc_set_pkind(mbox);
- req->mode = dev->npc_flow.switch_header_type;
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_CHLEN90B_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_CH_LEN_24B) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_CHLEN24B_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_EXDSA) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_EXDSA_PKIND;
- } else if (dev->npc_flow.switch_header_type ==
- OTX2_PRIV_FLAGS_VLAN_EXDSA) {
- req->mode = OTX2_PRIV_FLAGS_CUSTOM;
- req->pkind = NPC_RX_VLAN_EXDSA_PKIND;
- }
-
- if (enable == 0)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
- req->dir = PKIND_RX;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
- req = otx2_mbox_alloc_msg_npc_set_pkind(mbox);
- req->mode = dev->npc_flow.switch_header_type;
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B ||
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_24B)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
-
- if (enable == 0)
- req->mode = OTX2_PRIV_FLAGS_DEFAULT;
- req->dir = PKIND_TX;
- return otx2_mbox_process_msg(mbox, (void *)&rsp);
-}
-
-static int
-nix_lf_free(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lf_free_req *req;
- struct ndc_sync_op *ndc_req;
- int rc;
-
- /* Sync NDC-NIX for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->nix_lf_tx_sync = 1;
- ndc_req->nix_lf_rx_sync = 1;
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Error on NDC-NIX-[TX, RX] LF sync, rc %d", rc);
-
- req = otx2_mbox_alloc_msg_nix_lf_free(mbox);
- /* Let the AF driver free all of this NIX LF's
- * NPC entries allocated using the NPC mailbox.
- */
- req->flags = 0;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_start(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_rxtx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npc_rx_enable(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- otx2_mbox_alloc_msg_nix_lf_start_rx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-npc_rx_disable(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- otx2_mbox_alloc_msg_nix_lf_stop_rx(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cgx_start_link_event(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_start_linkevents(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-cgx_intlbk_enable(struct otx2_eth_dev *dev, bool en)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (en && otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (en)
- otx2_mbox_alloc_msg_cgx_intlbk_enable(mbox);
- else
- otx2_mbox_alloc_msg_cgx_intlbk_disable(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cgx_stop_link_event(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_stop_linkevents(mbox);
-
- return otx2_mbox_process(mbox);
-}
-
-static inline void
-nix_rx_queue_reset(struct otx2_eth_rxq *rxq)
-{
- rxq->head = 0;
- rxq->available = 0;
-}
-
-static inline uint32_t
-nix_qsize_to_val(enum nix_q_size_e qsize)
-{
- return (16UL << (qsize * 2));
-}
-
-static inline enum nix_q_size_e
-nix_qsize_clampup_get(struct otx2_eth_dev *dev, uint32_t val)
-{
- int i;
-
- if (otx2_ethdev_fixup_is_min_4k_q(dev))
- i = nix_q_size_4K;
- else
- i = nix_q_size_16;
-
- for (; i < nix_q_size_max; i++)
- if (val <= nix_qsize_to_val(i))
- break;
-
- if (i >= nix_q_size_max)
- i = nix_q_size_max - 1;
-
- return i;
-}
-
-static int
-nix_cq_rq_init(struct rte_eth_dev *eth_dev, struct otx2_eth_dev *dev,
- uint16_t qid, struct otx2_eth_rxq *rxq, struct rte_mempool *mp)
-{
- struct otx2_mbox *mbox = dev->mbox;
- const struct rte_memzone *rz;
- uint32_t ring_size, cq_size;
- struct nix_aq_enq_req *aq;
- uint16_t first_skip;
- int rc;
-
- cq_size = rxq->qlen;
- ring_size = cq_size * NIX_CQ_ENTRY_SZ;
- rz = rte_eth_dma_zone_reserve(eth_dev, "cq", qid, ring_size,
- NIX_CQ_ALIGN, dev->node);
- if (rz == NULL) {
- otx2_err("Failed to allocate mem for cq hw ring");
- return -ENOMEM;
- }
- memset(rz->addr, 0, rz->len);
- rxq->desc = (uintptr_t)rz->addr;
- rxq->qmask = cq_size - 1;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_INIT;
-
- aq->cq.ena = 1;
- aq->cq.caching = 1;
- aq->cq.qsize = rxq->qsize;
- aq->cq.base = rz->iova;
- aq->cq.avg_level = 0xff;
- aq->cq.cq_err_int_ena = BIT(NIX_CQERRINT_CQE_FAULT);
- aq->cq.cq_err_int_ena |= BIT(NIX_CQERRINT_DOOR_ERR);
-
- /* Many to one reduction */
- aq->cq.qint_idx = qid % dev->qints;
- /* Map CQ0 [RQ0] to CINT0 and so on till max 64 irqs */
- aq->cq.cint_idx = qid;
-
- if (otx2_ethdev_fixup_is_limit_cq_full(dev)) {
- const float rx_cq_skid = NIX_CQ_FULL_ERRATA_SKID;
- uint16_t min_rx_drop;
-
- min_rx_drop = ceil(rx_cq_skid / (float)cq_size);
- aq->cq.drop = min_rx_drop;
- aq->cq.drop_ena = 1;
- rxq->cq_drop = min_rx_drop;
- } else {
- rxq->cq_drop = NIX_CQ_THRESH_LEVEL;
- aq->cq.drop = rxq->cq_drop;
- aq->cq.drop_ena = 1;
- }
-
- /* TX pause frames enable flowctrl on RX side */
- if (dev->fc_info.tx_pause) {
- /* Single bpid is allocated for all rx channels for now */
- aq->cq.bpid = dev->fc_info.bpid[0];
- aq->cq.bp = rxq->cq_drop;
- aq->cq.bp_ena = 1;
- }
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to init cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_INIT;
-
- aq->rq.sso_ena = 0;
-
- if (rxq->offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
- aq->rq.ipsech_ena = 1;
-
- aq->rq.cq = qid; /* RQ to CQ 1:1 mapped */
- aq->rq.spb_ena = 0;
- aq->rq.lpb_aura = npa_lf_aura_handle_to_aura(mp->pool_id);
- first_skip = (sizeof(struct rte_mbuf));
- first_skip += RTE_PKTMBUF_HEADROOM;
- first_skip += rte_pktmbuf_priv_size(mp);
- rxq->data_off = first_skip;
-
- first_skip /= 8; /* Expressed in number of dwords */
- aq->rq.first_skip = first_skip;
- aq->rq.later_skip = (sizeof(struct rte_mbuf) / 8);
- aq->rq.flow_tagw = 32; /* 32-bits */
- aq->rq.lpb_sizem1 = mp->elt_size / 8;
- aq->rq.lpb_sizem1 -= 1; /* Expressed in size minus one */
- aq->rq.ena = 1;
- aq->rq.pb_caching = 0x2; /* First cache aligned block to LLC */
- aq->rq.xqe_imm_size = 0; /* No pkt data copy to CQE */
- aq->rq.rq_int_ena = 0;
- /* Many to one reduction */
- aq->rq.qint_idx = qid % dev->qints;
-
- aq->rq.xqe_drop_ena = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to init rq context");
- return rc;
- }
-
- if (dev->lock_rx_ctx) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_LOCK;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- otx2_err("Failed to LOCK rq context");
- return -ENOMEM;
- }
- }
- aq->qidx = qid;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_LOCK;
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to LOCK rq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-static int
-nix_rq_enb_dis(struct rte_eth_dev *eth_dev,
- struct otx2_eth_rxq *rxq, const bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
-
- /* Pkts will be dropped silently if RQ is disabled */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->rq.ena = enb;
- aq->rq_mask.ena = ~(aq->rq_mask.ena);
-
- return otx2_mbox_process(mbox);
-}
-
-static int
-nix_cq_rq_uninit(struct rte_eth_dev *eth_dev, struct otx2_eth_rxq *rxq)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- int rc;
-
- /* RQ is already disabled */
- /* Disable CQ */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->cq.ena = 0;
- aq->cq_mask.ena = ~(aq->cq_mask.ena);
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to disable cq context");
- return rc;
- }
-
- if (dev->lock_rx_ctx) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK cq context");
- return rc;
- }
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
- otx2_err("Failed to UNLOCK rq context");
- return -ENOMEM;
- }
- }
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
- rc = otx2_mbox_process(mbox);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK rq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-static inline int
-nix_get_data_off(struct otx2_eth_dev *dev)
-{
- return otx2_ethdev_is_ptp_en(dev) ? NIX_TIMESYNC_RX_OFFSET : 0;
-}
-
-uint64_t
-otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id)
-{
- struct rte_mbuf mb_def;
- uint64_t *tmp;
-
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) % 8 != 0);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, refcnt) -
- offsetof(struct rte_mbuf, data_off) != 2);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, nb_segs) -
- offsetof(struct rte_mbuf, data_off) != 4);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, port) -
- offsetof(struct rte_mbuf, data_off) != 6);
- mb_def.nb_segs = 1;
- mb_def.data_off = RTE_PKTMBUF_HEADROOM + nix_get_data_off(dev);
- mb_def.port = port_id;
- rte_mbuf_refcnt_set(&mb_def, 1);
-
- /* Prevent compiler reordering: rearm_data covers previous fields */
- rte_compiler_barrier();
- tmp = (uint64_t *)&mb_def.rearm_data;
-
- return *tmp;
-}
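The build-time assertions above guarantee that data_off, refcnt, nb_segs and port occupy the 8 bytes starting at rte_mbuf::rearm_data, so the value returned here can re-initialize all four fields of an mbuf with a single store on the receive path. A sketch of the consumer side:

#include <rte_mbuf.h>

/* Re-arm one mbuf from the precomputed template: one 64-bit store
 * rewrites data_off, refcnt, nb_segs and port together.
 */
static inline void
mbuf_rearm(struct rte_mbuf *m, uint64_t mbuf_initializer)
{
        *(uint64_t *)&m->rearm_data = mbuf_initializer;
}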
-
-static void
-otx2_nix_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
-{
- struct otx2_eth_rxq *rxq = dev->data->rx_queues[qid];
-
- if (!rxq)
- return;
-
- otx2_nix_dbg("Releasing rxq %u", rxq->rq);
- nix_cq_rq_uninit(rxq->eth_dev, rxq);
- rte_free(rxq);
- dev->data->rx_queues[qid] = NULL;
-}
-
-static int
-otx2_nix_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t rq,
- uint16_t nb_desc, unsigned int socket,
- const struct rte_eth_rxconf *rx_conf,
- struct rte_mempool *mp)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_mempool_ops *ops;
- struct otx2_eth_rxq *rxq;
- const char *platform_ops;
- enum nix_q_size_e qsize;
- uint64_t offloads;
- int rc;
-
- rc = -EINVAL;
-
- /* Compile-time check: all fast path elements must fit in one cache line */
- RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_rxq, slow_path_start) >= 128);
-
- /* Sanity checks */
- if (rx_conf->rx_deferred_start == 1) {
- otx2_err("Deferred Rx start is not supported");
- goto fail;
- }
-
- platform_ops = rte_mbuf_platform_mempool_ops();
- /* This driver needs octeontx2_npa mempool ops to work */
- ops = rte_mempool_get_ops(mp->ops_index);
- if (strncmp(ops->name, platform_ops, RTE_MEMPOOL_OPS_NAMESIZE)) {
- otx2_err("mempool ops should be of octeontx2_npa type");
- goto fail;
- }
-
- if (mp->pool_id == 0) {
- otx2_err("Invalid pool_id");
- goto fail;
- }
-
- /* Free memory prior to re-allocation if needed */
- if (eth_dev->data->rx_queues[rq] != NULL) {
- otx2_nix_dbg("Freeing memory prior to re-allocation %d", rq);
- otx2_nix_rx_queue_release(eth_dev, rq);
- rte_eth_dma_zone_free(eth_dev, "cq", rq);
- }
-
- offloads = rx_conf->offloads | eth_dev->data->dev_conf.rxmode.offloads;
- dev->rx_offloads |= offloads;
-
- /* Find the CQ queue size */
- qsize = nix_qsize_clampup_get(dev, nb_desc);
- /* Allocate rxq memory */
- rxq = rte_zmalloc_socket("otx2 rxq", sizeof(*rxq), OTX2_ALIGN, socket);
- if (rxq == NULL) {
- otx2_err("Failed to allocate rq=%d", rq);
- rc = -ENOMEM;
- goto fail;
- }
-
- rxq->eth_dev = eth_dev;
- rxq->rq = rq;
- rxq->cq_door = dev->base + NIX_LF_CQ_OP_DOOR;
- rxq->cq_status = (int64_t *)(dev->base + NIX_LF_CQ_OP_STATUS);
- rxq->wdata = (uint64_t)rq << 32;
- rxq->aura = npa_lf_aura_handle_to_aura(mp->pool_id);
- rxq->mbuf_initializer = otx2_nix_rxq_mbuf_setup(dev,
- eth_dev->data->port_id);
- rxq->offloads = offloads;
- rxq->pool = mp;
- rxq->qlen = nix_qsize_to_val(qsize);
- rxq->qsize = qsize;
- rxq->lookup_mem = otx2_nix_fastpath_lookup_mem_get();
- rxq->tstamp = &dev->tstamp;
-
- eth_dev->data->rx_queues[rq] = rxq;
-
- /* Alloc completion queue */
- rc = nix_cq_rq_init(eth_dev, dev, rq, rxq, mp);
- if (rc) {
- otx2_err("Failed to allocate rxq=%u", rq);
- goto free_rxq;
- }
-
- rxq->qconf.socket_id = socket;
- rxq->qconf.nb_desc = nb_desc;
- rxq->qconf.mempool = mp;
- memcpy(&rxq->qconf.conf.rx, rx_conf, sizeof(struct rte_eth_rxconf));
-
- nix_rx_queue_reset(rxq);
- otx2_nix_dbg("rq=%d pool=%s qsize=%d nb_desc=%d->%d",
- rq, mp->name, qsize, nb_desc, rxq->qlen);
-
- eth_dev->data->rx_queue_state[rq] = RTE_ETH_QUEUE_STATE_STOPPED;
-
- /* Calculate delta and freq mult between the PTP HI clock and tsc.
- * These are needed to derive the raw clock value from the tsc
- * counter; the read_clock eth op returns that raw clock value.
- */
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
- otx2_ethdev_is_ptp_en(dev)) {
- rc = otx2_nix_raw_clock_tsc_conv(dev);
- if (rc) {
- otx2_err("Failed to calculate delta and freq mult");
- goto fail;
- }
- }
-
- /* Setup scatter mode if needed by jumbo */
- otx2_nix_enable_mseg_on_jumbo(rxq);
-
- return 0;
-
-free_rxq:
- otx2_nix_rx_queue_release(eth_dev, rq);
-fail:
- return rc;
-}
-
-static inline uint8_t
-nix_sq_max_sqe_sz(struct otx2_eth_txq *txq)
-{
- /*
- * A maximum of three segments can be supported with W8; choose
- * NIX_MAXSQESZ_W16 for multi-segment offload.
- */
- if (txq->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- return NIX_MAXSQESZ_W16;
- else
- return NIX_MAXSQESZ_W8;
-}
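
From the SQB accounting in nix_alloc_sqb_pool() below, a W8 SQE is
8 dwords (64 B) and a W16 SQE is 16 dwords (128 B); W8 leaves room for
at most three scatter/gather segments, hence the switch to W16 when
multi-segment Tx is enabled. A sketch of that arithmetic, assuming
1 dword = 8 bytes (illustrative helper, not part of the driver):

    /* Byte size of an SQE for a given max SQE size enum */
    static inline unsigned int nix_sqe_bytes(uint8_t max_sqe_sz)
    {
            return (max_sqe_sz == NIX_MAXSQESZ_W16 ? 16 : 8) * 8;
    }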
-
-static uint16_t
-nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct rte_eth_conf *conf = &data->dev_conf;
- struct rte_eth_rxmode *rxmode = &conf->rxmode;
- uint16_t flags = 0;
-
- if (rxmode->mq_mode == RTE_ETH_MQ_RX_RSS &&
- (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_RSS_HASH))
- flags |= NIX_RX_OFFLOAD_RSS_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_TCP_CKSUM |
- RTE_ETH_RX_OFFLOAD_UDP_CKSUM))
- flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
- RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM))
- flags |= NIX_RX_OFFLOAD_CHECKSUM_F;
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
- flags |= NIX_RX_MULTI_SEG_F;
-
- if (dev->rx_offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP))
- flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
-
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
- flags |= NIX_RX_OFFLOAD_TSTAMP_F;
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY)
- flags |= NIX_RX_OFFLOAD_SECURITY_F;
-
- if (!dev->ptype_disable)
- flags |= NIX_RX_OFFLOAD_PTYPE_F;
-
- return flags;
-}
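
For reference, an application-side configuration that would light up
NIX_RX_OFFLOAD_RSS_F and NIX_RX_OFFLOAD_CHECKSUM_F through the mapping
above (a sketch using the generic ethdev API; queue counts are examples
and error handling is elided):

    struct rte_eth_conf port_conf = { 0 };
    uint16_t port_id = 0;

    port_conf.rxmode.mq_mode = RTE_ETH_MQ_RX_RSS;
    port_conf.rxmode.offloads = RTE_ETH_RX_OFFLOAD_RSS_HASH |
                                RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
                                RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
    /* nix_rx_offload_flags() is evaluated during dev_configure */
    rte_eth_dev_configure(port_id, 4, 4, &port_conf);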
-
-static uint16_t
-nix_tx_offload_flags(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t conf = dev->tx_offloads;
- uint16_t flags = 0;
-
- /* Fastpath is dependent on these enums */
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_TCP_CKSUM != (1ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_SCTP_CKSUM != (2ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_UDP_CKSUM != (3ULL << 52));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IP_CKSUM != (1ULL << 54));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_IPV4 != (1ULL << 55));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IP_CKSUM != (1ULL << 58));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV4 != (1ULL << 59));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_IPV6 != (1ULL << 60));
- RTE_BUILD_BUG_ON(RTE_MBUF_F_TX_OUTER_UDP_CKSUM != (1ULL << 41));
- RTE_BUILD_BUG_ON(RTE_MBUF_L2_LEN_BITS != 7);
- RTE_BUILD_BUG_ON(RTE_MBUF_L3_LEN_BITS != 9);
- RTE_BUILD_BUG_ON(RTE_MBUF_OUTL2_LEN_BITS != 7);
- RTE_BUILD_BUG_ON(RTE_MBUF_OUTL3_LEN_BITS != 9);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, data_off) !=
- offsetof(struct rte_mbuf, buf_iova) + 8);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, ol_flags) !=
- offsetof(struct rte_mbuf, buf_iova) + 16);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, pkt_len) !=
- offsetof(struct rte_mbuf, ol_flags) + 12);
- RTE_BUILD_BUG_ON(offsetof(struct rte_mbuf, tx_offload) !=
- offsetof(struct rte_mbuf, pool) + 2 * sizeof(void *));
-
- if (conf & RTE_ETH_TX_OFFLOAD_VLAN_INSERT ||
- conf & RTE_ETH_TX_OFFLOAD_QINQ_INSERT)
- flags |= NIX_TX_OFFLOAD_VLAN_QINQ_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM)
- flags |= NIX_TX_OFFLOAD_OL3_OL4_CSUM_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_TCP_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
- conf & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM)
- flags |= NIX_TX_OFFLOAD_L3_L4_CSUM_F;
-
- if (!(conf & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE))
- flags |= NIX_TX_OFFLOAD_MBUF_NOFF_F;
-
- if (conf & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- flags |= NIX_TX_MULTI_SEG_F;
-
- /* Enable Inner checksum for TSO */
- if (conf & RTE_ETH_TX_OFFLOAD_TCP_TSO)
- flags |= (NIX_TX_OFFLOAD_TSO_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F);
-
- /* Enable Inner and Outer checksum for Tunnel TSO */
- if (conf & (RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO |
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO))
- flags |= (NIX_TX_OFFLOAD_TSO_F |
- NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F);
-
- if (conf & RTE_ETH_TX_OFFLOAD_SECURITY)
- flags |= NIX_TX_OFFLOAD_SECURITY_F;
-
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
- flags |= NIX_TX_OFFLOAD_TSTAMP_F;
-
- return flags;
-}
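
The RTE_BUILD_BUG_ON() block above freezes mbuf flag values and field
layout at compile time so the vector Tx path can rely on hard-coded
shifts and offsets. A sketch of the kind of access these asserts enable
(an assumed illustration, not the driver's actual fast-path code, and
valid only for little-endian targets):

    #include <rte_mbuf.h>

    /* l2_len (7 bits) and l3_len (9 bits) sit at fixed positions in the
     * 64-bit tx_offload word, so plain shifts/masks recover them. */
    static inline uint16_t mbuf_l2_len(const struct rte_mbuf *m)
    {
            return m->tx_offload & ((1ULL << RTE_MBUF_L2_LEN_BITS) - 1);
    }

    static inline uint16_t mbuf_l3_len(const struct rte_mbuf *m)
    {
            return (m->tx_offload >> RTE_MBUF_L2_LEN_BITS) &
                   ((1ULL << RTE_MBUF_L3_LEN_BITS) - 1);
    }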
-
-static int
-nix_sqb_lock(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_LOCK;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(npa_lf->mbox, 0);
- rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to LOCK AURA context");
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- otx2_err("Failed to LOCK POOL context");
- return -ENOMEM;
- }
- }
-
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(npa_lf->mbox);
- if (rc < 0) {
- otx2_err("Unable to lock POOL in NDC");
- return rc;
- }
-
- return 0;
-}
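
nix_sqb_lock() and nix_sqb_unlock() repeat the same "allocation failed,
flush the mailbox and retry once" dance. The pattern as a hypothetical
helper (npa_aq_alloc_retry() is not part of the driver; it only
restates the logic above):

    /* Allocate an NPA AQ message, flushing the shared mailbox once if
     * it is full. Returns NULL if allocation fails even after flush. */
    static struct npa_aq_enq_req *
    npa_aq_alloc_retry(struct otx2_npa_lf *npa_lf)
    {
            struct npa_aq_enq_req *req;

            req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
            if (req)
                    return req;

            /* Mailbox full: send what is queued, wait, retry once */
            otx2_mbox_msg_send(npa_lf->mbox, 0);
            if (otx2_mbox_wait_for_rsp(npa_lf->mbox, 0) < 0)
                    return NULL;

            return otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
    }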
-
-static int
-nix_sqb_unlock(struct rte_mempool *mp)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_UNLOCK;
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(npa_lf->mbox, 0);
- rc = otx2_mbox_wait_for_rsp(npa_lf->mbox, 0);
- if (rc < 0) {
- otx2_err("Failed to UNLOCK AURA context");
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- if (!req) {
- otx2_err("Failed to UNLOCK POOL context");
- return -ENOMEM;
- }
- }
- req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- req->ctype = NPA_AQ_CTYPE_POOL;
- req->op = NPA_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(npa_lf->mbox);
- if (rc < 0) {
- otx2_err("Unable to UNLOCK AURA in NDC");
- return rc;
- }
-
- return 0;
-}
-
-void
-otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq)
-{
- struct rte_pktmbuf_pool_private *mbp_priv;
- struct rte_eth_dev *eth_dev;
- struct otx2_eth_dev *dev;
- uint32_t buffsz;
-
- eth_dev = rxq->eth_dev;
- dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Get rx buffer size */
- mbp_priv = rte_mempool_get_priv(rxq->pool);
- buffsz = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
-
- if (eth_dev->data->mtu + (uint32_t)NIX_L2_OVERHEAD > buffsz) {
- dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_SCATTER;
- dev->tx_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
-
- /* Update the rx[tx]_offload_flags to reflect the change
- * in rx[tx]_offloads.
- */
- dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
- }
-}
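
A worked example of the check above with DPDK's default mbuf sizing
(numbers are illustrative, not mandated by the driver):

    /* RTE_MBUF_DEFAULT_BUF_SIZE = 2176 B, RTE_PKTMBUF_HEADROOM = 128 B:
     * buffsz = 2176 - 128 = 2048
     * NIX_L2_OVERHEAD = 14 (ETH) + 4 (FCS) + 8 (2 VLANs) = 26
     * => scatter/multi-seg is forced for any MTU above 2048 - 26 = 2022
     */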
-
-static int
-nix_sq_init(struct otx2_eth_txq *txq)
-{
- struct otx2_eth_dev *dev = txq->dev;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *sq;
- uint32_t rr_quantum;
- uint16_t smq;
- int rc;
-
- if (txq->sqb_pool->pool_id == 0)
- return -EINVAL;
-
- rc = otx2_nix_tm_get_leaf_data(dev, txq->sq, &rr_quantum, &smq);
- if (rc) {
- otx2_err("Failed to get sq->smq(leaf node), rc=%d", rc);
- return rc;
- }
-
- sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- sq->qidx = txq->sq;
- sq->ctype = NIX_AQ_CTYPE_SQ;
- sq->op = NIX_AQ_INSTOP_INIT;
- sq->sq.max_sqe_size = nix_sq_max_sqe_sz(txq);
-
- sq->sq.smq = smq;
- sq->sq.smq_rr_quantum = rr_quantum;
- sq->sq.default_chan = dev->tx_chan_base;
- sq->sq.sqe_stype = NIX_STYPE_STF;
- sq->sq.ena = 1;
- if (sq->sq.max_sqe_size == NIX_MAXSQESZ_W8)
- sq->sq.sqe_stype = NIX_STYPE_STP;
- sq->sq.sqb_aura =
- npa_lf_aura_handle_to_aura(txq->sqb_pool->pool_id);
- sq->sq.sq_int_ena = BIT(NIX_SQINT_LMT_ERR);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_SQB_ALLOC_FAIL);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_SEND_ERR);
- sq->sq.sq_int_ena |= BIT(NIX_SQINT_MNQ_ERR);
-
- /* Many to one reduction */
- sq->sq.qint_idx = txq->sq % dev->qints;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0)
- return rc;
-
- if (dev->lock_tx_ctx) {
- sq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- sq->qidx = txq->sq;
- sq->ctype = NIX_AQ_CTYPE_SQ;
- sq->op = NIX_AQ_INSTOP_LOCK;
-
- rc = otx2_mbox_process(mbox);
- }
-
- return rc;
-}
-
-static int
-nix_sq_uninit(struct otx2_eth_txq *txq)
-{
- struct otx2_eth_dev *dev = txq->dev;
- struct otx2_mbox *mbox = dev->mbox;
- struct ndc_sync_op *ndc_req;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- uint16_t sqes_per_sqb;
- void *sqb_buf;
- int rc, count;
-
- otx2_nix_dbg("Cleaning up sq %u", txq->sq);
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Check if sq is already cleaned up */
- if (!rsp->sq.ena)
- return 0;
-
- /* Disable sq */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- aq->sq_mask.ena = ~aq->sq_mask.ena;
- aq->sq.ena = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- if (dev->lock_tx_ctx) {
- /* Unlock sq */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_UNLOCK;
-
- rc = otx2_mbox_process(mbox);
- if (rc < 0)
- return rc;
-
- nix_sqb_unlock(txq->sqb_pool);
- }
-
- /* Read the SQ and free its SQBs */
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = txq->sq;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (aq->sq.smq_pend)
- otx2_err("SQ has pending sqe's");
-
- count = aq->sq.sqb_count;
- sqes_per_sqb = 1 << txq->sqes_per_sqb_log2;
- /* Free the SQBs that are in use */
- sqb_buf = (void *)rsp->sq.head_sqb;
- while (count) {
- void *next_sqb;
-
- next_sqb = *(void **)((uintptr_t)sqb_buf + (uint32_t)
- ((sqes_per_sqb - 1) *
- nix_sq_max_sqe_sz(txq)));
- npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
- (uint64_t)sqb_buf);
- sqb_buf = next_sqb;
- count--;
- }
-
- /* Free the next-to-use SQB */
- if (rsp->sq.next_sqb)
- npa_lf_aura_op_free(txq->sqb_pool->pool_id, 1,
- rsp->sq.next_sqb);
-
- /* Sync NDC-NIX-TX for LF */
- ndc_req = otx2_mbox_alloc_msg_ndc_sync_op(mbox);
- ndc_req->nix_lf_tx_sync = 1;
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Error on NDC-NIX-TX LF sync, rc %d", rc);
-
- return rc;
-}
-
-static int
-nix_sqb_aura_limit_cfg(struct rte_mempool *mp, uint16_t nb_sqb_bufs)
-{
- struct otx2_npa_lf *npa_lf = otx2_intra_dev_get_cfg()->npa_lf;
- struct npa_aq_enq_req *aura_req;
-
- aura_req = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- aura_req->aura_id = npa_lf_aura_handle_to_aura(mp->pool_id);
- aura_req->ctype = NPA_AQ_CTYPE_AURA;
- aura_req->op = NPA_AQ_INSTOP_WRITE;
-
- aura_req->aura.limit = nb_sqb_bufs;
- aura_req->aura_mask.limit = ~(aura_req->aura_mask.limit);
-
- return otx2_mbox_process(npa_lf->mbox);
-}
-
-static int
-nix_alloc_sqb_pool(int port, struct otx2_eth_txq *txq, uint16_t nb_desc)
-{
- struct otx2_eth_dev *dev = txq->dev;
- uint16_t sqes_per_sqb, nb_sqb_bufs;
- char name[RTE_MEMPOOL_NAMESIZE];
- struct rte_mempool_objsz sz;
- struct npa_aura_s *aura;
- uint32_t tmp, blk_sz;
-
- aura = (struct npa_aura_s *)((uintptr_t)txq->fc_mem + OTX2_ALIGN);
- snprintf(name, sizeof(name), "otx2_sqb_pool_%d_%d", port, txq->sq);
- blk_sz = dev->sqb_size;
-
- if (nix_sq_max_sqe_sz(txq) == NIX_MAXSQESZ_W16)
- sqes_per_sqb = (dev->sqb_size / 8) / 16;
- else
- sqes_per_sqb = (dev->sqb_size / 8) / 8;
-
- nb_sqb_bufs = nb_desc / sqes_per_sqb;
- /* Clamp up to the SQB count passed via devargs */
- nb_sqb_bufs = RTE_MIN(dev->max_sqb_count, RTE_MAX(NIX_DEF_SQB,
- nb_sqb_bufs + NIX_SQB_LIST_SPACE));
-
- txq->sqb_pool = rte_mempool_create_empty(name, NIX_MAX_SQB, blk_sz,
- 0, 0, dev->node,
- RTE_MEMPOOL_F_NO_SPREAD);
- txq->nb_sqb_bufs = nb_sqb_bufs;
- txq->sqes_per_sqb_log2 = (uint16_t)rte_log2_u32(sqes_per_sqb);
- txq->nb_sqb_bufs_adj = nb_sqb_bufs -
- RTE_ALIGN_MUL_CEIL(nb_sqb_bufs, sqes_per_sqb) / sqes_per_sqb;
- txq->nb_sqb_bufs_adj =
- (NIX_SQB_LOWER_THRESH * txq->nb_sqb_bufs_adj) / 100;
-
- if (txq->sqb_pool == NULL) {
- otx2_err("Failed to allocate sqe mempool");
- goto fail;
- }
-
- memset(aura, 0, sizeof(*aura));
- aura->fc_ena = 1;
- aura->fc_addr = txq->fc_iova;
- aura->fc_hyst_bits = 0; /* Store count on all updates */
- if (rte_mempool_set_ops_byname(txq->sqb_pool, "octeontx2_npa", aura)) {
- otx2_err("Failed to set ops for sqe mempool");
- goto fail;
- }
- if (rte_mempool_populate_default(txq->sqb_pool) < 0) {
- otx2_err("Failed to populate sqe mempool");
- goto fail;
- }
-
- tmp = rte_mempool_calc_obj_size(blk_sz, RTE_MEMPOOL_F_NO_SPREAD, &sz);
- if (dev->sqb_size != sz.elt_size) {
- otx2_err("sqe pool block size is not expected %d != %d",
- dev->sqb_size, tmp);
- goto fail;
- }
-
- nix_sqb_aura_limit_cfg(txq->sqb_pool, txq->nb_sqb_bufs);
- if (dev->lock_tx_ctx)
- nix_sqb_lock(txq->sqb_pool);
-
- return 0;
-fail:
- return -ENOMEM;
-}
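
A worked example of the SQB accounting above, assuming a hypothetical
sqb_size of 4096 B, nb_desc = 1024 and W16 (128 B) SQEs:

    /* sqes_per_sqb = (4096 / 8) / 16 = 32
     * nb_sqb_bufs  = 1024 / 32 = 32
     * clamped      = RTE_MIN(max_sqb_count,
     *                        RTE_MAX(NIX_DEF_SQB, 32 + NIX_SQB_LIST_SPACE))
     *              = 34, provided the devargs max_sqb_count allows it
     */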
-
-void
-otx2_nix_form_default_desc(struct otx2_eth_txq *txq)
-{
- struct nix_send_ext_s *send_hdr_ext;
- struct nix_send_hdr_s *send_hdr;
- struct nix_send_mem_s *send_mem;
- union nix_send_sg_s *sg;
-
- /* Initialize the fields based on a basic single-segment packet */
- memset(&txq->cmd, 0, sizeof(txq->cmd));
-
- if (txq->dev->tx_offload_flags & NIX_TX_NEED_EXT_HDR) {
- send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
- /* 2(HDR) + 2(EXT_HDR) + 1(SG) + 1(IOVA) = 6/2 - 1 = 2 */
- send_hdr->w0.sizem1 = 2;
-
- send_hdr_ext = (struct nix_send_ext_s *)&txq->cmd[2];
- send_hdr_ext->w0.subdc = NIX_SUBDC_EXT;
- if (txq->dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F) {
- /* Default: one seg packet would have:
- * 2(HDR) + 2(EXT) + 1(SG) + 1(IOVA) + 2(MEM)
- * => 8/2 - 1 = 3
- */
- send_hdr->w0.sizem1 = 3;
- send_hdr_ext->w0.tstmp = 1;
-
- /* To calculate the offset for send_mem,
- * send_hdr->w0.sizem1 * 2
- */
- send_mem = (struct nix_send_mem_s *)(txq->cmd +
- (send_hdr->w0.sizem1 << 1));
- send_mem->subdc = NIX_SUBDC_MEM;
- send_mem->alg = NIX_SENDMEMALG_SETTSTMP;
- send_mem->addr = txq->dev->tstamp.tx_tstamp_iova;
- }
- sg = (union nix_send_sg_s *)&txq->cmd[4];
- } else {
- send_hdr = (struct nix_send_hdr_s *)&txq->cmd[0];
- /* 2(HDR) + 1(SG) + 1(IOVA) = 4/2 - 1 = 1 */
- send_hdr->w0.sizem1 = 1;
- sg = (union nix_send_sg_s *)&txq->cmd[2];
- }
-
- send_hdr->w0.sq = txq->sq;
- sg->subdc = NIX_SUBDC_SG;
- sg->segs = 1;
- sg->ld_type = NIX_SENDLDTYPE_LDD;
-
- rte_smp_wmb();
-}
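
In the descriptors built above, sizem1 encodes the command length in
units of two 8-byte dwords, minus one. A hypothetical helper restating
the arithmetic from the comments (not part of the driver):

    /* 4 dwords -> 4/2 - 1 = 1, 6 dwords -> 2, 8 dwords -> 3 */
    static inline uint8_t nix_cmd_sizem1(unsigned int ndwords)
    {
            return (ndwords >> 1) - 1;
    }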
-
-static void
-otx2_nix_tx_queue_release(struct rte_eth_dev *eth_dev, uint16_t qid)
-{
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[qid];
-
- if (!txq)
- return;
-
- otx2_nix_dbg("Releasing txq %u", txq->sq);
-
- /* Flush and disable tm */
- otx2_nix_sq_flush_pre(txq, eth_dev->data->dev_started);
-
- /* Free sqb's and disable sq */
- nix_sq_uninit(txq);
-
- if (txq->sqb_pool) {
- rte_mempool_free(txq->sqb_pool);
- txq->sqb_pool = NULL;
- }
- otx2_nix_sq_flush_post(txq);
- rte_free(txq);
- eth_dev->data->tx_queues[qid] = NULL;
-}
-
-static int
-otx2_nix_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t sq,
- uint16_t nb_desc, unsigned int socket_id,
- const struct rte_eth_txconf *tx_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct rte_memzone *fc;
- struct otx2_eth_txq *txq;
- uint64_t offloads;
- int rc;
-
- rc = -EINVAL;
-
- /* Compile time check to make sure all fast path elements fit in a cache line */
- RTE_BUILD_BUG_ON(offsetof(struct otx2_eth_txq, slow_path_start) >= 128);
-
- if (tx_conf->tx_deferred_start) {
- otx2_err("Tx deferred start is not supported");
- goto fail;
- }
-
- /* Free memory prior to re-allocation if needed. */
- if (eth_dev->data->tx_queues[sq] != NULL) {
- otx2_nix_dbg("Freeing memory prior to re-allocation %d", sq);
- otx2_nix_tx_queue_release(eth_dev, sq);
- }
-
- /* Find the expected offloads for this queue */
- offloads = tx_conf->offloads | eth_dev->data->dev_conf.txmode.offloads;
-
- /* Allocating tx queue data structure */
- txq = rte_zmalloc_socket("otx2_ethdev TX queue", sizeof(*txq),
- OTX2_ALIGN, socket_id);
- if (txq == NULL) {
- otx2_err("Failed to alloc txq=%d", sq);
- rc = -ENOMEM;
- goto fail;
- }
- txq->sq = sq;
- txq->dev = dev;
- txq->sqb_pool = NULL;
- txq->offloads = offloads;
- dev->tx_offloads |= offloads;
- eth_dev->data->tx_queues[sq] = txq;
-
- /*
- * Allocate memory for flow control updates from HW.
- * Allocate one cache line so that it fits all FC_STYPE modes.
- */
- fc = rte_eth_dma_zone_reserve(eth_dev, "fcmem", sq,
- OTX2_ALIGN + sizeof(struct npa_aura_s),
- OTX2_ALIGN, dev->node);
- if (fc == NULL) {
- otx2_err("Failed to allocate mem for fcmem");
- rc = -ENOMEM;
- goto free_txq;
- }
- txq->fc_iova = fc->iova;
- txq->fc_mem = fc->addr;
-
- /* Initialize the aura sqb pool */
- rc = nix_alloc_sqb_pool(eth_dev->data->port_id, txq, nb_desc);
- if (rc) {
- otx2_err("Failed to alloc sqe pool rc=%d", rc);
- goto free_txq;
- }
-
- /* Initialize the SQ */
- rc = nix_sq_init(txq);
- if (rc) {
- otx2_err("Failed to init sq=%d context", sq);
- goto free_txq;
- }
-
- txq->fc_cache_pkts = 0;
- txq->io_addr = dev->base + NIX_LF_OP_SENDX(0);
- /* Evenly distribute LMT slot for each sq */
- txq->lmt_addr = (void *)(dev->lmt_addr + ((sq & LMT_SLOT_MASK) << 12));
-
- txq->qconf.socket_id = socket_id;
- txq->qconf.nb_desc = nb_desc;
- memcpy(&txq->qconf.conf.tx, tx_conf, sizeof(struct rte_eth_txconf));
-
- txq->lso_tun_fmt = dev->lso_tun_fmt;
- otx2_nix_form_default_desc(txq);
-
- otx2_nix_dbg("sq=%d fc=%p offload=0x%" PRIx64 " sqb=0x%" PRIx64 ""
- " lmt_addr=%p nb_sqb_bufs=%d sqes_per_sqb_log2=%d", sq,
- fc->addr, offloads, txq->sqb_pool->pool_id, txq->lmt_addr,
- txq->nb_sqb_bufs, txq->sqes_per_sqb_log2);
- eth_dev->data->tx_queue_state[sq] = RTE_ETH_QUEUE_STATE_STOPPED;
- return 0;
-
-free_txq:
- otx2_nix_tx_queue_release(eth_dev, sq);
-fail:
- return rc;
-}
-
-static int
-nix_store_queue_cfg_and_then_release(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_eth_qconf *tx_qconf = NULL;
- struct otx2_eth_qconf *rx_qconf = NULL;
- struct otx2_eth_txq **txq;
- struct otx2_eth_rxq **rxq;
- int i, nb_rxq, nb_txq;
-
- nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
- nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
-
- tx_qconf = malloc(nb_txq * sizeof(*tx_qconf));
- if (tx_qconf == NULL) {
- otx2_err("Failed to allocate memory for tx_qconf");
- goto fail;
- }
-
- rx_qconf = malloc(nb_rxq * sizeof(*rx_qconf));
- if (rx_qconf == NULL) {
- otx2_err("Failed to allocate memory for rx_qconf");
- goto fail;
- }
-
- txq = (struct otx2_eth_txq **)eth_dev->data->tx_queues;
- for (i = 0; i < nb_txq; i++) {
- if (txq[i] == NULL) {
- tx_qconf[i].valid = false;
- otx2_info("txq[%d] is already released", i);
- continue;
- }
- memcpy(&tx_qconf[i], &txq[i]->qconf, sizeof(*tx_qconf));
- tx_qconf[i].valid = true;
- otx2_nix_tx_queue_release(eth_dev, i);
- }
-
- rxq = (struct otx2_eth_rxq **)eth_dev->data->rx_queues;
- for (i = 0; i < nb_rxq; i++) {
- if (rxq[i] == NULL) {
- rx_qconf[i].valid = false;
- otx2_info("rxq[%d] is already released", i);
- continue;
- }
- memcpy(&rx_qconf[i], &rxq[i]->qconf, sizeof(*rx_qconf));
- rx_qconf[i].valid = true;
- otx2_nix_rx_queue_release(eth_dev, i);
- }
-
- dev->tx_qconf = tx_qconf;
- dev->rx_qconf = rx_qconf;
- return 0;
-
-fail:
- free(tx_qconf);
- free(rx_qconf);
-
- return -ENOMEM;
-}
-
-static int
-nix_restore_queue_cfg(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_eth_qconf *tx_qconf = dev->tx_qconf;
- struct otx2_eth_qconf *rx_qconf = dev->rx_qconf;
- int rc, i, nb_rxq, nb_txq;
-
- nb_rxq = RTE_MIN(dev->configured_nb_rx_qs, eth_dev->data->nb_rx_queues);
- nb_txq = RTE_MIN(dev->configured_nb_tx_qs, eth_dev->data->nb_tx_queues);
-
- rc = -ENOMEM;
- /* Setup tx & rx queues with previous configuration so
- * that the queues can be functional in cases like ports
- * are started without re configuring queues.
- *
- * Usual re config sequence is like below:
- * port_configure() {
- * if(reconfigure) {
- * queue_release()
- * queue_setup()
- * }
- * queue_configure() {
- * queue_release()
- * queue_setup()
- * }
- * }
- * port_start()
- *
- * In some applications' control paths, queue_configure() would
- * NOT be invoked for TXQs/RXQs in port_configure().
- * In such cases, the queues can still be functional after start, as
- * they were already set up in port_configure().
- */
- for (i = 0; i < nb_txq; i++) {
- if (!tx_qconf[i].valid)
- continue;
- rc = otx2_nix_tx_queue_setup(eth_dev, i, tx_qconf[i].nb_desc,
- tx_qconf[i].socket_id,
- &tx_qconf[i].conf.tx);
- if (rc) {
- otx2_err("Failed to setup tx queue rc=%d", rc);
- for (i -= 1; i >= 0; i--)
- otx2_nix_tx_queue_release(eth_dev, i);
- goto fail;
- }
- }
-
- free(tx_qconf); tx_qconf = NULL;
-
- for (i = 0; i < nb_rxq; i++) {
- if (!rx_qconf[i].valid)
- continue;
- rc = otx2_nix_rx_queue_setup(eth_dev, i, rx_qconf[i].nb_desc,
- rx_qconf[i].socket_id,
- &rx_qconf[i].conf.rx,
- rx_qconf[i].mempool);
- if (rc) {
- otx2_err("Failed to setup rx queue rc=%d", rc);
- for (i -= 1; i >= 0; i--)
- otx2_nix_rx_queue_release(eth_dev, i);
- goto release_tx_queues;
- }
- }
-
- free(rx_qconf); rx_qconf = NULL;
-
- return 0;
-
-release_tx_queues:
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_release(eth_dev, i);
-fail:
- if (tx_qconf)
- free(tx_qconf);
- if (rx_qconf)
- free(rx_qconf);
-
- return rc;
-}
-
-static uint16_t
-nix_eth_nop_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- RTE_SET_USED(queue);
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- return 0;
-}
-
-static void
-nix_set_nop_rxtx_function(struct rte_eth_dev *eth_dev)
-{
- /* These dummy functions are required to support
- * applications which reconfigure queues without
- * stopping the tx and rx burst threads (e.g. the kni app).
- * When the queue context is saved, the txqs/rxqs are released,
- * which would crash the app since rx/tx burst may still be
- * running on different lcores.
- */
- eth_dev->tx_pkt_burst = nix_eth_nop_burst;
- eth_dev->rx_pkt_burst = nix_eth_nop_burst;
- rte_mb();
-}
-
-static void
-nix_lso_tcp(struct nix_lso_format_cfg *req, bool v4)
-{
- volatile struct nix_lso_format *field;
-
- /* This format works only with TCP packets marked by OL3/OL4 */
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
- /* TCP flags field */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
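
The hard-coded offsets above mirror the wire header layout: IPv4 total
length at byte 2 and IP ID at byte 4, IPv6 payload length at byte 4,
TCP sequence number at byte 4 and flags at bytes 12-13. A compile-time
sketch pinning them against DPDK's header structs (illustrative, not
part of the driver):

    #include <stddef.h>
    #include <rte_common.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>

    static inline void nix_lso_offset_check(void)
    {
            RTE_BUILD_BUG_ON(offsetof(struct rte_ipv4_hdr, total_length) != 2);
            RTE_BUILD_BUG_ON(offsetof(struct rte_ipv4_hdr, packet_id) != 4);
            RTE_BUILD_BUG_ON(offsetof(struct rte_ipv6_hdr, payload_len) != 4);
            RTE_BUILD_BUG_ON(offsetof(struct rte_tcp_hdr, sent_seq) != 4);
            RTE_BUILD_BUG_ON(offsetof(struct rte_tcp_hdr, data_off) != 12);
    }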
-
-static void
-nix_lso_udp_tun_tcp(struct nix_lso_format_cfg *req,
- bool outer_v4, bool inner_v4)
-{
- volatile struct nix_lso_format *field;
-
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 len */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = outer_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (outer_v4) {
- /* IPID */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* Outer UDP length */
- field->layer = NIX_TXLAYER_OL4;
- field->offset = 4;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
-
- /* Inner IPv4/IPv6 */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = inner_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (inner_v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
-
- /* TCP flags field */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static void
-nix_lso_tun_tcp(struct nix_lso_format_cfg *req,
- bool outer_v4, bool inner_v4)
-{
- volatile struct nix_lso_format *field;
-
- field = (volatile struct nix_lso_format *)&req->fields[0];
- req->field_mask = NIX_LSO_FIELD_MASK;
- /* Outer IPv4/IPv6 len */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = outer_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (outer_v4) {
- /* IPID */
- field->layer = NIX_TXLAYER_OL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* Inner IPv4/IPv6 */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = inner_v4 ? 2 : 4;
- field->sizem1 = 1; /* 2B */
- field->alg = NIX_LSOALG_ADD_PAYLEN;
- field++;
- if (inner_v4) {
- /* IPID field */
- field->layer = NIX_TXLAYER_IL3;
- field->offset = 4;
- field->sizem1 = 1;
- /* Incremented linearly per segment */
- field->alg = NIX_LSOALG_ADD_SEGNUM;
- field++;
- }
-
- /* TCP sequence number update */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 4;
- field->sizem1 = 3; /* 4 bytes */
- field->alg = NIX_LSOALG_ADD_OFFSET;
- field++;
-
- /* TCP flags field */
- field->layer = NIX_TXLAYER_IL4;
- field->offset = 12;
- field->sizem1 = 1;
- field->alg = NIX_LSOALG_TCP_FLAGS;
- field++;
-}
-
-static int
-nix_setup_lso_formats(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_lso_format_cfg_rsp *rsp;
- struct nix_lso_format_cfg *req;
- uint8_t *fmt;
- int rc;
-
- /* Skip if TSO was not requested */
- if (!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F))
- return 0;
- /*
- * IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tcp(req, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV4)
- return -EFAULT;
- otx2_nix_dbg("tcpv4 lso fmt=%u", rsp->lso_format_idx);
-
- /*
- * IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tcp(req, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (rsp->lso_format_idx != NIX_LSO_FORMAT_IDX_TSOV6)
- return -EFAULT;
- otx2_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/UDP/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, true, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/UDP/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, true, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/UDP/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, false, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/UDP/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_udp_tun_tcp(req, false, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_udp_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- otx2_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, true, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv4/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, true, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/TUN HDR/IPv4/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, false, true);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx);
-
- /*
- * IPv6/TUN HDR/IPv6/TCP LSO
- */
- req = otx2_mbox_alloc_msg_nix_lso_format_cfg(mbox);
- nix_lso_tun_tcp(req, false, false);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- dev->lso_tun_idx[NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- otx2_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx);
-
- /* Save all tunnel formats into a u64 for the fast path.
- * The lower 32 bits hold the non-udp tunnel formats.
- * The upper 32 bits hold the udp tunnel formats.
- */
- fmt = dev->lso_tun_idx;
- dev->lso_tun_fmt = ((uint64_t)fmt[NIX_LSO_TUN_V4V4] |
- (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 8 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 16 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 24);
-
- fmt = dev->lso_udp_tun_idx;
- dev->lso_tun_fmt |= ((uint64_t)fmt[NIX_LSO_TUN_V4V4] << 32 |
- (uint64_t)fmt[NIX_LSO_TUN_V4V6] << 40 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V4] << 48 |
- (uint64_t)fmt[NIX_LSO_TUN_V6V6] << 56);
-
- return 0;
-}
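
Given the packing above (one byte per tunnel type, UDP tunnel formats
in the upper 32 bits), the fast path can recover a format index with a
shift and mask. A sketch of such a lookup, assuming tun_type is one of
NIX_LSO_TUN_V4V4..V6V6 with values 0..3 (an assumed illustration, not
necessarily the driver's exact fast-path code):

    static inline uint8_t
    nix_lso_fmt_idx(uint64_t lso_tun_fmt, uint8_t tun_type, bool udp_tun)
    {
            unsigned int shift = tun_type * 8 + (udp_tun ? 32 : 0);

            return (lso_tun_fmt >> shift) & 0xff;
    }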
-
-static int
-otx2_nix_configure(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct rte_eth_conf *conf = &data->dev_conf;
- struct rte_eth_rxmode *rxmode = &conf->rxmode;
- struct rte_eth_txmode *txmode = &conf->txmode;
- char ea_fmt[RTE_ETHER_ADDR_FMT_SIZE];
- struct rte_ether_addr *ea;
- uint8_t nb_rxq, nb_txq;
- int rc;
-
- rc = -EINVAL;
-
- /* Sanity checks */
- if (rte_eal_has_hugepages() == 0) {
- otx2_err("Huge page is not configured");
- goto fail_configure;
- }
-
- if (conf->dcb_capability_en == 1) {
- otx2_err("dcb enable is not supported");
- goto fail_configure;
- }
-
- if (conf->fdir_conf.mode != RTE_FDIR_MODE_NONE) {
- otx2_err("Flow director is not supported");
- goto fail_configure;
- }
-
- if (rxmode->mq_mode != RTE_ETH_MQ_RX_NONE &&
- rxmode->mq_mode != RTE_ETH_MQ_RX_RSS) {
- otx2_err("Unsupported mq rx mode %d", rxmode->mq_mode);
- goto fail_configure;
- }
-
- if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
- otx2_err("Unsupported mq tx mode %d", txmode->mq_mode);
- goto fail_configure;
- }
-
- if (otx2_dev_is_Ax(dev) &&
- (txmode->offloads & RTE_ETH_TX_OFFLOAD_SCTP_CKSUM) &&
- ((txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM) ||
- (txmode->offloads & RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM))) {
- otx2_err("Outer IP and SCTP checksum unsupported");
- goto fail_configure;
- }
-
- /* Free the resources allocated from the previous configure */
- if (dev->configured == 1) {
- otx2_eth_sec_fini(eth_dev);
- otx2_nix_rxchan_bpid_cfg(eth_dev, false);
- otx2_nix_vlan_fini(eth_dev);
- otx2_nix_mc_addr_list_uninstall(eth_dev);
- otx2_flow_free_all_resources(dev);
- oxt2_nix_unregister_queue_irqs(eth_dev);
- if (eth_dev->data->dev_conf.intr_conf.rxq)
- oxt2_nix_unregister_cq_irqs(eth_dev);
- nix_set_nop_rxtx_function(eth_dev);
- rc = nix_store_queue_cfg_and_then_release(eth_dev);
- if (rc)
- goto fail_configure;
- otx2_nix_tm_fini(eth_dev);
- nix_lf_free(dev);
- }
-
- dev->rx_offloads = rxmode->offloads;
- dev->tx_offloads = txmode->offloads;
- dev->rx_offload_flags |= nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags |= nix_tx_offload_flags(eth_dev);
- dev->rss_info.rss_grps = NIX_RSS_GRPS;
-
- nb_rxq = RTE_MAX(data->nb_rx_queues, 1);
- nb_txq = RTE_MAX(data->nb_tx_queues, 1);
-
- /* Alloc a nix lf */
- rc = nix_lf_alloc(dev, nb_rxq, nb_txq);
- if (rc) {
- otx2_err("Failed to init nix_lf rc=%d", rc);
- goto fail_offloads;
- }
-
- otx2_nix_err_intr_enb_dis(eth_dev, true);
- otx2_nix_ras_intr_enb_dis(eth_dev, true);
-
- if (dev->ptp_en &&
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- otx2_err("Both PTP and switch header enabled");
- goto free_nix_lf;
- }
-
- rc = nix_lf_switch_header_type_enable(dev, true);
- if (rc) {
- otx2_err("Failed to enable switch type nix_lf rc=%d", rc);
- goto free_nix_lf;
- }
-
- rc = nix_setup_lso_formats(dev);
- if (rc) {
- otx2_err("failed to setup nix lso format fields, rc=%d", rc);
- goto free_nix_lf;
- }
-
- /* Configure RSS */
- rc = otx2_nix_rss_config(eth_dev);
- if (rc) {
- otx2_err("Failed to configure rss rc=%d", rc);
- goto free_nix_lf;
- }
-
- /* Init the default TM scheduler hierarchy */
- rc = otx2_nix_tm_init_default(eth_dev);
- if (rc) {
- otx2_err("Failed to init traffic manager rc=%d", rc);
- goto free_nix_lf;
- }
-
- rc = otx2_nix_vlan_offload_init(eth_dev);
- if (rc) {
- otx2_err("Failed to init vlan offload rc=%d", rc);
- goto tm_fini;
- }
-
- /* Register queue IRQs */
- rc = oxt2_nix_register_queue_irqs(eth_dev);
- if (rc) {
- otx2_err("Failed to register queue interrupts rc=%d", rc);
- goto vlan_fini;
- }
-
- /* Register cq IRQs */
- if (eth_dev->data->dev_conf.intr_conf.rxq) {
- if (eth_dev->data->nb_rx_queues > dev->cints) {
- otx2_err("Rx interrupt cannot be enabled, rxq > %d",
- dev->cints);
- goto q_irq_fini;
- }
- /* The Rx interrupt feature cannot work with vector mode because
- * vector mode does not process packets unless at least 4 pkts are
- * received, while cq interrupts are generated even for 1 pkt
- * in the CQ.
- */
- dev->scalar_ena = true;
-
- rc = oxt2_nix_register_cq_irqs(eth_dev);
- if (rc) {
- otx2_err("Failed to register CQ interrupts rc=%d", rc);
- goto q_irq_fini;
- }
- }
-
- /* Configure loop back mode */
- rc = cgx_intlbk_enable(dev, eth_dev->data->dev_conf.lpbk_mode);
- if (rc) {
- otx2_err("Failed to configure cgx loop back mode rc=%d", rc);
- goto cq_fini;
- }
-
- rc = otx2_nix_rxchan_bpid_cfg(eth_dev, true);
- if (rc) {
- otx2_err("Failed to configure nix rx chan bpid cfg rc=%d", rc);
- goto cq_fini;
- }
-
- /* Enable security */
- rc = otx2_eth_sec_init(eth_dev);
- if (rc)
- goto cq_fini;
-
- rc = otx2_nix_flow_ctrl_init(eth_dev);
- if (rc) {
- otx2_err("Failed to init flow ctrl mode %d", rc);
- goto cq_fini;
- }
-
- rc = otx2_nix_mc_addr_list_install(eth_dev);
- if (rc < 0) {
- otx2_err("Failed to install mc address list rc=%d", rc);
- goto sec_fini;
- }
-
- /*
- * Restore the queue config when a reconfigure is followed by another
- * reconfigure and the application does not invoke queue configure.
- */
- if (dev->configured == 1) {
- rc = nix_restore_queue_cfg(eth_dev);
- if (rc)
- goto uninstall_mc_list;
- }
-
- /* Update the mac address */
- ea = eth_dev->data->mac_addrs;
- memcpy(ea, dev->mac_addr, RTE_ETHER_ADDR_LEN);
- if (rte_is_zero_ether_addr(ea))
- rte_eth_random_addr((uint8_t *)ea);
-
- rte_ether_format_addr(ea_fmt, RTE_ETHER_ADDR_FMT_SIZE, ea);
-
- /* Apply new link configurations if changed */
- rc = otx2_apply_link_speed(eth_dev);
- if (rc) {
- otx2_err("Failed to set link configuration");
- goto uninstall_mc_list;
- }
-
- otx2_nix_dbg("Configured port%d mac=%s nb_rxq=%d nb_txq=%d"
- " rx_offloads=0x%" PRIx64 " tx_offloads=0x%" PRIx64 ""
- " rx_flags=0x%x tx_flags=0x%x",
- eth_dev->data->port_id, ea_fmt, nb_rxq,
- nb_txq, dev->rx_offloads, dev->tx_offloads,
- dev->rx_offload_flags, dev->tx_offload_flags);
-
- /* All good */
- dev->configured = 1;
- dev->configured_nb_rx_qs = data->nb_rx_queues;
- dev->configured_nb_tx_qs = data->nb_tx_queues;
- return 0;
-
-uninstall_mc_list:
- otx2_nix_mc_addr_list_uninstall(eth_dev);
-sec_fini:
- otx2_eth_sec_fini(eth_dev);
-cq_fini:
- oxt2_nix_unregister_cq_irqs(eth_dev);
-q_irq_fini:
- oxt2_nix_unregister_queue_irqs(eth_dev);
-vlan_fini:
- otx2_nix_vlan_fini(eth_dev);
-tm_fini:
- otx2_nix_tm_fini(eth_dev);
-free_nix_lf:
- nix_lf_free(dev);
-fail_offloads:
- dev->rx_offload_flags &= ~nix_rx_offload_flags(eth_dev);
- dev->tx_offload_flags &= ~nix_tx_offload_flags(eth_dev);
-fail_configure:
- dev->configured = 0;
- return rc;
-}
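
For reference, the generic ethdev call sequence that drives this path
from an application (illustrative values; "mp" is a previously created
pktmbuf pool and error handling is elided):

    struct rte_eth_conf conf = { 0 };
    uint16_t port_id = 0, q;

    rte_eth_dev_configure(port_id, 2, 2, &conf); /* -> otx2_nix_configure */
    for (q = 0; q < 2; q++) {
            rte_eth_rx_queue_setup(port_id, q, 1024, rte_socket_id(),
                                   NULL, mp);
            rte_eth_tx_queue_setup(port_id, q, 1024, rte_socket_id(), NULL);
    }
    rte_eth_dev_start(port_id);                  /* -> otx2_nix_dev_start */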
-
-int
-otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_txq *txq;
- int rc = -EINVAL;
-
- txq = eth_dev->data->tx_queues[qidx];
-
- if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
- return 0;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, txq=%u, rc=%d",
- qidx, rc);
- goto done;
- }
-
- data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
-
-done:
- return rc;
-}
-
-int
-otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_txq *txq;
- int rc;
-
- txq = eth_dev->data->tx_queues[qidx];
-
- if (data->tx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
- return 0;
-
- txq->fc_cache_pkts = 0;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, txq=%u, rc=%d",
- qidx, rc);
- goto done;
- }
-
- data->tx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
- struct rte_eth_dev_data *data = eth_dev->data;
- int rc;
-
- if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STARTED)
- return 0;
-
- rc = nix_rq_enb_dis(rxq->eth_dev, rxq, true);
- if (rc) {
- otx2_err("Failed to enable rxq=%u, rc=%d", qidx, rc);
- goto done;
- }
-
- data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STARTED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_rx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx)
-{
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[qidx];
- struct rte_eth_dev_data *data = eth_dev->data;
- int rc;
-
- if (data->rx_queue_state[qidx] == RTE_ETH_QUEUE_STATE_STOPPED)
- return 0;
-
- rc = nix_rq_enb_dis(rxq->eth_dev, rxq, false);
- if (rc) {
- otx2_err("Failed to disable rxq=%u, rc=%d", qidx, rc);
- goto done;
- }
-
- data->rx_queue_state[qidx] = RTE_ETH_QUEUE_STATE_STOPPED;
-
-done:
- return rc;
-}
-
-static int
-otx2_nix_dev_stop(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_mbuf *rx_pkts[32];
- struct otx2_eth_rxq *rxq;
- struct rte_eth_link link;
- int count, i, j, rc;
-
- nix_lf_switch_header_type_enable(dev, false);
- nix_cgx_stop_link_event(dev);
- npc_rx_disable(dev);
-
- /* Stop rx queues and free up pkts pending */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = otx2_nix_rx_queue_stop(eth_dev, i);
- if (rc)
- continue;
-
- rxq = eth_dev->data->rx_queues[i];
- count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
- while (count) {
- for (j = 0; j < count; j++)
- rte_pktmbuf_free(rx_pkts[j]);
- count = dev->rx_pkt_burst_no_offload(rxq, rx_pkts, 32);
- }
- }
-
- /* Stop tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_stop(eth_dev, i);
-
- /* Bring down link status internally */
- memset(&link, 0, sizeof(link));
- rte_eth_linkstatus_set(eth_dev, &link);
-
- return 0;
-}
-
-static int
-otx2_nix_dev_start(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, i;
-
- /* MTU recalculation should be avoided here if PTP is enabled by the PF, as
- * otx2_nix_recalc_mtu would be invoked during otx2_nix_ptp_enable_vf
- * call below.
- */
- if (eth_dev->data->nb_rx_queues != 0 && !otx2_ethdev_is_ptp_en(dev)) {
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- return rc;
- }
-
- /* Start rx queues */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rc = otx2_nix_rx_queue_start(eth_dev, i);
- if (rc)
- return rc;
- }
-
- /* Start tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = otx2_nix_tx_queue_start(eth_dev, i);
- if (rc)
- return rc;
- }
-
- rc = otx2_nix_update_flow_ctrl_mode(eth_dev);
- if (rc) {
- otx2_err("Failed to update flow ctrl mode %d", rc);
- return rc;
- }
-
- /* Enable PTP if it was requested by the app or if it is already
- * enabled in PF owning this VF
- */
- memset(&dev->tstamp, 0, sizeof(struct otx2_timesync_info));
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) ||
- otx2_ethdev_is_ptp_en(dev))
- otx2_nix_timesync_enable(eth_dev);
- else
- otx2_nix_timesync_disable(eth_dev);
-
- /* Update the VF about the data offset being shifted by 8 bytes if PTP
- * is already enabled in the PF owning this VF
- */
- if (otx2_ethdev_is_ptp_en(dev) && otx2_dev_is_vf(dev))
- otx2_nix_ptp_enable_vf(eth_dev);
-
- if (dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F) {
- rc = rte_mbuf_dyn_rx_timestamp_register(
- &dev->tstamp.tstamp_dynfield_offset,
- &dev->tstamp.rx_tstamp_dynflag);
- if (rc != 0) {
- otx2_err("Failed to register Rx timestamp field/flag");
- return -rte_errno;
- }
- }
-
- rc = npc_rx_enable(dev);
- if (rc) {
- otx2_err("Failed to enable NPC rx %d", rc);
- return rc;
- }
-
- otx2_nix_toggle_flag_link_cfg(dev, true);
-
- rc = nix_cgx_start_link_event(dev);
- if (rc) {
- otx2_err("Failed to start cgx link event %d", rc);
- goto rx_disable;
- }
-
- otx2_nix_toggle_flag_link_cfg(dev, false);
- otx2_eth_set_tx_function(eth_dev);
- otx2_eth_set_rx_function(eth_dev);
-
- return 0;
-
-rx_disable:
- npc_rx_disable(dev);
- otx2_nix_toggle_flag_link_cfg(dev, false);
- return rc;
-}
-
-static int otx2_nix_dev_reset(struct rte_eth_dev *eth_dev);
-static int otx2_nix_dev_close(struct rte_eth_dev *eth_dev);
-
- /* Initialize and register the driver with the DPDK application */
-static const struct eth_dev_ops otx2_eth_dev_ops = {
- .dev_infos_get = otx2_nix_info_get,
- .dev_configure = otx2_nix_configure,
- .link_update = otx2_nix_link_update,
- .tx_queue_setup = otx2_nix_tx_queue_setup,
- .tx_queue_release = otx2_nix_tx_queue_release,
- .tm_ops_get = otx2_nix_tm_ops_get,
- .rx_queue_setup = otx2_nix_rx_queue_setup,
- .rx_queue_release = otx2_nix_rx_queue_release,
- .dev_start = otx2_nix_dev_start,
- .dev_stop = otx2_nix_dev_stop,
- .dev_close = otx2_nix_dev_close,
- .tx_queue_start = otx2_nix_tx_queue_start,
- .tx_queue_stop = otx2_nix_tx_queue_stop,
- .rx_queue_start = otx2_nix_rx_queue_start,
- .rx_queue_stop = otx2_nix_rx_queue_stop,
- .dev_set_link_up = otx2_nix_dev_set_link_up,
- .dev_set_link_down = otx2_nix_dev_set_link_down,
- .dev_supported_ptypes_get = otx2_nix_supported_ptypes_get,
- .dev_ptypes_set = otx2_nix_ptypes_set,
- .dev_reset = otx2_nix_dev_reset,
- .stats_get = otx2_nix_dev_stats_get,
- .stats_reset = otx2_nix_dev_stats_reset,
- .get_reg = otx2_nix_dev_get_reg,
- .mtu_set = otx2_nix_mtu_set,
- .mac_addr_add = otx2_nix_mac_addr_add,
- .mac_addr_remove = otx2_nix_mac_addr_del,
- .mac_addr_set = otx2_nix_mac_addr_set,
- .set_mc_addr_list = otx2_nix_set_mc_addr_list,
- .promiscuous_enable = otx2_nix_promisc_enable,
- .promiscuous_disable = otx2_nix_promisc_disable,
- .allmulticast_enable = otx2_nix_allmulticast_enable,
- .allmulticast_disable = otx2_nix_allmulticast_disable,
- .queue_stats_mapping_set = otx2_nix_queue_stats_mapping,
- .reta_update = otx2_nix_dev_reta_update,
- .reta_query = otx2_nix_dev_reta_query,
- .rss_hash_update = otx2_nix_rss_hash_update,
- .rss_hash_conf_get = otx2_nix_rss_hash_conf_get,
- .xstats_get = otx2_nix_xstats_get,
- .xstats_get_names = otx2_nix_xstats_get_names,
- .xstats_reset = otx2_nix_xstats_reset,
- .xstats_get_by_id = otx2_nix_xstats_get_by_id,
- .xstats_get_names_by_id = otx2_nix_xstats_get_names_by_id,
- .rxq_info_get = otx2_nix_rxq_info_get,
- .txq_info_get = otx2_nix_txq_info_get,
- .rx_burst_mode_get = otx2_rx_burst_mode_get,
- .tx_burst_mode_get = otx2_tx_burst_mode_get,
- .tx_done_cleanup = otx2_nix_tx_done_cleanup,
- .set_queue_rate_limit = otx2_nix_tm_set_queue_rate_limit,
- .pool_ops_supported = otx2_nix_pool_ops_supported,
- .flow_ops_get = otx2_nix_dev_flow_ops_get,
- .get_module_info = otx2_nix_get_module_info,
- .get_module_eeprom = otx2_nix_get_module_eeprom,
- .fw_version_get = otx2_nix_fw_version_get,
- .flow_ctrl_get = otx2_nix_flow_ctrl_get,
- .flow_ctrl_set = otx2_nix_flow_ctrl_set,
- .timesync_enable = otx2_nix_timesync_enable,
- .timesync_disable = otx2_nix_timesync_disable,
- .timesync_read_rx_timestamp = otx2_nix_timesync_read_rx_timestamp,
- .timesync_read_tx_timestamp = otx2_nix_timesync_read_tx_timestamp,
- .timesync_adjust_time = otx2_nix_timesync_adjust_time,
- .timesync_read_time = otx2_nix_timesync_read_time,
- .timesync_write_time = otx2_nix_timesync_write_time,
- .vlan_offload_set = otx2_nix_vlan_offload_set,
- .vlan_filter_set = otx2_nix_vlan_filter_set,
- .vlan_strip_queue_set = otx2_nix_vlan_strip_queue_set,
- .vlan_tpid_set = otx2_nix_vlan_tpid_set,
- .vlan_pvid_set = otx2_nix_vlan_pvid_set,
- .rx_queue_intr_enable = otx2_nix_rx_queue_intr_enable,
- .rx_queue_intr_disable = otx2_nix_rx_queue_intr_disable,
- .read_clock = otx2_nix_read_clock,
-};
-
-static inline int
-nix_lf_attach(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct rsrc_attach_req *req;
-
- /* Attach NIX(lf) */
- req = otx2_mbox_alloc_msg_attach_resources(mbox);
- req->modify = true;
- req->nixlf = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static inline int
-nix_lf_get_msix_offset(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct msix_offset_rsp *msix_rsp;
- int rc;
-
- /* Get NPA and NIX MSIX vector offsets */
- otx2_mbox_alloc_msg_msix_offset(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&msix_rsp);
-
- dev->nix_msixoff = msix_rsp->nix_msixoff;
-
- return rc;
-}
-
-static inline int
-otx2_eth_dev_lf_detach(struct otx2_mbox *mbox)
-{
- struct rsrc_detach_req *req;
-
- req = otx2_mbox_alloc_msg_detach_resources(mbox);
-
- /* Detach all except npa lf */
- req->partial = true;
- req->nixlf = true;
- req->sso = true;
- req->ssow = true;
- req->timlfs = true;
- req->cptlfs = true;
-
- return otx2_mbox_process(mbox);
-}
-
-static bool
-otx2_eth_dev_is_sdp(struct rte_pci_device *pci_dev)
-{
- if (pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_SDP_PF ||
- pci_dev->id.device_id == PCI_DEVID_OCTEONTX2_RVU_SDP_VF)
- return true;
- return false;
-}
-
-static inline uint64_t
-nix_get_blkaddr(struct otx2_eth_dev *dev)
-{
- uint64_t reg;
-
- /* Read the discovery register to find out which NIX block the LF
- * is attached to.
- */
- reg = otx2_read64(dev->bar2 +
- RVU_PF_BLOCK_ADDRX_DISC(RVU_BLOCK_ADDR_NIX0));
-
- return reg & 0x1FFULL ? RVU_BLOCK_ADDR_NIX0 : RVU_BLOCK_ADDR_NIX1;
-}
-
-static int
-otx2_eth_dev_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_pci_device *pci_dev;
- int rc, max_entries;
-
- eth_dev->dev_ops = &otx2_eth_dev_ops;
- eth_dev->rx_queue_count = otx2_nix_rx_queue_count;
- eth_dev->rx_descriptor_status = otx2_nix_rx_descriptor_status;
- eth_dev->tx_descriptor_status = otx2_nix_tx_descriptor_status;
-
- /* For secondary processes, the primary has done all the work */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
- /* Setup callbacks for secondary process */
- otx2_eth_set_tx_function(eth_dev);
- otx2_eth_set_rx_function(eth_dev);
- return 0;
- }
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
-
- rte_eth_copy_pci_info(eth_dev, pci_dev);
- eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
-
- /* Zero out everything after OTX2_DEV to allow proper dev_reset() */
- memset(&dev->otx2_eth_dev_data_start, 0, sizeof(*dev) -
- offsetof(struct otx2_eth_dev, otx2_eth_dev_data_start));
-
- /* Parse devargs string */
- rc = otx2_ethdev_parse_devargs(eth_dev->device->devargs, dev);
- if (rc) {
- otx2_err("Failed to parse devargs rc=%d", rc);
- goto error;
- }
-
- if (!dev->mbox_active) {
- /* Initialize the base otx2_dev object
- * only if it is not already initialized
- */
- rc = otx2_dev_init(pci_dev, dev);
- if (rc) {
- otx2_err("Failed to initialize otx2_dev rc=%d", rc);
- goto error;
- }
- }
- if (otx2_eth_dev_is_sdp(pci_dev))
- dev->sdp_link = true;
- else
- dev->sdp_link = false;
- /* Device generic callbacks */
- dev->ops = &otx2_dev_ops;
- dev->eth_dev = eth_dev;
-
- /* Grab the NPA LF if required */
- rc = otx2_npa_lf_init(pci_dev, dev);
- if (rc)
- goto otx2_dev_uninit;
-
- dev->configured = 0;
- dev->drv_inited = true;
- dev->ptype_disable = 0;
- dev->lmt_addr = dev->bar2 + (RVU_BLOCK_ADDR_LMT << 20);
-
- /* Attach NIX LF */
- rc = nix_lf_attach(dev);
- if (rc)
- goto otx2_npa_uninit;
-
- dev->base = dev->bar2 + (nix_get_blkaddr(dev) << 20);
-
- /* Get NIX MSIX offset */
- rc = nix_lf_get_msix_offset(dev);
- if (rc)
- goto otx2_npa_uninit;
-
- /* Register LF irq handlers */
- rc = otx2_nix_register_irqs(eth_dev);
- if (rc)
- goto mbox_detach;
-
- /* Get maximum number of supported MAC entries */
- max_entries = otx2_cgx_mac_max_entries_get(dev);
- if (max_entries < 0) {
- otx2_err("Failed to get max entries for mac addr");
- rc = -ENOTSUP;
- goto unregister_irq;
- }
-
- /* For VFs, the returned max_entries will be 0. But one entry must
- * still be allocated to hold the default MAC address, so set it to 1.
- */
- if (max_entries == 0)
- max_entries = 1;
-
- eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", max_entries *
- RTE_ETHER_ADDR_LEN, 0);
- if (eth_dev->data->mac_addrs == NULL) {
- otx2_err("Failed to allocate memory for mac addr");
- rc = -ENOMEM;
- goto unregister_irq;
- }
-
- dev->max_mac_entries = max_entries;
-
- rc = otx2_nix_mac_addr_get(eth_dev, dev->mac_addr);
- if (rc)
- goto free_mac_addrs;
-
- /* Update the mac address */
- memcpy(eth_dev->data->mac_addrs, dev->mac_addr, RTE_ETHER_ADDR_LEN);
-
- /* Also sync same MAC address to CGX table */
- otx2_cgx_mac_addr_set(eth_dev, ð_dev->data->mac_addrs[0]);
-
- /* Initialize the tm data structures */
- otx2_nix_tm_conf_init(eth_dev);
-
- dev->tx_offload_capa = nix_get_tx_offload_capa(dev);
- dev->rx_offload_capa = nix_get_rx_offload_capa(dev);
-
- if (otx2_dev_is_96xx_A0(dev) ||
- otx2_dev_is_95xx_Ax(dev)) {
- dev->hwcap |= OTX2_FIXUP_F_MIN_4K_Q;
- dev->hwcap |= OTX2_FIXUP_F_LIMIT_CQ_FULL;
- }
-
- /* Create security ctx */
- rc = otx2_eth_sec_ctx_create(eth_dev);
- if (rc)
- goto free_mac_addrs;
- dev->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_SECURITY;
- dev->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_SECURITY;
-
- /* Initialize rte-flow */
- rc = otx2_flow_init(dev);
- if (rc)
- goto sec_ctx_destroy;
-
- otx2_nix_mc_filter_init(dev);
-
- otx2_nix_dbg("Port=%d pf=%d vf=%d ver=%s msix_off=%d hwcap=0x%" PRIx64
- " rxoffload_capa=0x%" PRIx64 " txoffload_capa=0x%" PRIx64,
- eth_dev->data->port_id, dev->pf, dev->vf,
- OTX2_ETH_DEV_PMD_VERSION, dev->nix_msixoff, dev->hwcap,
- dev->rx_offload_capa, dev->tx_offload_capa);
- return 0;
-
-sec_ctx_destroy:
- otx2_eth_sec_ctx_destroy(eth_dev);
-free_mac_addrs:
- rte_free(eth_dev->data->mac_addrs);
-unregister_irq:
- otx2_nix_unregister_irqs(eth_dev);
-mbox_detach:
- otx2_eth_dev_lf_detach(dev->mbox);
-otx2_npa_uninit:
- otx2_npa_lf_fini();
-otx2_dev_uninit:
- otx2_dev_fini(pci_dev, dev);
-error:
- otx2_err("Failed to init nix eth_dev rc=%d", rc);
- return rc;
-}
-
-static int
-otx2_eth_dev_uninit(struct rte_eth_dev *eth_dev, bool mbox_close)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_pci_device *pci_dev;
- int rc, i;
-
- /* Nothing to be done for secondary processes */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* Clear the flag since we are closing down */
- dev->configured = 0;
-
- /* Disable nix bpid config */
- otx2_nix_rxchan_bpid_cfg(eth_dev, false);
-
- npc_rx_disable(dev);
-
- /* Disable vlan offloads */
- otx2_nix_vlan_fini(eth_dev);
-
- /* Disable other rte_flow entries */
- otx2_flow_fini(dev);
-
- /* Free multicast filter list */
- otx2_nix_mc_filter_fini(dev);
-
- /* Disable PTP if already enabled */
- if (otx2_ethdev_is_ptp_en(dev))
- otx2_nix_timesync_disable(eth_dev);
-
- nix_cgx_stop_link_event(dev);
-
- /* Unregister the dev ops; this is required to stop VFs from
- * receiving link status updates on the exit path.
- */
- dev->ops = NULL;
-
- /* Free up SQs */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_release(eth_dev, i);
- eth_dev->data->nb_tx_queues = 0;
-
- /* Free up RQs and CQs */
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++)
- otx2_nix_rx_queue_release(eth_dev, i);
- eth_dev->data->nb_rx_queues = 0;
-
- /* Free tm resources */
- rc = otx2_nix_tm_fini(eth_dev);
- if (rc)
- otx2_err("Failed to cleanup tm, rc=%d", rc);
-
- /* Unregister queue irqs */
- oxt2_nix_unregister_queue_irqs(eth_dev);
-
- /* Unregister cq irqs */
- if (eth_dev->data->dev_conf.intr_conf.rxq)
- oxt2_nix_unregister_cq_irqs(eth_dev);
-
- rc = nix_lf_free(dev);
- if (rc)
- otx2_err("Failed to free nix lf, rc=%d", rc);
-
- rc = otx2_npa_lf_fini();
- if (rc)
- otx2_err("Failed to cleanup npa lf, rc=%d", rc);
-
- /* Disable security */
- otx2_eth_sec_fini(eth_dev);
-
- /* Destroy security ctx */
- otx2_eth_sec_ctx_destroy(eth_dev);
-
- rte_free(eth_dev->data->mac_addrs);
- eth_dev->data->mac_addrs = NULL;
- dev->drv_inited = false;
-
- pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- otx2_nix_unregister_irqs(eth_dev);
-
- rc = otx2_eth_dev_lf_detach(dev->mbox);
- if (rc)
- otx2_err("Failed to detach resources, rc=%d", rc);
-
- /* Check if mbox close is needed */
- if (!mbox_close)
- return 0;
-
- if (otx2_npa_lf_active(dev) || otx2_dev_active_vfs(dev)) {
- /* Will be freed later by PMD */
- eth_dev->data->dev_private = NULL;
- return 0;
- }
-
- otx2_dev_fini(pci_dev, dev);
- return 0;
-}
-
-static int
-otx2_nix_dev_close(struct rte_eth_dev *eth_dev)
-{
- otx2_eth_dev_uninit(eth_dev, true);
- return 0;
-}
-
-static int
-otx2_nix_dev_reset(struct rte_eth_dev *eth_dev)
-{
- int rc;
-
- rc = otx2_eth_dev_uninit(eth_dev, false);
- if (rc)
- return rc;
-
- return otx2_eth_dev_init(eth_dev);
-}
-
-static int
-nix_remove(struct rte_pci_device *pci_dev)
-{
- struct rte_eth_dev *eth_dev;
- struct otx2_idev_cfg *idev;
- struct otx2_dev *otx2_dev;
- int rc;
-
- eth_dev = rte_eth_dev_allocated(pci_dev->device.name);
- if (eth_dev) {
- /* Cleanup eth dev */
- rc = otx2_eth_dev_uninit(eth_dev, true);
- if (rc)
- return rc;
-
- rte_eth_dev_release_port(eth_dev);
- }
-
- /* Nothing to be done for secondary processes */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY)
- return 0;
-
- /* Check for common resources */
- idev = otx2_intra_dev_get_cfg();
- if (!idev || !idev->npa_lf || idev->npa_lf->pci_dev != pci_dev)
- return 0;
-
- otx2_dev = container_of(idev->npa_lf, struct otx2_dev, npalf);
-
- if (otx2_npa_lf_active(otx2_dev) || otx2_dev_active_vfs(otx2_dev))
- goto exit;
-
- /* Safe to cleanup mbox as no more users */
- otx2_dev_fini(pci_dev, otx2_dev);
- rte_free(otx2_dev);
- return 0;
-
-exit:
- otx2_info("%s: common resource in use by other devices", pci_dev->name);
- return -EAGAIN;
-}
-
-static int
-nix_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
-{
- int rc;
-
- RTE_SET_USED(pci_drv);
-
- rc = rte_eth_dev_pci_generic_probe(pci_dev, sizeof(struct otx2_eth_dev),
- otx2_eth_dev_init);
-
- /* On error on a secondary process, recheck whether the port exists
- * in the primary process or is in the middle of being detached.
- */
- if (rte_eal_process_type() != RTE_PROC_PRIMARY && rc)
- if (!rte_eth_dev_allocated(pci_dev->device.name))
- return 0;
- return rc;
-}
-
-static const struct rte_pci_id pci_nix_map[] = {
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_RVU_VF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_AF_VF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SDP_PF)
- },
- {
- RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM,
- PCI_DEVID_OCTEONTX2_RVU_SDP_VF)
- },
- {
- .vendor_id = 0,
- },
-};
-
-static struct rte_pci_driver pci_nix = {
- .id_table = pci_nix_map,
- .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_NEED_IOVA_AS_VA |
- RTE_PCI_DRV_INTR_LSC,
- .probe = nix_probe,
- .remove = nix_remove,
-};
-
-RTE_PMD_REGISTER_PCI(OCTEONTX2_PMD, pci_nix);
-RTE_PMD_REGISTER_PCI_TABLE(OCTEONTX2_PMD, pci_nix_map);
-RTE_PMD_REGISTER_KMOD_DEP(OCTEONTX2_PMD, "vfio-pci");
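
The tail of the file removed above is the standard DPDK PCI PMD registration pattern: a probe/remove pair, a PCI ID table, and the RTE_PMD_REGISTER_* macros. For readers skimming the removal, here is a minimal sketch of that pattern; all "mydev" names and the device ID are hypothetical and not part of this patch:

    /* Minimal DPDK PCI PMD registration sketch (hypothetical "mydev" names). */
    #include <rte_common.h>
    #include <ethdev_pci.h>   /* driver-side probe/remove helpers */

    struct mydev_priv {
        uint64_t flags;       /* per-port private data */
    };

    static int
    mydev_eth_dev_init(struct rte_eth_dev *eth_dev)
    {
        RTE_SET_USED(eth_dev);
        /* Map BARs, fill eth_dev->dev_ops, set the MAC address, etc. */
        return 0;
    }

    static int
    mydev_probe(struct rte_pci_driver *pci_drv, struct rte_pci_device *pci_dev)
    {
        RTE_SET_USED(pci_drv);
        /* Allocates the port plus its private area, then calls the init cb */
        return rte_eth_dev_pci_generic_probe(pci_dev,
                                             sizeof(struct mydev_priv),
                                             mydev_eth_dev_init);
    }

    static int
    mydev_remove(struct rte_pci_device *pci_dev)
    {
        /* Releases the port(s) created by probe; NULL means no uninit cb */
        return rte_eth_dev_pci_generic_remove(pci_dev, NULL);
    }

    static const struct rte_pci_id mydev_pci_map[] = {
        { RTE_PCI_DEVICE(0x177d, 0xffff /* hypothetical device ID */) },
        { .vendor_id = 0 },   /* zeroed sentinel terminates the table */
    };

    static struct rte_pci_driver mydev_pmd = {
        .id_table = mydev_pci_map,
        .drv_flags = RTE_PCI_DRV_NEED_MAPPING,
        .probe = mydev_probe,
        .remove = mydev_remove,
    };

    RTE_PMD_REGISTER_PCI(net_mydev, mydev_pmd);
    RTE_PMD_REGISTER_PCI_TABLE(net_mydev, mydev_pci_map);
    RTE_PMD_REGISTER_KMOD_DEP(net_mydev, "vfio-pci");

The removed nix_probe()/nix_remove() pair above follows this same skeleton, with extra secondary-process and shared-resource handling layered on top.
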
diff --git a/drivers/net/octeontx2/otx2_ethdev.h b/drivers/net/octeontx2/otx2_ethdev.h
deleted file mode 100644
index a5282c6c12..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev.h
+++ /dev/null
@@ -1,619 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_H__
-#define __OTX2_ETHDEV_H__
-
-#include <math.h>
-#include <stdint.h>
-
-#include <rte_common.h>
-#include <rte_ethdev.h>
-#include <rte_kvargs.h>
-#include <rte_mbuf.h>
-#include <rte_mempool.h>
-#include <rte_security_driver.h>
-#include <rte_spinlock.h>
-#include <rte_string_fns.h>
-#include <rte_time.h>
-
-#include "otx2_common.h"
-#include "otx2_dev.h"
-#include "otx2_flow.h"
-#include "otx2_irq.h"
-#include "otx2_mempool.h"
-#include "otx2_rx.h"
-#include "otx2_tm.h"
-#include "otx2_tx.h"
-
-#define OTX2_ETH_DEV_PMD_VERSION "1.0"
-
-/* Ethdev HWCAP and Fixup flags. Use from MSB bits to avoid conflict with dev */
-
-/* Minimum CQ size should be 4K */
-#define OTX2_FIXUP_F_MIN_4K_Q BIT_ULL(63)
-#define otx2_ethdev_fixup_is_min_4k_q(dev) \
- ((dev)->hwcap & OTX2_FIXUP_F_MIN_4K_Q)
-/* Limit CQ being full */
-#define OTX2_FIXUP_F_LIMIT_CQ_FULL BIT_ULL(62)
-#define otx2_ethdev_fixup_is_limit_cq_full(dev) \
- ((dev)->hwcap & OTX2_FIXUP_F_LIMIT_CQ_FULL)
-
-/* Used for struct otx2_eth_dev::flags */
-#define OTX2_LINK_CFG_IN_PROGRESS_F BIT_ULL(0)
-
-/* VLAN tag inserted by NIX_TX_VTAG_ACTION.
- * In Tx space is always reserved for this in FRS.
- */
-#define NIX_MAX_VTAG_INS 2
-#define NIX_MAX_VTAG_ACT_SIZE (4 * NIX_MAX_VTAG_INS)
-
-/* ETH_HLEN+ETH_FCS+2*VLAN_HLEN */
-#define NIX_L2_OVERHEAD \
- (RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN + 8)
-#define NIX_L2_MAX_LEN \
- (RTE_ETHER_MTU + NIX_L2_OVERHEAD)
-
-/* HW config of frame size doesn't include FCS */
-#define NIX_MAX_HW_FRS 9212
-#define NIX_MIN_HW_FRS 60
-
-/* Since HW FRS includes NPC VTAG insertion space, user has reduced FRS */
-#define NIX_MAX_FRS \
- (NIX_MAX_HW_FRS + RTE_ETHER_CRC_LEN - NIX_MAX_VTAG_ACT_SIZE)
-
-#define NIX_MIN_FRS \
- (NIX_MIN_HW_FRS + RTE_ETHER_CRC_LEN)
-
-#define NIX_MAX_MTU \
- (NIX_MAX_FRS - NIX_L2_OVERHEAD)
-
-#define NIX_MAX_SQB 512
-#define NIX_DEF_SQB 16
-#define NIX_MIN_SQB 8
-#define NIX_SQB_LIST_SPACE 2
-#define NIX_RSS_RETA_SIZE_MAX 256
-/* Group 0 will be used for RSS, 1 -7 will be used for rte_flow RSS action*/
-#define NIX_RSS_GRPS 8
-#define NIX_HASH_KEY_SIZE 48 /* 352 Bits */
-#define NIX_RSS_RETA_SIZE 64
-#define NIX_RX_MIN_DESC 16
-#define NIX_RX_MIN_DESC_ALIGN 16
-#define NIX_RX_NB_SEG_MAX 6
-#define NIX_CQ_ENTRY_SZ 128
-#define NIX_CQ_ALIGN 512
-#define NIX_SQB_LOWER_THRESH 70
-#define LMT_SLOT_MASK 0x7f
-#define NIX_RX_DEFAULT_RING_SZ 4096
-
-/* If PTP is enabled additional SEND MEM DESC is required which
- * takes 2 words, hence max 7 iova address are possible
- */
-#if defined(RTE_LIBRTE_IEEE1588)
-#define NIX_TX_NB_SEG_MAX 7
-#else
-#define NIX_TX_NB_SEG_MAX 9
-#endif
-
-#define NIX_TX_MSEG_SG_DWORDS \
- ((RTE_ALIGN_MUL_CEIL(NIX_TX_NB_SEG_MAX, 3) / 3) \
- + NIX_TX_NB_SEG_MAX)
-
-/* Apply BP/DROP when CQ is 95% full */
-#define NIX_CQ_THRESH_LEVEL (5 * 256 / 100)
-#define NIX_CQ_FULL_ERRATA_SKID (1024ull * 256)
-
-#define CQ_OP_STAT_OP_ERR 63
-#define CQ_OP_STAT_CQ_ERR 46
-
-#define OP_ERR BIT_ULL(CQ_OP_STAT_OP_ERR)
-#define CQ_ERR BIT_ULL(CQ_OP_STAT_CQ_ERR)
-
-#define CQ_CQE_THRESH_DEFAULT 0x1ULL /* IRQ triggered when
- * NIX_LF_CINTX_CNT[QCOUNT]
- * crosses this value
- */
-#define CQ_TIMER_THRESH_DEFAULT 0xAULL /* ~1usec i.e (0xA * 100nsec) */
-#define CQ_TIMER_THRESH_MAX 255
-
-#define NIX_RSS_L3_L4_SRC_DST (RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY \
- | RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY)
-
-#define NIX_RSS_OFFLOAD (RTE_ETH_RSS_PORT | RTE_ETH_RSS_IP | RTE_ETH_RSS_UDP |\
- RTE_ETH_RSS_TCP | RTE_ETH_RSS_SCTP | \
- RTE_ETH_RSS_TUNNEL | RTE_ETH_RSS_L2_PAYLOAD | \
- NIX_RSS_L3_L4_SRC_DST | RTE_ETH_RSS_LEVEL_MASK | \
- RTE_ETH_RSS_C_VLAN)
-
-#define NIX_TX_OFFLOAD_CAPA ( \
- RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE | \
- RTE_ETH_TX_OFFLOAD_MT_LOCKFREE | \
- RTE_ETH_TX_OFFLOAD_VLAN_INSERT | \
- RTE_ETH_TX_OFFLOAD_QINQ_INSERT | \
- RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
- RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_TCP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_UDP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_SCTP_CKSUM | \
- RTE_ETH_TX_OFFLOAD_TCP_TSO | \
- RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO | \
- RTE_ETH_TX_OFFLOAD_MULTI_SEGS | \
- RTE_ETH_TX_OFFLOAD_IPV4_CKSUM)
-
-#define NIX_RX_OFFLOAD_CAPA ( \
- RTE_ETH_RX_OFFLOAD_CHECKSUM | \
- RTE_ETH_RX_OFFLOAD_SCTP_CKSUM | \
- RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | \
- RTE_ETH_RX_OFFLOAD_SCATTER | \
- RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM | \
- RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \
- RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP | \
- RTE_ETH_RX_OFFLOAD_TIMESTAMP | \
- RTE_ETH_RX_OFFLOAD_RSS_HASH)
-
-#define NIX_DEFAULT_RSS_CTX_GROUP 0
-#define NIX_DEFAULT_RSS_MCAM_IDX -1
-
-#define otx2_ethdev_is_ptp_en(dev) ((dev)->ptp_en)
-
-#define NIX_TIMESYNC_TX_CMD_LEN 8
-/* Additional timesync values. */
-#define OTX2_CYCLECOUNTER_MASK 0xffffffffffffffffULL
-
-#define OCTEONTX2_PMD net_octeontx2
-
-#define otx2_ethdev_is_same_driver(dev) \
- (strcmp((dev)->device->driver->name, RTE_STR(OCTEONTX2_PMD)) == 0)
-
-enum nix_q_size_e {
- nix_q_size_16, /* 16 entries */
- nix_q_size_64, /* 64 entries */
- nix_q_size_256,
- nix_q_size_1K,
- nix_q_size_4K,
- nix_q_size_16K,
- nix_q_size_64K,
- nix_q_size_256K,
- nix_q_size_1M, /* Million entries */
- nix_q_size_max
-};
-
-enum nix_lso_tun_type {
- NIX_LSO_TUN_V4V4,
- NIX_LSO_TUN_V4V6,
- NIX_LSO_TUN_V6V4,
- NIX_LSO_TUN_V6V6,
- NIX_LSO_TUN_MAX,
-};
-
-struct otx2_qint {
- struct rte_eth_dev *eth_dev;
- uint8_t qintx;
-};
-
-struct otx2_rss_info {
- uint64_t nix_rss;
- uint32_t flowkey_cfg;
- uint16_t rss_size;
- uint8_t rss_grps;
- uint8_t alg_idx; /* Selected algo index */
- uint16_t ind_tbl[NIX_RSS_RETA_SIZE_MAX];
- uint8_t key[NIX_HASH_KEY_SIZE];
-};
-
-struct otx2_eth_qconf {
- union {
- struct rte_eth_txconf tx;
- struct rte_eth_rxconf rx;
- } conf;
- void *mempool;
- uint32_t socket_id;
- uint16_t nb_desc;
- uint8_t valid;
-};
-
-struct otx2_fc_info {
- enum rte_eth_fc_mode mode; /**< Link flow control mode */
- uint8_t rx_pause;
- uint8_t tx_pause;
- uint8_t chan_cnt;
- uint16_t bpid[NIX_MAX_CHAN];
-};
-
-struct vlan_mkex_info {
- struct npc_xtract_info la_xtract;
- struct npc_xtract_info lb_xtract;
- uint64_t lb_lt_offset;
-};
-
-struct mcast_entry {
- struct rte_ether_addr mcast_mac;
- uint16_t mcam_index;
- TAILQ_ENTRY(mcast_entry) next;
-};
-
-TAILQ_HEAD(otx2_nix_mc_filter_tbl, mcast_entry);
-
-struct vlan_entry {
- uint32_t mcam_idx;
- uint16_t vlan_id;
- TAILQ_ENTRY(vlan_entry) next;
-};
-
-TAILQ_HEAD(otx2_vlan_filter_tbl, vlan_entry);
-
-struct otx2_vlan_info {
- struct otx2_vlan_filter_tbl fltr_tbl;
- /* MKEX layer info */
- struct mcam_entry def_tx_mcam_ent;
- struct mcam_entry def_rx_mcam_ent;
- struct vlan_mkex_info mkex;
- /* Default mcam entry that matches vlan packets */
- uint32_t def_rx_mcam_idx;
- uint32_t def_tx_mcam_idx;
- /* MCAM entry that matches double vlan packets */
- uint32_t qinq_mcam_idx;
- /* Indices of tx_vtag def registers */
- uint32_t outer_vlan_idx;
- uint32_t inner_vlan_idx;
- uint16_t outer_vlan_tpid;
- uint16_t inner_vlan_tpid;
- uint16_t pvid;
- /* QinQ entry allocated before default one */
- uint8_t qinq_before_def;
- uint8_t pvid_insert_on;
- /* Rx vtag action type */
- uint8_t vtag_type_idx;
- uint8_t filter_on;
- uint8_t strip_on;
- uint8_t qinq_on;
- uint8_t promisc_on;
-};
-
-struct otx2_eth_dev {
- OTX2_DEV; /* Base class */
- RTE_MARKER otx2_eth_dev_data_start;
- uint16_t sqb_size;
- uint16_t rx_chan_base;
- uint16_t tx_chan_base;
- uint8_t rx_chan_cnt;
- uint8_t tx_chan_cnt;
- uint8_t lso_tsov4_idx;
- uint8_t lso_tsov6_idx;
- uint8_t lso_udp_tun_idx[NIX_LSO_TUN_MAX];
- uint8_t lso_tun_idx[NIX_LSO_TUN_MAX];
- uint64_t lso_tun_fmt;
- uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
- uint8_t mkex_pfl_name[MKEX_NAME_LEN];
- uint8_t max_mac_entries;
- bool dmac_filter_enable;
- uint8_t lf_tx_stats;
- uint8_t lf_rx_stats;
- uint8_t lock_rx_ctx;
- uint8_t lock_tx_ctx;
- uint16_t flags;
- uint16_t cints;
- uint16_t qints;
- uint8_t configured;
- uint8_t configured_qints;
- uint8_t configured_cints;
- uint8_t configured_nb_rx_qs;
- uint8_t configured_nb_tx_qs;
- uint8_t ptype_disable;
- uint16_t nix_msixoff;
- uintptr_t base;
- uintptr_t lmt_addr;
- uint16_t scalar_ena;
- uint16_t rss_tag_as_xor;
- uint16_t max_sqb_count;
- uint16_t rx_offload_flags; /* Selected Rx offload flags(NIX_RX_*_F) */
- uint64_t rx_offloads;
- uint16_t tx_offload_flags; /* Selected Tx offload flags(NIX_TX_*_F) */
- uint64_t tx_offloads;
- uint64_t rx_offload_capa;
- uint64_t tx_offload_capa;
- struct otx2_qint qints_mem[RTE_MAX_QUEUES_PER_PORT];
- struct otx2_qint cints_mem[RTE_MAX_QUEUES_PER_PORT];
- uint16_t txschq[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_contig[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_index[NIX_TXSCH_LVL_CNT];
- uint16_t txschq_contig_index[NIX_TXSCH_LVL_CNT];
- /* Dis-contiguous queues */
- uint16_t txschq_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- /* Contiguous queues */
- uint16_t txschq_contig_list[NIX_TXSCH_LVL_CNT][MAX_TXSCHQ_PER_FUNC];
- uint16_t otx2_tm_root_lvl;
- uint16_t link_cfg_lvl;
- uint16_t tm_flags;
- uint16_t tm_leaf_cnt;
- uint64_t tm_rate_min;
- struct otx2_nix_tm_node_list node_list;
- struct otx2_nix_tm_shaper_profile_list shaper_profile_list;
- struct otx2_rss_info rss_info;
- struct otx2_fc_info fc_info;
- uint32_t txmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
- uint32_t rxmap[RTE_ETHDEV_QUEUE_STAT_CNTRS];
- struct otx2_npc_flow_info npc_flow;
- struct otx2_vlan_info vlan_info;
- struct otx2_eth_qconf *tx_qconf;
- struct otx2_eth_qconf *rx_qconf;
- struct rte_eth_dev *eth_dev;
- eth_rx_burst_t rx_pkt_burst_no_offload;
- /* PTP counters */
- bool ptp_en;
- struct otx2_timesync_info tstamp;
- struct rte_timecounter systime_tc;
- struct rte_timecounter rx_tstamp_tc;
- struct rte_timecounter tx_tstamp_tc;
- double clk_freq_mult;
- uint64_t clk_delta;
- bool mc_tbl_set;
- struct otx2_nix_mc_filter_tbl mc_fltr_tbl;
- bool sdp_link; /* SDP flag */
- /* Inline IPsec params */
- uint16_t ipsec_in_max_spi;
- rte_spinlock_t ipsec_tbl_lock;
- uint8_t duplex;
- uint32_t speed;
-} __rte_cache_aligned;
-
-struct otx2_eth_txq {
- uint64_t cmd[8];
- int64_t fc_cache_pkts;
- uint64_t *fc_mem;
- void *lmt_addr;
- rte_iova_t io_addr;
- rte_iova_t fc_iova;
- uint16_t sqes_per_sqb_log2;
- int16_t nb_sqb_bufs_adj;
- uint64_t lso_tun_fmt;
- RTE_MARKER slow_path_start;
- uint16_t nb_sqb_bufs;
- uint16_t sq;
- uint64_t offloads;
- struct otx2_eth_dev *dev;
- struct rte_mempool *sqb_pool;
- struct otx2_eth_qconf qconf;
-} __rte_cache_aligned;
-
-struct otx2_eth_rxq {
- uint64_t mbuf_initializer;
- uint64_t data_off;
- uintptr_t desc;
- void *lookup_mem;
- uintptr_t cq_door;
- uint64_t wdata;
- int64_t *cq_status;
- uint32_t head;
- uint32_t qmask;
- uint32_t available;
- uint16_t rq;
- struct otx2_timesync_info *tstamp;
- RTE_MARKER slow_path_start;
- uint64_t aura;
- uint64_t offloads;
- uint32_t qlen;
- struct rte_mempool *pool;
- enum nix_q_size_e qsize;
- struct rte_eth_dev *eth_dev;
- struct otx2_eth_qconf qconf;
- uint16_t cq_drop;
-} __rte_cache_aligned;
-
-static inline struct otx2_eth_dev *
-otx2_eth_pmd_priv(struct rte_eth_dev *eth_dev)
-{
- return eth_dev->data->dev_private;
-}
-
-/* Ops */
-int otx2_nix_info_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_info *dev_info);
-int otx2_nix_dev_flow_ops_get(struct rte_eth_dev *eth_dev,
- const struct rte_flow_ops **ops);
-int otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
- size_t fw_size);
-int otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_module_info *modinfo);
-int otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
- struct rte_dev_eeprom_info *info);
-int otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool);
-void otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_rxq_info *qinfo);
-void otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_txq_info *qinfo);
-int otx2_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
- struct rte_eth_burst_mode *mode);
-int otx2_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id,
- struct rte_eth_burst_mode *mode);
-uint32_t otx2_nix_rx_queue_count(void *rx_queue);
-int otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt);
-int otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset);
-int otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset);
-
-void otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en);
-int otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_tx_queue_start(struct rte_eth_dev *eth_dev, uint16_t qidx);
-int otx2_nix_tx_queue_stop(struct rte_eth_dev *eth_dev, uint16_t qidx);
-uint64_t otx2_nix_rxq_mbuf_setup(struct otx2_eth_dev *dev, uint16_t port_id);
-
-/* Multicast filter APIs */
-void otx2_nix_mc_filter_init(struct otx2_eth_dev *dev);
-void otx2_nix_mc_filter_fini(struct otx2_eth_dev *dev);
-int otx2_nix_mc_addr_list_install(struct rte_eth_dev *eth_dev);
-int otx2_nix_mc_addr_list_uninstall(struct rte_eth_dev *eth_dev);
-int otx2_nix_set_mc_addr_list(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *mc_addr_set,
- uint32_t nb_mc_addr);
-
-/* MTU */
-int otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu);
-int otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev);
-void otx2_nix_enable_mseg_on_jumbo(struct otx2_eth_rxq *rxq);
-
-
-/* Link */
-void otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set);
-int otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete);
-void otx2_eth_dev_link_status_update(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-void otx2_eth_dev_link_status_get(struct otx2_dev *dev,
- struct cgx_link_user_info *link);
-int otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev);
-int otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev);
-int otx2_apply_link_speed(struct rte_eth_dev *eth_dev);
-
-/* IRQ */
-int otx2_nix_register_irqs(struct rte_eth_dev *eth_dev);
-int oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev);
-int oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev);
-void otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev);
-void oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev);
-void oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev);
-void otx2_nix_err_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb);
-void otx2_nix_ras_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb);
-
-int otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id);
-int otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id);
-
-/* Debug */
-int otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data);
-int otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev,
- struct rte_dev_reg_info *regs);
-int otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev);
-void otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq);
-void otx2_nix_tm_dump(struct otx2_eth_dev *dev);
-
-/* Stats */
-int otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats);
-int otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_queue_stats_mapping(struct rte_eth_dev *dev,
- uint16_t queue_id, uint8_t stat_idx,
- uint8_t is_rx);
-int otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat *xstats, unsigned int n);
-int otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit);
-int otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- uint64_t *values, unsigned int n);
-int otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit);
-
-/* RSS */
-void otx2_nix_rss_set_key(struct otx2_eth_dev *dev,
- uint8_t *key, uint32_t key_len);
-uint32_t otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev,
- uint64_t ethdev_rss, uint8_t rss_level);
-int otx2_rss_set_hf(struct otx2_eth_dev *dev,
- uint32_t flowkey_cfg, uint8_t *alg_idx,
- uint8_t group, int mcam_index);
-int otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev, uint8_t group,
- uint16_t *ind_tbl);
-int otx2_nix_rss_config(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size);
-int otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size);
-int otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf);
-
-int otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf);
-
-/* CGX */
-int otx2_cgx_rxtx_start(struct otx2_eth_dev *dev);
-int otx2_cgx_rxtx_stop(struct otx2_eth_dev *dev);
-int otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr);
-
-/* Flow Control */
-int otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev);
-
-int otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf);
-
-int otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf);
-
-int otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb);
-
-int otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev);
-
-/* VLAN */
-int otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev);
-int otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev);
-int otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask);
-void otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable);
-int otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
- int on);
-void otx2_nix_vlan_strip_queue_set(struct rte_eth_dev *dev,
- uint16_t queue, int on);
-int otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, uint16_t tpid);
-int otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on);
-
-/* Lookup configuration */
-void *otx2_nix_fastpath_lookup_mem_get(void);
-
-/* PTYPES */
-const uint32_t *otx2_nix_supported_ptypes_get(struct rte_eth_dev *dev);
-int otx2_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask);
-
-/* Mac address handling */
-int otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr);
-int otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr);
-int otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *addr,
- uint32_t index, uint32_t pool);
-void otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index);
-int otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev);
-
-/* Devargs */
-int otx2_ethdev_parse_devargs(struct rte_devargs *devargs,
- struct otx2_eth_dev *dev);
-
-/* Rx and Tx routines */
-void otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev);
-void otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev);
-void otx2_nix_form_default_desc(struct otx2_eth_txq *txq);
-
-/* Timesync - PTP routines */
-int otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev);
-int otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev);
-int otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp,
- uint32_t flags);
-int otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp);
-int otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta);
-int otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
- const struct timespec *ts);
-int otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev,
- struct timespec *ts);
-int otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en);
-int otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *time);
-int otx2_nix_raw_clock_tsc_conv(struct otx2_eth_dev *dev);
-void otx2_nix_ptp_enable_vf(struct rte_eth_dev *eth_dev);
-
-#endif /* __OTX2_ETHDEV_H__ */
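
Most of the header removed above is compile-time capability masks (NIX_RX_OFFLOAD_CAPA, NIX_TX_OFFLOAD_CAPA, RSS table sizes) plus the per-port private struct. Such masks are typically surfaced to applications through the dev_infos_get callback; a rough sketch of that flow, assuming a generic PMD (the "mydev" names and the literal values are illustrative only):

    /* Sketch only: how compile-time capability masks usually reach
     * applications via rte_eth_dev_info_get(). */
    #include <rte_common.h>
    #include <ethdev_driver.h>   /* struct rte_eth_dev (driver-side header) */

    static int
    mydev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *info)
    {
        RTE_SET_USED(eth_dev);
        info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_CHECKSUM |
                                RTE_ETH_RX_OFFLOAD_RSS_HASH;   /* cf. NIX_RX_OFFLOAD_CAPA */
        info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
                                RTE_ETH_TX_OFFLOAD_IPV4_CKSUM; /* cf. NIX_TX_OFFLOAD_CAPA */
        info->reta_size = 64;        /* cf. NIX_RSS_RETA_SIZE */
        info->hash_key_size = 48;    /* cf. NIX_HASH_KEY_SIZE */
        return 0;
    }
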
diff --git a/drivers/net/octeontx2/otx2_ethdev_debug.c b/drivers/net/octeontx2/otx2_ethdev_debug.c
deleted file mode 100644
index 6d951bc7e2..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_debug.c
+++ /dev/null
@@ -1,811 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-#define nix_dump(fmt, ...) fprintf(stderr, fmt "\n", ##__VA_ARGS__)
-#define NIX_REG_INFO(reg) {reg, #reg}
-#define NIX_REG_NAME_SZ 48
-
-struct nix_lf_reg_info {
- uint32_t offset;
- const char *name;
-};
-
-static const struct
-nix_lf_reg_info nix_lf_reg[] = {
- NIX_REG_INFO(NIX_LF_RX_SECRETX(0)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(1)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(2)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(3)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(4)),
- NIX_REG_INFO(NIX_LF_RX_SECRETX(5)),
- NIX_REG_INFO(NIX_LF_CFG),
- NIX_REG_INFO(NIX_LF_GINT),
- NIX_REG_INFO(NIX_LF_GINT_W1S),
- NIX_REG_INFO(NIX_LF_GINT_ENA_W1C),
- NIX_REG_INFO(NIX_LF_GINT_ENA_W1S),
- NIX_REG_INFO(NIX_LF_ERR_INT),
- NIX_REG_INFO(NIX_LF_ERR_INT_W1S),
- NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1C),
- NIX_REG_INFO(NIX_LF_ERR_INT_ENA_W1S),
- NIX_REG_INFO(NIX_LF_RAS),
- NIX_REG_INFO(NIX_LF_RAS_W1S),
- NIX_REG_INFO(NIX_LF_RAS_ENA_W1C),
- NIX_REG_INFO(NIX_LF_RAS_ENA_W1S),
- NIX_REG_INFO(NIX_LF_SQ_OP_ERR_DBG),
- NIX_REG_INFO(NIX_LF_MNQ_ERR_DBG),
- NIX_REG_INFO(NIX_LF_SEND_ERR_DBG),
-};
-
-static int
-nix_lf_get_reg_count(struct otx2_eth_dev *dev)
-{
- int reg_count = 0;
-
- reg_count = RTE_DIM(nix_lf_reg);
- /* NIX_LF_TX_STATX */
- reg_count += dev->lf_tx_stats;
- /* NIX_LF_RX_STATX */
- reg_count += dev->lf_rx_stats;
- /* NIX_LF_QINTX_CNT*/
- reg_count += dev->qints;
- /* NIX_LF_QINTX_INT */
- reg_count += dev->qints;
- /* NIX_LF_QINTX_ENA_W1S */
- reg_count += dev->qints;
- /* NIX_LF_QINTX_ENA_W1C */
- reg_count += dev->qints;
- /* NIX_LF_CINTX_CNT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_WAIT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_INT */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_INT_W1S */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_ENA_W1S */
- reg_count += dev->cints;
- /* NIX_LF_CINTX_ENA_W1C */
- reg_count += dev->cints;
-
- return reg_count;
-}
-
-int
-otx2_nix_reg_dump(struct otx2_eth_dev *dev, uint64_t *data)
-{
- uintptr_t nix_lf_base = dev->base;
- bool dump_stdout;
- uint64_t reg;
- uint32_t i;
-
- dump_stdout = data ? 0 : 1;
-
- for (i = 0; i < RTE_DIM(nix_lf_reg); i++) {
- reg = otx2_read64(nix_lf_base + nix_lf_reg[i].offset);
- if (dump_stdout && reg)
- nix_dump("%32s = 0x%" PRIx64,
- nix_lf_reg[i].name, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_TX_STATX */
- for (i = 0; i < dev->lf_tx_stats; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_TX_STATX(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_TX_STATX", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_RX_STATX */
- for (i = 0; i < dev->lf_rx_stats; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_RX_STATX(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_RX_STATX", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_CNT*/
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_CNT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_CNT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_INT */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_INT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_INT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_ENA_W1S */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_ENA_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_QINTX_ENA_W1C */
- for (i = 0; i < dev->qints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_QINTX_ENA_W1C(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_QINTX_ENA_W1C", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_CNT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_CNT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_CNT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_WAIT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_WAIT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_WAIT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_INT */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_INT", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_INT_W1S */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_INT_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_INT_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_ENA_W1S */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1S(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_ENA_W1S", i, reg);
- if (data)
- *data++ = reg;
- }
-
- /* NIX_LF_CINTX_ENA_W1C */
- for (i = 0; i < dev->cints; i++) {
- reg = otx2_read64(nix_lf_base + NIX_LF_CINTX_ENA_W1C(i));
- if (dump_stdout && reg)
- nix_dump("%32s_%d = 0x%" PRIx64,
- "NIX_LF_CINTX_ENA_W1C", i, reg);
- if (data)
- *data++ = reg;
- }
- return 0;
-}
-
-int
-otx2_nix_dev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t *data = regs->data;
-
- if (data == NULL) {
- regs->length = nix_lf_get_reg_count(dev);
- regs->width = 8;
- return 0;
- }
-
- if (!regs->length ||
- regs->length == (uint32_t)nix_lf_get_reg_count(dev)) {
- otx2_nix_reg_dump(dev, data);
- return 0;
- }
-
- return -ENOTSUP;
-}
-
-static inline void
-nix_lf_sq_dump(__otx2_io struct nix_sq_ctx_s *ctx)
-{
- nix_dump("W0: sqe_way_mask \t\t%d\nW0: cq \t\t\t\t%d",
- ctx->sqe_way_mask, ctx->cq);
- nix_dump("W0: sdp_mcast \t\t\t%d\nW0: substream \t\t\t0x%03x",
- ctx->sdp_mcast, ctx->substream);
- nix_dump("W0: qint_idx \t\t\t%d\nW0: ena \t\t\t%d\n",
- ctx->qint_idx, ctx->ena);
-
- nix_dump("W1: sqb_count \t\t\t%d\nW1: default_chan \t\t%d",
- ctx->sqb_count, ctx->default_chan);
- nix_dump("W1: smq_rr_quantum \t\t%d\nW1: sso_ena \t\t\t%d",
- ctx->smq_rr_quantum, ctx->sso_ena);
- nix_dump("W1: xoff \t\t\t%d\nW1: cq_ena \t\t\t%d\nW1: smq\t\t\t\t%d\n",
- ctx->xoff, ctx->cq_ena, ctx->smq);
-
- nix_dump("W2: sqe_stype \t\t\t%d\nW2: sq_int_ena \t\t\t%d",
- ctx->sqe_stype, ctx->sq_int_ena);
- nix_dump("W2: sq_int \t\t\t%d\nW2: sqb_aura \t\t\t%d",
- ctx->sq_int, ctx->sqb_aura);
- nix_dump("W2: smq_rr_count \t\t%d\n", ctx->smq_rr_count);
-
- nix_dump("W3: smq_next_sq_vld\t\t%d\nW3: smq_pend\t\t\t%d",
- ctx->smq_next_sq_vld, ctx->smq_pend);
- nix_dump("W3: smenq_next_sqb_vld \t%d\nW3: head_offset\t\t\t%d",
- ctx->smenq_next_sqb_vld, ctx->head_offset);
- nix_dump("W3: smenq_offset\t\t%d\nW3: tail_offset \t\t%d",
- ctx->smenq_offset, ctx->tail_offset);
- nix_dump("W3: smq_lso_segnum \t\t%d\nW3: smq_next_sq \t\t%d",
- ctx->smq_lso_segnum, ctx->smq_next_sq);
- nix_dump("W3: mnq_dis \t\t\t%d\nW3: lmt_dis \t\t\t%d",
- ctx->mnq_dis, ctx->lmt_dis);
- nix_dump("W3: cq_limit\t\t\t%d\nW3: max_sqe_size\t\t%d\n",
- ctx->cq_limit, ctx->max_sqe_size);
-
- nix_dump("W4: next_sqb \t\t\t0x%" PRIx64 "", ctx->next_sqb);
- nix_dump("W5: tail_sqb \t\t\t0x%" PRIx64 "", ctx->tail_sqb);
- nix_dump("W6: smenq_sqb \t\t\t0x%" PRIx64 "", ctx->smenq_sqb);
- nix_dump("W7: smenq_next_sqb \t\t0x%" PRIx64 "", ctx->smenq_next_sqb);
- nix_dump("W8: head_sqb \t\t\t0x%" PRIx64 "", ctx->head_sqb);
-
- nix_dump("W9: vfi_lso_vld \t\t%d\nW9: vfi_lso_vlan1_ins_ena\t%d",
- ctx->vfi_lso_vld, ctx->vfi_lso_vlan1_ins_ena);
- nix_dump("W9: vfi_lso_vlan0_ins_ena\t%d\nW9: vfi_lso_mps\t\t\t%d",
- ctx->vfi_lso_vlan0_ins_ena, ctx->vfi_lso_mps);
- nix_dump("W9: vfi_lso_sb \t\t\t%d\nW9: vfi_lso_sizem1\t\t%d",
- ctx->vfi_lso_sb, ctx->vfi_lso_sizem1);
- nix_dump("W9: vfi_lso_total\t\t%d", ctx->vfi_lso_total);
-
- nix_dump("W10: scm_lso_rem \t\t0x%" PRIx64 "",
- (uint64_t)ctx->scm_lso_rem);
- nix_dump("W11: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
- nix_dump("W12: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
- nix_dump("W14: dropped_octs \t\t0x%" PRIx64 "",
- (uint64_t)ctx->drop_octs);
- nix_dump("W15: dropped_pkts \t\t0x%" PRIx64 "",
- (uint64_t)ctx->drop_pkts);
-}
-
-static inline void
-nix_lf_rq_dump(__otx2_io struct nix_rq_ctx_s *ctx)
-{
- nix_dump("W0: wqe_aura \t\t\t%d\nW0: substream \t\t\t0x%03x",
- ctx->wqe_aura, ctx->substream);
- nix_dump("W0: cq \t\t\t\t%d\nW0: ena_wqwd \t\t\t%d",
- ctx->cq, ctx->ena_wqwd);
- nix_dump("W0: ipsech_ena \t\t\t%d\nW0: sso_ena \t\t\t%d",
- ctx->ipsech_ena, ctx->sso_ena);
- nix_dump("W0: ena \t\t\t%d\n", ctx->ena);
-
- nix_dump("W1: lpb_drop_ena \t\t%d\nW1: spb_drop_ena \t\t%d",
- ctx->lpb_drop_ena, ctx->spb_drop_ena);
- nix_dump("W1: xqe_drop_ena \t\t%d\nW1: wqe_caching \t\t%d",
- ctx->xqe_drop_ena, ctx->wqe_caching);
- nix_dump("W1: pb_caching \t\t\t%d\nW1: sso_tt \t\t\t%d",
- ctx->pb_caching, ctx->sso_tt);
- nix_dump("W1: sso_grp \t\t\t%d\nW1: lpb_aura \t\t\t%d",
- ctx->sso_grp, ctx->lpb_aura);
- nix_dump("W1: spb_aura \t\t\t%d\n", ctx->spb_aura);
-
- nix_dump("W2: xqe_hdr_split \t\t%d\nW2: xqe_imm_copy \t\t%d",
- ctx->xqe_hdr_split, ctx->xqe_imm_copy);
- nix_dump("W2: xqe_imm_size \t\t%d\nW2: later_skip \t\t\t%d",
- ctx->xqe_imm_size, ctx->later_skip);
- nix_dump("W2: first_skip \t\t\t%d\nW2: lpb_sizem1 \t\t\t%d",
- ctx->first_skip, ctx->lpb_sizem1);
- nix_dump("W2: spb_ena \t\t\t%d\nW2: wqe_skip \t\t\t%d",
- ctx->spb_ena, ctx->wqe_skip);
- nix_dump("W2: spb_sizem1 \t\t\t%d\n", ctx->spb_sizem1);
-
- nix_dump("W3: spb_pool_pass \t\t%d\nW3: spb_pool_drop \t\t%d",
- ctx->spb_pool_pass, ctx->spb_pool_drop);
- nix_dump("W3: spb_aura_pass \t\t%d\nW3: spb_aura_drop \t\t%d",
- ctx->spb_aura_pass, ctx->spb_aura_drop);
- nix_dump("W3: wqe_pool_pass \t\t%d\nW3: wqe_pool_drop \t\t%d",
- ctx->wqe_pool_pass, ctx->wqe_pool_drop);
- nix_dump("W3: xqe_pass \t\t\t%d\nW3: xqe_drop \t\t\t%d\n",
- ctx->xqe_pass, ctx->xqe_drop);
-
- nix_dump("W4: qint_idx \t\t\t%d\nW4: rq_int_ena \t\t\t%d",
- ctx->qint_idx, ctx->rq_int_ena);
- nix_dump("W4: rq_int \t\t\t%d\nW4: lpb_pool_pass \t\t%d",
- ctx->rq_int, ctx->lpb_pool_pass);
- nix_dump("W4: lpb_pool_drop \t\t%d\nW4: lpb_aura_pass \t\t%d",
- ctx->lpb_pool_drop, ctx->lpb_aura_pass);
- nix_dump("W4: lpb_aura_drop \t\t%d\n", ctx->lpb_aura_drop);
-
- nix_dump("W5: flow_tagw \t\t\t%d\nW5: bad_utag \t\t\t%d",
- ctx->flow_tagw, ctx->bad_utag);
- nix_dump("W5: good_utag \t\t\t%d\nW5: ltag \t\t\t%d\n",
- ctx->good_utag, ctx->ltag);
-
- nix_dump("W6: octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->octs);
- nix_dump("W7: pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->pkts);
- nix_dump("W8: drop_octs \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_octs);
- nix_dump("W9: drop_pkts \t\t\t0x%" PRIx64 "", (uint64_t)ctx->drop_pkts);
- nix_dump("W10: re_pkts \t\t\t0x%" PRIx64 "\n", (uint64_t)ctx->re_pkts);
-}
-
-static inline void
-nix_lf_cq_dump(__otx2_io struct nix_cq_ctx_s *ctx)
-{
- nix_dump("W0: base \t\t\t0x%" PRIx64 "\n", ctx->base);
-
- nix_dump("W1: wrptr \t\t\t%" PRIx64 "", (uint64_t)ctx->wrptr);
- nix_dump("W1: avg_con \t\t\t%d\nW1: cint_idx \t\t\t%d",
- ctx->avg_con, ctx->cint_idx);
- nix_dump("W1: cq_err \t\t\t%d\nW1: qint_idx \t\t\t%d",
- ctx->cq_err, ctx->qint_idx);
- nix_dump("W1: bpid \t\t\t%d\nW1: bp_ena \t\t\t%d\n",
- ctx->bpid, ctx->bp_ena);
-
- nix_dump("W2: update_time \t\t%d\nW2: avg_level \t\t\t%d",
- ctx->update_time, ctx->avg_level);
- nix_dump("W2: head \t\t\t%d\nW2: tail \t\t\t%d\n",
- ctx->head, ctx->tail);
-
- nix_dump("W3: cq_err_int_ena \t\t%d\nW3: cq_err_int \t\t\t%d",
- ctx->cq_err_int_ena, ctx->cq_err_int);
- nix_dump("W3: qsize \t\t\t%d\nW3: caching \t\t\t%d",
- ctx->qsize, ctx->caching);
- nix_dump("W3: substream \t\t\t0x%03x\nW3: ena \t\t\t%d",
- ctx->substream, ctx->ena);
- nix_dump("W3: drop_ena \t\t\t%d\nW3: drop \t\t\t%d",
- ctx->drop_ena, ctx->drop);
- nix_dump("W3: bp \t\t\t\t%d\n", ctx->bp);
-}
-
-int
-otx2_nix_queues_ctx_dump(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, q, rq = eth_dev->data->nb_rx_queues;
- int sq = eth_dev->data->nb_tx_queues;
- struct otx2_mbox *mbox = dev->mbox;
- struct npa_aq_enq_rsp *npa_rsp;
- struct npa_aq_enq_req *npa_aq;
- struct otx2_npa_lf *npa_lf;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
-
- npa_lf = otx2_npa_lf_obj_get();
-
- for (q = 0; q < rq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get cq context");
- goto fail;
- }
- nix_dump("============== port=%d cq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_cq_dump(&rsp->cq);
- }
-
- for (q = 0; q < rq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
- if (rc) {
- otx2_err("Failed to get rq context");
- goto fail;
- }
- nix_dump("============== port=%d rq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_rq_dump(&rsp->rq);
- }
- for (q = 0; q < sq; q++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = q;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get sq context");
- goto fail;
- }
- nix_dump("============== port=%d sq=%d ===============",
- eth_dev->data->port_id, q);
- nix_lf_sq_dump(&rsp->sq);
-
- if (!npa_lf) {
- otx2_err("NPA LF doesn't exist");
- continue;
- }
-
- /* Dump SQB Aura minimal info */
- npa_aq = otx2_mbox_alloc_msg_npa_aq_enq(npa_lf->mbox);
- npa_aq->aura_id = rsp->sq.sqb_aura;
- npa_aq->ctype = NPA_AQ_CTYPE_AURA;
- npa_aq->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(npa_lf->mbox, (void *)&npa_rsp);
- if (rc) {
- otx2_err("Failed to get sq's sqb_aura context");
- continue;
- }
-
- nix_dump("\nSQB Aura W0: Pool addr\t\t0x%"PRIx64"",
- npa_rsp->aura.pool_addr);
- nix_dump("SQB Aura W1: ena\t\t\t%d",
- npa_rsp->aura.ena);
- nix_dump("SQB Aura W2: count\t\t%"PRIx64"",
- (uint64_t)npa_rsp->aura.count);
- nix_dump("SQB Aura W3: limit\t\t%"PRIx64"",
- (uint64_t)npa_rsp->aura.limit);
- nix_dump("SQB Aura W3: fc_ena\t\t%d",
- npa_rsp->aura.fc_ena);
- nix_dump("SQB Aura W4: fc_addr\t\t0x%"PRIx64"\n",
- npa_rsp->aura.fc_addr);
- }
-
-fail:
- return rc;
-}
-
-/* Dumps struct nix_cqe_hdr_s and struct nix_rx_parse_s */
-void
-otx2_nix_cqe_dump(const struct nix_cqe_hdr_s *cq)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
-
- nix_dump("tag \t\t0x%x\tq \t\t%d\t\tnode \t\t%d\tcqe_type \t%d",
- cq->tag, cq->q, cq->node, cq->cqe_type);
-
- nix_dump("W0: chan \t%d\t\tdesc_sizem1 \t%d",
- rx->chan, rx->desc_sizem1);
- nix_dump("W0: imm_copy \t%d\t\texpress \t%d",
- rx->imm_copy, rx->express);
- nix_dump("W0: wqwd \t%d\t\terrlev \t\t%d\t\terrcode \t%d",
- rx->wqwd, rx->errlev, rx->errcode);
- nix_dump("W0: latype \t%d\t\tlbtype \t\t%d\t\tlctype \t\t%d",
- rx->latype, rx->lbtype, rx->lctype);
- nix_dump("W0: ldtype \t%d\t\tletype \t\t%d\t\tlftype \t\t%d",
- rx->ldtype, rx->letype, rx->lftype);
- nix_dump("W0: lgtype \t%d \t\tlhtype \t\t%d",
- rx->lgtype, rx->lhtype);
-
- nix_dump("W1: pkt_lenm1 \t%d", rx->pkt_lenm1);
- nix_dump("W1: l2m \t%d\t\tl2b \t\t%d\t\tl3m \t\t%d\tl3b \t\t%d",
- rx->l2m, rx->l2b, rx->l3m, rx->l3b);
- nix_dump("W1: vtag0_valid %d\t\tvtag0_gone \t%d",
- rx->vtag0_valid, rx->vtag0_gone);
- nix_dump("W1: vtag1_valid %d\t\tvtag1_gone \t%d",
- rx->vtag1_valid, rx->vtag1_gone);
- nix_dump("W1: pkind \t%d", rx->pkind);
- nix_dump("W1: vtag0_tci \t%d\t\tvtag1_tci \t%d",
- rx->vtag0_tci, rx->vtag1_tci);
-
- nix_dump("W2: laflags \t%d\t\tlbflags\t\t%d\t\tlcflags \t%d",
- rx->laflags, rx->lbflags, rx->lcflags);
- nix_dump("W2: ldflags \t%d\t\tleflags\t\t%d\t\tlfflags \t%d",
- rx->ldflags, rx->leflags, rx->lfflags);
- nix_dump("W2: lgflags \t%d\t\tlhflags \t%d",
- rx->lgflags, rx->lhflags);
-
- nix_dump("W3: eoh_ptr \t%d\t\twqe_aura \t%d\t\tpb_aura \t%d",
- rx->eoh_ptr, rx->wqe_aura, rx->pb_aura);
- nix_dump("W3: match_id \t%d", rx->match_id);
-
- nix_dump("W4: laptr \t%d\t\tlbptr \t\t%d\t\tlcptr \t\t%d",
- rx->laptr, rx->lbptr, rx->lcptr);
- nix_dump("W4: ldptr \t%d\t\tleptr \t\t%d\t\tlfptr \t\t%d",
- rx->ldptr, rx->leptr, rx->lfptr);
- nix_dump("W4: lgptr \t%d\t\tlhptr \t\t%d", rx->lgptr, rx->lhptr);
-
- nix_dump("W5: vtag0_ptr \t%d\t\tvtag1_ptr \t%d\t\tflow_key_alg \t%d",
- rx->vtag0_ptr, rx->vtag1_ptr, rx->flow_key_alg);
-}
-
-static uint8_t
-prepare_nix_tm_reg_dump(uint16_t hw_lvl, uint16_t schq, uint16_t link,
- uint64_t *reg, char regstr[][NIX_REG_NAME_SZ])
-{
- uint8_t k = 0;
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- reg[k] = NIX_AF_SMQX_CFG(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_SMQ[%u]_CFG", schq);
-
- reg[k] = NIX_AF_MDQX_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_MDQX_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_PIR", schq);
-
- reg[k] = NIX_AF_MDQX_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_CIR", schq);
-
- reg[k] = NIX_AF_MDQX_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_MDQ[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SDP_LINK_CFG", schq);
-
- reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL4X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL4X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL4X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL4[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
-
- reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL3X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL3X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL3X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_PARENT(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_PARENT", schq);
-
- reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq, link);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL3_TL2[%u]_LINK[%u]_CFG", schq, link);
-
- reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL2X_PIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_PIR", schq);
-
- reg[k] = NIX_AF_TL2X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL2X_SHAPE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SHAPE", schq);
-
- reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL2[%u]_SW_XOFF", schq);
- break;
- case NIX_TXSCH_LVL_TL1:
-
- reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_TOPOLOGY", schq);
-
- reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_SCHEDULE", schq);
-
- reg[k] = NIX_AF_TL1X_CIR(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_CIR", schq);
-
- reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_SW_XOFF", schq);
-
- reg[k] = NIX_AF_TL1X_DROPPED_PACKETS(schq);
- snprintf(regstr[k++], NIX_REG_NAME_SZ,
- "NIX_AF_TL1[%u]_DROPPED_PACKETS", schq);
- break;
- default:
- break;
- }
-
- if (k > MAX_REGS_PER_MBOX_MSG) {
- nix_dump("\t!!!NIX TM Registers request overflow!!!");
- return 0;
- }
- return k;
-}
-
-/* Dump TM hierarchy and registers */
-void
-otx2_nix_tm_dump(struct otx2_eth_dev *dev)
-{
- char regstr[MAX_REGS_PER_MBOX_MSG * 2][NIX_REG_NAME_SZ];
- struct otx2_nix_tm_node *tm_node, *root_node, *parent;
- uint64_t reg[MAX_REGS_PER_MBOX_MSG * 2];
- struct nix_txschq_config *req;
- const char *lvlstr, *parent_lvlstr;
- struct nix_txschq_config *rsp;
- uint32_t schq, parent_schq;
- int hw_lvl, j, k, rc;
-
- nix_dump("===TM hierarchy and registers dump of %s===",
- dev->eth_dev->data->name);
-
- root_node = NULL;
-
- for (hw_lvl = 0; hw_lvl <= NIX_TXSCH_LVL_CNT; hw_lvl++) {
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != hw_lvl)
- continue;
-
- parent = tm_node->parent;
- if (hw_lvl == NIX_TXSCH_LVL_CNT) {
- lvlstr = "SQ";
- schq = tm_node->id;
- } else {
- lvlstr = nix_hwlvl2str(tm_node->hw_lvl);
- schq = tm_node->hw_id;
- }
-
- if (parent) {
- parent_schq = parent->hw_id;
- parent_lvlstr =
- nix_hwlvl2str(parent->hw_lvl);
- } else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
- parent_schq = otx2_nix_get_link(dev);
- parent_lvlstr = "LINK";
- } else {
- parent_schq = tm_node->parent_hw_id;
- parent_lvlstr =
- nix_hwlvl2str(tm_node->hw_lvl + 1);
- }
-
- nix_dump("%s_%d->%s_%d", lvlstr, schq,
- parent_lvlstr, parent_schq);
-
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- /* Need to dump TL1 when root is TL2 */
- if (tm_node->hw_lvl == dev->otx2_tm_root_lvl)
- root_node = tm_node;
-
- /* Dump registers only when HWRES is present */
- k = prepare_nix_tm_reg_dump(tm_node->hw_lvl, schq,
- otx2_nix_get_link(dev), reg,
- regstr);
- if (!k)
- continue;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->read = 1;
- req->lvl = tm_node->hw_lvl;
- req->num_regs = k;
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (!rc) {
- for (j = 0; j < k; j++)
- nix_dump("\t%s=0x%016"PRIx64,
- regstr[j], rsp->regval[j]);
- } else {
- nix_dump("\t!!!Failed to dump registers!!!");
- }
- }
- nix_dump("\n");
- }
-
- /* Dump TL1 node data when root level is TL2 */
- if (root_node && root_node->hw_lvl == NIX_TXSCH_LVL_TL2) {
- k = prepare_nix_tm_reg_dump(NIX_TXSCH_LVL_TL1,
- root_node->parent_hw_id,
- otx2_nix_get_link(dev),
- reg, regstr);
- if (!k)
- return;
-
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->read = 1;
- req->lvl = NIX_TXSCH_LVL_TL1;
- req->num_regs = k;
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- rc = otx2_mbox_process_msg(dev->mbox, (void **)&rsp);
- if (!rc) {
- for (j = 0; j < k; j++)
- nix_dump("\t%s=0x%016"PRIx64,
- regstr[j], rsp->regval[j]);
- } else {
- nix_dump("\t!!!Failed to dump registers!!!");
- }
- }
-
- otx2_nix_queues_ctx_dump(dev->eth_dev);
-}
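
The otx2_nix_dev_get_reg() removed above follows the usual two-pass contract of the ethdev .get_reg op: a first call with regs->data == NULL only reports the register count and width, and a second call fills the caller's buffer. A generic sketch of that contract ("mydev" names hypothetical, register reads stubbed out):

    /* Sketch of the two-pass .get_reg contract; not this driver's code. */
    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <rte_common.h>
    #include <ethdev_driver.h>

    static int
    mydev_get_reg(struct rte_eth_dev *eth_dev, struct rte_dev_reg_info *regs)
    {
        const uint32_t count = 16;   /* would be computed from the device */

        RTE_SET_USED(eth_dev);
        if (regs->data == NULL) {
            regs->length = count;    /* first pass: report size only */
            regs->width = 8;         /* 64-bit registers */
            return 0;
        }
        if (regs->length != 0 && regs->length != count)
            return -ENOTSUP;         /* caller sized the buffer wrong */
        /* second pass: read the registers into regs->data */
        memset(regs->data, 0, count * sizeof(uint64_t));
        return 0;
    }
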
diff --git a/drivers/net/octeontx2/otx2_ethdev_devargs.c b/drivers/net/octeontx2/otx2_ethdev_devargs.c
deleted file mode 100644
index 60bf6c3f5f..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_devargs.c
+++ /dev/null
@@ -1,215 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-#include <math.h>
-
-#include "otx2_ethdev.h"
-
-static int
-parse_flow_max_priority(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint16_t val;
-
- val = atoi(value);
-
- /* Limit the max priority to 32 */
- if (val < 1 || val > 32)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_flow_prealloc_size(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint16_t val;
-
- val = atoi(value);
-
- /* Limit the prealloc size to 32 */
- if (val < 1 || val > 32)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_reta_size(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- if (val <= RTE_ETH_RSS_RETA_SIZE_64)
- val = RTE_ETH_RSS_RETA_SIZE_64;
- else if (val > RTE_ETH_RSS_RETA_SIZE_64 && val <= RTE_ETH_RSS_RETA_SIZE_128)
- val = RTE_ETH_RSS_RETA_SIZE_128;
- else if (val > RTE_ETH_RSS_RETA_SIZE_128 && val <= RTE_ETH_RSS_RETA_SIZE_256)
- val = RTE_ETH_RSS_RETA_SIZE_256;
- else
- val = NIX_RSS_RETA_SIZE;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_ipsec_in_max_spi(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_flag(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
-
- *(uint16_t *)extra_args = atoi(value);
-
- return 0;
-}
-
-static int
-parse_sqb_count(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
- uint32_t val;
-
- val = atoi(value);
-
- if (val < NIX_MIN_SQB || val > NIX_MAX_SQB)
- return -EINVAL;
-
- *(uint16_t *)extra_args = val;
-
- return 0;
-}
-
-static int
-parse_switch_header_type(const char *key, const char *value, void *extra_args)
-{
- RTE_SET_USED(key);
-
- if (strcmp(value, "higig2") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_HIGIG;
-
- if (strcmp(value, "dsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_EDSA;
-
- if (strcmp(value, "chlen90b") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_CH_LEN_90B;
-
- if (strcmp(value, "chlen24b") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_CH_LEN_24B;
-
- if (strcmp(value, "exdsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_EXDSA;
-
- if (strcmp(value, "vlan_exdsa") == 0)
- *(uint16_t *)extra_args = OTX2_PRIV_FLAGS_VLAN_EXDSA;
-
- return 0;
-}
-
-#define OTX2_RSS_RETA_SIZE "reta_size"
-#define OTX2_IPSEC_IN_MAX_SPI "ipsec_in_max_spi"
-#define OTX2_SCL_ENABLE "scalar_enable"
-#define OTX2_MAX_SQB_COUNT "max_sqb_count"
-#define OTX2_FLOW_PREALLOC_SIZE "flow_prealloc_size"
-#define OTX2_FLOW_MAX_PRIORITY "flow_max_priority"
-#define OTX2_SWITCH_HEADER_TYPE "switch_header"
-#define OTX2_RSS_TAG_AS_XOR "tag_as_xor"
-#define OTX2_LOCK_RX_CTX "lock_rx_ctx"
-#define OTX2_LOCK_TX_CTX "lock_tx_ctx"
-
-int
-otx2_ethdev_parse_devargs(struct rte_devargs *devargs, struct otx2_eth_dev *dev)
-{
- uint16_t rss_size = NIX_RSS_RETA_SIZE;
- uint16_t sqb_count = NIX_MAX_SQB;
- uint16_t flow_prealloc_size = 8;
- uint16_t switch_header_type = 0;
- uint16_t flow_max_priority = 3;
- uint16_t ipsec_in_max_spi = 1;
- uint16_t rss_tag_as_xor = 0;
- uint16_t scalar_enable = 0;
- struct rte_kvargs *kvlist;
- uint16_t lock_rx_ctx = 0;
- uint16_t lock_tx_ctx = 0;
-
- if (devargs == NULL)
- goto null_devargs;
-
- kvlist = rte_kvargs_parse(devargs->args, NULL);
- if (kvlist == NULL)
- goto exit;
-
- rte_kvargs_process(kvlist, OTX2_RSS_RETA_SIZE,
- &parse_reta_size, &rss_size);
- rte_kvargs_process(kvlist, OTX2_IPSEC_IN_MAX_SPI,
- &parse_ipsec_in_max_spi, &ipsec_in_max_spi);
- rte_kvargs_process(kvlist, OTX2_SCL_ENABLE,
- &parse_flag, &scalar_enable);
- rte_kvargs_process(kvlist, OTX2_MAX_SQB_COUNT,
- &parse_sqb_count, &sqb_count);
- rte_kvargs_process(kvlist, OTX2_FLOW_PREALLOC_SIZE,
- &parse_flow_prealloc_size, &flow_prealloc_size);
- rte_kvargs_process(kvlist, OTX2_FLOW_MAX_PRIORITY,
- &parse_flow_max_priority, &flow_max_priority);
- rte_kvargs_process(kvlist, OTX2_SWITCH_HEADER_TYPE,
- &parse_switch_header_type, &switch_header_type);
- rte_kvargs_process(kvlist, OTX2_RSS_TAG_AS_XOR,
- &parse_flag, &rss_tag_as_xor);
- rte_kvargs_process(kvlist, OTX2_LOCK_RX_CTX,
- &parse_flag, &lock_rx_ctx);
- rte_kvargs_process(kvlist, OTX2_LOCK_TX_CTX,
- &parse_flag, &lock_tx_ctx);
- otx2_parse_common_devargs(kvlist);
- rte_kvargs_free(kvlist);
-
-null_devargs:
- dev->ipsec_in_max_spi = ipsec_in_max_spi;
- dev->scalar_ena = scalar_enable;
- dev->rss_tag_as_xor = rss_tag_as_xor;
- dev->max_sqb_count = sqb_count;
- dev->lock_rx_ctx = lock_rx_ctx;
- dev->lock_tx_ctx = lock_tx_ctx;
- dev->rss_info.rss_size = rss_size;
- dev->npc_flow.flow_prealloc_size = flow_prealloc_size;
- dev->npc_flow.flow_max_priority = flow_max_priority;
- dev->npc_flow.switch_header_type = switch_header_type;
- return 0;
-
-exit:
- return -EINVAL;
-}
-
-RTE_PMD_REGISTER_PARAM_STRING(OCTEONTX2_PMD,
- OTX2_RSS_RETA_SIZE "=<64|128|256>"
- OTX2_IPSEC_IN_MAX_SPI "=<1-65535>"
- OTX2_SCL_ENABLE "=1"
- OTX2_MAX_SQB_COUNT "=<8-512>"
- OTX2_FLOW_PREALLOC_SIZE "=<1-32>"
- OTX2_FLOW_MAX_PRIORITY "=<1-32>"
- OTX2_SWITCH_HEADER_TYPE "=<higig2|dsa|chlen90b|chlen24b>"
- OTX2_RSS_TAG_AS_XOR "=1"
- OTX2_NPA_LOCK_MASK "=<1-65535>"
- OTX2_LOCK_RX_CTX "=1"
- OTX2_LOCK_TX_CTX "=1");
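
The devargs file removed above is a direct use of the rte_kvargs API: parse the option string into key/value pairs, run a handler per key, then free the list. A self-contained sketch of the same pattern (the key name "max_queues" and the "mydev" names are hypothetical):

    #include <errno.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_common.h>
    #include <rte_kvargs.h>

    static int
    parse_u16(const char *key, const char *value, void *extra_args)
    {
        RTE_SET_USED(key);
        *(uint16_t *)extra_args = (uint16_t)atoi(value);
        return 0;
    }

    static int
    mydev_parse_devargs(const char *args, uint16_t *max_queues)
    {
        struct rte_kvargs *kvlist;

        *max_queues = 8;                        /* default when key absent */
        kvlist = rte_kvargs_parse(args, NULL);  /* NULL: accept any key */
        if (kvlist == NULL)
            return -EINVAL;
        rte_kvargs_process(kvlist, "max_queues", &parse_u16, max_queues);
        rte_kvargs_free(kvlist);
        return 0;
    }

As in the removed file, each handler validates and normalizes the raw string value; out-of-range values are rejected by returning a negative errno from the handler.
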
diff --git a/drivers/net/octeontx2/otx2_ethdev_irq.c b/drivers/net/octeontx2/otx2_ethdev_irq.c
deleted file mode 100644
index cc573bb2e8..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_irq.c
+++ /dev/null
@@ -1,493 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include <rte_bus_pci.h>
-#include <rte_malloc.h>
-
-#include "otx2_ethdev.h"
-
-static void
-nix_lf_err_irq(void *param)
-{
- struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_ERR_INT);
- if (intr == 0)
- return;
-
- otx2_err("Err_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_ERR_INT);
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-static int
-nix_lf_register_err_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_nix_err_intr_enb_dis(eth_dev, false);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, nix_lf_err_irq, eth_dev, vec);
- /* Enable all dev interrupt except for RQ_DISABLED */
- otx2_nix_err_intr_enb_dis(eth_dev, true);
-
- return rc;
-}
-
-static void
-nix_lf_unregister_err_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_ERR_INT;
-
- /* Clear err interrupt */
- otx2_nix_err_intr_enb_dis(eth_dev, false);
- otx2_unregister_irq(handle, nix_lf_err_irq, eth_dev, vec);
-}
-
-static void
-nix_lf_ras_irq(void *param)
-{
- struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_RAS);
- if (intr == 0)
- return;
-
- otx2_err("Ras_intr=0x%" PRIx64 " pf=%d, vf=%d", intr, dev->pf, dev->vf);
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_RAS);
-
- /* Dump registers to std out */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-static int
-nix_lf_register_ras_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, false);
- /* Set used interrupt vectors */
- rc = otx2_register_irq(handle, nix_lf_ras_irq, eth_dev, vec);
- /* Enable dev interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, true);
-
- return rc;
-}
-
-static void
-nix_lf_unregister_ras_irq(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec;
-
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_POISON;
-
- /* Clear err interrupt */
- otx2_nix_ras_intr_enb_dis(eth_dev, false);
- otx2_unregister_irq(handle, nix_lf_ras_irq, eth_dev, vec);
-}
-
-static inline uint8_t
-nix_lf_q_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t q,
- uint32_t off, uint64_t mask)
-{
- uint64_t reg, wdata;
- uint8_t qint;
-
- wdata = (uint64_t)q << 44;
- reg = otx2_atomic64_add_nosync(wdata, (int64_t *)(dev->base + off));
-
- if (reg & BIT_ULL(42) /* OP_ERR */) {
- otx2_err("Failed execute irq get off=0x%x", off);
- return 0;
- }
-
- qint = reg & 0xff;
- wdata &= mask;
- otx2_write64(wdata | qint, dev->base + off);
-
- return qint;
-}
-
-static inline uint8_t
-nix_lf_rq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t rq)
-{
- return nix_lf_q_irq_get_and_clear(dev, rq, NIX_LF_RQ_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-nix_lf_cq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t cq)
-{
- return nix_lf_q_irq_get_and_clear(dev, cq, NIX_LF_CQ_OP_INT, ~0xff00);
-}
-
-static inline uint8_t
-nix_lf_sq_irq_get_and_clear(struct otx2_eth_dev *dev, uint16_t sq)
-{
- return nix_lf_q_irq_get_and_clear(dev, sq, NIX_LF_SQ_OP_INT, ~0x1ff00);
-}
-
-static inline void
-nix_lf_sq_debug_reg(struct otx2_eth_dev *dev, uint32_t off)
-{
- uint64_t reg;
-
- reg = otx2_read64(dev->base + off);
- if (reg & BIT_ULL(44))
- otx2_err("SQ=%d err_code=0x%x",
- (int)((reg >> 8) & 0xfffff), (uint8_t)(reg & 0xff));
-}
-
-static void
-nix_lf_cq_irq(void *param)
-{
- struct otx2_qint *cint = (struct otx2_qint *)param;
- struct rte_eth_dev *eth_dev = cint->eth_dev;
- struct otx2_eth_dev *dev;
-
- dev = otx2_eth_pmd_priv(eth_dev);
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_INT(cint->qintx));
-}
-
-static void
-nix_lf_q_irq(void *param)
-{
- struct otx2_qint *qint = (struct otx2_qint *)param;
- struct rte_eth_dev *eth_dev = qint->eth_dev;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t irq, qintx = qint->qintx;
- int q, cq, rq, sq;
- uint64_t intr;
-
- intr = otx2_read64(dev->base + NIX_LF_QINTX_INT(qintx));
- if (intr == 0)
- return;
-
- otx2_err("Queue_intr=0x%" PRIx64 " qintx=%d pf=%d, vf=%d",
- intr, qintx, dev->pf, dev->vf);
-
- /* Handle RQ interrupts */
- for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
- rq = q % dev->qints;
- irq = nix_lf_rq_irq_get_and_clear(dev, rq);
-
- if (irq & BIT_ULL(NIX_RQINT_DROP))
- otx2_err("RQ=%d NIX_RQINT_DROP", rq);
-
- if (irq & BIT_ULL(NIX_RQINT_RED))
- otx2_err("RQ=%d NIX_RQINT_RED", rq);
- }
-
- /* Handle CQ interrupts */
- for (q = 0; q < eth_dev->data->nb_rx_queues; q++) {
- cq = q % dev->qints;
- irq = nix_lf_cq_irq_get_and_clear(dev, cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_DOOR_ERR))
- otx2_err("CQ=%d NIX_CQERRINT_DOOR_ERR", cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_WR_FULL))
- otx2_err("CQ=%d NIX_CQERRINT_WR_FULL", cq);
-
- if (irq & BIT_ULL(NIX_CQERRINT_CQE_FAULT))
- otx2_err("CQ=%d NIX_CQERRINT_CQE_FAULT", cq);
- }
-
- /* Handle SQ interrupts */
- for (q = 0; q < eth_dev->data->nb_tx_queues; q++) {
- sq = q % dev->qints;
- irq = nix_lf_sq_irq_get_and_clear(dev, sq);
-
- if (irq & BIT_ULL(NIX_SQINT_LMT_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_LMT_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SQ_OP_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_MNQ_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_MNQ_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_MNQ_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_SEND_ERR)) {
- otx2_err("SQ=%d NIX_SQINT_SEND_ERR", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
- }
- if (irq & BIT_ULL(NIX_SQINT_SQB_ALLOC_FAIL)) {
- otx2_err("SQ=%d NIX_SQINT_SQB_ALLOC_FAIL", sq);
- nix_lf_sq_debug_reg(dev, NIX_LF_SEND_ERR_DBG);
- }
- }
-
- /* Clear interrupt */
- otx2_write64(intr, dev->base + NIX_LF_QINTX_INT(qintx));
-
- /* Dump registers to stdout */
- otx2_nix_reg_dump(dev, NULL);
- otx2_nix_queues_ctx_dump(eth_dev);
-}
-
-int
-oxt2_nix_register_queue_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q, sqs, rqs, qs, rc = 0;
-
- /* Figure out max qintx required */
- rqs = RTE_MIN(dev->qints, eth_dev->data->nb_rx_queues);
- sqs = RTE_MIN(dev->qints, eth_dev->data->nb_tx_queues);
- qs = RTE_MAX(rqs, sqs);
-
- dev->configured_qints = qs;
-
- for (q = 0; q < qs; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
-
- dev->qints_mem[q].eth_dev = eth_dev;
- dev->qints_mem[q].qintx = q;
-
- /* Sync qints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, nix_lf_q_irq,
- &dev->qints_mem[q], vec);
- if (rc)
- break;
-
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
- otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
- /* Enable QINT interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1S(q));
- }
-
- return rc;
-}
-
-void
-oxt2_nix_unregister_queue_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q;
-
- for (q = 0; q < dev->configured_qints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_QINT_START + q;
-
- /* Clear QINT CNT */
- otx2_write64(0, dev->base + NIX_LF_QINTX_CNT(q));
- otx2_write64(0, dev->base + NIX_LF_QINTX_INT(q));
-
- /* Clear interrupt */
- otx2_write64(~0ull, dev->base + NIX_LF_QINTX_ENA_W1C(q));
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, nix_lf_q_irq,
- &dev->qints_mem[q], vec);
- }
-}
-
-int
-oxt2_nix_register_cq_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t rc = 0, vec, q;
-
- dev->configured_cints = RTE_MIN(dev->cints,
- eth_dev->data->nb_rx_queues);
-
- for (q = 0; q < dev->configured_cints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
-
- /* Clear CINT CNT */
- otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
-
- dev->cints_mem[q].eth_dev = eth_dev;
- dev->cints_mem[q].qintx = q;
-
- /* Sync cints_mem update */
- rte_smp_wmb();
-
- /* Register queue irq vector */
- rc = otx2_register_irq(handle, nix_lf_cq_irq,
- &dev->cints_mem[q], vec);
- if (rc) {
- otx2_err("Fail to register CQ irq, rc=%d", rc);
- return rc;
- }
-
- rc = rte_intr_vec_list_alloc(handle, "intr_vec",
- dev->configured_cints);
- if (rc) {
- otx2_err("Fail to allocate intr vec list, "
- "rc=%d", rc);
- return rc;
- }
- /* VFIO vector zero is reserved for the misc interrupt, so
- * adjust the vector accordingly. (b13bfab4cd)
- */
- if (rte_intr_vec_list_index_set(handle, q,
- RTE_INTR_VEC_RXTX_OFFSET + vec))
- return -1;
-
- /* Configure CQE interrupt coalescing parameters */
- otx2_write64(((CQ_CQE_THRESH_DEFAULT) |
- (CQ_CQE_THRESH_DEFAULT << 32) |
- (CQ_TIMER_THRESH_DEFAULT << 48)),
- dev->base + NIX_LF_CINTX_WAIT((q)));
-
- /* Keep the CQ interrupt disabled as the Rx interrupt
- * feature is enabled/disabled on demand.
- */
- }
-
- return rc;
-}
-
-void
-oxt2_nix_unregister_cq_irqs(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct rte_intr_handle *handle = pci_dev->intr_handle;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int vec, q;
-
- for (q = 0; q < dev->configured_cints; q++) {
- vec = dev->nix_msixoff + NIX_LF_INT_VEC_CINT_START + q;
-
- /* Clear CINT CNT */
- otx2_write64(0, dev->base + NIX_LF_CINTX_CNT(q));
-
- /* Clear interrupt */
- otx2_write64(BIT_ULL(0), dev->base + NIX_LF_CINTX_ENA_W1C(q));
-
- /* Unregister queue irq vector */
- otx2_unregister_irq(handle, nix_lf_cq_irq,
- &dev->cints_mem[q], vec);
- }
-}
-
-int
-otx2_nix_register_irqs(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
-
- if (dev->nix_msixoff == MSIX_VECTOR_INVALID) {
- otx2_err("Invalid NIXLF MSIX vector offset vector: 0x%x",
- dev->nix_msixoff);
- return -EINVAL;
- }
-
- /* Register lf err interrupt */
- rc = nix_lf_register_err_irq(eth_dev);
- /* Register RAS interrupt */
- rc |= nix_lf_register_ras_irq(eth_dev);
-
- return rc;
-}
-
-void
-otx2_nix_unregister_irqs(struct rte_eth_dev *eth_dev)
-{
- nix_lf_unregister_err_irq(eth_dev);
- nix_lf_unregister_ras_irq(eth_dev);
-}
-
-int
-otx2_nix_rx_queue_intr_enable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Enable CINT interrupt */
- otx2_write64(BIT_ULL(0), dev->base +
- NIX_LF_CINTX_ENA_W1S(rx_queue_id));
-
- return 0;
-}
-
-int
-otx2_nix_rx_queue_intr_disable(struct rte_eth_dev *eth_dev,
- uint16_t rx_queue_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Clear and disable CINT interrupt */
- otx2_write64(BIT_ULL(0), dev->base +
- NIX_LF_CINTX_ENA_W1C(rx_queue_id));
-
- return 0;
-}
-
-void
-otx2_nix_err_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* Enable all nix lf error interrupts except
- * RQ_DISABLED and CQ_DISABLED.
- */
- if (enb)
- otx2_write64(~(BIT_ULL(11) | BIT_ULL(24)),
- dev->base + NIX_LF_ERR_INT_ENA_W1S);
- else
- otx2_write64(~0ull, dev->base + NIX_LF_ERR_INT_ENA_W1C);
-}
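
The enable path above arms every error interrupt except bit 11 (RQ_DISABLED) and bit 24 (CQ_DISABLED), presumably because those fire during normal queue stop/start. A quick standalone check of the mask actually written (illustrative, not driver code):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* All 64 bits set except bit 11 (RQ_DISABLED) and bit 24
     * (CQ_DISABLED), matching the W1S write above.
     */
    int main(void)
    {
            uint64_t mask = ~((1ULL << 11) | (1ULL << 24));

            printf("ERR_INT_ENA_W1S mask = 0x%016" PRIx64 "\n", mask);
            return 0;   /* prints 0xfffffffffefff7ff */
    }
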
-
-void
-otx2_nix_ras_intr_enb_dis(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (enb)
- otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1S);
- else
- otx2_write64(~0ull, dev->base + NIX_LF_RAS_ENA_W1C);
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_ops.c b/drivers/net/octeontx2/otx2_ethdev_ops.c
deleted file mode 100644
index 48781514c3..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_ops.c
+++ /dev/null
@@ -1,589 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_ethdev.h>
-#include <rte_mbuf_pool_ops.h>
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_mtu_set(struct rte_eth_dev *eth_dev, uint16_t mtu)
-{
- uint32_t buffsz, frame_size = mtu + NIX_L2_OVERHEAD;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_frs_cfg *req;
- int rc;
-
- if (dev->configured && otx2_ethdev_is_ptp_en(dev))
- frame_size += NIX_TIMESYNC_RX_OFFSET;
-
- buffsz = data->min_rx_buf_size - RTE_PKTMBUF_HEADROOM;
-
- /* Refuse an MTU that requires scattered-packet support
- * when that feature has not been enabled beforehand.
- */
- if (data->dev_started && frame_size > buffsz &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER))
- return -EINVAL;
-
- /* Check <seg size> * <max_seg> >= max_frame */
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER) &&
- (frame_size > buffsz * NIX_RX_NB_SEG_MAX))
- return -EINVAL;
-
- req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
- req->update_smq = true;
- if (otx2_dev_is_sdp(dev))
- req->sdp_link = true;
- /* FRS HW config should exclude FCS but include NPC VTAG insert size */
- req->maxlen = frame_size - RTE_ETHER_CRC_LEN + NIX_MAX_VTAG_ACT_SIZE;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- /* Now just update Rx MAXLEN */
- req = otx2_mbox_alloc_msg_nix_set_hw_frs(mbox);
- req->maxlen = frame_size - RTE_ETHER_CRC_LEN;
- if (otx2_dev_is_sdp(dev))
- req->sdp_link = true;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- return rc;
-}
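
The two early checks above gate MTU changes on the Rx buffer size: a frame larger than one buffer needs scattered Rx (only enforced once the port is started), and even with scatter enabled the frame must fit in the per-packet segment budget. A standalone restatement, with illustrative stand-ins for the driver constants (these are not the real NIX_* values):

    #include <stdbool.h>
    #include <stdint.h>

    #define L2_OVERHEAD 26   /* eth hdr + CRC + 2 VLAN tags, for example */
    #define NB_SEG_MAX  6    /* stand-in for NIX_RX_NB_SEG_MAX */

    static bool
    mtu_is_acceptable(uint16_t mtu, uint32_t buffsz, bool scatter_en)
    {
            uint32_t frame = mtu + L2_OVERHEAD;

            if (frame > buffsz && !scatter_en)
                    return false;   /* would need scattered Rx */
            if (scatter_en && frame > buffsz * NB_SEG_MAX)
                    return false;   /* exceeds the segment budget */
            return true;
    }
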
-
-int
-otx2_nix_recalc_mtu(struct rte_eth_dev *eth_dev)
-{
- struct rte_eth_dev_data *data = eth_dev->data;
- struct otx2_eth_rxq *rxq;
- int rc;
-
- rxq = data->rx_queues[0];
-
- /* Setup scatter mode if needed by jumbo */
- otx2_nix_enable_mseg_on_jumbo(rxq);
-
- rc = otx2_nix_mtu_set(eth_dev, data->mtu);
- if (rc)
- otx2_err("Failed to set default MTU size %d", rc);
-
- return rc;
-}
-
-static void
-nix_cgx_promisc_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return;
-
- if (en)
- otx2_mbox_alloc_msg_cgx_promisc_enable(mbox);
- else
- otx2_mbox_alloc_msg_cgx_promisc_disable(mbox);
-
- otx2_mbox_process(mbox);
-}
-
-void
-otx2_nix_promisc_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rx_mode *req;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
-
- if (en)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
-
- otx2_mbox_process(mbox);
- eth_dev->data->promiscuous = en;
- otx2_nix_vlan_update_promisc(eth_dev, en);
-}
-
-int
-otx2_nix_promisc_enable(struct rte_eth_dev *eth_dev)
-{
- otx2_nix_promisc_config(eth_dev, 1);
- nix_cgx_promisc_config(eth_dev, 1);
-
- return 0;
-}
-
-int
-otx2_nix_promisc_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- otx2_nix_promisc_config(eth_dev, dev->dmac_filter_enable);
- nix_cgx_promisc_config(eth_dev, 0);
- dev->dmac_filter_enable = false;
-
- return 0;
-}
-
-static void
-nix_allmulticast_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rx_mode *req;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- req = otx2_mbox_alloc_msg_nix_set_rx_mode(mbox);
-
- if (en)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_ALLMULTI;
- else if (eth_dev->data->promiscuous)
- req->mode = NIX_RX_MODE_UCAST | NIX_RX_MODE_PROMISC;
-
- otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_allmulticast_enable(struct rte_eth_dev *eth_dev)
-{
- nix_allmulticast_config(eth_dev, 1);
-
- return 0;
-}
-
-int
-otx2_nix_allmulticast_disable(struct rte_eth_dev *eth_dev)
-{
- nix_allmulticast_config(eth_dev, 0);
-
- return 0;
-}
-
-void
-otx2_nix_rxq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_rxq_info *qinfo)
-{
- struct otx2_eth_rxq *rxq;
-
- rxq = eth_dev->data->rx_queues[queue_id];
-
- qinfo->mp = rxq->pool;
- qinfo->scattered_rx = eth_dev->data->scattered_rx;
- qinfo->nb_desc = rxq->qconf.nb_desc;
-
- qinfo->conf.rx_free_thresh = 0;
- qinfo->conf.rx_drop_en = 0;
- qinfo->conf.rx_deferred_start = 0;
- qinfo->conf.offloads = rxq->offloads;
-}
-
-void
-otx2_nix_txq_info_get(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- struct rte_eth_txq_info *qinfo)
-{
- struct otx2_eth_txq *txq;
-
- txq = eth_dev->data->tx_queues[queue_id];
-
- qinfo->nb_desc = txq->qconf.nb_desc;
-
- qinfo->conf.tx_thresh.pthresh = 0;
- qinfo->conf.tx_thresh.hthresh = 0;
- qinfo->conf.tx_thresh.wthresh = 0;
-
- qinfo->conf.tx_free_thresh = 0;
- qinfo->conf.tx_rs_thresh = 0;
- qinfo->conf.offloads = txq->offloads;
- qinfo->conf.tx_deferred_start = 0;
-}
-
-int
-otx2_rx_burst_mode_get(struct rte_eth_dev *eth_dev,
- __rte_unused uint16_t queue_id,
- struct rte_eth_burst_mode *mode)
-{
- ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct burst_info {
- uint16_t flags;
- const char *output;
- } rx_offload_map[] = {
- {NIX_RX_OFFLOAD_RSS_F, "RSS,"},
- {NIX_RX_OFFLOAD_PTYPE_F, " Ptype,"},
- {NIX_RX_OFFLOAD_CHECKSUM_F, " Checksum,"},
- {NIX_RX_OFFLOAD_VLAN_STRIP_F, " VLAN Strip,"},
- {NIX_RX_OFFLOAD_MARK_UPDATE_F, " Mark Update,"},
- {NIX_RX_OFFLOAD_TSTAMP_F, " Timestamp,"},
- {NIX_RX_MULTI_SEG_F, " Scattered,"}
- };
- static const char *const burst_mode[] = {"Vector Neon, Rx Offloads:",
- "Scalar, Rx Offloads:"
- };
- uint32_t i;
-
- /* Update burst mode info */
- rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
-
- /* Update Rx offload info */
- for (i = 0; i < RTE_DIM(rx_offload_map); i++) {
- if (dev->rx_offload_flags & rx_offload_map[i].flags) {
- rc = rte_strscpy(mode->info + bytes,
- rx_offload_map[i].output,
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
- }
- }
-
-done:
- return 0;
-}
-
-int
-otx2_tx_burst_mode_get(struct rte_eth_dev *eth_dev,
- __rte_unused uint16_t queue_id,
- struct rte_eth_burst_mode *mode)
-{
- ssize_t bytes = 0, str_size = RTE_ETH_BURST_MODE_INFO_SIZE, rc;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- const struct burst_info {
- uint16_t flags;
- const char *output;
- } tx_offload_map[] = {
- {NIX_TX_OFFLOAD_L3_L4_CSUM_F, " Inner L3/L4 csum,"},
- {NIX_TX_OFFLOAD_OL3_OL4_CSUM_F, " Outer L3/L4 csum,"},
- {NIX_TX_OFFLOAD_VLAN_QINQ_F, " VLAN Insertion,"},
- {NIX_TX_OFFLOAD_MBUF_NOFF_F, " MBUF free disable,"},
- {NIX_TX_OFFLOAD_TSTAMP_F, " Timestamp,"},
- {NIX_TX_OFFLOAD_TSO_F, " TSO,"},
- {NIX_TX_MULTI_SEG_F, " Scattered,"}
- };
- static const char *const burst_mode[] = {"Vector Neon, Tx Offloads:",
- "Scalar, Tx Offloads:"
- };
- uint32_t i;
-
- /* Update burst mode info */
- rc = rte_strscpy(mode->info + bytes, burst_mode[dev->scalar_ena],
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
-
- /* Update Tx offload info */
- for (i = 0; i < RTE_DIM(tx_offload_map); i++) {
- if (dev->tx_offload_flags & tx_offload_map[i].flags) {
- rc = rte_strscpy(mode->info + bytes,
- tx_offload_map[i].output,
- str_size - bytes);
- if (rc < 0)
- goto done;
-
- bytes += rc;
- }
- }
-
-done:
- return 0;
-}
-
-static void
-nix_rx_head_tail_get(struct otx2_eth_dev *dev,
- uint32_t *head, uint32_t *tail, uint16_t queue_idx)
-{
- uint64_t reg, val;
-
- if (head == NULL || tail == NULL)
- return;
-
- reg = (((uint64_t)queue_idx) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)
- (dev->base + NIX_LF_CQ_OP_STATUS));
- if (val & (OP_ERR | CQ_ERR))
- val = 0;
-
- *tail = (uint32_t)(val & 0xFFFFF);
- *head = (uint32_t)((val >> 20) & 0xFFFFF);
-}
-
-uint32_t
-otx2_nix_rx_queue_count(void *rx_queue)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(rxq->eth_dev);
- uint32_t head, tail;
-
- nix_rx_head_tail_get(dev, &head, &tail, rxq->rq);
- return (tail - head) % rxq->qlen;
-}
-
-static inline int
-nix_offset_has_packet(uint32_t head, uint32_t tail, uint16_t offset)
-{
- /* Check whether the given offset (queue index) holds a packet filled by HW */
- if (tail > head && offset <= tail && offset >= head)
- return 1;
- /* Wrap around case */
- if (head > tail && (offset >= head || offset <= tail))
- return 1;
-
- return 0;
-}
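
The wrap-around handling above is the usual ring test: when tail has wrapped past the end of the ring, the occupied region is split into [head, end) and [0, tail]. A few sanity checks on a mirror of the same predicate (illustrative; head == tail is treated as empty, as in the code above):

    #include <assert.h>
    #include <stdint.h>

    /* Mirror of nix_offset_has_packet() for self-testing. */
    static int
    offset_has_packet(uint32_t head, uint32_t tail, uint16_t off)
    {
            if (tail > head)
                    return off >= head && off <= tail;
            if (head > tail)
                    return off >= head || off <= tail;
            return 0;
    }

    int main(void)
    {
            assert(offset_has_packet(2, 5, 3));   /* inside [head, tail] */
            assert(!offset_has_packet(2, 5, 7));  /* beyond tail         */
            assert(offset_has_packet(6, 1, 7));   /* wrapped: past head  */
            assert(offset_has_packet(6, 1, 0));   /* wrapped: before tail */
            assert(!offset_has_packet(6, 1, 3));  /* wrapped: in the gap */
            return 0;
    }
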
-
-int
-otx2_nix_rx_descriptor_status(void *rx_queue, uint16_t offset)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- uint32_t head, tail;
-
- if (rxq->qlen <= offset)
- return -EINVAL;
-
- nix_rx_head_tail_get(otx2_eth_pmd_priv(rxq->eth_dev),
- &head, &tail, rxq->rq);
-
- if (nix_offset_has_packet(head, tail, offset))
- return RTE_ETH_RX_DESC_DONE;
- else
- return RTE_ETH_RX_DESC_AVAIL;
-}
-
-static void
-nix_tx_head_tail_get(struct otx2_eth_dev *dev,
- uint32_t *head, uint32_t *tail, uint16_t queue_idx)
-{
- uint64_t reg, val;
-
- if (head == NULL || tail == NULL)
- return;
-
- reg = (((uint64_t)queue_idx) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)
- (dev->base + NIX_LF_SQ_OP_STATUS));
- if (val & OP_ERR)
- val = 0;
-
- *tail = (uint32_t)((val >> 28) & 0x3F);
- *head = (uint32_t)((val >> 20) & 0x3F);
-}
-
-int
-otx2_nix_tx_descriptor_status(void *tx_queue, uint16_t offset)
-{
- struct otx2_eth_txq *txq = tx_queue;
- uint32_t head, tail;
-
- if (txq->qconf.nb_desc <= offset)
- return -EINVAL;
-
- nix_tx_head_tail_get(txq->dev, &head, &tail, txq->sq);
-
- if (nix_offset_has_packet(head, tail, offset))
- return RTE_ETH_TX_DESC_DONE;
- else
- return RTE_ETH_TX_DESC_FULL;
-}
-
-/* It is a NOP for octeontx2 as HW frees the buffer on xmit */
-int
-otx2_nix_tx_done_cleanup(void *txq, uint32_t free_cnt)
-{
- RTE_SET_USED(txq);
- RTE_SET_USED(free_cnt);
-
- return 0;
-}
-
-int
-otx2_nix_fw_version_get(struct rte_eth_dev *eth_dev, char *fw_version,
- size_t fw_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc = (int)fw_size;
-
- if (fw_size > sizeof(dev->mkex_pfl_name))
- rc = sizeof(dev->mkex_pfl_name);
-
- rc = strlcpy(fw_version, (char *)dev->mkex_pfl_name, rc);
-
- rc += 1; /* Add the size of '\0' */
- if (fw_size < (size_t)rc)
- return rc;
-
- return 0;
-}
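
The return convention above follows the ethdev API: 0 on success, or the required buffer size (including the terminating NUL) when the caller's buffer is too small. The typical caller pattern, sketched (error handling elided):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Probe once; a positive return value is the size the driver
     * needs, so the caller can retry with a larger buffer.
     */
    static void
    print_fw_version(uint16_t port_id)
    {
            char buf[16];
            int rc = rte_eth_dev_fw_version_get(port_id, buf, sizeof(buf));

            if (rc == 0)
                    printf("fw: %s\n", buf);
            else if (rc > 0)
                    printf("need a %d-byte buffer\n", rc);
    }
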
-
-int
-otx2_nix_pool_ops_supported(struct rte_eth_dev *eth_dev, const char *pool)
-{
- RTE_SET_USED(eth_dev);
-
- if (!strcmp(pool, rte_mbuf_platform_mempool_ops()))
- return 0;
-
- return -ENOTSUP;
-}
-
-int
-otx2_nix_dev_flow_ops_get(struct rte_eth_dev *eth_dev __rte_unused,
- const struct rte_flow_ops **ops)
-{
- *ops = &otx2_flow_ops;
- return 0;
-}
-
-static struct cgx_fw_data *
-nix_get_fwdata(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_fw_data *rsp = NULL;
- int rc;
-
- otx2_mbox_alloc_msg_cgx_get_aux_link_info(mbox);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get fw data: %d", rc);
- return NULL;
- }
-
- return rsp;
-}
-
-int
-otx2_nix_get_module_info(struct rte_eth_dev *eth_dev,
- struct rte_eth_dev_module_info *modinfo)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_fw_data *rsp;
-
- rsp = nix_get_fwdata(dev);
- if (rsp == NULL)
- return -EIO;
-
- modinfo->type = rsp->fwdata.sfp_eeprom.sff_id;
- modinfo->eeprom_len = SFP_EEPROM_SIZE;
-
- return 0;
-}
-
-int
-otx2_nix_get_module_eeprom(struct rte_eth_dev *eth_dev,
- struct rte_dev_eeprom_info *info)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_fw_data *rsp;
-
- if (info->offset + info->length > SFP_EEPROM_SIZE)
- return -EINVAL;
-
- rsp = nix_get_fwdata(dev);
- if (rsp == NULL)
- return -EIO;
-
- otx2_mbox_memcpy(info->data, rsp->fwdata.sfp_eeprom.buf + info->offset,
- info->length);
-
- return 0;
-}
-
-int
-otx2_nix_info_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *devinfo)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- devinfo->min_rx_bufsize = NIX_MIN_FRS;
- devinfo->max_rx_pktlen = NIX_MAX_FRS;
- devinfo->max_rx_queues = RTE_MAX_QUEUES_PER_PORT;
- devinfo->max_tx_queues = RTE_MAX_QUEUES_PER_PORT;
- devinfo->max_mac_addrs = dev->max_mac_entries;
- devinfo->max_vfs = pci_dev->max_vfs;
- devinfo->max_mtu = devinfo->max_rx_pktlen - NIX_L2_OVERHEAD;
- devinfo->min_mtu = devinfo->min_rx_bufsize - NIX_L2_OVERHEAD;
- if (dev->configured && otx2_ethdev_is_ptp_en(dev)) {
- devinfo->max_mtu -= NIX_TIMESYNC_RX_OFFSET;
- devinfo->min_mtu -= NIX_TIMESYNC_RX_OFFSET;
- devinfo->max_rx_pktlen -= NIX_TIMESYNC_RX_OFFSET;
- }
-
- devinfo->rx_offload_capa = dev->rx_offload_capa;
- devinfo->tx_offload_capa = dev->tx_offload_capa;
- devinfo->rx_queue_offload_capa = 0;
- devinfo->tx_queue_offload_capa = 0;
-
- devinfo->reta_size = dev->rss_info.rss_size;
- devinfo->hash_key_size = NIX_HASH_KEY_SIZE;
- devinfo->flow_type_rss_offloads = NIX_RSS_OFFLOAD;
-
- devinfo->default_rxconf = (struct rte_eth_rxconf) {
- .rx_drop_en = 0,
- .offloads = 0,
- };
-
- devinfo->default_txconf = (struct rte_eth_txconf) {
- .offloads = 0,
- };
-
- devinfo->default_rxportconf = (struct rte_eth_dev_portconf) {
- .ring_size = NIX_RX_DEFAULT_RING_SZ,
- };
-
- devinfo->rx_desc_lim = (struct rte_eth_desc_lim) {
- .nb_max = UINT16_MAX,
- .nb_min = NIX_RX_MIN_DESC,
- .nb_align = NIX_RX_MIN_DESC_ALIGN,
- .nb_seg_max = NIX_RX_NB_SEG_MAX,
- .nb_mtu_seg_max = NIX_RX_NB_SEG_MAX,
- };
- devinfo->rx_desc_lim.nb_max =
- RTE_ALIGN_MUL_FLOOR(devinfo->rx_desc_lim.nb_max,
- NIX_RX_MIN_DESC_ALIGN);
-
- devinfo->tx_desc_lim = (struct rte_eth_desc_lim) {
- .nb_max = UINT16_MAX,
- .nb_min = 1,
- .nb_align = 1,
- .nb_seg_max = NIX_TX_NB_SEG_MAX,
- .nb_mtu_seg_max = NIX_TX_NB_SEG_MAX,
- };
-
- /* Auto negotiation disabled */
- devinfo->speed_capa = RTE_ETH_LINK_SPEED_FIXED;
- if (!otx2_dev_is_vf_or_sdp(dev) && !otx2_dev_is_lbk(dev)) {
- devinfo->speed_capa |= RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
- RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G;
-
- /* 50G and 100G are supported on board version C0
- * and above.
- */
- if (!otx2_dev_is_Ax(dev))
- devinfo->speed_capa |= RTE_ETH_LINK_SPEED_50G |
- RTE_ETH_LINK_SPEED_100G;
- }
-
- devinfo->dev_capa = RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP |
- RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP;
- devinfo->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
-
- return 0;
-}
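
One detail worth calling out in the limits above: nb_max is rounded down so it stays a multiple of the Rx descriptor alignment. RTE_ALIGN_MUL_FLOOR(v, mul) rounds v down to the nearest multiple of mul; for example, with a hypothetical alignment of 16 (the real value is NIX_RX_MIN_DESC_ALIGN):

    #include <stdint.h>
    #include <stdio.h>

    /* Local restatement of RTE_ALIGN_MUL_FLOOR for illustration. */
    #define ALIGN_MUL_FLOOR(n, a) ((n) / (a) * (a))

    int main(void)
    {
            printf("%u\n", ALIGN_MUL_FLOOR(UINT16_MAX, 16)); /* 65520 */
            return 0;
    }
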
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.c b/drivers/net/octeontx2/otx2_ethdev_sec.c
deleted file mode 100644
index 4d40184de4..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec.c
+++ /dev/null
@@ -1,923 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#include <rte_cryptodev.h>
-#include <rte_esp.h>
-#include <rte_ethdev.h>
-#include <rte_eventdev.h>
-#include <rte_ip.h>
-#include <rte_malloc.h>
-#include <rte_memzone.h>
-#include <rte_security.h>
-#include <rte_security_driver.h>
-#include <rte_udp.h>
-
-#include "otx2_common.h"
-#include "otx2_cryptodev_qp.h"
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_ipsec_fp.h"
-#include "otx2_sec_idev.h"
-#include "otx2_security.h"
-
-#define ERR_STR_SZ 256
-
-struct eth_sec_tag_const {
- RTE_STD_C11
- union {
- struct {
- uint32_t rsvd_11_0 : 12;
- uint32_t port : 8;
- uint32_t event_type : 4;
- uint32_t rsvd_31_24 : 8;
- };
- uint32_t u32;
- };
-};
-
-static struct rte_cryptodev_capabilities otx2_eth_sec_crypto_caps[] = {
- { /* AES GCM */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD,
- {.aead = {
- .algo = RTE_CRYPTO_AEAD_AES_GCM,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .digest_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- },
- .aad_size = {
- .min = 8,
- .max = 12,
- .increment = 4
- },
- .iv_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* AES CBC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
- {.cipher = {
- .algo = RTE_CRYPTO_CIPHER_AES_CBC,
- .block_size = 16,
- .key_size = {
- .min = 16,
- .max = 32,
- .increment = 8
- },
- .iv_size = {
- .min = 16,
- .max = 16,
- .increment = 0
- }
- }, }
- }, }
- },
- { /* SHA1 HMAC */
- .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
- {.sym = {
- .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
- {.auth = {
- .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
- .block_size = 64,
- .key_size = {
- .min = 20,
- .max = 64,
- .increment = 1
- },
- .digest_size = {
- .min = 12,
- .max = 12,
- .increment = 0
- },
- }, }
- }, }
- },
- RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST()
-};
-
-static const struct rte_security_capability otx2_eth_sec_capabilities[] = {
- { /* IPsec Inline Protocol ESP Tunnel Ingress */
- .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_INGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_eth_sec_crypto_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- { /* IPsec Inline Protocol ESP Tunnel Egress */
- .action = RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL,
- .protocol = RTE_SECURITY_PROTOCOL_IPSEC,
- .ipsec = {
- .proto = RTE_SECURITY_IPSEC_SA_PROTO_ESP,
- .mode = RTE_SECURITY_IPSEC_SA_MODE_TUNNEL,
- .direction = RTE_SECURITY_IPSEC_SA_DIR_EGRESS,
- .options = { 0 }
- },
- .crypto_capabilities = otx2_eth_sec_crypto_caps,
- .ol_flags = RTE_SECURITY_TX_OLOAD_NEED_MDATA
- },
- {
- .action = RTE_SECURITY_ACTION_TYPE_NONE
- }
-};
-
-static void
-lookup_mem_sa_tbl_clear(struct rte_eth_dev *eth_dev)
-{
- static const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- uint16_t port = eth_dev->data->port_id;
- const struct rte_memzone *mz;
- uint64_t **sa_tbl;
- uint8_t *mem;
-
- mz = rte_memzone_lookup(name);
- if (mz == NULL)
- return;
-
- mem = mz->addr;
-
- sa_tbl = (uint64_t **)RTE_PTR_ADD(mem, OTX2_NIX_SA_TBL_START);
- if (sa_tbl[port] == NULL)
- return;
-
- rte_free(sa_tbl[port]);
- sa_tbl[port] = NULL;
-}
-
-static int
-lookup_mem_sa_index_update(struct rte_eth_dev *eth_dev, int spi, void *sa,
- char *err_str)
-{
- static const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- const struct rte_memzone *mz;
- uint64_t **sa_tbl;
- uint8_t *mem;
-
- mz = rte_memzone_lookup(name);
- if (mz == NULL) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not find fastpath lookup table");
- return -EINVAL;
- }
-
- mem = mz->addr;
-
- sa_tbl = (uint64_t **)RTE_PTR_ADD(mem, OTX2_NIX_SA_TBL_START);
-
- if (sa_tbl[port] == NULL) {
- sa_tbl[port] = rte_malloc(NULL, dev->ipsec_in_max_spi *
- sizeof(uint64_t), 0);
- }
-
- sa_tbl[port][spi] = (uint64_t)sa;
-
- return 0;
-}
-
-static inline void
-in_sa_mz_name_get(char *name, int size, uint16_t port)
-{
- snprintf(name, size, "otx2_ipsec_in_sadb_%u", port);
-}
-
-static struct otx2_ipsec_fp_in_sa *
-in_sa_get(uint16_t port, int sa_index)
-{
- char name[RTE_MEMZONE_NAMESIZE];
- struct otx2_ipsec_fp_in_sa *sa;
- const struct rte_memzone *mz;
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_lookup(name);
- if (mz == NULL) {
- otx2_err("Could not get the memzone reserved for IN SA DB");
- return NULL;
- }
-
- sa = mz->addr;
-
- return sa + sa_index;
-}
-
-static int
-ipsec_sa_const_set(struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *xform,
- struct otx2_sec_session_ipsec_ip *sess)
-{
- struct rte_crypto_sym_xform *cipher_xform, *auth_xform;
-
- sess->partial_len = sizeof(struct rte_ipv4_hdr);
-
- if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_ESP) {
- sess->partial_len += sizeof(struct rte_esp_hdr);
- sess->roundup_len = sizeof(struct rte_esp_tail);
- } else if (ipsec->proto == RTE_SECURITY_IPSEC_SA_PROTO_AH) {
- sess->partial_len += OTX2_SEC_AH_HDR_LEN;
- } else {
- return -EINVAL;
- }
-
- if (ipsec->options.udp_encap)
- sess->partial_len += sizeof(struct rte_udp_hdr);
-
- if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM) {
- sess->partial_len += OTX2_SEC_AES_GCM_IV_LEN;
- sess->partial_len += OTX2_SEC_AES_GCM_MAC_LEN;
- sess->roundup_byte = OTX2_SEC_AES_GCM_ROUNDUP_BYTE_LEN;
- }
- return 0;
- }
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_EGRESS) {
- cipher_xform = xform;
- auth_xform = xform->next;
- } else if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- auth_xform = xform;
- cipher_xform = xform->next;
- } else {
- return -EINVAL;
- }
- if (cipher_xform->cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC) {
- sess->partial_len += OTX2_SEC_AES_CBC_IV_LEN;
- sess->roundup_byte = OTX2_SEC_AES_CBC_ROUNDUP_BYTE_LEN;
- } else {
- return -EINVAL;
- }
-
- if (auth_xform->auth.algo == RTE_CRYPTO_AUTH_SHA1_HMAC)
- sess->partial_len += OTX2_SEC_SHA1_HMAC_LEN;
- else
- return -EINVAL;
-
- return 0;
-}
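
The constants accumulated above let the transmit path size the output packet without re-inspecting the SA: total length = partial_len + ALIGN_CEIL(payload + roundup_len, roundup_byte) (see otx2_ipsec_fp_out_rlen_get() in otx2_ethdev_sec_tx.h further down). A worked restatement; the concrete numbers below assume an AES-GCM IPv4 tunnel SA without UDP encap, and the exact OTX2_SEC_AES_GCM_* values live in the driver headers:

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t
    esp_out_len(uint32_t plen, uint32_t partial_len,
                uint32_t roundup_len, uint32_t roundup_byte)
    {
            uint32_t enc = plen + roundup_len;

            /* round the encapsulated payload up to the pad boundary */
            enc = (enc + roundup_byte - 1) / roundup_byte * roundup_byte;
            return partial_len + enc;
    }

    int main(void)
    {
            /* 1400-byte payload, 52 bytes of fixed overhead (outer IPv4 +
             * ESP header + IV + ICV), 2-byte ESP tail, 4-byte alignment:
             * 52 + 1404 = 1456 bytes on the wire.
             */
            printf("%u\n", esp_out_len(1400, 52, 2, 4));
            return 0;
    }
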
-
-static int
-hmac_init(struct otx2_ipsec_fp_sa_ctl *ctl, struct otx2_cpt_qp *qp,
- const uint8_t *auth_key, int len, uint8_t *hmac_key)
-{
- struct inst_data {
- struct otx2_cpt_res cpt_res;
- uint8_t buffer[64];
- } *md;
-
- volatile struct otx2_cpt_res *res;
- uint64_t timeout, lmt_status;
- struct otx2_cpt_inst_s inst;
- rte_iova_t md_iova;
- int ret;
-
- memset(&inst, 0, sizeof(struct otx2_cpt_inst_s));
-
- md = rte_zmalloc(NULL, sizeof(struct inst_data), OTX2_CPT_RES_ALIGN);
- if (md == NULL)
- return -ENOMEM;
-
- memcpy(md->buffer, auth_key, len);
-
- md_iova = rte_malloc_virt2iova(md);
- if (md_iova == RTE_BAD_IOVA) {
- ret = -EINVAL;
- goto free_md;
- }
-
- inst.res_addr = md_iova + offsetof(struct inst_data, cpt_res);
- inst.opcode = OTX2_CPT_OP_WRITE_HMAC_IPAD_OPAD;
- inst.param2 = ctl->auth_type;
- inst.dlen = len;
- inst.dptr = md_iova + offsetof(struct inst_data, buffer);
- inst.rptr = inst.dptr;
- inst.egrp = OTX2_CPT_EGRP_INLINE_IPSEC;
-
- md->cpt_res.compcode = 0;
- md->cpt_res.uc_compcode = 0xff;
-
- timeout = rte_get_timer_cycles() + 5 * rte_get_timer_hz();
-
- rte_io_wmb();
-
- do {
- otx2_lmt_mov(qp->lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(qp->lf_nq_reg);
- } while (lmt_status == 0);
-
- res = (volatile struct otx2_cpt_res *)&md->cpt_res;
-
- /* Wait until instruction completes or times out */
- while (res->uc_compcode == 0xff) {
- if (rte_get_timer_cycles() > timeout)
- break;
- }
-
- if (res->u16[0] != OTX2_SEC_COMP_GOOD) {
- ret = -EIO;
- goto free_md;
- }
-
- /* Retrieve the ipad and opad from rptr */
- memcpy(hmac_key, md->buffer, 48);
-
- ret = 0;
-
-free_md:
- rte_free(md);
- return ret;
-}
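
hmac_init() above uses a bounded busy-wait: the CPT engine overwrites the 0xff sentinel in the result word when it finishes, and the loop gives up after a cycle deadline. The idiom in isolation (get_cycles() is a stand-in for rte_get_timer_cycles()):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    /* Stand-in for rte_get_timer_cycles(), for illustration only. */
    static uint64_t get_cycles(void) { return (uint64_t)clock(); }

    static bool
    poll_until_done(volatile uint8_t *uc_compcode, uint64_t deadline)
    {
            while (*uc_compcode == 0xff) {
                    if (get_cycles() > deadline)
                            return false;   /* timed out; result invalid */
            }
            return true;
    }
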
-
-static int
-eth_sec_ipsec_out_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_sec_session_ipsec_ip *sess;
- uint16_t port = eth_dev->data->port_id;
- int cipher_key_len, auth_key_len, ret;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_ipsec_fp_sa_ctl *ctl;
- struct otx2_ipsec_fp_out_sa *sa;
- struct otx2_sec_session *priv;
- struct otx2_cpt_inst_s inst;
- struct otx2_cpt_qp *qp;
-
- priv = get_sec_session_private_data(sec_sess);
- priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_EGRESS;
- sess = &priv->ipsec.ip;
-
- sa = &sess->out_sa;
- ctl = &sa->ctl;
- if (ctl->valid) {
- otx2_err("SA already registered");
- return -EINVAL;
- }
-
- memset(sess, 0, sizeof(struct otx2_sec_session_ipsec_ip));
-
- sess->seq = 1;
-
- ret = ipsec_sa_const_set(ipsec, crypto_xform, sess);
- if (ret < 0)
- return ret;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD)
- memcpy(sa->nonce, &ipsec->salt, 4);
-
- if (ipsec->options.udp_encap == 1) {
- sa->udp_src = 4500;
- sa->udp_dst = 4500;
- }
-
- if (ipsec->mode == RTE_SECURITY_IPSEC_SA_MODE_TUNNEL) {
- /* Start ip id from 1 */
- sess->ip_id = 1;
-
- if (ipsec->tunnel.type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- memcpy(&sa->ip_src, &ipsec->tunnel.ipv4.src_ip,
- sizeof(struct in_addr));
- memcpy(&sa->ip_dst, &ipsec->tunnel.ipv4.dst_ip,
- sizeof(struct in_addr));
- } else {
- return -EINVAL;
- }
- } else {
- return -EINVAL;
- }
-
- cipher_xform = crypto_xform;
- auth_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
- auth_key = NULL;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
- }
-
- if (cipher_key_len != 0)
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- else
- return -EINVAL;
-
- /* Determine word 7 of CPT instruction */
- inst.u64[7] = 0;
- inst.egrp = OTX2_CPT_EGRP_INLINE_IPSEC;
- inst.cptr = rte_mempool_virt2iova(sa);
- sess->inst_w7 = inst.u64[7];
-
- /* Get CPT QP to be used for this SA */
- ret = otx2_sec_idev_tx_cpt_qp_get(port, &qp);
- if (ret)
- return ret;
-
- sess->qp = qp;
-
- sess->cpt_lmtline = qp->lmtline;
- sess->cpt_nq_reg = qp->lf_nq_reg;
-
- /* Populate control word */
- ret = ipsec_fp_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret)
- goto cpt_put;
-
- if (auth_key_len && auth_key) {
- ret = hmac_init(ctl, qp, auth_key, auth_key_len, sa->hmac_key);
- if (ret)
- goto cpt_put;
- }
-
- rte_io_wmb();
- ctl->valid = 1;
-
- return 0;
-cpt_put:
- otx2_sec_idev_tx_cpt_qp_put(sess->qp);
- return ret;
-}
-
-static int
-eth_sec_ipsec_in_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sec_sess)
-{
- struct rte_crypto_sym_xform *auth_xform, *cipher_xform;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_sec_session_ipsec_ip *sess;
- uint16_t port = eth_dev->data->port_id;
- int cipher_key_len, auth_key_len, ret;
- const uint8_t *cipher_key, *auth_key;
- struct otx2_ipsec_fp_sa_ctl *ctl;
- struct otx2_ipsec_fp_in_sa *sa;
- struct otx2_sec_session *priv;
- char err_str[ERR_STR_SZ];
- struct otx2_cpt_qp *qp;
-
- memset(err_str, 0, ERR_STR_SZ);
-
- if (ipsec->spi >= dev->ipsec_in_max_spi) {
- otx2_err("SPI exceeds max supported");
- return -EINVAL;
- }
-
- sa = in_sa_get(port, ipsec->spi);
- if (sa == NULL)
- return -ENOMEM;
-
- ctl = &sa->ctl;
-
- priv = get_sec_session_private_data(sec_sess);
- priv->ipsec.dir = RTE_SECURITY_IPSEC_SA_DIR_INGRESS;
- sess = &priv->ipsec.ip;
-
- rte_spinlock_lock(&dev->ipsec_tbl_lock);
-
- if (ctl->valid) {
- snprintf(err_str, ERR_STR_SZ, "SA already registered");
- ret = -EEXIST;
- goto tbl_unlock;
- }
-
- memset(sa, 0, sizeof(struct otx2_ipsec_fp_in_sa));
-
- auth_xform = crypto_xform;
- cipher_xform = crypto_xform->next;
-
- cipher_key_len = 0;
- auth_key_len = 0;
- auth_key = NULL;
-
- if (crypto_xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) {
- if (crypto_xform->aead.algo == RTE_CRYPTO_AEAD_AES_GCM)
- memcpy(sa->nonce, &ipsec->salt, 4);
- cipher_key = crypto_xform->aead.key.data;
- cipher_key_len = crypto_xform->aead.key.length;
- } else {
- cipher_key = cipher_xform->cipher.key.data;
- cipher_key_len = cipher_xform->cipher.key.length;
- auth_key = auth_xform->auth.key.data;
- auth_key_len = auth_xform->auth.key.length;
- }
-
- if (cipher_key_len != 0) {
- memcpy(sa->cipher_key, cipher_key, cipher_key_len);
- } else {
- snprintf(err_str, ERR_STR_SZ, "Invalid cipher key len");
- ret = -EINVAL;
- goto sa_clear;
- }
-
- sess->in_sa = sa;
-
- sa->userdata = priv->userdata;
-
- sa->replay_win_sz = ipsec->replay_win_sz;
-
- if (lookup_mem_sa_index_update(eth_dev, ipsec->spi, sa, err_str)) {
- ret = -EINVAL;
- goto sa_clear;
- }
-
- ret = ipsec_fp_sa_ctl_set(ipsec, crypto_xform, ctl);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not set SA CTL word (err: %d)", ret);
- goto sa_clear;
- }
-
- if (auth_key_len && auth_key) {
- /* Get a queue pair for HMAC init */
- ret = otx2_sec_idev_tx_cpt_qp_get(port, &qp);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ, "Could not get CPT QP");
- goto sa_clear;
- }
-
- ret = hmac_init(ctl, qp, auth_key, auth_key_len, sa->hmac_key);
- otx2_sec_idev_tx_cpt_qp_put(qp);
- if (ret) {
- snprintf(err_str, ERR_STR_SZ, "Could not put CPT QP");
- goto sa_clear;
- }
- }
-
- if (sa->replay_win_sz) {
- if (sa->replay_win_sz > OTX2_IPSEC_MAX_REPLAY_WIN_SZ) {
- snprintf(err_str, ERR_STR_SZ,
- "Replay window size is not supported");
- ret = -ENOTSUP;
- goto sa_clear;
- }
- sa->replay = rte_zmalloc(NULL, sizeof(struct otx2_ipsec_replay),
- 0);
- if (sa->replay == NULL) {
- snprintf(err_str, ERR_STR_SZ,
- "Could not allocate memory");
- ret = -ENOMEM;
- goto sa_clear;
- }
-
- rte_spinlock_init(&sa->replay->lock);
- /*
- * Set window bottom to 1, base and top to size of
- * window
- */
- sa->replay->winb = 1;
- sa->replay->wint = sa->replay_win_sz;
- sa->replay->base = sa->replay_win_sz;
- sa->esn_low = 0;
- sa->esn_hi = 0;
- }
-
- rte_io_wmb();
- ctl->valid = 1;
-
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
- return 0;
-
-sa_clear:
- memset(sa, 0, sizeof(struct otx2_ipsec_fp_in_sa));
-
-tbl_unlock:
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
-
- otx2_err("%s", err_str);
-
- return ret;
-}
-
-static int
-eth_sec_ipsec_sess_create(struct rte_eth_dev *eth_dev,
- struct rte_security_ipsec_xform *ipsec,
- struct rte_crypto_sym_xform *crypto_xform,
- struct rte_security_session *sess)
-{
- int ret;
-
- ret = ipsec_fp_xform_verify(ipsec, crypto_xform);
- if (ret)
- return ret;
-
- if (ipsec->direction == RTE_SECURITY_IPSEC_SA_DIR_INGRESS)
- return eth_sec_ipsec_in_sess_create(eth_dev, ipsec,
- crypto_xform, sess);
- else
- return eth_sec_ipsec_out_sess_create(eth_dev, ipsec,
- crypto_xform, sess);
-}
-
-static int
-otx2_eth_sec_session_create(void *device,
- struct rte_security_session_conf *conf,
- struct rte_security_session *sess,
- struct rte_mempool *mempool)
-{
- struct otx2_sec_session *priv;
- int ret;
-
- if (conf->action_type != RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL)
- return -ENOTSUP;
-
- if (rte_mempool_get(mempool, (void **)&priv)) {
- otx2_err("Could not allocate security session private data");
- return -ENOMEM;
- }
-
- set_sec_session_private_data(sess, priv);
-
- /*
- * Save userdata provided by the application. For ingress packets, this
- * could be used to identify the SA.
- */
- priv->userdata = conf->userdata;
-
- if (conf->protocol == RTE_SECURITY_PROTOCOL_IPSEC)
- ret = eth_sec_ipsec_sess_create(device, &conf->ipsec,
- conf->crypto_xform,
- sess);
- else
- ret = -ENOTSUP;
-
- if (ret)
- goto mempool_put;
-
- return 0;
-
-mempool_put:
- rte_mempool_put(mempool, priv);
- set_sec_session_private_data(sess, NULL);
- return ret;
-}
-
-static void
-otx2_eth_sec_free_anti_replay(struct otx2_ipsec_fp_in_sa *sa)
-{
- if (sa != NULL) {
- if (sa->replay_win_sz && sa->replay)
- rte_free(sa->replay);
- }
-}
-
-static int
-otx2_eth_sec_session_destroy(void *device,
- struct rte_security_session *sess)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(device);
- struct otx2_sec_session_ipsec_ip *sess_ip;
- struct otx2_ipsec_fp_in_sa *sa;
- struct otx2_sec_session *priv;
- struct rte_mempool *sess_mp;
- int ret;
-
- priv = get_sec_session_private_data(sess);
- if (priv == NULL)
- return -EINVAL;
-
- sess_ip = &priv->ipsec.ip;
-
- if (priv->ipsec.dir == RTE_SECURITY_IPSEC_SA_DIR_INGRESS) {
- rte_spinlock_lock(&dev->ipsec_tbl_lock);
- sa = sess_ip->in_sa;
-
- /* Release the anti replay window */
- otx2_eth_sec_free_anti_replay(sa);
-
- /* Clear SA table entry */
- if (sa != NULL) {
- sa->ctl.valid = 0;
- rte_io_wmb();
- }
-
- rte_spinlock_unlock(&dev->ipsec_tbl_lock);
- }
-
- /* Release CPT LF used for this session */
- if (sess_ip->qp != NULL) {
- ret = otx2_sec_idev_tx_cpt_qp_put(sess_ip->qp);
- if (ret)
- return ret;
- }
-
- sess_mp = rte_mempool_from_obj(priv);
-
- set_sec_session_private_data(sess, NULL);
- rte_mempool_put(sess_mp, priv);
-
- return 0;
-}
-
-static unsigned int
-otx2_eth_sec_session_get_size(void *device __rte_unused)
-{
- return sizeof(struct otx2_sec_session);
-}
-
-static const struct rte_security_capability *
-otx2_eth_sec_capabilities_get(void *device __rte_unused)
-{
- return otx2_eth_sec_capabilities;
-}
-
-static struct rte_security_ops otx2_eth_sec_ops = {
- .session_create = otx2_eth_sec_session_create,
- .session_destroy = otx2_eth_sec_session_destroy,
- .session_get_size = otx2_eth_sec_session_get_size,
- .capabilities_get = otx2_eth_sec_capabilities_get
-};
-
-int
-otx2_eth_sec_ctx_create(struct rte_eth_dev *eth_dev)
-{
- struct rte_security_ctx *ctx;
- int ret;
-
- ctx = rte_malloc("otx2_eth_sec_ctx",
- sizeof(struct rte_security_ctx), 0);
- if (ctx == NULL)
- return -ENOMEM;
-
- ret = otx2_sec_idev_cfg_init(eth_dev->data->port_id);
- if (ret) {
- rte_free(ctx);
- return ret;
- }
-
- /* Populate ctx */
-
- ctx->device = eth_dev;
- ctx->ops = &otx2_eth_sec_ops;
- ctx->sess_cnt = 0;
- ctx->flags =
- (RTE_SEC_CTX_F_FAST_SET_MDATA | RTE_SEC_CTX_F_FAST_GET_UDATA);
-
- eth_dev->security_ctx = ctx;
-
- return 0;
-}
-
-void
-otx2_eth_sec_ctx_destroy(struct rte_eth_dev *eth_dev)
-{
- rte_free(eth_dev->security_ctx);
-}
-
-static int
-eth_sec_ipsec_cfg(struct rte_eth_dev *eth_dev, uint8_t tt)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- struct nix_inline_ipsec_lf_cfg *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct eth_sec_tag_const tag_const;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_lookup(name);
- if (mz == NULL)
- return -EINVAL;
-
- req = otx2_mbox_alloc_msg_nix_inline_ipsec_lf_cfg(mbox);
- req->enable = 1;
- req->sa_base_addr = mz->iova;
-
- req->ipsec_cfg0.tt = tt;
-
- tag_const.u32 = 0;
- tag_const.event_type = RTE_EVENT_TYPE_ETHDEV;
- tag_const.port = port;
- req->ipsec_cfg0.tag_const = tag_const.u32;
-
- req->ipsec_cfg0.sa_pow2_size =
- rte_log2_u32(sizeof(struct otx2_ipsec_fp_in_sa));
- req->ipsec_cfg0.lenm1_max = NIX_MAX_FRS - 1;
-
- req->ipsec_cfg1.sa_idx_w = rte_log2_u32(dev->ipsec_in_max_spi);
- req->ipsec_cfg1.sa_idx_max = dev->ipsec_in_max_spi - 1;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_eth_sec_update_tag_type(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- int ret;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = 0; /* Read RQ:0 context */
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
-
- ret = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (ret < 0) {
- otx2_err("Could not read RQ context");
- return ret;
- }
-
- /* Update tag type */
- ret = eth_sec_ipsec_cfg(eth_dev, rsp->rq.sso_tt);
- if (ret < 0)
- otx2_err("Could not update sec eth tag type");
-
- return ret;
-}
-
-int
-otx2_eth_sec_init(struct rte_eth_dev *eth_dev)
-{
- const size_t sa_width = sizeof(struct otx2_ipsec_fp_in_sa);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- char name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
- int mz_sz, ret;
- uint16_t nb_sa;
-
- RTE_BUILD_BUG_ON(sa_width < 32 || sa_width > 512 ||
- !RTE_IS_POWER_OF_2(sa_width));
-
- if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
- return 0;
-
- if (rte_security_dynfield_register() < 0)
- return -rte_errno;
-
- nb_sa = dev->ipsec_in_max_spi;
- mz_sz = nb_sa * sa_width;
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- mz = rte_memzone_reserve_aligned(name, mz_sz, rte_socket_id(),
- RTE_MEMZONE_IOVA_CONTIG, OTX2_ALIGN);
-
- if (mz == NULL) {
- otx2_err("Could not allocate inbound SA DB");
- return -ENOMEM;
- }
-
- memset(mz->addr, 0, mz_sz);
-
- ret = eth_sec_ipsec_cfg(eth_dev, SSO_TT_ORDERED);
- if (ret < 0) {
- otx2_err("Could not configure inline IPsec");
- goto sec_fini;
- }
-
- rte_spinlock_init(&dev->ipsec_tbl_lock);
-
- return 0;
-
-sec_fini:
- otx2_err("Could not configure device for security");
- otx2_eth_sec_fini(eth_dev);
- return ret;
-}
-
-void
-otx2_eth_sec_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t port = eth_dev->data->port_id;
- char name[RTE_MEMZONE_NAMESIZE];
-
- if (!(dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY) &&
- !(dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY))
- return;
-
- lookup_mem_sa_tbl_clear(eth_dev);
-
- in_sa_mz_name_get(name, RTE_MEMZONE_NAMESIZE, port);
- rte_memzone_free(rte_memzone_lookup(name));
-}
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec.h b/drivers/net/octeontx2/otx2_ethdev_sec.h
deleted file mode 100644
index 298b00bf89..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec.h
+++ /dev/null
@@ -1,130 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_SEC_H__
-#define __OTX2_ETHDEV_SEC_H__
-
-#include <rte_ethdev.h>
-
-#include "otx2_ipsec_fp.h"
-#include "otx2_ipsec_po.h"
-
-#define OTX2_CPT_RES_ALIGN 16
-#define OTX2_NIX_SEND_DESC_ALIGN 16
-#define OTX2_CPT_INST_SIZE 64
-
-#define OTX2_CPT_EGRP_INLINE_IPSEC 1
-
-#define OTX2_CPT_OP_INLINE_IPSEC_OUTB (0x40 | 0x25)
-#define OTX2_CPT_OP_INLINE_IPSEC_INB (0x40 | 0x26)
-#define OTX2_CPT_OP_WRITE_HMAC_IPAD_OPAD (0x40 | 0x27)
-
-#define OTX2_SEC_CPT_COMP_GOOD 0x1
-#define OTX2_SEC_UC_COMP_GOOD 0x0
-#define OTX2_SEC_COMP_GOOD (OTX2_SEC_UC_COMP_GOOD << 8 | \
- OTX2_SEC_CPT_COMP_GOOD)
-
-/* CPT Result */
-struct otx2_cpt_res {
- union {
- struct {
- uint64_t compcode:8;
- uint64_t uc_compcode:8;
- uint64_t doneint:1;
- uint64_t reserved_17_63:47;
- uint64_t reserved_64_127;
- };
- uint16_t u16[8];
- };
-};
-
-struct otx2_cpt_inst_s {
- union {
- struct {
- /* W0 */
- uint64_t nixtxl : 3;
- uint64_t doneint : 1;
- uint64_t nixtx_addr : 60;
- /* W1 */
- uint64_t res_addr : 64;
- /* W2 */
- uint64_t tag : 32;
- uint64_t tt : 2;
- uint64_t grp : 10;
- uint64_t rsvd_175_172 : 4;
- uint64_t rvu_pf_func : 16;
- /* W3 */
- uint64_t qord : 1;
- uint64_t rsvd_194_193 : 2;
- uint64_t wqe_ptr : 61;
- /* W4 */
- uint64_t dlen : 16;
- uint64_t param2 : 16;
- uint64_t param1 : 16;
- uint64_t opcode : 16;
- /* W5 */
- uint64_t dptr : 64;
- /* W6 */
- uint64_t rptr : 64;
- /* W7 */
- uint64_t cptr : 61;
- uint64_t egrp : 3;
- };
- uint64_t u64[8];
- };
-};
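
Since the bit-fields above must line up exactly with the hardware's eight 64-bit instruction words, a compile-time guard is a cheap safety net (illustrative; the driver expresses similar checks with RTE_BUILD_BUG_ON elsewhere):

    #include <assert.h>

    /* Any stray padding in the bit-fields would trip this. */
    static_assert(sizeof(struct otx2_cpt_inst_s) == OTX2_CPT_INST_SIZE,
                  "CPT instruction must be exactly 64 bytes");
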
-
-/*
- * Security session for inline IPsec protocol offload. This is the private
- * data of the inline-capable PMD.
- */
-struct otx2_sec_session_ipsec_ip {
- RTE_STD_C11
- union {
- /*
- * The inbound SA is accessed by the crypto block, so its
- * memory is allocated separately and shared with the h/w;
- * the session private space only holds a pointer to it.
- */
- void *in_sa;
- /* Outbound SA */
- struct otx2_ipsec_fp_out_sa out_sa;
- };
-
- /* Address of CPT LMTLINE */
- void *cpt_lmtline;
- /* CPT LF enqueue register address */
- rte_iova_t cpt_nq_reg;
-
- /* Pre calculated lengths and data for a session */
- uint8_t partial_len;
- uint8_t roundup_len;
- uint8_t roundup_byte;
- uint16_t ip_id;
- union {
- uint64_t esn;
- struct {
- uint32_t seq;
- uint32_t esn_hi;
- };
- };
-
- uint64_t inst_w7;
-
- /* CPT QP used by SA */
- struct otx2_cpt_qp *qp;
-};
-
-int otx2_eth_sec_ctx_create(struct rte_eth_dev *eth_dev);
-
-void otx2_eth_sec_ctx_destroy(struct rte_eth_dev *eth_dev);
-
-int otx2_eth_sec_update_tag_type(struct rte_eth_dev *eth_dev);
-
-int otx2_eth_sec_init(struct rte_eth_dev *eth_dev);
-
-void otx2_eth_sec_fini(struct rte_eth_dev *eth_dev);
-
-#endif /* __OTX2_ETHDEV_SEC_H__ */
diff --git a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h b/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
deleted file mode 100644
index 021782009f..0000000000
--- a/drivers/net/octeontx2/otx2_ethdev_sec_tx.h
+++ /dev/null
@@ -1,182 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2020 Marvell International Ltd.
- */
-
-#ifndef __OTX2_ETHDEV_SEC_TX_H__
-#define __OTX2_ETHDEV_SEC_TX_H__
-
-#include <rte_security.h>
-#include <rte_mbuf.h>
-
-#include "otx2_ethdev_sec.h"
-#include "otx2_security.h"
-
-struct otx2_ipsec_fp_out_hdr {
- uint32_t ip_id;
- uint32_t seq;
- uint8_t iv[16];
-};
-
-static __rte_always_inline int32_t
-otx2_ipsec_fp_out_rlen_get(struct otx2_sec_session_ipsec_ip *sess,
- uint32_t plen)
-{
- uint32_t enc_payload_len;
-
- enc_payload_len = RTE_ALIGN_CEIL(plen + sess->roundup_len,
- sess->roundup_byte);
-
- return sess->partial_len + enc_payload_len;
-}
-
-static __rte_always_inline void
-otx2_ssogws_head_wait(uint64_t base);
-
-static __rte_always_inline int
-otx2_sec_event_tx(uint64_t base, struct rte_event *ev, struct rte_mbuf *m,
- const struct otx2_eth_txq *txq, const uint32_t offload_flags)
-{
- uint32_t dlen, rlen, desc_headroom, extend_head, extend_tail;
- struct otx2_sec_session_ipsec_ip *sess;
- struct otx2_ipsec_fp_out_hdr *hdr;
- struct otx2_ipsec_fp_out_sa *sa;
- uint64_t data_addr, desc_addr;
- struct otx2_sec_session *priv;
- struct otx2_cpt_inst_s inst;
- uint64_t lmt_status;
- char *data;
-
- struct desc {
- struct otx2_cpt_res cpt_res __rte_aligned(OTX2_CPT_RES_ALIGN);
- struct nix_send_hdr_s nix_hdr
- __rte_aligned(OTX2_NIX_SEND_DESC_ALIGN);
- union nix_send_sg_s nix_sg;
- struct nix_iova_s nix_iova;
- } *sd;
-
- priv = (struct otx2_sec_session *)(*rte_security_dynfield(m));
- sess = &priv->ipsec.ip;
- sa = &sess->out_sa;
-
- RTE_ASSERT(sess->cpt_lmtline != NULL);
- RTE_ASSERT(!(offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F));
-
- dlen = rte_pktmbuf_pkt_len(m) + sizeof(*hdr) - RTE_ETHER_HDR_LEN;
- rlen = otx2_ipsec_fp_out_rlen_get(sess, dlen - sizeof(*hdr));
-
- RTE_BUILD_BUG_ON(OTX2_CPT_RES_ALIGN % OTX2_NIX_SEND_DESC_ALIGN);
- RTE_BUILD_BUG_ON(sizeof(sd->cpt_res) % OTX2_NIX_SEND_DESC_ALIGN);
-
- extend_head = sizeof(*hdr);
- extend_tail = rlen - dlen;
-
- desc_headroom = (OTX2_CPT_RES_ALIGN - 1) + sizeof(*sd);
-
- if (unlikely(!rte_pktmbuf_is_contiguous(m)) ||
- unlikely(rte_pktmbuf_headroom(m) < extend_head + desc_headroom) ||
- unlikely(rte_pktmbuf_tailroom(m) < extend_tail)) {
- goto drop;
- }
-
- /*
- * Extend the mbuf data to cover the packet buffer NIX expects:
- * the Ethernet header followed by the encrypted IPsec payload.
- */
- rte_pktmbuf_append(m, extend_tail);
- data = rte_pktmbuf_prepend(m, extend_head);
- data_addr = rte_pktmbuf_iova(m);
-
- /*
- * Move the Ethernet header so that otx2_ipsec_fp_out_hdr can be
- * inserted before the IP header.
- */
- memcpy(data, data + sizeof(*hdr), RTE_ETHER_HDR_LEN);
-
- hdr = (struct otx2_ipsec_fp_out_hdr *)(data + RTE_ETHER_HDR_LEN);
-
- if (sa->ctl.enc_type == OTX2_IPSEC_FP_SA_ENC_AES_GCM) {
- /* AES-128-GCM */
- memcpy(hdr->iv, &sa->nonce, 4);
- memset(hdr->iv + 4, 0, 12); /* TODO: make it random */
- } else {
- /* AES-128-[CBC] + [SHA1] */
- memset(hdr->iv, 0, 16); /* TODO: make it random */
- }
-
- /* Keep CPT result and NIX send descriptors in headroom */
- sd = (void *)RTE_PTR_ALIGN(data - desc_headroom, OTX2_CPT_RES_ALIGN);
- desc_addr = data_addr - RTE_PTR_DIFF(data, sd);
-
- /* Prepare CPT instruction */
-
- inst.nixtx_addr = (desc_addr + offsetof(struct desc, nix_hdr)) >> 4;
- inst.doneint = 0;
- inst.nixtxl = 1;
- inst.res_addr = desc_addr + offsetof(struct desc, cpt_res);
- inst.u64[2] = 0;
- inst.u64[3] = 0;
- inst.wqe_ptr = desc_addr >> 3; /* FIXME: Handle errors */
- inst.qord = 1;
- inst.opcode = OTX2_CPT_OP_INLINE_IPSEC_OUTB;
- inst.dlen = dlen;
- inst.dptr = data_addr + RTE_ETHER_HDR_LEN;
- inst.u64[7] = sess->inst_w7;
-
- /* First word contains 8 bit completion code & 8 bit uc comp code */
- sd->cpt_res.u16[0] = 0;
-
- /* Prepare NIX send descriptors for output expected from CPT */
-
- sd->nix_hdr.w0.u = 0;
- sd->nix_hdr.w1.u = 0;
- sd->nix_hdr.w0.sq = txq->sq;
- sd->nix_hdr.w0.sizem1 = 1;
- sd->nix_hdr.w0.total = rte_pktmbuf_data_len(m);
- sd->nix_hdr.w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
- if (offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)
- sd->nix_hdr.w0.df = otx2_nix_prefree_seg(m);
-
- sd->nix_sg.u = 0;
- sd->nix_sg.subdc = NIX_SUBDC_SG;
- sd->nix_sg.ld_type = NIX_SENDLDTYPE_LDD;
- sd->nix_sg.segs = 1;
- sd->nix_sg.seg1_size = rte_pktmbuf_data_len(m);
-
- sd->nix_iova.addr = rte_mbuf_data_iova(m);
-
- /* Mark mempool object as "put" since it is freed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
-
- if (!ev->sched_type)
- otx2_ssogws_head_wait(base + SSOW_LF_GWS_TAG);
-
- inst.param1 = sess->esn_hi >> 16;
- inst.param2 = sess->esn_hi & 0xffff;
-
- hdr->seq = rte_cpu_to_be_32(sess->seq);
- hdr->ip_id = rte_cpu_to_be_32(sess->ip_id);
-
- sess->ip_id++;
- sess->esn++;
-
- rte_io_wmb();
-
- do {
- otx2_lmt_mov(sess->cpt_lmtline, &inst, 2);
- lmt_status = otx2_lmt_submit(sess->cpt_nq_reg);
- } while (lmt_status == 0);
-
- return 1;
-
-drop:
- if (offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- /* Don't free if reference count > 1 */
- if (rte_pktmbuf_prefree_seg(m) == NULL)
- return 0;
- }
- rte_pktmbuf_free(m);
- return 0;
-}
-
-#endif /* __OTX2_ETHDEV_SEC_TX_H__ */
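
To orient reviewers in the pointer arithmetic of otx2_sec_event_tx() above: the CPT result word and the NIX send descriptors are carved out of the mbuf headroom immediately below the prepended packet, and CPT later hands the encrypted frame to NIX through nixtx_addr. Approximately (a sketch, not to scale):

    | ..headroom.. | sd: cpt_res, nix_hdr, nix_sg, nix_iova | eth hdr | fp_out_hdr | IP/ESP payload + pad |
                   ^ desc_addr (OTX2_CPT_RES_ALIGN-aligned)  ^ data_addr (mbuf data start after prepend)
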
diff --git a/drivers/net/octeontx2/otx2_flow.c b/drivers/net/octeontx2/otx2_flow.c
deleted file mode 100644
index 1d0fe4e950..0000000000
--- a/drivers/net/octeontx2/otx2_flow.c
+++ /dev/null
@@ -1,1189 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_flow.h"
-
-enum flow_vtag_cfg_dir { VTAG_TX, VTAG_RX };
-
-int
-otx2_flow_free_all_resources(struct otx2_eth_dev *hw)
-{
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- struct otx2_mbox *mbox = hw->mbox;
- struct otx2_mcam_ents_info *info;
- struct rte_bitmap *bmap;
- struct rte_flow *flow;
- int entry_count = 0;
- int rc, idx;
-
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- info = &npc->flow_entry_info[idx];
- entry_count += info->live_ent;
- }
-
- if (entry_count == 0)
- return 0;
-
- /* Free all MCAM entries allocated */
- rc = otx2_flow_mcam_free_all_entries(mbox);
-
- /* Free any MCAM counters and delete flow list */
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- while ((flow = TAILQ_FIRST(&npc->flow_list[idx])) != NULL) {
- if (flow->ctr_id != NPC_COUNTER_NONE)
- rc |= otx2_flow_mcam_free_counter(mbox,
- flow->ctr_id);
-
- TAILQ_REMOVE(&npc->flow_list[idx], flow, next);
- rte_free(flow);
- bmap = npc->live_entries[flow->priority];
- rte_bitmap_clear(bmap, flow->mcam_id);
- }
- info = &npc->flow_entry_info[idx];
- info->free_ent = 0;
- info->live_ent = 0;
- }
- return rc;
-}
-
-
-static int
-flow_program_npc(struct otx2_parse_state *pst, struct otx2_mbox *mbox,
- struct otx2_npc_flow_info *flow_info)
-{
- /* This is the non-LDATA part of the search key */
- uint64_t key_data[2] = {0ULL, 0ULL};
- uint64_t key_mask[2] = {0ULL, 0ULL};
- int intf = pst->flow->nix_intf;
- int key_len, bit = 0, index;
- int off, idx, data_off = 0;
- uint8_t lid, mask, data;
- uint16_t layer_info;
- uint64_t lt, flags;
-
-
- /* Skip to the start of Layer A data */
- while (bit < NPC_PARSE_KEX_S_LA_OFFSET) {
- if (flow_info->keyx_supp_nmask[intf] & (1 << bit))
- data_off++;
- bit++;
- }
-
- /* Each bit represents 1 nibble */
- data_off *= 4;
-
- index = 0;
- for (lid = 0; lid < NPC_MAX_LID; lid++) {
- /* Offset in key */
- off = NPC_PARSE_KEX_S_LID_OFFSET(lid);
- lt = pst->lt[lid] & 0xf;
- flags = pst->flags[lid] & 0xff;
-
- /* NPC_LAYER_KEX_S */
- layer_info = ((flow_info->keyx_supp_nmask[intf] >> off) & 0x7);
-
- if (layer_info) {
- for (idx = 0; idx <= 2 ; idx++) {
- if (layer_info & (1 << idx)) {
- if (idx == 2)
- data = lt;
- else if (idx == 1)
- data = ((flags >> 4) & 0xf);
- else
- data = (flags & 0xf);
-
- if (data_off >= 64) {
- data_off = 0;
- index++;
- }
- key_data[index] |= ((uint64_t)data <<
- data_off);
- mask = 0xf;
- if (lt == 0)
- mask = 0;
- key_mask[index] |= ((uint64_t)mask <<
- data_off);
- data_off += 4;
- }
- }
- }
- }
-
- otx2_npc_dbg("Npc prog key data0: 0x%" PRIx64 ", data1: 0x%" PRIx64,
- key_data[0], key_data[1]);
-
- /* Copy this into mcam string */
- key_len = (pst->npc->keyx_len[intf] + 7) / 8;
- otx2_npc_dbg("Key_len = %d", key_len);
- memcpy(pst->flow->mcam_data, key_data, key_len);
- memcpy(pst->flow->mcam_mask, key_mask, key_len);
-
- otx2_npc_dbg("Final flow data");
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- otx2_npc_dbg("data[%d]: 0x%" PRIx64 ", mask[%d]: 0x%" PRIx64,
- idx, pst->flow->mcam_data[idx],
- idx, pst->flow->mcam_mask[idx]);
- }
-
- /*
- * Now we have the MCAM data and mask formatted as
- * [Key_len/4 nibbles][0 or 1 nibble hole][data];
- * the hole is present if key_len is an odd number of nibbles.
- * The MCAM data must be split into 64-bit + 48-bit segments
- * for each bank (W0, W1).
- */
-
- return otx2_flow_mcam_alloc_and_write(pst->flow, mbox, pst, flow_info);
-}
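
The key-construction loop above appends one 4-bit field at a time into a two-word key, spilling into the second 64-bit word once the first is full. The core packing step, isolated (illustrative; not a driver function):

    #include <stdint.h>

    /* Append one nibble at data_off within key[0..1], advancing to
     * the next 64-bit word when the current one is full.
     */
    static void
    pack_nibble(uint64_t key[2], int *index, int *data_off, uint8_t nib)
    {
            if (*data_off >= 64) {
                    *data_off = 0;
                    (*index)++;
            }
            key[*index] |= (uint64_t)(nib & 0xf) << *data_off;
            *data_off += 4;
    }
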
-
-static int
-flow_parse_attr(struct rte_eth_dev *eth_dev,
- const struct rte_flow_attr *attr,
- struct rte_flow_error *error,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- const char *errmsg = NULL;
-
- if (attr == NULL)
- errmsg = "Attribute can't be empty";
- else if (attr->group)
- errmsg = "Groups are not supported";
- else if (attr->priority >= dev->npc_flow.flow_max_priority)
- errmsg = "Priority should be with in specified range";
- else if ((!attr->egress && !attr->ingress) ||
- (attr->egress && attr->ingress))
- errmsg = "Exactly one of ingress or egress must be set";
-
- if (errmsg != NULL) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ATTR,
- attr, errmsg);
- return -ENOTSUP;
- }
-
- if (attr->ingress)
- flow->nix_intf = OTX2_INTF_RX;
- else
- flow->nix_intf = OTX2_INTF_TX;
-
- flow->priority = attr->priority;
- return 0;
-}
-
-static inline int
-flow_get_free_rss_grp(struct rte_bitmap *bmap,
- uint32_t size, uint32_t *pos)
-{
- for (*pos = 0; *pos < size; ++*pos) {
- if (!rte_bitmap_get(bmap, *pos))
- break;
- }
-
- return *pos < size ? 0 : -1;
-}
-
-static int
-flow_configure_rss_action(struct otx2_eth_dev *dev,
- const struct rte_flow_action_rss *rss,
- uint8_t *alg_idx, uint32_t *rss_grp,
- int mcam_index)
-{
- struct otx2_npc_flow_info *flow_info = &dev->npc_flow;
- uint16_t reta[NIX_RSS_RETA_SIZE_MAX];
- uint32_t flowkey_cfg, grp_aval, i;
- uint16_t *ind_tbl = NULL;
- uint8_t flowkey_algx;
- int rc;
-
- rc = flow_get_free_rss_grp(flow_info->rss_grp_entries,
- flow_info->rss_grps, &grp_aval);
-	/* RSS group 0 is not usable for the flow RSS action */
- if (rc < 0 || grp_aval == 0)
- return -ENOSPC;
-
- *rss_grp = grp_aval;
-
- otx2_nix_rss_set_key(dev, (uint8_t *)(uintptr_t)rss->key,
- rss->key_len);
-
-	/* If the queue count passed in the RSS action is less than
-	 * the HW-configured RETA size, replicate the RSS action
-	 * queue list across the HW RETA table.
-	 */
- if (dev->rss_info.rss_size > rss->queue_num) {
- ind_tbl = reta;
-
- for (i = 0; i < (dev->rss_info.rss_size / rss->queue_num); i++)
- memcpy(reta + i * rss->queue_num, rss->queue,
- sizeof(uint16_t) * rss->queue_num);
-
- i = dev->rss_info.rss_size % rss->queue_num;
- if (i)
- memcpy(&reta[dev->rss_info.rss_size] - i,
- rss->queue, i * sizeof(uint16_t));
- } else {
- ind_tbl = (uint16_t *)(uintptr_t)rss->queue;
- }
-
- rc = otx2_nix_rss_tbl_init(dev, *rss_grp, ind_tbl);
- if (rc) {
- otx2_err("Failed to init rss table rc = %d", rc);
- return rc;
- }
-
- flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss->types, rss->level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &flowkey_algx,
- *rss_grp, mcam_index);
- if (rc) {
- otx2_err("Failed to set rss hash function rc = %d", rc);
- return rc;
- }
-
- *alg_idx = flowkey_algx;
-
- rte_bitmap_set(flow_info->rss_grp_entries, *rss_grp);
-
- return 0;
-}
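
The RETA replication step above is easier to see in isolation. A self-contained sketch (illustrative names; assumes reta_size >= queue_num, as the branch above guarantees):

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Replicate a queue list across a larger RETA: whole copies first,
     * then the tail remainder. */
    static void
    replicate_reta(uint16_t *reta, uint32_t reta_size,
                   const uint16_t *queues, uint32_t queue_num)
    {
            uint32_t i;

            for (i = 0; i < reta_size / queue_num; i++)
                    memcpy(reta + i * queue_num, queues,
                           sizeof(uint16_t) * queue_num);

            i = reta_size % queue_num;
            if (i)
                    memcpy(reta + reta_size - i, queues,
                           i * sizeof(uint16_t));
    }

    int
    main(void)
    {
            const uint16_t queues[3] = {0, 1, 2};
            uint16_t reta[8];
            uint32_t i;

            replicate_reta(reta, 8, queues, 3);
            for (i = 0; i < 8; i++)
                    printf("%u ", reta[i]);     /* 0 1 2 0 1 2 0 1 */
            printf("\n");
            return 0;
    }
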
-
-static int
-flow_program_rss_action(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[],
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- const struct rte_flow_action_rss *rss;
- uint32_t rss_grp;
- uint8_t alg_idx;
- int rc;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_RSS) {
- rss = (const struct rte_flow_action_rss *)actions->conf;
-
- rc = flow_configure_rss_action(dev,
- rss, &alg_idx, &rss_grp,
- flow->mcam_id);
- if (rc)
- return rc;
-
- flow->npc_action &= (~(0xfULL));
- flow->npc_action |= NIX_RX_ACTIONOP_RSS;
- flow->npc_action |=
- ((uint64_t)(alg_idx & NIX_RSS_ACT_ALG_MASK) <<
- NIX_RSS_ACT_ALG_OFFSET) |
- ((uint64_t)(rss_grp & NIX_RSS_ACT_GRP_MASK) <<
- NIX_RSS_ACT_GRP_OFFSET);
- }
- }
- return 0;
-}
-
-static int
-flow_free_rss_action(struct rte_eth_dev *eth_dev,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- uint32_t rss_grp;
-
- if (flow->npc_action & NIX_RX_ACTIONOP_RSS) {
- rss_grp = (flow->npc_action >> NIX_RSS_ACT_GRP_OFFSET) &
- NIX_RSS_ACT_GRP_MASK;
- if (rss_grp == 0 || rss_grp >= npc->rss_grps)
- return -EINVAL;
-
- rte_bitmap_clear(npc->rss_grp_entries, rss_grp);
- }
-
- return 0;
-}
-
-static int
-flow_update_sec_tt(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[])
-{
- int rc = 0;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_SECURITY) {
- rc = otx2_eth_sec_update_tag_type(eth_dev);
- break;
- }
- }
-
- return rc;
-}
-
-static int
-flow_parse_meta_items(__rte_unused struct otx2_parse_state *pst)
-{
- otx2_npc_dbg("Meta Item");
- return 0;
-}
-
-/*
- * Parse function of each layer:
- * - Consume one or more pattern items that are relevant.
- * - Update parse_state.
- * - Advance parse_state.pattern past the last item consumed.
- * - Set an appropriate error code/message when returning an error.
- */
-typedef int (*flow_parse_stage_func_t)(struct otx2_parse_state *pst);
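
A toy standalone illustration of this contract (stage names are made up, not from the driver): each stage consumes the items it recognizes, advances the cursor past them, and may flag termination.

    #include <stdio.h>

    struct state { const char **pattern; int terminate; };
    typedef int (*stage_fn)(struct state *st);

    static int
    stage_a(struct state *st)
    {
            while (*st->pattern && **st->pattern == 'a')
                    st->pattern++;          /* consume all 'a...' items */
            return 0;
    }

    static int
    stage_b(struct state *st)
    {
            if (*st->pattern && **st->pattern == 'b') {
                    st->pattern++;
                    st->terminate = 1;      /* no later stage applies */
            }
            return 0;
    }

    int
    main(void)
    {
            const char *items[] = {"a1", "a2", "b1", NULL};
            stage_fn stages[] = {stage_a, stage_b};
            struct state st = {items, 0};
            unsigned int layer;

            for (layer = 0; *st.pattern && layer < 2; layer++) {
                    if (stages[layer](&st))
                            return 1;
                    if (st.terminate)
                            break;
            }
            printf("unconsumed: %s\n", *st.pattern ? *st.pattern : "(none)");
            return 0;
    }
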
-
-static int
-flow_parse_pattern(struct rte_eth_dev *dev,
- const struct rte_flow_item pattern[],
- struct rte_flow_error *error,
- struct rte_flow *flow,
- struct otx2_parse_state *pst)
-{
- flow_parse_stage_func_t parse_stage_funcs[] = {
- flow_parse_meta_items,
- otx2_flow_parse_higig2_hdr,
- otx2_flow_parse_la,
- otx2_flow_parse_lb,
- otx2_flow_parse_lc,
- otx2_flow_parse_ld,
- otx2_flow_parse_le,
- otx2_flow_parse_lf,
- otx2_flow_parse_lg,
- otx2_flow_parse_lh,
- };
- struct otx2_eth_dev *hw = dev->data->dev_private;
- uint8_t layer = 0;
- int key_offset;
- int rc;
-
- if (pattern == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM_NUM, NULL,
- "pattern is NULL");
- return -EINVAL;
- }
-
- memset(pst, 0, sizeof(*pst));
- pst->npc = &hw->npc_flow;
- pst->error = error;
- pst->flow = flow;
-
- /* Use integral byte offset */
- key_offset = pst->npc->keyx_len[flow->nix_intf];
- key_offset = (key_offset + 7) / 8;
-
- /* Location where LDATA would begin */
- pst->mcam_data = (uint8_t *)flow->mcam_data;
- pst->mcam_mask = (uint8_t *)flow->mcam_mask;
-
- while (pattern->type != RTE_FLOW_ITEM_TYPE_END &&
- layer < RTE_DIM(parse_stage_funcs)) {
- otx2_npc_dbg("Pattern type = %d", pattern->type);
-
- /* Skip place-holders */
- pattern = otx2_flow_skip_void_and_any_items(pattern);
-
- pst->pattern = pattern;
- otx2_npc_dbg("Is tunnel = %d, layer = %d", pst->tunnel, layer);
- rc = parse_stage_funcs[layer](pst);
- if (rc != 0)
- return -rte_errno;
-
- layer++;
-
- /*
- * Parse stage function sets pst->pattern to
- * 1 past the last item it consumed.
- */
- pattern = pst->pattern;
-
- if (pst->terminate)
- break;
- }
-
- /* Skip trailing place-holders */
- pattern = otx2_flow_skip_void_and_any_items(pattern);
-
- /* Are there more items than what we can handle? */
- if (pattern->type != RTE_FLOW_ITEM_TYPE_END) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM, pattern,
- "unsupported item in the sequence");
- return -ENOTSUP;
- }
-
- return 0;
-}
-
-static int
-flow_parse_rule(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow,
- struct otx2_parse_state *pst)
-{
- int err;
-
- /* Check attributes */
- err = flow_parse_attr(dev, attr, error, flow);
- if (err)
- return err;
-
- /* Check actions */
- err = otx2_flow_parse_actions(dev, attr, actions, error, flow);
- if (err)
- return err;
-
- /* Check pattern */
- err = flow_parse_pattern(dev, pattern, error, flow, pst);
- if (err)
- return err;
-
- /* Check for overlaps? */
- return 0;
-}
-
-static int
-otx2_flow_validate(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct otx2_parse_state parse_state;
- struct rte_flow flow;
-
- memset(&flow, 0, sizeof(flow));
- return flow_parse_rule(dev, attr, pattern, actions, error, &flow,
- &parse_state);
-}
-
-static int
-flow_program_vtag_action(struct rte_eth_dev *eth_dev,
- const struct rte_flow_action actions[],
- struct rte_flow *flow)
-{
- uint16_t vlan_id = 0, vlan_ethtype = RTE_ETHER_TYPE_VLAN;
- struct otx2_eth_dev *dev = eth_dev->data->dev_private;
- union {
- uint64_t reg;
- struct nix_tx_vtag_action_s act;
- } tx_vtag_action;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- struct nix_vtag_config_rsp *rsp;
- bool vlan_insert_action = false;
- uint64_t rx_vtag_action = 0;
- uint8_t vlan_pcp = 0;
- int rc;
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- if (actions->type == RTE_FLOW_ACTION_TYPE_OF_POP_VLAN) {
- if (dev->npc_flow.vtag_actions == 1) {
- vtag_cfg =
- otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- vtag_cfg->cfg_type = VTAG_RX;
- vtag_cfg->rx.strip_vtag = 1;
- /* Always capture */
- vtag_cfg->rx.capture_vtag = 1;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- vtag_cfg->rx.vtag_type = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
- }
-
- rx_vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- rx_vtag_action |= (NPC_LID_LB << 8);
- rx_vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
- flow->vtag_action = rx_vtag_action;
- } else if (actions->type ==
- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID) {
- const struct rte_flow_action_of_set_vlan_vid *vtag =
- (const struct rte_flow_action_of_set_vlan_vid *)
- actions->conf;
- vlan_id = rte_be_to_cpu_16(vtag->vlan_vid);
- if (vlan_id > 0xfff) {
- otx2_err("Invalid vlan_id for set vlan action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- } else if (actions->type == RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN) {
- const struct rte_flow_action_of_push_vlan *ethtype =
- (const struct rte_flow_action_of_push_vlan *)
- actions->conf;
- vlan_ethtype = rte_be_to_cpu_16(ethtype->ethertype);
- if (vlan_ethtype != RTE_ETHER_TYPE_VLAN &&
- vlan_ethtype != RTE_ETHER_TYPE_QINQ) {
- otx2_err("Invalid ethtype specified for push"
- " vlan action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- } else if (actions->type ==
- RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP) {
- const struct rte_flow_action_of_set_vlan_pcp *pcp =
- (const struct rte_flow_action_of_set_vlan_pcp *)
- actions->conf;
- vlan_pcp = pcp->vlan_pcp;
- if (vlan_pcp > 0x7) {
- otx2_err("Invalid PCP value for pcp action");
- return -EINVAL;
- }
- vlan_insert_action = true;
- }
- }
-
- if (vlan_insert_action) {
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- vtag_cfg->tx.vtag0 =
- ((vlan_ethtype << 16) | (vlan_pcp << 13) | vlan_id);
- vtag_cfg->tx.cfg_vtag0 = 1;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
-		/* Check the index before it is truncated into the bitfield */
-		if (rsp->vtag0_idx < 0) {
-			otx2_err("Failed to config TX VTAG action");
-			return -EINVAL;
-		}
-		tx_vtag_action.reg = 0;
-		tx_vtag_action.act.vtag0_def = rsp->vtag0_idx;
- tx_vtag_action.act.vtag0_lid = NPC_LID_LA;
- tx_vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
- tx_vtag_action.act.vtag0_relptr =
- NIX_TX_VTAGACTION_VTAG0_RELPTR;
- flow->vtag_action = tx_vtag_action.reg;
- }
- return 0;
-}
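
The 32-bit vtag0 word built above packs ethertype, PCP and VLAN id into fixed fields (bit 12, DEI/CFI, stays clear). A tiny sketch with illustrative values:

    #include <stdint.h>
    #include <stdio.h>

    /* [31:16] ethertype, [15:13] PCP, [11:0] VLAN id */
    static uint32_t
    make_vtag0(uint16_t ethtype, uint8_t pcp, uint16_t vid)
    {
            return ((uint32_t)ethtype << 16) |
                   ((uint32_t)(pcp & 0x7) << 13) |
                   (vid & 0xfff);
    }

    int
    main(void)
    {
            /* 0x8100 = 802.1Q TPID, PCP 5, VLAN 100 -> 0x8100a064 */
            printf("vtag0 = 0x%08x\n", make_vtag0(0x8100, 5, 100));
            return 0;
    }
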
-
-static struct rte_flow *
-otx2_flow_create(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_parse_state parse_state;
- struct otx2_mbox *mbox = hw->mbox;
- struct rte_flow *flow, *flow_iter;
- struct otx2_flow_list *list;
- int rc;
-
- flow = rte_zmalloc("otx2_rte_flow", sizeof(*flow), 0);
- if (flow == NULL) {
- rte_flow_error_set(error, ENOMEM,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Memory allocation failed");
- return NULL;
- }
- memset(flow, 0, sizeof(*flow));
-
- rc = flow_parse_rule(dev, attr, pattern, actions, error, flow,
- &parse_state);
- if (rc != 0)
- goto err_exit;
-
- rc = flow_program_vtag_action(dev, actions, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to program vlan action");
- goto err_exit;
- }
-
- parse_state.is_vf = otx2_dev_is_vf(hw);
-
- rc = flow_program_npc(&parse_state, mbox, &hw->npc_flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to insert filter");
- goto err_exit;
- }
-
- rc = flow_program_rss_action(dev, actions, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to program rss action");
- goto err_exit;
- }
-
- if (hw->rx_offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
- rc = flow_update_sec_tt(dev, actions);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to update tt with sec act");
- goto err_exit;
- }
- }
-
- list = &hw->npc_flow.flow_list[flow->priority];
- /* List in ascending order of mcam entries */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id > flow->mcam_id) {
- TAILQ_INSERT_BEFORE(flow_iter, flow, next);
- return flow;
- }
- }
-
- TAILQ_INSERT_TAIL(list, flow, next);
- return flow;
-
-err_exit:
- rte_free(flow);
- return NULL;
-}
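
The list insert at the end of otx2_flow_create() keeps each priority bucket sorted by MCAM id; here it is reduced to a standalone sketch with illustrative types:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/queue.h>

    struct entry {
            unsigned int id;
            TAILQ_ENTRY(entry) next;
    };
    TAILQ_HEAD(entry_list, entry);

    /* Insert before the first element with a larger id, else at tail. */
    static void
    insert_sorted(struct entry_list *list, struct entry *e)
    {
            struct entry *it;

            TAILQ_FOREACH(it, list, next) {
                    if (it->id > e->id) {
                            TAILQ_INSERT_BEFORE(it, e, next);
                            return;
                    }
            }
            TAILQ_INSERT_TAIL(list, e, next);
    }

    int
    main(void)
    {
            struct entry_list list = TAILQ_HEAD_INITIALIZER(list);
            unsigned int ids[] = {30, 10, 20}, i;
            struct entry *it;

            for (i = 0; i < 3; i++) {
                    struct entry *e = calloc(1, sizeof(*e));
                    e->id = ids[i];
                    insert_sorted(&list, e);
            }
            TAILQ_FOREACH(it, &list, next)
                    printf("%u ", it->id);  /* 10 20 30 */
            printf("\n");
            return 0;
    }
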
-
-static int
-otx2_flow_destroy(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- struct otx2_mbox *mbox = hw->mbox;
- struct rte_bitmap *bmap;
- uint16_t match_id;
- int rc;
-
- match_id = (flow->npc_action >> NIX_RX_ACT_MATCH_OFFSET) &
- NIX_RX_ACT_MATCH_MASK;
-
- if (match_id && match_id < OTX2_FLOW_ACTION_FLAG_DEFAULT) {
- if (rte_atomic32_read(&npc->mark_actions) == 0)
- return -EINVAL;
-
- /* Clear mark offload flag if there are no more mark actions */
- if (rte_atomic32_sub_return(&npc->mark_actions, 1) == 0) {
- hw->rx_offload_flags &= ~NIX_RX_OFFLOAD_MARK_UPDATE_F;
- otx2_eth_set_rx_function(dev);
- }
- }
-
- if (flow->nix_intf == OTX2_INTF_RX && flow->vtag_action) {
- npc->vtag_actions--;
- if (npc->vtag_actions == 0) {
- if (hw->vlan_info.strip_on == 0) {
- hw->rx_offload_flags &=
- ~NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(dev);
- }
- }
- }
-
- rc = flow_free_rss_action(dev, flow);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to free rss action");
- }
-
- rc = otx2_flow_mcam_free_entry(mbox, flow->mcam_id);
- if (rc != 0) {
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to destroy filter");
- }
-
- TAILQ_REMOVE(&npc->flow_list[flow->priority], flow, next);
-
- bmap = npc->live_entries[flow->priority];
- rte_bitmap_clear(bmap, flow->mcam_id);
-
- rte_free(flow);
- return 0;
-}
-
-static int
-otx2_flow_flush(struct rte_eth_dev *dev,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- int rc;
-
- rc = otx2_flow_free_all_resources(hw);
- if (rc) {
-		otx2_err("Error when deleting NPC MCAM entries, counters");
- rte_flow_error_set(error, EIO,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Failed to flush filter");
- return -rte_errno;
- }
-
- return 0;
-}
-
-static int
-otx2_flow_isolate(struct rte_eth_dev *dev __rte_unused,
- int enable __rte_unused,
- struct rte_flow_error *error)
-{
- /*
-	 * If isolation were supported, the default MCAM entry for
-	 * this port would need to be uninstalled here.
- */
-
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Flow isolation not supported");
-
- return -rte_errno;
-}
-
-static int
-otx2_flow_query(struct rte_eth_dev *dev,
- struct rte_flow *flow,
- const struct rte_flow_action *action,
- void *data,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct rte_flow_query_count *query = data;
- struct otx2_mbox *mbox = hw->mbox;
- const char *errmsg = NULL;
- int errcode = ENOTSUP;
- int rc;
-
- if (action->type != RTE_FLOW_ACTION_TYPE_COUNT) {
- errmsg = "Only COUNT is supported in query";
- goto err_exit;
- }
-
- if (flow->ctr_id == NPC_COUNTER_NONE) {
- errmsg = "Counter is not available";
- goto err_exit;
- }
-
- rc = otx2_flow_mcam_read_counter(mbox, flow->ctr_id, &query->hits);
- if (rc != 0) {
- errcode = EIO;
- errmsg = "Error reading flow counter";
- goto err_exit;
- }
- query->hits_set = 1;
- query->bytes_set = 0;
-
-	if (query->reset) {
-		rc = otx2_flow_mcam_clear_counter(mbox, flow->ctr_id);
-		if (rc != 0) {
-			errcode = EIO;
-			errmsg = "Error clearing flow counter";
-			goto err_exit;
-		}
-	}
-
- return 0;
-
-err_exit:
- rte_flow_error_set(error, errcode,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- errmsg);
- return -rte_errno;
-}
-
-static int
-otx2_flow_dev_dump(struct rte_eth_dev *dev,
- struct rte_flow *flow, FILE *file,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_flow_list *list;
- struct rte_flow *flow_iter;
- uint32_t max_prio, i;
-
- if (file == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- NULL,
- "Invalid file");
- return -EINVAL;
- }
- if (flow != NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_HANDLE,
- NULL,
- "Invalid argument");
- return -EINVAL;
- }
-
- max_prio = hw->npc_flow.flow_max_priority;
-
- for (i = 0; i < max_prio; i++) {
- list = &hw->npc_flow.flow_list[i];
-
- /* List in ascending order of mcam entries */
- TAILQ_FOREACH(flow_iter, list, next) {
- otx2_flow_dump(file, hw, flow_iter);
- }
- }
-
- return 0;
-}
-
-const struct rte_flow_ops otx2_flow_ops = {
- .validate = otx2_flow_validate,
- .create = otx2_flow_create,
- .destroy = otx2_flow_destroy,
- .flush = otx2_flow_flush,
- .query = otx2_flow_query,
- .isolate = otx2_flow_isolate,
- .dev_dump = otx2_flow_dev_dump,
-};
-
-static int
-flow_supp_key_len(uint32_t supp_mask)
-{
- int nib_count = 0;
- while (supp_mask) {
- nib_count++;
- supp_mask &= (supp_mask - 1);
- }
- return nib_count * 4;
-}
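
flow_supp_key_len() is Kernighan's population count scaled by the nibble width: each set bit in the nibble mask contributes 4 bits of key. A standalone sketch:

    #include <stdint.h>
    #include <stdio.h>

    static int
    supp_key_len(uint32_t supp_mask)
    {
            int nib_count = 0;

            while (supp_mask) {
                    nib_count++;
                    supp_mask &= (supp_mask - 1);   /* clear lowest set bit */
            }
            return nib_count * 4;   /* key length in bits */
    }

    int
    main(void)
    {
            printf("%d\n", supp_key_len(0x7));      /* 3 nibbles -> 12 bits */
            return 0;
    }
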
-
-/* Refer to the HRM registers:
- * NPC_AF_INTF(0..1)_LID(0..7)_LT(0..15)_LD(0..1)_CFG
- * and
- * NPC_AF_INTF(0..1)_LDATA(0..1)_FLAGS(0..15)_CFG
- */
-#define BYTESM1_SHIFT 16
-#define HDR_OFF_SHIFT 8
-static void
-flow_update_kex_info(struct npc_xtract_info *xtract_info,
- uint64_t val)
-{
- xtract_info->len = ((val >> BYTESM1_SHIFT) & 0xf) + 1;
- xtract_info->hdr_off = (val >> HDR_OFF_SHIFT) & 0xff;
- xtract_info->key_off = val & 0x3f;
- xtract_info->enable = ((val >> 7) & 0x1);
- xtract_info->flags_enable = ((val >> 6) & 0x1);
-}
-
-static void
-flow_process_mkex_cfg(struct otx2_npc_flow_info *npc,
- struct npc_get_kex_cfg_rsp *kex_rsp)
-{
- volatile uint64_t (*q)[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT]
- [NPC_MAX_LD];
- struct npc_xtract_info *x_info = NULL;
- int lid, lt, ld, fl, ix;
- otx2_dxcfg_t *p;
- uint64_t keyw;
- uint64_t val;
-
- npc->keyx_supp_nmask[NPC_MCAM_RX] =
- kex_rsp->rx_keyx_cfg & 0x7fffffffULL;
- npc->keyx_supp_nmask[NPC_MCAM_TX] =
- kex_rsp->tx_keyx_cfg & 0x7fffffffULL;
- npc->keyx_len[NPC_MCAM_RX] =
- flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_RX]);
- npc->keyx_len[NPC_MCAM_TX] =
- flow_supp_key_len(npc->keyx_supp_nmask[NPC_MCAM_TX]);
-
- keyw = (kex_rsp->rx_keyx_cfg >> 32) & 0x7ULL;
- npc->keyw[NPC_MCAM_RX] = keyw;
- keyw = (kex_rsp->tx_keyx_cfg >> 32) & 0x7ULL;
- npc->keyw[NPC_MCAM_TX] = keyw;
-
- /* Update KEX_LD_FLAG */
- for (ix = 0; ix < NPC_MAX_INTF; ix++) {
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- for (fl = 0; fl < NPC_MAX_LFL; fl++) {
- x_info =
- &npc->prx_fxcfg[ix][ld][fl].xtract[0];
- val = kex_rsp->intf_ld_flags[ix][ld][fl];
- flow_update_kex_info(x_info, val);
- }
- }
- }
-
- /* Update LID, LT and LDATA cfg */
- p = &npc->prx_dxcfg;
- q = (volatile uint64_t (*)[][NPC_MAX_LID][NPC_MAX_LT][NPC_MAX_LD])
- (&kex_rsp->intf_lid_lt_ld);
- for (ix = 0; ix < NPC_MAX_INTF; ix++) {
- for (lid = 0; lid < NPC_MAX_LID; lid++) {
- for (lt = 0; lt < NPC_MAX_LT; lt++) {
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- x_info = &(*p)[ix][lid][lt].xtract[ld];
- val = (*q)[ix][lid][lt][ld];
- flow_update_kex_info(x_info, val);
- }
- }
- }
- }
- /* Update LDATA Flags cfg */
- npc->prx_lfcfg[0].i = kex_rsp->kex_ld_flags[0];
- npc->prx_lfcfg[1].i = kex_rsp->kex_ld_flags[1];
-}
-
-static struct otx2_idev_kex_cfg *
-flow_intra_dev_kex_cfg(void)
-{
- static const char name[] = "octeontx2_intra_device_kex_conf";
- struct otx2_idev_kex_cfg *idev;
- const struct rte_memzone *mz;
-
- mz = rte_memzone_lookup(name);
- if (mz)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, sizeof(struct otx2_idev_kex_cfg),
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz) {
- idev = mz->addr;
- rte_atomic16_set(&idev->kex_refcnt, 0);
- return idev;
- }
- return NULL;
-}
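
flow_intra_dev_kex_cfg() uses the standard memzone lookup-then-reserve pattern so that all processes sharing the device see one copy. A hedged sketch of the same pattern (illustrative zone name and payload; needs an initialized EAL to actually run):

    #include <rte_eal.h>
    #include <rte_memzone.h>

    struct shared_cfg { int refcnt; };

    static struct shared_cfg *
    get_shared_cfg(void)
    {
            static const char name[] = "example_shared_cfg";
            const struct rte_memzone *mz;

            mz = rte_memzone_lookup(name);
            if (mz)
                    return mz->addr;        /* someone reserved it already */

            mz = rte_memzone_reserve_aligned(name, sizeof(struct shared_cfg),
                                             SOCKET_ID_ANY, 0,
                                             RTE_CACHE_LINE_SIZE);
            return mz ? mz->addr : NULL;
    }

    int
    main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    return 1;
            return get_shared_cfg() ? 0 : 1;
    }
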
-
-static int
-flow_fetch_kex_cfg(struct otx2_eth_dev *dev)
-{
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- struct npc_get_kex_cfg_rsp *kex_rsp;
- struct otx2_mbox *mbox = dev->mbox;
- char mkex_pfl_name[MKEX_NAME_LEN];
- struct otx2_idev_kex_cfg *idev;
- int rc = 0;
-
- idev = flow_intra_dev_kex_cfg();
- if (!idev)
- return -ENOMEM;
-
-	/* Has kex_cfg already been read by another driver? */
- if (rte_atomic16_add_return(&idev->kex_refcnt, 1) == 1) {
- /* Call mailbox to get key & data size */
- (void)otx2_mbox_alloc_msg_npc_get_kex_cfg(mbox);
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&kex_rsp);
- if (rc) {
- otx2_err("Failed to fetch NPC keyx config");
- goto done;
- }
- memcpy(&idev->kex_cfg, kex_rsp,
- sizeof(struct npc_get_kex_cfg_rsp));
- }
-
- otx2_mbox_memcpy(mkex_pfl_name,
- idev->kex_cfg.mkex_pfl_name, MKEX_NAME_LEN);
-
- strlcpy((char *)dev->mkex_pfl_name,
- mkex_pfl_name, sizeof(dev->mkex_pfl_name));
-
- flow_process_mkex_cfg(npc, &idev->kex_cfg);
-
-done:
- return rc;
-}
-
-#define OTX2_MCAM_TOT_ENTRIES_96XX (4096)
-#define OTX2_MCAM_TOT_ENTRIES_98XX (16384)
-
-static int otx2_mcam_tot_entries(struct otx2_eth_dev *dev)
-{
- if (otx2_dev_is_98xx(dev))
- return OTX2_MCAM_TOT_ENTRIES_98XX;
- else
- return OTX2_MCAM_TOT_ENTRIES_96XX;
-}
-
-int
-otx2_flow_init(struct otx2_eth_dev *hw)
-{
- uint8_t *mem = NULL, *nix_mem = NULL, *npc_mem = NULL;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- uint32_t bmap_sz, tot_mcam_entries = 0;
- int rc = 0, idx;
-
- rc = flow_fetch_kex_cfg(hw);
- if (rc) {
- otx2_err("Failed to fetch NPC keyx config from idev");
- return rc;
- }
-
- rte_atomic32_init(&npc->mark_actions);
- npc->vtag_actions = 0;
-
- tot_mcam_entries = otx2_mcam_tot_entries(hw);
- npc->mcam_entries = tot_mcam_entries >> npc->keyw[NPC_MCAM_RX];
- /* Free, free_rev, live and live_rev entries */
- bmap_sz = rte_bitmap_get_memory_footprint(npc->mcam_entries);
- mem = rte_zmalloc(NULL, 4 * bmap_sz * npc->flow_max_priority,
- RTE_CACHE_LINE_SIZE);
- if (mem == NULL) {
- otx2_err("Bmap alloc failed");
- rc = -ENOMEM;
- return rc;
- }
-	/* Remember the bitmap base so the error path can free it */
-	npc_mem = mem;
-
- npc->flow_entry_info = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct otx2_mcam_ents_info),
- 0);
- if (npc->flow_entry_info == NULL) {
- otx2_err("flow_entry_info alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->free_entries = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->free_entries == NULL) {
- otx2_err("free_entries alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->free_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->free_entries_rev == NULL) {
- otx2_err("free_entries_rev alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->live_entries = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->live_entries == NULL) {
- otx2_err("live_entries alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->live_entries_rev = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct rte_bitmap *),
- 0);
- if (npc->live_entries_rev == NULL) {
- otx2_err("live_entries_rev alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->flow_list = rte_zmalloc(NULL, npc->flow_max_priority
- * sizeof(struct otx2_flow_list),
- 0);
- if (npc->flow_list == NULL) {
- otx2_err("flow_list alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- for (idx = 0; idx < npc->flow_max_priority; idx++) {
- TAILQ_INIT(&npc->flow_list[idx]);
-
- npc->free_entries[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->free_entries_rev[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->live_entries[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->live_entries_rev[idx] =
- rte_bitmap_init(npc->mcam_entries, mem, bmap_sz);
- mem += bmap_sz;
-
- npc->flow_entry_info[idx].free_ent = 0;
- npc->flow_entry_info[idx].live_ent = 0;
- npc->flow_entry_info[idx].max_id = 0;
- npc->flow_entry_info[idx].min_id = ~(0);
- }
-
- npc->rss_grps = NIX_RSS_GRPS;
-
- bmap_sz = rte_bitmap_get_memory_footprint(npc->rss_grps);
- nix_mem = rte_zmalloc(NULL, bmap_sz, RTE_CACHE_LINE_SIZE);
- if (nix_mem == NULL) {
- otx2_err("Bmap alloc failed");
- rc = -ENOMEM;
- goto err;
- }
-
- npc->rss_grp_entries = rte_bitmap_init(npc->rss_grps, nix_mem, bmap_sz);
-
-	/* Group 0 is reserved for default RSS;
-	 * groups 1-7 are used for the rte_flow RSS action.
-	 */
- rte_bitmap_set(npc->rss_grp_entries, 0);
-
- return 0;
-
-err:
- if (npc->flow_list)
- rte_free(npc->flow_list);
- if (npc->live_entries_rev)
- rte_free(npc->live_entries_rev);
- if (npc->live_entries)
- rte_free(npc->live_entries);
- if (npc->free_entries_rev)
- rte_free(npc->free_entries_rev);
- if (npc->free_entries)
- rte_free(npc->free_entries);
- if (npc->flow_entry_info)
- rte_free(npc->flow_entry_info);
- if (npc_mem)
- rte_free(npc_mem);
- return rc;
-}
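
otx2_flow_init() above carves a single zmalloc'd region into four bitmaps per priority level. The carving step in isolation (illustrative sizes; real rte_bitmap/rte_malloc APIs, so an initialized EAL is assumed):

    #include <rte_bitmap.h>
    #include <rte_eal.h>
    #include <rte_malloc.h>

    #define N_PRIO 2
    #define N_ENTRIES 128

    static int
    carve_bitmaps(struct rte_bitmap *maps[N_PRIO][4])
    {
            uint32_t bmap_sz = rte_bitmap_get_memory_footprint(N_ENTRIES);
            uint8_t *mem = rte_zmalloc(NULL, 4 * bmap_sz * N_PRIO,
                                       RTE_CACHE_LINE_SIZE);
            int p, i;

            if (mem == NULL)
                    return -1;

            for (p = 0; p < N_PRIO; p++) {
                    for (i = 0; i < 4; i++) {
                            maps[p][i] = rte_bitmap_init(N_ENTRIES, mem,
                                                         bmap_sz);
                            mem += bmap_sz; /* hand out the next carve-out */
                    }
            }
            return 0;
    }

    int
    main(int argc, char **argv)
    {
            struct rte_bitmap *maps[N_PRIO][4];

            if (rte_eal_init(argc, argv) < 0)
                    return 1;
            return carve_bitmaps(maps) ? 1 : 0;
    }
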
-
-int
-otx2_flow_fini(struct otx2_eth_dev *hw)
-{
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- int rc;
-
- rc = otx2_flow_free_all_resources(hw);
- if (rc) {
- otx2_err("Error when deleting NPC MCAM entries, counters");
- return rc;
- }
-
- if (npc->flow_list)
- rte_free(npc->flow_list);
- if (npc->live_entries_rev)
- rte_free(npc->live_entries_rev);
- if (npc->live_entries)
- rte_free(npc->live_entries);
- if (npc->free_entries_rev)
- rte_free(npc->free_entries_rev);
- if (npc->free_entries)
- rte_free(npc->free_entries);
- if (npc->flow_entry_info)
- rte_free(npc->flow_entry_info);
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_flow.h b/drivers/net/octeontx2/otx2_flow.h
deleted file mode 100644
index 790e6ef1e8..0000000000
--- a/drivers/net/octeontx2/otx2_flow.h
+++ /dev/null
@@ -1,414 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_FLOW_H__
-#define __OTX2_FLOW_H__
-
-#include <stdint.h>
-
-#include <rte_flow_driver.h>
-#include <rte_malloc.h>
-#include <rte_tailq.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev.h"
-#include "otx2_mbox.h"
-
-struct otx2_eth_dev;
-
-int otx2_flow_init(struct otx2_eth_dev *hw);
-int otx2_flow_fini(struct otx2_eth_dev *hw);
-extern const struct rte_flow_ops otx2_flow_ops;
-
-enum {
- OTX2_INTF_RX = 0,
- OTX2_INTF_TX = 1,
- OTX2_INTF_MAX = 2,
-};
-
-#define NPC_IH_LENGTH 8
-#define NPC_TPID_LENGTH 2
-#define NPC_HIGIG2_LENGTH 16
-#define NPC_MAX_RAW_ITEM_LEN 16
-#define NPC_COUNTER_NONE (-1)
-/* 32 bytes from LDATA_CFG & 32 bytes from FLAGS_CFG */
-#define NPC_MAX_EXTRACT_DATA_LEN (64)
-#define NPC_LDATA_LFLAG_LEN (16)
-#define NPC_MAX_KEY_NIBBLES (31)
-/* Nibble offsets */
-#define NPC_LAYER_KEYX_SZ (3)
-#define NPC_PARSE_KEX_S_LA_OFFSET (7)
-#define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
- ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \
- + NPC_PARSE_KEX_S_LA_OFFSET)
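
Worked example of the offset macro, assuming the usual NPC_LID_LA == 0: LA sits at nibble-mask bit 7 and each later layer adds NPC_LAYER_KEYX_SZ bits.

    #include <stdio.h>

    #define NPC_LID_LA 0    /* assumed; matches the common definition */
    #define NPC_LAYER_KEYX_SZ 3
    #define NPC_PARSE_KEX_S_LA_OFFSET 7
    #define NPC_PARSE_KEX_S_LID_OFFSET(lid) \
            ((((lid) - NPC_LID_LA) * NPC_LAYER_KEYX_SZ) \
             + NPC_PARSE_KEX_S_LA_OFFSET)

    int
    main(void)
    {
            /* LA -> 7, LB -> 10, LC -> 13 */
            printf("%d %d %d\n",
                   NPC_PARSE_KEX_S_LID_OFFSET(0),
                   NPC_PARSE_KEX_S_LID_OFFSET(1),
                   NPC_PARSE_KEX_S_LID_OFFSET(2));
            return 0;
    }
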
-
-/* supported flow actions flags */
-#define OTX2_FLOW_ACT_MARK (1 << 0)
-#define OTX2_FLOW_ACT_FLAG (1 << 1)
-#define OTX2_FLOW_ACT_DROP (1 << 2)
-#define OTX2_FLOW_ACT_QUEUE (1 << 3)
-#define OTX2_FLOW_ACT_RSS (1 << 4)
-#define OTX2_FLOW_ACT_DUP (1 << 5)
-#define OTX2_FLOW_ACT_SEC (1 << 6)
-#define OTX2_FLOW_ACT_COUNT (1 << 7)
-#define OTX2_FLOW_ACT_PF (1 << 8)
-#define OTX2_FLOW_ACT_VF (1 << 9)
-#define OTX2_FLOW_ACT_VLAN_STRIP (1 << 10)
-#define OTX2_FLOW_ACT_VLAN_INSERT (1 << 11)
-#define OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT (1 << 12)
-#define OTX2_FLOW_ACT_VLAN_PCP_INSERT (1 << 13)
-
-/* terminating actions */
-#define OTX2_FLOW_ACT_TERM (OTX2_FLOW_ACT_DROP | \
- OTX2_FLOW_ACT_QUEUE | \
- OTX2_FLOW_ACT_RSS | \
- OTX2_FLOW_ACT_DUP | \
- OTX2_FLOW_ACT_SEC)
-
-/* This mark value indicates flag action */
-#define OTX2_FLOW_FLAG_VAL (0xffff)
-
-#define NIX_RX_ACT_MATCH_OFFSET (40)
-#define NIX_RX_ACT_MATCH_MASK (0xFFFF)
-
-#define NIX_RSS_ACT_GRP_OFFSET (20)
-#define NIX_RSS_ACT_ALG_OFFSET (56)
-#define NIX_RSS_ACT_GRP_MASK (0xFFFFF)
-#define NIX_RSS_ACT_ALG_MASK (0x1F)
-
-/* PMD-specific definition of the opaque struct rte_flow */
-#define OTX2_MAX_MCAM_WIDTH_DWORDS 7
-
-enum npc_mcam_intf {
- NPC_MCAM_RX,
- NPC_MCAM_TX
-};
-
-struct npc_xtract_info {
- /* Length in bytes of pkt data extracted. len = 0
- * indicates that extraction is disabled.
- */
- uint8_t len;
- uint8_t hdr_off; /* Byte offset of proto hdr: extract_src */
- uint8_t key_off; /* Byte offset in MCAM key where data is placed */
- uint8_t enable; /* Extraction enabled or disabled */
- uint8_t flags_enable; /* Flags extraction enabled */
-};
-
-/* Information for a given {LAYER, LTYPE} */
-struct npc_lid_lt_xtract_info {
- /* Info derived from parser configuration */
- uint16_t npc_proto; /* Network protocol identified */
- uint8_t valid_flags_mask; /* Flags applicable */
- uint8_t is_terminating:1; /* No more parsing */
- struct npc_xtract_info xtract[NPC_MAX_LD];
-};
-
-union npc_kex_ldata_flags_cfg {
- struct {
- #if defined(__BIG_ENDIAN_BITFIELD)
-	uint64_t rsvd_62_1 : 61;
- uint64_t lid : 3;
- #else
- uint64_t lid : 3;
-	uint64_t rsvd_62_1 : 61;
- #endif
- } s;
-
- uint64_t i;
-};
-
-typedef struct npc_lid_lt_xtract_info
- otx2_dxcfg_t[NPC_MAX_INTF][NPC_MAX_LID][NPC_MAX_LT];
-typedef struct npc_lid_lt_xtract_info
- otx2_fxcfg_t[NPC_MAX_INTF][NPC_MAX_LD][NPC_MAX_LFL];
-typedef union npc_kex_ldata_flags_cfg otx2_ld_flags_t[NPC_MAX_LD];
-
-/* MBOX_MSG_NPC_GET_DATAX_CFG Response */
-struct npc_get_datax_cfg {
- /* NPC_AF_KEX_LDATA(0..1)_FLAGS_CFG */
- union npc_kex_ldata_flags_cfg ld_flags[NPC_MAX_LD];
- /* Extract information indexed with [LID][LTYPE] */
- struct npc_lid_lt_xtract_info lid_lt_xtract[NPC_MAX_LID][NPC_MAX_LT];
- /* Flags based extract indexed with [LDATA][FLAGS_LOWER_NIBBLE]
- * Fields flags_ena_ld0, flags_ena_ld1 in
- * struct npc_lid_lt_xtract_info indicate if this is applicable
- * for a given {LAYER, LTYPE}
- */
- struct npc_xtract_info flag_xtract[NPC_MAX_LD][NPC_MAX_LT];
-};
-
-struct otx2_mcam_ents_info {
- /* Current max & min values of mcam index */
- uint32_t max_id;
- uint32_t min_id;
- uint32_t free_ent;
- uint32_t live_ent;
-};
-
-struct otx2_flow_dump_data {
- uint8_t lid;
- uint16_t ltype;
-};
-
-struct rte_flow {
- uint8_t nix_intf;
- uint32_t mcam_id;
- int32_t ctr_id;
- uint32_t priority;
- /* Contiguous match string */
- uint64_t mcam_data[OTX2_MAX_MCAM_WIDTH_DWORDS];
- uint64_t mcam_mask[OTX2_MAX_MCAM_WIDTH_DWORDS];
- uint64_t npc_action;
- uint64_t vtag_action;
- struct otx2_flow_dump_data dump_data[32];
- uint16_t num_patterns;
- TAILQ_ENTRY(rte_flow) next;
-};
-
-TAILQ_HEAD(otx2_flow_list, rte_flow);
-
-/* Accessed from ethdev private - otx2_eth_dev */
-struct otx2_npc_flow_info {
- rte_atomic32_t mark_actions;
- uint32_t vtag_actions;
- uint32_t keyx_supp_nmask[NPC_MAX_INTF];/* nibble mask */
- uint32_t keyx_len[NPC_MAX_INTF]; /* per intf key len in bits */
- uint32_t datax_len[NPC_MAX_INTF]; /* per intf data len in bits */
- uint32_t keyw[NPC_MAX_INTF]; /* max key + data len bits */
- uint32_t mcam_entries; /* mcam entries supported */
- otx2_dxcfg_t prx_dxcfg; /* intf, lid, lt, extract */
- otx2_fxcfg_t prx_fxcfg; /* Flag extract */
- otx2_ld_flags_t prx_lfcfg; /* KEX LD_Flags CFG */
- /* mcam entry info per priority level: both free & in-use */
- struct otx2_mcam_ents_info *flow_entry_info;
- /* Bitmap of free preallocated entries in ascending index &
- * descending priority
- */
- struct rte_bitmap **free_entries;
- /* Bitmap of free preallocated entries in descending index &
- * ascending priority
- */
- struct rte_bitmap **free_entries_rev;
- /* Bitmap of live entries in ascending index & descending priority */
- struct rte_bitmap **live_entries;
- /* Bitmap of live entries in descending index & ascending priority */
- struct rte_bitmap **live_entries_rev;
- /* Priority bucket wise tail queue of all rte_flow resources */
- struct otx2_flow_list *flow_list;
- uint32_t rss_grps; /* rss groups supported */
- struct rte_bitmap *rss_grp_entries;
-	uint16_t channel; /* Rx channel */
- uint16_t flow_prealloc_size;
- uint16_t flow_max_priority;
- uint16_t switch_header_type;
-};
-
-struct otx2_parse_state {
- struct otx2_npc_flow_info *npc;
- const struct rte_flow_item *pattern;
- const struct rte_flow_item *last_pattern; /* Temp usage */
- struct rte_flow_error *error;
- struct rte_flow *flow;
- uint8_t tunnel;
- uint8_t terminate;
- uint8_t layer_mask;
- uint8_t lt[NPC_MAX_LID];
- uint8_t flags[NPC_MAX_LID];
- uint8_t *mcam_data; /* point to flow->mcam_data + key_len */
- uint8_t *mcam_mask; /* point to flow->mcam_mask + key_len */
- bool is_vf;
-};
-
-struct otx2_flow_item_info {
- const void *def_mask; /* rte_flow default mask */
- void *hw_mask; /* hardware supported mask */
- int len; /* length of item */
- const void *spec; /* spec to use, NULL implies match any */
- const void *mask; /* mask to use */
- uint8_t hw_hdr_len; /* Extra data len at each layer*/
-};
-
-struct otx2_idev_kex_cfg {
- struct npc_get_kex_cfg_rsp kex_cfg;
- rte_atomic16_t kex_refcnt;
-};
-
-enum npc_kpu_parser_flag {
- NPC_F_NA = 0,
- NPC_F_PKI,
- NPC_F_PKI_VLAN,
- NPC_F_PKI_ETAG,
- NPC_F_PKI_ITAG,
- NPC_F_PKI_MPLS,
- NPC_F_PKI_NSH,
- NPC_F_ETYPE_UNK,
- NPC_F_ETHER_VLAN,
- NPC_F_ETHER_ETAG,
- NPC_F_ETHER_ITAG,
- NPC_F_ETHER_MPLS,
- NPC_F_ETHER_NSH,
- NPC_F_STAG_CTAG,
- NPC_F_STAG_CTAG_UNK,
- NPC_F_STAG_STAG_CTAG,
- NPC_F_STAG_STAG_STAG,
- NPC_F_QINQ_CTAG,
- NPC_F_QINQ_CTAG_UNK,
- NPC_F_QINQ_QINQ_CTAG,
- NPC_F_QINQ_QINQ_QINQ,
- NPC_F_BTAG_ITAG,
- NPC_F_BTAG_ITAG_STAG,
- NPC_F_BTAG_ITAG_CTAG,
- NPC_F_BTAG_ITAG_UNK,
- NPC_F_ETAG_CTAG,
- NPC_F_ETAG_BTAG_ITAG,
- NPC_F_ETAG_STAG,
- NPC_F_ETAG_QINQ,
- NPC_F_ETAG_ITAG,
- NPC_F_ETAG_ITAG_STAG,
- NPC_F_ETAG_ITAG_CTAG,
- NPC_F_ETAG_ITAG_UNK,
- NPC_F_ITAG_STAG_CTAG,
- NPC_F_ITAG_STAG,
- NPC_F_ITAG_CTAG,
- NPC_F_MPLS_4_LABELS,
- NPC_F_MPLS_3_LABELS,
- NPC_F_MPLS_2_LABELS,
- NPC_F_IP_HAS_OPTIONS,
- NPC_F_IP_IP_IN_IP,
- NPC_F_IP_6TO4,
- NPC_F_IP_MPLS_IN_IP,
- NPC_F_IP_UNK_PROTO,
- NPC_F_IP_IP_IN_IP_HAS_OPTIONS,
- NPC_F_IP_6TO4_HAS_OPTIONS,
- NPC_F_IP_MPLS_IN_IP_HAS_OPTIONS,
- NPC_F_IP_UNK_PROTO_HAS_OPTIONS,
- NPC_F_IP6_HAS_EXT,
- NPC_F_IP6_TUN_IP6,
- NPC_F_IP6_MPLS_IN_IP,
- NPC_F_TCP_HAS_OPTIONS,
- NPC_F_TCP_HTTP,
- NPC_F_TCP_HTTPS,
- NPC_F_TCP_PPTP,
- NPC_F_TCP_UNK_PORT,
- NPC_F_TCP_HTTP_HAS_OPTIONS,
- NPC_F_TCP_HTTPS_HAS_OPTIONS,
- NPC_F_TCP_PPTP_HAS_OPTIONS,
- NPC_F_TCP_UNK_PORT_HAS_OPTIONS,
- NPC_F_UDP_VXLAN,
- NPC_F_UDP_VXLAN_NOVNI,
- NPC_F_UDP_VXLAN_NOVNI_NSH,
- NPC_F_UDP_VXLANGPE,
- NPC_F_UDP_VXLANGPE_NSH,
- NPC_F_UDP_VXLANGPE_MPLS,
- NPC_F_UDP_VXLANGPE_NOVNI,
- NPC_F_UDP_VXLANGPE_NOVNI_NSH,
- NPC_F_UDP_VXLANGPE_NOVNI_MPLS,
- NPC_F_UDP_VXLANGPE_UNK,
- NPC_F_UDP_VXLANGPE_NONP,
- NPC_F_UDP_GTP_GTPC,
- NPC_F_UDP_GTP_GTPU_G_PDU,
- NPC_F_UDP_GTP_GTPU_UNK,
- NPC_F_UDP_UNK_PORT,
- NPC_F_UDP_GENEVE,
- NPC_F_UDP_GENEVE_OAM,
- NPC_F_UDP_GENEVE_CRI_OPT,
- NPC_F_UDP_GENEVE_OAM_CRI_OPT,
- NPC_F_GRE_NVGRE,
- NPC_F_GRE_HAS_SRE,
- NPC_F_GRE_HAS_CSUM,
- NPC_F_GRE_HAS_KEY,
- NPC_F_GRE_HAS_SEQ,
- NPC_F_GRE_HAS_CSUM_KEY,
- NPC_F_GRE_HAS_CSUM_SEQ,
- NPC_F_GRE_HAS_KEY_SEQ,
- NPC_F_GRE_HAS_CSUM_KEY_SEQ,
- NPC_F_GRE_HAS_ROUTE,
- NPC_F_GRE_UNK_PROTO,
- NPC_F_GRE_VER1,
- NPC_F_GRE_VER1_HAS_SEQ,
- NPC_F_GRE_VER1_HAS_ACK,
- NPC_F_GRE_VER1_HAS_SEQ_ACK,
- NPC_F_GRE_VER1_UNK_PROTO,
- NPC_F_TU_ETHER_UNK,
- NPC_F_TU_ETHER_CTAG,
- NPC_F_TU_ETHER_CTAG_UNK,
- NPC_F_TU_ETHER_STAG_CTAG,
- NPC_F_TU_ETHER_STAG_CTAG_UNK,
- NPC_F_TU_ETHER_STAG,
- NPC_F_TU_ETHER_STAG_UNK,
- NPC_F_TU_ETHER_QINQ_CTAG,
- NPC_F_TU_ETHER_QINQ_CTAG_UNK,
- NPC_F_TU_ETHER_QINQ,
- NPC_F_TU_ETHER_QINQ_UNK,
- NPC_F_LAST /* has to be the last item */
-};
-
-int otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id);
-
-int otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
- uint64_t *count);
-
-int otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id);
-
-int otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry);
-
-int otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox);
-
-int otx2_flow_update_parse_state(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- int lid, int lt, uint8_t flags);
-
-int otx2_flow_parse_item_basic(const struct rte_flow_item *item,
- struct otx2_flow_item_info *info,
- struct rte_flow_error *error);
-
-void otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask);
-
-int otx2_flow_mcam_alloc_and_write(struct rte_flow *flow,
- struct otx2_mbox *mbox,
- struct otx2_parse_state *pst,
- struct otx2_npc_flow_info *flow_info);
-
-void otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- int lid, int lt);
-
-const struct rte_flow_item *
-otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern);
-
-int otx2_flow_parse_lh(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lg(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lf(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_le(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_ld(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lc(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_lb(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_la(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_higig2_hdr(struct otx2_parse_state *pst);
-
-int otx2_flow_parse_actions(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow);
-
-int otx2_flow_free_all_resources(struct otx2_eth_dev *hw);
-
-int otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid);
-
-void otx2_flow_dump(FILE *file, struct otx2_eth_dev *hw,
- struct rte_flow *flow);
-#endif /* __OTX2_FLOW_H__ */
diff --git a/drivers/net/octeontx2/otx2_flow_ctrl.c b/drivers/net/octeontx2/otx2_flow_ctrl.c
deleted file mode 100644
index 071740de86..0000000000
--- a/drivers/net/octeontx2/otx2_flow_ctrl.c
+++ /dev/null
@@ -1,252 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_rxchan_bpid_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_bp_cfg_req *req;
- struct nix_bp_cfg_rsp *rsp;
- int rc;
-
- if (otx2_dev_is_sdp(dev))
- return 0;
-
- if (enb) {
- req = otx2_mbox_alloc_msg_nix_bp_enable(mbox);
- req->chan_base = 0;
- req->chan_cnt = 1;
- req->bpid_per_chan = 0;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || req->chan_cnt != rsp->chan_cnt) {
- otx2_err("Insufficient BPIDs, alloc=%u < req=%u rc=%d",
- rsp->chan_cnt, req->chan_cnt, rc);
- return rc;
- }
-
- fc->bpid[0] = rsp->chan_bpid[0];
- } else {
- req = otx2_mbox_alloc_msg_nix_bp_disable(mbox);
- req->chan_base = 0;
- req->chan_cnt = 1;
-
- rc = otx2_mbox_process(mbox);
-
- memset(fc->bpid, 0, sizeof(uint16_t) * NIX_MAX_CHAN);
- }
-
- return rc;
-}
-
-int
-otx2_nix_flow_ctrl_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_pause_frm_cfg *req, *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_lbk(dev)) {
- fc_conf->mode = RTE_ETH_FC_NONE;
- return 0;
- }
-
- req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- req->set = 0;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto done;
-
- if (rsp->rx_pause && rsp->tx_pause)
- fc_conf->mode = RTE_ETH_FC_FULL;
- else if (rsp->rx_pause)
- fc_conf->mode = RTE_ETH_FC_RX_PAUSE;
- else if (rsp->tx_pause)
- fc_conf->mode = RTE_ETH_FC_TX_PAUSE;
- else
- fc_conf->mode = RTE_ETH_FC_NONE;
-
-done:
- return rc;
-}
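
The mode decoding above (and its inverse in otx2_nix_flow_ctrl_set below) is a two-bit truth table. A standalone sketch with stand-in values for the RTE_ETH_FC_* constants:

    #include <stdio.h>

    enum fc_mode { FC_NONE, FC_RX_PAUSE, FC_TX_PAUSE, FC_FULL };

    static enum fc_mode
    mode_from_pause(int rx_pause, int tx_pause)
    {
            if (rx_pause && tx_pause)
                    return FC_FULL;
            if (rx_pause)
                    return FC_RX_PAUSE;
            if (tx_pause)
                    return FC_TX_PAUSE;
            return FC_NONE;
    }

    int
    main(void)
    {
            int rx, tx;

            for (rx = 0; rx <= 1; rx++)
                    for (tx = 0; tx <= 1; tx++)
                            printf("rx=%d tx=%d -> mode %d\n", rx, tx,
                                   mode_from_pause(rx, tx));
            return 0;
    }
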
-
-static int
-otx2_nix_cq_bp_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *aq;
- struct otx2_eth_rxq *rxq;
- int i, rc;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- rxq = eth_dev->data->rx_queues[i];
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq) {
-			/* The shared memory buffer can be full.
-			 * Flush it and retry.
-			 */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!aq)
- return -ENOMEM;
- }
- aq->qidx = rxq->rq;
- aq->ctype = NIX_AQ_CTYPE_CQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
-
- if (enb) {
- aq->cq.bpid = fc->bpid[0];
- aq->cq_mask.bpid = ~(aq->cq_mask.bpid);
- aq->cq.bp = rxq->cq_drop;
- aq->cq_mask.bp = ~(aq->cq_mask.bp);
- }
-
- aq->cq.bp_ena = !!enb;
- aq->cq_mask.bp_ena = ~(aq->cq_mask.bp_ena);
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- return 0;
-}
-
-static int
-otx2_nix_rx_fc_cfg(struct rte_eth_dev *eth_dev, bool enb)
-{
- return otx2_nix_cq_bp_cfg(eth_dev, enb);
-}
-
-int
-otx2_nix_flow_ctrl_set(struct rte_eth_dev *eth_dev,
- struct rte_eth_fc_conf *fc_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_pause_frm_cfg *req;
- uint8_t tx_pause, rx_pause;
- int rc = 0;
-
- if (otx2_dev_is_lbk(dev)) {
- otx2_info("No flow control support for LBK bound ethports");
- return -ENOTSUP;
- }
-
- if (fc_conf->high_water || fc_conf->low_water || fc_conf->pause_time ||
- fc_conf->mac_ctrl_frame_fwd || fc_conf->autoneg) {
- otx2_info("Flowctrl parameter is not supported");
- return -EINVAL;
- }
-
- if (fc_conf->mode == fc->mode)
- return 0;
-
- rx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_RX_PAUSE);
- tx_pause = (fc_conf->mode == RTE_ETH_FC_FULL) ||
- (fc_conf->mode == RTE_ETH_FC_TX_PAUSE);
-
- /* Check if TX pause frame is already enabled or not */
- if (fc->tx_pause ^ tx_pause) {
- if (otx2_dev_is_Ax(dev) && eth_dev->data->dev_started) {
- /* on Ax, CQ should be in disabled state
- * while setting flow control configuration.
- */
- otx2_info("Stop the port=%d for setting flow control\n",
- eth_dev->data->port_id);
- return 0;
- }
- /* TX pause frames, enable/disable flowctrl on RX side. */
- rc = otx2_nix_rx_fc_cfg(eth_dev, tx_pause);
- if (rc)
- return rc;
- }
-
- req = otx2_mbox_alloc_msg_cgx_cfg_pause_frm(mbox);
- req->set = 1;
- req->rx_pause = rx_pause;
- req->tx_pause = tx_pause;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- fc->tx_pause = tx_pause;
- fc->rx_pause = rx_pause;
- fc->mode = fc_conf->mode;
-
- return rc;
-}
-
-int
-otx2_nix_update_flow_ctrl_mode(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct rte_eth_fc_conf fc_conf;
-
- if (otx2_dev_is_lbk(dev) || otx2_dev_is_sdp(dev))
- return 0;
-
- memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
- fc_conf.mode = fc->mode;
-
- /* To avoid Link credit deadlock on Ax, disable Tx FC if it's enabled */
- if (otx2_dev_is_Ax(dev) &&
- (dev->npc_flow.switch_header_type != OTX2_PRIV_FLAGS_HIGIG) &&
- (fc_conf.mode == RTE_ETH_FC_FULL || fc_conf.mode == RTE_ETH_FC_RX_PAUSE)) {
- fc_conf.mode =
- (fc_conf.mode == RTE_ETH_FC_FULL ||
- fc_conf.mode == RTE_ETH_FC_TX_PAUSE) ?
- RTE_ETH_FC_TX_PAUSE : RTE_ETH_FC_NONE;
- }
-
- return otx2_nix_flow_ctrl_set(eth_dev, &fc_conf);
-}
-
-int
-otx2_nix_flow_ctrl_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_fc_info *fc = &dev->fc_info;
- struct rte_eth_fc_conf fc_conf;
- int rc;
-
- if (otx2_dev_is_lbk(dev) || otx2_dev_is_sdp(dev))
- return 0;
-
- memset(&fc_conf, 0, sizeof(struct rte_eth_fc_conf));
-	/* Both Rx & Tx flow control get enabled (RTE_ETH_FC_FULL) in HW
-	 * by the AF driver; update that info in the PMD structure.
-	 */
- rc = otx2_nix_flow_ctrl_get(eth_dev, &fc_conf);
- if (rc)
- goto exit;
-
- fc->mode = fc_conf.mode;
- fc->rx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
- (fc_conf.mode == RTE_ETH_FC_RX_PAUSE);
- fc->tx_pause = (fc_conf.mode == RTE_ETH_FC_FULL) ||
- (fc_conf.mode == RTE_ETH_FC_TX_PAUSE);
-
-exit:
- return rc;
-}
diff --git a/drivers/net/octeontx2/otx2_flow_dump.c b/drivers/net/octeontx2/otx2_flow_dump.c
deleted file mode 100644
index 3f86071300..0000000000
--- a/drivers/net/octeontx2/otx2_flow_dump.c
+++ /dev/null
@@ -1,595 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_flow.h"
-
-#define NPC_MAX_FIELD_NAME_SIZE 80
-#define NPC_RX_ACTIONOP_MASK GENMASK(3, 0)
-#define NPC_RX_ACTION_PFFUNC_MASK GENMASK(19, 4)
-#define NPC_RX_ACTION_INDEX_MASK GENMASK(39, 20)
-#define NPC_RX_ACTION_MATCH_MASK GENMASK(55, 40)
-#define NPC_RX_ACTION_FLOWKEY_MASK GENMASK(60, 56)
-
-#define NPC_TX_ACTION_INDEX_MASK GENMASK(31, 12)
-#define NPC_TX_ACTION_MATCH_MASK GENMASK(47, 32)
-
-#define NIX_RX_VTAGACT_VTAG0_RELPTR_MASK GENMASK(7, 0)
-#define NIX_RX_VTAGACT_VTAG0_LID_MASK GENMASK(10, 8)
-#define NIX_RX_VTAGACT_VTAG0_TYPE_MASK GENMASK(14, 12)
-#define NIX_RX_VTAGACT_VTAG0_VALID_MASK BIT_ULL(15)
-
-#define NIX_RX_VTAGACT_VTAG1_RELPTR_MASK GENMASK(39, 32)
-#define NIX_RX_VTAGACT_VTAG1_LID_MASK GENMASK(42, 40)
-#define NIX_RX_VTAGACT_VTAG1_TYPE_MASK GENMASK(46, 44)
-#define NIX_RX_VTAGACT_VTAG1_VALID_MASK BIT_ULL(47)
-
-#define NIX_TX_VTAGACT_VTAG0_RELPTR_MASK GENMASK(7, 0)
-#define NIX_TX_VTAGACT_VTAG0_LID_MASK GENMASK(10, 8)
-#define NIX_TX_VTAGACT_VTAG0_OP_MASK GENMASK(13, 12)
-#define NIX_TX_VTAGACT_VTAG0_DEF_MASK GENMASK(25, 16)
-
-#define NIX_TX_VTAGACT_VTAG1_RELPTR_MASK GENMASK(39, 32)
-#define NIX_TX_VTAGACT_VTAG1_LID_MASK GENMASK(42, 40)
-#define NIX_TX_VTAGACT_VTAG1_OP_MASK GENMASK(45, 44)
-#define NIX_TX_VTAGACT_VTAG1_DEF_MASK GENMASK(57, 48)
-
-struct npc_rx_parse_nibble_s {
- uint16_t chan : 3;
- uint16_t errlev : 1;
- uint16_t errcode : 2;
- uint16_t l2l3bm : 1;
- uint16_t laflags : 2;
- uint16_t latype : 1;
- uint16_t lbflags : 2;
- uint16_t lbtype : 1;
- uint16_t lcflags : 2;
- uint16_t lctype : 1;
- uint16_t ldflags : 2;
- uint16_t ldtype : 1;
- uint16_t leflags : 2;
- uint16_t letype : 1;
- uint16_t lfflags : 2;
- uint16_t lftype : 1;
- uint16_t lgflags : 2;
- uint16_t lgtype : 1;
- uint16_t lhflags : 2;
- uint16_t lhtype : 1;
-} __rte_packed;
-
-const char *intf_str[] = {
- "NIX-RX",
- "NIX-TX",
-};
-
-const char *ltype_str[NPC_MAX_LID][NPC_MAX_LT] = {
- [NPC_LID_LA][0] = "NONE",
- [NPC_LID_LA][NPC_LT_LA_ETHER] = "LA_ETHER",
- [NPC_LID_LA][NPC_LT_LA_IH_NIX_ETHER] = "LA_IH_NIX_ETHER",
- [NPC_LID_LA][NPC_LT_LA_HIGIG2_ETHER] = "LA_HIGIG2_ETHER",
- [NPC_LID_LA][NPC_LT_LA_IH_NIX_HIGIG2_ETHER] = "LA_IH_NIX_HIGIG2_ETHER",
- [NPC_LID_LB][0] = "NONE",
- [NPC_LID_LB][NPC_LT_LB_CTAG] = "LB_CTAG",
- [NPC_LID_LB][NPC_LT_LB_STAG_QINQ] = "LB_STAG_QINQ",
- [NPC_LID_LB][NPC_LT_LB_ETAG] = "LB_ETAG",
- [NPC_LID_LB][NPC_LT_LB_EXDSA] = "LB_EXDSA",
- [NPC_LID_LB][NPC_LT_LB_VLAN_EXDSA] = "LB_VLAN_EXDSA",
- [NPC_LID_LC][0] = "NONE",
- [NPC_LID_LC][NPC_LT_LC_IP] = "LC_IP",
- [NPC_LID_LC][NPC_LT_LC_IP6] = "LC_IP6",
- [NPC_LID_LC][NPC_LT_LC_ARP] = "LC_ARP",
- [NPC_LID_LC][NPC_LT_LC_IP6_EXT] = "LC_IP6_EXT",
- [NPC_LID_LC][NPC_LT_LC_NGIO] = "LC_NGIO",
- [NPC_LID_LD][0] = "NONE",
- [NPC_LID_LD][NPC_LT_LD_ICMP] = "LD_ICMP",
- [NPC_LID_LD][NPC_LT_LD_ICMP6] = "LD_ICMP6",
- [NPC_LID_LD][NPC_LT_LD_UDP] = "LD_UDP",
- [NPC_LID_LD][NPC_LT_LD_TCP] = "LD_TCP",
- [NPC_LID_LD][NPC_LT_LD_SCTP] = "LD_SCTP",
- [NPC_LID_LD][NPC_LT_LD_GRE] = "LD_GRE",
- [NPC_LID_LD][NPC_LT_LD_NVGRE] = "LD_NVGRE",
- [NPC_LID_LE][0] = "NONE",
- [NPC_LID_LE][NPC_LT_LE_VXLAN] = "LE_VXLAN",
- [NPC_LID_LE][NPC_LT_LE_ESP] = "LE_ESP",
- [NPC_LID_LE][NPC_LT_LE_GTPC] = "LE_GTPC",
- [NPC_LID_LE][NPC_LT_LE_GTPU] = "LE_GTPU",
- [NPC_LID_LE][NPC_LT_LE_GENEVE] = "LE_GENEVE",
- [NPC_LID_LE][NPC_LT_LE_VXLANGPE] = "LE_VXLANGPE",
- [NPC_LID_LF][0] = "NONE",
- [NPC_LID_LF][NPC_LT_LF_TU_ETHER] = "LF_TU_ETHER",
- [NPC_LID_LG][0] = "NONE",
- [NPC_LID_LG][NPC_LT_LG_TU_IP] = "LG_TU_IP",
- [NPC_LID_LG][NPC_LT_LG_TU_IP6] = "LG_TU_IP6",
- [NPC_LID_LH][0] = "NONE",
- [NPC_LID_LH][NPC_LT_LH_TU_UDP] = "LH_TU_UDP",
- [NPC_LID_LH][NPC_LT_LH_TU_TCP] = "LH_TU_TCP",
- [NPC_LID_LH][NPC_LT_LH_TU_SCTP] = "LH_TU_SCTP",
- [NPC_LID_LH][NPC_LT_LH_TU_ESP] = "LH_TU_ESP",
-};
-
-static uint16_t
-otx2_get_nibbles(struct rte_flow *flow, uint16_t size, uint32_t bit_offset)
-{
- uint32_t byte_index, noffset;
- uint16_t data, mask;
- uint8_t *bytes;
-
- bytes = (uint8_t *)flow->mcam_data;
- mask = (1ULL << (size * 4)) - 1;
- byte_index = bit_offset / 8;
- noffset = bit_offset % 8;
- data = *(uint16_t *)&bytes[byte_index];
- data >>= noffset;
- data &= mask;
-
- return data;
-}
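
otx2_get_nibbles() pulls up to three nibbles out of the MCAM byte stream at an arbitrary bit offset. An equivalent standalone sketch that builds the 16-bit window byte-by-byte instead of via an unaligned load:

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t
    get_nibbles(const uint8_t *bytes, uint16_t size, uint32_t bit_offset)
    {
            uint16_t mask = (1u << (size * 4)) - 1;
            uint16_t data;

            data = (uint16_t)(bytes[bit_offset / 8] |
                              (bytes[bit_offset / 8 + 1] << 8));
            data >>= bit_offset % 8;
            return data & mask;
    }

    int
    main(void)
    {
            const uint8_t buf[2] = {0x21, 0x43};    /* nibbles 1,2,3,4 */

            printf("%x %x\n",
                   get_nibbles(buf, 1, 4),          /* -> 2 */
                   get_nibbles(buf, 2, 4));         /* -> 32 */
            return 0;
    }
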
-
-static void
-otx2_flow_print_parse_nibbles(FILE *file, struct rte_flow *flow,
- uint64_t parse_nibbles)
-{
- struct npc_rx_parse_nibble_s *rx_parse;
- uint32_t data, offset = 0;
-
- rx_parse = (struct npc_rx_parse_nibble_s *)&parse_nibbles;
-
- if (rx_parse->chan) {
- data = otx2_get_nibbles(flow, 3, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_CHAN:%#03X\n", data);
- offset += 12;
- }
-
- if (rx_parse->errlev) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_ERRLEV:%#X\n", data);
- offset += 4;
- }
-
- if (rx_parse->errcode) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_ERRCODE:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->l2l3bm) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_L2L3_BCAST:%#X\n", data);
- offset += 4;
- }
-
- if (rx_parse->latype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LA_LTYPE:%s\n",
- ltype_str[NPC_LID_LA][data]);
- offset += 4;
- }
-
- if (rx_parse->laflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LA_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lbtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LB_LTYPE:%s\n",
- ltype_str[NPC_LID_LB][data]);
- offset += 4;
- }
-
- if (rx_parse->lbflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LB_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lctype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LC_LTYPE:%s\n",
- ltype_str[NPC_LID_LC][data]);
- offset += 4;
- }
-
- if (rx_parse->lcflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LC_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->ldtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LD_LTYPE:%s\n",
- ltype_str[NPC_LID_LD][data]);
- offset += 4;
- }
-
- if (rx_parse->ldflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LD_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->letype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LE_LTYPE:%s\n",
- ltype_str[NPC_LID_LE][data]);
- offset += 4;
- }
-
- if (rx_parse->leflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LE_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lftype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LF_LTYPE:%s\n",
- ltype_str[NPC_LID_LF][data]);
- offset += 4;
- }
-
- if (rx_parse->lfflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LF_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lgtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LG_LTYPE:%s\n",
- ltype_str[NPC_LID_LG][data]);
- offset += 4;
- }
-
- if (rx_parse->lgflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LG_FLAGS:%#02X\n", data);
- offset += 8;
- }
-
- if (rx_parse->lhtype) {
- data = otx2_get_nibbles(flow, 1, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LH_LTYPE:%s\n",
- ltype_str[NPC_LID_LH][data]);
- offset += 4;
- }
-
- if (rx_parse->lhflags) {
- data = otx2_get_nibbles(flow, 2, offset);
- fprintf(file, "\tNPC_PARSE_NIBBLE_LH_FLAGS:%#02X\n", data);
- }
-}
-
-static void
-otx2_flow_print_xtractinfo(FILE *file, struct npc_xtract_info *lfinfo,
- struct rte_flow *flow, int lid, int lt)
-{
- uint8_t *datastart, *maskstart;
- int i;
-
- datastart = (uint8_t *)&flow->mcam_data + lfinfo->key_off;
- maskstart = (uint8_t *)&flow->mcam_mask + lfinfo->key_off;
-
- fprintf(file, "\t%s, hdr offset:%#X, len:%#X, key offset:%#X, ",
- ltype_str[lid][lt], lfinfo->hdr_off,
- lfinfo->len, lfinfo->key_off);
-
- fprintf(file, "Data:0X");
- for (i = lfinfo->len - 1; i >= 0; i--)
- fprintf(file, "%02X", datastart[i]);
-
- fprintf(file, ", ");
-
- fprintf(file, "Mask:0X");
-
- for (i = lfinfo->len - 1; i >= 0; i--)
- fprintf(file, "%02X", maskstart[i]);
-
- fprintf(file, "\n");
-}
-
-static void
-otx2_flow_print_item(FILE *file, struct otx2_eth_dev *hw,
- struct npc_xtract_info *xinfo, struct rte_flow *flow,
- int intf, int lid, int lt, int ld)
-{
- struct otx2_npc_flow_info *npc_flow = &hw->npc_flow;
- struct npc_xtract_info *lflags_info;
- int i, lf_cfg;
-
- otx2_flow_print_xtractinfo(file, xinfo, flow, lid, lt);
-
- if (xinfo->flags_enable) {
- lf_cfg = npc_flow->prx_lfcfg[ld].i;
-
- if (lf_cfg == lid) {
- for (i = 0; i < NPC_MAX_LFL; i++) {
- lflags_info = npc_flow->prx_fxcfg[intf]
- [ld][i].xtract;
-
- otx2_flow_print_xtractinfo(file, lflags_info,
- flow, lid, lt);
- }
- }
- }
-}
-
-static void
-otx2_flow_dump_patterns(FILE *file, struct otx2_eth_dev *hw,
- struct rte_flow *flow)
-{
- struct otx2_npc_flow_info *npc_flow = &hw->npc_flow;
- struct npc_lid_lt_xtract_info *lt_xinfo;
- struct npc_xtract_info *xinfo;
- uint32_t intf, lid, ld, i;
- uint64_t parse_nibbles;
- uint16_t ltype;
-
- intf = flow->nix_intf;
- parse_nibbles = npc_flow->keyx_supp_nmask[intf];
- otx2_flow_print_parse_nibbles(file, flow, parse_nibbles);
-
- for (i = 0; i < flow->num_patterns; i++) {
- lid = flow->dump_data[i].lid;
- ltype = flow->dump_data[i].ltype;
- lt_xinfo = &npc_flow->prx_dxcfg[intf][lid][ltype];
-
- for (ld = 0; ld < NPC_MAX_LD; ld++) {
- xinfo = <_xinfo->xtract[ld];
- if (!xinfo->enable)
- continue;
- otx2_flow_print_item(file, hw, xinfo, flow, intf, lid,
- ltype, ld);
- }
- }
-}
-
-static void
-otx2_flow_dump_tx_action(FILE *file, uint64_t npc_action)
-{
- char index_name[NPC_MAX_FIELD_NAME_SIZE] = "Index:";
- uint32_t tx_op, index, match_id;
-
-	/* The ActionOp field occupies the same bit range on Tx */
-	tx_op = npc_action & NPC_RX_ACTIONOP_MASK;
-
- fprintf(file, "\tActionOp:");
-
- switch (tx_op) {
- case NIX_TX_ACTIONOP_DROP:
-		fprintf(file, "NIX_TX_ACTIONOP_DROP (%lu)\n",
-			(uint64_t)NIX_TX_ACTIONOP_DROP);
- break;
- case NIX_TX_ACTIONOP_UCAST_DEFAULT:
- fprintf(file, "NIX_TX_ACTIONOP_UCAST_DEFAULT (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_UCAST_DEFAULT);
- break;
- case NIX_TX_ACTIONOP_UCAST_CHAN:
-		fprintf(file, "NIX_TX_ACTIONOP_UCAST_CHAN (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_UCAST_CHAN);
- strncpy(index_name, "Transmit Channel:",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_TX_ACTIONOP_MCAST:
- fprintf(file, "NIX_TX_ACTIONOP_MCAST (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_MCAST);
- strncpy(index_name, "Multicast Table Index:",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_TX_ACTIONOP_DROP_VIOL:
- fprintf(file, "NIX_TX_ACTIONOP_DROP_VIOL (%lu)\n",
- (uint64_t)NIX_TX_ACTIONOP_DROP_VIOL);
- break;
- }
-
- index = ((npc_action & NPC_TX_ACTION_INDEX_MASK) >> 12) & 0xFFFFF;
-
- fprintf(file, "\t%s:%#05X\n", index_name, index);
-
- match_id = ((npc_action & NPC_TX_ACTION_MATCH_MASK) >> 32) & 0xFFFF;
-
- fprintf(file, "\tMatch Id:%#04X\n", match_id);
-}
-
-static void
-otx2_flow_dump_rx_action(FILE *file, uint64_t npc_action)
-{
- uint32_t rx_op, pf_func, index, match_id, flowkey_alg;
- char index_name[NPC_MAX_FIELD_NAME_SIZE] = "Index:";
-
- rx_op = npc_action & NPC_RX_ACTIONOP_MASK;
-
- fprintf(file, "\tActionOp:");
-
- switch (rx_op) {
- case NIX_RX_ACTIONOP_DROP:
- fprintf(file, "NIX_RX_ACTIONOP_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_DROP);
- break;
- case NIX_RX_ACTIONOP_UCAST:
- fprintf(file, "NIX_RX_ACTIONOP_UCAST (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_UCAST);
- strncpy(index_name, "RQ Index", NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_UCAST_IPSEC:
- fprintf(file, "NIX_RX_ACTIONOP_UCAST_IPSEC (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_UCAST_IPSEC);
- strncpy(index_name, "RQ Index:", NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_MCAST:
- fprintf(file, "NIX_RX_ACTIONOP_MCAST (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_MCAST);
- strncpy(index_name, "Multicast/mirror table index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_RSS:
- fprintf(file, "NIX_RX_ACTIONOP_RSS (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_RSS);
- strncpy(index_name, "RSS Group Index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- case NIX_RX_ACTIONOP_PF_FUNC_DROP:
- fprintf(file, "NIX_RX_ACTIONOP_PF_FUNC_DROP (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_PF_FUNC_DROP);
- break;
- case NIX_RX_ACTIONOP_MIRROR:
- fprintf(file, "NIX_RX_ACTIONOP_MIRROR (%lu)\n",
- (uint64_t)NIX_RX_ACTIONOP_MIRROR);
- strncpy(index_name, "Multicast/mirror table index",
- NPC_MAX_FIELD_NAME_SIZE);
- break;
- }
-
- pf_func = ((npc_action & NPC_RX_ACTION_PFFUNC_MASK) >> 4) & 0xFFFF;
-
- fprintf(file, "\tPF_FUNC: %#04X\n", pf_func);
-
- index = ((npc_action & NPC_RX_ACTION_INDEX_MASK) >> 20) & 0xFFFFF;
-
- fprintf(file, "\t%s:%#05X\n", index_name, index);
-
- match_id = ((npc_action & NPC_RX_ACTION_MATCH_MASK) >> 40) & 0xFFFF;
-
- fprintf(file, "\tMatch Id:%#04X\n", match_id);
-
- flowkey_alg = ((npc_action & NPC_RX_ACTION_FLOWKEY_MASK) >> 56) & 0x1F;
-
- fprintf(file, "\tFlow Key Alg:%#X\n", flowkey_alg);
-}
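
The RX variant above reads the same kind of word from the other direction; otx2_flow_parse_actions() further down composes it. A hedged sketch of that composition, assuming NIX_RX_ACTIONOP_UCAST is visible from the driver headers and using the bit positions taken from the shifts above (pf_func at bit 4, queue index at bit 20, mark at bit 40):

    static inline uint64_t
    make_rx_ucast_action(uint16_t pf_func, uint32_t rq, uint16_t mark)
    {
            uint64_t act = NIX_RX_ACTIONOP_UCAST;

            act |= (uint64_t)pf_func << 4;   /* destination PF/VF */
            act |= (uint64_t)rq << 20;       /* receive queue index */
            act |= (uint64_t)mark << 40;     /* MARK value, 0 = unset */
            return act;
    }
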
-
-static void
-otx2_flow_dump_parsed_action(FILE *file, uint64_t npc_action, bool is_rx)
-{
- if (is_rx) {
- fprintf(file, "NPC RX Action:%#016lX\n", npc_action);
- otx2_flow_dump_rx_action(file, npc_action);
- } else {
- fprintf(file, "NPC TX Action:%#016lX\n", npc_action);
- otx2_flow_dump_tx_action(file, npc_action);
- }
-}
-
-static void
-otx2_flow_dump_rx_vtag_action(FILE *file, uint64_t vtag_action)
-{
- uint32_t type, lid, relptr;
-
- if (vtag_action & NIX_RX_VTAGACT_VTAG0_VALID_MASK) {
- relptr = vtag_action & NIX_RX_VTAGACT_VTAG0_RELPTR_MASK;
- lid = ((vtag_action & NIX_RX_VTAGACT_VTAG0_LID_MASK) >> 8)
- & 0x7;
- type = ((vtag_action & NIX_RX_VTAGACT_VTAG0_TYPE_MASK) >> 12)
- & 0x7;
-
- fprintf(file, "\tVTAG0:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\ttype:%#X\n", type);
- }
-
- if (vtag_action & NIX_RX_VTAGACT_VTAG1_VALID_MASK) {
- relptr = ((vtag_action & NIX_RX_VTAGACT_VTAG1_RELPTR_MASK)
- >> 32) & 0xFF;
- lid = ((vtag_action & NIX_RX_VTAGACT_VTAG1_LID_MASK) >> 40)
- & 0x7;
- type = ((vtag_action & NIX_RX_VTAGACT_VTAG1_TYPE_MASK) >> 44)
- & 0x7;
-
- fprintf(file, "\tVTAG1:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\ttype:%#X\n", type);
- }
-}
-
-static void
-otx2_get_vtag_opname(uint32_t op, char *opname, int len)
-{
- switch (op) {
- case 0x0:
- strncpy(opname, "NOP", len - 1);
- break;
- case 0x1:
- strncpy(opname, "INSERT", len - 1);
- break;
- case 0x2:
- strncpy(opname, "REPLACE", len - 1);
- break;
- }
-}
-
-static void
-otx2_flow_dump_tx_vtag_action(FILE *file, uint64_t vtag_action)
-{
- uint32_t relptr, lid, op, vtag_def;
- char opname[10];
-
- relptr = vtag_action & NIX_TX_VTAGACT_VTAG0_RELPTR_MASK;
- lid = ((vtag_action & NIX_TX_VTAGACT_VTAG0_LID_MASK) >> 8) & 0x7;
- op = ((vtag_action & NIX_TX_VTAGACT_VTAG0_OP_MASK) >> 12) & 0x3;
- vtag_def = ((vtag_action & NIX_TX_VTAGACT_VTAG0_DEF_MASK) >> 16)
- & 0x3FF;
-
- otx2_get_vtag_opname(op, opname, sizeof(opname));
-
- fprintf(file, "\tVTAG0 relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\top:%s\n", opname);
- fprintf(file, "\tvtag_def:%#X\n", vtag_def);
-
- relptr = ((vtag_action & NIX_TX_VTAGACT_VTAG1_RELPTR_MASK) >> 32)
- & 0xFF;
- lid = ((vtag_action & NIX_TX_VTAGACT_VTAG1_LID_MASK) >> 40) & 0x7;
- op = ((vtag_action & NIX_TX_VTAGACT_VTAG1_OP_MASK) >> 44) & 0x3;
- vtag_def = ((vtag_action & NIX_TX_VTAGACT_VTAG1_DEF_MASK) >> 48)
- & 0x3FF;
-
- otx2_get_vtag_opname(op, opname, sizeof(opname));
-
- fprintf(file, "\tVTAG1:relptr:%#X\n", relptr);
- fprintf(file, "\tlid:%#X\n", lid);
- fprintf(file, "\top:%s\n", opname);
- fprintf(file, "\tvtag_def:%#X\n", vtag_def);
-}
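
Both halves of the TX VTAG word share one layout, with VTAG1 offset by 32 bits; a compact equivalent of the extraction above, with hypothetical names and the masks reduced to plain shifts:

    #include <stdint.h>

    struct vtag_fields {
            uint8_t  relptr;    /* offset of the tag in the packet */
            uint8_t  lid;       /* layer the offset is relative to */
            uint8_t  op;        /* 0 = NOP, 1 = INSERT, 2 = REPLACE */
            uint16_t vtag_def;  /* index into the VTAG definition table */
    };

    static inline struct vtag_fields
    decode_tx_vtag(uint64_t w, int second)
    {
            int s = second ? 32 : 0;  /* VTAG1 fields start at bit 32 */

            return (struct vtag_fields) {
                    .relptr   = (w >> s) & 0xFF,
                    .lid      = (w >> (s + 8)) & 0x7,
                    .op       = (w >> (s + 12)) & 0x3,
                    .vtag_def = (w >> (s + 16)) & 0x3FF,
            };
    }
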
-
-static void
-otx2_flow_dump_vtag_action(FILE *file, uint64_t vtag_action, bool is_rx)
-{
- if (is_rx) {
- fprintf(file, "NPC RX VTAG Action:%#016lX\n", vtag_action);
- otx2_flow_dump_rx_vtag_action(file, vtag_action);
- } else {
- fprintf(file, "NPC TX VTAG Action:%#016lX\n", vtag_action);
- otx2_flow_dump_tx_vtag_action(file, vtag_action);
- }
-}
-
-void
-otx2_flow_dump(FILE *file, struct otx2_eth_dev *hw, struct rte_flow *flow)
-{
- bool is_rx = 0;
- int i;
-
- fprintf(file, "MCAM Index:%d\n", flow->mcam_id);
- fprintf(file, "Interface :%s (%d)\n", intf_str[flow->nix_intf],
- flow->nix_intf);
- fprintf(file, "Priority :%d\n", flow->priority);
-
- if (flow->nix_intf == NIX_INTF_RX)
- is_rx = 1;
-
- otx2_flow_dump_parsed_action(file, flow->npc_action, is_rx);
- otx2_flow_dump_vtag_action(file, flow->vtag_action, is_rx);
- fprintf(file, "Patterns:\n");
- otx2_flow_dump_patterns(file, hw, flow);
-
- fprintf(file, "MCAM Raw Data :\n");
-
- for (i = 0; i < OTX2_MAX_MCAM_WIDTH_DWORDS; i++) {
- fprintf(file, "\tDW%d :%016lX\n", i, flow->mcam_data[i]);
- fprintf(file, "\tDW%d_Mask:%016lX\n", i, flow->mcam_mask[i]);
- }
-
- fprintf(file, "\n");
-}
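
Applications reach this dump routine through the generic flow API; a minimal usage sketch, assuming the PMD wires otx2_flow_dump() into its .dev_dump rte_flow op:

    #include <stdio.h>
    #include <rte_flow.h>

    /* Dump all flow rules of a port to stdout; a NULL flow pointer
     * asks the PMD to dump every rule it knows about.
     */
    static int
    dump_port_flows(uint16_t port_id)
    {
            struct rte_flow_error err;

            return rte_flow_dev_dump(port_id, NULL, stdout, &err);
    }
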
diff --git a/drivers/net/octeontx2/otx2_flow_parse.c b/drivers/net/octeontx2/otx2_flow_parse.c
deleted file mode 100644
index 91267bbb81..0000000000
--- a/drivers/net/octeontx2/otx2_flow_parse.c
+++ /dev/null
@@ -1,1239 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-const struct rte_flow_item *
-otx2_flow_skip_void_and_any_items(const struct rte_flow_item *pattern)
-{
- while ((pattern->type == RTE_FLOW_ITEM_TYPE_VOID) ||
- (pattern->type == RTE_FLOW_ITEM_TYPE_ANY))
- pattern++;
-
- return pattern;
-}
-
-/*
- * Tunnel+ESP, Tunnel+ICMP4/6, Tunnel+TCP, Tunnel+UDP,
- * Tunnel+SCTP
- */
-int
-otx2_flow_parse_lh(struct otx2_parse_state *pst)
-{
- struct otx2_flow_item_info info;
- char hw_mask[64];
- int lid, lt;
- int rc;
-
- if (!pst->tunnel)
- return 0;
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LH;
-
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_UDP:
- lt = NPC_LT_LH_TU_UDP;
- info.def_mask = &rte_flow_item_udp_mask;
- info.len = sizeof(struct rte_flow_item_udp);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- lt = NPC_LT_LH_TU_TCP;
- info.def_mask = &rte_flow_item_tcp_mask;
- info.len = sizeof(struct rte_flow_item_tcp);
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- lt = NPC_LT_LH_TU_SCTP;
- info.def_mask = &rte_flow_item_sctp_mask;
- info.len = sizeof(struct rte_flow_item_sctp);
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- lt = NPC_LT_LH_TU_ESP;
- info.def_mask = &rte_flow_item_esp_mask;
- info.len = sizeof(struct rte_flow_item_esp);
- break;
- default:
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
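
LH is only reached once a tunnel has been seen (pst->tunnel is set by the LE/LD parsers below). An illustrative rte_flow pattern that exercises it, matching the inner UDP header of a VXLAN-encapsulated packet; specs and masks are elided:

    #include <rte_flow.h>

    static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },   /* LA: outer Ethernet */
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },  /* LC: outer IP */
            { .type = RTE_FLOW_ITEM_TYPE_UDP },   /* LD: outer UDP */
            { .type = RTE_FLOW_ITEM_TYPE_VXLAN }, /* LE: sets pst->tunnel */
            { .type = RTE_FLOW_ITEM_TYPE_ETH },   /* LF: tunneled Ethernet */
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },  /* LG: tunneled IP */
            { .type = RTE_FLOW_ITEM_TYPE_UDP },   /* LH: tunneled UDP */
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
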
-
-/* Tunnel+IPv4, Tunnel+IPv6 */
-int
-otx2_flow_parse_lg(struct otx2_parse_state *pst)
-{
- struct otx2_flow_item_info info;
- char hw_mask[64];
- int lid, lt;
- int rc;
-
- if (!pst->tunnel)
- return 0;
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LG;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV4) {
- lt = NPC_LT_LG_TU_IP;
- info.def_mask = &rte_flow_item_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_ipv4);
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_IPV6) {
- lt = NPC_LT_LG_TU_IP6;
- info.def_mask = &rte_flow_item_ipv6_mask;
- info.len = sizeof(struct rte_flow_item_ipv6);
- } else {
- /* There is no tunneled IP header */
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* Tunnel+Ether */
-int
-otx2_flow_parse_lf(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern, *last_pattern;
- struct rte_flow_item_eth hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- int nr_vlans = 0;
- int rc;
-
- /* We hit this layer if there is a tunneling protocol */
- if (!pst->tunnel)
- return 0;
-
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
- return 0;
-
- lid = NPC_LID_LF;
- lt = NPC_LT_LF_TU_ETHER;
- lflags = 0;
-
- info.def_mask = &rte_flow_item_vlan_mask;
- /* No match support for vlan tags */
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- /* Look ahead and find out any VLAN tags. These can be
- * detected but no data matching is available.
- */
- last_pattern = pst->pattern;
- pattern = pst->pattern + 1;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- nr_vlans++;
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc != 0)
- return rc;
- last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
- otx2_npc_dbg("Nr_vlans = %d", nr_vlans);
- switch (nr_vlans) {
- case 0:
- break;
- case 1:
- lflags = NPC_F_TU_ETHER_CTAG;
- break;
- case 2:
- lflags = NPC_F_TU_ETHER_STAG_CTAG;
- break;
- default:
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- last_pattern,
- "more than 2 vlans with tunneled Ethernet "
- "not supported");
- return -rte_errno;
- }
-
- info.def_mask = &rte_flow_item_eth_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_eth);
- info.hw_hdr_len = 0;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- pst->pattern = last_pattern;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-int
-otx2_flow_parse_le(struct otx2_parse_state *pst)
-{
- /*
- * We are positioned at UDP. Scan ahead and look for
- * UDP encapsulated tunnel protocols. If available,
- * parse them. In that case handle this:
- * - RTE spec assumes we point to tunnel header.
- * - NPC parser provides offset from UDP header.
- */
-
- /*
- * Note: it would be better to split the flags into two
- * nibbles:
- * - the higher nibble can carry flags,
- * - the lower nibble can further enumerate protocols
- *   and allow flag-based extraction.
- */
- const struct rte_flow_item *pattern = pst->pattern;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- char hw_mask[64];
- int rc;
-
- if (pst->tunnel)
- return 0;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LE);
-
- info.spec = NULL;
- info.mask = NULL;
- info.hw_mask = NULL;
- info.def_mask = NULL;
- info.len = 0;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LE;
- lflags = 0;
-
- /* Ensure we are not matching anything in UDP */
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc)
- return rc;
-
- info.hw_mask = &hw_mask;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- otx2_npc_dbg("Pattern->type = %d", pattern->type);
- switch (pattern->type) {
- case RTE_FLOW_ITEM_TYPE_VXLAN:
- lflags = NPC_F_UDP_VXLAN;
- info.def_mask = &rte_flow_item_vxlan_mask;
- info.len = sizeof(struct rte_flow_item_vxlan);
- lt = NPC_LT_LE_VXLAN;
- break;
- case RTE_FLOW_ITEM_TYPE_ESP:
- lt = NPC_LT_LE_ESP;
- info.def_mask = &rte_flow_item_esp_mask;
- info.len = sizeof(struct rte_flow_item_esp);
- break;
- case RTE_FLOW_ITEM_TYPE_GTPC:
- lflags = NPC_F_UDP_GTP_GTPC;
- info.def_mask = &rte_flow_item_gtp_mask;
- info.len = sizeof(struct rte_flow_item_gtp);
- lt = NPC_LT_LE_GTPC;
- break;
- case RTE_FLOW_ITEM_TYPE_GTPU:
- lflags = NPC_F_UDP_GTP_GTPU_G_PDU;
- info.def_mask = &rte_flow_item_gtp_mask;
- info.len = sizeof(struct rte_flow_item_gtp);
- lt = NPC_LT_LE_GTPU;
- break;
- case RTE_FLOW_ITEM_TYPE_GENEVE:
- lflags = NPC_F_UDP_GENEVE;
- info.def_mask = &rte_flow_item_geneve_mask;
- info.len = sizeof(struct rte_flow_item_geneve);
- lt = NPC_LT_LE_GENEVE;
- break;
- case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
- lflags = NPC_F_UDP_VXLANGPE;
- info.def_mask = &rte_flow_item_vxlan_gpe_mask;
- info.len = sizeof(struct rte_flow_item_vxlan_gpe);
- lt = NPC_LT_LE_VXLANGPE;
- break;
- default:
- return 0;
- }
-
- pst->tunnel = 1;
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-static int
-flow_parse_mpls_label_stack(struct otx2_parse_state *pst, int *flag)
-{
- int nr_labels = 0;
- const struct rte_flow_item *pattern = pst->pattern;
- struct otx2_flow_item_info info;
- int rc;
- uint8_t flag_list[] = {0, NPC_F_MPLS_2_LABELS,
- NPC_F_MPLS_3_LABELS, NPC_F_MPLS_4_LABELS};
-
- /*
- * pst->pattern points to first MPLS label. We only check
- * that subsequent labels do not have anything to match.
- */
- info.def_mask = &rte_flow_item_mpls_mask;
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_mpls);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- while (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS) {
- nr_labels++;
-
- /* Basic validation of 2nd/3rd/4th mpls item */
- if (nr_labels > 1) {
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
- }
- pst->last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
-
- if (nr_labels > 4) {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->last_pattern,
- "more than 4 mpls labels not supported");
- return -rte_errno;
- }
-
- *flag = flag_list[nr_labels - 1];
- return 0;
-}
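
As a worked example of the flag_list[] mapping above: a three-label stack yields NPC_F_MPLS_3_LABELS, and only the first label may carry spec/mask data (label value hypothetical):

    #include <rte_flow.h>

    static const struct rte_flow_item_mpls mpls_spec = {
            .label_tc_s = { 0x00, 0x01, 0x01 },  /* label 16, S bit set */
    };

    static const struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_MPLS, .spec = &mpls_spec },
            { .type = RTE_FLOW_ITEM_TYPE_MPLS },  /* presence only */
            { .type = RTE_FLOW_ITEM_TYPE_MPLS },  /* presence only */
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
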
-
-int
-otx2_flow_parse_mpls(struct otx2_parse_state *pst, int lid)
-{
- /* Find number of MPLS labels */
- struct rte_flow_item_mpls hw_mask;
- struct otx2_flow_item_info info;
- int lt, lflags;
- int rc;
-
- lflags = 0;
-
- if (lid == NPC_LID_LC)
- lt = NPC_LT_LC_MPLS;
- else if (lid == NPC_LID_LD)
- lt = NPC_LT_LD_TU_MPLS_IN_IP;
- else
- lt = NPC_LT_LE_TU_MPLS_IN_UDP;
-
- /* Prepare for parsing the first item */
- info.def_mask = &rte_flow_item_mpls_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_mpls);
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- /*
- * Parse for more labels.
- * This sets lflags and pst->last_pattern correctly.
- */
- rc = flow_parse_mpls_label_stack(pst, &lflags);
- if (rc != 0)
- return rc;
-
- pst->tunnel = 1;
- pst->pattern = pst->last_pattern;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-/*
- * ICMP, ICMP6, UDP, TCP, SCTP, VXLAN, GRE, NVGRE,
- * GTP, GTPC, GTPU, ESP
- *
- * Note: UDP tunnel protocols are identified by flags.
- * LPTR for these protocols still points to the UDP
- * header; flag-based extraction is needed to support
- * this.
- */
-int
-otx2_flow_parse_ld(struct otx2_parse_state *pst)
-{
- char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- uint32_t gre_key_mask = 0xffffffff;
- struct otx2_flow_item_info info;
- int lid, lt, lflags;
- int rc;
-
- if (pst->tunnel) {
- /* We have already parsed an MPLS or IPv4/v6 header
- * followed by another MPLS or IPv4/v6 header, so
- * subsequent TCP/UDP etc. are parsed as tunneled
- * versions. Skip this layer, except for tunneled
- * MPLS; if LC is MPLS, all stacked MPLS labels have
- * already been skipped.
- */
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LD);
- return 0;
- }
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.def_mask = NULL;
- info.len = 0;
- info.hw_hdr_len = 0;
-
- lid = NPC_LID_LD;
- lflags = 0;
-
- otx2_npc_dbg("Pst->pattern->type = %d", pst->pattern->type);
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_ICMP:
- if (pst->lt[NPC_LID_LC] == NPC_LT_LC_IP6)
- lt = NPC_LT_LD_ICMP6;
- else
- lt = NPC_LT_LD_ICMP;
- info.def_mask = &rte_flow_item_icmp_mask;
- info.len = sizeof(struct rte_flow_item_icmp);
- break;
- case RTE_FLOW_ITEM_TYPE_UDP:
- lt = NPC_LT_LD_UDP;
- info.def_mask = &rte_flow_item_udp_mask;
- info.len = sizeof(struct rte_flow_item_udp);
- break;
- case RTE_FLOW_ITEM_TYPE_TCP:
- lt = NPC_LT_LD_TCP;
- info.def_mask = &rte_flow_item_tcp_mask;
- info.len = sizeof(struct rte_flow_item_tcp);
- break;
- case RTE_FLOW_ITEM_TYPE_SCTP:
- lt = NPC_LT_LD_SCTP;
- info.def_mask = &rte_flow_item_sctp_mask;
- info.len = sizeof(struct rte_flow_item_sctp);
- break;
- case RTE_FLOW_ITEM_TYPE_GRE:
- lt = NPC_LT_LD_GRE;
- info.def_mask = &rte_flow_item_gre_mask;
- info.len = sizeof(struct rte_flow_item_gre);
- break;
- case RTE_FLOW_ITEM_TYPE_GRE_KEY:
- lt = NPC_LT_LD_GRE;
- info.def_mask = &gre_key_mask;
- info.len = sizeof(gre_key_mask);
- info.hw_hdr_len = 4;
- break;
- case RTE_FLOW_ITEM_TYPE_NVGRE:
- lt = NPC_LT_LD_NVGRE;
- lflags = NPC_F_GRE_NVGRE;
- info.def_mask = &rte_flow_item_nvgre_mask;
- info.len = sizeof(struct rte_flow_item_nvgre);
- /* Further IP/Ethernet are parsed as tunneled */
- pst->tunnel = 1;
- break;
- default:
- return 0;
- }
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-static inline void
-flow_check_lc_ip_tunnel(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern = pst->pattern + 1;
-
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- if (pattern->type == RTE_FLOW_ITEM_TYPE_MPLS ||
- pattern->type == RTE_FLOW_ITEM_TYPE_IPV4 ||
- pattern->type == RTE_FLOW_ITEM_TYPE_IPV6)
- pst->tunnel = 1;
-}
-
-static int
-otx2_flow_raw_item_prepare(const struct rte_flow_item_raw *raw_spec,
- const struct rte_flow_item_raw *raw_mask,
- struct otx2_flow_item_info *info,
- uint8_t *spec_buf, uint8_t *mask_buf)
-{
- uint32_t custom_hdr_size = 0;
-
- memset(spec_buf, 0, NPC_MAX_RAW_ITEM_LEN);
- memset(mask_buf, 0, NPC_MAX_RAW_ITEM_LEN);
- custom_hdr_size = raw_spec->offset + raw_spec->length;
-
- memcpy(spec_buf + raw_spec->offset, raw_spec->pattern,
- raw_spec->length);
-
- if (raw_mask->pattern) {
- memcpy(mask_buf + raw_spec->offset, raw_mask->pattern,
- raw_spec->length);
- } else {
- memset(mask_buf + raw_spec->offset, 0xFF, raw_spec->length);
- }
-
- info->len = custom_hdr_size;
- info->spec = spec_buf;
- info->mask = mask_buf;
-
- return 0;
-}
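
For illustration, a RAW item with offset 2 and length 4 lands in bytes 2..5 of the staging buffers above, and a missing mask pattern makes those bytes an exact match (0xFF). Values are hypothetical; the .relative flag is what the LC handler just below insists on:

    #include <rte_flow.h>

    static const uint8_t raw_pat[4] = { 0xde, 0xad, 0xbe, 0xef };

    static const struct rte_flow_item_raw raw_spec = {
            .relative = 1,  /* required by otx2_flow_parse_lc() */
            .offset   = 2,  /* spec_buf[2..5] receive the pattern */
            .length   = 4,
            .pattern  = raw_pat,
            /* no mask item given: mask_buf[2..5] become 0xFF */
    };
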
-
-/* Outer IPv4, Outer IPv6, MPLS, ARP */
-int
-otx2_flow_parse_lc(struct otx2_parse_state *pst)
-{
- uint8_t raw_spec_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t raw_mask_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- const struct rte_flow_item_raw *raw_spec;
- struct otx2_flow_item_info info;
- int lid, lt, len;
- int rc;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_MPLS)
- return otx2_flow_parse_mpls(pst, NPC_LID_LC);
-
- info.hw_mask = &hw_mask;
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = 0;
- lid = NPC_LID_LC;
-
- switch (pst->pattern->type) {
- case RTE_FLOW_ITEM_TYPE_IPV4:
- lt = NPC_LT_LC_IP;
- info.def_mask = &rte_flow_item_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_ipv4);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6:
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_IP6;
- info.def_mask = &rte_flow_item_ipv6_mask;
- info.len = sizeof(struct rte_flow_item_ipv6);
- break;
- case RTE_FLOW_ITEM_TYPE_ARP_ETH_IPV4:
- lt = NPC_LT_LC_ARP;
- info.def_mask = &rte_flow_item_arp_eth_ipv4_mask;
- info.len = sizeof(struct rte_flow_item_arp_eth_ipv4);
- break;
- case RTE_FLOW_ITEM_TYPE_IPV6_EXT:
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_IP6_EXT;
- info.def_mask = &rte_flow_item_ipv6_ext_mask;
- info.len = sizeof(struct rte_flow_item_ipv6_ext);
- info.hw_hdr_len = 40;
- break;
- case RTE_FLOW_ITEM_TYPE_RAW:
- raw_spec = pst->pattern->spec;
- if (!raw_spec->relative)
- return 0;
-
- len = raw_spec->length + raw_spec->offset;
- if (len > NPC_MAX_RAW_ITEM_LEN) {
- rte_flow_error_set(pst->error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Spec length too big");
- return -rte_errno;
- }
-
- otx2_flow_raw_item_prepare((const struct rte_flow_item_raw *)
- pst->pattern->spec,
- (const struct rte_flow_item_raw *)
- pst->pattern->mask, &info,
- raw_spec_buf, raw_mask_buf);
-
- lid = NPC_LID_LC;
- lt = NPC_LT_LC_NGIO;
- info.hw_mask = &hw_mask;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- break;
- default:
- /* No match at this layer */
- return 0;
- }
-
- /* Identify if IP tunnels MPLS or IPv4/v6 */
- flow_check_lc_ip_tunnel(pst);
-
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-/* VLAN, ETAG */
-int
-otx2_flow_parse_lb(struct otx2_parse_state *pst)
-{
- const struct rte_flow_item *pattern = pst->pattern;
- uint8_t raw_spec_buf[NPC_MAX_RAW_ITEM_LEN];
- uint8_t raw_mask_buf[NPC_MAX_RAW_ITEM_LEN];
- const struct rte_flow_item *last_pattern;
- const struct rte_flow_item_raw *raw_spec;
- char hw_mask[NPC_MAX_EXTRACT_DATA_LEN];
- struct otx2_flow_item_info info;
- int lid, lt, lflags, len;
- int nr_vlans = 0;
- int rc;
-
- info.spec = NULL;
- info.mask = NULL;
- info.hw_hdr_len = NPC_TPID_LENGTH;
-
- lid = NPC_LID_LB;
- lflags = 0;
- last_pattern = pattern;
-
- if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- /* An RTE VLAN item is either 802.1Q or 802.1ad,
- * which maps to CTAG/STAG. We decide based on the
- * number of VLANs present. Matching is supported
- * on the first tag only.
- */
- info.def_mask = &rte_flow_item_vlan_mask;
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
-
- pattern = pst->pattern;
- while (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- nr_vlans++;
-
- /* Basic validation of 2nd/3rd vlan item */
- if (nr_vlans > 1) {
- otx2_npc_dbg("Vlans = %d", nr_vlans);
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
- }
- last_pattern = pattern;
- pattern++;
- pattern = otx2_flow_skip_void_and_any_items(pattern);
- }
-
- switch (nr_vlans) {
- case 1:
- lt = NPC_LT_LB_CTAG;
- break;
- case 2:
- lt = NPC_LT_LB_STAG_QINQ;
- lflags = NPC_F_STAG_CTAG;
- break;
- case 3:
- lt = NPC_LT_LB_STAG_QINQ;
- lflags = NPC_F_STAG_STAG_CTAG;
- break;
- default:
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- last_pattern,
- "more than 3 vlans not supported");
- return -rte_errno;
- }
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_E_TAG) {
- /* We can support an ETAG and detect a subsequent
- * CTAG, but no data matching is available for it.
- */
- lt = NPC_LT_LB_ETAG;
- lflags = 0;
-
- last_pattern = pst->pattern;
- pattern = otx2_flow_skip_void_and_any_items(pst->pattern + 1);
- if (pattern->type == RTE_FLOW_ITEM_TYPE_VLAN) {
- info.def_mask = &rte_flow_item_vlan_mask;
- /* set supported mask to NULL for vlan tag */
- info.hw_mask = NULL;
- info.len = sizeof(struct rte_flow_item_vlan);
- rc = otx2_flow_parse_item_basic(pattern, &info,
- pst->error);
- if (rc != 0)
- return rc;
-
- lflags = NPC_F_ETAG_CTAG;
- last_pattern = pattern;
- }
-
- info.def_mask = &rte_flow_item_e_tag_mask;
- info.len = sizeof(struct rte_flow_item_e_tag);
- } else if (pst->pattern->type == RTE_FLOW_ITEM_TYPE_RAW) {
- raw_spec = pst->pattern->spec;
- if (raw_spec->relative)
- return 0;
- len = raw_spec->length + raw_spec->offset;
- if (len > NPC_MAX_RAW_ITEM_LEN) {
- rte_flow_error_set(pst->error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Spec length too big");
- return -rte_errno;
- }
-
- if (pst->npc->switch_header_type ==
- OTX2_PRIV_FLAGS_VLAN_EXDSA) {
- lt = NPC_LT_LB_VLAN_EXDSA;
- } else if (pst->npc->switch_header_type ==
- OTX2_PRIV_FLAGS_EXDSA) {
- lt = NPC_LT_LB_EXDSA;
- } else {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "exdsa or vlan_exdsa not enabled on"
- " port");
- return -rte_errno;
- }
-
- otx2_flow_raw_item_prepare((const struct rte_flow_item_raw *)
- pst->pattern->spec,
- (const struct rte_flow_item_raw *)
- pst->pattern->mask, &info,
- raw_spec_buf, raw_mask_buf);
-
- info.hw_hdr_len = 0;
- } else {
- return 0;
- }
-
- info.hw_mask = &hw_mask;
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
-
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc != 0)
- return rc;
-
- /* Point pattern to last item consumed */
- pst->pattern = last_pattern;
- return otx2_flow_update_parse_state(pst, &info, lid, lt, lflags);
-}
-
-
-int
-otx2_flow_parse_la(struct otx2_parse_state *pst)
-{
- struct rte_flow_item_eth hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt;
- int rc;
-
- /* Identify the pattern type into lid, lt */
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_ETH)
- return 0;
-
- lid = NPC_LID_LA;
- lt = NPC_LT_LA_ETHER;
- info.hw_hdr_len = 0;
-
- if (pst->flow->nix_intf == NIX_INTF_TX) {
- lt = NPC_LT_LA_IH_NIX_ETHER;
- info.hw_hdr_len = NPC_IH_LENGTH;
- if (pst->npc->switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
- info.hw_hdr_len += NPC_HIGIG2_LENGTH;
- }
- } else {
- if (pst->npc->switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- lt = NPC_LT_LA_HIGIG2_ETHER;
- info.hw_hdr_len = NPC_HIGIG2_LENGTH;
- }
- }
-
- /* Prepare for parsing the item */
- info.def_mask = &rte_flow_item_eth_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_eth);
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- /* Basic validation of item parameters */
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc)
- return rc;
-
- /* Update pst if not validate only? clash check? */
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-int
-otx2_flow_parse_higig2_hdr(struct otx2_parse_state *pst)
-{
- struct rte_flow_item_higig2_hdr hw_mask;
- struct otx2_flow_item_info info;
- int lid, lt;
- int rc;
-
- /* Identify the pattern type into lid, lt */
- if (pst->pattern->type != RTE_FLOW_ITEM_TYPE_HIGIG2)
- return 0;
-
- lid = NPC_LID_LA;
- lt = NPC_LT_LA_HIGIG2_ETHER;
- info.hw_hdr_len = 0;
-
- if (pst->flow->nix_intf == NIX_INTF_TX) {
- lt = NPC_LT_LA_IH_NIX_HIGIG2_ETHER;
- info.hw_hdr_len = NPC_IH_LENGTH;
- }
-
- /* Prepare for parsing the item */
- info.def_mask = &rte_flow_item_higig2_hdr_mask;
- info.hw_mask = &hw_mask;
- info.len = sizeof(struct rte_flow_item_higig2_hdr);
- otx2_flow_get_hw_supp_mask(pst, &info, lid, lt);
- info.spec = NULL;
- info.mask = NULL;
-
- /* Basic validation of item parameters */
- rc = otx2_flow_parse_item_basic(pst->pattern, &info, pst->error);
- if (rc)
- return rc;
-
- /* Update pst if not validate only? clash check? */
- return otx2_flow_update_parse_state(pst, &info, lid, lt, 0);
-}
-
-static int
-parse_rss_action(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action *act,
- struct rte_flow_error *error)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_rss_info *rss_info = &hw->rss_info;
- const struct rte_flow_action_rss *rss;
- uint32_t i;
-
- rss = (const struct rte_flow_action_rss *)act->conf;
-
- /* Not supported */
- if (attr->egress) {
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
- attr, "No support of RSS in egress");
- }
-
- if (dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "multi-queue mode is disabled");
-
- /* Parse RSS related parameters from configuration */
- if (!rss || !rss->queue_num)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act, "no valid queues");
-
- if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT)
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, act,
- "non-default RSS hash functions"
- " are not supported");
-
- if (rss->key_len && rss->key_len > RTE_DIM(rss_info->key))
- return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ACTION, act,
- "RSS hash key too large");
-
- if (rss->queue_num > rss_info->rss_size)
- return rte_flow_error_set
- (error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION, act,
- "too many queues for RSS context");
-
- for (i = 0; i < rss->queue_num; i++) {
- if (rss->queue[i] >= dev->data->nb_rx_queues)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- act,
- "queue id > max number"
- " of queues");
- }
-
- return 0;
-}
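
An RSS action configuration that passes every check above, assuming the port was configured with mq_mode RTE_ETH_MQ_RX_RSS and has at least four Rx queues:

    #include <rte_flow.h>

    static const uint16_t rss_queues[4] = { 0, 1, 2, 3 };

    static const struct rte_flow_action_rss rss_conf = {
            .func      = RTE_ETH_HASH_FUNCTION_DEFAULT,
            .queue_num = 4,
            .queue     = rss_queues,
            /* key/key_len omitted: the device RSS key is kept */
    };

    static const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss_conf },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
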
-
-int
-otx2_flow_parse_actions(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_action actions[],
- struct rte_flow_error *error,
- struct rte_flow *flow)
-{
- struct otx2_eth_dev *hw = dev->data->dev_private;
- struct otx2_npc_flow_info *npc = &hw->npc_flow;
- const struct rte_flow_action_mark *act_mark;
- const struct rte_flow_action_queue *act_q;
- const struct rte_flow_action_vf *vf_act;
- uint16_t pf_func, vf_id, port_id, pf_id;
- char if_name[RTE_ETH_NAME_MAX_LEN];
- bool vlan_insert_action = false;
- struct rte_eth_dev *eth_dev;
- const char *errmsg = NULL;
- int sel_act, req_act = 0;
- int errcode = 0;
- int mark = 0;
- int rq = 0;
-
- /* Initialize actions */
- flow->ctr_id = NPC_COUNTER_NONE;
- pf_func = otx2_pfvf_func(hw->pf, hw->vf);
-
- for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
- otx2_npc_dbg("Action type = %d", actions->type);
-
- switch (actions->type) {
- case RTE_FLOW_ACTION_TYPE_VOID:
- break;
- case RTE_FLOW_ACTION_TYPE_MARK:
- act_mark =
- (const struct rte_flow_action_mark *)actions->conf;
-
- /* We have only 16 bits. Use highest val for flag */
- if (act_mark->id > (OTX2_FLOW_FLAG_VAL - 2)) {
- errmsg = "mark value must be < 0xfffe";
- errcode = ENOTSUP;
- goto err_exit;
- }
- mark = act_mark->id + 1;
- req_act |= OTX2_FLOW_ACT_MARK;
- rte_atomic32_inc(&npc->mark_actions);
- break;
-
- case RTE_FLOW_ACTION_TYPE_FLAG:
- mark = OTX2_FLOW_FLAG_VAL;
- req_act |= OTX2_FLOW_ACT_FLAG;
- rte_atomic32_inc(&npc->mark_actions);
- break;
-
- case RTE_FLOW_ACTION_TYPE_COUNT:
- /* Indicates, need a counter */
- flow->ctr_id = 1;
- req_act |= OTX2_FLOW_ACT_COUNT;
- break;
-
- case RTE_FLOW_ACTION_TYPE_DROP:
- req_act |= OTX2_FLOW_ACT_DROP;
- break;
-
- case RTE_FLOW_ACTION_TYPE_PF:
- req_act |= OTX2_FLOW_ACT_PF;
- pf_func &= (0xfc00);
- break;
-
- case RTE_FLOW_ACTION_TYPE_VF:
- vf_act = (const struct rte_flow_action_vf *)
- actions->conf;
- req_act |= OTX2_FLOW_ACT_VF;
- if (vf_act->original == 0) {
- vf_id = vf_act->id & RVU_PFVF_FUNC_MASK;
- if (vf_id >= hw->maxvf) {
- errmsg = "invalid vf specified";
- errcode = EINVAL;
- goto err_exit;
- }
- pf_func &= (0xfc00);
- pf_func = (pf_func | (vf_id + 1));
- }
- break;
-
- case RTE_FLOW_ACTION_TYPE_PORT_ID:
- case RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR:
- if (actions->type == RTE_FLOW_ACTION_TYPE_PORT_ID) {
- const struct rte_flow_action_port_id *port_act;
-
- port_act = actions->conf;
- port_id = port_act->id;
- } else {
- const struct rte_flow_action_ethdev *ethdev_act;
-
- ethdev_act = actions->conf;
- port_id = ethdev_act->port_id;
- }
- if (rte_eth_dev_get_name_by_port(port_id, if_name)) {
- errmsg = "Name not found for output port id";
- errcode = EINVAL;
- goto err_exit;
- }
- eth_dev = rte_eth_dev_allocated(if_name);
- if (!eth_dev) {
- errmsg = "eth_dev not found for output port id";
- errcode = EINVAL;
- goto err_exit;
- }
- if (!otx2_ethdev_is_same_driver(eth_dev)) {
- errmsg = "Output port id unsupported type";
- errcode = ENOTSUP;
- goto err_exit;
- }
- if (!otx2_dev_is_vf(otx2_eth_pmd_priv(eth_dev))) {
- errmsg = "Output port should be VF";
- errcode = ENOTSUP;
- goto err_exit;
- }
- vf_id = otx2_eth_pmd_priv(eth_dev)->vf;
- if (vf_id >= hw->maxvf) {
- errmsg = "Invalid vf for output port";
- errcode = EINVAL;
- goto err_exit;
- }
- pf_id = otx2_eth_pmd_priv(eth_dev)->pf;
- if (pf_id != hw->pf) {
- errmsg = "Output port unsupported PF";
- errcode = ENOTSUP;
- goto err_exit;
- }
- pf_func &= (0xfc00);
- pf_func = (pf_func | (vf_id + 1));
- req_act |= OTX2_FLOW_ACT_VF;
- break;
-
- case RTE_FLOW_ACTION_TYPE_QUEUE:
- /* Applicable only to ingress flow */
- act_q = (const struct rte_flow_action_queue *)
- actions->conf;
- rq = act_q->index;
- if (rq >= dev->data->nb_rx_queues) {
- errmsg = "invalid queue index";
- errcode = EINVAL;
- goto err_exit;
- }
- req_act |= OTX2_FLOW_ACT_QUEUE;
- break;
-
- case RTE_FLOW_ACTION_TYPE_RSS:
- errcode = parse_rss_action(dev, attr, actions, error);
- if (errcode)
- return -rte_errno;
-
- req_act |= OTX2_FLOW_ACT_RSS;
- break;
-
- case RTE_FLOW_ACTION_TYPE_SECURITY:
- /* Assumes user has already configured security
- * session for this flow. Associated conf is
- * opaque. When RTE security is implemented for otx2,
- * we need to verify that for specified security
- * session:
- * action_type ==
- * RTE_SECURITY_ACTION_TYPE_INLINE_PROTOCOL &&
- * session_protocol ==
- * RTE_SECURITY_PROTOCOL_IPSEC
- *
- * RSS is not supported with inline ipsec. Get the
- * rq from associated conf, or make
- * RTE_FLOW_ACTION_TYPE_QUEUE compulsory with this
- * action.
- * Currently, rq = 0 is assumed.
- */
- req_act |= OTX2_FLOW_ACT_SEC;
- rq = 0;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_VID:
- req_act |= OTX2_FLOW_ACT_VLAN_INSERT;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_POP_VLAN:
- req_act |= OTX2_FLOW_ACT_VLAN_STRIP;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_PUSH_VLAN:
- req_act |= OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT;
- break;
- case RTE_FLOW_ACTION_TYPE_OF_SET_VLAN_PCP:
- req_act |= OTX2_FLOW_ACT_VLAN_PCP_INSERT;
- break;
- default:
- errmsg = "Unsupported action specified";
- errcode = ENOTSUP;
- goto err_exit;
- }
- }
-
- if (req_act &
- (OTX2_FLOW_ACT_VLAN_INSERT | OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT |
- OTX2_FLOW_ACT_VLAN_PCP_INSERT))
- vlan_insert_action = true;
-
- if ((req_act &
- (OTX2_FLOW_ACT_VLAN_INSERT | OTX2_FLOW_ACT_VLAN_ETHTYPE_INSERT |
- OTX2_FLOW_ACT_VLAN_PCP_INSERT)) ==
- OTX2_FLOW_ACT_VLAN_PCP_INSERT) {
- errmsg = " PCP insert action can't be supported alone";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- /* Both STRIP and INSERT actions are not supported */
- if (vlan_insert_action && (req_act & OTX2_FLOW_ACT_VLAN_STRIP)) {
- errmsg = "Both VLAN insert and strip actions not supported"
- " together";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- /* Check if actions specified are compatible */
- if (attr->egress) {
- if (req_act & OTX2_FLOW_ACT_VLAN_STRIP) {
- errmsg = "VLAN pop action is not supported on Egress";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_DROP) {
- flow->npc_action = NIX_TX_ACTIONOP_DROP;
- } else if ((req_act & OTX2_FLOW_ACT_COUNT) ||
- vlan_insert_action) {
- flow->npc_action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
- } else {
- errmsg = "Unsupported action for egress";
- errcode = EINVAL;
- goto err_exit;
- }
- goto set_pf_func;
- }
-
- /* We have already verified the attr, this is ingress.
- * - Exactly one terminating action is supported
- * - Exactly one of MARK or FLAG is supported
- * - If terminating action is DROP, only count is valid.
- */
- sel_act = req_act & OTX2_FLOW_ACT_TERM;
- if ((sel_act & (sel_act - 1)) != 0) {
- errmsg = "Only one terminating action supported";
- errcode = EINVAL;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_DROP) {
- sel_act = req_act & ~OTX2_FLOW_ACT_COUNT;
- if ((sel_act & (sel_act - 1)) != 0) {
- errmsg = "Only COUNT action is supported "
- "with DROP ingress action";
- errcode = ENOTSUP;
- goto err_exit;
- }
- }
-
- if ((req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK))
- == (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
- errmsg = "Only one of FLAG or MARK action is supported";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (vlan_insert_action) {
- errmsg = "VLAN push/Insert action is not supported on Ingress";
- errcode = ENOTSUP;
- goto err_exit;
- }
-
- if (req_act & OTX2_FLOW_ACT_VLAN_STRIP)
- npc->vtag_actions++;
-
- /* Only VLAN action is provided */
- if (req_act == OTX2_FLOW_ACT_VLAN_STRIP)
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- /* Set NIX_RX_ACTIONOP */
- else if (req_act & (OTX2_FLOW_ACT_PF | OTX2_FLOW_ACT_VF)) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- if (req_act & OTX2_FLOW_ACT_QUEUE)
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & OTX2_FLOW_ACT_DROP) {
- flow->npc_action = NIX_RX_ACTIONOP_DROP;
- } else if (req_act & OTX2_FLOW_ACT_QUEUE) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & OTX2_FLOW_ACT_RSS) {
- /* When the user adds an RSS rule, the rule is first
- * installed in the MCAM and its action is updated
- * later, once the FLOW_KEY_ALG index is available.
- * Until the action is rewritten with that index,
- * set it to drop.
- */
- if (dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS)
- flow->npc_action = NIX_RX_ACTIONOP_DROP;
- else
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else if (req_act & OTX2_FLOW_ACT_SEC) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST_IPSEC;
- flow->npc_action |= (uint64_t)rq << 20;
- } else if (req_act & (OTX2_FLOW_ACT_FLAG | OTX2_FLOW_ACT_MARK)) {
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else if (req_act & OTX2_FLOW_ACT_COUNT) {
- /* Keep OTX2_FLOW_ACT_COUNT always at the end.
- * This is the default action when the user
- * specifies only the COUNT action.
- */
- flow->npc_action = NIX_RX_ACTIONOP_UCAST;
- } else {
- /* Should never reach here */
- errmsg = "Invalid action specified";
- errcode = EINVAL;
- goto err_exit;
- }
-
- if (mark)
- flow->npc_action |= (uint64_t)mark << 40;
-
- if (rte_atomic32_read(&npc->mark_actions) == 1) {
- hw->rx_offload_flags |=
- NIX_RX_OFFLOAD_MARK_UPDATE_F;
- otx2_eth_set_rx_function(dev);
- }
-
- if (npc->vtag_actions == 1) {
- hw->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(dev);
- }
-
-set_pf_func:
- /* Ideally AF must ensure that correct pf_func is set */
- if (attr->egress)
- flow->npc_action |= (uint64_t)pf_func << 48;
- else
- flow->npc_action |= (uint64_t)pf_func << 4;
-
- return 0;
-
-err_exit:
- rte_flow_error_set(error, errcode,
- RTE_FLOW_ERROR_TYPE_ACTION_NUM, NULL,
- errmsg);
- return -rte_errno;
-}
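
The sel_act & (sel_act - 1) tests above are the usual single-bit-set check: subtracting one clears the lowest set bit, so the expression is non-zero exactly when two or more terminating actions were requested. A quick self-contained illustration with hypothetical flag values:

    #include <assert.h>

    int main(void)
    {
            unsigned int queue = 1u << 2, drop = 1u << 3;

            /* one action: accepted */
            assert((queue & (queue - 1)) == 0);
            /* two actions: rejected */
            assert(((queue | drop) & ((queue | drop) - 1)) != 0);
            return 0;
    }
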
diff --git a/drivers/net/octeontx2/otx2_flow_utils.c b/drivers/net/octeontx2/otx2_flow_utils.c
deleted file mode 100644
index 35f7d0f4bc..0000000000
--- a/drivers/net/octeontx2/otx2_flow_utils.c
+++ /dev/null
@@ -1,969 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-static int
-flow_mcam_alloc_counter(struct otx2_mbox *mbox, uint16_t *ctr)
-{
- struct npc_mcam_alloc_counter_req *req;
- struct npc_mcam_alloc_counter_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_counter(mbox);
- req->count = 1;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- *ctr = rsp->cntr_list[0];
- return 0;
-}
-
-int
-otx2_flow_mcam_free_counter(struct otx2_mbox *mbox, uint16_t ctr_id)
-{
- struct npc_mcam_oper_counter_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_counter(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_read_counter(struct otx2_mbox *mbox, uint32_t ctr_id,
- uint64_t *count)
-{
- struct npc_mcam_oper_counter_req *req;
- struct npc_mcam_oper_counter_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_counter_stats(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- *count = rsp->stat;
- return 0;
-}
-
-int
-otx2_flow_mcam_clear_counter(struct otx2_mbox *mbox, uint32_t ctr_id)
-{
- struct npc_mcam_oper_counter_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_clear_counter(mbox);
- req->cntr = ctr_id;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_free_entry(struct otx2_mbox *mbox, uint32_t entry)
-{
- struct npc_mcam_free_entry_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-int
-otx2_flow_mcam_free_all_entries(struct otx2_mbox *mbox)
-{
- struct npc_mcam_free_entry_req *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->all = 1;
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, NULL);
-
- return rc;
-}
-
-static void
-flow_prep_mcam_ldata(uint8_t *ptr, const uint8_t *data, int len)
-{
- int idx;
-
- for (idx = 0; idx < len; idx++)
- ptr[idx] = data[len - 1 - idx];
-}
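
flow_prep_mcam_ldata() reverses byte order so that the most significant byte of a header field lands first, which is how the MCAM data/mask arrays are programmed. A self-contained check of the same loop, with hypothetical input:

    #include <assert.h>
    #include <stdint.h>

    int main(void)
    {
            const uint8_t ip[4] = { 192, 168, 0, 1 };
            uint8_t out[4];
            int idx;

            for (idx = 0; idx < 4; idx++)  /* same loop as the helper */
                    out[idx] = ip[4 - 1 - idx];

            assert(out[0] == 1 && out[3] == 192);
            return 0;
    }
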
-
-static int
-flow_check_copysz(size_t size, size_t len)
-{
- if (len <= size)
- return len;
- return -1;
-}
-
-static inline int
-flow_mem_is_zero(const void *mem, int len)
-{
- const char *m = mem;
- int i;
-
- for (i = 0; i < len; i++) {
- if (m[i] != 0)
- return 0;
- }
- return 1;
-}
-
-static void
-flow_set_hw_mask(struct otx2_flow_item_info *info,
- struct npc_xtract_info *xinfo,
- char *hw_mask)
-{
- int max_off, offset;
- int j;
-
- if (xinfo->enable == 0)
- return;
-
- if (xinfo->hdr_off < info->hw_hdr_len)
- return;
-
- max_off = xinfo->hdr_off + xinfo->len - info->hw_hdr_len;
-
- if (max_off > info->len)
- max_off = info->len;
-
- offset = xinfo->hdr_off - info->hw_hdr_len;
- for (j = offset; j < max_off; j++)
- hw_mask[j] = 0xff;
-}
-
-void
-otx2_flow_get_hw_supp_mask(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info, int lid, int lt)
-{
- struct npc_xtract_info *xinfo, *lfinfo;
- char *hw_mask = info->hw_mask;
- int lf_cfg;
- int i, j;
- int intf;
-
- intf = pst->flow->nix_intf;
- xinfo = pst->npc->prx_dxcfg[intf][lid][lt].xtract;
- memset(hw_mask, 0, info->len);
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- flow_set_hw_mask(info, &xinfo[i], hw_mask);
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
-
- if (xinfo[i].flags_enable == 0)
- continue;
-
- lf_cfg = pst->npc->prx_lfcfg[i].i;
- if (lf_cfg == lid) {
- for (j = 0; j < NPC_MAX_LFL; j++) {
- lfinfo = pst->npc->prx_fxcfg[intf]
- [i][j].xtract;
- flow_set_hw_mask(info, &lfinfo[0], hw_mask);
- }
- }
- }
-}
-
-static int
-flow_update_extraction_data(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info,
- struct npc_xtract_info *xinfo)
-{
- uint8_t int_info_mask[NPC_MAX_EXTRACT_DATA_LEN];
- uint8_t int_info[NPC_MAX_EXTRACT_DATA_LEN];
- struct npc_xtract_info *x;
- int k, idx, hdr_off;
- int len = 0;
-
- x = xinfo;
- len = x->len;
- hdr_off = x->hdr_off;
-
- if (hdr_off < info->hw_hdr_len)
- return 0;
-
- if (x->enable == 0)
- return 0;
-
- otx2_npc_dbg("x->hdr_off = %d, len = %d, info->len = %d,"
- "x->key_off = %d", x->hdr_off, len, info->len,
- x->key_off);
-
- hdr_off -= info->hw_hdr_len;
-
- if (hdr_off + len > info->len)
- len = info->len - hdr_off;
-
- /* Check for over-write of previous layer */
- if (!flow_mem_is_zero(pst->mcam_mask + x->key_off,
- len)) {
- /* Cannot support this data match */
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->pattern,
- "Extraction unsupported");
- return -rte_errno;
- }
-
- len = flow_check_copysz((OTX2_MAX_MCAM_WIDTH_DWORDS * 8)
- - x->key_off,
- len);
- if (len < 0) {
- rte_flow_error_set(pst->error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ITEM,
- pst->pattern,
- "Internal Error");
- return -rte_errno;
- }
-
- /* Need to reverse complete structure so that dest addr is at
- * MSB so as to program the MCAM using mcam_data & mcam_mask
- * arrays
- */
- flow_prep_mcam_ldata(int_info,
- (const uint8_t *)info->spec + hdr_off,
- x->len);
- flow_prep_mcam_ldata(int_info_mask,
- (const uint8_t *)info->mask + hdr_off,
- x->len);
-
- otx2_npc_dbg("Spec: ");
- for (k = 0; k < info->len; k++)
- otx2_npc_dbg("0x%.2x ",
- ((const uint8_t *)info->spec)[k]);
-
- otx2_npc_dbg("Int_info: ");
- for (k = 0; k < info->len; k++)
- otx2_npc_dbg("0x%.2x ", int_info[k]);
-
- memcpy(pst->mcam_mask + x->key_off, int_info_mask, len);
- memcpy(pst->mcam_data + x->key_off, int_info, len);
-
- otx2_npc_dbg("Parse state mcam data & mask");
- for (idx = 0; idx < len ; idx++)
- otx2_npc_dbg("data[%d]: 0x%x, mask[%d]: 0x%x", idx,
- *(pst->mcam_data + idx + x->key_off), idx,
- *(pst->mcam_mask + idx + x->key_off));
- return 0;
-}
-
-int
-otx2_flow_update_parse_state(struct otx2_parse_state *pst,
- struct otx2_flow_item_info *info, int lid, int lt,
- uint8_t flags)
-{
- struct npc_lid_lt_xtract_info *xinfo;
- struct otx2_flow_dump_data *dump;
- struct npc_xtract_info *lfinfo;
- int intf, lf_cfg;
- int i, j, rc = 0;
-
- otx2_npc_dbg("Parse state function info mask total %s",
- (const uint8_t *)info->mask);
-
- pst->layer_mask |= lid;
- pst->lt[lid] = lt;
- pst->flags[lid] = flags;
-
- intf = pst->flow->nix_intf;
- xinfo = &pst->npc->prx_dxcfg[intf][lid][lt];
- otx2_npc_dbg("Is_terminating = %d", xinfo->is_terminating);
- if (xinfo->is_terminating)
- pst->terminate = 1;
-
- if (info->spec == NULL) {
- otx2_npc_dbg("Info spec NULL");
- goto done;
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- rc = flow_update_extraction_data(pst, info, &xinfo->xtract[i]);
- if (rc != 0)
- return rc;
- }
-
- for (i = 0; i < NPC_MAX_LD; i++) {
- if (xinfo->xtract[i].flags_enable == 0)
- continue;
-
- lf_cfg = pst->npc->prx_lfcfg[i].i;
- if (lf_cfg == lid) {
- for (j = 0; j < NPC_MAX_LFL; j++) {
- lfinfo = pst->npc->prx_fxcfg[intf]
- [i][j].xtract;
- rc = flow_update_extraction_data(pst, info,
- &lfinfo[0]);
- if (rc != 0)
- return rc;
-
- if (lfinfo[0].enable)
- pst->flags[lid] = j;
- }
- }
- }
-
-done:
- dump = &pst->flow->dump_data[pst->flow->num_patterns++];
- dump->lid = lid;
- dump->ltype = lt;
- /* Next pattern to parse by subsequent layers */
- pst->pattern++;
- return 0;
-}
-
-static inline int
-flow_range_is_valid(const char *spec, const char *last, const char *mask,
- int len)
-{
- /* Mask must be zero or equal to spec as we do not support
- * non-contiguous ranges.
- */
- while (len--) {
- if (last[len] &&
- (spec[len] & mask[len]) != (last[len] & mask[len]))
- return 0; /* False */
- }
- return 1;
-}
-
-
-static inline int
-flow_mask_is_supported(const char *mask, const char *hw_mask, int len)
-{
- /*
- * If no hw_mask, assume nothing is supported.
- * mask is never NULL
- */
- if (hw_mask == NULL)
- return flow_mem_is_zero(mask, len);
-
- while (len--) {
- if ((mask[len] | hw_mask[len]) != hw_mask[len])
- return 0; /* False */
- }
- return 1;
-}
-
-int
-otx2_flow_parse_item_basic(const struct rte_flow_item *item,
- struct otx2_flow_item_info *info,
- struct rte_flow_error *error)
-{
- /* Item must not be NULL */
- if (item == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Item is NULL");
- return -rte_errno;
- }
- /* If spec is NULL, both mask and last must be NULL; this
- * makes the item match ANY value (equivalent to mask = 0).
- * Setting either mask or last without spec is an error.
- */
- if (item->spec == NULL) {
- if (item->last == NULL && item->mask == NULL) {
- info->spec = NULL;
- return 0;
- }
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "mask or last set without spec");
- return -rte_errno;
- }
-
- /* We have valid spec */
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW)
- info->spec = item->spec;
-
- /* If mask is not set, use default mask, err if default mask is
- * also NULL.
- */
- if (item->mask == NULL) {
- otx2_npc_dbg("Item mask null, using default mask");
- if (info->def_mask == NULL) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "No mask or default mask given");
- return -rte_errno;
- }
- info->mask = info->def_mask;
- } else {
- if (item->type != RTE_FLOW_ITEM_TYPE_RAW)
- info->mask = item->mask;
- }
-
- /* mask specified must be subset of hw supported mask
- * mask | hw_mask == hw_mask
- */
- if (!flow_mask_is_supported(info->mask, info->hw_mask, info->len)) {
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM,
- item, "Unsupported field in the mask");
- return -rte_errno;
- }
-
- /* Now we have spec and mask. OTX2 does not support non-contiguous
- * range. We should have either:
- * - spec & mask == last & mask or,
- * - last == 0 or,
- * - last == NULL
- */
- if (item->last != NULL && !flow_mem_is_zero(item->last, info->len)) {
- if (!flow_range_is_valid(item->spec, item->last, info->mask,
- info->len)) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ITEM, item,
- "Unsupported range for match");
- return -rte_errno;
- }
- }
-
- return 0;
-}
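
A range that flow_range_is_valid() accepts, because spec & mask == last & mask wherever last is non-zero (i.e. the range is contiguous under the mask): TCP destination ports 0x0010 through 0x001f. Values are illustrative:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    static const struct rte_flow_item_tcp tcp_spec = {
            .hdr.dst_port = RTE_BE16(0x0010),
    };
    static const struct rte_flow_item_tcp tcp_last = {
            .hdr.dst_port = RTE_BE16(0x001f),
    };
    static const struct rte_flow_item_tcp tcp_mask = {
            .hdr.dst_port = RTE_BE16(0xfff0),
    };

    static const struct rte_flow_item tcp_range_item = {
            .type = RTE_FLOW_ITEM_TYPE_TCP,
            .spec = &tcp_spec,
            .last = &tcp_last,
            .mask = &tcp_mask,
    };
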
-
-void
-otx2_flow_keyx_compress(uint64_t *data, uint32_t nibble_mask)
-{
- uint64_t cdata[2] = {0ULL, 0ULL}, nibble;
- int i, j = 0;
-
- for (i = 0; i < NPC_MAX_KEY_NIBBLES; i++) {
- if (nibble_mask & (1 << i)) {
- nibble = (data[i / 16] >> ((i & 0xf) * 4)) & 0xf;
- cdata[j / 16] |= (nibble << ((j & 0xf) * 4));
- j += 1;
- }
- }
-
- data[0] = cdata[0];
- data[1] = cdata[1];
-}
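
A worked example of the nibble compression above, with hypothetical input: nibble_mask 0x5 keeps nibbles 0 and 2 of the key and packs them into adjacent nibbles of the result.

    uint64_t data[2] = { 0x4321, 0 };

    /* mask 0x5 selects nibble 0 (0x1) and nibble 2 (0x3) */
    otx2_flow_keyx_compress(data, 0x5);
    /* data[0] is now 0x31, data[1] stays 0 */
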
-
-static int
-flow_first_set_bit(uint64_t slab)
-{
- int num = 0;
-
- if ((slab & 0xffffffff) == 0) {
- num += 32;
- slab >>= 32;
- }
- if ((slab & 0xffff) == 0) {
- num += 16;
- slab >>= 16;
- }
- if ((slab & 0xff) == 0) {
- num += 8;
- slab >>= 8;
- }
- if ((slab & 0xf) == 0) {
- num += 4;
- slab >>= 4;
- }
- if ((slab & 0x3) == 0) {
- num += 2;
- slab >>= 2;
- }
- if ((slab & 0x1) == 0)
- num += 1;
-
- return num;
-}
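
flow_first_set_bit() is an open-coded count-trailing-zeros; for a non-zero slab it returns the same value as the compiler builtin that flow_check_preallocated_entry_cache() below already uses:

    /* Equivalent for slab != 0; __builtin_ctzll(0) is undefined,
     * whereas the open-coded version above returns 63 for 0.
     */
    static inline int
    flow_first_set_bit_builtin(uint64_t slab)
    {
            return __builtin_ctzll(slab);
    }
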
-
-static int
-flow_shift_lv_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- uint32_t old_ent, uint32_t new_ent)
-{
- struct npc_mcam_shift_entry_req *req;
- struct npc_mcam_shift_entry_rsp *rsp;
- struct otx2_flow_list *list;
- struct rte_flow *flow_iter;
- int rc = 0;
-
- otx2_npc_dbg("Old ent:%u new ent:%u priority:%u", old_ent, new_ent,
- flow->priority);
-
- list = &flow_info->flow_list[flow->priority];
-
- /* The old entry is disabled and its contents are moved to
- * new_entry; finally, the new entry is enabled.
- */
- req = otx2_mbox_alloc_msg_npc_mcam_shift_entry(mbox);
- req->curr_entry[0] = old_ent;
- req->new_entry[0] = new_ent;
- req->shift_count = 1;
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Remove old node from list */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id == old_ent)
- TAILQ_REMOVE(list, flow_iter, next);
- }
-
- /* Insert node with new mcam id at right place */
- TAILQ_FOREACH(flow_iter, list, next) {
- if (flow_iter->mcam_id > new_ent)
- TAILQ_INSERT_BEFORE(flow_iter, flow, next);
- }
- return rc;
-}
-
-/* Exchange all required entries with a given priority level */
-static int
-flow_shift_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp, int dir, int prio_lvl)
-{
- struct rte_bitmap *fr_bmp, *fr_bmp_rev, *lv_bmp, *lv_bmp_rev, *bmp;
- uint32_t e_fr = 0, e_lv = 0, e, e_id = 0, mcam_entries;
- uint64_t fr_bit_pos = 0, lv_bit_pos = 0, bit_pos = 0;
- /* Bit position within the slab */
- uint32_t sl_fr_bit_off = 0, sl_lv_bit_off = 0;
- /* Overall bit position of the start of slab */
- /* free & live entry index */
- int rc_fr = 0, rc_lv = 0, rc = 0, idx = 0;
- struct otx2_mcam_ents_info *ent_info;
- /* free & live bitmap slab */
- uint64_t sl_fr = 0, sl_lv = 0, *sl;
-
- fr_bmp = flow_info->free_entries[prio_lvl];
- fr_bmp_rev = flow_info->free_entries_rev[prio_lvl];
- lv_bmp = flow_info->live_entries[prio_lvl];
- lv_bmp_rev = flow_info->live_entries_rev[prio_lvl];
- ent_info = &flow_info->flow_entry_info[prio_lvl];
- mcam_entries = flow_info->mcam_entries;
-
-
- /* Newly allocated entries are always contiguous, but older
- * entries already in the free/live bitmaps can be
- * non-contiguous, so the shifted entries are returned in
- * non-contiguous form.
- */
- while (idx <= rsp->count) {
- if (!sl_fr && !sl_lv) {
- /* Lower index elements to be exchanged */
- if (dir < 0) {
- rc_fr = rte_bitmap_scan(fr_bmp, &e_fr, &sl_fr);
- rc_lv = rte_bitmap_scan(lv_bmp, &e_lv, &sl_lv);
- otx2_npc_dbg("Fwd slab rc fr %u rc lv %u "
- "e_fr %u e_lv %u", rc_fr, rc_lv,
- e_fr, e_lv);
- } else {
- rc_fr = rte_bitmap_scan(fr_bmp_rev,
- &sl_fr_bit_off,
- &sl_fr);
- rc_lv = rte_bitmap_scan(lv_bmp_rev,
- &sl_lv_bit_off,
- &sl_lv);
-
- otx2_npc_dbg("Rev slab rc fr %u rc lv %u "
- "e_fr %u e_lv %u", rc_fr, rc_lv,
- e_fr, e_lv);
- }
- }
-
- if (rc_fr) {
- fr_bit_pos = flow_first_set_bit(sl_fr);
- e_fr = sl_fr_bit_off + fr_bit_pos;
- otx2_npc_dbg("Fr_bit_pos 0x%" PRIx64, fr_bit_pos);
- } else {
- e_fr = ~(0);
- }
-
- if (rc_lv) {
- lv_bit_pos = flow_first_set_bit(sl_lv);
- e_lv = sl_lv_bit_off + lv_bit_pos;
- otx2_npc_dbg("Lv_bit_pos 0x%" PRIx64, lv_bit_pos);
- } else {
- e_lv = ~(0);
- }
-
- /* First entry is from free_bmap */
- if (e_fr < e_lv) {
- bmp = fr_bmp;
- e = e_fr;
- sl = &sl_fr;
- bit_pos = fr_bit_pos;
- if (dir > 0)
- e_id = mcam_entries - e - 1;
- else
- e_id = e;
- otx2_npc_dbg("Fr e %u e_id %u", e, e_id);
- } else {
- bmp = lv_bmp;
- e = e_lv;
- sl = &sl_lv;
- bit_pos = lv_bit_pos;
- if (dir > 0)
- e_id = mcam_entries - e - 1;
- else
- e_id = e;
-
- otx2_npc_dbg("Lv e %u e_id %u", e, e_id);
- if (idx < rsp->count)
- rc =
- flow_shift_lv_ent(mbox, flow,
- flow_info, e_id,
- rsp->entry + idx);
- }
-
- rte_bitmap_clear(bmp, e);
- rte_bitmap_set(bmp, rsp->entry + idx);
- /* Update entry list, use non-contiguous
- * list now.
- */
- rsp->entry_list[idx] = e_id;
- *sl &= ~(1ULL << bit_pos);
-
- /* Update min & max entry identifiers in current
- * priority level.
- */
- if (dir < 0) {
- ent_info->max_id = rsp->entry + idx;
- ent_info->min_id = e_id;
- } else {
- ent_info->max_id = e_id;
- ent_info->min_id = rsp->entry;
- }
-
- idx++;
- }
- return rc;
-}
-
-/* Validate if newly allocated entries lie in the correct priority zone
- * since NPC_MCAM_LOWER_PRIO & NPC_MCAM_HIGHER_PRIO don't ensure zone accuracy.
- * If not properly aligned, shift entries until they are.
- */
-static int
-flow_validate_and_shift_prio_ent(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info,
- struct npc_mcam_alloc_entry_rsp *rsp,
- int req_prio)
-{
- int prio_idx = 0, rc = 0, needs_shift = 0, idx, prio = flow->priority;
- struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
- int dir = (req_prio == NPC_MCAM_HIGHER_PRIO) ? 1 : -1;
- uint32_t tot_ent = 0;
-
- otx2_npc_dbg("Dir %d, priority = %d", dir, prio);
-
- if (dir < 0)
- prio_idx = flow_info->flow_max_priority - 1;
-
- /* Only live entries need to be shifted; free entries can just be
- * moved by bit manipulation.
- */
-
- /* For dir = -1(NPC_MCAM_LOWER_PRIO), when shifting,
- * NPC_MAX_PREALLOC_ENT are exchanged with adjoining higher priority
- * level entries(lower indexes).
- *
- * For dir = +1(NPC_MCAM_HIGHER_PRIO), during shift,
- * NPC_MAX_PREALLOC_ENT are exchanged with adjoining lower priority
- * level entries(higher indexes) with highest indexes.
- */
- do {
- tot_ent = info[prio_idx].free_ent + info[prio_idx].live_ent;
-
- if (dir < 0 && prio_idx != prio &&
- rsp->entry > info[prio_idx].max_id && tot_ent) {
- otx2_npc_dbg("Rsp entry %u prio idx %u "
- "max id %u", rsp->entry, prio_idx,
- info[prio_idx].max_id);
-
- needs_shift = 1;
- } else if ((dir > 0) && (prio_idx != prio) &&
- (rsp->entry < info[prio_idx].min_id) && tot_ent) {
- otx2_npc_dbg("Rsp entry %u prio idx %u "
- "min id %u", rsp->entry, prio_idx,
- info[prio_idx].min_id);
- needs_shift = 1;
- }
-
- otx2_npc_dbg("Needs_shift = %d", needs_shift);
- if (needs_shift) {
- needs_shift = 0;
- rc = flow_shift_ent(mbox, flow, flow_info, rsp, dir,
- prio_idx);
- } else {
- for (idx = 0; idx < rsp->count; idx++)
- rsp->entry_list[idx] = rsp->entry + idx;
- }
- } while ((prio_idx != prio) && (prio_idx += dir));
-
- return rc;
-}
-
-static int
-flow_find_ref_entry(struct otx2_npc_flow_info *flow_info, int *prio,
- int prio_lvl)
-{
- struct otx2_mcam_ents_info *info = flow_info->flow_entry_info;
- int step = 1;
-
- while (step < flow_info->flow_max_priority) {
- if (((prio_lvl + step) < flow_info->flow_max_priority) &&
- info[prio_lvl + step].live_ent) {
- *prio = NPC_MCAM_HIGHER_PRIO;
- return info[prio_lvl + step].min_id;
- }
-
- if (((prio_lvl - step) >= 0) &&
- info[prio_lvl - step].live_ent) {
- otx2_npc_dbg("Prio_lvl %u live %u", prio_lvl - step,
- info[prio_lvl - step].live_ent);
- *prio = NPC_MCAM_LOWER_PRIO;
- return info[prio_lvl - step].max_id;
- }
- step++;
- }
- *prio = NPC_MCAM_ANY_PRIO;
- return 0;
-}
-
-static int
-flow_fill_entry_cache(struct otx2_mbox *mbox, struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info, uint32_t *free_ent)
-{
- struct rte_bitmap *free_bmp, *free_bmp_rev, *live_bmp, *live_bmp_rev;
- struct npc_mcam_alloc_entry_rsp rsp_local;
- struct npc_mcam_alloc_entry_rsp *rsp_cmd;
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mcam_ents_info *info;
- uint16_t ref_ent, idx;
- int rc, prio;
-
- info = &flow_info->flow_entry_info[flow->priority];
- free_bmp = flow_info->free_entries[flow->priority];
- free_bmp_rev = flow_info->free_entries_rev[flow->priority];
- live_bmp = flow_info->live_entries[flow->priority];
- live_bmp_rev = flow_info->live_entries_rev[flow->priority];
-
- ref_ent = flow_find_ref_entry(flow_info, &prio, flow->priority);
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->contig = 1;
- req->count = flow_info->flow_prealloc_size;
- req->priority = prio;
- req->ref_entry = ref_ent;
-
- otx2_npc_dbg("Fill cache ref entry %u prio %u", ref_ent, prio);
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp_cmd);
- if (rc)
- return rc;
-
- rsp = &rsp_local;
- memcpy(rsp, rsp_cmd, sizeof(*rsp));
-
- otx2_npc_dbg("Alloc entry %u count %u , prio = %d", rsp->entry,
- rsp->count, prio);
-
- /* Non-first entry cache fill */
- if (prio != NPC_MCAM_ANY_PRIO) {
- flow_validate_and_shift_prio_ent(mbox, flow, flow_info, rsp,
- prio);
- } else {
- /* Copy into response entry list */
- for (idx = 0; idx < rsp->count; idx++)
- rsp->entry_list[idx] = rsp->entry + idx;
- }
-
- otx2_npc_dbg("Fill entry cache rsp count %u", rsp->count);
- /* Update free entries, reverse free entries list,
- * min & max entry ids.
- */
- for (idx = 0; idx < rsp->count; idx++) {
- if (unlikely(rsp->entry_list[idx] < info->min_id))
- info->min_id = rsp->entry_list[idx];
-
- if (unlikely(rsp->entry_list[idx] > info->max_id))
- info->max_id = rsp->entry_list[idx];
-
- /* Skip the entry to be returned; it must not become part of
-  * the free list.
-  */
- if (prio == NPC_MCAM_HIGHER_PRIO) {
- if (unlikely(idx == (rsp->count - 1))) {
- *free_ent = rsp->entry_list[idx];
- continue;
- }
- } else {
- if (unlikely(!idx)) {
- *free_ent = rsp->entry_list[idx];
- continue;
- }
- }
- info->free_ent++;
- rte_bitmap_set(free_bmp, rsp->entry_list[idx]);
- rte_bitmap_set(free_bmp_rev, flow_info->mcam_entries -
- rsp->entry_list[idx] - 1);
-
- otx2_npc_dbg("Final rsp entry %u rsp entry rev %u",
- rsp->entry_list[idx],
- flow_info->mcam_entries - rsp->entry_list[idx] - 1);
- }
-
- otx2_npc_dbg("Cache free entry %u, rev = %u", *free_ent,
- flow_info->mcam_entries - *free_ent - 1);
- info->live_ent++;
- rte_bitmap_set(live_bmp, *free_ent);
- rte_bitmap_set(live_bmp_rev, flow_info->mcam_entries - *free_ent - 1);
-
- return 0;
-}
-
-static int
-flow_check_preallocated_entry_cache(struct otx2_mbox *mbox,
- struct rte_flow *flow,
- struct otx2_npc_flow_info *flow_info)
-{
- struct rte_bitmap *free, *free_rev, *live, *live_rev;
- uint32_t pos = 0, free_ent = 0, mcam_entries;
- struct otx2_mcam_ents_info *info;
- uint64_t slab = 0;
- int rc;
-
- otx2_npc_dbg("Flow priority %u", flow->priority);
-
- info = &flow_info->flow_entry_info[flow->priority];
-
- free_rev = flow_info->free_entries_rev[flow->priority];
- free = flow_info->free_entries[flow->priority];
- live_rev = flow_info->live_entries_rev[flow->priority];
- live = flow_info->live_entries[flow->priority];
- mcam_entries = flow_info->mcam_entries;
-
- if (info->free_ent) {
- rc = rte_bitmap_scan(free, &pos, &slab);
- if (rc) {
- /* Get free_ent from free entry bitmap */
- free_ent = pos + __builtin_ctzll(slab);
- otx2_npc_dbg("Allocated from cache entry %u", free_ent);
- /* Remove from free bitmaps and add to live ones */
- rte_bitmap_clear(free, free_ent);
- rte_bitmap_set(live, free_ent);
- rte_bitmap_clear(free_rev,
- mcam_entries - free_ent - 1);
- rte_bitmap_set(live_rev,
- mcam_entries - free_ent - 1);
-
- info->free_ent--;
- info->live_ent++;
- return free_ent;
- }
-
- otx2_npc_dbg("No free entry:its a mess");
- return -1;
- }
-
- rc = flow_fill_entry_cache(mbox, flow, flow_info, &free_ent);
- if (rc)
- return rc;
-
- return free_ent;
-}
-
-int
-otx2_flow_mcam_alloc_and_write(struct rte_flow *flow, struct otx2_mbox *mbox,
- struct otx2_parse_state *pst,
- struct otx2_npc_flow_info *flow_info)
-{
- int use_ctr = (flow->ctr_id == NPC_COUNTER_NONE ? 0 : 1);
- struct npc_mcam_read_base_rule_rsp *base_rule_rsp;
- struct npc_mcam_write_entry_req *req;
- struct mcam_entry *base_entry;
- struct mbox_msghdr *rsp;
- uint16_t ctr = ~(0);
- int rc, idx;
- int entry;
-
- if (use_ctr) {
- rc = flow_mcam_alloc_counter(mbox, &ctr);
- if (rc)
- return rc;
- }
-
- entry = flow_check_preallocated_entry_cache(mbox, flow, flow_info);
- if (entry < 0) {
- otx2_err("Prealloc failed");
- otx2_flow_mcam_free_counter(mbox, ctr);
- return NPC_MCAM_ALLOC_FAILED;
- }
-
- if (pst->is_vf) {
- (void)otx2_mbox_alloc_msg_npc_read_base_steer_rule(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&base_rule_rsp);
- if (rc) {
- otx2_err("Failed to fetch VF's base MCAM entry");
- return rc;
- }
- base_entry = &base_rule_rsp->entry_data;
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- flow->mcam_data[idx] |= base_entry->kw[idx];
- flow->mcam_mask[idx] |= base_entry->kw_mask[idx];
- }
- }
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- req->set_cntr = use_ctr;
- req->cntr = ctr;
- req->entry = entry;
- otx2_npc_dbg("Alloc & write entry %u", entry);
-
- req->intf =
- (flow->nix_intf == OTX2_INTF_RX) ? NPC_MCAM_RX : NPC_MCAM_TX;
- req->enable_entry = 1;
- req->entry_data.action = flow->npc_action;
- req->entry_data.vtag_action = flow->vtag_action;
-
- for (idx = 0; idx < OTX2_MAX_MCAM_WIDTH_DWORDS; idx++) {
- req->entry_data.kw[idx] = flow->mcam_data[idx];
- req->entry_data.kw_mask[idx] = flow->mcam_mask[idx];
- }
-
- if (flow->nix_intf == OTX2_INTF_RX) {
- req->entry_data.kw[0] |= flow_info->channel;
- req->entry_data.kw_mask[0] |= (BIT_ULL(12) - 1);
- } else {
- uint16_t pf_func = (flow->npc_action >> 48) & 0xffff;
-
- pf_func = htons(pf_func);
- req->entry_data.kw[0] |= ((uint64_t)pf_func << 32);
- req->entry_data.kw_mask[0] |= ((uint64_t)0xffff << 32);
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc != 0)
- return rc;
-
- flow->mcam_id = entry;
- if (use_ctr)
- flow->ctr_id = ctr;
- return 0;
-}
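
The entry cache removed above keeps paired forward and reverse bitmaps so that both the lowest and the highest free MCAM index can be found with a single find-first-set scan: entry e in the forward map corresponds to bit mcam_entries - e - 1 in the reverse map. A minimal standalone sketch of that bookkeeping, using one 64-bit word and a hypothetical entry count instead of the driver's rte_bitmap objects:

    #include <stdint.h>
    #include <stdio.h>

    #define N_ENTRIES 64 /* hypothetical MCAM size, one 64-bit word */

    /* Forward/reverse maps mirror each other so find-first-set on
     * either yields the lowest or the highest free index.
     */
    struct ent_cache {
        uint64_t free_fwd;
        uint64_t free_rev;
    };

    static void cache_free(struct ent_cache *c, unsigned int e)
    {
        c->free_fwd |= 1ULL << e;
        c->free_rev |= 1ULL << (N_ENTRIES - e - 1);
    }

    static int cache_take_lowest(struct ent_cache *c)
    {
        unsigned int e;

        if (!c->free_fwd)
            return -1;
        e = (unsigned int)__builtin_ctzll(c->free_fwd);
        c->free_fwd &= ~(1ULL << e);
        c->free_rev &= ~(1ULL << (N_ENTRIES - e - 1));
        return (int)e;
    }

    int main(void)
    {
        struct ent_cache c = { 0, 0 };

        cache_free(&c, 9);
        cache_free(&c, 5);
        printf("lowest free: %d\n", cache_take_lowest(&c)); /* 5 */
        return 0;
    }
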
diff --git a/drivers/net/octeontx2/otx2_link.c b/drivers/net/octeontx2/otx2_link.c
deleted file mode 100644
index 8f5d0eed92..0000000000
--- a/drivers/net/octeontx2/otx2_link.c
+++ /dev/null
@@ -1,287 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-#include <ethdev_pci.h>
-
-#include "otx2_ethdev.h"
-
-void
-otx2_nix_toggle_flag_link_cfg(struct otx2_eth_dev *dev, bool set)
-{
- if (set)
- dev->flags |= OTX2_LINK_CFG_IN_PROGRESS_F;
- else
- dev->flags &= ~OTX2_LINK_CFG_IN_PROGRESS_F;
-
- rte_wmb();
-}
-
-static inline int
-nix_wait_for_link_cfg(struct otx2_eth_dev *dev)
-{
- uint16_t wait = 1000;
-
- do {
- rte_rmb();
- if (!(dev->flags & OTX2_LINK_CFG_IN_PROGRESS_F))
- break;
- wait--;
- rte_delay_ms(1);
- } while (wait);
-
- return wait ? 0 : -1;
-}
-
-static void
-nix_link_status_print(struct rte_eth_dev *eth_dev, struct rte_eth_link *link)
-{
- if (link && link->link_status)
- otx2_info("Port %d: Link Up - speed %u Mbps - %s",
- (int)(eth_dev->data->port_id),
- (uint32_t)link->link_speed,
- link->link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
- "full-duplex" : "half-duplex");
- else
- otx2_info("Port %d: Link Down", (int)(eth_dev->data->port_id));
-}
-
-void
-otx2_eth_dev_link_status_get(struct otx2_dev *dev,
- struct cgx_link_user_info *link)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_link eth_link;
- struct rte_eth_dev *eth_dev;
-
- if (!link || !dev)
- return;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev)
- return;
-
- rte_eth_linkstatus_get(eth_dev, ð_link);
-
- link->link_up = eth_link.link_status;
- link->speed = eth_link.link_speed;
- link->an = eth_link.link_autoneg;
- link->full_duplex = eth_link.link_duplex;
-}
-
-void
-otx2_eth_dev_link_status_update(struct otx2_dev *dev,
- struct cgx_link_user_info *link)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_link eth_link;
- struct rte_eth_dev *eth_dev;
-
- if (!link || !dev)
- return;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev || !eth_dev->data->dev_conf.intr_conf.lsc)
- return;
-
- if (nix_wait_for_link_cfg(otx2_dev)) {
- otx2_err("Timeout waiting for link_cfg to complete");
- return;
- }
-
- eth_link.link_status = link->link_up;
- eth_link.link_speed = link->speed;
- eth_link.link_autoneg = RTE_ETH_LINK_AUTONEG;
- eth_link.link_duplex = link->full_duplex;
-
- otx2_dev->speed = link->speed;
- otx2_dev->duplex = link->full_duplex;
-
- /* Print link info */
- nix_link_status_print(eth_dev, ð_link);
-
- /* Update link info */
- rte_eth_linkstatus_set(eth_dev, ð_link);
-
- /* Set the flag and execute application callbacks */
- rte_eth_dev_callback_process(eth_dev, RTE_ETH_EVENT_INTR_LSC, NULL);
-}
-
-static int
-lbk_link_update(struct rte_eth_link *link)
-{
- link->link_status = RTE_ETH_LINK_UP;
- link->link_speed = RTE_ETH_SPEED_NUM_100G;
- link->link_autoneg = RTE_ETH_LINK_FIXED;
- link->link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
- return 0;
-}
-
-static int
-cgx_link_update(struct otx2_eth_dev *dev, struct rte_eth_link *link)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_link_info_msg *rsp;
- int rc;
- otx2_mbox_alloc_msg_cgx_get_linkinfo(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- link->link_status = rsp->link_info.link_up;
- link->link_speed = rsp->link_info.speed;
- link->link_autoneg = RTE_ETH_LINK_AUTONEG;
-
- if (rsp->link_info.full_duplex)
- link->link_duplex = rsp->link_info.full_duplex;
- return 0;
-}
-
-int
-otx2_nix_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_link link;
- int rc;
-
- RTE_SET_USED(wait_to_complete);
- memset(&link, 0, sizeof(struct rte_eth_link));
-
- if (!eth_dev->data->dev_started || otx2_dev_is_sdp(dev))
- return 0;
-
- if (otx2_dev_is_lbk(dev))
- rc = lbk_link_update(&link);
- else
- rc = cgx_link_update(dev, &link);
-
- if (rc)
- return rc;
-
- return rte_eth_linkstatus_set(eth_dev, &link);
-}
-
-static int
-nix_dev_set_link_state(struct rte_eth_dev *eth_dev, uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_set_link_state_msg *req;
-
- req = otx2_mbox_alloc_msg_cgx_set_link_state(mbox);
- req->enable = enable;
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_dev_set_link_up(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, i;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- rc = nix_dev_set_link_state(eth_dev, 1);
- if (rc)
- goto done;
-
- /* Start tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_start(eth_dev, i);
-
-done:
- return rc;
-}
-
-int
-otx2_nix_dev_set_link_down(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- /* Stop tx queues */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
- otx2_nix_tx_queue_stop(eth_dev, i);
-
- return nix_dev_set_link_state(eth_dev, 0);
-}
-
-static int
-cgx_change_mode(struct otx2_eth_dev *dev, struct cgx_set_link_mode_args *cfg)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_set_link_mode_req *req;
-
- req = otx2_mbox_alloc_msg_cgx_set_link_mode(mbox);
- req->args.speed = cfg->speed;
- req->args.duplex = cfg->duplex;
- req->args.an = cfg->an;
-
- return otx2_mbox_process(mbox);
-}
-
-#define SPEED_NONE 0
-static inline uint32_t
-nix_parse_link_speeds(struct otx2_eth_dev *dev, uint32_t link_speeds)
-{
- uint32_t link_speed = SPEED_NONE;
-
- /* 50G and 100G to be supported for board version C0 and above */
- if (!otx2_dev_is_Ax(dev)) {
- if (link_speeds & RTE_ETH_LINK_SPEED_100G)
- link_speed = 100000;
- if (link_speeds & RTE_ETH_LINK_SPEED_50G)
- link_speed = 50000;
- }
- if (link_speeds & RTE_ETH_LINK_SPEED_40G)
- link_speed = 40000;
- if (link_speeds & RTE_ETH_LINK_SPEED_25G)
- link_speed = 25000;
- if (link_speeds & RTE_ETH_LINK_SPEED_20G)
- link_speed = 20000;
- if (link_speeds & RTE_ETH_LINK_SPEED_10G)
- link_speed = 10000;
- if (link_speeds & RTE_ETH_LINK_SPEED_5G)
- link_speed = 5000;
- if (link_speeds & RTE_ETH_LINK_SPEED_1G)
- link_speed = 1000;
-
- return link_speed;
-}
-
-static inline uint8_t
-nix_parse_eth_link_duplex(uint32_t link_speeds)
-{
- if ((link_speeds & RTE_ETH_LINK_SPEED_10M_HD) ||
- (link_speeds & RTE_ETH_LINK_SPEED_100M_HD))
- return RTE_ETH_LINK_HALF_DUPLEX;
- else
- return RTE_ETH_LINK_FULL_DUPLEX;
-}
-
-int
-otx2_apply_link_speed(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct rte_eth_conf *conf = ð_dev->data->dev_conf;
- struct cgx_set_link_mode_args cfg;
-
- /* If VF/SDP/LBK, link attributes cannot be changed */
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return 0;
-
- memset(&cfg, 0, sizeof(struct cgx_set_link_mode_args));
- cfg.speed = nix_parse_link_speeds(dev, conf->link_speeds);
- if (cfg.speed != SPEED_NONE && cfg.speed != dev->speed) {
- cfg.duplex = nix_parse_eth_link_duplex(conf->link_speeds);
- cfg.an = (conf->link_speeds & RTE_ETH_LINK_SPEED_FIXED) == 0;
-
- return cgx_change_mode(dev, &cfg);
- }
- return 0;
-}
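
nix_parse_link_speeds() above maps the RTE_ETH_LINK_SPEED_* advertisement bits to a fixed speed in Mbps; because the checks run from the highest speed down and each match overwrites the previous result, the lowest advertised speed wins. A small sketch of that cascade, with hypothetical stand-in flags rather than the real rte_ethdev bit values:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-ins for the RTE_ETH_LINK_SPEED_* flags. */
    #define SPEED_F_1G  (1u << 0)
    #define SPEED_F_10G (1u << 1)
    #define SPEED_F_25G (1u << 2)

    /* Highest speed is checked first and each match overwrites the
     * result, so the lowest advertised speed wins.
     */
    static uint32_t parse_speeds(uint32_t flags)
    {
        uint32_t mbps = 0;

        if (flags & SPEED_F_25G)
            mbps = 25000;
        if (flags & SPEED_F_10G)
            mbps = 10000;
        if (flags & SPEED_F_1G)
            mbps = 1000;
        return mbps;
    }

    int main(void)
    {
        printf("%u\n", parse_speeds(SPEED_F_25G | SPEED_F_10G));
        /* prints 10000 */
        return 0;
    }
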
diff --git a/drivers/net/octeontx2/otx2_lookup.c b/drivers/net/octeontx2/otx2_lookup.c
deleted file mode 100644
index 5fa9ae1396..0000000000
--- a/drivers/net/octeontx2/otx2_lookup.c
+++ /dev/null
@@ -1,352 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-#include <rte_memzone.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev.h"
-
-/* NIX_RX_PARSE_S's ERRCODE + ERRLEV (12 bits) */
-#define ERRCODE_ERRLEN_WIDTH 12
-#define ERR_ARRAY_SZ ((BIT(ERRCODE_ERRLEN_WIDTH)) *\
- sizeof(uint32_t))
-
-#define SA_TBL_SZ (RTE_MAX_ETHPORTS * sizeof(uint64_t))
-#define LOOKUP_ARRAY_SZ (PTYPE_ARRAY_SZ + ERR_ARRAY_SZ +\
- SA_TBL_SZ)
-
-const uint32_t *
-otx2_nix_supported_ptypes_get(struct rte_eth_dev *eth_dev)
-{
- RTE_SET_USED(eth_dev);
-
- static const uint32_t ptypes[] = {
- RTE_PTYPE_L2_ETHER_QINQ, /* LB */
- RTE_PTYPE_L2_ETHER_VLAN, /* LB */
- RTE_PTYPE_L2_ETHER_TIMESYNC, /* LB */
- RTE_PTYPE_L2_ETHER_ARP, /* LC */
- RTE_PTYPE_L2_ETHER_NSH, /* LC */
- RTE_PTYPE_L2_ETHER_FCOE, /* LC */
- RTE_PTYPE_L2_ETHER_MPLS, /* LC */
- RTE_PTYPE_L3_IPV4, /* LC */
- RTE_PTYPE_L3_IPV4_EXT, /* LC */
- RTE_PTYPE_L3_IPV6, /* LC */
- RTE_PTYPE_L3_IPV6_EXT, /* LC */
- RTE_PTYPE_L4_TCP, /* LD */
- RTE_PTYPE_L4_UDP, /* LD */
- RTE_PTYPE_L4_SCTP, /* LD */
- RTE_PTYPE_L4_ICMP, /* LD */
- RTE_PTYPE_L4_IGMP, /* LD */
- RTE_PTYPE_TUNNEL_GRE, /* LD */
- RTE_PTYPE_TUNNEL_ESP, /* LD */
- RTE_PTYPE_TUNNEL_NVGRE, /* LD */
- RTE_PTYPE_TUNNEL_VXLAN, /* LE */
- RTE_PTYPE_TUNNEL_GENEVE, /* LE */
- RTE_PTYPE_TUNNEL_GTPC, /* LE */
- RTE_PTYPE_TUNNEL_GTPU, /* LE */
- RTE_PTYPE_TUNNEL_VXLAN_GPE, /* LE */
- RTE_PTYPE_TUNNEL_MPLS_IN_GRE, /* LE */
- RTE_PTYPE_TUNNEL_MPLS_IN_UDP, /* LE */
- RTE_PTYPE_INNER_L2_ETHER,/* LF */
- RTE_PTYPE_INNER_L3_IPV4, /* LG */
- RTE_PTYPE_INNER_L3_IPV6, /* LG */
- RTE_PTYPE_INNER_L4_TCP, /* LH */
- RTE_PTYPE_INNER_L4_UDP, /* LH */
- RTE_PTYPE_INNER_L4_SCTP, /* LH */
- RTE_PTYPE_INNER_L4_ICMP, /* LH */
- RTE_PTYPE_UNKNOWN,
- };
-
- return ptypes;
-}
-
-int
-otx2_nix_ptypes_set(struct rte_eth_dev *eth_dev, uint32_t ptype_mask)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (ptype_mask) {
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_PTYPE_F;
- dev->ptype_disable = 0;
- } else {
- dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_PTYPE_F;
- dev->ptype_disable = 1;
- }
-
- otx2_eth_set_rx_function(eth_dev);
-
- return 0;
-}
-
-/*
- * +-------------------+-------------------+
- * |  | IL4 | IL3| IL2 | TU | L4 | L3 | L2 |
- * +-------------------+-------------------+
- *
- * +-------------------+-------------------+
- * |  | LH | LG  | LF  | LE | LD | LC | LB |
- * +-------------------+-------------------+
- *
- * ptype       [LE - LD - LC - LB] = TU  - L4  - L3  - L2
- * ptype_tunnel[LH - LG - LF]      = IL4 - IL3 - IL2 - TU
- */
-static void
-nix_create_non_tunnel_ptype_array(uint16_t *ptype)
-{
- uint8_t lb, lc, ld, le;
- uint16_t val;
- uint32_t idx;
-
- for (idx = 0; idx < PTYPE_NON_TUNNEL_ARRAY_SZ; idx++) {
- lb = idx & 0xF;
- lc = (idx & 0xF0) >> 4;
- ld = (idx & 0xF00) >> 8;
- le = (idx & 0xF000) >> 12;
- val = RTE_PTYPE_UNKNOWN;
-
- switch (lb) {
- case NPC_LT_LB_STAG_QINQ:
- val |= RTE_PTYPE_L2_ETHER_QINQ;
- break;
- case NPC_LT_LB_CTAG:
- val |= RTE_PTYPE_L2_ETHER_VLAN;
- break;
- }
-
- switch (lc) {
- case NPC_LT_LC_ARP:
- val |= RTE_PTYPE_L2_ETHER_ARP;
- break;
- case NPC_LT_LC_NSH:
- val |= RTE_PTYPE_L2_ETHER_NSH;
- break;
- case NPC_LT_LC_FCOE:
- val |= RTE_PTYPE_L2_ETHER_FCOE;
- break;
- case NPC_LT_LC_MPLS:
- val |= RTE_PTYPE_L2_ETHER_MPLS;
- break;
- case NPC_LT_LC_IP:
- val |= RTE_PTYPE_L3_IPV4;
- break;
- case NPC_LT_LC_IP_OPT:
- val |= RTE_PTYPE_L3_IPV4_EXT;
- break;
- case NPC_LT_LC_IP6:
- val |= RTE_PTYPE_L3_IPV6;
- break;
- case NPC_LT_LC_IP6_EXT:
- val |= RTE_PTYPE_L3_IPV6_EXT;
- break;
- case NPC_LT_LC_PTP:
- val |= RTE_PTYPE_L2_ETHER_TIMESYNC;
- break;
- }
-
- switch (ld) {
- case NPC_LT_LD_TCP:
- val |= RTE_PTYPE_L4_TCP;
- break;
- case NPC_LT_LD_UDP:
- val |= RTE_PTYPE_L4_UDP;
- break;
- case NPC_LT_LD_SCTP:
- val |= RTE_PTYPE_L4_SCTP;
- break;
- case NPC_LT_LD_ICMP:
- case NPC_LT_LD_ICMP6:
- val |= RTE_PTYPE_L4_ICMP;
- break;
- case NPC_LT_LD_IGMP:
- val |= RTE_PTYPE_L4_IGMP;
- break;
- case NPC_LT_LD_GRE:
- val |= RTE_PTYPE_TUNNEL_GRE;
- break;
- case NPC_LT_LD_NVGRE:
- val |= RTE_PTYPE_TUNNEL_NVGRE;
- break;
- }
-
- switch (le) {
- case NPC_LT_LE_VXLAN:
- val |= RTE_PTYPE_TUNNEL_VXLAN;
- break;
- case NPC_LT_LE_ESP:
- val |= RTE_PTYPE_TUNNEL_ESP;
- break;
- case NPC_LT_LE_VXLANGPE:
- val |= RTE_PTYPE_TUNNEL_VXLAN_GPE;
- break;
- case NPC_LT_LE_GENEVE:
- val |= RTE_PTYPE_TUNNEL_GENEVE;
- break;
- case NPC_LT_LE_GTPC:
- val |= RTE_PTYPE_TUNNEL_GTPC;
- break;
- case NPC_LT_LE_GTPU:
- val |= RTE_PTYPE_TUNNEL_GTPU;
- break;
- case NPC_LT_LE_TU_MPLS_IN_GRE:
- val |= RTE_PTYPE_TUNNEL_MPLS_IN_GRE;
- break;
- case NPC_LT_LE_TU_MPLS_IN_UDP:
- val |= RTE_PTYPE_TUNNEL_MPLS_IN_UDP;
- break;
- }
- ptype[idx] = val;
- }
-}
-
-#define TU_SHIFT(x) ((x) >> PTYPE_NON_TUNNEL_WIDTH)
-static void
-nix_create_tunnel_ptype_array(uint16_t *ptype)
-{
- uint8_t lf, lg, lh;
- uint16_t val;
- uint32_t idx;
-
- /* Skip non tunnel ptype array memory */
- ptype = ptype + PTYPE_NON_TUNNEL_ARRAY_SZ;
-
- for (idx = 0; idx < PTYPE_TUNNEL_ARRAY_SZ; idx++) {
- lf = idx & 0xF;
- lg = (idx & 0xF0) >> 4;
- lh = (idx & 0xF00) >> 8;
- val = RTE_PTYPE_UNKNOWN;
-
- switch (lf) {
- case NPC_LT_LF_TU_ETHER:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L2_ETHER);
- break;
- }
- switch (lg) {
- case NPC_LT_LG_TU_IP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV4);
- break;
- case NPC_LT_LG_TU_IP6:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L3_IPV6);
- break;
- }
- switch (lh) {
- case NPC_LT_LH_TU_TCP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_TCP);
- break;
- case NPC_LT_LH_TU_UDP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_UDP);
- break;
- case NPC_LT_LH_TU_SCTP:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_SCTP);
- break;
- case NPC_LT_LH_TU_ICMP:
- case NPC_LT_LH_TU_ICMP6:
- val |= TU_SHIFT(RTE_PTYPE_INNER_L4_ICMP);
- break;
- }
-
- ptype[idx] = val;
- }
-}
-
-static void
-nix_create_rx_ol_flags_array(void *mem)
-{
- uint16_t idx, errcode, errlev;
- uint32_t val, *ol_flags;
-
- /* Skip ptype array memory */
- ol_flags = (uint32_t *)((uint8_t *)mem + PTYPE_ARRAY_SZ);
-
- for (idx = 0; idx < BIT(ERRCODE_ERRLEN_WIDTH); idx++) {
- errlev = idx & 0xf;
- errcode = (idx & 0xff0) >> 4;
-
- val = RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
- val |= RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN;
- val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN;
-
- switch (errlev) {
- case NPC_ERRLEV_RE:
- /* Mark all errors as BAD checksum errors
- * including Outer L2 length mismatch error
- */
- if (errcode) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
- break;
- case NPC_ERRLEV_LC:
- if (errcode == NPC_EC_OIP4_CSUM ||
- errcode == NPC_EC_IP_FRAG_OFFSET_1) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- }
- break;
- case NPC_ERRLEV_LG:
- if (errcode == NPC_EC_IIP4_CSUM)
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- else
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- break;
- case NPC_ERRLEV_NIX:
- if (errcode == NIX_RX_PERRCODE_OL4_CHK ||
- errcode == NIX_RX_PERRCODE_OL4_LEN ||
- errcode == NIX_RX_PERRCODE_OL4_PORT) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- val |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
- } else if (errcode == NIX_RX_PERRCODE_IL4_CHK ||
- errcode == NIX_RX_PERRCODE_IL4_LEN ||
- errcode == NIX_RX_PERRCODE_IL4_PORT) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
- } else if (errcode == NIX_RX_PERRCODE_IL3_LEN ||
- errcode == NIX_RX_PERRCODE_OL3_LEN) {
- val |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
- } else {
- val |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
- val |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
- }
- break;
- }
- ol_flags[idx] = val;
- }
-}
-
-void *
-otx2_nix_fastpath_lookup_mem_get(void)
-{
- const char name[] = OTX2_NIX_FASTPATH_LOOKUP_MEM;
- const struct rte_memzone *mz;
- void *mem;
-
- /* SA_TBL starts after PTYPE_ARRAY & ERR_ARRAY */
- RTE_BUILD_BUG_ON(OTX2_NIX_SA_TBL_START != (PTYPE_ARRAY_SZ +
- ERR_ARRAY_SZ));
-
- mz = rte_memzone_lookup(name);
- if (mz != NULL)
- return mz->addr;
-
- /* Request for the first time */
- mz = rte_memzone_reserve_aligned(name, LOOKUP_ARRAY_SZ,
- SOCKET_ID_ANY, 0, OTX2_ALIGN);
- if (mz != NULL) {
- mem = mz->addr;
- /* Form the ptype array lookup memory */
- nix_create_non_tunnel_ptype_array(mem);
- nix_create_tunnel_ptype_array(mem);
- /* Form the rx ol_flags based on errcode */
- nix_create_rx_ol_flags_array(mem);
- return mem;
- }
- return NULL;
-}
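
The lookup tables in the file above are indexed by packing one 4-bit NPC layer type per nibble, which is why nix_create_non_tunnel_ptype_array() can enumerate every (LB, LC, LD, LE) combination by walking a flat 16-bit index. A sketch of the packing and its inverse; the layer-type values below are illustrative, not the real NPC_LT_* constants:

    #include <stdint.h>
    #include <stdio.h>

    /* idx = LE<<12 | LD<<8 | LC<<4 | LB, matching the nibble
     * decomposition used when filling the ptype array.
     */
    static uint16_t ptype_idx(uint8_t lb, uint8_t lc, uint8_t ld,
                              uint8_t le)
    {
        return (uint16_t)(((uint16_t)le << 12) | ((uint16_t)ld << 8) |
                          ((uint16_t)lc << 4) | lb);
    }

    int main(void)
    {
        uint16_t idx = ptype_idx(2, 12, 4, 0); /* made-up layer types */

        printf("idx=0x%04x lb=%u lc=%u ld=%u le=%u\n", idx,
               idx & 0xF, (idx >> 4) & 0xF, (idx >> 8) & 0xF,
               idx >> 12);
        return 0;
    }
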
diff --git a/drivers/net/octeontx2/otx2_mac.c b/drivers/net/octeontx2/otx2_mac.c
deleted file mode 100644
index 49a700ca1d..0000000000
--- a/drivers/net/octeontx2/otx2_mac.c
+++ /dev/null
@@ -1,151 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_common.h>
-
-#include "otx2_dev.h"
-#include "otx2_ethdev.h"
-
-int
-otx2_cgx_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct cgx_mac_addr_set_or_get *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (otx2_dev_active_vfs(dev))
- return -ENOTSUP;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_set(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to set mac address in CGX, rc=%d", rc);
-
- return 0;
-}
-
-int
-otx2_cgx_mac_max_entries_get(struct otx2_eth_dev *dev)
-{
- struct cgx_max_dmac_entries_get_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return 0;
-
- otx2_mbox_alloc_msg_cgx_mac_max_entries_get(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- return rsp->max_dmac_filters;
-}
-
-int
-otx2_nix_mac_addr_add(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr,
- uint32_t index __rte_unused, uint32_t pool __rte_unused)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_mac_addr_add_req *req;
- struct cgx_mac_addr_add_rsp *rsp;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return -ENOTSUP;
-
- if (otx2_dev_active_vfs(dev))
- return -ENOTSUP;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_add(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to add mac address, rc=%d", rc);
- goto done;
- }
-
- /* Enable promiscuous mode at NIX level */
- otx2_nix_promisc_config(eth_dev, 1);
- dev->dmac_filter_enable = true;
- eth_dev->data->promiscuous = 0;
-
-done:
- return rc;
-}
-
-void
-otx2_nix_mac_addr_del(struct rte_eth_dev *eth_dev, uint32_t index)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct cgx_mac_addr_del_req *req;
- int rc;
-
- if (otx2_dev_is_vf_or_sdp(dev))
- return;
-
- req = otx2_mbox_alloc_msg_cgx_mac_addr_del(mbox);
- req->index = index;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to delete mac address, rc=%d", rc);
-}
-
-int
-otx2_nix_mac_addr_set(struct rte_eth_dev *eth_dev, struct rte_ether_addr *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_set_mac_addr *req;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_set_mac_addr(mbox);
- otx2_mbox_memcpy(req->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to set mac address, rc=%d", rc);
- goto done;
- }
-
- otx2_mbox_memcpy(dev->mac_addr, addr->addr_bytes, RTE_ETHER_ADDR_LEN);
-
- /* Install the same entry into CGX DMAC filter table too. */
- otx2_cgx_mac_addr_set(eth_dev, addr);
-
-done:
- return rc;
-}
-
-int
-otx2_nix_mac_addr_get(struct rte_eth_dev *eth_dev, uint8_t *addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_get_mac_addr_rsp *rsp;
- int rc;
-
- otx2_mbox_alloc_msg_nix_get_mac_addr(mbox);
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_get_rsp(mbox, 0, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to get mac address, rc=%d", rc);
- goto done;
- }
-
- otx2_mbox_memcpy(addr, rsp->mac_addr, RTE_ETHER_ADDR_LEN);
-
-done:
- return rc;
-}
diff --git a/drivers/net/octeontx2/otx2_mcast.c b/drivers/net/octeontx2/otx2_mcast.c
deleted file mode 100644
index b9c63ad3bc..0000000000
--- a/drivers/net/octeontx2/otx2_mcast.c
+++ /dev/null
@@ -1,339 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-static int
-nix_mc_addr_list_free(struct otx2_eth_dev *dev, uint32_t entry_count)
-{
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (entry_count == 0)
- goto exit;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry->mcam_index;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- if (rc < 0)
- goto exit;
-
- TAILQ_REMOVE(&dev->mc_fltr_tbl, entry, next);
- rte_free(entry);
- entry_count--;
-
- if (entry_count == 0)
- break;
- }
-
- if (entry == NULL)
- dev->mc_tbl_set = false;
-
-exit:
- return rc;
-}
-
-static int
-nix_hw_update_mc_addr_list(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- volatile uint8_t *key_data, *key_mask;
- struct npc_mcam_write_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct npc_xtract_info *x_info;
- uint64_t mcam_data, mcam_mask;
- struct mcast_entry *entry;
- otx2_dxcfg_t *ld_cfg;
- uint8_t *mac_addr;
- uint64_t action;
- int idx, rc = 0;
-
- ld_cfg = &npc->prx_dxcfg;
- /* Get ETH layer profile info for populating mcam entries */
- x_info = &(*ld_cfg)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- if (req == NULL) {
- /* The mbox memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- req->intf = NPC_MCAM_RX;
- req->enable_entry = 1;
-
- /* Channel base extracted to KW0[11:0] */
- req->entry_data.kw[0] = dev->rx_chan_base;
- req->entry_data.kw_mask[0] = RTE_LEN2MASK(12, uint64_t);
-
- /* Update mcam address */
- key_data = (volatile uint8_t *)req->entry_data.kw;
- key_mask = (volatile uint8_t *)req->entry_data.kw_mask;
-
- mcam_data = 0ull;
- mcam_mask = RTE_LEN2MASK(48, uint64_t);
- mac_addr = &entry->mcast_mac.addr_bytes[0];
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- otx2_mbox_memcpy(key_data + x_info->key_off,
- &mcam_data, x_info->len);
- otx2_mbox_memcpy(key_mask + x_info->key_off,
- &mcam_mask, x_info->len);
-
- action = NIX_RX_ACTIONOP_UCAST;
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
- action = NIX_RX_ACTIONOP_RSS;
- action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
- }
-
- action |= ((uint64_t)otx2_pfvf_func(dev->pf, dev->vf)) << 4;
- req->entry_data.action = action;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_mc_addr_list_install(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t entry_count = 0, idx = 0;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (!dev->mc_tbl_set)
- return 0;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- entry_count++;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->priority = NPC_MCAM_ANY_PRIO;
- req->count = entry_count;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || rsp->count < entry_count) {
- otx2_err("Failed to allocate required mcam entries");
- goto exit;
- }
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- entry->mcam_index = rsp->entry_list[idx++];
-
- rc = nix_hw_update_mc_addr_list(eth_dev);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_mc_addr_list_uninstall(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (!dev->mc_tbl_set)
- return 0;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- if (req == NULL) {
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-static int
-nix_setup_mc_addr_list(struct otx2_eth_dev *dev,
- struct rte_ether_addr *mc_addr_set)
-{
- struct npc_mcam_ena_dis_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct mcast_entry *entry;
- uint32_t idx = 0;
- int rc = 0;
-
- /* Populate PMD's mcast list with given mcast mac addresses and
- * disable all mcam entries pertaining to the mcast list.
- */
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next) {
- rte_memcpy(&entry->mcast_mac, &mc_addr_set[idx++],
- RTE_ETHER_ADDR_LEN);
-
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
- if (req == NULL) {
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- goto exit;
-
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
- if (req == NULL) {
- rc = -ENOMEM;
- goto exit;
- }
- }
- req->entry = entry->mcam_index;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
-
-exit:
- return rc;
-}
-
-int
-otx2_nix_set_mc_addr_list(struct rte_eth_dev *eth_dev,
- struct rte_ether_addr *mc_addr_set,
- uint32_t nb_mc_addr)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_entry_req *req;
- struct npc_mcam_alloc_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint32_t idx, priv_count = 0;
- struct mcast_entry *entry;
- int rc = 0;
-
- if (otx2_dev_is_vf(dev))
- return -ENOTSUP;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- priv_count++;
-
- if (nb_mc_addr == 0 || mc_addr_set == NULL) {
- /* Free existing list if new list is null */
- nb_mc_addr = priv_count;
- goto exit;
- }
-
- for (idx = 0; idx < nb_mc_addr; idx++) {
- if (!rte_is_multicast_ether_addr(&mc_addr_set[idx]))
- return -EINVAL;
- }
-
- /* New list is bigger than the existing list,
- * allocate mcam entries for the extra entries.
- */
- if (nb_mc_addr > priv_count) {
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_entry(mbox);
- req->priority = NPC_MCAM_ANY_PRIO;
- req->count = nb_mc_addr - priv_count;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc || (rsp->count + priv_count < nb_mc_addr)) {
- otx2_err("Failed to allocate required entries");
- nb_mc_addr = priv_count;
- goto exit;
- }
-
- /* Append new mcam entries to the existing mc list */
- for (idx = 0; idx < rsp->count; idx++) {
- entry = rte_zmalloc("otx2_nix_mc_entry",
- sizeof(struct mcast_entry), 0);
- if (!entry) {
- otx2_err("Failed to allocate memory");
- nb_mc_addr = priv_count;
- rc = -ENOMEM;
- goto exit;
- }
- entry->mcam_index = rsp->entry_list[idx];
- TAILQ_INSERT_HEAD(&dev->mc_fltr_tbl, entry, next);
- }
- } else {
- /* Free the extra mcam entries if the new list is smaller
-  * than the existing list.
-  */
- nix_mc_addr_list_free(dev, priv_count - nb_mc_addr);
- }
-
-
- /* Now that mc_fltr_tbl has the required number of mcam entries,
-  * traverse it and add the new multicast filter table entries.
-  */
- rc = nix_setup_mc_addr_list(dev, mc_addr_set);
- if (rc < 0)
- goto exit;
-
- rc = nix_hw_update_mc_addr_list(eth_dev);
- if (rc < 0)
- goto exit;
-
- dev->mc_tbl_set = true;
-
- return 0;
-
-exit:
- nix_mc_addr_list_free(dev, nb_mc_addr);
- return rc;
-}
-
-void
-otx2_nix_mc_filter_init(struct otx2_eth_dev *dev)
-{
- if (otx2_dev_is_vf(dev))
- return;
-
- TAILQ_INIT(&dev->mc_fltr_tbl);
-}
-
-void
-otx2_nix_mc_filter_fini(struct otx2_eth_dev *dev)
-{
- struct mcast_entry *entry;
- uint32_t count = 0;
-
- if (otx2_dev_is_vf(dev))
- return;
-
- TAILQ_FOREACH(entry, &dev->mc_fltr_tbl, next)
- count++;
-
- nix_mc_addr_list_free(dev, count);
-}
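
nix_hw_update_mc_addr_list() above builds the 48-bit MCAM match key by shifting the first MAC byte into the most significant of the used bit positions. The same packing in isolation, assuming nothing beyond standard C:

    #include <stdint.h>
    #include <stdio.h>

    /* Byte 0 of the MAC lands in the most significant of the 48 used
     * bits, as in the packing loop above.
     */
    static uint64_t mac_to_key(const uint8_t mac[6])
    {
        uint64_t key = 0;
        int i;

        for (i = 0; i < 6; i++)
            key |= (uint64_t)mac[i] << (8 * (5 - i));
        return key;
    }

    int main(void)
    {
        const uint8_t mac[6] = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 };

        printf("key=0x%012llx\n",
               (unsigned long long)mac_to_key(mac));
        /* prints key=0x01005e000001 */
        return 0;
    }
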
diff --git a/drivers/net/octeontx2/otx2_ptp.c b/drivers/net/octeontx2/otx2_ptp.c
deleted file mode 100644
index abb2130587..0000000000
--- a/drivers/net/octeontx2/otx2_ptp.c
+++ /dev/null
@@ -1,450 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <ethdev_driver.h>
-
-#include "otx2_ethdev.h"
-
-#define PTP_FREQ_ADJUST (1 << 9)
-
-/* Function to enable ptp config for VFs */
-void
-otx2_nix_ptp_enable_vf(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (otx2_nix_recalc_mtu(eth_dev))
- otx2_err("Failed to set MTU size for ptp");
-
- dev->scalar_ena = true;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
-}
-
-static uint16_t
-nix_eth_ptp_vf_burst(void *queue, struct rte_mbuf **mbufs, uint16_t pkts)
-{
- struct otx2_eth_rxq *rxq = queue;
- struct rte_eth_dev *eth_dev;
-
- RTE_SET_USED(mbufs);
- RTE_SET_USED(pkts);
-
- eth_dev = rxq->eth_dev;
- otx2_nix_ptp_enable_vf(eth_dev);
-
- return 0;
-}
-
-static int
-nix_read_raw_clock(struct otx2_eth_dev *dev, uint64_t *clock, uint64_t *tsc,
- uint8_t is_pmu)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_GET_CLOCK;
- req->is_pmu = is_pmu;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- goto fail;
-
- if (clock)
- *clock = rsp->clk;
- if (tsc)
- *tsc = rsp->tsc;
-
-fail:
- return rc;
-}
-
-/* This function calculates two parameters, "clk_freq_mult" and
- * "clk_delta", which are useful in deriving the PTP HI clock from
- * the timestamp counter (tsc) value.
- */
-int
-otx2_nix_raw_clock_tsc_conv(struct otx2_eth_dev *dev)
-{
- uint64_t ticks_base = 0, ticks = 0, tsc = 0, t_freq;
- int rc, val;
-
- /* Calculating the frequency at which PTP HI clock is running */
- rc = nix_read_raw_clock(dev, &ticks_base, &tsc, false);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- rte_delay_ms(100);
-
- rc = nix_read_raw_clock(dev, &ticks, &tsc, false);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- t_freq = (ticks - ticks_base) * 10;
-
- /* Calculate the freq multiplier, i.e. the ratio between the
-  * frequency at which the PTP HI clock runs and that at which
-  * the tsc runs.
-  */
- dev->clk_freq_mult =
- (double)pow(10, floor(log10(t_freq))) / rte_get_timer_hz();
-
- val = false;
-#ifdef RTE_ARM_EAL_RDTSC_USE_PMU
- val = true;
-#endif
- rc = nix_read_raw_clock(dev, &ticks, &tsc, val);
- if (rc) {
- otx2_err("Failed to read the raw clock value: %d", rc);
- goto fail;
- }
-
- /* Calculating delta between PTP HI clock and tsc */
- dev->clk_delta = ((uint64_t)(ticks / dev->clk_freq_mult) - tsc);
-
-fail:
- return rc;
-}
-
-static void
-nix_start_timecounters(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- memset(&dev->systime_tc, 0, sizeof(struct rte_timecounter));
- memset(&dev->rx_tstamp_tc, 0, sizeof(struct rte_timecounter));
- memset(&dev->tx_tstamp_tc, 0, sizeof(struct rte_timecounter));
-
- dev->systime_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
- dev->rx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
- dev->tx_tstamp_tc.cc_mask = OTX2_CYCLECOUNTER_MASK;
-}
-
-static int
-nix_ptp_config(struct rte_eth_dev *eth_dev, int en)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return rc;
-
- if (en) {
- /* Enable time stamping of sent PTP packets. */
- otx2_mbox_alloc_msg_nix_lf_ptp_tx_enable(mbox);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("MBOX ptp tx conf enable failed: err %d", rc);
- return rc;
- }
- /* Enable time stamping of received PTP packets. */
- otx2_mbox_alloc_msg_cgx_ptp_rx_enable(mbox);
- } else {
- /* Disable time stamping of sent PTP packets. */
- otx2_mbox_alloc_msg_nix_lf_ptp_tx_disable(mbox);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("MBOX ptp tx conf disable failed: err %d", rc);
- return rc;
- }
- /* Disable time stamping of received PTP packets. */
- otx2_mbox_alloc_msg_cgx_ptp_rx_disable(mbox);
- }
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_eth_dev_ptp_info_update(struct otx2_dev *dev, bool ptp_en)
-{
- struct otx2_eth_dev *otx2_dev = (struct otx2_eth_dev *)dev;
- struct rte_eth_dev *eth_dev;
- int i;
-
- if (!dev)
- return -EINVAL;
-
- eth_dev = otx2_dev->eth_dev;
- if (!eth_dev)
- return -EINVAL;
-
- otx2_dev->ptp_en = ptp_en;
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- struct otx2_eth_rxq *rxq = eth_dev->data->rx_queues[i];
- rxq->mbuf_initializer =
- otx2_nix_rxq_mbuf_setup(otx2_dev,
- eth_dev->data->port_id);
- }
- if (otx2_dev_is_vf(otx2_dev) && !(otx2_dev_is_sdp(otx2_dev)) &&
- !(otx2_dev_is_lbk(otx2_dev))) {
- /* In the VF case, the MTU can't be set directly in this
-  * function, as it runs as part of an MBOX request (PF->VF)
-  * while setting the MTU also requires an MBOX message to be
-  * sent (VF->PF).
-  */
- eth_dev->rx_pkt_burst = nix_eth_ptp_vf_burst;
- rte_mb();
- }
-
- return 0;
-}
-
-int
-otx2_nix_timesync_enable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i, rc = 0;
-
- /* If we are VF/SDP/LBK, PTP cannot be enabled */
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev)) {
- otx2_info("PTP cannot be enabled in case of VF/SDP/LBK");
- return -EINVAL;
- }
-
- if (otx2_ethdev_is_ptp_en(dev)) {
- otx2_info("PTP mode is already enabled");
- return -EINVAL;
- }
-
- if (!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)) {
- otx2_err("Ptype offload is disabled, it should be enabled");
- return -EINVAL;
- }
-
- if (dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_HIGIG) {
- otx2_err("Both PTP and switch header enabled");
- return -EINVAL;
- }
-
- /* Allocate an IOVA address for the tx timestamp */
- const struct rte_memzone *ts;
- ts = rte_eth_dma_zone_reserve(eth_dev, "otx2_ts",
- 0, OTX2_ALIGN, OTX2_ALIGN,
- dev->node);
- if (ts == NULL) {
- otx2_err("Failed to allocate mem for tx tstamp addr");
- return -ENOMEM;
- }
-
- dev->tstamp.tx_tstamp_iova = ts->iova;
- dev->tstamp.tx_tstamp = ts->addr;
-
- rc = rte_mbuf_dyn_rx_timestamp_register(
- &dev->tstamp.tstamp_dynfield_offset,
- &dev->tstamp.rx_tstamp_dynflag);
- if (rc != 0) {
- otx2_err("Failed to register Rx timestamp field/flag");
- return -rte_errno;
- }
-
- /* System time should already be running by default */
- nix_start_timecounters(eth_dev);
-
- dev->rx_offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_TSTAMP_F;
- dev->tx_offload_flags |= NIX_TX_OFFLOAD_TSTAMP_F;
-
- rc = nix_ptp_config(eth_dev, 1);
- if (!rc) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
- otx2_nix_form_default_desc(txq);
- }
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
- }
-
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- otx2_err("Failed to set MTU size for ptp");
-
- return rc;
-}
-
-int
-otx2_nix_timesync_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int i, rc = 0;
-
- if (!otx2_ethdev_is_ptp_en(dev)) {
- otx2_nix_dbg("PTP mode is disabled");
- return -EINVAL;
- }
-
- if (otx2_dev_is_vf_or_sdp(dev) || otx2_dev_is_lbk(dev))
- return -EINVAL;
-
- dev->rx_offloads &= ~RTE_ETH_RX_OFFLOAD_TIMESTAMP;
- dev->rx_offload_flags &= ~NIX_RX_OFFLOAD_TSTAMP_F;
- dev->tx_offload_flags &= ~NIX_TX_OFFLOAD_TSTAMP_F;
-
- rc = nix_ptp_config(eth_dev, 0);
- if (!rc) {
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- struct otx2_eth_txq *txq = eth_dev->data->tx_queues[i];
- otx2_nix_form_default_desc(txq);
- }
-
- /* Setting up the function pointers as per new offload flags */
- otx2_eth_set_rx_function(eth_dev);
- otx2_eth_set_tx_function(eth_dev);
- }
-
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc)
- otx2_err("Failed to set MTU size for ptp");
-
- return rc;
-}
-
-int
-otx2_nix_timesync_read_rx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp,
- uint32_t __rte_unused flags)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_timesync_info *tstamp = &dev->tstamp;
- uint64_t ns;
-
- if (!tstamp->rx_ready)
- return -EINVAL;
-
- ns = rte_timecounter_update(&dev->rx_tstamp_tc, tstamp->rx_tstamp);
- *timestamp = rte_ns_to_timespec(ns);
- tstamp->rx_ready = 0;
-
- otx2_nix_dbg("rx timestamp: %"PRIu64" sec: %"PRIu64" nsec %"PRIu64"",
- (uint64_t)tstamp->rx_tstamp, (uint64_t)timestamp->tv_sec,
- (uint64_t)timestamp->tv_nsec);
-
- return 0;
-}
-
-int
-otx2_nix_timesync_read_tx_timestamp(struct rte_eth_dev *eth_dev,
- struct timespec *timestamp)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_timesync_info *tstamp = &dev->tstamp;
- uint64_t ns;
-
- if (*tstamp->tx_tstamp == 0)
- return -EINVAL;
-
- ns = rte_timecounter_update(&dev->tx_tstamp_tc, *tstamp->tx_tstamp);
- *timestamp = rte_ns_to_timespec(ns);
-
- otx2_nix_dbg("tx timestamp: %"PRIu64" sec: %"PRIu64" nsec %"PRIu64"",
- *tstamp->tx_tstamp, (uint64_t)timestamp->tv_sec,
- (uint64_t)timestamp->tv_nsec);
-
- *tstamp->tx_tstamp = 0;
- rte_wmb();
-
- return 0;
-}
-
-int
-otx2_nix_timesync_adjust_time(struct rte_eth_dev *eth_dev, int64_t delta)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- int rc;
-
- /* Adjust the frequency so the tick increments at 10^9 ticks per sec */
- if (delta < PTP_FREQ_ADJUST && delta > -PTP_FREQ_ADJUST) {
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_ADJFINE;
- req->scaled_ppm = delta;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
- /* Since the frequency of PTP comp register is tuned, delta and
- * freq mult calculation for deriving PTP_HI from timestamp
- * counter should be done again.
- */
- rc = otx2_nix_raw_clock_tsc_conv(dev);
- if (rc)
- otx2_err("Failed to calculate delta and freq mult");
- }
- dev->systime_tc.nsec += delta;
- dev->rx_tstamp_tc.nsec += delta;
- dev->tx_tstamp_tc.nsec += delta;
-
- return 0;
-}
-
-int
-otx2_nix_timesync_write_time(struct rte_eth_dev *eth_dev,
- const struct timespec *ts)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t ns;
-
- ns = rte_timespec_to_ns(ts);
- /* Set the time counters to a new value. */
- dev->systime_tc.nsec = ns;
- dev->rx_tstamp_tc.nsec = ns;
- dev->tx_tstamp_tc.nsec = ns;
-
- return 0;
-}
-
-int
-otx2_nix_timesync_read_time(struct rte_eth_dev *eth_dev, struct timespec *ts)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct ptp_req *req;
- struct ptp_rsp *rsp;
- uint64_t ns;
- int rc;
-
- req = otx2_mbox_alloc_msg_ptp_op(mbox);
- req->op = PTP_OP_GET_CLOCK;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- ns = rte_timecounter_update(&dev->systime_tc, rsp->clk);
- *ts = rte_ns_to_timespec(ns);
-
- otx2_nix_dbg("PTP time read: %"PRIu64" .%09"PRIu64"",
- (uint64_t)ts->tv_sec, (uint64_t)ts->tv_nsec);
-
- return 0;
-}
-
-
-int
-otx2_nix_read_clock(struct rte_eth_dev *eth_dev, uint64_t *clock)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* This API returns the raw PTP HI clock value. Since LFs don't
-  * have direct access to the PTP registers, fetching the value
-  * requires an mbox message to the AF. In the fastpath, reading it
-  * for every packet (which involves an mbox call) becomes very
-  * expensive, so instead derive the PTP HI clock value from the
-  * tsc using the freq_mult and clk_delta calculated during the
-  * configure stage.
-  */
- *clock = (rte_get_tsc_cycles() + dev->clk_delta) * dev->clk_freq_mult;
-
- return 0;
-}
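
The PTP code above avoids a per-packet mailbox call by sampling the PTP HI clock and the tsc once, then deriving later PTP values as (tsc + clk_delta) * clk_freq_mult. A worked sketch of that derivation with made-up sample values:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumed ratio of PTP HI clock frequency to tsc frequency. */
        double mult = 4.0;
        /* One (ptp, tsc) sample pair taken at configure time. */
        uint64_t ptp0 = 1000000, tsc0 = 240000;
        uint64_t delta = (uint64_t)(ptp0 / mult) - tsc0; /* 10000 */
        /* A later tsc read converts without touching the mbox. */
        uint64_t tsc_now = 250000;
        uint64_t ptp_now = (uint64_t)((tsc_now + delta) * mult);

        printf("ptp_now=%llu\n", (unsigned long long)ptp_now);
        /* prints ptp_now=1040000 */
        return 0;
    }
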
diff --git a/drivers/net/octeontx2/otx2_rss.c b/drivers/net/octeontx2/otx2_rss.c
deleted file mode 100644
index 68cef1caa3..0000000000
--- a/drivers/net/octeontx2/otx2_rss.c
+++ /dev/null
@@ -1,427 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include "otx2_ethdev.h"
-
-int
-otx2_nix_rss_tbl_init(struct otx2_eth_dev *dev,
- uint8_t group, uint16_t *ind_tbl)
-{
- struct otx2_rss_info *rss = &dev->rss_info;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_req *req;
- int rc, idx;
-
- for (idx = 0; idx < rss->rss_size; idx++) {
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req)
- return -ENOMEM;
- }
- req->rss.rq = ind_tbl[idx];
- /* Fill AQ info */
- req->qidx = (group * rss->rss_size) + idx;
- req->ctype = NIX_AQ_CTYPE_RSS;
- req->op = NIX_AQ_INSTOP_INIT;
-
- if (!dev->lock_rx_ctx)
- continue;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req) {
- /* The shared memory buffer can be full.
- * Flush it and retry
- */
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- if (!req)
- return -ENOMEM;
- }
- req->rss.rq = ind_tbl[idx];
- /* Fill AQ info */
- req->qidx = (group * rss->rss_size) + idx;
- req->ctype = NIX_AQ_CTYPE_RSS;
- req->op = NIX_AQ_INSTOP_LOCK;
- }
-
- otx2_mbox_msg_send(mbox, 0);
- rc = otx2_mbox_wait_for_rsp(mbox, 0);
- if (rc < 0)
- return rc;
-
- return 0;
-}
-
-int
-otx2_nix_dev_reta_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_rss_info *rss = &dev->rss_info;
- int rc, i, j;
- int idx = 0;
-
- rc = -EINVAL;
- if (reta_size != dev->rss_info.rss_size) {
- otx2_err("Size of hash lookup table configured "
- "(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, dev->rss_info.rss_size);
- goto fail;
- }
-
- /* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++) {
- if ((reta_conf[i].mask >> j) & 0x01)
- rss->ind_tbl[idx] = reta_conf[i].reta[j];
- idx++;
- }
- }
-
- return otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
-
-fail:
- return rc;
-}
-
-int
-otx2_nix_dev_reta_query(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_reta_entry64 *reta_conf,
- uint16_t reta_size)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_rss_info *rss = &dev->rss_info;
- int rc, i, j;
-
- rc = -EINVAL;
-
- if (reta_size != dev->rss_info.rss_size) {
- otx2_err("Size of hash lookup table configured "
- "(%d) doesn't match the number hardware can supported "
- "(%d)", reta_size, dev->rss_info.rss_size);
- goto fail;
- }
-
- /* Copy RETA table */
- for (i = 0; i < (dev->rss_info.rss_size / RTE_ETH_RETA_GROUP_SIZE); i++) {
- for (j = 0; j < RTE_ETH_RETA_GROUP_SIZE; j++)
- if ((reta_conf[i].mask >> j) & 0x01)
- reta_conf[i].reta[j] = rss->ind_tbl[(i * RTE_ETH_RETA_GROUP_SIZE) + j];
- }
-
- return 0;
-
-fail:
- return rc;
-}
-
-void
-otx2_nix_rss_set_key(struct otx2_eth_dev *dev, uint8_t *key,
- uint32_t key_len)
-{
- const uint8_t default_key[NIX_HASH_KEY_SIZE] = {
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD,
- 0xFE, 0xED, 0x0B, 0xAD, 0xFE, 0xED, 0x0B, 0xAD
- };
- struct otx2_rss_info *rss = &dev->rss_info;
- uint64_t *keyptr;
- uint64_t val;
- uint32_t idx;
-
- if (key == NULL || key_len == 0) {
- keyptr = (uint64_t *)(uintptr_t)default_key;
- key_len = NIX_HASH_KEY_SIZE;
- memset(rss->key, 0, key_len);
- } else {
- memcpy(rss->key, key, key_len);
- keyptr = (uint64_t *)rss->key;
- }
-
- for (idx = 0; idx < (key_len >> 3); idx++) {
- val = rte_cpu_to_be_64(*keyptr);
- otx2_write64(val, dev->base + NIX_LF_RX_SECRETX(idx));
- keyptr++;
- }
-}
-
-static void
-rss_get_key(struct otx2_eth_dev *dev, uint8_t *key)
-{
- uint64_t *keyptr = (uint64_t *)key;
- uint64_t val;
- int idx;
-
- for (idx = 0; idx < (NIX_HASH_KEY_SIZE >> 3); idx++) {
- val = otx2_read64(dev->base + NIX_LF_RX_SECRETX(idx));
- *keyptr = rte_be_to_cpu_64(val);
- keyptr++;
- }
-}
-
-#define RSS_IPV4_ENABLE ( \
- RTE_ETH_RSS_IPV4 | \
- RTE_ETH_RSS_FRAG_IPV4 | \
- RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
-
-#define RSS_IPV6_ENABLE ( \
- RTE_ETH_RSS_IPV6 | \
- RTE_ETH_RSS_FRAG_IPV6 | \
- RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
- RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
- RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
-
-#define RSS_IPV6_EX_ENABLE ( \
- RTE_ETH_RSS_IPV6_EX | \
- RTE_ETH_RSS_IPV6_TCP_EX | \
- RTE_ETH_RSS_IPV6_UDP_EX)
-
-#define RSS_MAX_LEVELS 3
-
-#define RSS_IPV4_INDEX 0
-#define RSS_IPV6_INDEX 1
-#define RSS_TCP_INDEX 2
-#define RSS_UDP_INDEX 3
-#define RSS_SCTP_INDEX 4
-#define RSS_DMAC_INDEX 5
-
-uint32_t
-otx2_rss_ethdev_to_nix(struct otx2_eth_dev *dev, uint64_t ethdev_rss,
- uint8_t rss_level)
-{
- uint32_t flow_key_type[RSS_MAX_LEVELS][6] = {
- {
- FLOW_KEY_TYPE_IPV4, FLOW_KEY_TYPE_IPV6,
- FLOW_KEY_TYPE_TCP, FLOW_KEY_TYPE_UDP,
- FLOW_KEY_TYPE_SCTP, FLOW_KEY_TYPE_ETH_DMAC
- },
- {
- FLOW_KEY_TYPE_INNR_IPV4, FLOW_KEY_TYPE_INNR_IPV6,
- FLOW_KEY_TYPE_INNR_TCP, FLOW_KEY_TYPE_INNR_UDP,
- FLOW_KEY_TYPE_INNR_SCTP, FLOW_KEY_TYPE_INNR_ETH_DMAC
- },
- {
- FLOW_KEY_TYPE_IPV4 | FLOW_KEY_TYPE_INNR_IPV4,
- FLOW_KEY_TYPE_IPV6 | FLOW_KEY_TYPE_INNR_IPV6,
- FLOW_KEY_TYPE_TCP | FLOW_KEY_TYPE_INNR_TCP,
- FLOW_KEY_TYPE_UDP | FLOW_KEY_TYPE_INNR_UDP,
- FLOW_KEY_TYPE_SCTP | FLOW_KEY_TYPE_INNR_SCTP,
- FLOW_KEY_TYPE_ETH_DMAC | FLOW_KEY_TYPE_INNR_ETH_DMAC
- }
- };
- uint32_t flowkey_cfg = 0;
-
- dev->rss_info.nix_rss = ethdev_rss;
-
- if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD &&
- dev->npc_flow.switch_header_type == OTX2_PRIV_FLAGS_CH_LEN_90B) {
- flowkey_cfg |= FLOW_KEY_TYPE_CH_LEN_90B;
- }
-
- if (ethdev_rss & RTE_ETH_RSS_C_VLAN)
- flowkey_cfg |= FLOW_KEY_TYPE_VLAN;
-
- if (ethdev_rss & RTE_ETH_RSS_L3_SRC_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L3_SRC;
-
- if (ethdev_rss & RTE_ETH_RSS_L3_DST_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L3_DST;
-
- if (ethdev_rss & RTE_ETH_RSS_L4_SRC_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L4_SRC;
-
- if (ethdev_rss & RTE_ETH_RSS_L4_DST_ONLY)
- flowkey_cfg |= FLOW_KEY_TYPE_L4_DST;
-
- if (ethdev_rss & RSS_IPV4_ENABLE)
- flowkey_cfg |= flow_key_type[rss_level][RSS_IPV4_INDEX];
-
- if (ethdev_rss & RSS_IPV6_ENABLE)
- flowkey_cfg |= flow_key_type[rss_level][RSS_IPV6_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_TCP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_TCP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_UDP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_UDP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_SCTP)
- flowkey_cfg |= flow_key_type[rss_level][RSS_SCTP_INDEX];
-
- if (ethdev_rss & RTE_ETH_RSS_L2_PAYLOAD)
- flowkey_cfg |= flow_key_type[rss_level][RSS_DMAC_INDEX];
-
- if (ethdev_rss & RSS_IPV6_EX_ENABLE)
- flowkey_cfg |= FLOW_KEY_TYPE_IPV6_EXT;
-
- if (ethdev_rss & RTE_ETH_RSS_PORT)
- flowkey_cfg |= FLOW_KEY_TYPE_PORT;
-
- if (ethdev_rss & RTE_ETH_RSS_NVGRE)
- flowkey_cfg |= FLOW_KEY_TYPE_NVGRE;
-
- if (ethdev_rss & RTE_ETH_RSS_VXLAN)
- flowkey_cfg |= FLOW_KEY_TYPE_VXLAN;
-
- if (ethdev_rss & RTE_ETH_RSS_GENEVE)
- flowkey_cfg |= FLOW_KEY_TYPE_GENEVE;
-
- if (ethdev_rss & RTE_ETH_RSS_GTPU)
- flowkey_cfg |= FLOW_KEY_TYPE_GTPU;
-
- return flowkey_cfg;
-}
-
-int
-otx2_rss_set_hf(struct otx2_eth_dev *dev, uint32_t flowkey_cfg,
- uint8_t *alg_idx, uint8_t group, int mcam_index)
-{
- struct nix_rss_flowkey_cfg_rsp *rss_rsp;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_rss_flowkey_cfg *cfg;
- int rc;
-
- rc = -EINVAL;
-
- dev->rss_info.flowkey_cfg = flowkey_cfg;
-
- cfg = otx2_mbox_alloc_msg_nix_rss_flowkey_cfg(mbox);
-
- cfg->flowkey_cfg = flowkey_cfg;
- cfg->mcam_index = mcam_index; /* -1 indicates default group */
- cfg->group = group; /* 0 is default group */
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rss_rsp);
- if (rc)
- return rc;
-
- if (alg_idx)
- *alg_idx = rss_rsp->alg_idx;
-
- return rc;
-}
-
-int
-otx2_nix_rss_hash_update(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint8_t rss_hash_level;
- uint32_t flowkey_cfg;
- uint8_t alg_idx;
- int rc;
-
- rc = -EINVAL;
-
- if (rss_conf->rss_key && rss_conf->rss_key_len != NIX_HASH_KEY_SIZE) {
- otx2_err("Hash key size mismatch %d vs %d",
- rss_conf->rss_key_len, NIX_HASH_KEY_SIZE);
- goto fail;
- }
-
- if (rss_conf->rss_key)
- otx2_nix_rss_set_key(dev, rss_conf->rss_key,
- (uint32_t)rss_conf->rss_key_len);
-
- rss_hash_level = RTE_ETH_RSS_LEVEL(rss_conf->rss_hf);
- if (rss_hash_level)
- rss_hash_level -= 1;
- flowkey_cfg =
- otx2_rss_ethdev_to_nix(dev, rss_conf->rss_hf, rss_hash_level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
- NIX_DEFAULT_RSS_CTX_GROUP,
- NIX_DEFAULT_RSS_MCAM_IDX);
- if (rc) {
- otx2_err("Failed to set RSS hash function rc=%d", rc);
- return rc;
- }
-
- dev->rss_info.alg_idx = alg_idx;
-
-fail:
- return rc;
-}
-
-int
-otx2_nix_rss_hash_conf_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_rss_conf *rss_conf)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (rss_conf->rss_key)
- rss_get_key(dev, rss_conf->rss_key);
-
- rss_conf->rss_key_len = NIX_HASH_KEY_SIZE;
- rss_conf->rss_hf = dev->rss_info.nix_rss;
-
- return 0;
-}
-
-int
-otx2_nix_rss_config(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t idx, qcnt = eth_dev->data->nb_rx_queues;
- uint8_t rss_hash_level;
- uint32_t flowkey_cfg;
- uint64_t rss_hf;
- uint8_t alg_idx;
- int rc;
-
- /* Skip further configuration if selected mode is not RSS */
- if (eth_dev->data->dev_conf.rxmode.mq_mode != RTE_ETH_MQ_RX_RSS || !qcnt)
- return 0;
-
- /* Update default RSS key and cfg */
- otx2_nix_rss_set_key(dev, NULL, 0);
-
- /* Update default RSS RETA */
- for (idx = 0; idx < dev->rss_info.rss_size; idx++)
- dev->rss_info.ind_tbl[idx] = idx % qcnt;
-
- /* Init RSS table context */
- rc = otx2_nix_rss_tbl_init(dev, 0, dev->rss_info.ind_tbl);
- if (rc) {
- otx2_err("Failed to init RSS table rc=%d", rc);
- return rc;
- }
-
- rss_hf = eth_dev->data->dev_conf.rx_adv_conf.rss_conf.rss_hf;
- rss_hash_level = RTE_ETH_RSS_LEVEL(rss_hf);
- if (rss_hash_level)
- rss_hash_level -= 1;
- flowkey_cfg = otx2_rss_ethdev_to_nix(dev, rss_hf, rss_hash_level);
-
- rc = otx2_rss_set_hf(dev, flowkey_cfg, &alg_idx,
- NIX_DEFAULT_RSS_CTX_GROUP,
- NIX_DEFAULT_RSS_MCAM_IDX);
- if (rc) {
- otx2_err("Failed to set RSS hash function rc=%d", rc);
- return rc;
- }
-
- dev->rss_info.alg_idx = alg_idx;
-
- return 0;
-}
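
otx2_nix_dev_reta_update() above walks the RETA in groups of RTE_ETH_RETA_GROUP_SIZE entries, applying only the slots whose bit is set in the per-group mask. A self-contained sketch of that masked update; the struct below mirrors, but is not, rte_eth_rss_reta_entry64:

    #include <stdint.h>
    #include <stdio.h>

    #define RETA_GROUP_SIZE 64

    struct reta_group {
        uint64_t mask; /* bit j set => reta[j] below is valid */
        uint16_t reta[RETA_GROUP_SIZE];
    };

    /* Only slots whose mask bit is set overwrite the table. */
    static void apply_reta(uint16_t *tbl, const struct reta_group *grp,
                           int n_groups)
    {
        int i, j, idx = 0;

        for (i = 0; i < n_groups; i++)
            for (j = 0; j < RETA_GROUP_SIZE; j++, idx++)
                if ((grp[i].mask >> j) & 0x1)
                    tbl[idx] = grp[i].reta[j];
    }

    int main(void)
    {
        uint16_t tbl[RETA_GROUP_SIZE] = { 0 };
        struct reta_group grp = { .mask = 1ULL << 3 };

        grp.reta[3] = 7;
        apply_reta(tbl, &grp, 1);
        printf("tbl[3]=%u tbl[4]=%u\n", tbl[3], tbl[4]); /* 7 0 */
        return 0;
    }
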
diff --git a/drivers/net/octeontx2/otx2_rx.c b/drivers/net/octeontx2/otx2_rx.c
deleted file mode 100644
index 5ee1aed786..0000000000
--- a/drivers/net/octeontx2/otx2_rx.c
+++ /dev/null
@@ -1,430 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_vect.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_rx.h"
-
-#define NIX_DESCS_PER_LOOP 4
-#define CQE_CAST(x) ((struct nix_cqe_hdr_s *)(x))
-#define CQE_SZ(x) ((x) * NIX_CQ_ENTRY_SZ)
-
-static inline uint16_t
-nix_rx_nb_pkts(struct otx2_eth_rxq *rxq, const uint64_t wdata,
- const uint16_t pkts, const uint32_t qmask)
-{
- uint32_t available = rxq->available;
-
- /* Update the available count if cached value is not enough */
- if (unlikely(available < pkts)) {
- uint64_t reg, head, tail;
-
- /* Use LDADDA version to avoid reorder */
- reg = otx2_atomic64_add_sync(wdata, rxq->cq_status);
- /* CQ_OP_STATUS operation error */
- if (reg & BIT_ULL(CQ_OP_STAT_OP_ERR) ||
- reg & BIT_ULL(CQ_OP_STAT_CQ_ERR))
- return 0;
-
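-		/* CQ_OP_STATUS returns the ring state in its low bits:
-		 * tail in [19:0] and head in [39:20]; their distance,
-		 * modulo the ring size, is the number of valid CQEs.
-		 */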
- tail = reg & 0xFFFFF;
- head = (reg >> 20) & 0xFFFFF;
- if (tail < head)
- available = tail - head + qmask + 1;
- else
- available = tail - head;
-
- rxq->available = available;
- }
-
- return RTE_MIN(pkts, available);
-}
-
-static __rte_always_inline uint16_t
-nix_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- struct otx2_eth_rxq *rxq = rx_queue;
- const uint64_t mbuf_init = rxq->mbuf_initializer;
- const void *lookup_mem = rxq->lookup_mem;
- const uint64_t data_off = rxq->data_off;
- const uintptr_t desc = rxq->desc;
- const uint64_t wdata = rxq->wdata;
- const uint32_t qmask = rxq->qmask;
- uint16_t packets = 0, nb_pkts;
- uint32_t head = rxq->head;
- struct nix_cqe_hdr_s *cq;
- struct rte_mbuf *mbuf;
-
- nb_pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
-
- while (packets < nb_pkts) {
- /* Prefetch N desc ahead */
- rte_prefetch_non_temporal((void *)(desc +
- (CQE_SZ((head + 2) & qmask))));
- cq = (struct nix_cqe_hdr_s *)(desc + CQE_SZ(head));
-
- mbuf = nix_get_mbuf_from_cqe(cq, data_off);
-
- otx2_nix_cqe_to_mbuf(cq, cq->tag, mbuf, lookup_mem, mbuf_init,
- flags);
- otx2_nix_mbuf_to_tstamp(mbuf, rxq->tstamp, flags,
- (uint64_t *)((uint8_t *)mbuf + data_off));
- rx_pkts[packets++] = mbuf;
- otx2_prefetch_store_keep(mbuf);
- head++;
- head &= qmask;
- }
-
- rxq->head = head;
- rxq->available -= nb_pkts;
-
- /* Free all the CQs that we've processed */
- otx2_write64((wdata | nb_pkts), rxq->cq_door);
-
- return nb_pkts;
-}
-
-#if defined(RTE_ARCH_ARM64)
-
-static __rte_always_inline uint64_t
-nix_vlan_update(const uint64_t w2, uint64_t ol_flags, uint8x16_t *f)
-{
- if (w2 & BIT_ULL(21) /* vtag0_gone */) {
- ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
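-		/* The stripped VLAN TCI sits in w2[47:32]; lane 5 of the
-		 * 16-bit view of rx_descriptor_fields1 is mbuf->vlan_tci.
-		 */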
- *f = vsetq_lane_u16((uint16_t)(w2 >> 32), *f, 5);
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline uint64_t
-nix_qinq_update(const uint64_t w2, uint64_t ol_flags, struct rte_mbuf *mbuf)
-{
- if (w2 & BIT_ULL(23) /* vtag1_gone */) {
- ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
- mbuf->vlan_tci_outer = (uint16_t)(w2 >> 48);
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline uint16_t
-nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
-	struct otx2_eth_rxq *rxq = rx_queue;
-	uint16_t packets = 0;
- uint64x2_t cq0_w8, cq1_w8, cq2_w8, cq3_w8, mbuf01, mbuf23;
- const uint64_t mbuf_initializer = rxq->mbuf_initializer;
- const uint64x2_t data_off = vdupq_n_u64(rxq->data_off);
- uint64_t ol_flags0, ol_flags1, ol_flags2, ol_flags3;
- uint64x2_t rearm0 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm1 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm2 = vdupq_n_u64(mbuf_initializer);
- uint64x2_t rearm3 = vdupq_n_u64(mbuf_initializer);
- struct rte_mbuf *mbuf0, *mbuf1, *mbuf2, *mbuf3;
- const uint16_t *lookup_mem = rxq->lookup_mem;
- const uint32_t qmask = rxq->qmask;
- const uint64_t wdata = rxq->wdata;
- const uintptr_t desc = rxq->desc;
- uint8x16_t f0, f1, f2, f3;
- uint32_t head = rxq->head;
- uint16_t pkts_left;
-
- pkts = nix_rx_nb_pkts(rxq, wdata, pkts, qmask);
- pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
-
-	/* Packet count has to be floor-aligned to NIX_DESCS_PER_LOOP */
- pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
-
- while (packets < pkts) {
- /* Exit loop if head is about to wrap and become unaligned */
- if (((head + NIX_DESCS_PER_LOOP - 1) & qmask) <
- NIX_DESCS_PER_LOOP) {
- pkts_left += (pkts - packets);
- break;
- }
-
- const uintptr_t cq0 = desc + CQE_SZ(head);
-
- /* Prefetch N desc ahead */
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(8)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(9)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(10)));
- rte_prefetch_non_temporal((void *)(cq0 + CQE_SZ(11)));
-
- /* Get NIX_RX_SG_S for size and buffer pointer */
- cq0_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(0) + 64));
- cq1_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(1) + 64));
- cq2_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(2) + 64));
- cq3_w8 = vld1q_u64((uint64_t *)(cq0 + CQE_SZ(3) + 64));
-
- /* Extract mbuf from NIX_RX_SG_S */
- mbuf01 = vzip2q_u64(cq0_w8, cq1_w8);
- mbuf23 = vzip2q_u64(cq2_w8, cq3_w8);
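-		/* The SG buffer address points at the packet data;
-		 * subtracting data_off (saturating) recovers the mbuf
-		 * address.
-		 */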
- mbuf01 = vqsubq_u64(mbuf01, data_off);
- mbuf23 = vqsubq_u64(mbuf23, data_off);
-
- /* Move mbufs to scalar registers for future use */
- mbuf0 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 0);
- mbuf1 = (struct rte_mbuf *)vgetq_lane_u64(mbuf01, 1);
- mbuf2 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 0);
- mbuf3 = (struct rte_mbuf *)vgetq_lane_u64(mbuf23, 1);
-
-		/* Shuffle mask to extract pkt_len and data_len from NIX_RX_SG_S */
- const uint8x16_t shuf_msk = {
- 0xFF, 0xFF, /* pkt_type set as unknown */
- 0xFF, 0xFF, /* pkt_type set as unknown */
- 0, 1, /* octet 1~0, low 16 bits pkt_len */
- 0xFF, 0xFF, /* skip high 16 bits pkt_len, zero out */
- 0, 1, /* octet 1~0, 16 bits data_len */
- 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF
- };
-
- /* Form the rx_descriptor_fields1 with pkt_len and data_len */
- f0 = vqtbl1q_u8(cq0_w8, shuf_msk);
- f1 = vqtbl1q_u8(cq1_w8, shuf_msk);
- f2 = vqtbl1q_u8(cq2_w8, shuf_msk);
- f3 = vqtbl1q_u8(cq3_w8, shuf_msk);
-
-		/* Load CQE word0 and word1 */
- uint64_t cq0_w0 = ((uint64_t *)(cq0 + CQE_SZ(0)))[0];
- uint64_t cq0_w1 = ((uint64_t *)(cq0 + CQE_SZ(0)))[1];
- uint64_t cq1_w0 = ((uint64_t *)(cq0 + CQE_SZ(1)))[0];
- uint64_t cq1_w1 = ((uint64_t *)(cq0 + CQE_SZ(1)))[1];
- uint64_t cq2_w0 = ((uint64_t *)(cq0 + CQE_SZ(2)))[0];
- uint64_t cq2_w1 = ((uint64_t *)(cq0 + CQE_SZ(2)))[1];
- uint64_t cq3_w0 = ((uint64_t *)(cq0 + CQE_SZ(3)))[0];
- uint64_t cq3_w1 = ((uint64_t *)(cq0 + CQE_SZ(3)))[1];
-
- if (flags & NIX_RX_OFFLOAD_RSS_F) {
- /* Fill rss in the rx_descriptor_fields1 */
- f0 = vsetq_lane_u32(cq0_w0, f0, 3);
- f1 = vsetq_lane_u32(cq1_w0, f1, 3);
- f2 = vsetq_lane_u32(cq2_w0, f2, 3);
- f3 = vsetq_lane_u32(cq3_w0, f3, 3);
- ol_flags0 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags1 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags2 = RTE_MBUF_F_RX_RSS_HASH;
- ol_flags3 = RTE_MBUF_F_RX_RSS_HASH;
- } else {
- ol_flags0 = 0; ol_flags1 = 0;
- ol_flags2 = 0; ol_flags3 = 0;
- }
-
- if (flags & NIX_RX_OFFLOAD_PTYPE_F) {
- /* Fill packet_type in the rx_descriptor_fields1 */
- f0 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq0_w1),
- f0, 0);
- f1 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq1_w1),
- f1, 0);
- f2 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq2_w1),
- f2, 0);
- f3 = vsetq_lane_u32(nix_ptype_get(lookup_mem, cq3_w1),
- f3, 0);
- }
-
- if (flags & NIX_RX_OFFLOAD_CHECKSUM_F) {
- ol_flags0 |= nix_rx_olflags_get(lookup_mem, cq0_w1);
- ol_flags1 |= nix_rx_olflags_get(lookup_mem, cq1_w1);
- ol_flags2 |= nix_rx_olflags_get(lookup_mem, cq2_w1);
- ol_flags3 |= nix_rx_olflags_get(lookup_mem, cq3_w1);
- }
-
- if (flags & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
- uint64_t cq0_w2 = *(uint64_t *)(cq0 + CQE_SZ(0) + 16);
- uint64_t cq1_w2 = *(uint64_t *)(cq0 + CQE_SZ(1) + 16);
- uint64_t cq2_w2 = *(uint64_t *)(cq0 + CQE_SZ(2) + 16);
- uint64_t cq3_w2 = *(uint64_t *)(cq0 + CQE_SZ(3) + 16);
-
- ol_flags0 = nix_vlan_update(cq0_w2, ol_flags0, &f0);
- ol_flags1 = nix_vlan_update(cq1_w2, ol_flags1, &f1);
- ol_flags2 = nix_vlan_update(cq2_w2, ol_flags2, &f2);
- ol_flags3 = nix_vlan_update(cq3_w2, ol_flags3, &f3);
-
- ol_flags0 = nix_qinq_update(cq0_w2, ol_flags0, mbuf0);
- ol_flags1 = nix_qinq_update(cq1_w2, ol_flags1, mbuf1);
- ol_flags2 = nix_qinq_update(cq2_w2, ol_flags2, mbuf2);
- ol_flags3 = nix_qinq_update(cq3_w2, ol_flags3, mbuf3);
- }
-
- if (flags & NIX_RX_OFFLOAD_MARK_UPDATE_F) {
- ol_flags0 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(0) + 38), ol_flags0, mbuf0);
- ol_flags1 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(1) + 38), ol_flags1, mbuf1);
- ol_flags2 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(2) + 38), ol_flags2, mbuf2);
- ol_flags3 = nix_update_match_id(*(uint16_t *)
- (cq0 + CQE_SZ(3) + 38), ol_flags3, mbuf3);
- }
-
- /* Form rearm_data with ol_flags */
- rearm0 = vsetq_lane_u64(ol_flags0, rearm0, 1);
- rearm1 = vsetq_lane_u64(ol_flags1, rearm1, 1);
- rearm2 = vsetq_lane_u64(ol_flags2, rearm2, 1);
- rearm3 = vsetq_lane_u64(ol_flags3, rearm3, 1);
-
- /* Update rx_descriptor_fields1 */
- vst1q_u64((uint64_t *)mbuf0->rx_descriptor_fields1, f0);
- vst1q_u64((uint64_t *)mbuf1->rx_descriptor_fields1, f1);
- vst1q_u64((uint64_t *)mbuf2->rx_descriptor_fields1, f2);
- vst1q_u64((uint64_t *)mbuf3->rx_descriptor_fields1, f3);
-
- /* Update rearm_data */
- vst1q_u64((uint64_t *)mbuf0->rearm_data, rearm0);
- vst1q_u64((uint64_t *)mbuf1->rearm_data, rearm1);
- vst1q_u64((uint64_t *)mbuf2->rearm_data, rearm2);
- vst1q_u64((uint64_t *)mbuf3->rearm_data, rearm3);
-
-		/* Mark that there are no more segments */
- mbuf0->next = NULL;
- mbuf1->next = NULL;
- mbuf2->next = NULL;
- mbuf3->next = NULL;
-
- /* Store the mbufs to rx_pkts */
- vst1q_u64((uint64_t *)&rx_pkts[packets], mbuf01);
- vst1q_u64((uint64_t *)&rx_pkts[packets + 2], mbuf23);
-
- /* Prefetch mbufs */
- otx2_prefetch_store_keep(mbuf0);
- otx2_prefetch_store_keep(mbuf1);
- otx2_prefetch_store_keep(mbuf2);
- otx2_prefetch_store_keep(mbuf3);
-
- /* Mark mempool obj as "get" as it is alloc'ed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf0->pool, (void **)&mbuf0, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf1->pool, (void **)&mbuf1, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf2->pool, (void **)&mbuf2, 1, 1);
- RTE_MEMPOOL_CHECK_COOKIES(mbuf3->pool, (void **)&mbuf3, 1, 1);
-
- /* Advance head pointer and packets */
- head += NIX_DESCS_PER_LOOP; head &= qmask;
- packets += NIX_DESCS_PER_LOOP;
- }
-
- rxq->head = head;
- rxq->available -= packets;
-
- rte_io_wmb();
- /* Free all the CQs that we've processed */
- otx2_write64((rxq->wdata | packets), rxq->cq_door);
-
- if (unlikely(pkts_left))
- packets += nix_recv_pkts(rx_queue, &rx_pkts[packets],
- pkts_left, flags);
-
- return packets;
-}
-
-#else
-
-static inline uint16_t
-nix_recv_pkts_vector(void *rx_queue, struct rte_mbuf **rx_pkts,
- uint16_t pkts, const uint16_t flags)
-{
- RTE_SET_USED(rx_queue);
- RTE_SET_USED(rx_pkts);
- RTE_SET_USED(pkts);
- RTE_SET_USED(flags);
-
- return 0;
-}
-
-#endif
-
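-/* Expand one Rx burst function per offload flag combination; the
- * 2x2x2x2x2x2x2 lookup tables below are indexed by the same seven
- * flag bits.
- */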
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- return nix_recv_pkts(rx_queue, rx_pkts, pkts, (flags)); \
-} \
- \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_mseg_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
- return nix_recv_pkts(rx_queue, rx_pkts, pkts, \
- (flags) | NIX_RX_MULTI_SEG_F); \
-} \
- \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_recv_pkts_vec_ ## name(void *rx_queue, \
- struct rte_mbuf **rx_pkts, uint16_t pkts) \
-{ \
-	/* Timestamp offload is not supported by the vector path */   \
- if ((flags) & NIX_RX_OFFLOAD_TSTAMP_F) \
- return 0; \
- return nix_recv_pkts_vector(rx_queue, rx_pkts, pkts, (flags)); \
-} \
-
-NIX_RX_FASTPATH_MODES
-#undef R
-
-static inline void
-pick_rx_func(struct rte_eth_dev *eth_dev,
- const eth_rx_burst_t rx_burst[2][2][2][2][2][2][2])
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- /* [SEC] [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
- eth_dev->rx_pkt_burst = rx_burst
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_SECURITY_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_TSTAMP_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_MARK_UPDATE_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_VLAN_STRIP_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_CHECKSUM_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_PTYPE_F)]
- [!!(dev->rx_offload_flags & NIX_RX_OFFLOAD_RSS_F)];
-}
-
-void
-otx2_eth_set_rx_function(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- const eth_rx_burst_t nix_eth_rx_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- const eth_rx_burst_t nix_eth_rx_burst_mseg[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_mseg_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
- const eth_rx_burst_t nix_eth_rx_vec_burst[2][2][2][2][2][2][2] = {
-#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_recv_pkts_vec_ ## name,
-
-NIX_RX_FASTPATH_MODES
-#undef R
- };
-
-	/* When PTP is enabled, pick the scalar Rx function since most
-	 * PTP applications receive bursts of a single packet.
-	 */
- if (dev->scalar_ena || dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
- pick_rx_func(eth_dev, nix_eth_rx_burst);
- else
- pick_rx_func(eth_dev, nix_eth_rx_vec_burst);
-
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
- pick_rx_func(eth_dev, nix_eth_rx_burst_mseg);
-
-	/* Keep the no-offload multi-segment version for the teardown sequence */
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- dev->rx_pkt_burst_no_offload =
- nix_eth_rx_burst_mseg[0][0][0][0][0][0][0];
- rte_mb();
-}
diff --git a/drivers/net/octeontx2/otx2_rx.h b/drivers/net/octeontx2/otx2_rx.h
deleted file mode 100644
index 98406244e2..0000000000
--- a/drivers/net/octeontx2/otx2_rx.h
+++ /dev/null
@@ -1,583 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_RX_H__
-#define __OTX2_RX_H__
-
-#include <rte_ether.h>
-
-#include "otx2_common.h"
-#include "otx2_ethdev_sec.h"
-#include "otx2_ipsec_anti_replay.h"
-#include "otx2_ipsec_fp.h"
-
-/* Default mark value used when none is provided. */
-#define OTX2_FLOW_ACTION_FLAG_DEFAULT 0xffff
-
-#define PTYPE_NON_TUNNEL_WIDTH 16
-#define PTYPE_TUNNEL_WIDTH 12
-#define PTYPE_NON_TUNNEL_ARRAY_SZ BIT(PTYPE_NON_TUNNEL_WIDTH)
-#define PTYPE_TUNNEL_ARRAY_SZ BIT(PTYPE_TUNNEL_WIDTH)
-#define PTYPE_ARRAY_SZ ((PTYPE_NON_TUNNEL_ARRAY_SZ +\
- PTYPE_TUNNEL_ARRAY_SZ) *\
- sizeof(uint16_t))
-
-#define NIX_RX_OFFLOAD_NONE (0)
-#define NIX_RX_OFFLOAD_RSS_F BIT(0)
-#define NIX_RX_OFFLOAD_PTYPE_F BIT(1)
-#define NIX_RX_OFFLOAD_CHECKSUM_F BIT(2)
-#define NIX_RX_OFFLOAD_VLAN_STRIP_F BIT(3)
-#define NIX_RX_OFFLOAD_MARK_UPDATE_F BIT(4)
-#define NIX_RX_OFFLOAD_TSTAMP_F BIT(5)
-#define NIX_RX_OFFLOAD_SECURITY_F BIT(6)
-
-/* Flags to control the cqe_to_mbuf conversion function.
- * Defined from the top bit downwards to denote that they are
- * not used as offload flags when picking the Rx function.
- */
-#define NIX_RX_MULTI_SEG_F BIT(15)
-#define NIX_TIMESYNC_RX_OFFSET 8
-
-/* Inline IPsec offsets */
-
-/* nix_cqe_hdr_s + nix_rx_parse_s + nix_rx_sg_s + nix_iova_s */
-#define INLINE_CPT_RESULT_OFFSET 80
-
-struct otx2_timesync_info {
- uint64_t rx_tstamp;
- rte_iova_t tx_tstamp_iova;
- uint64_t *tx_tstamp;
- uint64_t rx_tstamp_dynflag;
- int tstamp_dynfield_offset;
- uint8_t tx_ready;
- uint8_t rx_ready;
-} __rte_cache_aligned;
-
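-/* Mirrors the first 8 bytes of mbuf->rearm_data:
- * data_off, refcnt, nb_segs and port.
- */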
-union mbuf_initializer {
- struct {
- uint16_t data_off;
- uint16_t refcnt;
- uint16_t nb_segs;
- uint16_t port;
- } fields;
- uint64_t value;
-};
-
-static inline rte_mbuf_timestamp_t *
-otx2_timestamp_dynfield(struct rte_mbuf *mbuf,
- struct otx2_timesync_info *info)
-{
- return RTE_MBUF_DYNFIELD(mbuf,
- info->tstamp_dynfield_offset, rte_mbuf_timestamp_t *);
-}
-
-static __rte_always_inline void
-otx2_nix_mbuf_to_tstamp(struct rte_mbuf *mbuf,
- struct otx2_timesync_info *tstamp, const uint16_t flag,
- uint64_t *tstamp_ptr)
-{
- if ((flag & NIX_RX_OFFLOAD_TSTAMP_F) &&
- (mbuf->data_off == RTE_PKTMBUF_HEADROOM +
- NIX_TIMESYNC_RX_OFFSET)) {
-
- mbuf->pkt_len -= NIX_TIMESYNC_RX_OFFSET;
-
-		/* Read the Rx timestamp inserted by CGX at the
-		 * start of the packet data.
-		 */
- *otx2_timestamp_dynfield(mbuf, tstamp) =
- rte_be_to_cpu_64(*tstamp_ptr);
-		/* RTE_MBUF_F_RX_IEEE1588_TMST needs to be set only
-		 * when PTP packets are received.
-		 */
- if (mbuf->packet_type == RTE_PTYPE_L2_ETHER_TIMESYNC) {
- tstamp->rx_tstamp =
- *otx2_timestamp_dynfield(mbuf, tstamp);
- tstamp->rx_ready = 1;
- mbuf->ol_flags |= RTE_MBUF_F_RX_IEEE1588_PTP |
- RTE_MBUF_F_RX_IEEE1588_TMST |
- tstamp->rx_tstamp_dynflag;
- }
- }
-}
-
-static __rte_always_inline uint64_t
-nix_clear_data_off(uint64_t oldval)
-{
- union mbuf_initializer mbuf_init = { .value = oldval };
-
- mbuf_init.fields.data_off = 0;
- return mbuf_init.value;
-}
-
-static __rte_always_inline struct rte_mbuf *
-nix_get_mbuf_from_cqe(void *cq, const uint64_t data_off)
-{
- rte_iova_t buff;
-
- /* Skip CQE, NIX_RX_PARSE_S and SG HDR(9 DWORDs) and peek buff addr */
- buff = *((rte_iova_t *)((uint64_t *)cq + 9));
- return (struct rte_mbuf *)(buff - data_off);
-}
-
-static __rte_always_inline uint32_t
-nix_ptype_get(const void * const lookup_mem, const uint64_t in)
-{
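-	/* w1 holds the NPC parse result: bits [51:36] index the
-	 * non-tunnel ptype table and bits [63:52] the tunnel table;
-	 * the two lookups are combined into the final packet_type.
-	 */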
- const uint16_t * const ptype = lookup_mem;
- const uint16_t lh_lg_lf = (in & 0xFFF0000000000000) >> 52;
- const uint16_t tu_l2 = ptype[(in & 0x000FFFF000000000) >> 36];
- const uint16_t il4_tu = ptype[PTYPE_NON_TUNNEL_ARRAY_SZ + lh_lg_lf];
-
- return (il4_tu << PTYPE_NON_TUNNEL_WIDTH) | tu_l2;
-}
-
-static __rte_always_inline uint32_t
-nix_rx_olflags_get(const void * const lookup_mem, const uint64_t in)
-{
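-	/* The Rx ol_flags table is laid out right after the ptype
-	 * arrays in lookup_mem; bits [31:20] of w1 index it.
-	 */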
- const uint32_t * const ol_flags = (const uint32_t *)
- ((const uint8_t *)lookup_mem + PTYPE_ARRAY_SZ);
-
- return ol_flags[(in & 0xfff00000) >> 20];
-}
-
-static inline uint64_t
-nix_update_match_id(const uint16_t match_id, uint64_t ol_flags,
- struct rte_mbuf *mbuf)
-{
-	/* There is no separate bit to indicate whether match_id is
-	 * valid, nor a flag to distinguish an RTE_FLOW_ACTION_TYPE_FLAG
-	 * action from an RTE_FLOW_ACTION_TYPE_MARK action.
-	 * The former is handled by treating 0 as the invalid value and
-	 * incrementing/decrementing the match_id pair when MARK is active.
-	 * The latter is handled by reserving OTX2_FLOW_ACTION_FLAG_DEFAULT
-	 * as the match_id used for RTE_FLOW_ACTION_TYPE_FLAG.
-	 * Hence OTX2_FLOW_ACTION_FLAG_DEFAULT - 1 and
-	 * OTX2_FLOW_ACTION_FLAG_DEFAULT cannot be used as mark values,
-	 * i.e. valid mark ids range from 0 to
-	 * OTX2_FLOW_ACTION_FLAG_DEFAULT - 2.
-	 */
- if (likely(match_id)) {
- ol_flags |= RTE_MBUF_F_RX_FDIR;
- if (match_id != OTX2_FLOW_ACTION_FLAG_DEFAULT) {
- ol_flags |= RTE_MBUF_F_RX_FDIR_ID;
- mbuf->hash.fdir.hi = match_id - 1;
- }
- }
-
- return ol_flags;
-}
-
-static __rte_always_inline void
-nix_cqe_xtract_mseg(const struct nix_rx_parse_s *rx,
- struct rte_mbuf *mbuf, uint64_t rearm)
-{
- const rte_iova_t *iova_list;
- struct rte_mbuf *head;
- const rte_iova_t *eol;
- uint8_t nb_segs;
- uint64_t sg;
-
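-	/* NIX_RX_SG_S packs segment sizes 16 bits apiece in the low
-	 * 48 bits and the segment count in bits [49:48].
-	 */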
- sg = *(const uint64_t *)(rx + 1);
- nb_segs = (sg >> 48) & 0x3;
- mbuf->nb_segs = nb_segs;
- mbuf->data_len = sg & 0xFFFF;
- sg = sg >> 16;
-
- eol = ((const rte_iova_t *)(rx + 1) + ((rx->desc_sizem1 + 1) << 1));
-	/* Skip SG_S and the first IOVA */
- iova_list = ((const rte_iova_t *)(rx + 1)) + 2;
- nb_segs--;
-
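-	/* Clear data_off in rearm so chained segments start at offset 0 */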
- rearm = rearm & ~0xFFFF;
-
- head = mbuf;
- while (nb_segs) {
- mbuf->next = ((struct rte_mbuf *)*iova_list) - 1;
- mbuf = mbuf->next;
-
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
-
- mbuf->data_len = sg & 0xFFFF;
- sg = sg >> 16;
- *(uint64_t *)(&mbuf->rearm_data) = rearm;
- nb_segs--;
- iova_list++;
-
- if (!nb_segs && (iova_list + 1 < eol)) {
- sg = *(const uint64_t *)(iova_list);
- nb_segs = (sg >> 48) & 0x3;
- head->nb_segs += nb_segs;
- iova_list = (const rte_iova_t *)(iova_list + 1);
- }
- }
- mbuf->next = NULL;
-}
-
-static __rte_always_inline uint16_t
-nix_rx_sec_cptres_get(const void *cq)
-{
- volatile const struct otx2_cpt_res *res;
-
- res = (volatile const struct otx2_cpt_res *)((const char *)cq +
- INLINE_CPT_RESULT_OFFSET);
-
- return res->u16[0];
-}
-
-static __rte_always_inline void *
-nix_rx_sec_sa_get(const void * const lookup_mem, int spi, uint16_t port)
-{
- const uint64_t *const *sa_tbl = (const uint64_t * const *)
- ((const uint8_t *)lookup_mem + OTX2_NIX_SA_TBL_START);
-
- return (void *)sa_tbl[port][spi];
-}
-
-static __rte_always_inline uint64_t
-nix_rx_sec_mbuf_update(const struct nix_rx_parse_s *rx,
- const struct nix_cqe_hdr_s *cq, struct rte_mbuf *m,
- const void * const lookup_mem)
-{
- uint8_t *l2_ptr, *l3_ptr, *l2_ptr_actual, *l3_ptr_actual;
- struct otx2_ipsec_fp_in_sa *sa;
- uint16_t m_len, l2_len, ip_len;
- struct rte_ipv6_hdr *ip6h;
- struct rte_ipv4_hdr *iph;
- uint16_t *ether_type;
- uint32_t spi;
- int i;
-
- if (unlikely(nix_rx_sec_cptres_get(cq) != OTX2_SEC_COMP_GOOD))
- return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
-
-	/* The lower 20 bits of the tag hold the SPI */
- spi = cq->tag & 0xFFFFF;
-
- sa = nix_rx_sec_sa_get(lookup_mem, spi, m->port);
- *rte_security_dynfield(m) = sa->udata64;
-
- l2_ptr = rte_pktmbuf_mtod(m, uint8_t *);
- l2_len = rx->lcptr - rx->laptr;
- l3_ptr = RTE_PTR_ADD(l2_ptr, l2_len);
-
- if (sa->replay_win_sz) {
- if (cpt_ipsec_ip_antireplay_check(sa, l3_ptr) < 0)
- return RTE_MBUF_F_RX_SEC_OFFLOAD | RTE_MBUF_F_RX_SEC_OFFLOAD_FAILED;
- }
-
- l2_ptr_actual = RTE_PTR_ADD(l2_ptr,
- sizeof(struct otx2_ipsec_fp_res_hdr));
- l3_ptr_actual = RTE_PTR_ADD(l3_ptr,
- sizeof(struct otx2_ipsec_fp_res_hdr));
-
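-	/* Slide the L2 header forward over the IPsec FP response header
-	 * so that it is contiguous with the decrypted inner packet;
-	 * data_off is bumped below to match.
-	 */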
- for (i = l2_len - RTE_ETHER_TYPE_LEN - 1; i >= 0; i--)
- l2_ptr_actual[i] = l2_ptr[i];
-
- m->data_off += sizeof(struct otx2_ipsec_fp_res_hdr);
-
- ether_type = RTE_PTR_SUB(l3_ptr_actual, RTE_ETHER_TYPE_LEN);
-
- iph = (struct rte_ipv4_hdr *)l3_ptr_actual;
- if ((iph->version_ihl >> 4) == 4) {
- ip_len = rte_be_to_cpu_16(iph->total_length);
- *ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4);
- } else {
- ip6h = (struct rte_ipv6_hdr *)iph;
- ip_len = rte_be_to_cpu_16(ip6h->payload_len);
- *ether_type = rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6);
- }
-
- m_len = ip_len + l2_len;
- m->data_len = m_len;
- m->pkt_len = m_len;
- return RTE_MBUF_F_RX_SEC_OFFLOAD;
-}
-
-static __rte_always_inline void
-otx2_nix_cqe_to_mbuf(const struct nix_cqe_hdr_s *cq, const uint32_t tag,
- struct rte_mbuf *mbuf, const void *lookup_mem,
- const uint64_t val, const uint16_t flag)
-{
- const struct nix_rx_parse_s *rx =
- (const struct nix_rx_parse_s *)((const uint64_t *)cq + 1);
- const uint64_t w1 = *(const uint64_t *)rx;
- const uint16_t len = rx->pkt_lenm1 + 1;
- uint64_t ol_flags = 0;
-
- /* Mark mempool obj as "get" as it is alloc'ed by NIX */
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf, 1, 1);
-
- if (flag & NIX_RX_OFFLOAD_PTYPE_F)
- mbuf->packet_type = nix_ptype_get(lookup_mem, w1);
- else
- mbuf->packet_type = 0;
-
- if (flag & NIX_RX_OFFLOAD_RSS_F) {
- mbuf->hash.rss = tag;
- ol_flags |= RTE_MBUF_F_RX_RSS_HASH;
- }
-
- if (flag & NIX_RX_OFFLOAD_CHECKSUM_F)
- ol_flags |= nix_rx_olflags_get(lookup_mem, w1);
-
- if (flag & NIX_RX_OFFLOAD_VLAN_STRIP_F) {
- if (rx->vtag0_gone) {
- ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
- mbuf->vlan_tci = rx->vtag0_tci;
- }
- if (rx->vtag1_gone) {
- ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED;
- mbuf->vlan_tci_outer = rx->vtag1_tci;
- }
- }
-
- if (flag & NIX_RX_OFFLOAD_MARK_UPDATE_F)
- ol_flags = nix_update_match_id(rx->match_id, ol_flags, mbuf);
-
- if ((flag & NIX_RX_OFFLOAD_SECURITY_F) &&
- cq->cqe_type == NIX_XQE_TYPE_RX_IPSECH) {
- *(uint64_t *)(&mbuf->rearm_data) = val;
- ol_flags |= nix_rx_sec_mbuf_update(rx, cq, mbuf, lookup_mem);
- mbuf->ol_flags = ol_flags;
- return;
- }
-
- mbuf->ol_flags = ol_flags;
- *(uint64_t *)(&mbuf->rearm_data) = val;
- mbuf->pkt_len = len;
-
- if (flag & NIX_RX_MULTI_SEG_F) {
- nix_cqe_xtract_mseg(rx, mbuf, val);
- } else {
- mbuf->data_len = len;
- mbuf->next = NULL;
- }
-}
-
-#define CKSUM_F NIX_RX_OFFLOAD_CHECKSUM_F
-#define PTYPE_F NIX_RX_OFFLOAD_PTYPE_F
-#define RSS_F NIX_RX_OFFLOAD_RSS_F
-#define RX_VLAN_F NIX_RX_OFFLOAD_VLAN_STRIP_F
-#define MARK_F NIX_RX_OFFLOAD_MARK_UPDATE_F
-#define TS_F NIX_RX_OFFLOAD_TSTAMP_F
-#define RX_SEC_F NIX_RX_OFFLOAD_SECURITY_F
-
-/* [SEC] [TSTMP] [MARK] [VLAN] [CKSUM] [PTYPE] [RSS] */
-#define NIX_RX_FASTPATH_MODES \
-R(no_offload, 0, 0, 0, 0, 0, 0, 0, NIX_RX_OFFLOAD_NONE) \
-R(rss, 0, 0, 0, 0, 0, 0, 1, RSS_F) \
-R(ptype, 0, 0, 0, 0, 0, 1, 0, PTYPE_F) \
-R(ptype_rss, 0, 0, 0, 0, 0, 1, 1, PTYPE_F | RSS_F) \
-R(cksum, 0, 0, 0, 0, 1, 0, 0, CKSUM_F) \
-R(cksum_rss, 0, 0, 0, 0, 1, 0, 1, CKSUM_F | RSS_F) \
-R(cksum_ptype, 0, 0, 0, 0, 1, 1, 0, CKSUM_F | PTYPE_F) \
-R(cksum_ptype_rss, 0, 0, 0, 0, 1, 1, 1, CKSUM_F | PTYPE_F | RSS_F)\
-R(vlan, 0, 0, 0, 1, 0, 0, 0, RX_VLAN_F) \
-R(vlan_rss, 0, 0, 0, 1, 0, 0, 1, RX_VLAN_F | RSS_F) \
-R(vlan_ptype, 0, 0, 0, 1, 0, 1, 0, RX_VLAN_F | PTYPE_F) \
-R(vlan_ptype_rss, 0, 0, 0, 1, 0, 1, 1, \
- RX_VLAN_F | PTYPE_F | RSS_F) \
-R(vlan_cksum, 0, 0, 0, 1, 1, 0, 0, RX_VLAN_F | CKSUM_F) \
-R(vlan_cksum_rss, 0, 0, 0, 1, 1, 0, 1, \
- RX_VLAN_F | CKSUM_F | RSS_F) \
-R(vlan_cksum_ptype, 0, 0, 0, 1, 1, 1, 0, \
- RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(vlan_cksum_ptype_rss, 0, 0, 0, 1, 1, 1, 1, \
- RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(mark, 0, 0, 1, 0, 0, 0, 0, MARK_F) \
-R(mark_rss, 0, 0, 1, 0, 0, 0, 1, MARK_F | RSS_F) \
-R(mark_ptype, 0, 0, 1, 0, 0, 1, 0, MARK_F | PTYPE_F) \
-R(mark_ptype_rss, 0, 0, 1, 0, 0, 1, 1, MARK_F | PTYPE_F | RSS_F) \
-R(mark_cksum, 0, 0, 1, 0, 1, 0, 0, MARK_F | CKSUM_F) \
-R(mark_cksum_rss, 0, 0, 1, 0, 1, 0, 1, MARK_F | CKSUM_F | RSS_F) \
-R(mark_cksum_ptype, 0, 0, 1, 0, 1, 1, 0, \
- MARK_F | CKSUM_F | PTYPE_F) \
-R(mark_cksum_ptype_rss, 0, 0, 1, 0, 1, 1, 1, \
- MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(mark_vlan, 0, 0, 1, 1, 0, 0, 0, MARK_F | RX_VLAN_F) \
-R(mark_vlan_rss, 0, 0, 1, 1, 0, 0, 1, \
- MARK_F | RX_VLAN_F | RSS_F) \
-R(mark_vlan_ptype, 0, 0, 1, 1, 0, 1, 0, \
- MARK_F | RX_VLAN_F | PTYPE_F) \
-R(mark_vlan_ptype_rss, 0, 0, 1, 1, 0, 1, 1, \
- MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(mark_vlan_cksum, 0, 0, 1, 1, 1, 0, 0, \
- MARK_F | RX_VLAN_F | CKSUM_F) \
-R(mark_vlan_cksum_rss, 0, 0, 1, 1, 1, 0, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(mark_vlan_cksum_ptype, 0, 0, 1, 1, 1, 1, 0, \
- MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(mark_vlan_cksum_ptype_rss, 0, 0, 1, 1, 1, 1, 1, \
- MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts, 0, 1, 0, 0, 0, 0, 0, TS_F) \
-R(ts_rss, 0, 1, 0, 0, 0, 0, 1, TS_F | RSS_F) \
-R(ts_ptype, 0, 1, 0, 0, 0, 1, 0, TS_F | PTYPE_F) \
-R(ts_ptype_rss, 0, 1, 0, 0, 0, 1, 1, TS_F | PTYPE_F | RSS_F) \
-R(ts_cksum, 0, 1, 0, 0, 1, 0, 0, TS_F | CKSUM_F) \
-R(ts_cksum_rss, 0, 1, 0, 0, 1, 0, 1, TS_F | CKSUM_F | RSS_F) \
-R(ts_cksum_ptype, 0, 1, 0, 0, 1, 1, 0, TS_F | CKSUM_F | PTYPE_F) \
-R(ts_cksum_ptype_rss, 0, 1, 0, 0, 1, 1, 1, \
- TS_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_vlan, 0, 1, 0, 1, 0, 0, 0, TS_F | RX_VLAN_F) \
-R(ts_vlan_rss, 0, 1, 0, 1, 0, 0, 1, TS_F | RX_VLAN_F | RSS_F) \
-R(ts_vlan_ptype, 0, 1, 0, 1, 0, 1, 0, \
- TS_F | RX_VLAN_F | PTYPE_F) \
-R(ts_vlan_ptype_rss, 0, 1, 0, 1, 0, 1, 1, \
- TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(ts_vlan_cksum, 0, 1, 0, 1, 1, 0, 0, \
- TS_F | RX_VLAN_F | CKSUM_F) \
-R(ts_vlan_cksum_rss, 0, 1, 0, 1, 1, 0, 1, \
-		TS_F | RX_VLAN_F | CKSUM_F | RSS_F)			\
-R(ts_vlan_cksum_ptype, 0, 1, 0, 1, 1, 1, 0, \
- TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(ts_vlan_cksum_ptype_rss, 0, 1, 0, 1, 1, 1, 1, \
- TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_mark, 0, 1, 1, 0, 0, 0, 0, TS_F | MARK_F) \
-R(ts_mark_rss, 0, 1, 1, 0, 0, 0, 1, TS_F | MARK_F | RSS_F) \
-R(ts_mark_ptype, 0, 1, 1, 0, 0, 1, 0, TS_F | MARK_F | PTYPE_F) \
-R(ts_mark_ptype_rss, 0, 1, 1, 0, 0, 1, 1, \
- TS_F | MARK_F | PTYPE_F | RSS_F) \
-R(ts_mark_cksum, 0, 1, 1, 0, 1, 0, 0, TS_F | MARK_F | CKSUM_F) \
-R(ts_mark_cksum_rss, 0, 1, 1, 0, 1, 0, 1, \
- TS_F | MARK_F | CKSUM_F | RSS_F) \
-R(ts_mark_cksum_ptype, 0, 1, 1, 0, 1, 1, 0, \
- TS_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(ts_mark_cksum_ptype_rss, 0, 1, 1, 0, 1, 1, 1, \
- TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(ts_mark_vlan, 0, 1, 1, 1, 0, 0, 0, TS_F | MARK_F | RX_VLAN_F)\
-R(ts_mark_vlan_rss, 0, 1, 1, 1, 0, 0, 1, \
- TS_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(ts_mark_vlan_ptype, 0, 1, 1, 1, 0, 1, 0, \
- TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(ts_mark_vlan_ptype_rss, 0, 1, 1, 1, 0, 1, 1, \
- TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(ts_mark_vlan_cksum,		0, 1, 1, 1, 1, 0, 0,			\
-			TS_F | MARK_F | RX_VLAN_F | CKSUM_F)		\
-R(ts_mark_vlan_cksum_rss,	0, 1, 1, 1, 1, 0, 1,			\
-			TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F)	\
-R(ts_mark_vlan_cksum_ptype,	0, 1, 1, 1, 1, 1, 0,			\
- TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(ts_mark_vlan_cksum_ptype_rss, 0, 1, 1, 1, 1, 1, 1, \
- TS_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec, 1, 0, 0, 0, 0, 0, 0, RX_SEC_F) \
-R(sec_rss, 1, 0, 0, 0, 0, 0, 1, RX_SEC_F | RSS_F) \
-R(sec_ptype, 1, 0, 0, 0, 0, 1, 0, RX_SEC_F | PTYPE_F) \
-R(sec_ptype_rss, 1, 0, 0, 0, 0, 1, 1, \
- RX_SEC_F | PTYPE_F | RSS_F) \
-R(sec_cksum, 1, 0, 0, 0, 1, 0, 0, RX_SEC_F | CKSUM_F) \
-R(sec_cksum_rss, 1, 0, 0, 0, 1, 0, 1, \
- RX_SEC_F | CKSUM_F | RSS_F) \
-R(sec_cksum_ptype, 1, 0, 0, 0, 1, 1, 0, \
- RX_SEC_F | CKSUM_F | PTYPE_F) \
-R(sec_cksum_ptype_rss, 1, 0, 0, 0, 1, 1, 1, \
- RX_SEC_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_vlan, 1, 0, 0, 1, 0, 0, 0, RX_SEC_F | RX_VLAN_F) \
-R(sec_vlan_rss, 1, 0, 0, 1, 0, 0, 1, \
- RX_SEC_F | RX_VLAN_F | RSS_F) \
-R(sec_vlan_ptype, 1, 0, 0, 1, 0, 1, 0, \
- RX_SEC_F | RX_VLAN_F | PTYPE_F) \
-R(sec_vlan_ptype_rss, 1, 0, 0, 1, 0, 1, 1, \
- RX_SEC_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_vlan_cksum, 1, 0, 0, 1, 1, 0, 0, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F) \
-R(sec_vlan_cksum_rss, 1, 0, 0, 1, 1, 0, 1, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_vlan_cksum_ptype, 1, 0, 0, 1, 1, 1, 0, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_vlan_cksum_ptype_rss, 1, 0, 0, 1, 1, 1, 1, \
- RX_SEC_F | RX_VLAN_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_mark, 1, 0, 1, 0, 0, 0, 0, RX_SEC_F | MARK_F) \
-R(sec_mark_rss, 1, 0, 1, 0, 0, 0, 1, RX_SEC_F | MARK_F | RSS_F)\
-R(sec_mark_ptype, 1, 0, 1, 0, 0, 1, 0, \
- RX_SEC_F | MARK_F | PTYPE_F) \
-R(sec_mark_ptype_rss, 1, 0, 1, 0, 0, 1, 1, \
- RX_SEC_F | MARK_F | PTYPE_F | RSS_F) \
-R(sec_mark_cksum, 1, 0, 1, 0, 1, 0, 0, \
- RX_SEC_F | MARK_F | CKSUM_F) \
-R(sec_mark_cksum_rss, 1, 0, 1, 0, 1, 0, 1, \
- RX_SEC_F | MARK_F | CKSUM_F | RSS_F) \
-R(sec_mark_cksum_ptype, 1, 0, 1, 0, 1, 1, 0, \
- RX_SEC_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(sec_mark_cksum_ptype_rss, 1, 0, 1, 0, 1, 1, 1, \
- RX_SEC_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_mark_vlan,		1, 0, 1, 1, 0, 0, 0,			\
-			RX_SEC_F | MARK_F | RX_VLAN_F)			\
-R(sec_mark_vlan_rss, 1, 0, 1, 1, 0, 0, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | RSS_F) \
-R(sec_mark_vlan_ptype, 1, 0, 1, 1, 0, 1, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(sec_mark_vlan_ptype_rss, 1, 0, 1, 1, 0, 1, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_mark_vlan_cksum, 1, 0, 1, 1, 1, 0, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F) \
-R(sec_mark_vlan_cksum_rss, 1, 0, 1, 1, 1, 0, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_mark_vlan_cksum_ptype, 1, 0, 1, 1, 1, 1, 0, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_mark_vlan_cksum_ptype_rss, \
- 1, 0, 1, 1, 1, 1, 1, \
- RX_SEC_F | MARK_F | RX_VLAN_F | CKSUM_F | PTYPE_F | \
- RSS_F) \
-R(sec_ts, 1, 1, 0, 0, 0, 0, 0, RX_SEC_F | TS_F) \
-R(sec_ts_rss, 1, 1, 0, 0, 0, 0, 1, RX_SEC_F | TS_F | RSS_F) \
-R(sec_ts_ptype, 1, 1, 0, 0, 0, 1, 0, RX_SEC_F | TS_F | PTYPE_F)\
-R(sec_ts_ptype_rss, 1, 1, 0, 0, 0, 1, 1, \
- RX_SEC_F | TS_F | PTYPE_F | RSS_F) \
-R(sec_ts_cksum, 1, 1, 0, 0, 1, 0, 0, RX_SEC_F | TS_F | CKSUM_F)\
-R(sec_ts_cksum_rss, 1, 1, 0, 0, 1, 0, 1, \
- RX_SEC_F | TS_F | CKSUM_F | RSS_F) \
-R(sec_ts_cksum_ptype, 1, 1, 0, 0, 1, 1, 0, \
-			RX_SEC_F | TS_F | CKSUM_F | PTYPE_F)		\
-R(sec_ts_cksum_ptype_rss, 1, 1, 0, 0, 1, 1, 1, \
- RX_SEC_F | TS_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_ts_vlan, 1, 1, 0, 1, 0, 0, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F) \
-R(sec_ts_vlan_rss, 1, 1, 0, 1, 0, 0, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | RSS_F) \
-R(sec_ts_vlan_ptype, 1, 1, 0, 1, 0, 1, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | PTYPE_F) \
-R(sec_ts_vlan_ptype_rss, 1, 1, 0, 1, 0, 1, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | PTYPE_F | RSS_F) \
-R(sec_ts_vlan_cksum, 1, 1, 0, 1, 1, 0, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F) \
-R(sec_ts_vlan_cksum_rss, 1, 1, 0, 1, 1, 0, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | RSS_F) \
-R(sec_ts_vlan_cksum_ptype, 1, 1, 0, 1, 1, 1, 0, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_vlan_cksum_ptype_rss, 1, 1, 0, 1, 1, 1, 1, \
- RX_SEC_F | TS_F | RX_VLAN_F | CKSUM_F | PTYPE_F | \
- RSS_F) \
-R(sec_ts_mark, 1, 1, 1, 0, 0, 0, 0, RX_SEC_F | TS_F | MARK_F) \
-R(sec_ts_mark_rss, 1, 1, 1, 0, 0, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RSS_F) \
-R(sec_ts_mark_ptype, 1, 1, 1, 0, 0, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | PTYPE_F) \
-R(sec_ts_mark_ptype_rss, 1, 1, 1, 0, 0, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | PTYPE_F | RSS_F) \
-R(sec_ts_mark_cksum, 1, 1, 1, 0, 1, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F) \
-R(sec_ts_mark_cksum_rss, 1, 1, 1, 0, 1, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | RSS_F) \
-R(sec_ts_mark_cksum_ptype, 1, 1, 1, 0, 1, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F) \
-R(sec_ts_mark_cksum_ptype_rss, 1, 1, 1, 0, 1, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | CKSUM_F | PTYPE_F | RSS_F) \
-R(sec_ts_mark_vlan, 1, 1, 1, 1, 0, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F) \
-R(sec_ts_mark_vlan_rss, 1, 1, 1, 1, 0, 0, 1, \
-			RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | RSS_F)	\
-R(sec_ts_mark_vlan_ptype, 1, 1, 1, 1, 0, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | PTYPE_F) \
-R(sec_ts_mark_vlan_ptype_rss, 1, 1, 1, 1, 0, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | PTYPE_F | RSS_F)\
-R(sec_ts_mark_vlan_cksum, 1, 1, 1, 1, 1, 0, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F) \
-R(sec_ts_mark_vlan_cksum_rss, 1, 1, 1, 1, 1, 0, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | RSS_F)\
-R(sec_ts_mark_vlan_cksum_ptype, 1, 1, 1, 1, 1, 1, 0, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | \
- PTYPE_F) \
-R(sec_ts_mark_vlan_cksum_ptype_rss, \
- 1, 1, 1, 1, 1, 1, 1, \
- RX_SEC_F | TS_F | MARK_F | RX_VLAN_F | CKSUM_F | \
- PTYPE_F | RSS_F)
-#endif /* __OTX2_RX_H__ */
diff --git a/drivers/net/octeontx2/otx2_stats.c b/drivers/net/octeontx2/otx2_stats.c
deleted file mode 100644
index 3adf21608c..0000000000
--- a/drivers/net/octeontx2/otx2_stats.c
+++ /dev/null
@@ -1,397 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <inttypes.h>
-
-#include "otx2_ethdev.h"
-
-struct otx2_nix_xstats_name {
- char name[RTE_ETH_XSTATS_NAME_SIZE];
- uint32_t offset;
-};
-
-static const struct otx2_nix_xstats_name nix_tx_xstats[] = {
- {"tx_ucast", NIX_STAT_LF_TX_TX_UCAST},
- {"tx_bcast", NIX_STAT_LF_TX_TX_BCAST},
- {"tx_mcast", NIX_STAT_LF_TX_TX_MCAST},
- {"tx_drop", NIX_STAT_LF_TX_TX_DROP},
- {"tx_octs", NIX_STAT_LF_TX_TX_OCTS},
-};
-
-static const struct otx2_nix_xstats_name nix_rx_xstats[] = {
- {"rx_octs", NIX_STAT_LF_RX_RX_OCTS},
- {"rx_ucast", NIX_STAT_LF_RX_RX_UCAST},
- {"rx_bcast", NIX_STAT_LF_RX_RX_BCAST},
- {"rx_mcast", NIX_STAT_LF_RX_RX_MCAST},
- {"rx_drop", NIX_STAT_LF_RX_RX_DROP},
- {"rx_drop_octs", NIX_STAT_LF_RX_RX_DROP_OCTS},
- {"rx_fcs", NIX_STAT_LF_RX_RX_FCS},
- {"rx_err", NIX_STAT_LF_RX_RX_ERR},
- {"rx_drp_bcast", NIX_STAT_LF_RX_RX_DRP_BCAST},
- {"rx_drp_mcast", NIX_STAT_LF_RX_RX_DRP_MCAST},
- {"rx_drp_l3bcast", NIX_STAT_LF_RX_RX_DRP_L3BCAST},
- {"rx_drp_l3mcast", NIX_STAT_LF_RX_RX_DRP_L3MCAST},
-};
-
-static const struct otx2_nix_xstats_name nix_q_xstats[] = {
- {"rq_op_re_pkts", NIX_LF_RQ_OP_RE_PKTS},
-};
-
-#define OTX2_NIX_NUM_RX_XSTATS RTE_DIM(nix_rx_xstats)
-#define OTX2_NIX_NUM_TX_XSTATS RTE_DIM(nix_tx_xstats)
-#define OTX2_NIX_NUM_QUEUE_XSTATS RTE_DIM(nix_q_xstats)
-
-#define OTX2_NIX_NUM_XSTATS_REG (OTX2_NIX_NUM_RX_XSTATS + \
- OTX2_NIX_NUM_TX_XSTATS + OTX2_NIX_NUM_QUEUE_XSTATS)
-
-int
-otx2_nix_dev_stats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_stats *stats)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t reg, val;
- uint32_t qidx, i;
- int64_t *addr;
-
- stats->opackets = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_UCAST));
- stats->opackets += otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_MCAST));
- stats->opackets += otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_BCAST));
- stats->oerrors = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_DROP));
- stats->obytes = otx2_read64(dev->base +
- NIX_LF_TX_STATX(NIX_STAT_LF_TX_TX_OCTS));
-
- stats->ipackets = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_UCAST));
- stats->ipackets += otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_MCAST));
- stats->ipackets += otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_BCAST));
- stats->imissed = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_DROP));
- stats->ibytes = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_OCTS));
- stats->ierrors = otx2_read64(dev->base +
- NIX_LF_RX_STATX(NIX_STAT_LF_RX_RX_ERR));
-
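-	/* Bit 31 of a txmap/rxmap entry marks a mapping installed via
-	 * otx2_nix_queue_stats_mapping(); the low bits hold the queue id.
-	 */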
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
- if (dev->txmap[i] & (1U << 31)) {
- qidx = dev->txmap[i] & 0xFFFF;
- reg = (((uint64_t)qidx) << 32);
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_opackets[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_obytes[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_DROP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_errors[i] = val;
- }
- }
-
- for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
- if (dev->rxmap[i] & (1U << 31)) {
- qidx = dev->rxmap[i] & 0xFFFF;
- reg = (((uint64_t)qidx) << 32);
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_ipackets[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_ibytes[i] = val;
-
- addr = (int64_t *)(dev->base + NIX_LF_RQ_OP_DROP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->q_errors[i] += val;
- }
- }
-
- return 0;
-}
-
-int
-otx2_nix_dev_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
-
- if (otx2_mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
- return -ENOMEM;
-
- return otx2_mbox_process(mbox);
-}
-
-int
-otx2_nix_queue_stats_mapping(struct rte_eth_dev *eth_dev, uint16_t queue_id,
- uint8_t stat_idx, uint8_t is_rx)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (is_rx)
- dev->rxmap[stat_idx] = ((1U << 31) | queue_id);
- else
- dev->txmap[stat_idx] = ((1U << 31) | queue_id);
-
- return 0;
-}
-
-int
-otx2_nix_xstats_get(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat *xstats,
- unsigned int n)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- unsigned int i, count = 0;
- uint64_t reg, val;
-
- if (n < OTX2_NIX_NUM_XSTATS_REG)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (xstats == NULL)
- return 0;
-
- for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
- xstats[count].value = otx2_read64(dev->base +
- NIX_LF_TX_STATX(nix_tx_xstats[i].offset));
- xstats[count].id = count;
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
- xstats[count].value = otx2_read64(dev->base +
- NIX_LF_RX_STATX(nix_rx_xstats[i].offset));
- xstats[count].id = count;
- count++;
- }
-
-	/* Accumulate per-RQ re_pkts into a single queue xstat */
-	xstats[count].value = 0;
-	for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- reg = (((uint64_t)i) << 32);
- val = otx2_atomic64_add_nosync(reg, (int64_t *)(dev->base +
- nix_q_xstats[0].offset));
- if (val & OP_ERR)
- val = 0;
- xstats[count].value += val;
- }
- xstats[count].id = count;
- count++;
-
- return count;
-}
-
-int
-otx2_nix_xstats_get_names(struct rte_eth_dev *eth_dev,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit)
-{
- unsigned int i, count = 0;
-
- RTE_SET_USED(eth_dev);
-
- if (limit < OTX2_NIX_NUM_XSTATS_REG && xstats_names != NULL)
- return -ENOMEM;
-
- if (xstats_names) {
- for (i = 0; i < OTX2_NIX_NUM_TX_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_tx_xstats[i].name);
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_RX_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_rx_xstats[i].name);
- count++;
- }
-
- for (i = 0; i < OTX2_NIX_NUM_QUEUE_XSTATS; i++) {
- snprintf(xstats_names[count].name,
- sizeof(xstats_names[count].name),
- "%s", nix_q_xstats[i].name);
- count++;
- }
- }
-
- return OTX2_NIX_NUM_XSTATS_REG;
-}
-
-int
-otx2_nix_xstats_get_names_by_id(struct rte_eth_dev *eth_dev,
- const uint64_t *ids,
- struct rte_eth_xstat_name *xstats_names,
- unsigned int limit)
-{
- struct rte_eth_xstat_name xstats_names_copy[OTX2_NIX_NUM_XSTATS_REG];
- uint16_t i;
-
- if (limit < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (limit > OTX2_NIX_NUM_XSTATS_REG)
- return -EINVAL;
-
- if (xstats_names == NULL)
- return -ENOMEM;
-
- otx2_nix_xstats_get_names(eth_dev, xstats_names_copy, limit);
-
- for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
- if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
- otx2_err("Invalid id value");
- return -EINVAL;
- }
- strncpy(xstats_names[i].name, xstats_names_copy[ids[i]].name,
- sizeof(xstats_names[i].name));
- }
-
- return limit;
-}
-
-int
-otx2_nix_xstats_get_by_id(struct rte_eth_dev *eth_dev, const uint64_t *ids,
- uint64_t *values, unsigned int n)
-{
- struct rte_eth_xstat xstats[OTX2_NIX_NUM_XSTATS_REG];
- uint16_t i;
-
- if (n < OTX2_NIX_NUM_XSTATS_REG && ids == NULL)
- return OTX2_NIX_NUM_XSTATS_REG;
-
- if (n > OTX2_NIX_NUM_XSTATS_REG)
- return -EINVAL;
-
- if (values == NULL)
- return -ENOMEM;
-
- otx2_nix_xstats_get(eth_dev, xstats, n);
-
- for (i = 0; i < OTX2_NIX_NUM_XSTATS_REG; i++) {
- if (ids[i] >= OTX2_NIX_NUM_XSTATS_REG) {
- otx2_err("Invalid id value");
- return -EINVAL;
- }
- values[i] = xstats[ids[i]].value;
- }
-
- return n;
-}
-
-static int
-nix_queue_stats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_aq_enq_rsp *rsp;
- struct nix_aq_enq_req *aq;
- uint32_t i;
- int rc;
-
- for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_READ;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read rq context");
- return rc;
- }
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_RQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
- otx2_mbox_memcpy(&aq->rq, &rsp->rq, sizeof(rsp->rq));
- otx2_mbox_memset(&aq->rq_mask, 0, sizeof(aq->rq_mask));
- aq->rq.octs = 0;
- aq->rq.pkts = 0;
- aq->rq.drop_octs = 0;
- aq->rq.drop_pkts = 0;
- aq->rq.re_pkts = 0;
-
- aq->rq_mask.octs = ~(aq->rq_mask.octs);
- aq->rq_mask.pkts = ~(aq->rq_mask.pkts);
- aq->rq_mask.drop_octs = ~(aq->rq_mask.drop_octs);
- aq->rq_mask.drop_pkts = ~(aq->rq_mask.drop_pkts);
- aq->rq_mask.re_pkts = ~(aq->rq_mask.re_pkts);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to write rq context");
- return rc;
- }
- }
-
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_READ;
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read sq context");
- return rc;
- }
- aq = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- aq->qidx = i;
- aq->ctype = NIX_AQ_CTYPE_SQ;
- aq->op = NIX_AQ_INSTOP_WRITE;
- otx2_mbox_memcpy(&aq->sq, &rsp->sq, sizeof(rsp->sq));
- otx2_mbox_memset(&aq->sq_mask, 0, sizeof(aq->sq_mask));
- aq->sq.octs = 0;
- aq->sq.pkts = 0;
- aq->sq.drop_octs = 0;
- aq->sq.drop_pkts = 0;
-
- aq->sq_mask.octs = ~(aq->sq_mask.octs);
- aq->sq_mask.pkts = ~(aq->sq_mask.pkts);
- aq->sq_mask.drop_octs = ~(aq->sq_mask.drop_octs);
- aq->sq_mask.drop_pkts = ~(aq->sq_mask.drop_pkts);
- rc = otx2_mbox_process(mbox);
- if (rc) {
- otx2_err("Failed to write sq context");
- return rc;
- }
- }
-
- return 0;
-}
-
-int
-otx2_nix_xstats_reset(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int ret;
-
- if (otx2_mbox_alloc_msg_nix_stats_rst(mbox) == NULL)
- return -ENOMEM;
-
- ret = otx2_mbox_process(mbox);
- if (ret != 0)
- return ret;
-
- /* Reset queue stats */
- return nix_queue_stats_reset(eth_dev);
-}
diff --git a/drivers/net/octeontx2/otx2_tm.c b/drivers/net/octeontx2/otx2_tm.c
deleted file mode 100644
index 6aff1f9587..0000000000
--- a/drivers/net/octeontx2/otx2_tm.c
+++ /dev/null
@@ -1,3317 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_malloc.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_tm.h"
-
-/* Use last LVL_CNT nodes as default nodes */
-#define NIX_DEFAULT_NODE_ID_START (RTE_TM_NODE_ID_NULL - NIX_TXSCH_LVL_CNT)
-
-enum otx2_tm_node_level {
- OTX2_TM_LVL_ROOT = 0,
- OTX2_TM_LVL_SCH1,
- OTX2_TM_LVL_SCH2,
- OTX2_TM_LVL_SCH3,
- OTX2_TM_LVL_SCH4,
- OTX2_TM_LVL_QUEUE,
- OTX2_TM_LVL_MAX,
-};
-
-static inline uint64_t
-shaper2regval(struct shaper_params *shaper)
-{
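-	/* Pack the shaper parameters into the PIR/CIR register layout:
-	 * burst exponent at bit 37, burst mantissa at bit 29, div_exp at
-	 * bit 13, rate exponent at bit 9 and rate mantissa at bit 1.
-	 * Bit 0 (enable) is set by the caller.
-	 */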
- return (shaper->burst_exponent << 37) | (shaper->burst_mantissa << 29) |
- (shaper->div_exp << 13) | (shaper->exponent << 9) |
- (shaper->mantissa << 1);
-}
-
-int
-otx2_nix_get_link(struct otx2_eth_dev *dev)
-{
- int link = 13 /* SDP */;
- uint16_t lmac_chan;
- uint16_t map;
-
- lmac_chan = dev->tx_chan_base;
-
- /* CGX lmac link */
- if (lmac_chan >= 0x800) {
- map = lmac_chan & 0x7FF;
- link = 4 * ((map >> 8) & 0xF) + ((map >> 4) & 0xF);
- } else if (lmac_chan < 0x700) {
- /* LBK channel */
- link = 12;
- }
-
- return link;
-}
-
-static uint8_t
-nix_get_relchan(struct otx2_eth_dev *dev)
-{
- return dev->tx_chan_base & 0xff;
-}
-
-static bool
-nix_tm_have_tl1_access(struct otx2_eth_dev *dev)
-{
- bool is_lbk = otx2_dev_is_lbk(dev);
- return otx2_dev_is_pf(dev) && !otx2_dev_is_Ax(dev) && !is_lbk;
-}
-
-static bool
-nix_tm_is_leaf(struct otx2_eth_dev *dev, int lvl)
-{
- if (nix_tm_have_tl1_access(dev))
- return (lvl == OTX2_TM_LVL_QUEUE);
-
- return (lvl == OTX2_TM_LVL_SCH4);
-}
-
-static int
-find_prio_anchor(struct otx2_eth_dev *dev, uint32_t node_id)
-{
- struct otx2_nix_tm_node *child_node;
-
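-	/* HW ids of static-priority siblings are allocated contiguously,
-	 * offset by priority, so the anchor is a child's hw_id minus its
-	 * priority; round-robin children (prio == rr_prio) are skipped.
-	 */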
- TAILQ_FOREACH(child_node, &dev->node_list, node) {
- if (!child_node->parent)
- continue;
-		if (child_node->parent->id != node_id)
- continue;
- if (child_node->priority == child_node->parent->rr_prio)
- continue;
- return child_node->hw_id - child_node->priority;
- }
- return 0;
-}
-
-
-static struct otx2_nix_tm_shaper_profile *
-nix_tm_shaper_profile_search(struct otx2_eth_dev *dev, uint32_t shaper_id)
-{
- struct otx2_nix_tm_shaper_profile *tm_shaper_profile;
-
- TAILQ_FOREACH(tm_shaper_profile, &dev->shaper_profile_list, shaper) {
- if (tm_shaper_profile->shaper_profile_id == shaper_id)
- return tm_shaper_profile;
- }
- return NULL;
-}
-
-static inline uint64_t
-shaper_rate_to_nix(uint64_t value, uint64_t *exponent_p,
- uint64_t *mantissa_p, uint64_t *div_exp_p)
-{
- uint64_t div_exp, exponent, mantissa;
-
- /* Boundary checks */
- if (value < MIN_SHAPER_RATE ||
- value > MAX_SHAPER_RATE)
- return 0;
-
- if (value <= SHAPER_RATE(0, 0, 0)) {
- /* Calculate rate div_exp and mantissa using
- * the following formula:
- *
-		 * value = (2E6 * (256 + mantissa)) /
-		 *         ((1 << div_exp) * 256)
- */
- div_exp = 0;
- exponent = 0;
- mantissa = MAX_RATE_MANTISSA;
-
- while (value < (NIX_SHAPER_RATE_CONST / (1 << div_exp)))
- div_exp += 1;
-
- while (value <
- ((NIX_SHAPER_RATE_CONST * (256 + mantissa)) /
- ((1 << div_exp) * 256)))
- mantissa -= 1;
- } else {
- /* Calculate rate exponent and mantissa using
- * the following formula:
- *
- * value = (2E6 * ((256 + mantissa) << exponent)) / 256
- *
- */
- div_exp = 0;
- exponent = MAX_RATE_EXPONENT;
- mantissa = MAX_RATE_MANTISSA;
-
- while (value < (NIX_SHAPER_RATE_CONST * (1 << exponent)))
- exponent -= 1;
-
- while (value < ((NIX_SHAPER_RATE_CONST *
- ((256 + mantissa) << exponent)) / 256))
- mantissa -= 1;
- }
-
- if (div_exp > MAX_RATE_DIV_EXP ||
- exponent > MAX_RATE_EXPONENT || mantissa > MAX_RATE_MANTISSA)
- return 0;
-
- if (div_exp_p)
- *div_exp_p = div_exp;
- if (exponent_p)
- *exponent_p = exponent;
- if (mantissa_p)
- *mantissa_p = mantissa;
-
- /* Calculate real rate value */
- return SHAPER_RATE(exponent, mantissa, div_exp);
-}
-
-static inline uint64_t
-shaper_burst_to_nix(uint64_t value, uint64_t *exponent_p,
- uint64_t *mantissa_p)
-{
- uint64_t exponent, mantissa;
-
- if (value < MIN_SHAPER_BURST || value > MAX_SHAPER_BURST)
- return 0;
-
- /* Calculate burst exponent and mantissa using
- * the following formula:
- *
-	 * value = ((256 + mantissa) << (exponent + 1)) / 256
-	 */
- exponent = MAX_BURST_EXPONENT;
- mantissa = MAX_BURST_MANTISSA;
-
- while (value < (1ull << (exponent + 1)))
- exponent -= 1;
-
- while (value < ((256 + mantissa) << (exponent + 1)) / 256)
- mantissa -= 1;
-
- if (exponent > MAX_BURST_EXPONENT || mantissa > MAX_BURST_MANTISSA)
- return 0;
-
- if (exponent_p)
- *exponent_p = exponent;
- if (mantissa_p)
- *mantissa_p = mantissa;
-
- return SHAPER_BURST(exponent, mantissa);
-}
-
-static void
-shaper_config_to_nix(struct otx2_nix_tm_shaper_profile *profile,
- struct shaper_params *cir,
- struct shaper_params *pir)
-{
-	struct rte_tm_shaper_params *param;
-
-	if (!profile)
-		return;
-
-	param = &profile->params;
-
- /* Calculate CIR exponent and mantissa */
- if (param->committed.rate)
- cir->rate = shaper_rate_to_nix(param->committed.rate,
- &cir->exponent,
- &cir->mantissa,
- &cir->div_exp);
-
- /* Calculate PIR exponent and mantissa */
- if (param->peak.rate)
- pir->rate = shaper_rate_to_nix(param->peak.rate,
- &pir->exponent,
- &pir->mantissa,
- &pir->div_exp);
-
- /* Calculate CIR burst exponent and mantissa */
- if (param->committed.size)
- cir->burst = shaper_burst_to_nix(param->committed.size,
- &cir->burst_exponent,
- &cir->burst_mantissa);
-
- /* Calculate PIR burst exponent and mantissa */
- if (param->peak.size)
- pir->burst = shaper_burst_to_nix(param->peak.size,
- &pir->burst_exponent,
- &pir->burst_mantissa);
-}
-
-static void
-shaper_default_red_algo(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- struct otx2_nix_tm_shaper_profile *profile)
-{
- struct shaper_params cir, pir;
-
- /* C0 doesn't support STALL when both PIR & CIR are enabled */
- if (profile && otx2_dev_is_96xx_Cx(dev)) {
- memset(&cir, 0, sizeof(cir));
- memset(&pir, 0, sizeof(pir));
- shaper_config_to_nix(profile, &cir, &pir);
-
- if (pir.rate && cir.rate) {
- tm_node->red_algo = NIX_REDALG_DISCARD;
- tm_node->flags |= NIX_TM_NODE_RED_DISCARD;
- return;
- }
- }
-
- tm_node->red_algo = NIX_REDALG_STD;
- tm_node->flags &= ~NIX_TM_NODE_RED_DISCARD;
-}
-
-static int
-populate_tm_tl1_default(struct otx2_eth_dev *dev, uint32_t schq)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txschq_config *req;
-
- /*
- * Default config for TL1.
- * For VF this is always ignored.
- */
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_TL1;
-
- /* Set DWRR quantum */
- req->reg[0] = NIX_AF_TL1X_SCHEDULE(schq);
- req->regval[0] = TXSCH_TL1_DFLT_RR_QTM;
- req->num_regs++;
-
- req->reg[1] = NIX_AF_TL1X_TOPOLOGY(schq);
- req->regval[1] = (TXSCH_TL1_DFLT_RR_PRIO << 1);
- req->num_regs++;
-
- req->reg[2] = NIX_AF_TL1X_CIR(schq);
- req->regval[2] = 0;
- req->num_regs++;
-
- return otx2_mbox_process(mbox);
-}
-
-static uint8_t
-prepare_tm_sched_reg(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- uint64_t strict_prio = tm_node->priority;
- uint32_t hw_lvl = tm_node->hw_lvl;
- uint32_t schq = tm_node->hw_id;
- uint64_t rr_quantum;
- uint8_t k = 0;
-
- rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- /* For children to root, strict prio is default if either
- * device root is TL2 or TL1 Static Priority is disabled.
- */
- if (hw_lvl == NIX_TXSCH_LVL_TL2 &&
- (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
- dev->tm_flags & NIX_TM_TL1_NO_SP))
- strict_prio = TXSCH_TL1_DFLT_RR_PRIO;
-
- otx2_tm_dbg("Schedule config node %s(%u) lvl %u id %u, "
- "prio 0x%" PRIx64 ", rr_quantum 0x%" PRIx64 " (%p)",
- nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
- tm_node->id, strict_prio, rr_quantum, tm_node);
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- reg[k] = NIX_AF_MDQX_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_SCHEDULE(schq);
- regval[k] = (strict_prio << 24) | rr_quantum;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_SCHEDULE(schq);
- regval[k] = rr_quantum;
- k++;
-
- break;
- }
-
- return k;
-}
-
-static uint8_t
-prepare_tm_shaper_reg(struct otx2_nix_tm_node *tm_node,
- struct otx2_nix_tm_shaper_profile *profile,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- struct shaper_params cir, pir;
- uint32_t schq = tm_node->hw_id;
- uint64_t adjust = 0;
- uint8_t k = 0;
-
- memset(&cir, 0, sizeof(cir));
- memset(&pir, 0, sizeof(pir));
- shaper_config_to_nix(profile, &cir, &pir);
-
- /* Packet length adjust */
- if (tm_node->pkt_mode)
- adjust = 1;
- else if (profile)
- adjust = profile->params.pkt_length_adjust & 0x1FF;
-
- otx2_tm_dbg("Shaper config node %s(%u) lvl %u id %u, pir %" PRIu64
- "(%" PRIu64 "B), cir %" PRIu64 "(%" PRIu64 "B)"
- "adjust 0x%" PRIx64 "(pktmode %u) (%p)",
- nix_hwlvl2str(tm_node->hw_lvl), schq, tm_node->lvl,
- tm_node->id, pir.rate, pir.burst, cir.rate, cir.burst,
- adjust, tm_node->pkt_mode, tm_node);
-
- switch (tm_node->hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_MDQX_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_MDQX_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED ALG */
- reg[k] = NIX_AF_MDQX_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- case NIX_TXSCH_LVL_TL4:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL4X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL4X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL4X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- case NIX_TXSCH_LVL_TL3:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL3X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL3X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL3X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL2:
- /* Configure PIR, CIR */
- reg[k] = NIX_AF_TL2X_PIR(schq);
- regval[k] = (pir.rate && pir.burst) ?
- (shaper2regval(&pir) | 1) : 0;
- k++;
-
- reg[k] = NIX_AF_TL2X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure RED algo */
- reg[k] = NIX_AF_TL2X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->red_algo << 9 |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL1:
- /* Configure CIR */
- reg[k] = NIX_AF_TL1X_CIR(schq);
- regval[k] = (cir.rate && cir.burst) ?
- (shaper2regval(&cir) | 1) : 0;
- k++;
-
- /* Configure length disable and adjust */
- reg[k] = NIX_AF_TL1X_SHAPE(schq);
- regval[k] = (adjust |
- (uint64_t)tm_node->pkt_mode << 24);
- k++;
- break;
- }
-
- return k;
-}
-
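-/* Prepare a single SW_XOFF register write for the node's hardware level,
- * with regval carrying the xoff enable flag. Returns the count of
- * registers filled (0 for an unknown level).
- */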
-static uint8_t
-prepare_tm_sw_xoff(struct otx2_nix_tm_node *tm_node, bool enable,
- volatile uint64_t *reg, volatile uint64_t *regval)
-{
- uint32_t hw_lvl = tm_node->hw_lvl;
- uint32_t schq = tm_node->hw_id;
- uint8_t k = 0;
-
- otx2_tm_dbg("sw xoff config node %s(%u) lvl %u id %u, enable %u (%p)",
- nix_hwlvl2str(hw_lvl), schq, tm_node->lvl,
- tm_node->id, enable, tm_node);
-
- regval[k] = enable;
-
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_MDQ:
- reg[k] = NIX_AF_MDQX_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL4:
- reg[k] = NIX_AF_TL4X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL3:
- reg[k] = NIX_AF_TL3X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL2:
- reg[k] = NIX_AF_TL2X_SW_XOFF(schq);
- k++;
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_SW_XOFF(schq);
- k++;
- break;
- default:
- break;
- }
-
- return k;
-}
-
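-/* Gather the topology/parent, scheduling and shaping register writes for
- * one TM node and send them to the AF in a single NIX_TXSCHQ_CONFIG
- * mailbox request.
- */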
-static int
-populate_tm_reg(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- uint64_t regval_mask[MAX_REGS_PER_MBOX_MSG];
- uint64_t regval[MAX_REGS_PER_MBOX_MSG];
- uint64_t reg[MAX_REGS_PER_MBOX_MSG];
- struct otx2_mbox *mbox = dev->mbox;
- uint64_t parent = 0, child = 0;
- uint32_t hw_lvl, rr_prio, schq;
- struct nix_txschq_config *req;
- int rc = -EFAULT;
- uint8_t k = 0;
-
- memset(regval_mask, 0, sizeof(regval_mask));
- profile = nix_tm_shaper_profile_search(dev,
- tm_node->params.shaper_profile_id);
- rr_prio = tm_node->rr_prio;
- hw_lvl = tm_node->hw_lvl;
- schq = tm_node->hw_id;
-
- /* Root node will not have a parent node */
- if (hw_lvl == dev->otx2_tm_root_lvl)
- parent = tm_node->parent_hw_id;
- else
- parent = tm_node->parent->hw_id;
-
-	/* Trigger TL1 default config when the root level is TL2 (no TL1 access) */
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
- hw_lvl == dev->otx2_tm_root_lvl) {
- rc = populate_tm_tl1_default(dev, parent);
- if (rc)
- goto error;
- }
-
- if (hw_lvl != NIX_TXSCH_LVL_SMQ)
- child = find_prio_anchor(dev, tm_node->id);
-
- /* Override default rr_prio when TL1
- * Static Priority is disabled
- */
- if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
- dev->tm_flags & NIX_TM_TL1_NO_SP) {
- rr_prio = TXSCH_TL1_DFLT_RR_PRIO;
- child = 0;
- }
-
- otx2_tm_dbg("Topology config node %s(%u)->%s(%"PRIu64") lvl %u, id %u"
- " prio_anchor %"PRIu64" rr_prio %u (%p)",
- nix_hwlvl2str(hw_lvl), schq, nix_hwlvl2str(hw_lvl + 1),
- parent, tm_node->lvl, tm_node->id, child, rr_prio, tm_node);
-
- /* Prepare Topology and Link config */
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_SMQ:
-
- /* Set xoff which will be cleared later and minimum length
- * which will be used for zero padding if packet length is
- * smaller
- */
- reg[k] = NIX_AF_SMQX_CFG(schq);
- regval[k] = BIT_ULL(50) | ((uint64_t)NIX_MAX_VTAG_INS << 36) |
- NIX_MIN_HW_FRS;
- regval_mask[k] = ~(BIT_ULL(50) | (0x7ULL << 36) | 0x7f);
- k++;
-
- /* Parent and schedule conf */
- reg[k] = NIX_AF_MDQX_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- break;
- case NIX_TXSCH_LVL_TL4:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL4X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL4X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Configure TL4 to send to SDP channel instead of CGX/LBK */
- if (otx2_dev_is_sdp(dev)) {
- reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
- regval[k] = BIT_ULL(12);
- k++;
- }
- break;
- case NIX_TXSCH_LVL_TL3:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL3X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL3X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Link configuration */
- if (!otx2_dev_is_sdp(dev) &&
- dev->link_cfg_lvl == NIX_TXSCH_LVL_TL3) {
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
- otx2_nix_get_link(dev));
- regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
- k++;
- }
-
- break;
- case NIX_TXSCH_LVL_TL2:
- /* Parent and schedule conf */
- reg[k] = NIX_AF_TL2X_PARENT(schq);
- regval[k] = parent << 16;
- k++;
-
- reg[k] = NIX_AF_TL2X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1);
- k++;
-
- /* Link configuration */
- if (!otx2_dev_is_sdp(dev) &&
- dev->link_cfg_lvl == NIX_TXSCH_LVL_TL2) {
- reg[k] = NIX_AF_TL3_TL2X_LINKX_CFG(schq,
- otx2_nix_get_link(dev));
- regval[k] = BIT_ULL(12) | nix_get_relchan(dev);
- k++;
- }
-
- break;
- case NIX_TXSCH_LVL_TL1:
- reg[k] = NIX_AF_TL1X_TOPOLOGY(schq);
- regval[k] = (child << 32) | (rr_prio << 1 /*RR_PRIO*/);
- k++;
-
- break;
- }
-
- /* Prepare schedule config */
-	k += prepare_tm_sched_reg(dev, tm_node, &reg[k], &regval[k]);
-
- /* Prepare shaping config */
-	k += prepare_tm_shaper_reg(tm_node, profile, &reg[k], &regval[k]);
-
- if (!k)
- return 0;
-
- /* Copy and send config mbox */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = hw_lvl;
- req->num_regs = k;
-
- otx2_mbox_memcpy(req->reg, reg, sizeof(uint64_t) * k);
- otx2_mbox_memcpy(req->regval, regval, sizeof(uint64_t) * k);
- otx2_mbox_memcpy(req->regval_mask, regval_mask, sizeof(uint64_t) * k);
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- goto error;
-
- return 0;
-error:
- otx2_err("Txschq cfg request failed for node %p, rc=%d", tm_node, rc);
- return rc;
-}
-
-static int
-nix_tm_txsch_reg_config(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *tm_node;
- uint32_t hw_lvl;
- int rc = 0;
-
- for (hw_lvl = 0; hw_lvl <= dev->otx2_tm_root_lvl; hw_lvl++) {
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl == hw_lvl &&
- tm_node->hw_lvl != NIX_TXSCH_LVL_CNT) {
- rc = populate_tm_reg(dev, tm_node);
- if (rc)
- goto exit;
- }
- }
- }
-exit:
- return rc;
-}
-
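-/* Find a node by id; 'user' selects between user-created nodes and
- * driver-internal (default tree) nodes.
- */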
-static struct otx2_nix_tm_node *
-nix_tm_node_search(struct otx2_eth_dev *dev,
- uint32_t node_id, bool user)
-{
- struct otx2_nix_tm_node *tm_node;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->id == node_id &&
- (user == !!(tm_node->flags & NIX_TM_NODE_USER)))
- return tm_node;
- }
- return NULL;
-}
-
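-/* Count the children of 'parent_id' sharing 'priority', i.e. the size of
- * the round-robin group at that priority.
- */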
-static uint32_t
-check_rr(struct otx2_eth_dev *dev, uint32_t priority, uint32_t parent_id)
-{
- struct otx2_nix_tm_node *tm_node;
- uint32_t rr_num = 0;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-
- if (!(tm_node->parent->id == parent_id))
- continue;
-
- if (tm_node->priority == priority)
- rr_num++;
- }
- return rr_num;
-}
-
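-/* Walk all nodes and record, per parent, the RR priority and group size
- * plus the maximum static priority used by its children.
- */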
-static int
-nix_tm_update_parent_info(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *tm_node_child;
- struct otx2_nix_tm_node *tm_node;
- struct otx2_nix_tm_node *parent;
- uint32_t rr_num = 0;
- uint32_t priority;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-		/* Count the children sharing this priority, i.e. the RR group */
- parent = tm_node->parent;
- priority = tm_node->priority;
- rr_num = check_rr(dev, priority, parent->id);
-
-		/* Assume multiple RR groups are not configured,
-		 * as per the advertised capability.
-		 */
- if (rr_num > 1) {
- parent->rr_prio = priority;
- parent->rr_num = rr_num;
- }
-
- /* Find out static priority children that are not in RR */
- TAILQ_FOREACH(tm_node_child, &dev->node_list, node) {
- if (!tm_node_child->parent)
- continue;
- if (parent->id != tm_node_child->parent->id)
- continue;
- if (parent->max_prio == UINT32_MAX &&
- tm_node_child->priority != parent->rr_prio)
- parent->max_prio = 0;
-
- if (parent->max_prio < tm_node_child->priority &&
- parent->rr_prio != tm_node_child->priority)
- parent->max_prio = tm_node_child->priority;
- }
- }
-
- return 0;
-}
-
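-/* Allocate a software TM node, initialize its defaults (weight, rr_prio,
- * packet mode, RED algo) and append it to the device node list.
- */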
-static int
-nix_tm_node_add_to_list(struct otx2_eth_dev *dev, uint32_t node_id,
- uint32_t parent_node_id, uint32_t priority,
- uint32_t weight, uint16_t hw_lvl,
- uint16_t lvl, bool user,
- struct rte_tm_node_params *params)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_nix_tm_node *tm_node, *parent_node;
- uint32_t profile_id;
-
- profile_id = params->shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-
- parent_node = nix_tm_node_search(dev, parent_node_id, user);
-
- tm_node = rte_zmalloc("otx2_nix_tm_node",
- sizeof(struct otx2_nix_tm_node), 0);
- if (!tm_node)
- return -ENOMEM;
-
- tm_node->lvl = lvl;
- tm_node->hw_lvl = hw_lvl;
-
- /* Maintain minimum weight */
- if (!weight)
- weight = 1;
-
- tm_node->id = node_id;
- tm_node->priority = priority;
- tm_node->weight = weight;
- tm_node->rr_prio = 0xf;
- tm_node->max_prio = UINT32_MAX;
- tm_node->hw_id = UINT32_MAX;
- tm_node->flags = 0;
- if (user)
- tm_node->flags = NIX_TM_NODE_USER;
-
- /* Packet mode */
- if (!nix_tm_is_leaf(dev, lvl) &&
- ((profile && profile->params.packet_mode) ||
- (params->nonleaf.wfq_weight_mode &&
- params->nonleaf.n_sp_priorities &&
- !params->nonleaf.wfq_weight_mode[0])))
- tm_node->pkt_mode = 1;
-
- rte_memcpy(&tm_node->params, params, sizeof(struct rte_tm_node_params));
-
- if (profile)
- profile->reference_count++;
-
- tm_node->parent = parent_node;
- tm_node->parent_hw_id = UINT32_MAX;
- shaper_default_red_algo(dev, tm_node, profile);
-
- TAILQ_INSERT_TAIL(&dev->node_list, tm_node, node);
-
- return 0;
-}
-
-static int
-nix_tm_clear_shaper_profiles(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_shaper_profile *shaper_profile;
-
- while ((shaper_profile = TAILQ_FIRST(&dev->shaper_profile_list))) {
- if (shaper_profile->reference_count)
- otx2_tm_dbg("Shaper profile %u has non zero references",
- shaper_profile->shaper_profile_id);
- TAILQ_REMOVE(&dev->shaper_profile_list, shaper_profile, shaper);
- rte_free(shaper_profile);
- }
-
- return 0;
-}
-
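-/* Clear SW_XOFF on any disabled ancestors still holding HW resources so
- * that a flush further down the path can make progress; a no-op on Ax
- * silicon where SW_XOFF manipulation is not supported.
- */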
-static int
-nix_clear_path_xoff(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node)
-{
- struct nix_txschq_config *req;
- struct otx2_nix_tm_node *p;
- int rc;
-
- /* Manipulating SW_XOFF not supported on Ax */
- if (otx2_dev_is_Ax(dev))
- return 0;
-
- /* Enable nodes in path for flush to succeed */
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- p = tm_node;
- else
- p = tm_node->parent;
- while (p) {
- if (!(p->flags & NIX_TM_NODE_ENABLED) &&
- (p->flags & NIX_TM_NODE_HWRES)) {
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = p->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(p, false, req->reg,
- req->regval);
- rc = otx2_mbox_process(dev->mbox);
- if (rc)
- return rc;
-
- p->flags |= NIX_TM_NODE_ENABLED;
- }
- p = p->parent;
- }
-
- return 0;
-}
-
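-/* Set or clear the SMQ xoff/flush bits (bits 50/49 of NIX_AF_SMQX_CFG),
- * first making sure the path above the SMQ is not xoff'ed.
- */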
-static int
-nix_smq_xoff(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- bool enable)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txschq_config *req;
- uint16_t smq;
- int rc;
-
- smq = tm_node->hw_id;
- otx2_tm_dbg("Setting SMQ %u XOFF/FLUSH to %s", smq,
- enable ? "enable" : "disable");
-
- rc = nix_clear_path_xoff(dev, tm_node);
- if (rc)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_SMQ;
- req->num_regs = 1;
-
- req->reg[0] = NIX_AF_SMQX_CFG(smq);
- req->regval[0] = enable ? (BIT_ULL(50) | BIT_ULL(49)) : 0;
- req->regval_mask[0] = enable ?
- ~(BIT_ULL(50) | BIT_ULL(49)) : ~BIT_ULL(50);
-
- return otx2_mbox_process(mbox);
-}
-
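-/* Enable or disable flow control on the SQB aura backing a TX queue and
- * resync the fc_mem count that the fast path polls.
- */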
-int
-otx2_nix_sq_sqb_aura_fc(void *__txq, bool enable)
-{
- struct otx2_eth_txq *txq = __txq;
- struct npa_aq_enq_req *req;
- struct npa_aq_enq_rsp *rsp;
- struct otx2_npa_lf *lf;
- struct otx2_mbox *mbox;
- uint64_t aura_handle;
- int rc;
-
- otx2_tm_dbg("Setting SQ %u SQB aura FC to %s", txq->sq,
- enable ? "enable" : "disable");
-
- lf = otx2_npa_lf_obj_get();
- if (!lf)
- return -EFAULT;
- mbox = lf->mbox;
- /* Set/clear sqb aura fc_ena */
- aura_handle = txq->sqb_pool->pool_id;
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_WRITE;
- /* Below is not needed for aura writes but AF driver needs it */
- /* AF will translate to associated poolctx */
- req->aura.pool_addr = req->aura_id;
-
- req->aura.fc_ena = enable;
- req->aura_mask.fc_ena = 1;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- /* Read back npa aura ctx */
- req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
-
- req->aura_id = npa_lf_aura_handle_to_aura(aura_handle);
- req->ctype = NPA_AQ_CTYPE_AURA;
- req->op = NPA_AQ_INSTOP_READ;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- /* Init when enabled as there might be no triggers */
- if (enable)
- *(volatile uint64_t *)txq->fc_mem = rsp->aura.count;
- else
- *(volatile uint64_t *)txq->fc_mem = txq->nb_sqb_bufs;
- /* Sync write barrier */
- rte_wmb();
-
- return 0;
-}
-
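-/* Poll NIX_LF_SQ_OP_STATUS until the SQ is quiescent (at most one SQB in
- * use, head == tail, all SQB buffers returned) or the shaper-rate-based
- * timeout expires, in which case the TM state is dumped.
- */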
-static int
-nix_txq_flush_sq_spin(struct otx2_eth_txq *txq)
-{
- uint16_t sqb_cnt, head_off, tail_off;
- struct otx2_eth_dev *dev = txq->dev;
- uint64_t wdata, val, prev;
- uint16_t sq = txq->sq;
- int64_t *regaddr;
-	uint64_t timeout; /* in tens of microseconds */
-
- /* Wait for enough time based on shaper min rate */
- timeout = (txq->qconf.nb_desc * NIX_MAX_HW_FRS * 8 * 1E5);
- timeout = timeout / dev->tm_rate_min;
- if (!timeout)
- timeout = 10000;
-
- wdata = ((uint64_t)sq << 32);
- regaddr = (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS);
- val = otx2_atomic64_add_nosync(wdata, regaddr);
-
- /* Spin multiple iterations as "txq->fc_cache_pkts" can still
- * have space to send pkts even though fc_mem is disabled
- */
-
- while (true) {
- prev = val;
- rte_delay_us(10);
- val = otx2_atomic64_add_nosync(wdata, regaddr);
- /* Continue on error */
- if (val & BIT_ULL(63))
- continue;
-
- if (prev != val)
- continue;
-
- sqb_cnt = val & 0xFFFF;
- head_off = (val >> 20) & 0x3F;
- tail_off = (val >> 28) & 0x3F;
-
- /* SQ reached quiescent state */
- if (sqb_cnt <= 1 && head_off == tail_off &&
- (*txq->fc_mem == txq->nb_sqb_bufs)) {
- break;
- }
-
- /* Timeout */
- if (!timeout)
- goto exit;
- timeout--;
- }
-
- return 0;
-exit:
- otx2_nix_tm_dump(dev);
- return -EFAULT;
-}
-
-/* Flush and disable tx queue and its parent SMQ */
-int otx2_nix_sq_flush_pre(void *_txq, bool dev_started)
-{
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_eth_txq *txq;
- struct otx2_eth_dev *dev;
- uint16_t sq;
- bool user;
- int rc;
-
- txq = _txq;
- dev = txq->dev;
- sq = txq->sq;
-
- user = !!(dev->tm_flags & NIX_TM_COMMITTED);
-
- /* Find the node for this SQ */
- tm_node = nix_tm_node_search(dev, sq, user);
- if (!tm_node || !(tm_node->flags & NIX_TM_NODE_ENABLED)) {
- otx2_err("Invalid node/state for sq %u", sq);
- return -EFAULT;
- }
-
- /* Enable CGX RXTX to drain pkts */
- if (!dev_started) {
-		/* Though this enables both RX MCAM entries and the CGX link,
-		 * we assume all the RX queues were stopped long before.
-		 */
- otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
- rc = otx2_mbox_process(dev->mbox);
- if (rc) {
- otx2_err("cgx start failed, rc=%d", rc);
- return rc;
- }
- }
-
-	/* Disable smq xoff in case it was enabled earlier */
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- return rc;
- }
-
-	/* As per the HRM, to disable an SQ, all other SQs
-	 * that feed the same SMQ must be paused before the SMQ flush.
-	 */
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- if (!(sibling->flags & NIX_TM_NODE_ENABLED))
- continue;
-
- sq = sibling->id;
- txq = dev->eth_dev->data->tx_queues[sq];
- if (!txq)
- continue;
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
- goto cleanup;
- }
-
- /* Wait for sq entries to be flushed */
- rc = nix_txq_flush_sq_spin(txq);
- if (rc) {
- otx2_err("Failed to drain sq %u, rc=%d\n", txq->sq, rc);
- return rc;
- }
- }
-
- tm_node->flags &= ~NIX_TM_NODE_ENABLED;
-
- /* Disable and flush */
- rc = nix_smq_xoff(dev, tm_node->parent, true);
- if (rc) {
- otx2_err("Failed to disable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- goto cleanup;
- }
-cleanup:
- /* Restore cgx state */
- if (!dev_started) {
- otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
- rc |= otx2_mbox_process(dev->mbox);
- }
-
- return rc;
-}
-
-int otx2_nix_sq_flush_post(void *_txq)
-{
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_eth_txq *txq = _txq;
- struct otx2_eth_txq *s_txq;
- struct otx2_eth_dev *dev;
- bool once = false;
- uint16_t sq, s_sq;
- bool user;
- int rc;
-
- dev = txq->dev;
- sq = txq->sq;
- user = !!(dev->tm_flags & NIX_TM_COMMITTED);
-
- /* Find the node for this SQ */
- tm_node = nix_tm_node_search(dev, sq, user);
- if (!tm_node) {
- otx2_err("Invalid node for sq %u", sq);
- return -EFAULT;
- }
-
- /* Enable all the siblings back */
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
-
- if (sibling->id == sq)
- continue;
-
- if (!(sibling->flags & NIX_TM_NODE_ENABLED))
- continue;
-
- s_sq = sibling->id;
- s_txq = dev->eth_dev->data->tx_queues[s_sq];
- if (!s_txq)
- continue;
-
- if (!once) {
- /* Enable back if any SQ is still present */
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->parent->hw_id, rc);
- return rc;
- }
- once = true;
- }
-
- rc = otx2_nix_sq_sqb_aura_fc(s_txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
- return rc;
- }
- }
-
- return 0;
-}
-
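-/* Update the SQ context with its parent SMQ and/or the RR quantum
- * derived from the node weight, via a NIX AQ write.
- */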
-static int
-nix_sq_sched_data(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *tm_node,
- bool rr_quantum_only)
-{
- struct rte_eth_dev *eth_dev = dev->eth_dev;
- struct otx2_mbox *mbox = dev->mbox;
- uint16_t sq = tm_node->id, smq;
- struct nix_aq_enq_req *req;
- uint64_t rr_quantum;
- int rc;
-
- smq = tm_node->parent->hw_id;
- rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- if (rr_quantum_only)
- otx2_tm_dbg("Update sq(%u) rr_quantum 0x%"PRIx64, sq, rr_quantum);
- else
- otx2_tm_dbg("Enabling sq(%u)->smq(%u), rr_quantum 0x%"PRIx64,
- sq, smq, rr_quantum);
-
- if (sq > eth_dev->data->nb_tx_queues)
- return -EFAULT;
-
- req = otx2_mbox_alloc_msg_nix_aq_enq(mbox);
- req->qidx = sq;
- req->ctype = NIX_AQ_CTYPE_SQ;
- req->op = NIX_AQ_INSTOP_WRITE;
-
- /* smq update only when needed */
- if (!rr_quantum_only) {
- req->sq.smq = smq;
- req->sq_mask.smq = ~req->sq_mask.smq;
- }
- req->sq.smq_rr_quantum = rr_quantum;
- req->sq_mask.smq_rr_quantum = ~req->sq_mask.smq_rr_quantum;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- otx2_err("Failed to set smq, rc=%d", rc);
- return rc;
-}
-
-int otx2_nix_sq_enable(void *_txq)
-{
- struct otx2_eth_txq *txq = _txq;
- int rc;
-
- /* Enable sqb_aura fc */
- rc = otx2_nix_sq_sqb_aura_fc(txq, true);
- if (rc) {
- otx2_err("Failed to enable sqb aura fc, rc=%d", rc);
- return rc;
- }
-
- return 0;
-}
-
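-/* Free TM nodes matching 'flags_mask'/'flags': release their HW schedule
- * queues via NIX_TXSCH_FREE and, unless 'hw_only' is set, the software
- * state and shaper profile references as well.
- */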
-static int
-nix_tm_free_resources(struct otx2_eth_dev *dev, uint32_t flags_mask,
- uint32_t flags, bool hw_only)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_nix_tm_node *tm_node, *next_node;
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txsch_free_req *req;
- uint32_t profile_id;
- int rc = 0;
-
- next_node = TAILQ_FIRST(&dev->node_list);
- while (next_node) {
- tm_node = next_node;
- next_node = TAILQ_NEXT(tm_node, node);
-
- /* Check for only requested nodes */
- if ((tm_node->flags & flags_mask) != flags)
- continue;
-
- if (!nix_tm_is_leaf(dev, tm_node->lvl) &&
- tm_node->hw_lvl != NIX_TXSCH_LVL_TL1 &&
- tm_node->flags & NIX_TM_NODE_HWRES) {
- /* Free specific HW resource */
- otx2_tm_dbg("Free hwres %s(%u) lvl %u id %u (%p)",
- nix_hwlvl2str(tm_node->hw_lvl),
- tm_node->hw_id, tm_node->lvl,
- tm_node->id, tm_node);
-
- rc = nix_clear_path_xoff(dev, tm_node);
- if (rc)
- return rc;
-
- req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
- req->flags = 0;
- req->schq_lvl = tm_node->hw_lvl;
- req->schq = tm_node->hw_id;
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
- tm_node->flags &= ~NIX_TM_NODE_HWRES;
- }
-
- /* Leave software elements if needed */
- if (hw_only)
- continue;
-
- otx2_tm_dbg("Free node lvl %u id %u (%p)",
- tm_node->lvl, tm_node->id, tm_node);
-
- profile_id = tm_node->params.shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile)
- profile->reference_count--;
-
- TAILQ_REMOVE(&dev->node_list, tm_node, node);
- rte_free(tm_node);
- }
-
- if (!flags_mask) {
- /* Free all hw resources */
- req = otx2_mbox_alloc_msg_nix_txsch_free(mbox);
- req->flags = TXSCHQ_FREE_ALL;
-
- return otx2_mbox_process(mbox);
- }
-
- return rc;
-}
-
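-/* Cache the allocated schedule queue lists and counts from the
- * NIX_TXSCH_ALLOC response into the device structure.
- */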
-static uint8_t
-nix_tm_copy_rsp_to_dev(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_rsp *rsp)
-{
- uint16_t schq;
- uint8_t lvl;
-
- for (lvl = 0; lvl < NIX_TXSCH_LVL_CNT; lvl++) {
- for (schq = 0; schq < MAX_TXSCHQ_PER_FUNC; schq++) {
- dev->txschq_list[lvl][schq] = rsp->schq_list[lvl][schq];
- dev->txschq_contig_list[lvl][schq] =
- rsp->schq_contig_list[lvl][schq];
- }
-
- dev->txschq[lvl] = rsp->schq[lvl];
- dev->txschq_contig[lvl] = rsp->schq_contig[lvl];
- }
- return 0;
-}
-
-static int
-nix_tm_assign_id_to_node(struct otx2_eth_dev *dev,
- struct otx2_nix_tm_node *child,
- struct otx2_nix_tm_node *parent)
-{
- uint32_t hw_id, schq_con_index, prio_offset;
- uint32_t l_id, schq_index;
-
- otx2_tm_dbg("Assign hw id for child node %s lvl %u id %u (%p)",
- nix_hwlvl2str(child->hw_lvl), child->lvl, child->id, child);
-
- child->flags |= NIX_TM_NODE_HWRES;
-
- /* Process root nodes */
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 &&
- child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
- int idx = 0;
- uint32_t tschq_con_index;
-
- l_id = child->hw_lvl;
- tschq_con_index = dev->txschq_contig_index[l_id];
- hw_id = dev->txschq_contig_list[l_id][tschq_con_index];
- child->hw_id = hw_id;
- dev->txschq_contig_index[l_id]++;
- /* Update TL1 hw_id for its parent for config purpose */
- idx = dev->txschq_index[NIX_TXSCH_LVL_TL1]++;
- hw_id = dev->txschq_list[NIX_TXSCH_LVL_TL1][idx];
- child->parent_hw_id = hw_id;
- return 0;
- }
- if (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL1 &&
- child->hw_lvl == dev->otx2_tm_root_lvl && !parent) {
- uint32_t tschq_con_index;
-
- l_id = child->hw_lvl;
- tschq_con_index = dev->txschq_index[l_id];
- hw_id = dev->txschq_list[l_id][tschq_con_index];
- child->hw_id = hw_id;
- dev->txschq_index[l_id]++;
- return 0;
- }
-
- /* Process children with parents */
- l_id = child->hw_lvl;
- schq_index = dev->txschq_index[l_id];
- schq_con_index = dev->txschq_contig_index[l_id];
-
- if (child->priority == parent->rr_prio) {
- hw_id = dev->txschq_list[l_id][schq_index];
- child->hw_id = hw_id;
- child->parent_hw_id = parent->hw_id;
- dev->txschq_index[l_id]++;
- } else {
- prio_offset = schq_con_index + child->priority;
- hw_id = dev->txschq_contig_list[l_id][prio_offset];
- child->hw_id = hw_id;
- }
- return 0;
-}
-
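-/* Assign HW schedule queue ids level by level from TL1 downwards, using
- * the contiguous lists for static priority children and the regular
- * lists for round-robin children.
- */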
-static int
-nix_tm_assign_hw_id(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_node *parent, *child;
- uint32_t child_hw_lvl, con_index_inc, i;
-
- for (i = NIX_TXSCH_LVL_TL1; i > 0; i--) {
- TAILQ_FOREACH(parent, &dev->node_list, node) {
- child_hw_lvl = parent->hw_lvl - 1;
- if (parent->hw_lvl != i)
- continue;
- TAILQ_FOREACH(child, &dev->node_list, node) {
- if (!child->parent)
- continue;
- if (child->parent->id != parent->id)
- continue;
- nix_tm_assign_id_to_node(dev, child, parent);
- }
-
- con_index_inc = parent->max_prio + 1;
- dev->txschq_contig_index[child_hw_lvl] += con_index_inc;
-
- /*
- * Explicitly assign id to parent node if it
- * doesn't have a parent
- */
- if (parent->hw_lvl == dev->otx2_tm_root_lvl)
- nix_tm_assign_id_to_node(dev, parent, NULL);
- }
- }
- return 0;
-}
-
-static uint8_t
-nix_tm_count_req_schq(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_req *req, uint8_t lvl)
-{
- struct otx2_nix_tm_node *tm_node;
- uint8_t contig_count;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (lvl == tm_node->hw_lvl) {
- req->schq[lvl - 1] += tm_node->rr_num;
- if (tm_node->max_prio != UINT32_MAX) {
- contig_count = tm_node->max_prio + 1;
- req->schq_contig[lvl - 1] += contig_count;
- }
- }
- if (lvl == dev->otx2_tm_root_lvl &&
- dev->otx2_tm_root_lvl && lvl == NIX_TXSCH_LVL_TL2 &&
- tm_node->hw_lvl == dev->otx2_tm_root_lvl) {
- req->schq_contig[dev->otx2_tm_root_lvl]++;
- }
- }
-
- req->schq[NIX_TXSCH_LVL_TL1] = 1;
- req->schq_contig[NIX_TXSCH_LVL_TL1] = 0;
-
- return 0;
-}
-
-static int
-nix_tm_prepare_txschq_req(struct otx2_eth_dev *dev,
- struct nix_txsch_alloc_req *req)
-{
- uint8_t i;
-
- for (i = NIX_TXSCH_LVL_TL1; i > 0; i--)
- nix_tm_count_req_schq(dev, req, i);
-
- for (i = 0; i < NIX_TXSCH_LVL_CNT; i++) {
- dev->txschq_index[i] = 0;
- dev->txschq_contig_index[i] = 0;
- }
- return 0;
-}
-
-static int
-nix_tm_send_txsch_alloc_msg(struct otx2_eth_dev *dev)
-{
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_txsch_alloc_req *req;
- struct nix_txsch_alloc_rsp *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_txsch_alloc(mbox);
-
- rc = nix_tm_prepare_txschq_req(dev, req);
- if (rc)
- return rc;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- nix_tm_copy_rsp_to_dev(dev, rsp);
- dev->link_cfg_lvl = rsp->link_cfg_lvl;
-
- nix_tm_assign_hw_id(dev);
- return 0;
-}
-
-static int
-nix_tm_alloc_resources(struct rte_eth_dev *eth_dev, bool xmit_enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- struct otx2_eth_txq *txq;
- uint16_t sq;
- int rc;
-
- nix_tm_update_parent_info(dev);
-
- rc = nix_tm_send_txsch_alloc_msg(dev);
- if (rc) {
- otx2_err("TM failed to alloc tm resources=%d", rc);
- return rc;
- }
-
- rc = nix_tm_txsch_reg_config(dev);
- if (rc) {
- otx2_err("TM failed to configure sched registers=%d", rc);
- return rc;
- }
-
-	/* Trigger MTU recalculation as the SMQ needs the MTU configured */
- if (eth_dev->data->dev_started && eth_dev->data->nb_rx_queues) {
- rc = otx2_nix_recalc_mtu(eth_dev);
- if (rc) {
- otx2_err("TM MTU update failed, rc=%d", rc);
- return rc;
- }
- }
-
- /* Mark all non-leaf's as enabled */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- }
-
- if (!xmit_enable)
- return 0;
-
- /* Update SQ Sched Data while SQ is idle */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- continue;
-
- rc = nix_sq_sched_data(dev, tm_node, false);
- if (rc) {
- otx2_err("SQ %u sched update failed, rc=%d",
- tm_node->id, rc);
- return rc;
- }
- }
-
- /* Finally XON all SMQ's */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- return rc;
- }
- }
-
- /* Enable xmit as all the topology is ready */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!nix_tm_is_leaf(dev, tm_node->lvl))
- continue;
-
- sq = tm_node->id;
- txq = eth_dev->data->tx_queues[sq];
-
- rc = otx2_nix_sq_enable(txq);
- if (rc) {
- otx2_err("TM sw xon failed on SQ %u, rc=%d",
- tm_node->id, rc);
- return rc;
- }
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- }
-
- return 0;
-}
-
-static int
-send_tm_reqval(struct otx2_mbox *mbox,
- struct nix_txschq_config *req,
- struct rte_tm_error *error)
-{
- int rc;
-
- if (!req->num_regs ||
- req->num_regs > MAX_REGS_PER_MBOX_MSG) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "invalid config";
- return -EIO;
- }
-
- rc = otx2_mbox_process(mbox);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- }
- return rc;
-}
-
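-/* Map an rte_tm level id to the NIX TXSCH hardware level; the mapping
- * shifts by one level depending on whether the PF has TL1 access.
- */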
-static uint16_t
-nix_tm_lvl2nix(struct otx2_eth_dev *dev, uint32_t lvl)
-{
- if (nix_tm_have_tl1_access(dev)) {
- switch (lvl) {
- case OTX2_TM_LVL_ROOT:
- return NIX_TXSCH_LVL_TL1;
- case OTX2_TM_LVL_SCH1:
- return NIX_TXSCH_LVL_TL2;
- case OTX2_TM_LVL_SCH2:
- return NIX_TXSCH_LVL_TL3;
- case OTX2_TM_LVL_SCH3:
- return NIX_TXSCH_LVL_TL4;
- case OTX2_TM_LVL_SCH4:
- return NIX_TXSCH_LVL_SMQ;
- default:
- return NIX_TXSCH_LVL_CNT;
- }
- } else {
- switch (lvl) {
- case OTX2_TM_LVL_ROOT:
- return NIX_TXSCH_LVL_TL2;
- case OTX2_TM_LVL_SCH1:
- return NIX_TXSCH_LVL_TL3;
- case OTX2_TM_LVL_SCH2:
- return NIX_TXSCH_LVL_TL4;
- case OTX2_TM_LVL_SCH3:
- return NIX_TXSCH_LVL_SMQ;
- default:
- return NIX_TXSCH_LVL_CNT;
- }
- }
-}
-
-static uint16_t
-nix_max_prio(struct otx2_eth_dev *dev, uint16_t hw_lvl)
-{
- if (hw_lvl >= NIX_TXSCH_LVL_CNT)
- return 0;
-
- /* MDQ doesn't support SP */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- return 0;
-
-	/* PF's TL1 with VFs enabled doesn't support SP */
- if (hw_lvl == NIX_TXSCH_LVL_TL1 &&
- (dev->otx2_tm_root_lvl == NIX_TXSCH_LVL_TL2 ||
- (dev->tm_flags & NIX_TM_TL1_NO_SP)))
- return 0;
-
- return TXSCH_TLX_SP_PRIO_MAX - 1;
-}
-
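-/* Validate a new child's priority under 'parent_id': reject priorities
- * beyond the level maximum, more than one DWRR group among siblings, and
- * holes in the priority sequence.
- */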
-static int
-validate_prio(struct otx2_eth_dev *dev, uint32_t lvl,
- uint32_t parent_id, uint32_t priority,
- struct rte_tm_error *error)
-{
- uint8_t priorities[TXSCH_TLX_SP_PRIO_MAX];
- struct otx2_nix_tm_node *tm_node;
- uint32_t rr_num = 0;
- int i;
-
- /* Validate priority against max */
- if (priority > nix_max_prio(dev, nix_tm_lvl2nix(dev, lvl - 1))) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "unsupported priority value";
- return -EINVAL;
- }
-
- if (parent_id == RTE_TM_NODE_ID_NULL)
- return 0;
-
- memset(priorities, 0, TXSCH_TLX_SP_PRIO_MAX);
- priorities[priority] = 1;
-
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (!tm_node->parent)
- continue;
-
- if (!(tm_node->flags & NIX_TM_NODE_USER))
- continue;
-
- if (tm_node->parent->id != parent_id)
- continue;
-
- priorities[tm_node->priority]++;
- }
-
- for (i = 0; i < TXSCH_TLX_SP_PRIO_MAX; i++)
- if (priorities[i] > 1)
- rr_num++;
-
-	/* At most one RR group per parent */
- if (rr_num > 1) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "multiple DWRR node priority";
- return -EINVAL;
- }
-
- /* Check for previous priority to avoid holes in priorities */
- if (priority && !priorities[priority - 1]) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PRIORITY;
- error->message = "priority not in order";
- return -EINVAL;
- }
-
- return 0;
-}
-
-static int
-read_tm_reg(struct otx2_mbox *mbox, uint64_t reg,
- uint64_t *regval, uint32_t hw_lvl)
-{
- volatile struct nix_txschq_config *req;
- struct nix_txschq_config *rsp;
- int rc;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->read = 1;
- req->lvl = hw_lvl;
- req->reg[0] = reg;
- req->num_regs = 1;
-
- rc = otx2_mbox_process_msg(mbox, (void **)&rsp);
- if (rc)
- return rc;
- *regval = rsp->regval[0];
- return 0;
-}
-
-/* Search for min rate in topology */
-static void
-nix_tm_shaper_profile_update_min(struct otx2_eth_dev *dev)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- uint64_t rate_min = 1E9; /* 1 Gbps */
-
- TAILQ_FOREACH(profile, &dev->shaper_profile_list, shaper) {
- if (profile->params.peak.rate &&
- profile->params.peak.rate < rate_min)
- rate_min = profile->params.peak.rate;
-
- if (profile->params.committed.rate &&
- profile->params.committed.rate < rate_min)
- rate_min = profile->params.committed.rate;
- }
-
- dev->tm_rate_min = rate_min;
-}
-
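-/* Stop all transmit: XON the SMQs so they can drain, flush every SQ,
- * then XOFF and flush all SMQs as the HRM requires.
- */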
-static int
-nix_xmit_disable(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
- uint16_t sqb_cnt, head_off, tail_off;
- struct otx2_nix_tm_node *tm_node;
- struct otx2_eth_txq *txq;
- uint64_t wdata, val;
- int i, rc;
-
- otx2_tm_dbg("Disabling xmit on %s", eth_dev->data->name);
-
- /* Enable CGX RXTX to drain pkts */
- if (!eth_dev->data->dev_started) {
- otx2_mbox_alloc_msg_nix_lf_start_rx(dev->mbox);
- rc = otx2_mbox_process(dev->mbox);
- if (rc)
- return rc;
- }
-
- /* XON all SMQ's */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, false);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- goto cleanup;
- }
- }
-
- /* Flush all tx queues */
- for (i = 0; i < sq_cnt; i++) {
- txq = eth_dev->data->tx_queues[i];
-
- rc = otx2_nix_sq_sqb_aura_fc(txq, false);
- if (rc) {
- otx2_err("Failed to disable sqb aura fc, rc=%d", rc);
- goto cleanup;
- }
-
- /* Wait for sq entries to be flushed */
- rc = nix_txq_flush_sq_spin(txq);
- if (rc) {
- otx2_err("Failed to drain sq, rc=%d\n", rc);
- goto cleanup;
- }
- }
-
-	/* XOFF and flush all SMQs. The HRM mandates that
-	 * all SQs be empty before an SMQ flush is issued.
-	 */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->hw_lvl != NIX_TXSCH_LVL_SMQ)
- continue;
- if (!(tm_node->flags & NIX_TM_NODE_HWRES))
- continue;
-
- rc = nix_smq_xoff(dev, tm_node, true);
- if (rc) {
- otx2_err("Failed to enable smq %u, rc=%d",
- tm_node->hw_id, rc);
- goto cleanup;
- }
- }
-
- /* Verify sanity of all tx queues */
- for (i = 0; i < sq_cnt; i++) {
- txq = eth_dev->data->tx_queues[i];
-
- wdata = ((uint64_t)txq->sq << 32);
- val = otx2_atomic64_add_nosync(wdata,
- (int64_t *)(dev->base + NIX_LF_SQ_OP_STATUS));
-
- sqb_cnt = val & 0xFFFF;
- head_off = (val >> 20) & 0x3F;
- tail_off = (val >> 28) & 0x3F;
-
- if (sqb_cnt > 1 || head_off != tail_off ||
- (*txq->fc_mem != txq->nb_sqb_bufs))
- otx2_err("Failed to gracefully flush sq %u", txq->sq);
- }
-
-cleanup:
- /* restore cgx state */
- if (!eth_dev->data->dev_started) {
- otx2_mbox_alloc_msg_nix_lf_stop_rx(dev->mbox);
- rc |= otx2_mbox_process(dev->mbox);
- }
-
- return rc;
-}
-
-static int
-otx2_nix_tm_node_type_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
- int *is_leaf, struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
-
- if (is_leaf == NULL) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (node_id == RTE_TM_NODE_ID_NULL || !tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- return -EINVAL;
- }
- if (nix_tm_is_leaf(dev, tm_node->lvl))
- *is_leaf = true;
- else
- *is_leaf = false;
- return 0;
-}
-
-static int
-otx2_nix_tm_capa_get(struct rte_eth_dev *eth_dev,
- struct rte_tm_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- int rc, max_nr_nodes = 0, i;
- struct free_rsrcs_rsp *rsp;
-
- memset(cap, 0, sizeof(*cap));
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- for (i = 0; i < NIX_TXSCH_LVL_TL1; i++)
- max_nr_nodes += rsp->schq[i];
-
- cap->n_nodes_max = max_nr_nodes + dev->tm_leaf_cnt;
- /* TL1 level is reserved for PF */
- cap->n_levels_max = nix_tm_have_tl1_access(dev) ?
- OTX2_TM_LVL_MAX : OTX2_TM_LVL_MAX - 1;
- cap->non_leaf_nodes_identical = 1;
- cap->leaf_nodes_identical = 1;
-
- /* Shaper Capabilities */
- cap->shaper_private_n_max = max_nr_nodes;
- cap->shaper_n_max = max_nr_nodes;
- cap->shaper_private_dual_rate_n_max = max_nr_nodes;
- cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->shaper_private_packet_mode_supported = 1;
- cap->shaper_private_byte_mode_supported = 1;
- cap->shaper_pkt_length_adjust_min = NIX_LENGTH_ADJUST_MIN;
- cap->shaper_pkt_length_adjust_max = NIX_LENGTH_ADJUST_MAX;
-
- /* Schedule Capabilities */
- cap->sched_n_children_max = rsp->schq[NIX_TXSCH_LVL_MDQ];
- cap->sched_sp_n_priorities_max = TXSCH_TLX_SP_PRIO_MAX;
- cap->sched_wfq_n_children_per_group_max = cap->sched_n_children_max;
- cap->sched_wfq_n_groups_max = 1;
- cap->sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->sched_wfq_packet_mode_supported = 1;
- cap->sched_wfq_byte_mode_supported = 1;
-
- cap->dynamic_update_mask =
- RTE_TM_UPDATE_NODE_PARENT_KEEP_LEVEL |
- RTE_TM_UPDATE_NODE_SUSPEND_RESUME;
- cap->stats_mask =
- RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES |
- RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
-
- for (i = 0; i < RTE_COLORS; i++) {
- cap->mark_vlan_dei_supported[i] = false;
- cap->mark_ip_ecn_tcp_supported[i] = false;
- cap->mark_ip_dscp_supported[i] = false;
- }
-
- return 0;
-}
-
-static int
-otx2_nix_tm_level_capa_get(struct rte_eth_dev *eth_dev, uint32_t lvl,
- struct rte_tm_level_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct free_rsrcs_rsp *rsp;
- uint16_t hw_lvl;
- int rc;
-
- memset(cap, 0, sizeof(*cap));
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- hw_lvl = nix_tm_lvl2nix(dev, lvl);
-
- if (nix_tm_is_leaf(dev, lvl)) {
- /* Leaf */
- cap->n_nodes_max = dev->tm_leaf_cnt;
- cap->n_nodes_leaf_max = dev->tm_leaf_cnt;
- cap->leaf_nodes_identical = 1;
- cap->leaf.stats_mask =
- RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES;
-
- } else if (lvl == OTX2_TM_LVL_ROOT) {
- /* Root node, aka TL2(vf)/TL1(pf) */
- cap->n_nodes_max = 1;
- cap->n_nodes_nonleaf_max = 1;
- cap->non_leaf_nodes_identical = 1;
-
- cap->nonleaf.shaper_private_supported = true;
- cap->nonleaf.shaper_private_dual_rate_supported =
- nix_tm_have_tl1_access(dev) ? false : true;
- cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_packet_mode_supported = 1;
- cap->nonleaf.shaper_private_byte_mode_supported = 1;
-
- cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
- cap->nonleaf.sched_sp_n_priorities_max =
- nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
-
- if (nix_tm_have_tl1_access(dev))
- cap->nonleaf.stats_mask =
- RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
- } else if ((lvl < OTX2_TM_LVL_MAX) &&
- (hw_lvl < NIX_TXSCH_LVL_CNT)) {
- /* TL2, TL3, TL4, MDQ */
- cap->n_nodes_max = rsp->schq[hw_lvl];
- cap->n_nodes_nonleaf_max = cap->n_nodes_max;
- cap->non_leaf_nodes_identical = 1;
-
- cap->nonleaf.shaper_private_supported = true;
- cap->nonleaf.shaper_private_dual_rate_supported = true;
- cap->nonleaf.shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->nonleaf.shaper_private_packet_mode_supported = 1;
- cap->nonleaf.shaper_private_byte_mode_supported = 1;
-
- /* MDQ doesn't support Strict Priority */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
- else
- cap->nonleaf.sched_n_children_max =
- rsp->schq[hw_lvl - 1];
- cap->nonleaf.sched_sp_n_priorities_max =
- nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
- } else {
- /* unsupported level */
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
-		return -EINVAL;
- }
- return 0;
-}
-
-static int
-otx2_nix_tm_node_capa_get(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_node_capabilities *cap,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct free_rsrcs_rsp *rsp;
- int rc, hw_lvl, lvl;
-
- memset(cap, 0, sizeof(*cap));
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- hw_lvl = tm_node->hw_lvl;
- lvl = tm_node->lvl;
-
- /* Leaf node */
- if (nix_tm_is_leaf(dev, lvl)) {
- cap->stats_mask = RTE_TM_STATS_N_PKTS |
- RTE_TM_STATS_N_BYTES;
- return 0;
- }
-
- otx2_mbox_alloc_msg_free_rsrc_cnt(mbox);
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "unexpected fatal error";
- return rc;
- }
-
- /* Non Leaf Shaper */
- cap->shaper_private_supported = true;
- cap->shaper_private_dual_rate_supported =
- (hw_lvl == NIX_TXSCH_LVL_TL1) ? false : true;
- cap->shaper_private_rate_min = MIN_SHAPER_RATE / 8;
- cap->shaper_private_rate_max = MAX_SHAPER_RATE / 8;
- cap->shaper_private_packet_mode_supported = 1;
- cap->shaper_private_byte_mode_supported = 1;
-
- /* Non Leaf Scheduler */
- if (hw_lvl == NIX_TXSCH_LVL_MDQ)
- cap->nonleaf.sched_n_children_max = dev->tm_leaf_cnt;
- else
- cap->nonleaf.sched_n_children_max = rsp->schq[hw_lvl - 1];
-
- cap->nonleaf.sched_sp_n_priorities_max = nix_max_prio(dev, hw_lvl) + 1;
- cap->nonleaf.sched_wfq_n_children_per_group_max =
- cap->nonleaf.sched_n_children_max;
- cap->nonleaf.sched_wfq_n_groups_max = 1;
- cap->nonleaf.sched_wfq_weight_max = MAX_SCHED_WEIGHT;
- cap->nonleaf.sched_wfq_packet_mode_supported = 1;
- cap->nonleaf.sched_wfq_byte_mode_supported = 1;
-
- if (hw_lvl == NIX_TXSCH_LVL_TL1)
- cap->stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
- return 0;
-}
-
-static int
-otx2_nix_tm_shaper_profile_add(struct rte_eth_dev *eth_dev,
- uint32_t profile_id,
- struct rte_tm_shaper_params *params,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile;
-
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-		error->message = "shaper profile ID already exists";
- return -EINVAL;
- }
-
- /* Committed rate and burst size can be enabled/disabled */
- if (params->committed.size || params->committed.rate) {
- if (params->committed.size < MIN_SHAPER_BURST ||
- params->committed.size > MAX_SHAPER_BURST) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_SIZE;
- return -EINVAL;
- } else if (!shaper_rate_to_nix(params->committed.rate * 8,
- NULL, NULL, NULL)) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_COMMITTED_RATE;
- error->message = "shaper committed rate invalid";
- return -EINVAL;
- }
- }
-
- /* Peak rate and burst size can be enabled/disabled */
- if (params->peak.size || params->peak.rate) {
- if (params->peak.size < MIN_SHAPER_BURST ||
- params->peak.size > MAX_SHAPER_BURST) {
- error->type =
- RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_SIZE;
- return -EINVAL;
- } else if (!shaper_rate_to_nix(params->peak.rate * 8,
- NULL, NULL, NULL)) {
-			error->type =
-				RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PEAK_RATE;
- error->message = "shaper peak rate invalid";
- return -EINVAL;
- }
- }
-
- if (params->pkt_length_adjust < NIX_LENGTH_ADJUST_MIN ||
- params->pkt_length_adjust > NIX_LENGTH_ADJUST_MAX) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_PKT_ADJUST_LEN;
- error->message = "length adjust invalid";
- return -EINVAL;
- }
-
- profile = rte_zmalloc("otx2_nix_tm_shaper_profile",
- sizeof(struct otx2_nix_tm_shaper_profile), 0);
- if (!profile)
- return -ENOMEM;
-
- profile->shaper_profile_id = profile_id;
- rte_memcpy(&profile->params, params,
- sizeof(struct rte_tm_shaper_params));
- TAILQ_INSERT_TAIL(&dev->shaper_profile_list, profile, shaper);
-
- otx2_tm_dbg("Added TM shaper profile %u, "
- " pir %" PRIu64 " , pbs %" PRIu64 ", cir %" PRIu64
- ", cbs %" PRIu64 " , adj %u, pkt mode %d",
- profile_id,
- params->peak.rate * 8,
- params->peak.size,
- params->committed.rate * 8,
- params->committed.size,
- params->pkt_length_adjust,
- params->packet_mode);
-
-	/* Translate rates to bits per second */
- profile->params.peak.rate = profile->params.peak.rate * 8;
- profile->params.committed.rate = profile->params.committed.rate * 8;
- /* Always use PIR for single rate shaping */
- if (!params->peak.rate && params->committed.rate) {
- profile->params.peak = profile->params.committed;
- memset(&profile->params.committed, 0,
- sizeof(profile->params.committed));
- }
-
- /* update min rate */
- nix_tm_shaper_profile_update_min(dev);
- return 0;
-}
-
-static int
-otx2_nix_tm_shaper_profile_delete(struct rte_eth_dev *eth_dev,
- uint32_t profile_id,
- struct rte_tm_error *error)
-{
- struct otx2_nix_tm_shaper_profile *profile;
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-
- if (!profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-		error->message = "shaper profile ID does not exist";
- return -EINVAL;
- }
-
- if (profile->reference_count) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
- error->message = "shaper profile in use";
- return -EINVAL;
- }
-
- otx2_tm_dbg("Removing TM shaper profile %u", profile_id);
- TAILQ_REMOVE(&dev->shaper_profile_list, profile, shaper);
- rte_free(profile);
-
- /* update min rate */
- nix_tm_shaper_profile_update_min(dev);
- return 0;
-}
-
-static int
-otx2_nix_tm_node_add(struct rte_eth_dev *eth_dev, uint32_t node_id,
- uint32_t parent_node_id, uint32_t priority,
- uint32_t weight, uint32_t lvl,
- struct rte_tm_node_params *params,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile = NULL;
- struct otx2_nix_tm_node *parent_node;
- int rc, pkt_mode, clear_on_fail = 0;
- uint32_t exp_next_lvl, i;
- uint32_t profile_id;
- uint16_t hw_lvl;
-
- /* we don't support dynamic updates */
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "dynamic update not supported";
- return -EIO;
- }
-
-	/* Leaf nodes must have priority 0 */
- if (nix_tm_is_leaf(dev, lvl) && priority != 0) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "queue shapers must be priority 0";
- return -EIO;
- }
-
- parent_node = nix_tm_node_search(dev, parent_node_id, true);
-
- /* find the right level */
- if (lvl == RTE_TM_NODE_LEVEL_ID_ANY) {
- if (parent_node_id == RTE_TM_NODE_ID_NULL) {
- lvl = OTX2_TM_LVL_ROOT;
- } else if (parent_node) {
- lvl = parent_node->lvl + 1;
- } else {
-			/* Neither a proper parent nor a proper level id given */
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "invalid parent node id";
- return -ERANGE;
- }
- }
-
- /* Translate rte_tm level id's to nix hw level id's */
- hw_lvl = nix_tm_lvl2nix(dev, lvl);
- if (hw_lvl == NIX_TXSCH_LVL_CNT &&
- !nix_tm_is_leaf(dev, lvl)) {
- error->type = RTE_TM_ERROR_TYPE_LEVEL_ID;
- error->message = "invalid level id";
- return -ERANGE;
- }
-
- if (node_id < dev->tm_leaf_cnt)
- exp_next_lvl = NIX_TXSCH_LVL_SMQ;
- else
- exp_next_lvl = hw_lvl + 1;
-
- /* Check if there is no parent node yet */
- if (hw_lvl != dev->otx2_tm_root_lvl &&
- (!parent_node || parent_node->hw_lvl != exp_next_lvl)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "invalid parent node id";
- return -EINVAL;
- }
-
- /* Check if a node already exists */
- if (nix_tm_node_search(dev, node_id, true)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "node already exists";
- return -EINVAL;
- }
-
- if (!nix_tm_is_leaf(dev, lvl)) {
- /* Check if shaper profile exists for non leaf node */
- profile_id = params->shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE && !profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "invalid shaper profile";
- return -EINVAL;
- }
-
- /* Minimum static priority count is 1 */
- if (!params->nonleaf.n_sp_priorities ||
- params->nonleaf.n_sp_priorities > TXSCH_TLX_SP_PRIO_MAX) {
- error->type =
- RTE_TM_ERROR_TYPE_NODE_PARAMS_N_SP_PRIORITIES;
- error->message = "invalid sp priorities";
- return -EINVAL;
- }
-
- pkt_mode = 0;
- /* Validate weight mode */
- for (i = 0; i < params->nonleaf.n_sp_priorities &&
- params->nonleaf.wfq_weight_mode; i++) {
- pkt_mode = !params->nonleaf.wfq_weight_mode[i];
- if (pkt_mode == !params->nonleaf.wfq_weight_mode[0])
- continue;
-
- error->type =
- RTE_TM_ERROR_TYPE_NODE_PARAMS_WFQ_WEIGHT_MODE;
- error->message = "unsupported weight mode";
- return -EINVAL;
- }
-
- if (profile && params->nonleaf.n_sp_priorities &&
- pkt_mode != profile->params.packet_mode) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE;
- error->message = "shaper wfq packet mode mismatch";
- return -EINVAL;
- }
- }
-
-	/* Check for a second DWRR group among siblings or holes in priority */
- if (validate_prio(dev, lvl, parent_node_id, priority, error))
- return -EINVAL;
-
- if (weight > MAX_SCHED_WEIGHT) {
- error->type = RTE_TM_ERROR_TYPE_NODE_WEIGHT;
- error->message = "max weight exceeded";
- return -EINVAL;
- }
-
- rc = nix_tm_node_add_to_list(dev, node_id, parent_node_id,
- priority, weight, hw_lvl,
- lvl, true, params);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- /* cleanup user added nodes */
- if (clear_on_fail)
- nix_tm_free_resources(dev, NIX_TM_NODE_USER,
- NIX_TM_NODE_USER, false);
- error->message = "failed to add node";
- return rc;
- }
- error->type = RTE_TM_ERROR_TYPE_NONE;
- return 0;
-}
-
-static int
-otx2_nix_tm_node_delete(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node, *child_node;
- struct otx2_nix_tm_shaper_profile *profile;
- uint32_t profile_id;
-
- /* we don't support dynamic updates yet */
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_CAPABILITIES;
- error->message = "hierarchy exists";
- return -EIO;
- }
-
- if (node_id == RTE_TM_NODE_ID_NULL) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "invalid node id";
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- /* Check for any existing children */
- TAILQ_FOREACH(child_node, &dev->node_list, node) {
- if (child_node->parent == tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "children exist";
- return -EINVAL;
- }
- }
-
- /* Remove shaper profile reference */
- profile_id = tm_node->params.shaper_profile_id;
- profile = nix_tm_shaper_profile_search(dev, profile_id);
-	if (profile)
-		profile->reference_count--;
-
- TAILQ_REMOVE(&dev->node_list, tm_node, node);
- rte_free(tm_node);
- return 0;
-}
-
-static int
-nix_tm_node_suspend_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error, bool suspend)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct nix_txschq_config *req;
- uint16_t flags;
- int rc;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy doesn't exist";
- return -EINVAL;
- }
-
- flags = tm_node->flags;
- flags = suspend ? (flags & ~NIX_TM_NODE_ENABLED) :
- (flags | NIX_TM_NODE_ENABLED);
-
- if (tm_node->flags == flags)
- return 0;
-
- /* send mbox for state change */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
-
- req->lvl = tm_node->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node, suspend,
- req->reg, req->regval);
- rc = send_tm_reqval(mbox, req, error);
- if (!rc)
- tm_node->flags = flags;
- return rc;
-}
-
-static int
-otx2_nix_tm_node_suspend(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- return nix_tm_node_suspend_resume(eth_dev, node_id, error, true);
-}
-
-static int
-otx2_nix_tm_node_resume(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_error *error)
-{
- return nix_tm_node_suspend_resume(eth_dev, node_id, error, false);
-}
-
-static int
-otx2_nix_tm_hierarchy_commit(struct rte_eth_dev *eth_dev,
- int clear_on_fail,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- uint32_t leaf_cnt = 0;
- int rc;
-
- if (dev->tm_flags & NIX_TM_COMMITTED) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy exists";
- return -EINVAL;
- }
-
- /* Check if we have all the leaf nodes */
- TAILQ_FOREACH(tm_node, &dev->node_list, node) {
- if (tm_node->flags & NIX_TM_NODE_USER &&
- tm_node->id < dev->tm_leaf_cnt)
- leaf_cnt++;
- }
-
- if (leaf_cnt != dev->tm_leaf_cnt) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "incomplete hierarchy";
- return -EINVAL;
- }
-
-	/*
-	 * Disable xmit; it will be re-enabled once the
-	 * new topology is in place.
-	 */
- rc = nix_xmit_disable(eth_dev);
- if (rc) {
- otx2_err("failed to disable TX, rc=%d", rc);
- return -EIO;
- }
-
- /* Delete default/ratelimit tree */
- if (dev->tm_flags & (NIX_TM_DEFAULT_TREE | NIX_TM_RATE_LIMIT_TREE)) {
- rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER, 0, false);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "failed to free default resources";
- return rc;
- }
- dev->tm_flags &= ~(NIX_TM_DEFAULT_TREE |
- NIX_TM_RATE_LIMIT_TREE);
- }
-
- /* Free up user alloc'ed resources */
- rc = nix_tm_free_resources(dev, NIX_TM_NODE_USER,
- NIX_TM_NODE_USER, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "failed to free user resources";
- return rc;
- }
-
- rc = nix_tm_alloc_resources(eth_dev, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "alloc resources failed";
- /* TODO should we restore default config ? */
- if (clear_on_fail)
- nix_tm_free_resources(dev, 0, 0, false);
- return rc;
- }
-
- error->type = RTE_TM_ERROR_TYPE_NONE;
- dev->tm_flags |= NIX_TM_COMMITTED;
- return 0;
-}
-
-static int
-otx2_nix_tm_node_shaper_update(struct rte_eth_dev *eth_dev,
- uint32_t node_id,
- uint32_t profile_id,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile *profile = NULL;
- struct otx2_mbox *mbox = dev->mbox;
- struct otx2_nix_tm_node *tm_node;
- struct nix_txschq_config *req;
- uint8_t k;
- int rc;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node || nix_tm_is_leaf(dev, tm_node->lvl)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "invalid node";
- return -EINVAL;
- }
-
- if (profile_id == tm_node->params.shaper_profile_id)
- return 0;
-
- if (profile_id != RTE_TM_SHAPER_PROFILE_ID_NONE) {
- profile = nix_tm_shaper_profile_search(dev, profile_id);
- if (!profile) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
-			error->message = "shaper profile ID does not exist";
- return -EINVAL;
- }
- }
-
- if (profile && profile->params.packet_mode != tm_node->pkt_mode) {
- error->type = RTE_TM_ERROR_TYPE_SHAPER_PROFILE_ID;
- error->message = "shaper profile pkt mode mismatch";
- return -EINVAL;
- }
-
- tm_node->params.shaper_profile_id = profile_id;
-
- /* Nothing to do if not yet committed */
- if (!(dev->tm_flags & NIX_TM_COMMITTED))
- return 0;
-
- tm_node->flags &= ~NIX_TM_NODE_ENABLED;
-
- /* Flush the specific node with SW_XOFF */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = tm_node->hw_lvl;
- k = prepare_tm_sw_xoff(tm_node, true, req->reg, req->regval);
- req->num_regs = k;
-
- rc = send_tm_reqval(mbox, req, error);
- if (rc)
- return rc;
-
- shaper_default_red_algo(dev, tm_node, profile);
-
- /* Update the PIR/CIR and clear SW XOFF */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = prepare_tm_shaper_reg(tm_node, profile, req->reg, req->regval);
-
- k += prepare_tm_sw_xoff(tm_node, false, &req->reg[k], &req->regval[k]);
-
- req->num_regs = k;
- rc = send_tm_reqval(mbox, req, error);
- if (!rc)
- tm_node->flags |= NIX_TM_NODE_ENABLED;
- return rc;
-}
-
-static int
-otx2_nix_tm_node_parent_update(struct rte_eth_dev *eth_dev,
- uint32_t node_id, uint32_t new_parent_id,
- uint32_t priority, uint32_t weight,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node, *sibling;
- struct otx2_nix_tm_node *new_parent;
- struct nix_txschq_config *req;
- uint8_t k;
- int rc;
-
- if (!(dev->tm_flags & NIX_TM_COMMITTED)) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "hierarchy doesn't exist";
- return -EINVAL;
- }
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- /* Parent id valid only for non root nodes */
- if (tm_node->hw_lvl != dev->otx2_tm_root_lvl) {
- new_parent = nix_tm_node_search(dev, new_parent_id, true);
- if (!new_parent) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "no such parent node";
- return -EINVAL;
- }
-
-		/* Currently only dynamic weight update is supported */
- if (tm_node->parent != new_parent ||
- tm_node->priority != priority) {
- error->type = RTE_TM_ERROR_TYPE_NODE_PARENT_NODE_ID;
- error->message = "only weight update supported";
- return -EINVAL;
- }
- }
-
- /* Skip if no change */
- if (tm_node->weight == weight)
- return 0;
-
- tm_node->weight = weight;
-
- /* For leaf nodes, SQ CTX needs update */
- if (nix_tm_is_leaf(dev, tm_node->lvl)) {
- /* Update SQ quantum data on the fly */
- rc = nix_sq_sched_data(dev, tm_node, true);
- if (rc) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "sq sched data update failed";
- return rc;
- }
- } else {
- /* XOFF Parent node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->parent->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node->parent, true,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XOFF this node and all other siblings */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = 0;
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- k += prepare_tm_sw_xoff(sibling, true, &req->reg[k],
- &req->regval[k]);
- }
- req->num_regs = k;
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* Update new weight for current node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
- req->num_regs = prepare_tm_sched_reg(dev, tm_node,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XON this node and all other siblings */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->hw_lvl;
-
- k = 0;
- TAILQ_FOREACH(sibling, &dev->node_list, node) {
- if (sibling->parent != tm_node->parent)
- continue;
- k += prepare_tm_sw_xoff(sibling, false, &req->reg[k],
- &req->regval[k]);
- }
- req->num_regs = k;
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
-
- /* XON Parent node */
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(dev->mbox);
- req->lvl = tm_node->parent->hw_lvl;
- req->num_regs = prepare_tm_sw_xoff(tm_node->parent, false,
- req->reg, req->regval);
- rc = send_tm_reqval(dev->mbox, req, error);
- if (rc)
- return rc;
- }
- return 0;
-}
-
-static int
-otx2_nix_tm_node_stats_read(struct rte_eth_dev *eth_dev, uint32_t node_id,
- struct rte_tm_node_stats *stats,
- uint64_t *stats_mask, int clear,
- struct rte_tm_error *error)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_node *tm_node;
- uint64_t reg, val;
- int64_t *addr;
- int rc = 0;
-
- tm_node = nix_tm_node_search(dev, node_id, true);
- if (!tm_node) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "no such node";
- return -EINVAL;
- }
-
- if (!(tm_node->flags & NIX_TM_NODE_HWRES)) {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "HW resources not allocated";
- return -EINVAL;
- }
-
-	/* Stats are supported only for leaf nodes or the TL1 root */
- if (nix_tm_is_leaf(dev, tm_node->lvl)) {
- reg = (((uint64_t)tm_node->id) << 32);
-
- /* Packets */
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_PKTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->n_pkts = val - tm_node->last_pkts;
-
- /* Bytes */
- addr = (int64_t *)(dev->base + NIX_LF_SQ_OP_OCTS);
- val = otx2_atomic64_add_nosync(reg, addr);
- if (val & OP_ERR)
- val = 0;
- stats->n_bytes = val - tm_node->last_bytes;
-
- if (clear) {
- tm_node->last_pkts = stats->n_pkts;
- tm_node->last_bytes = stats->n_bytes;
- }
-
- *stats_mask = RTE_TM_STATS_N_PKTS | RTE_TM_STATS_N_BYTES;
-
- } else if (tm_node->hw_lvl == NIX_TXSCH_LVL_TL1) {
- error->type = RTE_TM_ERROR_TYPE_UNSPECIFIED;
- error->message = "stats read error";
-
- /* RED Drop packets */
- reg = NIX_AF_TL1X_DROPPED_PACKETS(tm_node->hw_id);
- rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
- if (rc)
- goto exit;
- stats->leaf.n_pkts_dropped[RTE_COLOR_RED] =
- val - tm_node->last_pkts;
-
- /* RED Drop bytes */
- reg = NIX_AF_TL1X_DROPPED_BYTES(tm_node->hw_id);
- rc = read_tm_reg(dev->mbox, reg, &val, NIX_TXSCH_LVL_TL1);
- if (rc)
- goto exit;
- stats->leaf.n_bytes_dropped[RTE_COLOR_RED] =
- val - tm_node->last_bytes;
-
- /* Clear stats */
- if (clear) {
- tm_node->last_pkts =
- stats->leaf.n_pkts_dropped[RTE_COLOR_RED];
- tm_node->last_bytes =
- stats->leaf.n_bytes_dropped[RTE_COLOR_RED];
- }
-
- *stats_mask = RTE_TM_STATS_N_PKTS_RED_DROPPED |
- RTE_TM_STATS_N_BYTES_RED_DROPPED;
-
- } else {
- error->type = RTE_TM_ERROR_TYPE_NODE_ID;
- error->message = "unsupported node";
- rc = -EINVAL;
- }
-
-exit:
- return rc;
-}
-
-const struct rte_tm_ops otx2_tm_ops = {
- .node_type_get = otx2_nix_tm_node_type_get,
-
- .capabilities_get = otx2_nix_tm_capa_get,
- .level_capabilities_get = otx2_nix_tm_level_capa_get,
- .node_capabilities_get = otx2_nix_tm_node_capa_get,
-
- .shaper_profile_add = otx2_nix_tm_shaper_profile_add,
- .shaper_profile_delete = otx2_nix_tm_shaper_profile_delete,
-
- .node_add = otx2_nix_tm_node_add,
- .node_delete = otx2_nix_tm_node_delete,
- .node_suspend = otx2_nix_tm_node_suspend,
- .node_resume = otx2_nix_tm_node_resume,
- .hierarchy_commit = otx2_nix_tm_hierarchy_commit,
-
- .node_shaper_update = otx2_nix_tm_node_shaper_update,
- .node_parent_update = otx2_nix_tm_node_parent_update,
- .node_stats_read = otx2_nix_tm_node_stats_read,
-};
-
-static int
-nix_tm_prepare_default_tree(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t def = eth_dev->data->nb_tx_queues;
- struct rte_tm_node_params params;
- uint32_t leaf_parent, i;
- int rc = 0, leaf_level;
-
- /* Default params */
- memset(¶ms, 0, sizeof(params));
- params.shaper_profile_id = RTE_TM_SHAPER_PROFILE_ID_NONE;
-
- if (nix_tm_have_tl1_access(dev)) {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL1,
- OTX2_TM_LVL_ROOT, false, ¶ms);
- if (rc)
- goto exit;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_SCH1, false, ¶ms);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH2, false, ¶ms);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH3, false, ¶ms);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 4, def + 3, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH4, false, ¶ms);
- if (rc)
- goto exit;
-
- leaf_parent = def + 4;
- leaf_level = OTX2_TM_LVL_QUEUE;
- } else {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_ROOT, false, ¶ms);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH1, false, ¶ms);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH2, false, ¶ms);
- if (rc)
- goto exit;
-
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH3, false, ¶ms);
- if (rc)
- goto exit;
-
- leaf_parent = def + 3;
- leaf_level = OTX2_TM_LVL_SCH4;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i, leaf_parent, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- leaf_level, false, ¶ms);
- if (rc)
- break;
- }
-
-exit:
- return rc;
-}
-
-void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- TAILQ_INIT(&dev->node_list);
- TAILQ_INIT(&dev->shaper_profile_list);
- dev->tm_rate_min = 1E9; /* 1Gbps */
-}
-
-int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev)
-{
- struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint16_t sq_cnt = eth_dev->data->nb_tx_queues;
- int rc;
-
- /* Free up all resources already held */
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc) {
- otx2_err("Failed to freeup existing resources,rc=%d", rc);
- return rc;
- }
-
- /* Clear shaper profiles */
- nix_tm_clear_shaper_profiles(dev);
- dev->tm_flags = NIX_TM_DEFAULT_TREE;
-
-	/* Disable TL1 static priority when VFs are enabled,
-	 * as otherwise the VFs' TL2 would need runtime
-	 * reallocation to support a specific PF topology.
-	 */
- if (pci_dev->max_vfs)
- dev->tm_flags |= NIX_TM_TL1_NO_SP;
-
- rc = nix_tm_prepare_default_tree(eth_dev);
- if (rc != 0)
- return rc;
-
- rc = nix_tm_alloc_resources(eth_dev, false);
- if (rc != 0)
- return rc;
- dev->tm_leaf_cnt = sq_cnt;
-
- return 0;
-}
-
-static int
-nix_tm_prepare_rate_limited_tree(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint32_t def = eth_dev->data->nb_tx_queues;
- struct rte_tm_node_params params;
- uint32_t leaf_parent, i, rc = 0;
-
- memset(¶ms, 0, sizeof(params));
-
- if (nix_tm_have_tl1_access(dev)) {
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL1;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL1,
- OTX2_TM_LVL_ROOT, false, ¶ms);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_SCH1, false, ¶ms);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH2, false, ¶ms);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 3, def + 2, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH3, false, ¶ms);
- if (rc)
- goto error;
- leaf_parent = def + 3;
-
- /* Add per queue SMQ nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
- leaf_parent,
- 0, DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH4,
- false, ¶ms);
- if (rc)
- goto error;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i,
- leaf_parent + 1 + i, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- OTX2_TM_LVL_QUEUE,
- false, ¶ms);
- if (rc)
- goto error;
- }
-
- return 0;
- }
-
- dev->otx2_tm_root_lvl = NIX_TXSCH_LVL_TL2;
- rc = nix_tm_node_add_to_list(dev, def, RTE_TM_NODE_ID_NULL, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL2,
- OTX2_TM_LVL_ROOT, false, ¶ms);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 1, def, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL3,
- OTX2_TM_LVL_SCH1, false, ¶ms);
- if (rc)
- goto error;
- rc = nix_tm_node_add_to_list(dev, def + 2, def + 1, 0,
- DEFAULT_RR_WEIGHT, NIX_TXSCH_LVL_TL4,
- OTX2_TM_LVL_SCH2, false, ¶ms);
- if (rc)
- goto error;
- leaf_parent = def + 2;
-
- /* Add per queue SMQ nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, leaf_parent + 1 + i,
- leaf_parent,
- 0, DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_SMQ,
- OTX2_TM_LVL_SCH3,
- false, ¶ms);
- if (rc)
- goto error;
- }
-
- /* Add leaf nodes */
- for (i = 0; i < eth_dev->data->nb_tx_queues; i++) {
- rc = nix_tm_node_add_to_list(dev, i, leaf_parent + 1 + i, 0,
- DEFAULT_RR_WEIGHT,
- NIX_TXSCH_LVL_CNT,
- OTX2_TM_LVL_SCH4,
- false, ¶ms);
- if (rc)
- break;
- }
-error:
- return rc;
-}
-
-static int
-otx2_nix_tm_rate_limit_mdq(struct rte_eth_dev *eth_dev,
- struct otx2_nix_tm_node *tm_node,
- uint64_t tx_rate)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_nix_tm_shaper_profile profile;
- struct otx2_mbox *mbox = dev->mbox;
- volatile uint64_t *reg, *regval;
- struct nix_txschq_config *req;
- uint16_t flags;
- uint8_t k = 0;
- int rc;
-
- flags = tm_node->flags;
-
- req = otx2_mbox_alloc_msg_nix_txschq_cfg(mbox);
- req->lvl = NIX_TXSCH_LVL_MDQ;
- reg = req->reg;
- regval = req->regval;
-
- if (tx_rate == 0) {
- k += prepare_tm_sw_xoff(tm_node, true, ®[k], ®val[k]);
- flags &= ~NIX_TM_NODE_ENABLED;
- goto exit;
- }
-
- if (!(flags & NIX_TM_NODE_ENABLED)) {
- k += prepare_tm_sw_xoff(tm_node, false, ®[k], ®val[k]);
- flags |= NIX_TM_NODE_ENABLED;
- }
-
- /* Use only PIR for rate limit */
- memset(&profile, 0, sizeof(profile));
- profile.params.peak.rate = tx_rate;
-	/* Minimum burst of ~4 us worth of Tx bytes */
- profile.params.peak.size = RTE_MAX(NIX_MAX_HW_FRS,
- (4ull * tx_rate) / (1E6 * 8));
- if (!dev->tm_rate_min || dev->tm_rate_min > tx_rate)
- dev->tm_rate_min = tx_rate;
-
- k += prepare_tm_shaper_reg(tm_node, &profile, ®[k], ®val[k]);
-exit:
- req->num_regs = k;
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- tm_node->flags = flags;
- return 0;
-}
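
The burst sizing above keeps the PIR bucket at least one maximum-sized frame deep and at least roughly 4 us deep at the requested rate. A minimal standalone sketch of that arithmetic; EXAMPLE_MAX_FRS is a stand-in for NIX_MAX_HW_FRS and is illustrative only:

    #include <stdint.h>
    #include <stdio.h>

    #define EXAMPLE_MAX_FRS 9212 /* stand-in for NIX_MAX_HW_FRS, illustrative */

    static uint64_t burst_bytes(uint64_t tx_rate_bps)
    {
        /* ~4 us of traffic at tx_rate, converted from bits to bytes */
        uint64_t four_us = (4ULL * tx_rate_bps) / (1000000 * 8);

        return four_us > EXAMPLE_MAX_FRS ? four_us : EXAMPLE_MAX_FRS;
    }

    int main(void)
    {
        /* At 10 Gbps the frame-size floor wins (5000 < 9212);
         * at 100 Gbps the 4 us term dominates (50000 bytes).
         */
        printf("%llu %llu\n",
               (unsigned long long)burst_bytes(10000000000ULL),
               (unsigned long long)burst_bytes(100000000000ULL));
        return 0;
    }
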
-
-int
-otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
- uint16_t queue_idx, uint16_t tx_rate_mbps)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t tx_rate = tx_rate_mbps * (uint64_t)1E6;
- struct otx2_nix_tm_node *tm_node;
- int rc;
-
- /* Check for supported revisions */
- if (otx2_dev_is_95xx_Ax(dev) ||
- otx2_dev_is_96xx_Ax(dev))
- return -EINVAL;
-
- if (queue_idx >= eth_dev->data->nb_tx_queues)
- return -EINVAL;
-
- if (!(dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
- !(dev->tm_flags & NIX_TM_RATE_LIMIT_TREE))
- goto error;
-
- if ((dev->tm_flags & NIX_TM_DEFAULT_TREE) &&
- eth_dev->data->nb_tx_queues > 1) {
-		/* For a TM topology change, the ethdev needs to be stopped */
- if (eth_dev->data->dev_started)
- return -EBUSY;
-
-		/*
-		 * Disable transmit; it will be re-enabled when
-		 * the new topology is in place.
-		 */
- rc = nix_xmit_disable(eth_dev);
- if (rc) {
- otx2_err("failed to disable TX, rc=%d", rc);
- return -EIO;
- }
-
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc < 0) {
- otx2_tm_dbg("failed to free default resources, rc %d",
- rc);
- return -EIO;
- }
-
- rc = nix_tm_prepare_rate_limited_tree(eth_dev);
- if (rc < 0) {
- otx2_tm_dbg("failed to prepare tm tree, rc=%d", rc);
- return rc;
- }
-
- rc = nix_tm_alloc_resources(eth_dev, true);
- if (rc != 0) {
- otx2_tm_dbg("failed to allocate tm tree, rc=%d", rc);
- return rc;
- }
-
- dev->tm_flags &= ~NIX_TM_DEFAULT_TREE;
- dev->tm_flags |= NIX_TM_RATE_LIMIT_TREE;
- }
-
- tm_node = nix_tm_node_search(dev, queue_idx, false);
-
-	/* Check if we found a valid leaf node */
- if (!tm_node ||
- !nix_tm_is_leaf(dev, tm_node->lvl) ||
- !tm_node->parent ||
- tm_node->parent->hw_id == UINT32_MAX)
- return -EIO;
-
- return otx2_nix_tm_rate_limit_mdq(eth_dev, tm_node->parent, tx_rate);
-error:
- otx2_tm_dbg("Unsupported TM tree 0x%0x", dev->tm_flags);
- return -EINVAL;
-}
-
-int
-otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *arg)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- if (!arg)
- return -EINVAL;
-
- /* Check for supported revisions */
- if (otx2_dev_is_95xx_Ax(dev) ||
- otx2_dev_is_96xx_Ax(dev))
- return -EINVAL;
-
- *(const void **)arg = &otx2_tm_ops;
-
- return 0;
-}
-
-int
-otx2_nix_tm_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc;
-
- /* Xmit is assumed to be disabled */
- /* Free up resources already held */
- rc = nix_tm_free_resources(dev, 0, 0, false);
- if (rc) {
- otx2_err("Failed to freeup existing resources,rc=%d", rc);
- return rc;
- }
-
- /* Clear shaper profiles */
- nix_tm_clear_shaper_profiles(dev);
-
- dev->tm_flags = 0;
- return 0;
-}
-
-int
-otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
- uint32_t *rr_quantum, uint16_t *smq)
-{
- struct otx2_nix_tm_node *tm_node;
- int rc;
-
- /* 0..sq_cnt-1 are leaf nodes */
- if (sq >= dev->tm_leaf_cnt)
- return -EINVAL;
-
- /* Search for internal node first */
- tm_node = nix_tm_node_search(dev, sq, false);
- if (!tm_node)
- tm_node = nix_tm_node_search(dev, sq, true);
-
- /* Check if we found a valid leaf node */
- if (!tm_node || !nix_tm_is_leaf(dev, tm_node->lvl) ||
- !tm_node->parent || tm_node->parent->hw_id == UINT32_MAX) {
- return -EIO;
- }
-
- /* Get SMQ Id of leaf node's parent */
- *smq = tm_node->parent->hw_id;
- *rr_quantum = NIX_TM_WEIGHT_TO_RR_QUANTUM(tm_node->weight);
-
- rc = nix_smq_xoff(dev, tm_node->parent, false);
- if (rc)
- return rc;
- tm_node->flags |= NIX_TM_NODE_ENABLED;
-
- return 0;
-}
diff --git a/drivers/net/octeontx2/otx2_tm.h b/drivers/net/octeontx2/otx2_tm.h
deleted file mode 100644
index db44d4891f..0000000000
--- a/drivers/net/octeontx2/otx2_tm.h
+++ /dev/null
@@ -1,176 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TM_H__
-#define __OTX2_TM_H__
-
-#include <stdbool.h>
-
-#include <rte_tm_driver.h>
-
-#define NIX_TM_DEFAULT_TREE BIT_ULL(0)
-#define NIX_TM_COMMITTED BIT_ULL(1)
-#define NIX_TM_RATE_LIMIT_TREE BIT_ULL(2)
-#define NIX_TM_TL1_NO_SP BIT_ULL(3)
-
-struct otx2_eth_dev;
-
-void otx2_nix_tm_conf_init(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_init_default(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_fini(struct rte_eth_dev *eth_dev);
-int otx2_nix_tm_ops_get(struct rte_eth_dev *eth_dev, void *ops);
-int otx2_nix_tm_get_leaf_data(struct otx2_eth_dev *dev, uint16_t sq,
- uint32_t *rr_quantum, uint16_t *smq);
-int otx2_nix_tm_set_queue_rate_limit(struct rte_eth_dev *eth_dev,
- uint16_t queue_idx, uint16_t tx_rate);
-int otx2_nix_sq_flush_pre(void *_txq, bool dev_started);
-int otx2_nix_sq_flush_post(void *_txq);
-int otx2_nix_sq_enable(void *_txq);
-int otx2_nix_get_link(struct otx2_eth_dev *dev);
-int otx2_nix_sq_sqb_aura_fc(void *_txq, bool enable);
-
-struct otx2_nix_tm_node {
- TAILQ_ENTRY(otx2_nix_tm_node) node;
- uint32_t id;
- uint32_t hw_id;
- uint32_t priority;
- uint32_t weight;
- uint16_t lvl;
- uint16_t hw_lvl;
- uint32_t rr_prio;
- uint32_t rr_num;
- uint32_t max_prio;
- uint32_t parent_hw_id;
- uint32_t flags:16;
-#define NIX_TM_NODE_HWRES BIT_ULL(0)
-#define NIX_TM_NODE_ENABLED BIT_ULL(1)
-#define NIX_TM_NODE_USER BIT_ULL(2)
-#define NIX_TM_NODE_RED_DISCARD BIT_ULL(3)
- /* Shaper algorithm for RED state @NIX_REDALG_E */
- uint32_t red_algo:2;
- uint32_t pkt_mode:1;
-
- struct otx2_nix_tm_node *parent;
- struct rte_tm_node_params params;
-
- /* Last stats */
- uint64_t last_pkts;
- uint64_t last_bytes;
-};
-
-struct otx2_nix_tm_shaper_profile {
- TAILQ_ENTRY(otx2_nix_tm_shaper_profile) shaper;
- uint32_t shaper_profile_id;
- uint32_t reference_count;
- struct rte_tm_shaper_params params; /* Rate in bits/sec */
-};
-
-struct shaper_params {
- uint64_t burst_exponent;
- uint64_t burst_mantissa;
- uint64_t div_exp;
- uint64_t exponent;
- uint64_t mantissa;
- uint64_t burst;
- uint64_t rate;
-};
-
-TAILQ_HEAD(otx2_nix_tm_node_list, otx2_nix_tm_node);
-TAILQ_HEAD(otx2_nix_tm_shaper_profile_list, otx2_nix_tm_shaper_profile);
-
-#define MAX_SCHED_WEIGHT ((uint8_t)~0)
-#define NIX_TM_RR_QUANTUM_MAX (BIT_ULL(24) - 1)
-#define NIX_TM_WEIGHT_TO_RR_QUANTUM(__weight) \
- ((((__weight) & MAX_SCHED_WEIGHT) * \
- NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)
-
-/* DEFAULT_RR_WEIGHT * NIX_TM_RR_QUANTUM_MAX / MAX_SCHED_WEIGHT = NIX_MAX_HW_MTU */
-#define DEFAULT_RR_WEIGHT 71
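
A quick self-contained check of the weight-to-quantum mapping above; the macros are copied from this header (with BIT_ULL spelled out as a shift) and only the prints are new:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_SCHED_WEIGHT ((uint8_t)~0)           /* 255 */
    #define NIX_TM_RR_QUANTUM_MAX ((1ULL << 24) - 1) /* 16777215 */
    #define NIX_TM_WEIGHT_TO_RR_QUANTUM(w) \
        ((((w) & MAX_SCHED_WEIGHT) * NIX_TM_RR_QUANTUM_MAX) / MAX_SCHED_WEIGHT)

    int main(void)
    {
        /* The default weight of 71 maps linearly to 71/255 of the
         * 24-bit hardware quantum range.
         */
        printf("%llu\n", (unsigned long long)NIX_TM_WEIGHT_TO_RR_QUANTUM(71));  /* 4671303 */
        printf("%llu\n", (unsigned long long)NIX_TM_WEIGHT_TO_RR_QUANTUM(255)); /* 16777215 */
        return 0;
    }
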
-
-/** NIX rate limits */
-#define MAX_RATE_DIV_EXP 12
-#define MAX_RATE_EXPONENT 0xf
-#define MAX_RATE_MANTISSA 0xff
-
-#define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
-
-/* NIX rate calculation in Bits/Sec
- * PIR_ADD = ((256 + NIX_*_PIR[RATE_MANTISSA])
- * << NIX_*_PIR[RATE_EXPONENT]) / 256
- * PIR = (2E6 * PIR_ADD / (1 << NIX_*_PIR[RATE_DIVIDER_EXPONENT]))
- *
- * CIR_ADD = ((256 + NIX_*_CIR[RATE_MANTISSA])
- * << NIX_*_CIR[RATE_EXPONENT]) / 256
- * CIR = (2E6 * CIR_ADD / (CCLK_TICKS << NIX_*_CIR[RATE_DIVIDER_EXPONENT]))
- */
-#define SHAPER_RATE(exponent, mantissa, div_exp) \
- ((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent)))\
- / (((1ull << (div_exp)) * 256)))
-
-/* 96xx rate limits in Bits/Sec */
-#define MIN_SHAPER_RATE \
- SHAPER_RATE(0, 0, MAX_RATE_DIV_EXP)
-
-#define MAX_SHAPER_RATE \
- SHAPER_RATE(MAX_RATE_EXPONENT, MAX_RATE_MANTISSA, 0)
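
As a sanity check on the rate formula above, this standalone sketch (macros copied from this header; only main() is new) evaluates the two 96xx limits:

    #include <stdint.h>
    #include <stdio.h>

    #define NIX_SHAPER_RATE_CONST ((uint64_t)2E6)
    #define SHAPER_RATE(exponent, mantissa, div_exp) \
        ((NIX_SHAPER_RATE_CONST * ((256 + (mantissa)) << (exponent))) \
        / (((1ull << (div_exp)) * 256)))

    int main(void)
    {
        /* Min: all-zero rate fields, max divider -> ~488 bits/sec */
        printf("min = %llu bps\n", (unsigned long long)SHAPER_RATE(0, 0, 12));
        /* Max: saturated exponent/mantissa, no divider -> ~130.8 Gbps */
        printf("max = %llu bps\n", (unsigned long long)SHAPER_RATE(15, 255, 0));
        return 0;
    }
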
-
-/* Min is limited so that NIX_AF_SMQX_CFG[MINLEN]+ADJUST is not negative */
-#define NIX_LENGTH_ADJUST_MIN ((int)-NIX_MIN_HW_FRS + 1)
-#define NIX_LENGTH_ADJUST_MAX 255
-
-/** TM Shaper - low level operations */
-
-/** NIX burst limits */
-#define MAX_BURST_EXPONENT 0xf
-#define MAX_BURST_MANTISSA 0xff
-
-/* NIX burst calculation
- * PIR_BURST = ((256 + NIX_*_PIR[BURST_MANTISSA])
- * << (NIX_*_PIR[BURST_EXPONENT] + 1))
- * / 256
- *
- * CIR_BURST = ((256 + NIX_*_CIR[BURST_MANTISSA])
- * << (NIX_*_CIR[BURST_EXPONENT] + 1))
- * / 256
- */
-#define SHAPER_BURST(exponent, mantissa) \
- (((256 + (mantissa)) << ((exponent) + 1)) / 256)
-
-/** Shaper burst limits */
-#define MIN_SHAPER_BURST \
- SHAPER_BURST(0, 0)
-
-#define MAX_SHAPER_BURST \
- SHAPER_BURST(MAX_BURST_EXPONENT,\
- MAX_BURST_MANTISSA)
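
The burst limits follow the same pattern; plugging the extremes into SHAPER_BURST() gives the byte range directly (macro copied from this header):

    #include <stdio.h>

    #define SHAPER_BURST(exponent, mantissa) \
        (((256 + (mantissa)) << ((exponent) + 1)) / 256)

    int main(void)
    {
        printf("min burst = %d bytes\n", SHAPER_BURST(0, 0));      /* 2 */
        printf("max burst = %d bytes\n", SHAPER_BURST(0xf, 0xff)); /* 130816 */
        return 0;
    }
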
-
-/* Default TL1 priority and Quantum from AF */
-#define TXSCH_TL1_DFLT_RR_QTM ((1 << 24) - 1)
-#define TXSCH_TL1_DFLT_RR_PRIO 1
-
-#define TXSCH_TLX_SP_PRIO_MAX 10
-
-static inline const char *
-nix_hwlvl2str(uint32_t hw_lvl)
-{
- switch (hw_lvl) {
- case NIX_TXSCH_LVL_MDQ:
- return "SMQ/MDQ";
- case NIX_TXSCH_LVL_TL4:
- return "TL4";
- case NIX_TXSCH_LVL_TL3:
- return "TL3";
- case NIX_TXSCH_LVL_TL2:
- return "TL2";
- case NIX_TXSCH_LVL_TL1:
- return "TL1";
- default:
- break;
- }
-
- return "???";
-}
-
-#endif /* __OTX2_TM_H__ */
diff --git a/drivers/net/octeontx2/otx2_tx.c b/drivers/net/octeontx2/otx2_tx.c
deleted file mode 100644
index e95184632f..0000000000
--- a/drivers/net/octeontx2/otx2_tx.c
+++ /dev/null
@@ -1,1077 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_vect.h>
-
-#include "otx2_ethdev.h"
-
-#define NIX_XMIT_FC_OR_RETURN(txq, pkts) do { \
-	/* Cached value is low; update fc_cache_pkts */		\
- if (unlikely((txq)->fc_cache_pkts < (pkts))) { \
-		/* Multiply by SQEs per SQB to express in packets */	\
- (txq)->fc_cache_pkts = \
- ((txq)->nb_sqb_bufs_adj - *(txq)->fc_mem) << \
- (txq)->sqes_per_sqb_log2; \
-		/* Check again whether there is room */			\
- if (unlikely((txq)->fc_cache_pkts < (pkts))) \
- return 0; \
- } \
-} while (0)
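
In plain-function form, the credit refresh that the macro performs looks roughly like the sketch below; the struct is a stand-in that mirrors only the otx2_eth_txq fields the macro touches, not the real layout:

    #include <stdint.h>
    #include <stdbool.h>

    struct fc_sketch {
        uint64_t fc_cache_pkts;   /* cached TX credit, in packets */
        uint64_t nb_sqb_bufs_adj; /* SQB buffers available to SW */
        uint64_t *fc_mem;         /* SQBs consumed, updated by HW */
        uint16_t sqes_per_sqb_log2;
    };

    /* Returns true when there is room for 'pkts' packets. */
    static bool fc_check(struct fc_sketch *txq, uint16_t pkts)
    {
        if (txq->fc_cache_pkts < pkts) {
            /* Convert free SQBs to packets and re-check once. */
            txq->fc_cache_pkts = (txq->nb_sqb_bufs_adj - *txq->fc_mem)
                                 << txq->sqes_per_sqb_log2;
            if (txq->fc_cache_pkts < pkts)
                return false;
        }
        return true;
    }

    int main(void)
    {
        uint64_t hw_consumed = 10;
        struct fc_sketch txq = {
            .fc_cache_pkts = 0,
            .nb_sqb_bufs_adj = 100,
            .fc_mem = &hw_consumed,
            .sqes_per_sqb_log2 = 5, /* 32 SQEs per SQB */
        };

        /* (100 - 10) << 5 = 2880 packets of credit after refresh */
        return fc_check(&txq, 64) ? 0 : 1;
    }
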
-
-
-static __rte_always_inline uint16_t
-nix_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- struct otx2_eth_txq *txq = tx_queue; uint16_t i;
- const rte_iova_t io_addr = txq->io_addr;
- void *lmt_addr = txq->lmt_addr;
- uint64_t lso_tun_fmt;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
-
- /* Perform header writes before barrier for TSO */
- if (flags & NIX_TX_OFFLOAD_TSO_F) {
- lso_tun_fmt = txq->lso_tun_fmt;
- for (i = 0; i < pkts; i++)
- otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
- }
-
-	/* Commit any packet changes here, as no further changes
-	 * to the packet will be made unless fast free is disabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- for (i = 0; i < pkts; i++) {
- otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
-		/* Passing number of segdw as 4: HDR + EXT + SG + SMEM */
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- tx_pkts[i]->ol_flags, 4, flags);
- otx2_nix_xmit_one(cmd, lmt_addr, io_addr, flags);
- }
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- return pkts;
-}
-
-static __rte_always_inline uint16_t
-nix_xmit_pkts_mseg(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- struct otx2_eth_txq *txq = tx_queue; uint64_t i;
- const rte_iova_t io_addr = txq->io_addr;
- void *lmt_addr = txq->lmt_addr;
- uint64_t lso_tun_fmt;
- uint16_t segdw;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- otx2_lmt_mov(cmd, &txq->cmd[0], otx2_nix_tx_ext_subs(flags));
-
- /* Perform header writes before barrier for TSO */
- if (flags & NIX_TX_OFFLOAD_TSO_F) {
- lso_tun_fmt = txq->lso_tun_fmt;
- for (i = 0; i < pkts; i++)
- otx2_nix_xmit_prepare_tso(tx_pkts[i], flags);
- }
-
-	/* Commit any packet changes here, as no further changes
-	 * to the packet will be made unless fast free is disabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- for (i = 0; i < pkts; i++) {
- otx2_nix_xmit_prepare(tx_pkts[i], cmd, flags, lso_tun_fmt);
- segdw = otx2_nix_prepare_mseg(tx_pkts[i], cmd, flags);
- otx2_nix_xmit_prepare_tstamp(cmd, &txq->cmd[0],
- tx_pkts[i]->ol_flags, segdw,
- flags);
- otx2_nix_xmit_mseg_one(cmd, lmt_addr, io_addr, segdw);
- }
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
- return pkts;
-}
-
-#if defined(RTE_ARCH_ARM64)
-
-#define NIX_DESCS_PER_LOOP 4
-static __rte_always_inline uint16_t
-nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- uint64x2_t dataoff_iova0, dataoff_iova1, dataoff_iova2, dataoff_iova3;
- uint64x2_t len_olflags0, len_olflags1, len_olflags2, len_olflags3;
- uint64_t *mbuf0, *mbuf1, *mbuf2, *mbuf3;
- uint64x2_t senddesc01_w0, senddesc23_w0;
- uint64x2_t senddesc01_w1, senddesc23_w1;
- uint64x2_t sgdesc01_w0, sgdesc23_w0;
- uint64x2_t sgdesc01_w1, sgdesc23_w1;
- struct otx2_eth_txq *txq = tx_queue;
- uint64_t *lmt_addr = txq->lmt_addr;
- rte_iova_t io_addr = txq->io_addr;
- uint64x2_t ltypes01, ltypes23;
- uint64x2_t xtmp128, ytmp128;
- uint64x2_t xmask01, xmask23;
- uint64x2_t cmd00, cmd01;
- uint64x2_t cmd10, cmd11;
- uint64x2_t cmd20, cmd21;
- uint64x2_t cmd30, cmd31;
- uint64_t lmt_status, i;
- uint16_t pkts_left;
-
- NIX_XMIT_FC_OR_RETURN(txq, pkts);
-
- pkts_left = pkts & (NIX_DESCS_PER_LOOP - 1);
- pkts = RTE_ALIGN_FLOOR(pkts, NIX_DESCS_PER_LOOP);
-
- /* Reduce the cached count */
- txq->fc_cache_pkts -= pkts;
-
-	/* Commit any packet changes here, as no further changes
-	 * to the packet will be made unless fast free is disabled.
- */
- if (!(flags & NIX_TX_OFFLOAD_MBUF_NOFF_F))
- rte_io_wmb();
-
- senddesc01_w0 = vld1q_dup_u64(&txq->cmd[0]);
- senddesc23_w0 = senddesc01_w0;
- senddesc01_w1 = vdupq_n_u64(0);
- senddesc23_w1 = senddesc01_w1;
- sgdesc01_w0 = vld1q_dup_u64(&txq->cmd[2]);
- sgdesc23_w0 = sgdesc01_w0;
-
- for (i = 0; i < pkts; i += NIX_DESCS_PER_LOOP) {
- /* Clear lower 32bit of SEND_HDR_W0 and SEND_SG_W0 */
- senddesc01_w0 = vbicq_u64(senddesc01_w0,
- vdupq_n_u64(0xFFFFFFFF));
- sgdesc01_w0 = vbicq_u64(sgdesc01_w0,
- vdupq_n_u64(0xFFFFFFFF));
-
- senddesc23_w0 = senddesc01_w0;
- sgdesc23_w0 = sgdesc01_w0;
-
-		/* Cast mbuf pointers and advance to the buf_iova field */
- mbuf0 = (uint64_t *)tx_pkts[0];
- mbuf1 = (uint64_t *)tx_pkts[1];
- mbuf2 = (uint64_t *)tx_pkts[2];
- mbuf3 = (uint64_t *)tx_pkts[3];
-
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mbuf, buf_iova));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mbuf, buf_iova));
- /*
-		 * Get mbufs' ol_flags, iova, pkt_len and data_off:
- * dataoff_iovaX.D[0] = iova,
- * dataoff_iovaX.D[1](15:0) = mbuf->dataoff
- * len_olflagsX.D[0] = ol_flags,
- * len_olflagsX.D[1](63:32) = mbuf->pkt_len
- */
- dataoff_iova0 = vld1q_u64(mbuf0);
- len_olflags0 = vld1q_u64(mbuf0 + 2);
- dataoff_iova1 = vld1q_u64(mbuf1);
- len_olflags1 = vld1q_u64(mbuf1 + 2);
- dataoff_iova2 = vld1q_u64(mbuf2);
- len_olflags2 = vld1q_u64(mbuf2 + 2);
- dataoff_iova3 = vld1q_u64(mbuf3);
- len_olflags3 = vld1q_u64(mbuf3 + 2);
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- struct rte_mbuf *mbuf;
- /* Set don't free bit if reference count > 1 */
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
- offsetof(struct rte_mbuf, buf_iova));
-
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask01, 0);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask01, 1);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask23, 0);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
- offsetof(struct rte_mbuf, buf_iova));
- if (otx2_nix_prefree_seg(mbuf))
- vsetq_lane_u64(0x80000, xmask23, 1);
- else
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool,
- (void **)&mbuf,
- 1, 0);
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
-			/* Ensure mbuf fields updated in otx2_nix_prefree_seg
-			 * are written before the LMTST.
- */
- rte_io_wmb();
- } else {
- struct rte_mbuf *mbuf;
- /* Mark mempool object as "put" since
- * it is freed by NIX
- */
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf0 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf1 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf2 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
-
- mbuf = (struct rte_mbuf *)((uintptr_t)mbuf3 -
- offsetof(struct rte_mbuf, buf_iova));
- RTE_MEMPOOL_CHECK_COOKIES(mbuf->pool, (void **)&mbuf,
- 1, 0);
- RTE_SET_USED(mbuf);
- }
-
-		/* Advance mbuf pointers to the pool field */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mbuf, pool) -
- offsetof(struct rte_mbuf, buf_iova));
-
- if (flags &
- (NIX_TX_OFFLOAD_OL3_OL4_CSUM_F |
- NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
- /* Get tx_offload for ol2, ol3, l2, l3 lengths */
- /*
- * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
- * E(8):OL2_LEN(7):OL3_LEN(9):E(24):L3_LEN(9):L2_LEN(7)
- */
-
- asm volatile ("LD1 {%[a].D}[0],[%[in]]\n\t" :
- [a]"+w"(senddesc01_w1) :
- [in]"r"(mbuf0 + 2) : "memory");
-
- asm volatile ("LD1 {%[a].D}[1],[%[in]]\n\t" :
- [a]"+w"(senddesc01_w1) :
- [in]"r"(mbuf1 + 2) : "memory");
-
- asm volatile ("LD1 {%[b].D}[0],[%[in]]\n\t" :
- [b]"+w"(senddesc23_w1) :
- [in]"r"(mbuf2 + 2) : "memory");
-
- asm volatile ("LD1 {%[b].D}[1],[%[in]]\n\t" :
- [b]"+w"(senddesc23_w1) :
- [in]"r"(mbuf3 + 2) : "memory");
-
- /* Get pool pointer alone */
- mbuf0 = (uint64_t *)*mbuf0;
- mbuf1 = (uint64_t *)*mbuf1;
- mbuf2 = (uint64_t *)*mbuf2;
- mbuf3 = (uint64_t *)*mbuf3;
- } else {
- /* Get pool pointer alone */
- mbuf0 = (uint64_t *)*mbuf0;
- mbuf1 = (uint64_t *)*mbuf1;
- mbuf2 = (uint64_t *)*mbuf2;
- mbuf3 = (uint64_t *)*mbuf3;
- }
-
- const uint8x16_t shuf_mask2 = {
- 0x4, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xc, 0xd, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- xtmp128 = vzip2q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip2q_u64(len_olflags2, len_olflags3);
-
- /* Clear dataoff_iovaX.D[1] bits other than dataoff(15:0) */
- const uint64x2_t and_mask0 = {
- 0xFFFFFFFFFFFFFFFF,
- 0x000000000000FFFF,
- };
-
- dataoff_iova0 = vandq_u64(dataoff_iova0, and_mask0);
- dataoff_iova1 = vandq_u64(dataoff_iova1, and_mask0);
- dataoff_iova2 = vandq_u64(dataoff_iova2, and_mask0);
- dataoff_iova3 = vandq_u64(dataoff_iova3, and_mask0);
-
- /*
-		 * Pick only 16 bits of pktlen present at bits 63:32
- * and place them at bits 15:0.
- */
- xtmp128 = vqtbl1q_u8(xtmp128, shuf_mask2);
- ytmp128 = vqtbl1q_u8(ytmp128, shuf_mask2);
-
- /* Add pairwise to get dataoff + iova in sgdesc_w1 */
- sgdesc01_w1 = vpaddq_u64(dataoff_iova0, dataoff_iova1);
- sgdesc23_w1 = vpaddq_u64(dataoff_iova2, dataoff_iova3);
-
-		/* OR both sgdesc_w0 and senddesc_w0 with the 16 pktlen
-		 * bits at position 15:0.
- */
- sgdesc01_w0 = vorrq_u64(sgdesc01_w0, xtmp128);
- sgdesc23_w0 = vorrq_u64(sgdesc23_w0, ytmp128);
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xtmp128);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, ytmp128);
-
- if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- !(flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /*
- * Lookup table to translate ol_flags to
- * il3/il4 types. But we still use ol3/ol4 types in
- * senddesc_w1 as only one header processing is enabled.
- */
- const uint8x16_t tbl = {
- /* [0-15] = il4type:il3type */
- 0x04, /* none (IPv6 assumed) */
- 0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6 assumed) */
- 0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6 assumed) */
- 0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6 assumed) */
- 0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM */
- 0x23, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_SCTP_CKSUM */
- 0x33, /* RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_UDP_CKSUM */
- 0x02, /* RTE_MBUF_F_TX_IPV4 */
- 0x12, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_TCP_CKSUM */
- 0x22, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_SCTP_CKSUM */
- 0x32, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_UDP_CKSUM */
- 0x03, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- };
-
- /* Extract olflags to translate to iltypes */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(47):L3_LEN(9):L2_LEN(7+z)
- * E(47):L3_LEN(9):L2_LEN(7+z)
- */
- senddesc01_w1 = vshlq_n_u64(senddesc01_w1, 1);
- senddesc23_w1 = vshlq_n_u64(senddesc23_w1, 1);
-
-			/* Move OLFLAGS bits 55:52 to 51:48, with zeros
-			 * prepended on the byte; the rest are don't-care.
- */
- xtmp128 = vshrq_n_u8(xtmp128, 4);
- ytmp128 = vshrq_n_u8(ytmp128, 4);
- /*
- * E(48):L3_LEN(8):L2_LEN(z+7)
- * E(48):L3_LEN(8):L2_LEN(z+7)
- */
- const int8x16_t tshft3 = {
- -1, 0, 8, 8, 8, 8, 8, 8,
- -1, 0, 8, 8, 8, 8, 8, 8,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Do the lookup */
- ltypes01 = vqtbl1q_u8(tbl, xtmp128);
- ltypes23 = vqtbl1q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
-			/* Pick only the relevant fields, i.e. bits 48:55 of
-			 * iltype, and place them in ol3/ol4type of senddesc_w1
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x6, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xE, 0xFF, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
- * a [E(32):E(16):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E(32):E(16):(OL3+OL2):OL2]
- * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u16(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u16(senddesc23_w1, 8));
-
- /* Create first half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
-
- } else if (!(flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /*
- * Lookup table to translate ol_flags to
- * ol3/ol4 types.
- */
-
- const uint8x16_t tbl = {
- /* [0-15] = ol4type:ol3type */
- 0x00, /* none */
- 0x03, /* OUTER_IP_CKSUM */
- 0x02, /* OUTER_IPV4 */
- 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
- 0x04, /* OUTER_IPV6 */
- 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM */
- 0x33, /* OUTER_UDP_CKSUM | OUTER_IP_CKSUM */
- 0x32, /* OUTER_UDP_CKSUM | OUTER_IPV4 */
- 0x33, /* OUTER_UDP_CKSUM | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x34, /* OUTER_UDP_CKSUM | OUTER_IPV6 */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- };
-
- /* Extract olflags to translate to iltypes */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(47):OL3_LEN(9):OL2_LEN(7+z)
- * E(47):OL3_LEN(9):OL2_LEN(7+z)
- */
- const uint8x16_t shuf_mask5 = {
- 0x6, 0x5, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xE, 0xD, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
- senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
-
- /* Extract outer ol flags only */
- const uint64x2_t o_cksum_mask = {
- 0x1C00020000000000,
- 0x1C00020000000000,
- };
-
- xtmp128 = vandq_u64(xtmp128, o_cksum_mask);
- ytmp128 = vandq_u64(ytmp128, o_cksum_mask);
-
- /* Extract OUTER_UDP_CKSUM bit 41 and
- * move it to bit 61
- */
-
- xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
- ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
-
- /* Shift oltype by 2 to start nibble from BIT(56)
- * instead of BIT(58)
- */
- xtmp128 = vshrq_n_u8(xtmp128, 2);
- ytmp128 = vshrq_n_u8(ytmp128, 2);
- /*
- * E(48):L3_LEN(8):L2_LEN(z+7)
- * E(48):L3_LEN(8):L2_LEN(z+7)
- */
- const int8x16_t tshft3 = {
- -1, 0, 8, 8, 8, 8, 8, 8,
- -1, 0, 8, 8, 8, 8, 8, 8,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Do the lookup */
- ltypes01 = vqtbl1q_u8(tbl, xtmp128);
- ltypes23 = vqtbl1q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
-			/* Pick only the relevant fields, i.e. bits 56:63 of
-			 * oltype, and place them in ol3/ol4type of senddesc_w1
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0xFF, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xFF, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare ol4ptr, ol3ptr from ol3len, ol2len.
- * a [E(32):E(16):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E(32):E(16):(OL3+OL2):OL2]
- * => E(32):E(16)::OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u16(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u16(senddesc23_w1, 8));
-
- /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
-
- } else if ((flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)) {
- /* Lookup table to translate ol_flags to
- * ol4type, ol3type, il4type, il3type of senddesc_w1
- */
- const uint8x16x2_t tbl = {
- {
- {
- /* [0-15] = il4type:il3type */
- 0x04, /* none (IPv6) */
- 0x14, /* RTE_MBUF_F_TX_TCP_CKSUM (IPv6) */
- 0x24, /* RTE_MBUF_F_TX_SCTP_CKSUM (IPv6) */
- 0x34, /* RTE_MBUF_F_TX_UDP_CKSUM (IPv6) */
- 0x03, /* RTE_MBUF_F_TX_IP_CKSUM */
- 0x13, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- 0x02, /* RTE_MBUF_F_TX_IPV4 */
- 0x12, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x22, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x32, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- 0x03, /* RTE_MBUF_F_TX_IPV4 |
- * RTE_MBUF_F_TX_IP_CKSUM
- */
- 0x13, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_TCP_CKSUM
- */
- 0x23, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_SCTP_CKSUM
- */
- 0x33, /* RTE_MBUF_F_TX_IPV4 | RTE_MBUF_F_TX_IP_CKSUM |
- * RTE_MBUF_F_TX_UDP_CKSUM
- */
- },
-
- {
- /* [16-31] = ol4type:ol3type */
- 0x00, /* none */
- 0x03, /* OUTER_IP_CKSUM */
- 0x02, /* OUTER_IPV4 */
- 0x03, /* OUTER_IPV4 | OUTER_IP_CKSUM */
- 0x04, /* OUTER_IPV6 */
- 0x00, /* OUTER_IPV6 | OUTER_IP_CKSUM */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 */
- 0x00, /* OUTER_IPV6 | OUTER_IPV4 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM */
- 0x33, /* OUTER_UDP_CKSUM |
- * OUTER_IP_CKSUM
- */
- 0x32, /* OUTER_UDP_CKSUM |
- * OUTER_IPV4
- */
- 0x33, /* OUTER_UDP_CKSUM |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- 0x34, /* OUTER_UDP_CKSUM |
- * OUTER_IPV6
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IP_CKSUM
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4
- */
- 0x00, /* OUTER_UDP_CKSUM | OUTER_IPV6 |
- * OUTER_IPV4 | OUTER_IP_CKSUM
- */
- },
- }
- };
-
- /* Extract olflags to translate to oltype & iltype */
- xtmp128 = vzip1q_u64(len_olflags0, len_olflags1);
- ytmp128 = vzip1q_u64(len_olflags2, len_olflags3);
-
- /*
- * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
- * E(8):OL2_LN(7):OL3_LN(9):E(23):L3_LN(9):L2_LN(7+z)
- */
- const uint32x4_t tshft_4 = {
- 1, 0,
- 1, 0,
- };
- senddesc01_w1 = vshlq_u32(senddesc01_w1, tshft_4);
- senddesc23_w1 = vshlq_u32(senddesc23_w1, tshft_4);
-
- /*
- * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
- * E(32):L3_LEN(8):L2_LEN(7+Z):OL3_LEN(8):OL2_LEN(7+Z)
- */
- const uint8x16_t shuf_mask5 = {
- 0x6, 0x5, 0x0, 0x1, 0xFF, 0xFF, 0xFF, 0xFF,
- 0xE, 0xD, 0x8, 0x9, 0xFF, 0xFF, 0xFF, 0xFF,
- };
- senddesc01_w1 = vqtbl1q_u8(senddesc01_w1, shuf_mask5);
- senddesc23_w1 = vqtbl1q_u8(senddesc23_w1, shuf_mask5);
-
- /* Extract outer and inner header ol_flags */
- const uint64x2_t oi_cksum_mask = {
- 0x1CF0020000000000,
- 0x1CF0020000000000,
- };
-
- xtmp128 = vandq_u64(xtmp128, oi_cksum_mask);
- ytmp128 = vandq_u64(ytmp128, oi_cksum_mask);
-
- /* Extract OUTER_UDP_CKSUM bit 41 and
- * move it to bit 61
- */
-
- xtmp128 = xtmp128 | vshlq_n_u64(xtmp128, 20);
- ytmp128 = ytmp128 | vshlq_n_u64(ytmp128, 20);
-
- /* Shift right oltype by 2 and iltype by 4
- * to start oltype nibble from BIT(58)
- * instead of BIT(56) and iltype nibble from BIT(48)
- * instead of BIT(52).
- */
- const int8x16_t tshft5 = {
- 8, 8, 8, 8, 8, 8, -4, -2,
- 8, 8, 8, 8, 8, 8, -4, -2,
- };
-
- xtmp128 = vshlq_u8(xtmp128, tshft5);
- ytmp128 = vshlq_u8(ytmp128, tshft5);
- /*
- * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
- * E(32):L3_LEN(8):L2_LEN(8):OL3_LEN(8):OL2_LEN(8)
- */
- const int8x16_t tshft3 = {
- -1, 0, -1, 0, 0, 0, 0, 0,
- -1, 0, -1, 0, 0, 0, 0, 0,
- };
-
- senddesc01_w1 = vshlq_u8(senddesc01_w1, tshft3);
- senddesc23_w1 = vshlq_u8(senddesc23_w1, tshft3);
-
- /* Mark Bit(4) of oltype */
- const uint64x2_t oi_cksum_mask2 = {
- 0x1000000000000000,
- 0x1000000000000000,
- };
-
- xtmp128 = vorrq_u64(xtmp128, oi_cksum_mask2);
- ytmp128 = vorrq_u64(ytmp128, oi_cksum_mask2);
-
- /* Do the lookup */
- ltypes01 = vqtbl2q_u8(tbl, xtmp128);
- ltypes23 = vqtbl2q_u8(tbl, ytmp128);
-
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
-
-			/* Pick only the relevant fields, i.e. bits 48:55 of
-			 * iltype and bits 56:63 of oltype, and place them in
-			 * the corresponding place in senddesc_w1.
- */
- const uint8x16_t shuf_mask0 = {
- 0xFF, 0xFF, 0xFF, 0xFF, 0x7, 0x6, 0xFF, 0xFF,
- 0xFF, 0xFF, 0xFF, 0xFF, 0xF, 0xE, 0xFF, 0xFF,
- };
-
- ltypes01 = vqtbl1q_u8(ltypes01, shuf_mask0);
- ltypes23 = vqtbl1q_u8(ltypes23, shuf_mask0);
-
- /* Prepare l4ptr, l3ptr, ol4ptr, ol3ptr from
- * l3len, l2len, ol3len, ol2len.
- * a [E(32):L3(8):L2(8):OL3(8):OL2(8)]
- * a = a + (a << 8)
- * a [E:(L3+L2):(L2+OL3):(OL3+OL2):OL2]
- * a = a + (a << 16)
- * a [E:(L3+L2+OL3+OL2):(L2+OL3+OL2):(OL3+OL2):OL2]
- * => E(32):IL4PTR(8):IL3PTR(8):OL4PTR(8):OL3PTR(8)
- */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u32(senddesc01_w1, 8));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u32(senddesc23_w1, 8));
-
- /* Create second half of 4W cmd for 4 mbufs (sgdesc) */
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
-
- /* Continue preparing l4ptr, l3ptr, ol4ptr, ol3ptr */
- senddesc01_w1 = vaddq_u8(senddesc01_w1,
- vshlq_n_u32(senddesc01_w1, 16));
- senddesc23_w1 = vaddq_u8(senddesc23_w1,
- vshlq_n_u32(senddesc23_w1, 16));
-
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
- /* Move ltypes to senddesc*_w1 */
- senddesc01_w1 = vorrq_u64(senddesc01_w1, ltypes01);
- senddesc23_w1 = vorrq_u64(senddesc23_w1, ltypes23);
-
- /* Create first half of 4W cmd for 4 mbufs (sendhdr) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
- } else {
- /* Just use ld1q to retrieve aura
- * when we don't need tx_offload
- */
- mbuf0 = (uint64_t *)((uintptr_t)mbuf0 +
- offsetof(struct rte_mempool, pool_id));
- mbuf1 = (uint64_t *)((uintptr_t)mbuf1 +
- offsetof(struct rte_mempool, pool_id));
- mbuf2 = (uint64_t *)((uintptr_t)mbuf2 +
- offsetof(struct rte_mempool, pool_id));
- mbuf3 = (uint64_t *)((uintptr_t)mbuf3 +
- offsetof(struct rte_mempool, pool_id));
- xmask01 = vdupq_n_u64(0);
- xmask23 = xmask01;
- asm volatile ("LD1 {%[a].H}[0],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf0) : "memory");
-
- asm volatile ("LD1 {%[a].H}[4],[%[in]]\n\t" :
- [a]"+w"(xmask01) : [in]"r"(mbuf1) : "memory");
-
- asm volatile ("LD1 {%[b].H}[0],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf2) : "memory");
-
- asm volatile ("LD1 {%[b].H}[4],[%[in]]\n\t" :
- [b]"+w"(xmask23) : [in]"r"(mbuf3) : "memory");
- xmask01 = vshlq_n_u64(xmask01, 20);
- xmask23 = vshlq_n_u64(xmask23, 20);
-
- senddesc01_w0 = vorrq_u64(senddesc01_w0, xmask01);
- senddesc23_w0 = vorrq_u64(senddesc23_w0, xmask23);
-
- /* Create 4W cmd for 4 mbufs (sendhdr, sgdesc) */
- cmd00 = vzip1q_u64(senddesc01_w0, senddesc01_w1);
- cmd01 = vzip1q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd10 = vzip2q_u64(senddesc01_w0, senddesc01_w1);
- cmd11 = vzip2q_u64(sgdesc01_w0, sgdesc01_w1);
- cmd20 = vzip1q_u64(senddesc23_w0, senddesc23_w1);
- cmd21 = vzip1q_u64(sgdesc23_w0, sgdesc23_w1);
- cmd30 = vzip2q_u64(senddesc23_w0, senddesc23_w1);
- cmd31 = vzip2q_u64(sgdesc23_w0, sgdesc23_w1);
- }
-
- do {
- vst1q_u64(lmt_addr, cmd00);
- vst1q_u64(lmt_addr + 2, cmd01);
- vst1q_u64(lmt_addr + 4, cmd10);
- vst1q_u64(lmt_addr + 6, cmd11);
- vst1q_u64(lmt_addr + 8, cmd20);
- vst1q_u64(lmt_addr + 10, cmd21);
- vst1q_u64(lmt_addr + 12, cmd30);
- vst1q_u64(lmt_addr + 14, cmd31);
- lmt_status = otx2_lmt_submit(io_addr);
-
- } while (lmt_status == 0);
- tx_pkts = tx_pkts + NIX_DESCS_PER_LOOP;
- }
-
- if (unlikely(pkts_left))
- pkts += nix_xmit_pkts(tx_queue, tx_pkts, pkts_left, cmd, flags);
-
- return pkts;
-}
-
-#else
-static __rte_always_inline uint16_t
-nix_xmit_pkts_vector(void *tx_queue, struct rte_mbuf **tx_pkts,
- uint16_t pkts, uint64_t *cmd, const uint16_t flags)
-{
- RTE_SET_USED(tx_queue);
- RTE_SET_USED(tx_pkts);
- RTE_SET_USED(pkts);
- RTE_SET_USED(cmd);
- RTE_SET_USED(flags);
- return 0;
-}
-#endif
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[sz]; \
- \
	/* For TSO, inner checksum is a must */				       \
- if (((flags) & NIX_TX_OFFLOAD_TSO_F) && \
- !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \
- return 0; \
- return nix_xmit_pkts(tx_queue, tx_pkts, pkts, cmd, flags); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_mseg_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \
- \
	/* For TSO, inner checksum is a must */				       \
- if (((flags) & NIX_TX_OFFLOAD_TSO_F) && \
- !((flags) & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) \
- return 0; \
- return nix_xmit_pkts_mseg(tx_queue, tx_pkts, pkts, cmd, \
- (flags) | NIX_TX_MULTI_SEG_F); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
-static uint16_t __rte_noinline __rte_hot \
-otx2_nix_xmit_pkts_vec_ ## name(void *tx_queue, \
- struct rte_mbuf **tx_pkts, uint16_t pkts) \
-{ \
- uint64_t cmd[sz]; \
- \
	/* VLAN, TSTAMP and TSO are not supported by vec */		       \
- if ((flags) & NIX_TX_OFFLOAD_VLAN_QINQ_F || \
- (flags) & NIX_TX_OFFLOAD_TSTAMP_F || \
- (flags) & NIX_TX_OFFLOAD_TSO_F) \
- return 0; \
- return nix_xmit_pkts_vector(tx_queue, tx_pkts, pkts, cmd, (flags)); \
-}
-
-NIX_TX_FASTPATH_MODES
-#undef T
-
-static inline void
-pick_tx_func(struct rte_eth_dev *eth_dev,
- const eth_tx_burst_t tx_burst[2][2][2][2][2][2][2])
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
-	/* [SEC] [TSO] [TSTMP] [NOFF] [VLAN] [OL3_OL4_CSUM] [IL3_IL4_CSUM] */
- eth_dev->tx_pkt_burst = tx_burst
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_SECURITY_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSO_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_TSTAMP_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_MBUF_NOFF_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_VLAN_QINQ_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)]
- [!!(dev->tx_offload_flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)];
-}
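
Each !! collapses one offload bit to 0 or 1, so the seven nested indices select the burst function compiled for exactly that flag combination. A two-dimensional toy version of the same dispatch; the flag names here are made up for illustration:

    #include <stdint.h>
    #include <stdio.h>

    #define F_CSUM (1 << 0) /* illustrative stand-ins, not real NIX flags */
    #define F_VLAN (1 << 1)

    static const char *tx_fn[2][2] = {
        /* [vlan][csum] */
        { "plain", "csum"      },
        { "vlan",  "vlan+csum" },
    };

    int main(void)
    {
        uint16_t offloads = F_CSUM | F_VLAN;

        /* Prints "vlan+csum": each flag picks one array dimension. */
        printf("%s\n", tx_fn[!!(offloads & F_VLAN)][!!(offloads & F_CSUM)]);
        return 0;
    }
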
-
-void
-otx2_eth_set_tx_function(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
-
- const eth_tx_burst_t nix_eth_tx_burst[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- const eth_tx_burst_t nix_eth_tx_burst_mseg[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_mseg_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- const eth_tx_burst_t nix_eth_tx_vec_burst[2][2][2][2][2][2][2] = {
-#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \
- [f6][f5][f4][f3][f2][f1][f0] = otx2_nix_xmit_pkts_vec_ ## name,
-
-NIX_TX_FASTPATH_MODES
-#undef T
- };
-
- if (dev->scalar_ena ||
- (dev->tx_offload_flags &
- (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F |
- NIX_TX_OFFLOAD_TSO_F)))
- pick_tx_func(eth_dev, nix_eth_tx_burst);
- else
- pick_tx_func(eth_dev, nix_eth_tx_vec_burst);
-
- if (dev->tx_offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
- pick_tx_func(eth_dev, nix_eth_tx_burst_mseg);
-
- rte_mb();
-}
diff --git a/drivers/net/octeontx2/otx2_tx.h b/drivers/net/octeontx2/otx2_tx.h
deleted file mode 100644
index 4bbd5a390f..0000000000
--- a/drivers/net/octeontx2/otx2_tx.h
+++ /dev/null
@@ -1,791 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#ifndef __OTX2_TX_H__
-#define __OTX2_TX_H__
-
-#define NIX_TX_OFFLOAD_NONE (0)
-#define NIX_TX_OFFLOAD_L3_L4_CSUM_F BIT(0)
-#define NIX_TX_OFFLOAD_OL3_OL4_CSUM_F BIT(1)
-#define NIX_TX_OFFLOAD_VLAN_QINQ_F BIT(2)
-#define NIX_TX_OFFLOAD_MBUF_NOFF_F BIT(3)
-#define NIX_TX_OFFLOAD_TSTAMP_F BIT(4)
-#define NIX_TX_OFFLOAD_TSO_F BIT(5)
-#define NIX_TX_OFFLOAD_SECURITY_F BIT(6)
-
-/* Flags to control the xmit_prepare function.
- * Defined from the top bit downwards to denote that they are
- * not used as offload flags for function selection.
- */
-#define NIX_TX_MULTI_SEG_F BIT(15)
-
-#define NIX_TX_NEED_SEND_HDR_W1 \
- (NIX_TX_OFFLOAD_L3_L4_CSUM_F | NIX_TX_OFFLOAD_OL3_OL4_CSUM_F | \
- NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)
-
-#define NIX_TX_NEED_EXT_HDR \
- (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSTAMP_F | \
- NIX_TX_OFFLOAD_TSO_F)
-
-#define NIX_UDP_TUN_BITMASK \
- ((1ull << (RTE_MBUF_F_TX_TUNNEL_VXLAN >> 45)) | \
- (1ull << (RTE_MBUF_F_TX_TUNNEL_GENEVE >> 45)))
-
-#define NIX_LSO_FORMAT_IDX_TSOV4 (0)
-#define NIX_LSO_FORMAT_IDX_TSOV6 (1)
-
-/* Determine the number of Tx subdescriptors required when the
- * extended subdescriptor is enabled.
- */
-static __rte_always_inline int
-otx2_nix_tx_ext_subs(const uint16_t flags)
-{
- return (flags & NIX_TX_OFFLOAD_TSTAMP_F) ? 2 :
- ((flags & (NIX_TX_OFFLOAD_VLAN_QINQ_F | NIX_TX_OFFLOAD_TSO_F)) ?
- 1 : 0);
-}
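
Checked in isolation, the selection reads: timestamping needs two extra subdescriptors (it also appends a SEND_MEM descriptor after the extension header), while VLAN or TSO alone need only the extension header. The flag values below are local stand-ins, not the real NIX_TX_OFFLOAD_* bits:

    #include <stdio.h>

    #define EX_TSTAMP 0x1 /* stand-ins for the NIX_TX_OFFLOAD_* bits */
    #define EX_VLAN   0x2
    #define EX_TSO    0x4

    static int ext_subs(unsigned int flags)
    {
        return (flags & EX_TSTAMP) ? 2 :
               ((flags & (EX_VLAN | EX_TSO)) ? 1 : 0);
    }

    int main(void)
    {
        printf("%d %d %d\n", ext_subs(EX_TSTAMP | EX_VLAN),
               ext_subs(EX_TSO), ext_subs(0)); /* 2 1 0 */
        return 0;
    }
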
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare_tstamp(uint64_t *cmd, const uint64_t *send_mem_desc,
- const uint64_t ol_flags, const uint16_t no_segdw,
- const uint16_t flags)
-{
- if (flags & NIX_TX_OFFLOAD_TSTAMP_F) {
- struct nix_send_mem_s *send_mem;
- uint16_t off = (no_segdw - 1) << 1;
- const uint8_t is_ol_tstamp = !(ol_flags & RTE_MBUF_F_TX_IEEE1588_TMST);
-
- send_mem = (struct nix_send_mem_s *)(cmd + off);
- if (flags & NIX_TX_MULTI_SEG_F) {
- /* Retrieving the default desc values */
- cmd[off] = send_mem_desc[6];
-
-			/* Use a compiler barrier to avoid violation of C
-			 * aliasing rules.
-			 */
- rte_compiler_barrier();
- }
-
-		/* For packets where RTE_MBUF_F_TX_IEEE1588_TMST is not set,
-		 * the Tx timestamp should not be recorded; hence change the
-		 * alg type to NIX_SENDMEMALG_SET and move the send mem addr
-		 * field to the next 8 bytes so it does not corrupt the
-		 * actual registered Tx timestamp address.
-		 */
- send_mem->alg = NIX_SENDMEMALG_SETTSTMP - (is_ol_tstamp);
-
- send_mem->addr = (rte_iova_t)((uint64_t *)send_mem_desc[7] +
- (is_ol_tstamp));
- }
-}
-
-static __rte_always_inline uint64_t
-otx2_pktmbuf_detach(struct rte_mbuf *m)
-{
- struct rte_mempool *mp = m->pool;
- uint32_t mbuf_size, buf_len;
- struct rte_mbuf *md;
- uint16_t priv_size;
- uint16_t refcount;
-
- /* Update refcount of direct mbuf */
- md = rte_mbuf_from_indirect(m);
- refcount = rte_mbuf_refcnt_update(md, -1);
-
- priv_size = rte_pktmbuf_priv_size(mp);
- mbuf_size = (uint32_t)(sizeof(struct rte_mbuf) + priv_size);
- buf_len = rte_pktmbuf_data_room_size(mp);
-
- m->priv_size = priv_size;
- m->buf_addr = (char *)m + mbuf_size;
- m->buf_iova = rte_mempool_virt2iova(m) + mbuf_size;
- m->buf_len = (uint16_t)buf_len;
- rte_pktmbuf_reset_headroom(m);
- m->data_len = 0;
- m->ol_flags = 0;
- m->next = NULL;
- m->nb_segs = 1;
-
- /* Now indirect mbuf is safe to free */
- rte_pktmbuf_free(m);
-
- if (refcount == 0) {
- rte_mbuf_refcnt_set(md, 1);
- md->data_len = 0;
- md->ol_flags = 0;
- md->next = NULL;
- md->nb_segs = 1;
- return 0;
- } else {
- return 1;
- }
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_prefree_seg(struct rte_mbuf *m)
-{
- if (likely(rte_mbuf_refcnt_read(m) == 1)) {
- if (!RTE_MBUF_DIRECT(m))
- return otx2_pktmbuf_detach(m);
-
- m->next = NULL;
- m->nb_segs = 1;
- return 0;
- } else if (rte_mbuf_refcnt_update(m, -1) == 0) {
- if (!RTE_MBUF_DIRECT(m))
- return otx2_pktmbuf_detach(m);
-
- rte_mbuf_refcnt_set(m, 1);
- m->next = NULL;
- m->nb_segs = 1;
- return 0;
- }
-
- /* Mbuf has a refcount greater than 1, so it need not be freed */
- return 1;
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare_tso(struct rte_mbuf *m, const uint64_t flags)
-{
- uint64_t mask, ol_flags = m->ol_flags;
-
- if (flags & NIX_TX_OFFLOAD_TSO_F &&
- (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
- uintptr_t mdata = rte_pktmbuf_mtod(m, uintptr_t);
- uint16_t *iplen, *oiplen, *oudplen;
- uint16_t lso_sb, paylen;
-
- mask = -!!(ol_flags & (RTE_MBUF_F_TX_OUTER_IPV4 | RTE_MBUF_F_TX_OUTER_IPV6));
- lso_sb = (mask & (m->outer_l2_len + m->outer_l3_len)) +
- m->l2_len + m->l3_len + m->l4_len;
-
- /* Reduce payload len from base headers */
- paylen = m->pkt_len - lso_sb;
-
- /* Get iplen position assuming no tunnel hdr */
- iplen = (uint16_t *)(mdata + m->l2_len +
- (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
- /* Handle tunnel tso */
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
- const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
- ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
-
- oiplen = (uint16_t *)(mdata + m->outer_l2_len +
- (2 << !!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)));
- *oiplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*oiplen) -
- paylen);
-
- /* Update format for UDP tunneled packet */
- if (is_udp_tun) {
- oudplen = (uint16_t *)(mdata + m->outer_l2_len +
- m->outer_l3_len + 4);
- *oudplen =
- rte_cpu_to_be_16(rte_be_to_cpu_16(*oudplen) -
- paylen);
- }
-
- /* Update iplen position to inner ip hdr */
- iplen = (uint16_t *)(mdata + lso_sb - m->l3_len -
- m->l4_len + (2 << !!(ol_flags & RTE_MBUF_F_TX_IPV6)));
- }
-
- *iplen = rte_cpu_to_be_16(rte_be_to_cpu_16(*iplen) - paylen);
- }
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prepare(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags,
- const uint64_t lso_tun_fmt)
-{
- struct nix_send_ext_s *send_hdr_ext;
- struct nix_send_hdr_s *send_hdr;
- uint64_t ol_flags = 0, mask;
- union nix_send_hdr_w1_u w1;
- union nix_send_sg_s *sg;
-
- send_hdr = (struct nix_send_hdr_s *)cmd;
- if (flags & NIX_TX_NEED_EXT_HDR) {
- send_hdr_ext = (struct nix_send_ext_s *)(cmd + 2);
- sg = (union nix_send_sg_s *)(cmd + 4);
- /* Clear previous markings */
- send_hdr_ext->w0.lso = 0;
- send_hdr_ext->w1.u = 0;
- } else {
- sg = (union nix_send_sg_s *)(cmd + 2);
- }
-
- if (flags & NIX_TX_NEED_SEND_HDR_W1) {
- ol_flags = m->ol_flags;
- w1.u = 0;
- }
-
- if (!(flags & NIX_TX_MULTI_SEG_F)) {
- send_hdr->w0.total = m->data_len;
- send_hdr->w0.aura =
- npa_lf_aura_handle_to_aura(m->pool->pool_id);
- }
-
- /*
- * L3type: 2 => IPV4
- *         3 => IPV4 with csum
- *         4 => IPV6
- * L3type and L3ptr need to be set for
- * L3 csum, L4 csum, or LSO.
- */
-
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F)) {
- const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
- const uint8_t ol3type =
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
-
- /* Outer L3 */
- w1.ol3type = ol3type;
- mask = 0xffffull << ((!!ol3type) << 4);
- w1.ol3ptr = ~mask & m->outer_l2_len;
- w1.ol4ptr = ~mask & (w1.ol3ptr + m->outer_l3_len);
-
- /* Outer L4 */
- w1.ol4type = csum + (csum << 1);
-
- /* Inner L3 */
- w1.il3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2);
- w1.il3ptr = w1.ol4ptr + m->l2_len;
- w1.il4ptr = w1.il3ptr + m->l3_len;
- /* Increment by 1 for IPv4, since type 3 means IPv4 with csum */
- w1.il3type = w1.il3type + !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
-
- /* Inner L4 */
- w1.il4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
-
- /* In case of no tunnel header, shift the
- * IL3/IL4 fields down so that OL3/OL4 are
- * used for the header checksum.
- */
- mask = !ol3type;
- w1.u = ((w1.u & 0xFFFFFFFF00000000) >> (mask << 3)) |
- ((w1.u & 0X00000000FFFFFFFF) >> (mask << 4));
-
- } else if (flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) {
- const uint8_t csum = !!(ol_flags & RTE_MBUF_F_TX_OUTER_UDP_CKSUM);
- const uint8_t outer_l2_len = m->outer_l2_len;
-
- /* Outer L3 */
- w1.ol3ptr = outer_l2_len;
- w1.ol4ptr = outer_l2_len + m->outer_l3_len;
- /* Increment by 1 for IPv4, since type 3 means IPv4 with csum */
- w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_OUTER_IP_CKSUM);
-
- /* Outer L4 */
- w1.ol4type = csum + (csum << 1);
-
- } else if (flags & NIX_TX_OFFLOAD_L3_L4_CSUM_F) {
- const uint8_t l2_len = m->l2_len;
-
- /* Always use OLXPTR and OLXTYPE when only
- * one header is present.
- */
-
- /* Inner L3 */
- w1.ol3ptr = l2_len;
- w1.ol4ptr = l2_len + m->l3_len;
- /* Increment by 1 for IPv4, since type 3 means IPv4 with csum */
- w1.ol3type = ((!!(ol_flags & RTE_MBUF_F_TX_IPV4)) << 1) +
- ((!!(ol_flags & RTE_MBUF_F_TX_IPV6)) << 2) +
- !!(ol_flags & RTE_MBUF_F_TX_IP_CKSUM);
-
- /* Inner L4 */
- w1.ol4type = (ol_flags & RTE_MBUF_F_TX_L4_MASK) >> 52;
- }
-
- if (flags & NIX_TX_NEED_EXT_HDR &&
- flags & NIX_TX_OFFLOAD_VLAN_QINQ_F) {
- send_hdr_ext->w1.vlan1_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_VLAN);
- /* HW will update ptr after vlan0 update */
- send_hdr_ext->w1.vlan1_ins_ptr = 12;
- send_hdr_ext->w1.vlan1_ins_tci = m->vlan_tci;
-
- send_hdr_ext->w1.vlan0_ins_ena = !!(ol_flags & RTE_MBUF_F_TX_QINQ);
- /* 2B before end of l2 header */
- send_hdr_ext->w1.vlan0_ins_ptr = 12;
- send_hdr_ext->w1.vlan0_ins_tci = m->vlan_tci_outer;
- }
-
- if (flags & NIX_TX_OFFLOAD_TSO_F &&
- (ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
- uint16_t lso_sb;
- uint64_t mask;
-
- mask = -(!w1.il3type);
- lso_sb = (mask & w1.ol4ptr) + (~mask & w1.il4ptr) + m->l4_len;
-
- send_hdr_ext->w0.lso_sb = lso_sb;
- send_hdr_ext->w0.lso = 1;
- send_hdr_ext->w0.lso_mps = m->tso_segsz;
- send_hdr_ext->w0.lso_format =
- NIX_LSO_FORMAT_IDX_TSOV4 + !!(ol_flags & RTE_MBUF_F_TX_IPV6);
- w1.ol4type = NIX_SENDL4TYPE_TCP_CKSUM;
-
- /* Handle tunnel tso */
- if ((flags & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F) &&
- (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)) {
- const uint8_t is_udp_tun = (NIX_UDP_TUN_BITMASK >>
- ((ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) >> 45)) & 0x1;
- uint8_t shift = is_udp_tun ? 32 : 0;
-
- shift += (!!(ol_flags & RTE_MBUF_F_TX_OUTER_IPV6) << 4);
- shift += (!!(ol_flags & RTE_MBUF_F_TX_IPV6) << 3);
-
- w1.il4type = NIX_SENDL4TYPE_TCP_CKSUM;
- w1.ol4type = is_udp_tun ? NIX_SENDL4TYPE_UDP_CKSUM : 0;
- /* Update format for UDP tunneled packet */
- send_hdr_ext->w0.lso_format = (lso_tun_fmt >> shift);
- }
- }
-
- if (flags & NIX_TX_NEED_SEND_HDR_W1)
- send_hdr->w1.u = w1.u;
-
- if (!(flags & NIX_TX_MULTI_SEG_F)) {
- sg->seg1_size = m->data_len;
- *(rte_iova_t *)(++sg) = rte_mbuf_data_iova(m);
-
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- /* DF bit = 1 if refcount of current mbuf or parent mbuf
- * is greater than 1
- * DF bit = 0 otherwise
- */
- send_hdr->w0.df = otx2_nix_prefree_seg(m);
- /* Ensuring mbuf fields which got updated in
- * otx2_nix_prefree_seg are written before LMTST.
- */
- rte_io_wmb();
- }
- /* Mark mempool object as "put" since it is freed by NIX */
- if (!send_hdr->w0.df)
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
- }
-}
-
-
-static __rte_always_inline void
-otx2_nix_xmit_one(uint64_t *cmd, void *lmt_addr,
- const rte_iova_t io_addr, const uint32_t flags)
-{
- uint64_t lmt_status;
-
- do {
- otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_prep_lmt(uint64_t *cmd, void *lmt_addr, const uint32_t flags)
-{
- otx2_lmt_mov(lmt_addr, cmd, otx2_nix_tx_ext_subs(flags));
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_xmit_submit_lmt(const rte_iova_t io_addr)
-{
- return otx2_lmt_submit(io_addr);
-}
-
-static __rte_always_inline uint64_t
-otx2_nix_xmit_submit_lmt_release(const rte_iova_t io_addr)
-{
- return otx2_lmt_submit_release(io_addr);
-}
-
-static __rte_always_inline uint16_t
-otx2_nix_prepare_mseg(struct rte_mbuf *m, uint64_t *cmd, const uint16_t flags)
-{
- struct nix_send_hdr_s *send_hdr;
- union nix_send_sg_s *sg;
- struct rte_mbuf *m_next;
- uint64_t *slist, sg_u;
- uint64_t nb_segs;
- uint64_t segdw;
- uint8_t off, i;
-
- send_hdr = (struct nix_send_hdr_s *)cmd;
- send_hdr->w0.total = m->pkt_len;
- send_hdr->w0.aura = npa_lf_aura_handle_to_aura(m->pool->pool_id);
-
- if (flags & NIX_TX_NEED_EXT_HDR)
- off = 2;
- else
- off = 0;
-
- sg = (union nix_send_sg_s *)&cmd[2 + off];
- /* Clear sg->u header before use */
- sg->u &= 0xFC00000000000000;
- sg_u = sg->u;
- slist = &cmd[3 + off];
-
- i = 0;
- nb_segs = m->nb_segs;
-
- /* Fill mbuf segments */
- do {
- m_next = m->next;
- sg_u = sg_u | ((uint64_t)m->data_len << (i << 4));
- *slist = rte_mbuf_data_iova(m);
- /* Set invert df if buffer is not to be freed by H/W */
- if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
- sg_u |= (otx2_nix_prefree_seg(m) << (i + 55));
- /* Commit changes to mbuf */
- rte_io_wmb();
- }
- /* Mark mempool object as "put" since it is freed by NIX */
-#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
- if (!(sg_u & (1ULL << (i + 55))))
- RTE_MEMPOOL_CHECK_COOKIES(m->pool, (void **)&m, 1, 0);
- rte_io_wmb();
-#endif
- slist++;
- i++;
- nb_segs--;
- if (i > 2 && nb_segs) {
- i = 0;
- /* Next SG subdesc */
- *(uint64_t *)slist = sg_u & 0xFC00000000000000;
- sg->u = sg_u;
- sg->segs = 3;
- sg = (union nix_send_sg_s *)slist;
- sg_u = sg->u;
- slist++;
- }
- m = m_next;
- } while (nb_segs);
-
- sg->u = sg_u;
- sg->segs = i;
- segdw = (uint64_t *)slist - (uint64_t *)&cmd[2 + off];
- /* Roundup extra dwords to multiple of 2 */
- segdw = (segdw >> 1) + (segdw & 0x1);
- /* Default dwords */
- segdw += (off >> 1) + 1 + !!(flags & NIX_TX_OFFLOAD_TSTAMP_F);
- send_hdr->w0.sizem1 = segdw - 1;
-
- return segdw;
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_prep_lmt(uint64_t *cmd, void *lmt_addr, uint16_t segdw)
-{
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_one(uint64_t *cmd, void *lmt_addr,
- rte_iova_t io_addr, uint16_t segdw)
-{
- uint64_t lmt_status;
-
- do {
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-static __rte_always_inline void
-otx2_nix_xmit_mseg_one_release(uint64_t *cmd, void *lmt_addr,
- rte_iova_t io_addr, uint16_t segdw)
-{
- uint64_t lmt_status;
-
- rte_io_wmb();
- do {
- otx2_lmt_mov_seg(lmt_addr, (const void *)cmd, segdw);
- lmt_status = otx2_lmt_submit(io_addr);
- } while (lmt_status == 0);
-}
-
-#define L3L4CSUM_F NIX_TX_OFFLOAD_L3_L4_CSUM_F
-#define OL3OL4CSUM_F NIX_TX_OFFLOAD_OL3_OL4_CSUM_F
-#define VLAN_F NIX_TX_OFFLOAD_VLAN_QINQ_F
-#define NOFF_F NIX_TX_OFFLOAD_MBUF_NOFF_F
-#define TSP_F NIX_TX_OFFLOAD_TSTAMP_F
-#define TSO_F NIX_TX_OFFLOAD_TSO_F
-#define TX_SEC_F NIX_TX_OFFLOAD_SECURITY_F
-
-/* [SEC] [TSO] [TSTMP] [NOFF] [VLAN] [OL3OL4CSUM] [L3L4CSUM] */
-#define NIX_TX_FASTPATH_MODES \
-T(no_offload, 0, 0, 0, 0, 0, 0, 0, 4, \
- NIX_TX_OFFLOAD_NONE) \
-T(l3l4csum, 0, 0, 0, 0, 0, 0, 1, 4, \
- L3L4CSUM_F) \
-T(ol3ol4csum, 0, 0, 0, 0, 0, 1, 0, 4, \
- OL3OL4CSUM_F) \
-T(ol3ol4csum_l3l4csum, 0, 0, 0, 0, 0, 1, 1, 4, \
- OL3OL4CSUM_F | L3L4CSUM_F) \
-T(vlan, 0, 0, 0, 0, 1, 0, 0, 6, \
- VLAN_F) \
-T(vlan_l3l4csum, 0, 0, 0, 0, 1, 0, 1, 6, \
- VLAN_F | L3L4CSUM_F) \
-T(vlan_ol3ol4csum, 0, 0, 0, 0, 1, 1, 0, 6, \
- VLAN_F | OL3OL4CSUM_F) \
-T(vlan_ol3ol4csum_l3l4csum, 0, 0, 0, 0, 1, 1, 1, 6, \
- VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(noff, 0, 0, 0, 1, 0, 0, 0, 4, \
- NOFF_F) \
-T(noff_l3l4csum, 0, 0, 0, 1, 0, 0, 1, 4, \
- NOFF_F | L3L4CSUM_F) \
-T(noff_ol3ol4csum, 0, 0, 0, 1, 0, 1, 0, 4, \
- NOFF_F | OL3OL4CSUM_F) \
-T(noff_ol3ol4csum_l3l4csum, 0, 0, 0, 1, 0, 1, 1, 4, \
- NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(noff_vlan, 0, 0, 0, 1, 1, 0, 0, 6, \
- NOFF_F | VLAN_F) \
-T(noff_vlan_l3l4csum, 0, 0, 0, 1, 1, 0, 1, 6, \
- NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(noff_vlan_ol3ol4csum, 0, 0, 0, 1, 1, 1, 0, 6, \
- NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(noff_vlan_ol3ol4csum_l3l4csum, 0, 0, 0, 1, 1, 1, 1, 6, \
- NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts, 0, 0, 1, 0, 0, 0, 0, 8, \
- TSP_F) \
-T(ts_l3l4csum, 0, 0, 1, 0, 0, 0, 1, 8, \
- TSP_F | L3L4CSUM_F) \
-T(ts_ol3ol4csum, 0, 0, 1, 0, 0, 1, 0, 8, \
- TSP_F | OL3OL4CSUM_F) \
-T(ts_ol3ol4csum_l3l4csum, 0, 0, 1, 0, 0, 1, 1, 8, \
- TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_vlan, 0, 0, 1, 0, 1, 0, 0, 8, \
- TSP_F | VLAN_F) \
-T(ts_vlan_l3l4csum, 0, 0, 1, 0, 1, 0, 1, 8, \
- TSP_F | VLAN_F | L3L4CSUM_F) \
-T(ts_vlan_ol3ol4csum, 0, 0, 1, 0, 1, 1, 0, 8, \
- TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(ts_vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 0, 1, 1, 1, 8, \
- TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_noff, 0, 0, 1, 1, 0, 0, 0, 8, \
- TSP_F | NOFF_F) \
-T(ts_noff_l3l4csum, 0, 0, 1, 1, 0, 0, 1, 8, \
- TSP_F | NOFF_F | L3L4CSUM_F) \
-T(ts_noff_ol3ol4csum, 0, 0, 1, 1, 0, 1, 0, 8, \
- TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(ts_noff_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 0, 1, 1, 8, \
- TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(ts_noff_vlan, 0, 0, 1, 1, 1, 0, 0, 8, \
- TSP_F | NOFF_F | VLAN_F) \
-T(ts_noff_vlan_l3l4csum, 0, 0, 1, 1, 1, 0, 1, 8, \
- TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(ts_noff_vlan_ol3ol4csum, 0, 0, 1, 1, 1, 1, 0, 8, \
- TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(ts_noff_vlan_ol3ol4csum_l3l4csum, 0, 0, 1, 1, 1, 1, 1, 8, \
- TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
- \
-T(tso, 0, 1, 0, 0, 0, 0, 0, 6, \
- TSO_F) \
-T(tso_l3l4csum, 0, 1, 0, 0, 0, 0, 1, 6, \
- TSO_F | L3L4CSUM_F) \
-T(tso_ol3ol4csum, 0, 1, 0, 0, 0, 1, 0, 6, \
- TSO_F | OL3OL4CSUM_F) \
-T(tso_ol3ol4csum_l3l4csum, 0, 1, 0, 0, 0, 1, 1, 6, \
- TSO_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_vlan, 0, 1, 0, 0, 1, 0, 0, 6, \
- TSO_F | VLAN_F) \
-T(tso_vlan_l3l4csum, 0, 1, 0, 0, 1, 0, 1, 6, \
- TSO_F | VLAN_F | L3L4CSUM_F) \
-T(tso_vlan_ol3ol4csum, 0, 1, 0, 0, 1, 1, 0, 6, \
- TSO_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_vlan_ol3ol4csum_l3l4csum, 0, 1, 0, 0, 1, 1, 1, 6, \
- TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_noff, 0, 1, 0, 1, 0, 0, 0, 6, \
- TSO_F | NOFF_F) \
-T(tso_noff_l3l4csum, 0, 1, 0, 1, 0, 0, 1, 6, \
- TSO_F | NOFF_F | L3L4CSUM_F) \
-T(tso_noff_ol3ol4csum, 0, 1, 0, 1, 0, 1, 0, 6, \
- TSO_F | NOFF_F | OL3OL4CSUM_F) \
-T(tso_noff_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 0, 1, 1, 6, \
- TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_noff_vlan, 0, 1, 0, 1, 1, 0, 0, 6, \
- TSO_F | NOFF_F | VLAN_F) \
-T(tso_noff_vlan_l3l4csum, 0, 1, 0, 1, 1, 0, 1, 6, \
- TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(tso_noff_vlan_ol3ol4csum, 0, 1, 0, 1, 1, 1, 0, 6, \
- TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 0, 1, 1, 1, 1, 6, \
- TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts, 0, 1, 1, 0, 0, 0, 0, 8, \
- TSO_F | TSP_F) \
-T(tso_ts_l3l4csum, 0, 1, 1, 0, 0, 0, 1, 8, \
- TSO_F | TSP_F | L3L4CSUM_F) \
-T(tso_ts_ol3ol4csum, 0, 1, 1, 0, 0, 1, 0, 8, \
- TSO_F | TSP_F | OL3OL4CSUM_F) \
-T(tso_ts_ol3ol4csum_l3l4csum, 0, 1, 1, 0, 0, 1, 1, 8, \
- TSO_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_vlan, 0, 1, 1, 0, 1, 0, 0, 8, \
- TSO_F | TSP_F | VLAN_F) \
-T(tso_ts_vlan_l3l4csum, 0, 1, 1, 0, 1, 0, 1, 8, \
- TSO_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(tso_ts_vlan_ol3ol4csum, 0, 1, 1, 0, 1, 1, 0, 8, \
- TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_ts_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 0, 1, 1, 1, 8, \
- TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_noff, 0, 1, 1, 1, 0, 0, 0, 8, \
- TSO_F | TSP_F | NOFF_F) \
-T(tso_ts_noff_l3l4csum, 0, 1, 1, 1, 0, 0, 1, 8, \
- TSO_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(tso_ts_noff_ol3ol4csum, 0, 1, 1, 1, 0, 1, 0, 8, \
- TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(tso_ts_noff_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 0, 1, 1, 8, \
- TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(tso_ts_noff_vlan, 0, 1, 1, 1, 1, 0, 0, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F) \
-T(tso_ts_noff_vlan_l3l4csum, 0, 1, 1, 1, 1, 0, 1, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(tso_ts_noff_vlan_ol3ol4csum, 0, 1, 1, 1, 1, 1, 0, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(tso_ts_noff_vlan_ol3ol4csum_l3l4csum, 0, 1, 1, 1, 1, 1, 1, 8, \
- TSO_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec, 1, 0, 0, 0, 0, 0, 0, 8, \
- TX_SEC_F) \
-T(sec_l3l4csum, 1, 0, 0, 0, 0, 0, 1, 8, \
- TX_SEC_F | L3L4CSUM_F) \
-T(sec_ol3ol4csum, 1, 0, 0, 0, 0, 1, 0, 8, \
- TX_SEC_F | OL3OL4CSUM_F) \
-T(sec_ol3ol4csum_l3l4csum, 1, 0, 0, 0, 0, 1, 1, 8, \
- TX_SEC_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_vlan, 1, 0, 0, 0, 1, 0, 0, 8, \
- TX_SEC_F | VLAN_F) \
-T(sec_vlan_l3l4csum, 1, 0, 0, 0, 1, 0, 1, 8, \
- TX_SEC_F | VLAN_F | L3L4CSUM_F) \
-T(sec_vlan_ol3ol4csum, 1, 0, 0, 0, 1, 1, 0, 8, \
- TX_SEC_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_vlan_ol3ol4csum_l3l4csum, 1, 0, 0, 0, 1, 1, 1, 8, \
- TX_SEC_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_noff, 1, 0, 0, 1, 0, 0, 0, 8, \
- TX_SEC_F | NOFF_F) \
-T(sec_noff_l3l4csum, 1, 0, 0, 1, 0, 0, 1, 8, \
- TX_SEC_F | NOFF_F | L3L4CSUM_F) \
-T(sec_noff_ol3ol4csum, 1, 0, 0, 1, 0, 1, 0, 8, \
- TX_SEC_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_noff_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 0, 1, 1, 8, \
- TX_SEC_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_noff_vlan, 1, 0, 0, 1, 1, 0, 0, 8, \
- TX_SEC_F | NOFF_F | VLAN_F) \
-T(sec_noff_vlan_l3l4csum, 1, 0, 0, 1, 1, 0, 1, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_noff_vlan_ol3ol4csum, 1, 0, 0, 1, 1, 1, 0, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 0, 1, 1, 1, 1, 8, \
- TX_SEC_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts, 1, 0, 1, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSP_F) \
-T(sec_ts_l3l4csum, 1, 0, 1, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSP_F | L3L4CSUM_F) \
-T(sec_ts_ol3ol4csum, 1, 0, 1, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSP_F | OL3OL4CSUM_F) \
-T(sec_ts_ol3ol4csum_l3l4csum, 1, 0, 1, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_vlan, 1, 0, 1, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSP_F | VLAN_F) \
-T(sec_ts_vlan_l3l4csum, 1, 0, 1, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(sec_ts_vlan_ol3ol4csum, 1, 0, 1, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_ts_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSP_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_noff, 1, 0, 1, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F) \
-T(sec_ts_noff_l3l4csum, 1, 0, 1, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(sec_ts_noff_ol3ol4csum, 1, 0, 1, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_ts_noff_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_ts_noff_vlan, 1, 0, 1, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F) \
-T(sec_ts_noff_vlan_l3l4csum, 1, 0, 1, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_ts_noff_vlan_ol3ol4csum, 1, 0, 1, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_ts_noff_vlan_ol3ol4csum_l3l4csum, 1, 0, 1, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSP_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso, 1, 1, 0, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F) \
-T(sec_tso_l3l4csum, 1, 1, 0, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | L3L4CSUM_F) \
-T(sec_tso_ol3ol4csum, 1, 1, 0, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | OL3OL4CSUM_F) \
-T(sec_tso_ol3ol4csum_l3l4csum, 1, 1, 0, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_vlan, 1, 1, 0, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | VLAN_F) \
-T(sec_tso_vlan_l3l4csum, 1, 1, 0, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_vlan_ol3ol4csum, 1, 1, 0, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_vlan_ol3ol4csum_l3l4csum, 1, 1, 0, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | VLAN_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_noff, 1, 1, 0, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F) \
-T(sec_tso_noff_l3l4csum, 1, 1, 0, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | L3L4CSUM_F) \
-T(sec_tso_noff_ol3ol4csum, 1, 1, 0, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_tso_noff_ol3ol4csum_l3l4csum, 1, 1, 0, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_noff_vlan, 1, 1, 0, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F) \
-T(sec_tso_noff_vlan_l3l4csum, 1, 1, 0, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_noff_vlan_ol3ol4csum, 1, 1, 0, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_noff_vlan_ol3ol4csum_l3l4csum, \
- 1, 1, 0, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | NOFF_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts, 1, 1, 1, 0, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F) \
-T(sec_tso_ts_l3l4csum, 1, 1, 1, 0, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | L3L4CSUM_F) \
-T(sec_tso_ts_ol3ol4csum, 1, 1, 1, 0, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_ol3ol4csum_l3l4csum, 1, 1, 1, 0, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | OL3OL4CSUM_F | L3L4CSUM_F) \
-T(sec_tso_ts_vlan, 1, 1, 1, 0, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F) \
-T(sec_tso_ts_vlan_l3l4csum, 1, 1, 1, 0, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | L3L4CSUM_F) \
-T(sec_tso_ts_vlan_ol3ol4csum, 1, 1, 1, 0, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_vlan_ol3ol4csum_l3l4csum, 1, 1, 1, 0, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | VLAN_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts_noff, 1, 1, 1, 1, 0, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F) \
-T(sec_tso_ts_noff_l3l4csum, 1, 1, 1, 1, 0, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | L3L4CSUM_F) \
-T(sec_tso_ts_noff_ol3ol4csum, 1, 1, 1, 1, 0, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F) \
-T(sec_tso_ts_noff_ol3ol4csum_l3l4csum, 1, 1, 1, 1, 0, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | OL3OL4CSUM_F | \
- L3L4CSUM_F) \
-T(sec_tso_ts_noff_vlan, 1, 1, 1, 1, 1, 0, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F) \
-T(sec_tso_ts_noff_vlan_l3l4csum, 1, 1, 1, 1, 1, 0, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | L3L4CSUM_F)\
-T(sec_tso_ts_noff_vlan_ol3ol4csum, 1, 1, 1, 1, 1, 1, 0, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | \
- OL3OL4CSUM_F) \
-T(sec_tso_ts_noff_vlan_ol3ol4csum_l3l4csum, \
- 1, 1, 1, 1, 1, 1, 1, 8, \
- TX_SEC_F | TSO_F | TSP_F | NOFF_F | VLAN_F | \
- OL3OL4CSUM_F | L3L4CSUM_F)
-#endif /* __OTX2_TX_H__ */
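
A note on the NOFF (MBUF_NOFF_F) path above: otx2_nix_prefree_seg() returns the DF ("don't free") bit for the send descriptor, 0 when hardware may return the buffer to its pool and 1 when software still holds a reference. A hedged standalone sketch of that contract with a toy refcount (it omits the indirect-mbuf detach and the refcount-race handling of the real code):

#include <stdio.h>
#include <stdint.h>

/* Toy buffer with a reference count; stands in for struct rte_mbuf. */
struct buf {
        uint16_t refcnt;
};

/* DF ("don't free") bit for the send descriptor:
 * 0 -> hardware returns the buffer to its pool after transmit,
 * 1 -> software still holds a reference, hardware must not free it.
 */
static uint64_t prefree(struct buf *b)
{
        if (b->refcnt == 1)
                return 0;       /* sole owner: let hardware free it */
        b->refcnt--;            /* drop only this transmit's reference */
        return 1;
}

int main(void)
{
        struct buf a = { .refcnt = 1 };
        struct buf b = { .refcnt = 3 };
        uint64_t df;

        printf("sole owner  -> DF=%u\n", (unsigned)prefree(&a));
        df = prefree(&b);
        printf("shared mbuf -> DF=%u (refcnt now %u)\n",
               (unsigned)df, (unsigned)b.refcnt);
        return 0;
}
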
diff --git a/drivers/net/octeontx2/otx2_vlan.c b/drivers/net/octeontx2/otx2_vlan.c
deleted file mode 100644
index cce643b7b5..0000000000
--- a/drivers/net/octeontx2/otx2_vlan.c
+++ /dev/null
@@ -1,1035 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(C) 2019 Marvell International Ltd.
- */
-
-#include <rte_malloc.h>
-#include <rte_tailq.h>
-
-#include "otx2_ethdev.h"
-#include "otx2_flow.h"
-
-
-#define VLAN_ID_MATCH 0x1
-#define VTAG_F_MATCH 0x2
-#define MAC_ADDR_MATCH 0x4
-#define QINQ_F_MATCH 0x8
-#define VLAN_DROP 0x10
-#define DEF_F_ENTRY 0x20
-
-enum vtag_cfg_dir {
- VTAG_TX,
- VTAG_RX
-};
-
-static int
-nix_vlan_mcam_enb_dis(struct otx2_eth_dev *dev,
- uint32_t entry, const int enable)
-{
- struct npc_mcam_ena_dis_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- if (enable)
- req = otx2_mbox_alloc_msg_npc_mcam_ena_entry(mbox);
- else
- req = otx2_mbox_alloc_msg_npc_mcam_dis_entry(mbox);
-
- req->entry = entry;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- return rc;
-}
-
-static void
-nix_set_rx_vlan_action(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry, bool qinq, bool drop)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int pcifunc = otx2_pfvf_func(dev->pf, dev->vf);
- uint64_t action = 0, vtag_action = 0;
-
- action = NIX_RX_ACTIONOP_UCAST;
-
- if (eth_dev->data->dev_conf.rxmode.mq_mode == RTE_ETH_MQ_RX_RSS) {
- action = NIX_RX_ACTIONOP_RSS;
- action |= (uint64_t)(dev->rss_info.alg_idx) << 56;
- }
-
- action |= (uint64_t)pcifunc << 4;
- entry->action = action;
-
- if (drop) {
- entry->action &= ~((uint64_t)0xF);
- entry->action |= NIX_RX_ACTIONOP_DROP;
- return;
- }
-
- if (!qinq) {
- /* VTAG0 fields denote CTAG in single vlan case */
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- vtag_action |= (NPC_LID_LB << 8);
- vtag_action |= NIX_RX_VTAGACTION_VTAG0_RELPTR;
- } else {
- /* VTAG0 & VTAG1 fields denote CTAG & STAG respectively */
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 15);
- vtag_action |= (NPC_LID_LB << 8);
- vtag_action |= NIX_RX_VTAGACTION_VTAG1_RELPTR;
- vtag_action |= (NIX_RX_VTAGACTION_VTAG_VALID << 47);
- vtag_action |= ((uint64_t)(NPC_LID_LB) << 40);
- vtag_action |= (NIX_RX_VTAGACTION_VTAG0_RELPTR << 32);
- }
-
- entry->vtag_action = vtag_action;
-}
-
-static void
-nix_set_tx_vlan_action(struct mcam_entry *entry, enum rte_vlan_type type,
- int vtag_index)
-{
- union {
- uint64_t reg;
- struct nix_tx_vtag_action_s act;
- } vtag_action;
-
- uint64_t action;
-
- action = NIX_TX_ACTIONOP_UCAST_DEFAULT;
-
- /*
- * Take the offset from LA since, for an untagged packet,
- * lbptr is zero.
- */
- if (type == RTE_ETH_VLAN_TYPE_OUTER) {
- vtag_action.act.vtag0_def = vtag_index;
- vtag_action.act.vtag0_lid = NPC_LID_LA;
- vtag_action.act.vtag0_op = NIX_TX_VTAGOP_INSERT;
- vtag_action.act.vtag0_relptr = NIX_TX_VTAGACTION_VTAG0_RELPTR;
- } else {
- vtag_action.act.vtag1_def = vtag_index;
- vtag_action.act.vtag1_lid = NPC_LID_LA;
- vtag_action.act.vtag1_op = NIX_TX_VTAGOP_INSERT;
- vtag_action.act.vtag1_relptr = NIX_TX_VTAGACTION_VTAG1_RELPTR;
- }
-
- entry->action = action;
- entry->vtag_action = vtag_action.reg;
-}
-
-static int
-nix_vlan_mcam_free(struct otx2_eth_dev *dev, uint32_t entry)
-{
- struct npc_mcam_free_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_free_entry(mbox);
- req->entry = entry;
-
- rc = otx2_mbox_process_msg(mbox, NULL);
- return rc;
-}
-
-static int
-nix_vlan_mcam_write(struct rte_eth_dev *eth_dev, uint16_t ent_idx,
- struct mcam_entry *entry, uint8_t intf, uint8_t ena)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_write_entry_req *req;
- struct otx2_mbox *mbox = dev->mbox;
- struct msghdr *rsp;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_write_entry(mbox);
-
- req->entry = ent_idx;
- req->intf = intf;
- req->enable_entry = ena;
- memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- return rc;
-}
-
-static int
-nix_vlan_mcam_alloc_and_write(struct rte_eth_dev *eth_dev,
- struct mcam_entry *entry,
- uint8_t intf, bool drop)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct npc_mcam_alloc_and_write_entry_req *req;
- struct npc_mcam_alloc_and_write_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- int rc = -EINVAL;
-
- req = otx2_mbox_alloc_msg_npc_mcam_alloc_and_write_entry(mbox);
-
- if (intf == NPC_MCAM_RX) {
- if (!drop && dev->vlan_info.def_rx_mcam_idx) {
- req->priority = NPC_MCAM_HIGHER_PRIO;
- req->ref_entry = dev->vlan_info.def_rx_mcam_idx;
- } else if (drop && dev->vlan_info.qinq_mcam_idx) {
- req->priority = NPC_MCAM_LOWER_PRIO;
- req->ref_entry = dev->vlan_info.qinq_mcam_idx;
- } else {
- req->priority = NPC_MCAM_ANY_PRIO;
- req->ref_entry = 0;
- }
- } else {
- req->priority = NPC_MCAM_ANY_PRIO;
- req->ref_entry = 0;
- }
-
- req->intf = intf;
- req->enable_entry = 1;
- memcpy(&req->entry_data, entry, sizeof(struct mcam_entry));
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- return rsp->entry;
-}
-
-static void
-nix_vlan_update_mac(struct rte_eth_dev *eth_dev, int mcam_index,
- int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- volatile uint8_t *key_data, *key_mask;
- struct npc_mcam_read_entry_req *req;
- struct npc_mcam_read_entry_rsp *rsp;
- struct otx2_mbox *mbox = dev->mbox;
- uint64_t mcam_data, mcam_mask;
- struct mcam_entry entry;
- uint8_t intf, mcam_ena;
- int idx, rc = -EINVAL;
- uint8_t *mac_addr;
-
- memset(&entry, 0, sizeof(struct mcam_entry));
-
- /* Read entry first */
- req = otx2_mbox_alloc_msg_npc_mcam_read_entry(mbox);
-
- req->entry = mcam_index;
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc) {
- otx2_err("Failed to read entry %d", mcam_index);
- return;
- }
-
- entry = rsp->entry_data;
- intf = rsp->intf;
- mcam_ena = rsp->enable;
-
- /* Update mcam address */
- key_data = (volatile uint8_t *)entry.kw;
- key_mask = (volatile uint8_t *)entry.kw_mask;
-
- if (enable) {
- mcam_mask = 0;
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
-
- } else {
- mcam_data = 0ULL;
- mac_addr = dev->mac_addr;
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- mcam_mask = BIT_ULL(48) - 1;
-
- otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
- &mcam_data, mkex->la_xtract.len + 1);
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
- }
-
- /* Write back the mcam entry */
- rc = nix_vlan_mcam_write(eth_dev, mcam_index,
- &entry, intf, mcam_ena);
- if (rc) {
- otx2_err("Failed to write entry %d", mcam_index);
- return;
- }
-}
-
-void
-otx2_nix_vlan_update_promisc(struct rte_eth_dev *eth_dev, int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
-
- /* Already in required mode */
- if (enable == vlan->promisc_on)
- return;
-
- /* Update default rx entry */
- if (vlan->def_rx_mcam_idx)
- nix_vlan_update_mac(eth_dev, vlan->def_rx_mcam_idx, enable);
-
- /* Update all other rx filter entries */
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next)
- nix_vlan_update_mac(eth_dev, entry->mcam_idx, enable);
-
- vlan->promisc_on = enable;
-}
-
-/* Configure mcam entry with required MCAM search rules */
-static int
-nix_vlan_mcam_config(struct rte_eth_dev *eth_dev,
- uint16_t vlan_id, uint16_t flags)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- volatile uint8_t *key_data, *key_mask;
- uint64_t mcam_data, mcam_mask;
- struct mcam_entry entry;
- uint8_t *mac_addr;
- int idx, kwi = 0;
-
- memset(&entry, 0, sizeof(struct mcam_entry));
- key_data = (volatile uint8_t *)entry.kw;
- key_mask = (volatile uint8_t *)entry.kw_mask;
-
- /* Channel base extracted to KW0[11:0] */
- entry.kw[kwi] = dev->rx_chan_base;
- entry.kw_mask[kwi] = BIT_ULL(12) - 1;
-
- /* Adds vlan_id & LB CTAG flag to MCAM KW */
- if (flags & VLAN_ID_MATCH) {
- entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG_QINQ)
- << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |=
- (0xF & ~(NPC_LT_LB_CTAG ^ NPC_LT_LB_STAG_QINQ))
- << mkex->lb_lt_offset;
-
- mcam_data = (uint16_t)vlan_id;
- mcam_mask = (BIT_ULL(16) - 1);
- otx2_mbox_memcpy(key_data + mkex->lb_xtract.key_off,
- &mcam_data, mkex->lb_xtract.len);
- otx2_mbox_memcpy(key_mask + mkex->lb_xtract.key_off,
- &mcam_mask, mkex->lb_xtract.len);
- }
-
- /* Adds LB STAG flag to MCAM KW */
- if (flags & QINQ_F_MATCH) {
- entry.kw[kwi] |= NPC_LT_LB_STAG_QINQ << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |= 0xFULL << mkex->lb_lt_offset;
- }
-
- /* Adds LB CTAG & LB STAG flags to MCAM KW */
- if (flags & VTAG_F_MATCH) {
- entry.kw[kwi] |= (NPC_LT_LB_CTAG | NPC_LT_LB_STAG_QINQ)
- << mkex->lb_lt_offset;
- entry.kw_mask[kwi] |=
- (0xF & ~(NPC_LT_LB_CTAG ^ NPC_LT_LB_STAG_QINQ))
- << mkex->lb_lt_offset;
- }
-
- /* Adds port MAC address to MCAM KW */
- if (flags & MAC_ADDR_MATCH) {
- mcam_data = 0ULL;
- mac_addr = dev->mac_addr;
- for (idx = RTE_ETHER_ADDR_LEN - 1; idx >= 0; idx--)
- mcam_data |= ((uint64_t)*mac_addr++) << (8 * idx);
-
- mcam_mask = BIT_ULL(48) - 1;
- otx2_mbox_memcpy(key_data + mkex->la_xtract.key_off,
- &mcam_data, mkex->la_xtract.len + 1);
- otx2_mbox_memcpy(key_mask + mkex->la_xtract.key_off,
- &mcam_mask, mkex->la_xtract.len + 1);
- }
-
- /* VLAN_DROP: drop action for all vlan packets when the filter is on.
- * For QinQ, enable the vtag action for both outer & inner tags.
- */
- if (flags & VLAN_DROP)
- nix_set_rx_vlan_action(eth_dev, &entry, false, true);
- else if (flags & QINQ_F_MATCH)
- nix_set_rx_vlan_action(eth_dev, &entry, true, false);
- else
- nix_set_rx_vlan_action(eth_dev, &entry, false, false);
-
- if (flags & DEF_F_ENTRY)
- dev->vlan_info.def_rx_mcam_ent = entry;
-
- return nix_vlan_mcam_alloc_and_write(eth_dev, &entry, NIX_INTF_RX,
- flags & VLAN_DROP);
-}
-
-/* Installs/Removes/Modifies default rx entry */
-static int
-nix_vlan_handle_default_rx_entry(struct rte_eth_dev *eth_dev, bool strip,
- bool filter, bool enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- uint16_t flags = 0;
- int mcam_idx, rc;
-
- /* Use the default mcam entry either to drop vlan traffic when
- * the vlan filter is on, or to strip the vtag when stripping is
- * enabled. Allocate a default entry that matches the port mac
- * address and vtag (ctag/stag) flags, with drop action.
- */
- if (!vlan->def_rx_mcam_idx) {
- if (!eth_dev->data->promiscuous)
- flags = MAC_ADDR_MATCH;
-
- if (filter && enable)
- flags |= VTAG_F_MATCH | VLAN_DROP;
- else if (strip && enable)
- flags |= VTAG_F_MATCH;
- else
- return 0;
-
- flags |= DEF_F_ENTRY;
-
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0, flags);
- if (mcam_idx < 0) {
- otx2_err("Failed to config vlan mcam");
- return -mcam_idx;
- }
-
- vlan->def_rx_mcam_idx = mcam_idx;
- return 0;
- }
-
- /* Filter is already enabled, so packets would be dropped anyway. No
- * mcam entry processing is needed to enable stripping.
- */
-
- /* Filter disable request */
- if (vlan->filter_on && filter && !enable) {
- vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
-
- /* Free default rx entry only when
- * 1. strip is not on and
- * 2. qinq entry is allocated before default entry.
- */
- if (vlan->strip_on ||
- (vlan->qinq_on && !vlan->qinq_before_def)) {
- if (eth_dev->data->dev_conf.rxmode.mq_mode ==
- RTE_ETH_MQ_RX_RSS)
- vlan->def_rx_mcam_ent.action |=
- NIX_RX_ACTIONOP_RSS;
- else
- vlan->def_rx_mcam_ent.action |=
- NIX_RX_ACTIONOP_UCAST;
- return nix_vlan_mcam_write(eth_dev,
- vlan->def_rx_mcam_idx,
- &vlan->def_rx_mcam_ent,
- NIX_INTF_RX, 1);
- } else {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
- }
-
- /* Filter enable request */
- if (!vlan->filter_on && filter && enable) {
- vlan->def_rx_mcam_ent.action &= ~((uint64_t)0xF);
- vlan->def_rx_mcam_ent.action |= NIX_RX_ACTIONOP_DROP;
- return nix_vlan_mcam_write(eth_dev, vlan->def_rx_mcam_idx,
- &vlan->def_rx_mcam_ent, NIX_INTF_RX, 1);
- }
-
- /* Strip disable request */
- if (vlan->strip_on && strip && !enable) {
- if (!vlan->filter_on &&
- !(vlan->qinq_on && !vlan->qinq_before_def)) {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_rx_mcam_idx = 0;
- }
- }
-
- return 0;
-}
-
-/* Installs/Removes default tx entry */
-static int
-nix_vlan_handle_default_tx_entry(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, int vtag_index,
- int enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct mcam_entry entry;
- uint16_t pf_func;
- int rc;
-
- if (!vlan->def_tx_mcam_idx && enable) {
- memset(&entry, 0, sizeof(struct mcam_entry));
-
- /* Only pf_func is matched, swap its bytes */
- pf_func = (dev->pf_func & 0xff) << 8;
- pf_func |= (dev->pf_func >> 8) & 0xff;
-
- /* PF Func extracted to KW1[47:32] */
- entry.kw[0] = (uint64_t)pf_func << 32;
- entry.kw_mask[0] = (BIT_ULL(16) - 1) << 32;
-
- nix_set_tx_vlan_action(&entry, type, vtag_index);
- vlan->def_tx_mcam_ent = entry;
-
- return nix_vlan_mcam_alloc_and_write(eth_dev, &entry,
- NIX_INTF_TX, 0);
- }
-
- if (vlan->def_tx_mcam_idx && !enable) {
- rc = nix_vlan_mcam_free(dev, vlan->def_tx_mcam_idx);
- if (rc)
- return rc;
- vlan->def_tx_mcam_idx = 0;
- }
-
- return 0;
-}
-
-/* Configure vlan stripping on or off */
-static int
-nix_vlan_hw_strip(struct rte_eth_dev *eth_dev, const uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_mbox *mbox = dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- int rc = -EINVAL;
-
- rc = nix_vlan_handle_default_rx_entry(eth_dev, true, false, enable);
- if (rc) {
- otx2_err("Failed to config default rx entry");
- return rc;
- }
-
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
- /* cfg_type = 1 for rx vlan cfg */
- vtag_cfg->cfg_type = VTAG_RX;
-
- if (enable)
- vtag_cfg->rx.strip_vtag = 1;
- else
- vtag_cfg->rx.strip_vtag = 0;
-
- /* Always capture */
- vtag_cfg->rx.capture_vtag = 1;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
- /* Use rx vtag type index[0] for now */
- vtag_cfg->rx.vtag_type = 0;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- dev->vlan_info.strip_on = enable;
- return rc;
-}
-
-/* Configure vlan filtering on or off for all vlans if vlan_id == 0 */
-static int
-nix_vlan_hw_filter(struct rte_eth_dev *eth_dev, const uint8_t enable,
- uint16_t vlan_id)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int rc = -EINVAL;
-
- if (!vlan_id && enable) {
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
- enable);
- if (rc) {
- otx2_err("Failed to config vlan mcam");
- return rc;
- }
- dev->vlan_info.filter_on = enable;
- return 0;
- }
-
- /* Enable/disable existing vlan filter entries */
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (vlan_id) {
- if (entry->vlan_id == vlan_id) {
- rc = nix_vlan_mcam_enb_dis(dev,
- entry->mcam_idx,
- enable);
- if (rc)
- return rc;
- }
- } else {
- rc = nix_vlan_mcam_enb_dis(dev, entry->mcam_idx,
- enable);
- if (rc)
- return rc;
- }
- }
-
- if (!vlan_id && !enable) {
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true,
- enable);
- if (rc) {
- otx2_err("Failed to config vlan mcam");
- return rc;
- }
- dev->vlan_info.filter_on = enable;
- return 0;
- }
-
- return 0;
-}
-
-/* Enable/disable vlan filtering for the given vlan_id */
-int
-otx2_nix_vlan_filter_set(struct rte_eth_dev *eth_dev, uint16_t vlan_id,
- int on)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int entry_exists = 0;
- int rc = -EINVAL;
- int mcam_idx;
-
- if (!vlan_id) {
- otx2_err("Vlan Id can't be zero");
- return rc;
- }
-
- if (!vlan->def_rx_mcam_idx) {
- otx2_err("Vlan Filtering is disabled, enable it first");
- return rc;
- }
-
- if (on) {
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (entry->vlan_id == vlan_id) {
- /* Vlan entry already exists */
- entry_exists = 1;
- /* Mcam entry already allocated */
- if (entry->mcam_idx) {
- rc = nix_vlan_hw_filter(eth_dev, on,
- vlan_id);
- return rc;
- }
- break;
- }
- }
-
- if (!entry_exists) {
- entry = rte_zmalloc("otx2_nix_vlan_entry",
- sizeof(struct vlan_entry), 0);
- if (!entry) {
- otx2_err("Failed to allocate memory");
- return -ENOMEM;
- }
- }
-
- /* Enables vlan_id & mac address based filtering */
- if (eth_dev->data->promiscuous)
- mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
- VLAN_ID_MATCH);
- else
- mcam_idx = nix_vlan_mcam_config(eth_dev, vlan_id,
- VLAN_ID_MATCH |
- MAC_ADDR_MATCH);
- if (mcam_idx < 0) {
- otx2_err("Failed to config vlan mcam");
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- return mcam_idx;
- }
-
- entry->mcam_idx = mcam_idx;
- if (!entry_exists) {
- entry->vlan_id = vlan_id;
- TAILQ_INSERT_HEAD(&vlan->fltr_tbl, entry, next);
- }
- } else {
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (entry->vlan_id == vlan_id) {
- rc = nix_vlan_mcam_free(dev, entry->mcam_idx);
- if (rc)
- return rc;
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- break;
- }
- }
- }
- return 0;
-}
-
-/* Configure double vlan(qinq) on or off */
-static int
-otx2_nix_config_double_vlan(struct rte_eth_dev *eth_dev,
- const uint8_t enable)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan_info;
- int mcam_idx;
- int rc;
-
- vlan_info = &dev->vlan_info;
-
- if (!enable) {
- if (!vlan_info->qinq_mcam_idx)
- return 0;
-
- rc = nix_vlan_mcam_free(dev, vlan_info->qinq_mcam_idx);
- if (rc)
- return rc;
-
- vlan_info->qinq_mcam_idx = 0;
- dev->vlan_info.qinq_on = 0;
- vlan_info->qinq_before_def = 0;
- return 0;
- }
-
- if (eth_dev->data->promiscuous)
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0, QINQ_F_MATCH);
- else
- mcam_idx = nix_vlan_mcam_config(eth_dev, 0,
- QINQ_F_MATCH | MAC_ADDR_MATCH);
- if (mcam_idx < 0)
- return mcam_idx;
-
- if (!vlan_info->def_rx_mcam_idx)
- vlan_info->qinq_before_def = 1;
-
- vlan_info->qinq_mcam_idx = mcam_idx;
- dev->vlan_info.qinq_on = 1;
- return 0;
-}
-
-int
-otx2_nix_vlan_offload_set(struct rte_eth_dev *eth_dev, int mask)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- uint64_t offloads = dev->rx_offloads;
- struct rte_eth_rxmode *rxmode;
- int rc = 0;
-
- rxmode = &eth_dev->data->dev_conf.rxmode;
-
- if (mask & RTE_ETH_VLAN_STRIP_MASK) {
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
- offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
- rc = nix_vlan_hw_strip(eth_dev, true);
- } else {
- offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
- rc = nix_vlan_hw_strip(eth_dev, false);
- }
- if (rc)
- goto done;
- }
-
- if (mask & RTE_ETH_VLAN_FILTER_MASK) {
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_FILTER) {
- offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- rc = nix_vlan_hw_filter(eth_dev, true, 0);
- } else {
- offloads &= ~RTE_ETH_RX_OFFLOAD_VLAN_FILTER;
- rc = nix_vlan_hw_filter(eth_dev, false, 0);
- }
- if (rc)
- goto done;
- }
-
- if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
- if (!dev->vlan_info.qinq_on) {
- offloads |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
- rc = otx2_nix_config_double_vlan(eth_dev, true);
- if (rc)
- goto done;
- }
- } else {
- if (dev->vlan_info.qinq_on) {
- offloads &= ~RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
- rc = otx2_nix_config_double_vlan(eth_dev, false);
- if (rc)
- goto done;
- }
- }
-
- if (offloads & (RTE_ETH_RX_OFFLOAD_VLAN_STRIP |
- RTE_ETH_RX_OFFLOAD_QINQ_STRIP)) {
- dev->rx_offloads |= offloads;
- dev->rx_offload_flags |= NIX_RX_OFFLOAD_VLAN_STRIP_F;
- otx2_eth_set_rx_function(eth_dev);
- }
-
-done:
- return rc;
-}
-
-int
-otx2_nix_vlan_tpid_set(struct rte_eth_dev *eth_dev,
- enum rte_vlan_type type, uint16_t tpid)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct nix_set_vlan_tpid *tpid_cfg;
- struct otx2_mbox *mbox = dev->mbox;
- int rc;
-
- tpid_cfg = otx2_mbox_alloc_msg_nix_set_vlan_tpid(mbox);
-
- tpid_cfg->tpid = tpid;
- if (type == RTE_ETH_VLAN_TYPE_OUTER)
- tpid_cfg->vlan_type = NIX_VLAN_TYPE_OUTER;
- else
- tpid_cfg->vlan_type = NIX_VLAN_TYPE_INNER;
-
- rc = otx2_mbox_process(mbox);
- if (rc)
- return rc;
-
- if (type == RTE_ETH_VLAN_TYPE_OUTER)
- dev->vlan_info.outer_vlan_tpid = tpid;
- else
- dev->vlan_info.inner_vlan_tpid = tpid;
- return 0;
-}
-
-int
-otx2_nix_vlan_pvid_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
- struct otx2_eth_dev *otx2_dev = otx2_eth_pmd_priv(dev);
- struct otx2_mbox *mbox = otx2_dev->mbox;
- struct nix_vtag_config *vtag_cfg;
- struct nix_vtag_config_rsp *rsp;
- struct otx2_vlan_info *vlan;
- int rc, rc1, vtag_index = 0;
-
- if (vlan_id == 0) {
- otx2_err("vlan id can't be zero");
- return -EINVAL;
- }
-
- vlan = &otx2_dev->vlan_info;
-
- if (on && vlan->pvid_insert_on && vlan->pvid == vlan_id) {
- otx2_err("pvid %d is already enabled", vlan_id);
- return -EINVAL;
- }
-
- if (on && vlan->pvid_insert_on && vlan->pvid != vlan_id) {
- otx2_err("another pvid is enabled, disable that first");
- return -EINVAL;
- }
-
- /* No pvid active */
- if (!on && !vlan->pvid_insert_on)
- return 0;
-
- /* Given pvid already disabled */
- if (!on && vlan->pvid != vlan_id)
- return 0;
-
- vtag_cfg = otx2_mbox_alloc_msg_nix_vtag_cfg(mbox);
-
- if (on) {
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
-
- if (vlan->outer_vlan_tpid)
- vtag_cfg->tx.vtag0 = ((uint32_t)vlan->outer_vlan_tpid
- << 16) | vlan_id;
- else
- vtag_cfg->tx.vtag0 =
- ((RTE_ETHER_TYPE_VLAN << 16) | vlan_id);
- vtag_cfg->tx.cfg_vtag0 = 1;
- } else {
- vtag_cfg->cfg_type = VTAG_TX;
- vtag_cfg->vtag_size = NIX_VTAGSIZE_T4;
-
- vtag_cfg->tx.vtag0_idx = vlan->outer_vlan_idx;
- vtag_cfg->tx.free_vtag0 = 1;
- }
-
- rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc)
- return rc;
-
- if (on) {
- vtag_index = rsp->vtag0_idx;
- } else {
- vlan->pvid = 0;
- vlan->pvid_insert_on = 0;
- vlan->outer_vlan_idx = 0;
- }
-
- rc = nix_vlan_handle_default_tx_entry(dev, RTE_ETH_VLAN_TYPE_OUTER,
- vtag_index, on);
- if (rc < 0) {
- printf("Default tx entry failed with rc %d\n", rc);
- vtag_cfg->tx.vtag0_idx = vtag_index;
- vtag_cfg->tx.free_vtag0 = 1;
- vtag_cfg->tx.cfg_vtag0 = 0;
-
- rc1 = otx2_mbox_process_msg(mbox, (void *)&rsp);
- if (rc1)
- otx2_err("Vtag free failed");
-
- return rc;
- }
-
- if (on) {
- vlan->pvid = vlan_id;
- vlan->pvid_insert_on = 1;
- vlan->outer_vlan_idx = vtag_index;
- }
-
- return 0;
-}
-
-void otx2_nix_vlan_strip_queue_set(__rte_unused struct rte_eth_dev *dev,
- __rte_unused uint16_t queue,
- __rte_unused int on)
-{
- otx2_err("Not Supported");
-}
-
-static int
-nix_vlan_rx_mkex_offset(uint64_t mask)
-{
- int nib_count = 0;
-
- while (mask) {
- nib_count += mask & 1;
- mask >>= 1;
- }
-
- return nib_count * 4;
-}
-
-static int
-nix_vlan_get_mkex_info(struct otx2_eth_dev *dev)
-{
- struct vlan_mkex_info *mkex = &dev->vlan_info.mkex;
- struct otx2_npc_flow_info *npc = &dev->npc_flow;
- struct npc_xtract_info *x_info = NULL;
- uint64_t rx_keyx;
- otx2_dxcfg_t *p;
- int rc = -EINVAL;
-
- if (npc == NULL) {
- otx2_err("Missing npc mkex configuration");
- return rc;
- }
-
-#define NPC_KEX_CHAN_NIBBLE_ENA 0x7ULL
-#define NPC_KEX_LB_LTYPE_NIBBLE_ENA 0x1000ULL
-#define NPC_KEX_LB_LTYPE_NIBBLE_MASK 0xFFFULL
-
- rx_keyx = npc->keyx_supp_nmask[NPC_MCAM_RX];
- if ((rx_keyx & NPC_KEX_CHAN_NIBBLE_ENA) != NPC_KEX_CHAN_NIBBLE_ENA)
- return rc;
-
- if ((rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_ENA) !=
- NPC_KEX_LB_LTYPE_NIBBLE_ENA)
- return rc;
-
- mkex->lb_lt_offset =
- nix_vlan_rx_mkex_offset(rx_keyx & NPC_KEX_LB_LTYPE_NIBBLE_MASK);
-
- p = &npc->prx_dxcfg;
- x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LA][NPC_LT_LA_ETHER].xtract[0];
- memcpy(&mkex->la_xtract, x_info, sizeof(struct npc_xtract_info));
- x_info = &(*p)[NPC_MCAM_RX][NPC_LID_LB][NPC_LT_LB_CTAG].xtract[0];
- memcpy(&mkex->lb_xtract, x_info, sizeof(struct npc_xtract_info));
-
- return 0;
-}
-
-static void nix_vlan_reinstall_vlan_filters(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct vlan_entry *entry;
- int rc;
-
- /* VLAN filters can't be set without turning filtering on */
- rc = nix_vlan_handle_default_rx_entry(eth_dev, false, true, true);
- if (rc) {
- otx2_err("Failed to reinstall vlan filters");
- return;
- }
-
- TAILQ_FOREACH(entry, &dev->vlan_info.fltr_tbl, next) {
- rc = otx2_nix_vlan_filter_set(eth_dev, entry->vlan_id, true);
- if (rc)
- otx2_err("Failed to reinstall filter for vlan:%d",
- entry->vlan_id);
- }
-}
-
-int
-otx2_nix_vlan_offload_init(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- int rc, mask;
-
- /* Port initialized for the first time or restarted */
- if (!dev->configured) {
- rc = nix_vlan_get_mkex_info(dev);
- if (rc) {
- otx2_err("Failed to get vlan mkex info rc=%d", rc);
- return rc;
- }
-
- TAILQ_INIT(&dev->vlan_info.fltr_tbl);
- } else {
- /* Reinstall all mcam entries now if filter offload is set */
- if (eth_dev->data->dev_conf.rxmode.offloads &
- RTE_ETH_RX_OFFLOAD_VLAN_FILTER)
- nix_vlan_reinstall_vlan_filters(eth_dev);
- }
-
- mask =
- RTE_ETH_VLAN_STRIP_MASK | RTE_ETH_VLAN_FILTER_MASK;
- rc = otx2_nix_vlan_offload_set(eth_dev, mask);
- if (rc) {
- otx2_err("Failed to set vlan offload rc=%d", rc);
- return rc;
- }
-
- return 0;
-}
-
-int
-otx2_nix_vlan_fini(struct rte_eth_dev *eth_dev)
-{
- struct otx2_eth_dev *dev = otx2_eth_pmd_priv(eth_dev);
- struct otx2_vlan_info *vlan = &dev->vlan_info;
- struct vlan_entry *entry;
- int rc;
-
- TAILQ_FOREACH(entry, &vlan->fltr_tbl, next) {
- if (!dev->configured) {
- TAILQ_REMOVE(&vlan->fltr_tbl, entry, next);
- rte_free(entry);
- } else {
- /* MCAM entries freed by flow_fini & lf_free on
- * port stop.
- */
- entry->mcam_idx = 0;
- }
- }
-
- if (!dev->configured) {
- if (vlan->def_rx_mcam_idx) {
- rc = nix_vlan_mcam_free(dev, vlan->def_rx_mcam_idx);
- if (rc)
- return rc;
- }
- }
-
- otx2_nix_config_double_vlan(eth_dev, false);
- vlan->def_rx_mcam_idx = 0;
- return 0;
-}
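
The MCAM helpers above all follow the same shape: compose match criteria from flag bits into one key/mask pair, then attach an action. A simplified sketch of that composition with an invented entry layout (the real key layout comes from the mkex profile, not from fixed bit positions):

#include <stdio.h>
#include <stdint.h>

#define MATCH_VLAN_ID  0x1
#define MATCH_MAC_ADDR 0x4
#define VLAN_DROP      0x10

struct toy_entry {
        uint64_t key;    /* bits to match            */
        uint64_t mask;   /* which key bits are valid */
        uint64_t action; /* 1 = drop, 0 = accept     */
};

static void config_entry(struct toy_entry *e, uint16_t vlan_id,
                         uint64_t mac, uint16_t flags)
{
        e->key = 0;
        e->mask = 0;
        if (flags & MATCH_VLAN_ID) {    /* vlan id in key bits 0..15 */
                e->key |= vlan_id;
                e->mask |= 0xffff;
        }
        if (flags & MATCH_MAC_ADDR) {   /* 48-bit mac in key bits 16..63 */
                e->key |= mac << 16;
                e->mask |= 0xffffffffffffull << 16;
        }
        e->action = !!(flags & VLAN_DROP);
}

int main(void)
{
        struct toy_entry e;

        config_entry(&e, 100, 0x0002aabbccddull,
                     MATCH_VLAN_ID | MATCH_MAC_ADDR);
        printf("key=%016llx mask=%016llx action=%llu\n",
               (unsigned long long)e.key, (unsigned long long)e.mask,
               (unsigned long long)e.action);
        return 0;
}
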
diff --git a/drivers/net/octeontx2/version.map b/drivers/net/octeontx2/version.map
deleted file mode 100644
index c2e0723b4c..0000000000
--- a/drivers/net/octeontx2/version.map
+++ /dev/null
@@ -1,3 +0,0 @@
-DPDK_22 {
- local: *;
-};
diff --git a/drivers/net/octeontx_ep/otx2_ep_vf.h b/drivers/net/octeontx_ep/otx2_ep_vf.h
index 9326925025..dc720368ab 100644
--- a/drivers/net/octeontx_ep/otx2_ep_vf.h
+++ b/drivers/net/octeontx_ep/otx2_ep_vf.h
@@ -113,7 +113,7 @@
#define otx2_read64(addr) rte_read64_relaxed((void *)(addr))
#define otx2_write64(val, addr) rte_write64_relaxed((val), (void *)(addr))
-#define PCI_DEVID_OCTEONTX2_EP_NET_VF 0xB203 /* OCTEON TX2 EP mode */
+#define PCI_DEVID_CN9K_EP_NET_VF 0xB203 /* OCTEON 9 EP mode */
#define PCI_DEVID_CN98XX_EP_NET_VF 0xB103
int
diff --git a/drivers/net/octeontx_ep/otx_ep_common.h b/drivers/net/octeontx_ep/otx_ep_common.h
index fd5e8ed263..8a59a1a194 100644
--- a/drivers/net/octeontx_ep/otx_ep_common.h
+++ b/drivers/net/octeontx_ep/otx_ep_common.h
@@ -150,7 +150,7 @@ struct otx_ep_iq_config {
/** The instruction (input) queue.
* The input queue is used to post raw (instruction) mode data or packet data
- * to OCTEON TX2 device from the host. Each IQ of a OTX_EP EP VF device has one
+ * to OCTEON 9 device from the host. Each IQ of a OTX_EP EP VF device has one
* such structure to represent it.
*/
struct otx_ep_instr_queue {
@@ -170,12 +170,12 @@ struct otx_ep_instr_queue {
/* Input ring index, where the driver should write the next packet */
uint32_t host_write_index;
- /* Input ring index, where the OCTEON TX2 should read the next packet */
+ /* Input ring index, where the OCTEON 9 should read the next packet */
uint32_t otx_read_index;
uint32_t reset_instr_cnt;
- /** This index aids in finding the window in the queue where OCTEON TX2
+ /** This index aids in finding the window in the queue where OCTEON 9
* has read the commands.
*/
uint32_t flush_index;
@@ -195,7 +195,7 @@ struct otx_ep_instr_queue {
/* OTX_EP instruction count register for this ring. */
void *inst_cnt_reg;
- /* Number of instructions pending to be posted to OCTEON TX2. */
+ /* Number of instructions pending to be posted to OCTEON 9. */
uint32_t fill_cnt;
/* Statistics for this input queue. */
@@ -230,8 +230,8 @@ union otx_ep_rh {
};
#define OTX_EP_RH_SIZE (sizeof(union otx_ep_rh))
-/** Information about packet DMA'ed by OCTEON TX2.
- * The format of the information available at Info Pointer after OCTEON TX2
+/** Information about packet DMA'ed by OCTEON 9.
+ * The format of the information available at Info Pointer after OCTEON 9
* has posted a packet. Not all descriptors have valid information. Only
* the Info field of the first descriptor for a packet has information
* about the packet.
@@ -295,7 +295,7 @@ struct otx_ep_droq {
/* Driver should read the next packet at this index */
uint32_t read_idx;
- /* OCTEON TX2 will write the next packet at this index */
+ /* OCTEON 9 will write the next packet at this index */
uint32_t write_idx;
/* At this index, the driver will refill the descriptor's buffer */
@@ -326,7 +326,7 @@ struct otx_ep_droq {
*/
void *pkts_credit_reg;
- /** Pointer to the mapped packet sent register. OCTEON TX2 writes the
+ /** Pointer to the mapped packet sent register. OCTEON 9 writes the
* number of packets DMA'ed to host memory in this register.
*/
void *pkts_sent_reg;
diff --git a/drivers/net/octeontx_ep/otx_ep_ethdev.c b/drivers/net/octeontx_ep/otx_ep_ethdev.c
index c3cec6d833..806add246b 100644
--- a/drivers/net/octeontx_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeontx_ep/otx_ep_ethdev.c
@@ -102,7 +102,7 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
ret = otx_ep_vf_setup_device(otx_epvf);
otx_epvf->fn_list.disable_io_queues(otx_epvf);
break;
- case PCI_DEVID_OCTEONTX2_EP_NET_VF:
+ case PCI_DEVID_CN9K_EP_NET_VF:
case PCI_DEVID_CN98XX_EP_NET_VF:
otx_epvf->chip_id = dev_id;
ret = otx2_ep_vf_setup_device(otx_epvf);
@@ -137,7 +137,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
otx_epvf->eth_dev->rx_pkt_burst = &otx_ep_recv_pkts;
if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF)
otx_epvf->eth_dev->tx_pkt_burst = &otx_ep_xmit_pkts;
- else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX2_EP_NET_VF ||
+ else if (otx_epvf->chip_id == PCI_DEVID_CN9K_EP_NET_VF ||
otx_epvf->chip_id == PCI_DEVID_CN98XX_EP_NET_VF)
otx_epvf->eth_dev->tx_pkt_burst = &otx2_ep_xmit_pkts;
ethdev_queues = (uint32_t)(otx_epvf->sriov_info.rings_per_vf);
@@ -422,7 +422,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
otx_epvf->pdev = pdev;
otx_epdev_init(otx_epvf);
- if (pdev->id.device_id == PCI_DEVID_OCTEONTX2_EP_NET_VF)
+ if (pdev->id.device_id == PCI_DEVID_CN9K_EP_NET_VF)
otx_epvf->pkind = SDP_OTX2_PKIND;
else
otx_epvf->pkind = SDP_PKIND;
@@ -450,7 +450,7 @@ otx_ep_eth_dev_pci_remove(struct rte_pci_device *pci_dev)
/* Set of PCI devices this driver supports */
static const struct rte_pci_id pci_id_otx_ep_map[] = {
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX_EP_VF) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_OCTEONTX2_EP_NET_VF) },
+ { RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN9K_EP_NET_VF) },
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_CAVIUM, PCI_DEVID_CN98XX_EP_NET_VF) },
{ .vendor_id = 0, /* sentinel */ }
};
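
The pci_id_otx_ep_map table above uses the usual sentinel-terminated convention: the array carries no explicit length, and iteration stops at the zeroed entry. A minimal sketch of the same pattern (toy struct; the device IDs are the ones named in the diff):

#include <stdio.h>
#include <stdint.h>

struct toy_pci_id {
        uint16_t vendor_id;
        uint16_t device_id;
};

/* Table ends with a zeroed sentinel instead of carrying a length. */
static const struct toy_pci_id id_map[] = {
        { 0x177d, 0xb203 },
        { 0x177d, 0xb103 },
        { 0, 0 },       /* sentinel */
};

static int match(uint16_t vendor, uint16_t device)
{
        const struct toy_pci_id *id;

        for (id = id_map; id->vendor_id != 0; id++)
                if (id->vendor_id == vendor && id->device_id == device)
                        return 1;
        return 0;
}

int main(void)
{
        printf("b203 supported: %d\n", match(0x177d, 0xb203));
        printf("ffff supported: %d\n", match(0x177d, 0xffff));
        return 0;
}
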
diff --git a/drivers/net/octeontx_ep/otx_ep_rxtx.c b/drivers/net/octeontx_ep/otx_ep_rxtx.c
index 9338b30672..59df6ad857 100644
--- a/drivers/net/octeontx_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeontx_ep/otx_ep_rxtx.c
@@ -85,7 +85,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq = otx_ep->instr_queue[iq_no];
q_size = conf->iq.instr_type * num_descs;
- /* IQ memory creation for Instruction submission to OCTEON TX2 */
+ /* IQ memory creation for Instruction submission to OCTEON 9 */
iq->iq_mz = rte_eth_dma_zone_reserve(otx_ep->eth_dev,
"instr_queue", iq_no, q_size,
OTX_EP_PCI_RING_ALIGN,
@@ -106,8 +106,8 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq->nb_desc = num_descs;
/* Create a IQ request list to hold requests that have been
- * posted to OCTEON TX2. This list will be used for freeing the IQ
- * data buffer(s) later once the OCTEON TX2 fetched the requests.
+ * posted to OCTEON 9. This list will be used for freeing the IQ
+ * data buffer(s) later once the OCTEON 9 fetched the requests.
*/
iq->req_list = rte_zmalloc_socket("request_list",
(iq->nb_desc * OTX_EP_IQREQ_LIST_SIZE),
@@ -450,7 +450,7 @@ post_iqcmd(struct otx_ep_instr_queue *iq, uint8_t *iqcmd)
uint8_t *iqptr, cmdsize;
/* This ensures that the read index does not wrap around to
- * the same position if queue gets full before OCTEON TX2 could
+ * the same position if queue gets full before OCTEON 9 could
* fetch any instr.
*/
if (iq->instr_pending > (iq->nb_desc - 1))
@@ -979,7 +979,7 @@ otx_ep_check_droq_pkts(struct otx_ep_droq *droq)
return new_pkts;
}
-/* Check for response arrival from OCTEON TX2
+/* Check for response arrival from OCTEON 9
* returns number of requests completed
*/
uint16_t
diff --git a/usertools/dpdk-devbind.py b/usertools/dpdk-devbind.py
index 6cea732228..ace4627218 100755
--- a/usertools/dpdk-devbind.py
+++ b/usertools/dpdk-devbind.py
@@ -65,11 +65,11 @@
intel_ntb_icx = {'Class': '06', 'Vendor': '8086', 'Device': '347e',
'SVendor': None, 'SDevice': None}
-octeontx2_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f9,a0fa',
+cnxk_sso = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f9,a0fa',
'SVendor': None, 'SDevice': None}
-octeontx2_npa = {'Class': '08', 'Vendor': '177d', 'Device': 'a0fb,a0fc',
+cnxk_npa = {'Class': '08', 'Vendor': '177d', 'Device': 'a0fb,a0fc',
'SVendor': None, 'SDevice': None}
-octeontx2_ree = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f4',
+cn9k_ree = {'Class': '08', 'Vendor': '177d', 'Device': 'a0f4',
'SVendor': None, 'SDevice': None}
network_devices = [network_class, cavium_pkx, avp_vnic, ifpga_class]
@@ -77,10 +77,10 @@
crypto_devices = [encryption_class, intel_processor_class]
dma_devices = [cnxk_dma, hisilicon_dma,
intel_idxd_spr, intel_ioat_bdw, intel_ioat_icx, intel_ioat_skx]
-eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, octeontx2_sso]
-mempool_devices = [cavium_fpa, octeontx2_npa]
+eventdev_devices = [cavium_sso, cavium_tim, intel_dlb, cnxk_sso]
+mempool_devices = [cavium_fpa, cnxk_npa]
compress_devices = [cavium_zip]
-regex_devices = [octeontx2_ree]
+regex_devices = [cn9k_ree]
misc_devices = [cnxk_bphy, cnxk_bphy_cgx, cnxk_inl_dev,
intel_ntb_skx, intel_ntb_icx]
--
2.34.1
^ permalink raw reply [relevance 1%]
* Re: vmxnet3 no longer functional on DPDK 21.11
@ 2021-12-06 1:52 3% ` Lewis Donzis
0 siblings, 0 replies; 200+ results
From: Lewis Donzis @ 2021-12-06 1:52 UTC (permalink / raw)
To: Bruce Richardson; +Cc: dev, yongwang
----- On Nov 30, 2021, at 7:42 AM, Bruce Richardson bruce.richardson@intel.com wrote:
> On Mon, Nov 29, 2021 at 02:45:15PM -0600, Lewis Donzis wrote:
>> Hello.
>> We just upgraded from 21.08 to 21.11 and it's rather astounding the
>> number of incompatible changes in three months. Not a big deal, just
>> kind of a surprise, that's all.
>> Anyway, the problem is that the vmxnet3 driver is no longer functional
>> on FreeBSD.
>> In drivers/net/vmxnet3/vmxnet3_ethdev.c, vmxnet3_dev_start() gets an
>> error calling rte_intr_enable(). So it logs "interrupt enable failed"
>> and returns an error.
>> In lib/eal/freebsd/eal_interrupts.c, rte_intr_enable() is returning an
>> error because rte_intr_dev_fd_get(intr_handle) is returning -1.
>> I don't see how that could ever return anything other than -1 since it
>> appears that there is no code that ever calls rte_intr_dev_fd_set()
>> with a value other than -1 on FreeBSD. Also weird to me is that even
>> if it didn't get an error, the switch statement that follows looks like
>> it will return an error in every case.
>> Nonetheless, it worked in 21.08, and I can't quite see why the
>> difference, so I must be missing something.
>> For the moment, I just commented the "return -EIO" in vmxnet3_ethdev.c,
>> and it's now working again, but that's obviously not the correct
>> solution.
>> Can someone who's knowledgable about this mechanism perhaps explain a
>> little bit about what's going on? I'll be happy to help troubleshoot.
>> It seems like it must be something simple, but I just don't see it yet.
>
> Hi
>
> if you have the chance, it would be useful if you could use "git bisect" to
> identify the commit in 21.11 that broke this driver. Looking through the
> logs for 21.11 I can't identify any particular likely-looking commit, so
> bisect is likely a good way to start looking into this.
>
> Regards,
> /Bruce
Hi, Bruce. git bisect is very time-consuming and very cool!
I went back to 21.08, about 1100 commits, and worked through the process, but then I realized that I had forgotten to run ninja on one of the steps, so I did it again.
I also re-checked it after the bisect, just to make sure that c87d435a4d79739c0cec2ed280b94b41cb908af7 is good, and 7a0935239b9eb817c65c03554a9954ddb8ea5044 is bad.
Thanks,
lew
Here's the result:
root@fbdev:/usr/local/share/dpdk-git # git bisect start
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
root@fbdev:/usr/local/share/dpdk-git # git bisect good 74bd4072996e64b0051d24d8d641554d225db196
Bisecting: 556 revisions left to test after this (roughly 9 steps)
[e2a289a788c0a128a15bc0f1099af7c031201ac5] net/ngbe: add mailbox process operations
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 277 revisions left to test after this (roughly 8 steps)
[5906be5af6570db8b70b307c96aace0b096d1a2c] ethdev: fix ID spelling in comments and log messages
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 138 revisions left to test after this (roughly 7 steps)
[a7c236b894a848c7bb9afb773a7e3c13615abaa8] net/cnxk: support meter ops get
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 69 revisions left to test after this (roughly 6 steps)
[14fc81aed73842d976dd19a93ca47e22d61c1759] ethdev: update modify field flow action
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 34 revisions left to test after this (roughly 5 steps)
[cdea571becb4dabf9962455f671af0c99594e380] common/sfc_efx/base: add flag to use Rx prefix user flag
root@fbdev:/usr/local/share/dpdk-git # git bisect good
Bisecting: 17 revisions left to test after this (roughly 4 steps)
[7a0935239b9eb817c65c03554a9954ddb8ea5044] ethdev: make fast-path functions to use new flat array
root@fbdev:/usr/local/share/dpdk-git # git bisect bad
Bisecting: 8 revisions left to test after this (roughly 3 steps)
[012bf708c20f4b23d055717e28f8de74887113d8] net/sfc: support group flows in tunnel offload
root@fbdev:/usr/local/share/dpdk-git # git bisect good
Bisecting: 4 revisions left to test after this (roughly 2 steps)
[9df2d8f5cc9653d6413cb2240c067ea455ab7c3c] net/sfc: support counters in tunnel offload jump rules
root@fbdev:/usr/local/share/dpdk-git # git bisect good
Bisecting: 2 revisions left to test after this (roughly 1 step)
[c024496ae8c8c075b0d0a3b43119475787b24b45] ethdev: allocate max space for internal queue array
root@fbdev:/usr/local/share/dpdk-git # git bisect good
Bisecting: 0 revisions left to test after this (roughly 1 step)
[c87d435a4d79739c0cec2ed280b94b41cb908af7] ethdev: copy fast-path API into separate structure
root@fbdev:/usr/local/share/dpdk-git # git bisect good
7a0935239b9eb817c65c03554a9954ddb8ea5044 is the first bad commit
commit 7a0935239b9eb817c65c03554a9954ddb8ea5044
Author: Konstantin Ananyev <konstantin.ananyev@intel.com>
Date: Wed Oct 13 14:37:02 2021 +0100
ethdev: make fast-path functions to use new flat array
Rework fast-path ethdev functions to use rte_eth_fp_ops[].
While it is an API/ABI breakage, this change is intended to be
transparent for both users (no changes in user app is required) and
PMD developers (no changes in PMD is required).
One extra thing to note - RX/TX callback invocation will cause extra
function call with these changes. That might cause some insignificant
slowdown for code-path where RX/TX callbacks are heavily involved.
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Tested-by: Feifei Wang <feifei.wang2@arm.com>
lib/ethdev/ethdev_private.c | 31 +++++
lib/ethdev/rte_ethdev.h | 270 +++++++++++++++++++++++++++++++-------------
lib/ethdev/version.map | 3 +
3 files changed, 226 insertions(+), 78 deletions(-)
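For readers following the thread: a minimal sketch of the failure path described at the top, assuming the 21.11 interrupt-handle API (simplified; not the verbatim eal_interrupts.c source):

    #include <rte_interrupts.h>

    /* Sketch: on FreeBSD in 21.11 nothing sets a device fd on the handle
     * via rte_intr_dev_fd_set(), so the fd stays -1 and enabling fails. */
    static int
    intr_enable_sketch(const struct rte_intr_handle *intr_handle)
    {
        if (rte_intr_dev_fd_get(intr_handle) == -1)
            return -1; /* vmxnet3_dev_start() then logs
                        * "interrupt enable failed" and errors out */
        /* ... per-handle-type enable logic would follow ... */
        return 0;
    }

And a sketch of the dispatch change made by the first bad commit (simplified from the 21.11 ethdev headers, member names as in rte_ethdev_core.h; not verbatim source):

    #include <rte_ethdev.h>

    /* Sketch: fast-path calls now index the flat rte_eth_fp_ops[] array
     * instead of dereferencing rte_eth_devices[]. */
    static inline uint16_t
    rx_burst_sketch(uint16_t port_id, uint16_t queue_id,
                    struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
    {
        struct rte_eth_fp_ops *p = &rte_eth_fp_ops[port_id];

        return p->rx_pkt_burst(p->rxq.data[queue_id], rx_pkts, nb_pkts);
    }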
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control
2021-12-05 7:03 3% ` Jerin Jacob
@ 2021-12-05 18:00 0% ` Stephen Hemminger
2021-12-06 9:57 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-12-05 18:00 UTC (permalink / raw)
To: Jerin Jacob
Cc: Jerin Jacob, dpdk-dev, Ray Kinsella, Thomas Monjalon,
Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Andrew Boyer,
Beilei Xing, Richardson, Bruce, Chas Williams, Xia, Chenbo,
Ciara Loftus, Devendra Singh Rawat, Ed Czeck, Evgeny Schemeilin,
Gaetan Rivet, Gagandeep Singh, Guoyang Zhou, Haiyue Wang,
Harman Kalra, heinrich.kuhn, Hemant Agrawal, Hyong Youb Kim,
Igor Chauskin, Igor Russkikh, Jakub Grajciar, Jasvinder Singh,
Jian Wang, Jiawen Wu, Jingjing Wu, John Daley, John Miller,
John W. Linville, Wiles, Keith, Kiran Kumar K, Lijun Ou,
Liron Himi, Long Li, Marcin Wojtas, Martin Spinler, Matan Azrad,
Matt Peters, Maxime Coquelin, Michal Krawczyk, Min Hu (Connor,
Pradeep Kumar Nalla, Nithin Dabilpuram, Qiming Yang, Qi Zhang,
Radha Mohan Chintakuntla, Rahul Lakkireddy, Rasesh Mody,
Rosen Xu, Sachin Saxena, Satha Koteswara Rao Kottidi,
Shahed Shaikh, Shai Brandes, Shepard Siegel,
Somalapuram Amaranath, Somnath Kotur, Stephen Hemminger,
Steven Webster, Sunil Kumar Kori, Tetsuya Mukawa,
Veerasenareddy Burru, Viacheslav Ovsiienko, Xiao Wang,
Xiaoyun Wang, Yisen Zhuang, Yong Wang, Ziyang Xuan
On Sun, 5 Dec 2021 12:33:57 +0530
Jerin Jacob <jerinjacobk@gmail.com> wrote:
> On Sat, Dec 4, 2021 at 11:08 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > On Sat, 4 Dec 2021 22:54:58 +0530
> > <jerinj@marvell.com> wrote:
> >
> > > + /**
> > > + * Maximum supported traffic class as per PFC (802.1Qbb) specification.
> > > + *
> > > + * Based on device support and use-case need, there are two different
> > > + * ways to enable PFC. The first case is the port level PFC
> > > + * configuration, in this case, rte_eth_dev_priority_flow_ctrl_set()
> > > + * API shall be used to configure the PFC, and PFC frames will be
> > > + * generated based on the VLAN TC value.
> > > + * The second case is the queue level PFC configuration, in this case,
> > > + * any packet field content can be used to steer the packet to the
> > > + * specific queue using rte_flow or RSS and then use
> > > + * rte_eth_dev_priority_flow_ctrl_queue_set() to set the TC mapping
> > > + * on each queue. Based on the congestion detected on the specific queue,
> > > + * the configured TC shall be used to generate PFC frames.
> > > + *
> > > + * When set to non zero value, application must use queue level
> > > + * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
> > > + * instead of port level PFC configuration via
> > > + * rte_eth_dev_priority_flow_ctrl_set() API to realize
> > > + * PFC configuration.
> > > + */
> > > + uint8_t pfc_queue_tc_max;
> > > + uint8_t reserved_8s[7];
> > > + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> > > void *reserved_ptrs[2]; /**< Reserved for future fields */
> >
> > Not sure you can claim ABI compatibility because the previous versions of DPDK
> > did not enforce that reserved fields must be zero. The Linux kernel
> > learned this when adding flags for new system calls; reserved fields only
> > work if you enforce that the application must set them to zero.
>
> In this case, rte_eth_dev_info is an out parameter, and the implementation of
> rte_eth_dev_info_get() already memsets it to 0.
> Do you still see any other ABI issue?
>
> See rte_eth_dev_info_get()
> /*
> * Init dev_info before port_id check since caller does not have
> * return status and does not know if get is successful or not.
> */
> memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
The concern came from misreading the comment. It talks about what the application should do.
Could you reword the comment so that it describes what pfc_queue_tc_max is here,
and move the flow-control-set part of the comment to where the API for that is documented.
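For illustration, a minimal application-side sketch under the semantics quoted above (pfc_queue_tc_max is the field added by this patch, so this only compiles with the patch applied; the queue-level call is left as a comment since its exact signature is not quoted in this thread):

    #include <rte_ethdev.h>

    /* Sketch: pick port-level vs. queue-level PFC based on the new
     * pfc_queue_tc_max capability field. */
    static void
    setup_pfc_sketch(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return;

        if (dev_info.pfc_queue_tc_max > 0) {
            /* Queue-level PFC: steer traffic to queues via rte_flow or
             * RSS, then map each queue to a TC with
             * rte_eth_dev_priority_flow_ctrl_queue_set(). */
        } else {
            /* Port-level PFC via rte_eth_dev_priority_flow_ctrl_set(). */
        }
    }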
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control
2021-12-04 17:38 3% ` Stephen Hemminger
@ 2021-12-05 7:03 3% ` Jerin Jacob
2021-12-05 18:00 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2021-12-05 7:03 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Jerin Jacob, dpdk-dev, Ray Kinsella, Thomas Monjalon,
Ferruh Yigit, Andrew Rybchenko, Ajit Khaparde, Andrew Boyer,
Beilei Xing, Richardson, Bruce, Chas Williams, Xia, Chenbo,
Ciara Loftus, Devendra Singh Rawat, Ed Czeck, Evgeny Schemeilin,
Gaetan Rivet, Gagandeep Singh, Guoyang Zhou, Haiyue Wang,
Harman Kalra, heinrich.kuhn, Hemant Agrawal, Hyong Youb Kim,
Igor Chauskin, Igor Russkikh, Jakub Grajciar, Jasvinder Singh,
Jian Wang, Jiawen Wu, Jingjing Wu, John Daley, John Miller,
John W. Linville, Wiles, Keith, Kiran Kumar K, Lijun Ou,
Liron Himi, Long Li, Marcin Wojtas, Martin Spinler, Matan Azrad,
Matt Peters, Maxime Coquelin, Michal Krawczyk, Min Hu (Connor,
Pradeep Kumar Nalla, Nithin Dabilpuram, Qiming Yang, Qi Zhang,
Radha Mohan Chintakuntla, Rahul Lakkireddy, Rasesh Mody,
Rosen Xu, Sachin Saxena, Satha Koteswara Rao Kottidi,
Shahed Shaikh, Shai Brandes, Shepard Siegel,
Somalapuram Amaranath, Somnath Kotur, Stephen Hemminger,
Steven Webster, Sunil Kumar Kori, Tetsuya Mukawa,
Veerasenareddy Burru, Viacheslav Ovsiienko, Xiao Wang,
Xiaoyun Wang, Yisen Zhuang, Yong Wang, Ziyang Xuan
On Sat, Dec 4, 2021 at 11:08 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Sat, 4 Dec 2021 22:54:58 +0530
> <jerinj@marvell.com> wrote:
>
> > + /**
> > + * Maximum supported traffic class as per PFC (802.1Qbb) specification.
> > + *
> > + * Based on device support and use-case need, there are two different
> > + * ways to enable PFC. The first case is the port level PFC
> > + * configuration, in this case, rte_eth_dev_priority_flow_ctrl_set()
> > + * API shall be used to configure the PFC, and PFC frames will be
> > + * generated based on the VLAN TC value.
> > + * The second case is the queue level PFC configuration, in this case,
> > + * any packet field content can be used to steer the packet to the
> > + * specific queue using rte_flow or RSS and then use
> > + * rte_eth_dev_priority_flow_ctrl_queue_set() to set the TC mapping
> > + * on each queue. Based on the congestion detected on the specific queue,
> > + * the configured TC shall be used to generate PFC frames.
> > + *
> > + * When set to non zero value, application must use queue level
> > + * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
> > + * instead of port level PFC configuration via
> > + * rte_eth_dev_priority_flow_ctrl_set() API to realize
> > + * PFC configuration.
> > + */
> > + uint8_t pfc_queue_tc_max;
> > + uint8_t reserved_8s[7];
> > + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> > void *reserved_ptrs[2]; /**< Reserved for future fields */
>
> Not sure you can claim ABI compatibility because the previous versions of DPDK
> did not enforce that reserved fields must be zero. The Linux kernel
> learned this when adding flags for new system calls; reserved fields only
> work if you enforce that the application must set them to zero.
In this case, rte_eth_dev_info is an out parameter, and the implementation of
rte_eth_dev_info_get() already memsets it to 0.
Do you still see any other ABI issue?
See rte_eth_dev_info_get()
/*
* Init dev_info before port_id check since caller does not have
* return status and does not know if get is successful or not.
*/
memset(dev_info, 0, sizeof(struct rte_eth_dev_info));
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control
@ 2021-12-04 17:38 3% ` Stephen Hemminger
2021-12-05 7:03 3% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2021-12-04 17:38 UTC (permalink / raw)
To: jerinj
Cc: dev, Ray Kinsella, Thomas Monjalon, Ferruh Yigit,
Andrew Rybchenko, ajit.khaparde, aboyer, beilei.xing,
bruce.richardson, chas3, chenbo.xia, ciara.loftus, dsinghrawat,
ed.czeck, evgenys, grive, g.singh, zhouguoyang, haiyue.wang,
hkalra, heinrich.kuhn, hemant.agrawal, hyonkim, igorch,
irusskikh, jgrajcia, jasvinder.singh, jianwang, jiawenwu,
jingjing.wu, johndale, john.miller, linville, keith.wiles,
kirankumark, oulijun, lironh, longli, mw, spinler, matan,
matt.peters, maxime.coquelin, mk, humin29, pnalla, ndabilpuram,
qiming.yang, qi.z.zhang, radhac, rahul.lakkireddy, rmody,
rosen.xu, sachin.saxena, skoteshwar, shshaikh, shaibran,
shepard.siegel, asomalap, somnath.kotur, sthemmin,
steven.webster, skori, mtetsuyah, vburru, viacheslavo,
xiao.w.wang, cloud.wangxiaoyun, yisen.zhuang, yongwang,
xuanziyang2
On Sat, 4 Dec 2021 22:54:58 +0530
<jerinj@marvell.com> wrote:
> + /**
> + * Maximum supported traffic class as per PFC (802.1Qbb) specification.
> + *
> + * Based on device support and use-case need, there are two different
> + * ways to enable PFC. The first case is the port level PFC
> + * configuration, in this case, rte_eth_dev_priority_flow_ctrl_set()
> + * API shall be used to configure the PFC, and PFC frames will be
> + * generated based on the VLAN TC value.
> + * The second case is the queue level PFC configuration, in this case,
> + * any packet field content can be used to steer the packet to the
> + * specific queue using rte_flow or RSS and then use
> + * rte_eth_dev_priority_flow_ctrl_queue_set() to set the TC mapping
> + * on each queue. Based on the congestion detected on the specific queue,
> + * the configured TC shall be used to generate PFC frames.
> + *
> + * When set to non zero value, application must use queue level
> + * PFC configuration via rte_eth_dev_priority_flow_ctrl_queue_set() API
> + * instead of port level PFC configuration via
> + * rte_eth_dev_priority_flow_ctrl_set() API to realize
> + * PFC configuration.
> + */
> + uint8_t pfc_queue_tc_max;
> + uint8_t reserved_8s[7];
> + uint64_t reserved_64s[1]; /**< Reserved for future fields */
> void *reserved_ptrs[2]; /**< Reserved for future fields */
Not sure you can claim ABI compatibility because the previous versions of DPDK
did not enforce that reserved fields must be zero. The Linux kernel
learned this when adding flags for new system calls; reserved fields only
work if you enforce that the application must set them to zero.
^ permalink raw reply [relevance 3%]
* [PATCH v4 1/2] net: add functions to calculate UDP/TCP cksum in mbuf
@ 2021-12-03 11:38 3% ` Xiaoyun Li
2021-12-15 11:33 0% ` Singh, Aman Deep
0 siblings, 1 reply; 200+ results
From: Xiaoyun Li @ 2021-12-03 11:38 UTC (permalink / raw)
To: ferruh.yigit, olivier.matz, mb, konstantin.ananyev, stephen,
vladimir.medvedkin
Cc: dev, Xiaoyun Li
Add functions to call rte_raw_cksum_mbuf() to calculate IPv4/6
UDP/TCP checksum in mbuf which can be over multi-segments.
Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
doc/guides/rel_notes/release_22_03.rst | 10 ++
lib/net/rte_ip.h | 186 +++++++++++++++++++++++++
lib/net/version.map | 10 ++
3 files changed, 206 insertions(+)
diff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst
index 6d99d1eaa9..7a082c4427 100644
--- a/doc/guides/rel_notes/release_22_03.rst
+++ b/doc/guides/rel_notes/release_22_03.rst
@@ -55,6 +55,13 @@ New Features
Also, make sure to start the actual text at the margin.
=======================================================
+* **Added functions to calculate UDP/TCP checksum in mbuf.**
+ * Added the following functions to calculate UDP/TCP checksum of packets
+ which can be over multi-segments:
+ - ``rte_ipv4_udptcp_cksum_mbuf()``
+ - ``rte_ipv4_udptcp_cksum_mbuf_verify()``
+ - ``rte_ipv6_udptcp_cksum_mbuf()``
+ - ``rte_ipv6_udptcp_cksum_mbuf_verify()``
Removed Items
-------------
@@ -84,6 +91,9 @@ API Changes
Also, make sure to start the actual text at the margin.
=======================================================
+* net: added experimental functions ``rte_ipv4_udptcp_cksum_mbuf()``,
+ ``rte_ipv4_udptcp_cksum_mbuf_verify()``, ``rte_ipv6_udptcp_cksum_mbuf()``,
+ ``rte_ipv6_udptcp_cksum_mbuf_verify()``
ABI Changes
-----------
diff --git a/lib/net/rte_ip.h b/lib/net/rte_ip.h
index c575250852..534f401d26 100644
--- a/lib/net/rte_ip.h
+++ b/lib/net/rte_ip.h
@@ -400,6 +400,65 @@ rte_ipv4_udptcp_cksum(const struct rte_ipv4_hdr *ipv4_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv4 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv4_phdr_cksum(ipv4_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Compute the IPv4 UDP/TCP checksum of a packet.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv4_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv4_hdr->next_proto_id == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv4 UDP or TCP checksum.
*
@@ -426,6 +485,38 @@ rte_ipv4_udptcp_cksum_verify(const struct rte_ipv4_hdr *ipv4_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Verify the IPv4 UDP/TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0
+ * (i.e. no checksum).
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv4_hdr
+ * The pointer to the contiguous IPv4 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv4_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv4_hdr *ipv4_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv4_udptcp_cksum_mbuf(m, ipv4_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/**
* IPv6 Header
*/
@@ -538,6 +629,68 @@ rte_ipv6_udptcp_cksum(const struct rte_ipv6_hdr *ipv6_hdr, const void *l4_hdr)
return cksum;
}
+/**
+ * @internal Calculate the non-complemented IPv6 L4 checksum of a packet
+ */
+static inline uint16_t
+__rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t raw_cksum;
+ uint32_t cksum;
+
+ if (l4_off > m->pkt_len)
+ return 0;
+
+ if (rte_raw_cksum_mbuf(m, l4_off, m->pkt_len - l4_off, &raw_cksum))
+ return 0;
+
+ cksum = raw_cksum + rte_ipv6_phdr_cksum(ipv6_hdr, 0);
+
+ cksum = ((cksum & 0xffff0000) >> 16) + (cksum & 0xffff);
+
+ return (uint16_t)cksum;
+}
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Process the IPv6 UDP or TCP checksum of a packet.
+ *
+ * The IPv6 header must not be followed by extension headers. The layer 4
+ * checksum must be set to 0 in the L4 header by the caller.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * The complemented checksum to set in the L4 header.
+ */
+__rte_experimental
+static inline uint16_t
+rte_ipv6_udptcp_cksum_mbuf(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr, uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ cksum = ~cksum;
+
+ /*
+ * Per RFC 768: If the computed checksum is zero for UDP,
+ * it is transmitted as all ones
+ * (the equivalent in one's complement arithmetic).
+ */
+ if (cksum == 0 && ipv6_hdr->proto == IPPROTO_UDP)
+ cksum = 0xffff;
+
+ return cksum;
+}
+
/**
* Validate the IPv6 UDP or TCP checksum.
*
@@ -565,6 +718,39 @@ rte_ipv6_udptcp_cksum_verify(const struct rte_ipv6_hdr *ipv6_hdr,
return 0;
}
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice.
+ *
+ * Validate the IPv6 UDP or TCP checksum of a packet.
+ *
+ * In case of UDP, the caller must first check if udp_hdr->dgram_cksum is 0:
+ * this is either invalid or means no checksum in some situations. See 8.1
+ * (Upper-Layer Checksums) in RFC 8200.
+ *
+ * @param m
+ * The pointer to the mbuf.
+ * @param ipv6_hdr
+ * The pointer to the contiguous IPv6 header.
+ * @param l4_off
+ * The offset in bytes to start L4 checksum.
+ * @return
+ * Return 0 if the checksum is correct, else -1.
+ */
+__rte_experimental
+static inline int
+rte_ipv6_udptcp_cksum_mbuf_verify(const struct rte_mbuf *m,
+ const struct rte_ipv6_hdr *ipv6_hdr,
+ uint16_t l4_off)
+{
+ uint16_t cksum = __rte_ipv6_udptcp_cksum_mbuf(m, ipv6_hdr, l4_off);
+
+ if (cksum != 0xffff)
+ return -1;
+
+ return 0;
+}
+
/** IPv6 fragment extension header. */
#define RTE_IPV6_EHDR_MF_SHIFT 0
#define RTE_IPV6_EHDR_MF_MASK 1
diff --git a/lib/net/version.map b/lib/net/version.map
index 4f4330d1c4..0f2aacdef8 100644
--- a/lib/net/version.map
+++ b/lib/net/version.map
@@ -12,3 +12,13 @@ DPDK_22 {
local: *;
};
+
+EXPERIMENTAL {
+ global:
+
+ # added in 22.03
+ rte_ipv4_udptcp_cksum_mbuf;
+ rte_ipv4_udptcp_cksum_mbuf_verify;
+ rte_ipv6_udptcp_cksum_mbuf;
+ rte_ipv6_udptcp_cksum_mbuf_verify;
+};
--
2.25.1
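A hedged usage sketch for the new helpers (assuming an IPv4/TCP packet, possibly multi-segment, whose l2_len and l3_len mbuf fields were already set by the caller; names are illustrative and the API is experimental as declared above):

    #include <rte_ip.h>
    #include <rte_mbuf.h>
    #include <rte_tcp.h>

    /* Sketch: fill the TCP checksum of a possibly multi-segment packet
     * with the new mbuf-aware helper. */
    static void
    fill_tcp_cksum_sketch(struct rte_mbuf *m)
    {
        uint16_t l4_off = m->l2_len + m->l3_len;
        struct rte_ipv4_hdr *ip = rte_pktmbuf_mtod_offset(m,
                struct rte_ipv4_hdr *, m->l2_len);
        struct rte_tcp_hdr *tcp = rte_pktmbuf_mtod_offset(m,
                struct rte_tcp_hdr *, l4_off);

        tcp->cksum = 0; /* L4 checksum must be 0 before computing */
        tcp->cksum = rte_ipv4_udptcp_cksum_mbuf(m, ip, l4_off);
    }

On the receive side, rte_ipv4_udptcp_cksum_mbuf_verify() with the same arguments returns 0 when the checksum is correct.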
^ permalink raw reply [relevance 3%]
* [RFC] cryptodev: asymmetric crypto random number source
@ 2021-12-03 10:03 3% Kusztal, ArkadiuszX
2021-12-13 8:14 3% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Kusztal, ArkadiuszX @ 2021-12-03 10:03 UTC (permalink / raw)
To: gakhil, Anoob Joseph, Zhang, Roy Fan; +Cc: dev
ECDSA op:
rte_crypto_param k;
/**< The ECDSA per-message secret number, which is an integer
* in the interval (1, n-1)
*/
DSA op:
No 'k'.
This one I think I have described some time ago:
The only PMD that verifies ECDSA is OCTEON, which apparently needs 'k' provided by the user.
The only PMD that verifies DSA is the OpenSSL PMD, which will generate its own random number internally.
So in case a PMD supports one of these options (or especially when it supports both), we need to give some information here.
The most obvious option would be to change rte_crypto_param k -> rte_crypto_param *k
In case (k == NULL), the PMD should generate it itself if possible; otherwise it should push the crypto_op to the response ring with an appropriate error code (see the sketch after the list below).
Other options would be:
* Extend rte_cryptodev_config and rte_cryptodev_info with information about the random number generator for a specific device (though it would be an ABI breakage)
* Provide some kind of callback to get a random number from the user (which could be useful for other things like RSA padding as well)
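A minimal sketch of the (k == NULL) semantics proposed above (an assumption about the RFC's intent, not merged code; pmd_has_rng and the internal generation helper are hypothetical):

    #include <stdbool.h>
    #include <rte_crypto.h>
    #include <rte_crypto_asym.h>

    /* Sketch: with 'k' made optional, a missing per-message secret either
     * triggers PMD-internal generation or fails the op with an error. */
    static void
    handle_missing_k_sketch(struct rte_crypto_op *op, bool pmd_has_rng)
    {
        struct rte_crypto_ecdsa_op_param *ecdsa = &op->asym->ecdsa;

        /* k.data == NULL stands in for the proposed 'k == NULL' check */
        if (ecdsa->k.data == NULL) {
            if (!pmd_has_rng)
                /* no RNG available: complete the op with an error code */
                op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
            /* else: generate 'k' internally (hypothetical helper) */
        }
    }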
^ permalink raw reply [relevance 3%]
-- links below jump to the message on this page --
2020-10-08 15:30 [dpdk-dev] [PATCH v4 1/5] eal: add API for bus close rohit.raj
2022-01-10 5:26 3% ` [PATCH v5 1/2] " rohit.raj
2022-02-09 11:04 3% ` David Marchand
2022-02-09 13:20 3% ` Thomas Monjalon
2020-10-09 3:48 [dpdk-dev] [PATCH v6 0/3] librte_ethdev: error recovery support Kalesh A P
2022-01-28 12:48 ` [dpdk-dev] [PATCH v7 0/4] ethdev: " Kalesh A P
2022-01-28 12:48 ` [dpdk-dev] [PATCH v7 1/4] ethdev: support device reset and recovery events Kalesh A P
2022-02-01 12:52 3% ` Ferruh Yigit
2022-02-02 11:44 0% ` Ray Kinsella
2022-02-10 22:16 3% ` Thomas Monjalon
2022-02-11 10:09 5% ` Ray Kinsella
2022-02-14 10:16 4% ` Ray Kinsella
2022-02-14 11:15 4% ` Thomas Monjalon
2022-02-14 16:06 5% ` Ray Kinsella
2022-02-14 16:25 0% ` Thomas Monjalon
2022-02-14 18:27 0% ` Ray Kinsella
2022-02-15 13:55 4% ` Ray Kinsella
2022-02-15 15:12 0% ` Thomas Monjalon
2022-02-15 16:12 0% ` Ray Kinsella
2021-10-06 4:48 [dpdk-dev] [RFC 0/3] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-01-18 15:30 ` [PATCH v2 00/10] " Alexander Kozyrev
2022-01-18 15:30 ` [PATCH v2 01/10] ethdev: introduce flow pre-configuration hints Alexander Kozyrev
2022-01-24 14:36 3% ` Jerin Jacob
2022-01-24 17:35 0% ` Thomas Monjalon
2022-01-24 17:46 0% ` Jerin Jacob
2022-01-24 18:08 3% ` Bruce Richardson
2022-01-25 1:14 0% ` Alexander Kozyrev
2022-01-25 15:58 4% ` Ori Kam
2022-01-25 18:09 3% ` Bruce Richardson
2022-01-25 18:14 3% ` Bruce Richardson
2022-01-26 9:45 0% ` Ori Kam
2022-01-26 10:52 4% ` Bruce Richardson
2022-01-26 11:21 0% ` Thomas Monjalon
2022-01-26 12:19 0% ` Ori Kam
2022-01-26 13:41 0% ` Bruce Richardson
2022-01-26 15:12 0% ` Ori Kam
2022-01-24 17:40 0% ` Ajit Khaparde
2022-01-25 1:28 0% ` Alexander Kozyrev
2022-01-25 18:44 ` Jerin Jacob
2022-01-26 22:02 ` Alexander Kozyrev
2022-01-27 9:34 3% ` Jerin Jacob
2021-10-15 5:13 [dpdk-dev] [PATCH] app/testpmd: fix l4 sw csum over multi segments Xiaoyun Li
2021-12-03 11:38 ` [PATCH v4 0/2] Add functions to calculate UDP/TCP cksum in mbuf Xiaoyun Li
2021-12-03 11:38 3% ` [PATCH v4 1/2] net: add " Xiaoyun Li
2021-12-15 11:33 0% ` Singh, Aman Deep
2022-01-04 15:18 0% ` Li, Xiaoyun
2022-01-04 15:40 0% ` Li, Xiaoyun
2022-01-06 12:56 0% ` Singh, Aman Deep
2022-01-06 16:03 ` [PATCH v5 0/2] Add " Xiaoyun Li
2022-01-06 16:03 3% ` [PATCH v5 1/2] net: add " Xiaoyun Li
2022-01-24 12:28 ` [PATCH v6 0/2] Add " Xiaoyun Li
2022-01-24 12:28 3% ` [PATCH v6 1/2] net: add " Xiaoyun Li
2021-11-03 22:48 [dpdk-dev] [PATCH v2] ethdev: mark old macros as deprecated Ferruh Yigit
2022-01-12 14:36 1% ` [PATCH v3] " Ferruh Yigit
2021-11-22 10:54 [RFC 0/1] integrate dmadev in vhost Jiayu Hu
2021-12-30 21:55 ` [PATCH v1 " Jiayu Hu
2021-12-30 21:55 ` [PATCH v1 1/1] vhost: integrate dmadev in asynchronous datapath Jiayu Hu
2022-01-14 6:30 3% ` Xia, Chenbo
2022-01-17 5:39 0% ` Hu, Jiayu
2022-01-19 2:18 0% ` Xia, Chenbo
2021-11-29 19:47 [PATCH v3 1/5] common/cnxk: add REE HW definitions lironh
2021-12-07 18:31 ` [PATCH v4 0/4] regex/cn9k: use cnxk infrastructure lironh
2021-12-08 9:14 3% ` Jerin Jacob
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 0/5] remove octeontx2 drivers jerinj
2021-12-11 9:04 2% ` [dpdk-dev] [PATCH v5 4/5] regex/cn9k: use cnxk infrastructure jerinj
2021-12-11 9:04 1% ` [dpdk-dev] [PATCH v5 5/5] drivers: remove octeontx2 drivers jerinj
2021-11-29 20:45 vmxnet3 no longer functional on DPDK 21.11 Lewis Donzis
2021-11-30 13:42 ` Bruce Richardson
2021-12-06 1:52 3% ` Lewis Donzis
2021-12-03 10:03 3% [RFC] cryptodev: asymmetric crypto random number source Kusztal, ArkadiuszX
2021-12-13 8:14 3% ` Akhil Goyal
2021-12-13 9:27 0% ` Ramkumar Balu
2021-12-17 15:26 0% ` Kusztal, ArkadiuszX
2021-12-04 17:24 [dpdk-dev] [PATCH] ethdev: support queue-based priority flow control jerinj
2021-12-04 17:38 3% ` Stephen Hemminger
2021-12-05 7:03 3% ` Jerin Jacob
2021-12-05 18:00 0% ` Stephen Hemminger
2021-12-06 9:57 0% ` Jerin Jacob
2021-12-06 8:35 1% [dpdk-dev] [PATCH v1] drivers: remove octeontx2 drivers jerinj
2021-12-06 13:35 3% ` Ferruh Yigit
2021-12-07 7:39 3% ` Jerin Jacob
2021-12-07 11:01 0% ` Ferruh Yigit
2021-12-07 11:51 0% ` Kevin Traynor
2021-12-13 16:48 [PATCH 1/2] maintainers: fix stable maintainers list Kevin Traynor
2021-12-13 16:48 5% ` [PATCH 2/2] doc: update LTS release cadence Kevin Traynor
2021-12-14 14:12 [PATCH 00/12] add packet generator library and example app Ronan Randles
2021-12-14 14:57 ` Bruce Richardson
2022-01-12 16:18 3% ` Morten Brørup
2021-12-15 18:19 [PATCH 1/7] net/bonding: fix typos and whitespace Robert Sanford
2021-12-21 19:57 ` [PATCH v2 0/8] net/bonding: fixes and LACP short timeout Robert Sanford
2021-12-21 19:57 ` [PATCH v2 4/8] net/bonding: support enabling " Robert Sanford
2022-02-04 14:46 4% ` Ferruh Yigit
2021-12-21 19:57 ` [PATCH v2 8/8] net/bonding: add LACP short timeout tests Robert Sanford
2022-02-04 14:49 4% ` Ferruh Yigit
2022-02-04 15:09 3% ` [PATCH v2 0/8] net/bonding: fixes and LACP short timeout Ferruh Yigit
2021-12-20 10:27 [PATCH 1/8] common/dpaax: caamflib: Remove code related to SEC ERA 1 to 7 Gagandeep Singh
2021-12-28 9:10 ` [PATCH v2 0/8] NXP crypto drivers changes Gagandeep Singh
2021-12-28 9:10 ` [PATCH v2 4/8] crypto/dpaa2_sec: support AES-GMAC Gagandeep Singh
2022-01-21 11:29 3% ` [EXT] " Akhil Goyal
2022-02-08 14:15 0% ` Gagandeep Singh
2021-12-22 6:13 [PATCH] eventdev/rx_adapter: add event port get api Naga Harish K S V
2022-01-22 17:07 4% ` [PATCH v2] " Naga Harish K S V
2022-01-22 17:14 4% ` [PATCH v3] eventdev/eth_rx: " Naga Harish K S V
2022-01-23 15:32 0% ` Jerin Jacob
2021-12-22 8:25 [PATCH v2] mempool: fix the description of some function return values Zhiheng Chen
2021-12-23 10:07 ` [PATCH v3] " Zhiheng Chen
2022-01-24 17:04 3% ` Olivier Matz
2021-12-24 22:59 [PATCH 0/1] mempool: implement index-based per core cache Dharmik Thakkar
2022-01-13 5:36 ` [PATCH v2 " Dharmik Thakkar
2022-01-13 5:36 ` [PATCH v2 1/1] " Dharmik Thakkar
2022-01-20 8:21 3% ` Morten Brørup
2022-01-21 6:01 3% ` Honnappa Nagarahalli
2022-01-21 7:36 4% ` Morten Brørup
2022-01-24 13:05 0% ` Ray Kinsella
2022-01-21 9:12 0% ` Bruce Richardson
2021-12-25 11:28 2% [PATCH v4 00/25] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
2021-12-26 15:34 [RFC] mempool: rte_mempool_do_generic_get optimizations Morten Brørup
2022-01-19 14:52 3% ` [PATCH v2] mempool: fix put objects to mempool with cache Morten Brørup
2022-01-19 15:03 3% ` [PATCH v3] " Morten Brørup
2022-01-24 15:39 3% ` Olivier Matz
2022-01-28 9:37 0% ` Morten Brørup
2022-02-02 10:33 3% ` [PATCH v4] mempool: fix mempool cache flushing algorithm Morten Brørup
2021-12-29 13:37 2% [PATCH v5 00/26] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
2021-12-30 3:08 [RFC 0/3] Add support for GRE optional fields matching Sean Zhang
2021-12-30 3:08 ` [RFC 1/3] ethdev: support GRE optional fields Sean Zhang
2022-01-19 9:53 ` Ferruh Yigit
2022-01-19 10:01 ` Thomas Monjalon
2022-01-19 10:56 4% ` Ori Kam
2022-01-25 9:49 0% ` Sean Zhang (Networking SW)
2022-01-25 11:37 0% ` Ferruh Yigit
2022-01-25 13:06 0% ` Ori Kam
2022-01-25 14:29 0% ` Ferruh Yigit
2022-01-25 16:03 0% ` Ori Kam
2021-12-30 6:08 2% [PATCH v6 00/26] Net/SPNIC: support SPNIC into DPDK 22.03 Yanling Song
2022-01-19 16:56 0% ` Ferruh Yigit
2022-01-21 9:27 0% ` Yanling Song
2022-01-21 10:22 0% ` Ferruh Yigit
2022-01-24 5:12 0% ` Hemant Agrawal
2022-02-12 14:01 0% ` Yanling Song
2022-01-03 15:08 [PATCH 0/8] ethdev: introduce IP reassembly offload Akhil Goyal
2022-01-20 16:26 4% ` [PATCH v2 0/4] " Akhil Goyal
2022-01-20 16:26 ` [PATCH v2 1/4] " Akhil Goyal
2022-01-20 16:45 3% ` Stephen Hemminger
2022-01-20 17:11 0% ` [EXT] " Akhil Goyal
2022-01-30 17:59 4% ` [PATCH v3 0/4] " Akhil Goyal
2022-01-30 17:59 ` [PATCH v3 4/4] security: add IPsec option for IP reassembly Akhil Goyal
2022-02-01 14:12 ` Ferruh Yigit
2022-02-02 9:15 3% ` [EXT] " Akhil Goyal
2022-02-02 14:04 0% ` Ferruh Yigit
2022-02-01 14:10 0% ` [PATCH v3 0/4] ethdev: introduce IP reassembly offload Ferruh Yigit
2022-02-04 22:13 4% ` [PATCH v4 0/3] " Akhil Goyal
2022-02-04 22:13 ` [PATCH v4 3/3] security: add IPsec option for IP reassembly Akhil Goyal
2022-02-08 9:01 ` David Marchand
2022-02-08 9:18 3% ` [EXT] " Akhil Goyal
2022-02-08 9:27 0% ` David Marchand
2022-02-08 10:45 0% ` Akhil Goyal
2022-02-08 13:19 0% ` Akhil Goyal
2022-02-08 19:55 0% ` David Marchand
2022-02-08 20:01 0% ` Akhil Goyal
2022-02-08 20:11 4% ` [PATCH v5 0/3] ethdev: introduce IP reassembly offload Akhil Goyal
2022-02-08 22:20 4% ` [PATCH v6 " Akhil Goyal
2022-02-10 8:54 0% ` Ferruh Yigit
2022-01-17 20:18 [PATCH v5 00/50] introduce IWYU Sean Morrissey
2022-02-02 9:47 ` [PATCH v6 " Sean Morrissey
2022-02-02 9:47 ` [PATCH v6 20/50] pdump: remove unneeded header includes Sean Morrissey
2022-02-02 15:54 ` Stephen Hemminger
2022-02-02 16:00 3% ` Bruce Richardson
2022-02-02 16:45 0% ` Morten Brørup
2022-01-17 23:14 4% [PATCH] Add pragma to ignore gcc-compat warnings in clang when used with diagnose_if Michael Barker
2022-01-17 23:23 4% ` [PATCH v2] " Michael Barker
2022-01-20 14:16 0% ` Thomas Monjalon
2022-01-23 21:17 0% ` Michael Barker
2022-01-23 21:07 8% ` [PATCH v3] " Michael Barker
2022-01-23 21:20 8% ` [PATCH v4] " Michael Barker
2022-01-25 10:33 0% ` Ray Kinsella
2022-01-31 0:05 8% ` [PATCH v5] " Michael Barker
2022-02-12 14:00 0% ` Thomas Monjalon
2022-01-22 17:02 4% [PATCH v2] eventdev/rx_adapter: add event port get api Naga Harish K S V
2022-02-01 9:18 3% Minutes of tech-board meeting: 2022-01-26 Richardson, Bruce
2022-02-03 16:04 [PATCH v3 0/4] crypto: improve asym session usage Ciara Power
2022-02-03 16:04 1% ` [PATCH v3 1/4] crypto: use single buffer for asymmetric session Ciara Power
2022-02-03 16:04 2% ` [PATCH v3 4/4] crypto: modify return value for asym session create Ciara Power
2022-02-04 17:42 [PATCH 0/7] Verify C++ compatibility of public headers Bruce Richardson
2022-02-10 15:42 ` [PATCH v4 " Bruce Richardson
2022-02-10 15:42 11% ` [PATCH v4 7/7] buildtools/chkincs: test headers for C++ compatibility Bruce Richardson
2022-02-11 11:36 ` [PATCH v5 0/2] Verify C++ compatibility of public headers Bruce Richardson
2022-02-11 11:36 11% ` [PATCH v5 2/2] buildtools/chkincs: test headers for C++ compatibility Bruce Richardson
2022-02-07 11:35 [PATCH v2 0/4] Clarify asymmetric random, add 'k' and crypto uint Arek Kusztal
2022-02-07 11:35 ` [PATCH v2 4/4] crypto: reorganize endianness comments, add " Arek Kusztal
2022-02-10 10:17 3% ` [EXT] " Akhil Goyal
2022-02-10 16:38 0% ` Zhang, Roy Fan
2022-02-10 21:08 4% ` Akhil Goyal
2022-02-11 10:54 0% ` Ray Kinsella
2022-02-07 16:02 [PATCH v18 8/8] eal: implement functions for mutex management Ananyev, Konstantin
2022-02-08 2:21 ` Ananyev, Konstantin
2022-02-09 2:47 3% ` Narcisa Ana Maria Vasile
2022-02-09 13:57 0% ` Ananyev, Konstantin
2022-02-20 21:56 4% ` Dmitry Kozlyuk
2022-02-23 17:08 0% ` Dmitry Kozlyuk
2022-02-24 17:29 0% ` Ananyev, Konstantin
2022-02-24 17:44 0% ` Stephen Hemminger
2022-02-08 13:47 4% [PATCH] ci: remove outdated default reference tag for ABI Thomas Monjalon
2022-02-08 15:08 7% ` Aaron Conole
2022-02-08 22:03 8% ` Brandon Lo
2022-02-09 13:37 4% ` Thomas Monjalon
2022-02-09 14:04 4% ` David Marchand
2022-03-01 9:56 9% ` [PATCH v2] ci: remove outdated default versions for ABI check Thomas Monjalon
2022-03-01 10:07 4% ` David Marchand
2022-03-06 9:27 4% ` Thomas Monjalon
2022-02-09 15:38 [PATCH v4 0/5] crypto: improve asym session usage Ciara Power
2022-02-09 15:38 1% ` [PATCH v4 2/5] crypto: use single buffer for asymmetric session Ciara Power
2022-02-09 15:38 2% ` [PATCH v4 5/5] crypto: modify return value for asym session create Ciara Power
2022-02-10 14:01 ` [PATCH v5 0/5] crypto: improve asym session usage Ciara Power
2022-02-10 14:01 1% ` [PATCH v5 2/5] crypto: use single buffer for asymmetric session Ciara Power
2022-02-10 14:01 2% ` [PATCH v5 5/5] crypto: modify return value for asym session create Ciara Power
2022-02-10 15:53 ` [PATCH v6 0/5] crypto: improve asym session usage Ciara Power
2022-02-10 15:54 1% ` [PATCH v6 2/5] crypto: use single buffer for asymmetric session Ciara Power
2022-02-10 15:54 2% ` [PATCH v6 5/5] crypto: modify return value for asym session create Ciara Power
2022-02-11 9:29 ` [PATCH v7 0/5] crypto: improve asym session usage Ciara Power
2022-02-11 9:29 1% ` [PATCH v7 2/5] crypto: use single buffer for asymmetric session Ciara Power
2022-02-11 9:29 2% ` [PATCH v7 5/5] crypto: modify return value for asym session create Ciara Power
2022-02-10 10:25 [PATCH] crypto: fix misspelled key in qt format Arek Kusztal
2022-02-12 11:34 3% ` [EXT] " Akhil Goyal
2022-02-18 6:11 0% ` Kusztal, ArkadiuszX
2022-02-25 17:56 4% ` Thomas Monjalon
2022-02-25 19:35 0% ` Ray Kinsella
2022-02-19 4:11 [PATCH v7 00/10] ethdev: datapath-focused flow rules management Alexander Kozyrev
2022-02-20 3:43 ` [PATCH v8 " Alexander Kozyrev
2022-02-20 3:44 ` [PATCH v8 02/11] ethdev: add flow item/action templates Alexander Kozyrev
2022-02-21 10:57 ` Andrew Rybchenko
2022-02-21 13:12 ` Ori Kam
2022-02-21 15:14 3% ` Alexander Kozyrev
2022-02-19 23:43 [PATCH 0/3] more unnecessary null checks Stephen Hemminger
2022-02-20 18:21 3% ` [PATCH v3 0/8] yet more unnecessary NULL checks Stephen Hemminger
2022-02-21 6:47 DPDK LTS release Kamaraj P
2022-02-21 10:23 ` Kevin Traynor
2022-02-23 16:16 ` Kamaraj P
2022-02-23 16:57 3% ` Kevin Traynor
2022-02-22 21:04 [PATCH 0/6] mlx5: external RxQ support Michael Baum
2022-02-23 18:48 3% ` [PATCH v2 " Michael Baum
2022-02-23 18:48 4% ` [PATCH v2 1/6] common/mlx5: consider local functions as internal Michael Baum
2022-02-24 8:38 0% ` [PATCH v2 0/6] mlx5: external RxQ support Matan Azrad
2022-02-24 23:25 3% ` [PATCH v3 " Michael Baum
2022-02-24 23:25 4% ` [PATCH v3 1/6] common/mlx5: consider local functions as internal Michael Baum
2022-02-25 18:01 0% ` Ferruh Yigit
2022-02-25 18:38 3% ` Thomas Monjalon
2022-02-25 19:13 0% ` Ferruh Yigit
2022-02-23 13:32 [PATCH] raw/cnxk_gpio: fix DPDK version in a map file Tomasz Duszynski
2022-02-23 13:50 ` Ferruh Yigit
2022-02-23 16:28 3% ` Thomas Monjalon
2022-02-23 16:43 [PATCH 0/3] net/mlx5: fix link state detection Dmitry Kozlyuk
2022-03-01 12:15 ` [PATCH v2 " Dmitry Kozlyuk
2022-03-01 12:15 ` [PATCH v2 1/3] common/mlx5: add Netlink event helpers Dmitry Kozlyuk
2022-03-02 15:49 ` Ferruh Yigit
2022-03-02 15:56 ` Dmitry Kozlyuk
2022-03-08 13:48 3% ` Kevin Traynor
2022-03-08 15:18 0% ` Dmitry Kozlyuk
2022-03-01 16:54 15% [PATCH 1/2] devtools: remove event/dlb exception in ABI check David Marchand
2022-03-01 16:54 10% ` [PATCH 2/2] devtools: use libabigail rule for mlx glue drivers David Marchand
2022-03-02 10:16 3% ` Ray Kinsella
2022-03-08 14:04 0% ` Thomas Monjalon
2022-03-02 10:13 4% ` [PATCH 1/2] devtools: remove event/dlb exception in ABI check Ray Kinsella
2022-03-03 6:01 [RFC] ethdev: introduce protocol type based header split xuan.ding
2022-03-03 16:15 3% ` Stephen Hemminger
2022-03-04 9:58 3% ` Zhang, Qi Z
2022-03-04 11:54 0% ` Morten Brørup
2022-03-04 17:32 3% ` Stephen Hemminger
2022-03-06 9:20 [PATCH 0/2] add missing local symbols catch-all Thomas Monjalon
2022-03-06 9:20 4% ` [PATCH 1/2] regexdev: fix section attribute of symbols Thomas Monjalon
2022-03-07 10:15 0% ` Ori Kam
2022-03-06 9:20 4% ` [PATCH 2/2] build: hide local symbols in shared libraries Thomas Monjalon
2022-03-08 14:24 ` [PATCH v2 0/2] add missing local symbols catch-all Thomas Monjalon
2022-03-08 14:24 4% ` [PATCH v2 1/2] regexdev: fix section attribute of symbols Thomas Monjalon
2022-03-08 14:24 4% ` [PATCH v2 2/2] build: hide local symbols in shared libraries Thomas Monjalon
2022-03-09 10:58 0% ` Kevin Traynor
2022-03-09 18:54 0% ` Thomas Monjalon
2022-03-07 22:39 [PATCH] examples/distributor: one Tx queue is enough Honnappa Nagarahalli
2022-03-08 3:26 3% ` Honnappa Nagarahalli
2022-03-09 0:22 [PATCH v1 0/2] bbdev: add device info on queue topology Nicolas Chautru
2022-03-09 0:22 ` [PATCH v1 1/2] " Nicolas Chautru
2022-03-09 1:28 3% ` Stephen Hemminger
2022-03-10 23:49 [PATCH v1] bbdev: add new operation for FFT processing Nicolas Chautru
2022-03-10 23:49 ` Nicolas Chautru
2022-03-11 1:12 3% ` Stephen Hemminger
2022-03-17 18:42 3% ` Chautru, Nicolas
2022-03-17 0:11 3% [PATCH v1] doc: announce changes in bbdev related to enum extension Nicolas Chautru
2022-03-17 0:11 10% ` Nicolas Chautru
2022-03-17 9:54 3% DPDK 22.03 released Thomas Monjalon
2022-03-17 18:37 3% [PATCH v2] doc: announce changes in bbdev related to enum extension Nicolas Chautru
2022-03-17 18:37 10% ` Nicolas Chautru
2022-03-18 14:35 10% [PATCH] version: 22.07-rc0 David Marchand