* Re: [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI
2019-10-25 13:56 9% ` [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI David Marchand
2019-10-25 15:30 4% ` Burakov, Anatoly
2019-10-25 15:33 4% ` Thomas Monjalon
@ 2019-10-26 18:14 4% ` Kevin Traynor
2 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2019-10-26 18:14 UTC (permalink / raw)
To: David Marchand, dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic
On 25/10/2019 14:56, David Marchand wrote:
> A new accessor has been introduced to provide the hidden information.
> This symbol can now be kept internal.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index cf7744e..3aa1634 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -34,6 +34,10 @@ Deprecation Notices
>
> + ``rte_eal_devargs_type_count``
>
> +* eal: The ``rte_logs`` struct and global symbol will be made private to
> + remove it from the externally visible ABI and allow it to be updated in the
> + future.
> +
> * vfio: removal of ``rte_vfio_dma_map`` and ``rte_vfio_dma_unmap`` APIs which
> have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
> functions. The due date for the removal targets DPDK 20.02.
>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v4] mbuf: support dynamic fields and flags
2019-10-26 12:39 3% ` [dpdk-dev] [PATCH v4] " Olivier Matz
@ 2019-10-26 17:04 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2019-10-26 17:04 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Shahaf Shuler, Stephen Hemminger,
Slava Ovsiienko
26/10/2019 14:39, Olivier Matz:
> Many features require storing data inside the mbuf. As the room in the mbuf
> structure is limited, it is not possible to have a field for each
> feature. Also, changing fields in the mbuf structure can break the API
> or ABI.
>
> This commit addresses these issues, by enabling the dynamic registration
> of fields or flags:
>
> - a dynamic field is a named area in the rte_mbuf structure, with a
> given size (>= 1 byte) and alignment constraint.
> - a dynamic flag is a named bit in the rte_mbuf structure.
>
> The typical use case is a PMD that registers space for an offload
> feature, when the application requests to enable this feature. As
> the space in the mbuf is limited, the space should only be reserved if it
> is going to be used (i.e. when the application explicitly asks for it).
>
> The registration can be done at any moment, but it is not possible
> to unregister fields or flags.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Applied, thanks, this is a new major feature.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v4] mbuf: support dynamic fields and flags
` (2 preceding siblings ...)
2019-10-24 8:13 3% ` [dpdk-dev] [PATCH v3] " Olivier Matz
@ 2019-10-26 12:39 3% ` Olivier Matz
2019-10-26 17:04 0% ` Thomas Monjalon
3 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2019-10-26 12:39 UTC (permalink / raw)
To: dev
Cc: Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Shahaf Shuler, Stephen Hemminger,
Thomas Monjalon, Slava Ovsiienko
Many features require storing data inside the mbuf. As the room in the mbuf
structure is limited, it is not possible to have a field for each
feature. Also, changing fields in the mbuf structure can break the API
or ABI.
This commit addresses these issues, by enabling the dynamic registration
of fields or flags:
- a dynamic field is a named area in the rte_mbuf structure, with a
given size (>= 1 byte) and alignment constraint.
- a dynamic flag is a named bit in the rte_mbuf structure.
The typical use case is a PMD that registers space for an offload
feature, when the application requests to enable this feature. As
the space in the mbuf is limited, the space should only be reserved if it
is going to be used (i.e. when the application explicitly asks for it).
The registration can be done at any moment, but it is not possible
to unregister fields or flags.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v4
* rebase and solve conflicts
v3
* define mark_free() macro outside the init_shared_mem() function
(Konstantin)
* better document automatic field placement (Konstantin)
* introduce RTE_SIZEOF_FIELD() to get the size of a field in
a structure (Haiyue)
* fix api doc generation (Slava)
* document dynamic field and flags naming conventions
v2
* Rebase on top of master: solve conflict with Stephen's patchset
(packet copy)
* Add new apis to register a dynamic field/flag at a specific place
* Add a dump function (sugg by David)
* Enhance field registration function to select the best offset, keeping
large aligned zones as much as possible (sugg by Konstantin)
* Use a size_t and unsigned int instead of int when relevant
(sugg by Konstantin)
* Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
(sugg by Konstantin)
* Remove unused argument in private function (sugg by Konstantin)
* Fix and simplify locking (sugg by Konstantin)
* Fix minor typo
rfc -> v1
* Rebase on top of master
* Change registration API to use a structure instead of
variables, getting rid of #defines (Stephen's comment)
* Update flag registration to use a similar API as fields.
* Change max name length from 32 to 64 (sugg. by Thomas)
* Enhance API documentation (Haiyue's and Andrew's comments)
* Add a debug log at registration
* Add some words in release note
* Did some performance tests (sugg. by Andrew):
On my platform, reading a dynamic field takes ~3 cycles more
than a static field, and ~2 cycles more for writing.
app/test/test_mbuf.c | 143 ++++++
doc/guides/rel_notes/release_19_11.rst | 7 +
lib/librte_eal/common/include/rte_common.h | 12 +
lib/librte_mbuf/Makefile | 2 +
lib/librte_mbuf/meson.build | 6 +-
lib/librte_mbuf/rte_mbuf.h | 15 +
lib/librte_mbuf/rte_mbuf_core.h | 8 +-
lib/librte_mbuf/rte_mbuf_dyn.c | 553 +++++++++++++++++++++
lib/librte_mbuf/rte_mbuf_dyn.h | 239 +++++++++
lib/librte_mbuf/rte_mbuf_version.map | 7 +
10 files changed, 988 insertions(+), 4 deletions(-)
create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index 9fea312c8..854bc26d8 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -32,6 +32,7 @@
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>
+#include <rte_mbuf_dyn.h>
#include "test.h"
@@ -2411,6 +2412,142 @@ test_pktmbuf_ext_shinfo_init_helper(struct rte_mempool *pktmbuf_pool)
return -1;
}
+static int
+test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
+{
+ const struct rte_mbuf_dynfield dynfield = {
+ .name = "test-dynfield",
+ .size = sizeof(uint8_t),
+ .align = __alignof__(uint8_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield2 = {
+ .name = "test-dynfield2",
+ .size = sizeof(uint16_t),
+ .align = __alignof__(uint16_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield3 = {
+ .name = "test-dynfield3",
+ .size = sizeof(uint8_t),
+ .align = __alignof__(uint8_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield_fail_big = {
+ .name = "test-dynfield-fail-big",
+ .size = 256,
+ .align = 1,
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield_fail_align = {
+ .name = "test-dynfield-fail-align",
+ .size = 1,
+ .align = 3,
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag = {
+ .name = "test-dynflag",
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag2 = {
+ .name = "test-dynflag2",
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag3 = {
+ .name = "test-dynflag3",
+ .flags = 0,
+ };
+ struct rte_mbuf *m = NULL;
+ int offset, offset2, offset3;
+ int flag, flag2, flag3;
+ int ret;
+
+ printf("Test mbuf dynamic fields and flags\n");
+ rte_mbuf_dyn_dump(stdout);
+
+ offset = rte_mbuf_dynfield_register(&dynfield);
+ if (offset == -1)
+ GOTO_FAIL("failed to register dynamic field, offset=%d: %s",
+ offset, strerror(errno));
+
+ ret = rte_mbuf_dynfield_register(&dynfield);
+ if (ret != offset)
+ GOTO_FAIL("failed to lookup dynamic field, ret=%d: %s",
+ ret, strerror(errno));
+
+ offset2 = rte_mbuf_dynfield_register(&dynfield2);
+ if (offset2 == -1 || offset2 == offset || (offset2 & 1))
+ GOTO_FAIL("failed to register dynamic field 2, offset2=%d: %s",
+ offset2, strerror(errno));
+
+ offset3 = rte_mbuf_dynfield_register_offset(&dynfield3,
+ offsetof(struct rte_mbuf, dynfield1[1]));
+ if (offset3 != offsetof(struct rte_mbuf, dynfield1[1]))
+ GOTO_FAIL("failed to register dynamic field 3, offset=%d: %s",
+ offset3, strerror(errno));
+
+ printf("dynfield: offset=%d, offset2=%d, offset3=%d\n",
+ offset, offset2, offset3);
+
+ ret = rte_mbuf_dynfield_register(&dynfield_fail_big);
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (too big)");
+
+ ret = rte_mbuf_dynfield_register(&dynfield_fail_align);
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (bad alignment)");
+
+ ret = rte_mbuf_dynfield_register_offset(&dynfield_fail_align,
+ offsetof(struct rte_mbuf, ol_flags));
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (not avail)");
+
+ flag = rte_mbuf_dynflag_register(&dynflag);
+ if (flag == -1)
+ GOTO_FAIL("failed to register dynamic flag, flag=%d: %s",
+ flag, strerror(errno));
+
+ ret = rte_mbuf_dynflag_register(&dynflag);
+ if (ret != flag)
+ GOTO_FAIL("failed to lookup dynamic flag, ret=%d: %s",
+ ret, strerror(errno));
+
+ flag2 = rte_mbuf_dynflag_register(&dynflag2);
+ if (flag2 == -1 || flag2 == flag)
+ GOTO_FAIL("failed to register dynamic flag 2, flag2=%d: %s",
+ flag2, strerror(errno));
+
+ flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
+ rte_bsf64(PKT_LAST_FREE));
+ if (flag3 != rte_bsf64(PKT_LAST_FREE))
+ GOTO_FAIL("failed to register dynamic flag 3, flag2=%d: %s",
+ flag3, strerror(errno));
+
+ printf("dynflag: flag=%d, flag2=%d, flag3=%d\n", flag, flag2, flag3);
+
+ /* set, get dynamic field */
+ m = rte_pktmbuf_alloc(pktmbuf_pool);
+ if (m == NULL)
+ GOTO_FAIL("Cannot allocate mbuf");
+
+ *RTE_MBUF_DYNFIELD(m, offset, uint8_t *) = 1;
+ if (*RTE_MBUF_DYNFIELD(m, offset, uint8_t *) != 1)
+ GOTO_FAIL("failed to read dynamic field");
+ *RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) = 1000;
+ if (*RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) != 1000)
+ GOTO_FAIL("failed to read dynamic field");
+
+ /* set a dynamic flag */
+ m->ol_flags |= (1ULL << flag);
+
+ rte_mbuf_dyn_dump(stdout);
+ rte_pktmbuf_free(m);
+ return 0;
+fail:
+ rte_pktmbuf_free(m);
+ return -1;
+}
+
static int
test_mbuf(void)
{
@@ -2431,6 +2568,12 @@ test_mbuf(void)
goto err;
}
+ /* test registration of dynamic fields and flags */
+ if (test_mbuf_dyn(pktmbuf_pool) < 0) {
+ printf("mbuf dynflag test failed\n");
+ goto err;
+ }
+
/* create a specific pktmbuf pool with a priv_size != 0 and no data
* room size */
pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 2b4cbe6e3..603d618a5 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -21,6 +21,13 @@ DPDK Release 19.11
xdg-open build/doc/html/guides/rel_notes/release_19_11.html
+* **Add support of dynamic fields and flags in mbuf.**
+
+ This new feature adds the ability to dynamically register some room
+ for a field or a flag in the mbuf structure. This is typically used
+ for specific offload features, where adding a static field or flag
+ in the mbuf is not justified.
+
New Features
------------
diff --git a/lib/librte_eal/common/include/rte_common.h b/lib/librte_eal/common/include/rte_common.h
index 7ee94d698..459d082d1 100644
--- a/lib/librte_eal/common/include/rte_common.h
+++ b/lib/librte_eal/common/include/rte_common.h
@@ -675,6 +675,18 @@ rte_log2_u64(uint64_t v)
})
#endif
+/**
+ * Get the size of a field in a structure.
+ *
+ * @param type
+ * The type of the structure.
+ * @param field
+ * The field in the structure.
+ * @return
+ * The size of the field in the structure, in bytes.
+ */
+#define RTE_SIZEOF_FIELD(type, field) (sizeof(((type *)0)->field))
+
#define _RTE_STR(x) #x
/** Take a macro value and get a string version of it */
#define RTE_STR(x) _RTE_STR(x)
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
index f3b76ad23..019c8dd8f 100644
--- a/lib/librte_mbuf/Makefile
+++ b/lib/librte_mbuf/Makefile
@@ -17,11 +17,13 @@ LIBABIVER := 5
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c rte_mbuf_pool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF) += rte_mbuf_dyn.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h
SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_core.h
SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_ptype.h
SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_pool_ops.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_dyn.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build
index 36bb6eb9d..59fd07224 100644
--- a/lib/librte_mbuf/meson.build
+++ b/lib/librte_mbuf/meson.build
@@ -2,9 +2,11 @@
# Copyright(c) 2017 Intel Corporation
version = 5
-sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c')
+sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c',
+ 'rte_mbuf_dyn.c')
headers = files('rte_mbuf.h', 'rte_mbuf_core.h',
- 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
+ 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h',
+ 'rte_mbuf_dyn.h')
deps += ['mempool']
allow_experimental_apis = true
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index bd26764a2..92d81972a 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -1000,6 +1000,20 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
*/
#define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
+/**
+ * Copy dynamic fields from msrc to mdst.
+ *
+ * @param mdst
+ * The destination mbuf.
+ * @param msrc
+ * The source mbuf.
+ */
+static inline void
+rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
+{
+ memcpy(&mdst->dynfield1, msrc->dynfield1, sizeof(mdst->dynfield1));
+}
+
/* internal */
static inline void
__rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
@@ -1011,6 +1025,7 @@ __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
mdst->hash = msrc->hash;
mdst->packet_type = msrc->packet_type;
mdst->timestamp = msrc->timestamp;
+ rte_mbuf_dynfield_copy(mdst, msrc);
}
/**
diff --git a/lib/librte_mbuf/rte_mbuf_core.h b/lib/librte_mbuf/rte_mbuf_core.h
index 3398c12c8..302270146 100644
--- a/lib/librte_mbuf/rte_mbuf_core.h
+++ b/lib/librte_mbuf/rte_mbuf_core.h
@@ -184,9 +184,12 @@ extern "C" {
#define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
#define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
-/* add new RX flags here */
+/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
-/* add new TX flags here */
+#define PKT_FIRST_FREE (1ULL << 23)
+#define PKT_LAST_FREE (1ULL << 39)
+
+/* add new TX flags here, don't forget to update PKT_LAST_FREE */
/**
* Indicate that the metadata field in the mbuf is in use.
@@ -689,6 +692,7 @@ struct rte_mbuf {
*/
struct rte_mbuf_ext_shared_info *shinfo;
+ uint64_t dynfield1[2]; /**< Reserved for dynamic fields. */
} __rte_cache_aligned;
/**
diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
new file mode 100644
index 000000000..d6931f847
--- /dev/null
+++ b/lib/librte_mbuf/rte_mbuf_dyn.c
@@ -0,0 +1,553 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019 6WIND S.A.
+ */
+
+#include <sys/queue.h>
+#include <stdint.h>
+#include <limits.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_tailq.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
+
+struct mbuf_dynfield_elt {
+ TAILQ_ENTRY(mbuf_dynfield_elt) next;
+ struct rte_mbuf_dynfield params;
+ size_t offset;
+};
+TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
+
+static struct rte_tailq_elem mbuf_dynfield_tailq = {
+ .name = "RTE_MBUF_DYNFIELD",
+};
+EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
+
+struct mbuf_dynflag_elt {
+ TAILQ_ENTRY(mbuf_dynflag_elt) next;
+ struct rte_mbuf_dynflag params;
+ unsigned int bitnum;
+};
+TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
+
+static struct rte_tailq_elem mbuf_dynflag_tailq = {
+ .name = "RTE_MBUF_DYNFLAG",
+};
+EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
+
+struct mbuf_dyn_shm {
+ /**
+ * For each mbuf byte, free_space[i] != 0 if space is free.
+ * The value is the size of the biggest aligned element that
+ * can fit in the zone.
+ */
+ uint8_t free_space[sizeof(struct rte_mbuf)];
+ /** Bitfield of available flags. */
+ uint64_t free_flags;
+};
+static struct mbuf_dyn_shm *shm;
+
+/* Set the value of free_space[] according to the size and alignment of
+ * the free areas. This helps to select the best place when reserving a
+ * dynamic field. Assume tailq is locked.
+ */
+static void
+process_score(void)
+{
+ size_t off, align, size, i;
+
+ /* first, erase previous info */
+ for (i = 0; i < sizeof(struct rte_mbuf); i++) {
+ if (shm->free_space[i])
+ shm->free_space[i] = 1;
+ }
+
+ for (off = 0; off < sizeof(struct rte_mbuf); off++) {
+ /* get the size of the free zone */
+ for (size = 0; shm->free_space[off + size]; size++)
+ ;
+ if (size == 0)
+ continue;
+
+ /* get the alignment of biggest object that can fit in
+ * the zone at this offset.
+ */
+ for (align = 1;
+ (off % (align << 1)) == 0 && (align << 1) <= size;
+ align <<= 1)
+ ;
+
+ /* save it in free_space[] */
+ for (i = off; i < off + size; i++)
+ shm->free_space[i] = RTE_MAX(align, shm->free_space[i]);
+ }
+}
+
+/* Mark the area occupied by a mbuf field as available in the shm. */
+#define mark_free(field) \
+ memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
+ 1, sizeof(((struct rte_mbuf *)0)->field))
+
+/* Allocate and initialize the shared memory. Assume tailq is locked */
+static int
+init_shared_mem(void)
+{
+ const struct rte_memzone *mz;
+ uint64_t mask;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
+ sizeof(struct mbuf_dyn_shm),
+ SOCKET_ID_ANY, 0,
+ RTE_CACHE_LINE_SIZE);
+ } else {
+ mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
+ }
+ if (mz == NULL)
+ return -1;
+
+ shm = mz->addr;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ /* init free_space, keep it sync'd with
+ * rte_mbuf_dynfield_copy().
+ */
+ memset(shm, 0, sizeof(*shm));
+ mark_free(dynfield1);
+
+ /* init free_flags */
+ for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
+ shm->free_flags |= mask;
+
+ process_score();
+ }
+
+ return 0;
+}
+
+/* check if this offset can be used */
+static int
+check_offset(size_t offset, size_t size, size_t align)
+{
+ size_t i;
+
+ if ((offset & (align - 1)) != 0)
+ return -1;
+ if (offset + size > sizeof(struct rte_mbuf))
+ return -1;
+
+ for (i = 0; i < size; i++) {
+ if (!shm->free_space[i + offset])
+ return -1;
+ }
+
+ return 0;
+}
+
+/* assume tailq is locked */
+static struct mbuf_dynfield_elt *
+__mbuf_dynfield_lookup(const char *name)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *mbuf_dynfield;
+ struct rte_tailq_entry *te;
+
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+
+ TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
+ mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
+ if (strcmp(name, mbuf_dynfield->params.name) == 0)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return mbuf_dynfield;
+}
+
+int
+rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
+{
+ struct mbuf_dynfield_elt *mbuf_dynfield;
+
+ if (shm == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ rte_mcfg_tailq_read_lock();
+ mbuf_dynfield = __mbuf_dynfield_lookup(name);
+ rte_mcfg_tailq_read_unlock();
+
+ if (mbuf_dynfield == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ if (params != NULL)
+ memcpy(params, &mbuf_dynfield->params, sizeof(*params));
+
+ return mbuf_dynfield->offset;
+}
+
+static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
+ const struct rte_mbuf_dynfield *params2)
+{
+ if (strcmp(params1->name, params2->name))
+ return -1;
+ if (params1->size != params2->size)
+ return -1;
+ if (params1->align != params2->align)
+ return -1;
+ if (params1->flags != params2->flags)
+ return -1;
+ return 0;
+}
+
+/* assume tailq is locked */
+static int
+__rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t req)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
+ struct rte_tailq_entry *te = NULL;
+ unsigned int best_zone = UINT_MAX;
+ size_t i, offset;
+ int ret;
+
+ if (shm == NULL && init_shared_mem() < 0)
+ return -1;
+
+ mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
+ if (mbuf_dynfield != NULL) {
+ if (req != SIZE_MAX && req != mbuf_dynfield->offset) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) < 0) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ return mbuf_dynfield->offset;
+ }
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ rte_errno = EPERM;
+ return -1;
+ }
+
+ if (req == SIZE_MAX) {
+ /* Find the best place to put this field: we search the
+ * lowest value of shm->free_space[offset]: the zones
+ * containing room for larger fields are kept for later.
+ */
+ for (offset = 0;
+ offset < sizeof(struct rte_mbuf);
+ offset++) {
+ if (check_offset(offset, params->size,
+ params->align) == 0 &&
+ shm->free_space[offset] < best_zone) {
+ best_zone = shm->free_space[offset];
+ req = offset;
+ }
+ }
+ if (req == SIZE_MAX) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+ } else {
+ if (check_offset(req, params->size, params->align) < 0) {
+ rte_errno = EBUSY;
+ return -1;
+ }
+ }
+
+ offset = req;
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+
+ te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL)
+ return -1;
+
+ mbuf_dynfield = rte_zmalloc("mbuf_dynfield", sizeof(*mbuf_dynfield), 0);
+ if (mbuf_dynfield == NULL) {
+ rte_free(te);
+ return -1;
+ }
+
+ ret = strlcpy(mbuf_dynfield->params.name, params->name,
+ sizeof(mbuf_dynfield->params.name));
+ if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
+ rte_errno = ENAMETOOLONG;
+ rte_free(mbuf_dynfield);
+ rte_free(te);
+ return -1;
+ }
+ memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
+ mbuf_dynfield->offset = offset;
+ te->data = mbuf_dynfield;
+
+ TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
+
+ for (i = offset; i < offset + params->size; i++)
+ shm->free_space[i] = 0;
+ process_score();
+
+ RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n",
+ params->name, params->size, params->align, params->flags,
+ offset);
+
+ return offset;
+}
+
+int
+rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t req)
+{
+ int ret;
+
+ if (params->size >= sizeof(struct rte_mbuf)) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+ if (!rte_is_power_of_2(params->align)) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+ if (params->flags != 0) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ rte_mcfg_tailq_write_lock();
+ ret = __rte_mbuf_dynfield_register_offset(params, req);
+ rte_mcfg_tailq_write_unlock();
+
+ return ret;
+}
+
+int
+rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
+{
+ return rte_mbuf_dynfield_register_offset(params, SIZE_MAX);
+}
+
+/* assume tailq is locked */
+static struct mbuf_dynflag_elt *
+__mbuf_dynflag_lookup(const char *name)
+{
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *mbuf_dynflag;
+ struct rte_tailq_entry *te;
+
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+
+ TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
+ mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
+ if (strncmp(name, mbuf_dynflag->params.name,
+ RTE_MBUF_DYN_NAMESIZE) == 0)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return mbuf_dynflag;
+}
+
+int
+rte_mbuf_dynflag_lookup(const char *name,
+ struct rte_mbuf_dynflag *params)
+{
+ struct mbuf_dynflag_elt *mbuf_dynflag;
+
+ if (shm == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ rte_mcfg_tailq_read_lock();
+ mbuf_dynflag = __mbuf_dynflag_lookup(name);
+ rte_mcfg_tailq_read_unlock();
+
+ if (mbuf_dynflag == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ if (params != NULL)
+ memcpy(params, &mbuf_dynflag->params, sizeof(*params));
+
+ return mbuf_dynflag->bitnum;
+}
+
+static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
+ const struct rte_mbuf_dynflag *params2)
+{
+ if (strcmp(params1->name, params2->name))
+ return -1;
+ if (params1->flags != params2->flags)
+ return -1;
+ return 0;
+}
+
+/* assume tailq is locked */
+static int
+__rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int req)
+{
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
+ struct rte_tailq_entry *te = NULL;
+ unsigned int bitnum;
+ int ret;
+
+ if (shm == NULL && init_shared_mem() < 0)
+ return -1;
+
+ mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
+ if (mbuf_dynflag != NULL) {
+ if (req != UINT_MAX && req != mbuf_dynflag->bitnum) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ return mbuf_dynflag->bitnum;
+ }
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ rte_errno = EPERM;
+ return -1;
+ }
+
+ if (req == UINT_MAX) {
+ if (shm->free_flags == 0) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+ bitnum = rte_bsf64(shm->free_flags);
+ } else {
+ if ((shm->free_flags & (1ULL << req)) == 0) {
+ rte_errno = EBUSY;
+ return -1;
+ }
+ bitnum = req;
+ }
+
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+
+ te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL)
+ return -1;
+
+ mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag), 0);
+ if (mbuf_dynflag == NULL) {
+ rte_free(te);
+ return -1;
+ }
+
+ ret = strlcpy(mbuf_dynflag->params.name, params->name,
+ sizeof(mbuf_dynflag->params.name));
+ if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
+ rte_free(mbuf_dynflag);
+ rte_free(te);
+ rte_errno = ENAMETOOLONG;
+ return -1;
+ }
+ mbuf_dynflag->bitnum = bitnum;
+ te->data = mbuf_dynflag;
+
+ TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
+
+ shm->free_flags &= ~(1ULL << bitnum);
+
+ RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
+ params->name, params->flags, bitnum);
+
+ return bitnum;
+}
+
+int
+rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int req)
+{
+ int ret;
+
+ if (req >= RTE_SIZEOF_FIELD(struct rte_mbuf, ol_flags) * CHAR_BIT &&
+ req != UINT_MAX) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ rte_mcfg_tailq_write_lock();
+ ret = __rte_mbuf_dynflag_register_bitnum(params, req);
+ rte_mcfg_tailq_write_unlock();
+
+ return ret;
+}
+
+int
+rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
+{
+ return rte_mbuf_dynflag_register_bitnum(params, UINT_MAX);
+}
+
+void rte_mbuf_dyn_dump(FILE *out)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *dynfield;
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *dynflag;
+ struct rte_tailq_entry *te;
+ size_t i;
+
+ rte_mcfg_tailq_write_lock();
+ init_shared_mem();
+ fprintf(out, "Reserved fields:\n");
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+ TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
+ dynfield = (struct mbuf_dynfield_elt *)te->data;
+ fprintf(out, " name=%s offset=%zd size=%zd align=%zd flags=%x\n",
+ dynfield->params.name, dynfield->offset,
+ dynfield->params.size, dynfield->params.align,
+ dynfield->params.flags);
+ }
+ fprintf(out, "Reserved flags:\n");
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+ TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
+ dynflag = (struct mbuf_dynflag_elt *)te->data;
+ fprintf(out, " name=%s bitnum=%u flags=%x\n",
+ dynflag->params.name, dynflag->bitnum,
+ dynflag->params.flags);
+ }
+ fprintf(out, "Free space in mbuf (0 = free, value = zone alignment):\n");
+ for (i = 0; i < sizeof(struct rte_mbuf); i++) {
+ if ((i % 8) == 0)
+ fprintf(out, " %4.4zx: ", i);
+ fprintf(out, "%2.2x%s", shm->free_space[i],
+ (i % 8 != 7) ? " " : "\n");
+ }
+ rte_mcfg_tailq_write_unlock();
+}
diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
new file mode 100644
index 000000000..2e9d418cf
--- /dev/null
+++ b/lib/librte_mbuf/rte_mbuf_dyn.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019 6WIND S.A.
+ */
+
+#ifndef _RTE_MBUF_DYN_H_
+#define _RTE_MBUF_DYN_H_
+
+/**
+ * @file
+ * RTE Mbuf dynamic fields and flags
+ *
+ * Many DPDK features require storing data inside the mbuf. As the room
+ * in mbuf structure is limited, it is not possible to have a field for
+ * each feature. Also, changing fields in the mbuf structure can break
+ * the API or ABI.
+ *
+ * This module addresses this issue, by enabling the dynamic
+ * registration of fields or flags:
+ *
+ * - a dynamic field is a named area in the rte_mbuf structure, with a
+ * given size (>= 1 byte) and alignment constraint.
+ * - a dynamic flag is a named bit in the rte_mbuf structure, stored
+ * in mbuf->ol_flags.
+ *
+ * The placement of the field or flag can be automatic, in which case the
+ * zones that have the smallest size and alignment constraint are
+ * selected first. Otherwise, a specific field offset or flag bit
+ * number can be requested through the API.
+ *
+ * The typical use case is when a specific offload feature requires
+ * registering a dedicated offload field in the mbuf structure, and adding
+ * a static field or flag is not justified.
+ *
+ * Example of use:
+ *
+ * - A rte_mbuf_dynfield structure is defined, containing the parameters
+ * of the dynamic field to be registered:
+ * const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
+ * - The application initializes the PMD, and asks for this feature
+ * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ * rxconf. This will make the PMD register the field by calling
+ * rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
+ * stores the returned offset.
+ * - The application that uses the offload feature also registers
+ * the field to retrieve the same offset.
+ * - When the PMD receives a packet, it can set the field:
+ * *RTE_MBUF_DYNFIELD(m, offset, <type *>) = value;
+ * - In the main loop, the application can retrieve the value with
+ * the same macro.
+ *
+ * To avoid wasting space, the dynamic fields or flags must only be
+ * reserved on demand, when an application asks for the related feature.
+ *
+ * The registration can be done at any moment, but it is not possible
+ * to unregister fields or flags for now.
+ *
+ * A dynamic field can also be reserved and used by an application
+ * alone; it can for instance hold a packet mark.
+ *
+ * To avoid namespace collisions, the dynamic mbuf field or flag names
+ * have to be chosen with care. It is advised to use the same
+ * conventions as function names in DPDK:
+ * - "rte_mbuf_dynfield_<name>" if defined in mbuf library
+ * - "rte_<libname>_dynfield_<name>" if defined in another library
+ * - "rte_net_<pmd>_dynfield_<name>" if defined in a PMD
+ * - any name that does not start with "rte_" in an application
+ */
+
+#include <sys/types.h>
+/**
+ * Maximum length of the dynamic field or flag string.
+ */
+#define RTE_MBUF_DYN_NAMESIZE 64
+
+/**
+ * Structure describing the parameters of a mbuf dynamic field.
+ */
+struct rte_mbuf_dynfield {
+ char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the field. */
+ size_t size; /**< The number of bytes to reserve. */
+ size_t align; /**< The alignment constraint (power of 2). */
+ unsigned int flags; /**< Reserved for future use, must be 0. */
+};
+
+/**
+ * Structure describing the parameters of a mbuf dynamic flag.
+ */
+struct rte_mbuf_dynflag {
+ char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the dynamic flag. */
+ unsigned int flags; /**< Reserved for future use, must be 0. */
+};
+
+/**
+ * Register space for a dynamic field in the mbuf structure.
+ *
+ * If the field is already registered (same name and parameters), its
+ * offset is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters (name, size,
+ * alignment constraint and flags).
+ * @return
+ * The offset in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, or flags).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: not enough room in mbuf.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name does not end with \0.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params);
+
+/**
+ * Register space for a dynamic field in the mbuf structure at offset.
+ *
+ * If the field is already registered (same name, parameters and offset),
+ * the offset is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters (name, size,
+ * alignment constraint and flags).
+ * @param offset
+ * The requested offset. Ignored if SIZE_MAX is passed.
+ * @return
+ * The offset in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, flags, or offset).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EBUSY: the requested offset cannot be used.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: not enough room in mbuf.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name does not end with \0.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t offset);
+
+/**
+ * Look up a registered dynamic mbuf field.
+ *
+ * @param name
+ * A string identifying the dynamic field.
+ * @param params
+ * If not NULL, and if the lookup is successful, the structure is
+ * filled with the parameters of the dynamic field.
+ * @return
+ * The offset of this field in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - ENOENT: no dynamic field matches this name.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_lookup(const char *name,
+ struct rte_mbuf_dynfield *params);
+
+/**
+ * Register a dynamic flag in the mbuf structure.
+ *
+ * If the flag is already registered (same name and parameters), its
+ * bitnum is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters of the dynamic
+ * flag (name and options).
+ * @return
+ * The number of the reserved bit, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, or flags).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: no more flag available.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params);
+
+/**
+ * Register a dynamic flag in the mbuf structure specifying bitnum.
+ *
+ * If the flag is already registered (same name, parameters and bitnum),
+ * the bitnum is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters of the dynamic
+ * flag (name and options).
+ * @param bitnum
+ * The requested bitnum. Ignored if UINT_MAX is passed.
+ * @return
+ * The number of the reserved bit, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, or flags).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EBUSY: the requested bitnum cannot be used.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: no more flag available.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int bitnum);
+
+/**
+ * Look up a registered dynamic mbuf flag.
+ *
+ * @param name
+ * A string identifying the dynamic flag.
+ * @param params
+ * If not NULL, and if the lookup is successful, the structure is
+ * filled with the parameters of the dynamic flag.
+ * @return
+ * The bit number of this flag in mbuf->ol_flags, or -1 on error.
+ * Possible values for rte_errno:
+ * - ENOENT: no dynamic flag matches this name.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_lookup(const char *name,
+ struct rte_mbuf_dynflag *params);
+
+/**
+ * Helper macro to access a dynamic field.
+ */
+#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
+
+/**
+ * Dump the status of dynamic fields and flags.
+ *
+ * @param out
+ * The stream where the status is displayed.
+ */
+__rte_experimental
+void rte_mbuf_dyn_dump(FILE *out);
+
+/* Placeholder for dynamic fields and flags declarations. */
+
+#endif
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index a4f41d7fd..263dc0a21 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -58,6 +58,13 @@ EXPERIMENTAL {
global:
rte_mbuf_check;
+ rte_mbuf_dynfield_lookup;
+ rte_mbuf_dynfield_register;
+ rte_mbuf_dynfield_register_offset;
+ rte_mbuf_dynflag_lookup;
+ rte_mbuf_dynflag_register;
+ rte_mbuf_dynflag_register_bitnum;
+ rte_mbuf_dyn_dump;
rte_pktmbuf_copy;
rte_pktmbuf_free_bulk;
--
2.20.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [RFC v2 0/7] RFC: Support MACSEC offload in the RTE_SECURITY infrastructure.
@ 2019-10-25 17:53 2% Pavel Belous
0 siblings, 0 replies; 200+ results
From: Pavel Belous @ 2019-10-25 17:53 UTC (permalink / raw)
To: dev
Cc: Ferruh Yigit, Akhil Goyal, John McNamara, Declan Doherty,
Konstantin Ananyev, Thomas Monjalon, Igor Russkikh,
Fenilkumar Patel, Hitesh K Maisheri, Pavel Belous
From: Pavel Belous <Pavel.Belous@aquantia.com>
This RFC suggests a possible API to implement generic MACSEC HW
offload in the DPDK infrastructure.
Right now two PMDs implement MACSEC HW offload via a private
API: ixgbe (Intel) and atlantic (Aquantia).
During that private API discussion it was decided to go further
with a well-defined public API, based most probably on the
rte_security infrastructure.
Here is that previous discussion:
http://inbox.dpdk.org/dev/20190416101145.nVecHKp3w14Ptd_hne-DqHhKyzbre88PwNI-OAowXJM@z/
Declaring the MACSEC API via rte_security gives a good data-centric view of the
parameters and operations MACSEC supports. The old, purely functional API
(basically an ixgbe-only API) presented function calls with big argument lists,
which are hard to extend and analyse.
However, I'd like to note that rte_security has to be used via explicitly
created mempools - this complicates the usage a bit.
It also may be hard to extend the structures in an ABI-compatible way.
One of the problems with MACSEC is that the internal implementation and hardware
support could be either very simple, doing only endpoint encryption with a single
TX SC (Secure Connection), or quite complex, capable of flexible filtering
and SC matching based on MAC, VLAN, EtherType and other fields.
Different MACSEC hardware supports custom features, and from our experience
users would like to configure these as well. Therefore support for a number
of PMD-specific MACSEC operations will probably be needed.
Examples include: custom in-the-clear tag (matched by vlan id or mask),
configurable internal logic to allow both secure and unsecure traffic,
bypass filters on specific ethertypes.
To support such extensions, we suggest using an rte_security_macsec_op enum
with vendor-specific operation codes.
In the context of rte_security, MACSEC operations should normally be based on
security session create and update calls.
Session create is used to set up the overall session. That's the equivalent of
the old `macsec enable` operation.
Session update is used to update security connections and associations.
Here xform->op contains the required operation: rx/tx session/association
add/update/removal.
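Sketched in pseudocode, the create/update flow above might look like this (the
op codes and xform field names below are illustrative placeholders, not the
RFC's actual declarations):

```
/* pseudocode -- op and field names are illustrative only */
conf.protocol  = RTE_SECURITY_PROTOCOL_MACSEC;
conf.macsec.op = MACSEC_OP_CONFIG_ENABLE;       /* old "macsec enable" */
sess = rte_security_session_create(ctx, &conf, sess_mempool);

conf.macsec.op = MACSEC_OP_TX_SC_ADD;           /* add a TX Secure Connection */
rte_security_session_update(ctx, sess, &conf);
```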
This RFC contains:
- patches 1-2 are the rte_security data structure declarations and documentation
- patches 3-5 MACSEC implementation for atlantic (Aquantia) driver, using
new rte_security interface.
- patches 6-7 are a draft of how testpmd-based invocations of the rte_security
API will look
To be done/decided:
- add missing documentation and comments to all the structures
- full testpmd macsec API adoption
- ixgbe API adoption
- decide on how to declare SA (Security Association) auto rollover and
some other important features.
- detail the interrupt event callback for possible MACSEC events.
Note that this is not a part of rte_security, but a part of rte_ethdev.
- add ability to retrieve MACSEC statistics per individual SC/SA.
Pavel Belous (7):
security: MACSEC infrastructure data declarations
security: Update rte_security documentation
net/atlantic: Add helper functions for PHY access
net/atlantic: add MACSEC internal HW data declaration and functions
net/atlantic: implementation of the MACSEC using rte_security
interface
app/testpmd: macsec on/off commands using rte_security interface
app/testpmd: macsec adding RX/TX SC using rte_security interface
app/test-pmd/Makefile | 1 +
app/test-pmd/cmdline.c | 20 +-
app/test-pmd/macsec.c | 138 ++
app/test-pmd/macsec.h | 14 +
app/test-pmd/meson.build | 3 +-
doc/guides/prog_guide/rte_security.rst | 4 -
drivers/net/atlantic/Makefile | 5 +-
drivers/net/atlantic/atl_ethdev.c | 316 +---
drivers/net/atlantic/atl_sec.c | 615 ++++++++
drivers/net/atlantic/atl_sec.h | 124 ++
drivers/net/atlantic/hw_atl/hw_atl_utils.h | 116 +-
drivers/net/atlantic/macsec/MSS_Egress_registers.h | 1498 ++++++++++++++++++
.../net/atlantic/macsec/MSS_Ingress_registers.h | 1135 ++++++++++++++
drivers/net/atlantic/macsec/macsec_api.c | 1612 ++++++++++++++++++++
drivers/net/atlantic/macsec/macsec_api.h | 111 ++
drivers/net/atlantic/macsec/macsec_struct.h | 269 ++++
drivers/net/atlantic/macsec/mdio.c | 328 ++++
drivers/net/atlantic/macsec/mdio.h | 19 +
drivers/net/atlantic/meson.build | 6 +-
drivers/net/atlantic/rte_pmd_atlantic.c | 102 --
drivers/net/atlantic/rte_pmd_atlantic.h | 144 --
drivers/net/atlantic/rte_pmd_atlantic_version.map | 16 -
lib/librte_security/rte_security.h | 143 +-
23 files changed, 6080 insertions(+), 659 deletions(-)
create mode 100644 app/test-pmd/macsec.c
create mode 100644 app/test-pmd/macsec.h
create mode 100644 drivers/net/atlantic/atl_sec.c
create mode 100644 drivers/net/atlantic/atl_sec.h
create mode 100644 drivers/net/atlantic/macsec/MSS_Egress_registers.h
create mode 100644 drivers/net/atlantic/macsec/MSS_Ingress_registers.h
create mode 100644 drivers/net/atlantic/macsec/macsec_api.c
create mode 100644 drivers/net/atlantic/macsec/macsec_api.h
create mode 100644 drivers/net/atlantic/macsec/macsec_struct.h
create mode 100644 drivers/net/atlantic/macsec/mdio.c
create mode 100644 drivers/net/atlantic/macsec/mdio.h
delete mode 100644 drivers/net/atlantic/rte_pmd_atlantic.c
delete mode 100644 drivers/net/atlantic/rte_pmd_atlantic.h
delete mode 100644 drivers/net/atlantic/rte_pmd_atlantic_version.map
--
2.7.4
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v7 4/4] doc: add maintainer for abi policy
2019-10-25 16:28 10% [dpdk-dev] [PATCH v7 0/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
` (2 preceding siblings ...)
2019-10-25 16:28 30% ` [dpdk-dev] [PATCH v7 3/4] doc: updates to versioning guide for " Ray Kinsella
@ 2019-10-25 16:28 13% ` Ray Kinsella
3 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 16:28 UTC (permalink / raw)
To: dev
Cc: mdr, thomas, stephen, bruce.richardson, ferruh.yigit,
konstantin.ananyev, jerinj, olivier.matz, nhorman,
maxime.coquelin, john.mcnamara, marko.kovacevic, hemant.agrawal,
ktraynor, aconole
Add an entry to the maintainer file for the abi policy.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
MAINTAINERS | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/MAINTAINERS b/MAINTAINERS
index f0f555b..6ae7fb3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -80,6 +80,10 @@ M: Marko Kovacevic <marko.kovacevic@intel.com>
F: README
F: doc/
+ABI Policy
+M: Ray Kinsella <mdr@ashroe.eu>
+F: doc/guides/contributing/abi_*.rst
+
Developers and Maintainers Tools
M: Thomas Monjalon <thomas@monjalon.net>
F: MAINTAINERS
--
2.7.4
^ permalink raw reply [relevance 13%]
* [dpdk-dev] [PATCH v7 2/4] doc: changes to abi policy introducing major abi versions
2019-10-25 16:28 10% [dpdk-dev] [PATCH v7 0/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
2019-10-25 16:28 13% ` [dpdk-dev] [PATCH v7 1/4] doc: separate versioning.rst into version and policy Ray Kinsella
@ 2019-10-25 16:28 23% ` Ray Kinsella
2019-10-25 16:28 30% ` [dpdk-dev] [PATCH v7 3/4] doc: updates to versioning guide for " Ray Kinsella
2019-10-25 16:28 13% ` [dpdk-dev] [PATCH v7 4/4] doc: add maintainer for abi policy Ray Kinsella
3 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 16:28 UTC (permalink / raw)
To: dev
Cc: mdr, thomas, stephen, bruce.richardson, ferruh.yigit,
konstantin.ananyev, jerinj, olivier.matz, nhorman,
maxime.coquelin, john.mcnamara, marko.kovacevic, hemant.agrawal,
ktraynor, aconole
This policy change introduces major ABI versions; these are
declared every year, typically aligned with the LTS release,
and are supported by subsequent releases in the following year.
This change is intended to improve ABI stability for those projects
consuming DPDK.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
doc/guides/contributing/abi_policy.rst | 321 ++++--
.../contributing/img/abi_stability_policy.svg | 1059 ++++++++++++++++++++
doc/guides/contributing/img/what_is_an_abi.svg | 382 +++++++
doc/guides/contributing/stable.rst | 12 +-
4 files changed, 1683 insertions(+), 91 deletions(-)
create mode 100644 doc/guides/contributing/img/abi_stability_policy.svg
create mode 100644 doc/guides/contributing/img/what_is_an_abi.svg
diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
index d4f4e9f..620e320 100644
--- a/doc/guides/contributing/abi_policy.rst
+++ b/doc/guides/contributing/abi_policy.rst
@@ -1,31 +1,44 @@
.. SPDX-License-Identifier: BSD-3-Clause
- Copyright 2018 The DPDK contributors
+ Copyright 2019 The DPDK contributors
-DPDK ABI/API policy
-===================
+ABI Policy
+==========
Description
-----------
-This document details some methods for handling ABI management in the DPDK.
+This document details the management policy that ensures the long-term stability
+of the DPDK ABI and API.
General Guidelines
------------------
-#. Whenever possible, ABI should be preserved
-#. ABI/API may be changed with a deprecation process
-#. The modification of symbols can generally be managed with versioning
-#. Libraries or APIs marked in ``experimental`` state may change without constraint
-#. New APIs will be marked as ``experimental`` for at least one release to allow
- any issues found by users of the new API to be fixed quickly
-#. The addition of symbols is generally not problematic
-#. The removal of symbols generally is an ABI break and requires bumping of the
- LIBABIVER macro
-#. Updates to the minimum hardware requirements, which drop support for hardware which
- was previously supported, should be treated as an ABI change.
-
-What is an ABI
-~~~~~~~~~~~~~~
+#. Major ABI versions are declared every **year** and are then supported for one
+ year, typically aligned with the :ref:`LTS release <stable_lts_releases>`.
+#. The ABI version is managed at a project level in DPDK, with the ABI version
+ reflected in all :ref:`library's soname <what_is_soname>`.
+#. The ABI should be preserved and not changed lightly. ABI changes must follow
+ the outlined :ref:`deprecation process <abi_changes>`.
+#. The addition of symbols is generally not problematic. The modification of
+ symbols is managed with :ref:`ABI Versioning <abi_versioning>`.
+#. The removal of symbols is considered an :ref:`ABI breakage <abi_breakages>`,
+ once approved these will form part of the next ABI version.
+#. Libraries or APIs marked as :ref:`Experimental <experimental_apis>` are not
+ considered part of an ABI version and may change without constraint.
+#. Updates to the :ref:`minimum hardware requirements <hw_rqmts>`, which drop
+ support for hardware which was previously supported, should be treated as an
+ ABI change.
+
+.. note::
+
+ In 2019, the DPDK community stated its intention to move to ABI stable
+ releases, over a number of release cycles. This change begins with
+ maintaining ABI stability through one year of DPDK releases starting from
+ DPDK 19.11. This policy will be reviewed in 2020, with intention of
+ lengthening the stability period.
+
+What is an ABI?
+~~~~~~~~~~~~~~~
An ABI (Application Binary Interface) is the set of runtime interfaces exposed
by a library. It is similar to an API (Application Programming Interface) but
@@ -37,30 +50,82 @@ Therefore, in the case of dynamic linking, it is critical that an ABI is
preserved, or (when modified), done in such a way that the application is unable
to behave improperly or in an unexpected fashion.
+.. _figure_what_is_an_abi:
+
+.. figure:: img/what_is_an_abi.*
+
+*Figure 1. Illustration of DPDK API and ABI.*
-ABI/API Deprecation
--------------------
+
+What is an ABI version?
+~~~~~~~~~~~~~~~~~~~~~~~
+
+An ABI version is an instance of a library's ABI at a specific release. Certain
+releases are considered to be milestone releases, the yearly LTS release for
+example. The ABI of a milestone release may be designated as a 'major ABI
+version', where this ABI version is then supported for some number of subsequent
+releases and is annotated in the library's :ref:`soname<what_is_soname>`.
+
+ABI version support in subsequent releases facilitates application upgrades, by
+enabling applications built against the milestone release to upgrade to
+subsequent releases of a library without a rebuild.
+
+More details on major ABI version can be found in the :ref:`ABI versioning
+<major_abi_versions>` guide.
The DPDK ABI policy
-~~~~~~~~~~~~~~~~~~~
+-------------------
+
+A major ABI version is declared every year, aligned with that year's LTS
+release, e.g. v19.11. This ABI version is then supported for one year by all
+subsequent releases within that time period, until the next LTS release, e.g.
+v20.11.
-ABI versions are set at the time of major release labeling, and the ABI may
-change multiple times, without warning, between the last release label and the
-HEAD label of the git tree.
+At the declaration of a major ABI version, major version numbers encoded in
+libraries' sonames are bumped to indicate the new version, with the minor
+version reset to ``0``. For example, ``librte_eal.so.20.3`` would become
+``librte_eal.so.21.0``.
-ABI versions, once released, are available until such time as their
-deprecation has been noted in the Release Notes for at least one major release
-cycle. For example consider the case where the ABI for DPDK 2.0 has been
-shipped and then a decision is made to modify it during the development of
-DPDK 2.1. The decision will be recorded in the Release Notes for the DPDK 2.1
-release and the modification will be made available in the DPDK 2.2 release.
+The ABI may then change multiple times, without warning, between the last major
+ABI version increment and the HEAD label of the git tree, with the condition
+that ABI compatibility with the major ABI version is preserved and therefore
+sonames do not change.
-ABI versions may be deprecated in whole or in part as needed by a given
-update.
+Minor versions are incremented to indicate the release of a new ABI compatible
+DPDK release, typically the DPDK quarterly releases. An example of this might
+be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
+release, following the declaration of the new major ABI version ``20``.
-Some ABI changes may be too significant to reasonably maintain multiple
-versions. In those cases ABI's may be updated without backward compatibility
-being provided. The requirements for doing so are:
+An ABI version is supported in all new releases until the next major ABI version
+is declared. When changing the major ABI version, the release notes will detail
+all ABI changes.
+
+.. _figure_abi_stability_policy:
+
+.. figure:: img/abi_stability_policy.*
+
+*Figure 2. Mapping of new ABI versions and ABI version compatibility to DPDK
+releases.*
+
+.. _abi_changes:
+
+ABI Changes
+~~~~~~~~~~~
+
+The ABI may still change after the declaration of a major ABI version; that is,
+new APIs may still be added and existing APIs may be modified.
+
+.. Warning::
+
+ Note that this policy details the method by which the ABI may be changed,
+ with due regard to preserving compatibility and observing deprecation
+ notices. This process however should not be undertaken lightly, as a general
+ rule ABI stability is extremely important for downstream consumers of DPDK.
+ The ABI should only be changed for significant reasons, such as performance
+ enhancements. ABI breakages due to changes such as reorganizing public
+ structure fields for aesthetic or readability purposes should be avoided.
+
+The requirements for changing the ABI are:
#. At least 3 acknowledgments of the need to do so must be made on the
dpdk.org mailing list.
@@ -69,34 +134,119 @@ being provided. The requirements for doing so are:
no maintainer is available for the component, the tree/sub-tree maintainer
for that component must acknowledge the ABI change instead.
+ - The acknowledgment of three members of the technical board, as delegates
+ of the `technical board <https://core.dpdk.org/techboard/>`_ acknowledging
+ the need for the ABI change, is also mandatory.
+
- It is also recommended that acknowledgments from different "areas of
interest" be sought for each deprecation, for example: from NIC vendors,
CPU vendors, end-users, etc.
-#. The changes (including an alternative map file) can be included with
- deprecation notice, in wrapped way by the ``RTE_NEXT_ABI`` option,
- to provide more details about oncoming changes.
- ``RTE_NEXT_ABI`` wrapper will be removed when it become the default ABI.
- More preferred way to provide this information is sending the feature
- as a separate patch and reference it in deprecation notice.
+#. Backward compatibility with the major ABI version must be maintained through
+ :ref:`abi_versioning`, with :ref:`forward-only <forward-only>` compatibility
+ offered for any ABI changes that are indicated to be part of the next ABI
+ version.
-#. A full deprecation cycle, as explained above, must be made to offer
- downstream consumers sufficient warning of the change.
+ - In situations where backward compatibility is not possible, read the
+ section on :ref:`abi_breakages`.
-Note that the above process for ABI deprecation should not be undertaken
-lightly. ABI stability is extremely important for downstream consumers of the
-DPDK, especially when distributed in shared object form. Every effort should
-be made to preserve the ABI whenever possible. The ABI should only be changed
-for significant reasons, such as performance enhancements. ABI breakage due to
-changes such as reorganizing public structure fields for aesthetic or
-readability purposes should be avoided.
+ - No backward or forward compatibility is offered for API changes marked as
+ ``experimental``, as described in the section on :ref:`Experimental APIs
+ and Libraries <experimental_apis>`.
-.. note::
+#. If a newly proposed API functionally replaces an existing one, when the new
+ API becomes non-experimental, then the old one is marked with
+ ``__rte_deprecated``.
+
+ - The deprecated API should follow the notification process to be removed,
+ see :ref:`deprecation_notices`.
+
+ - At the declaration of the next major ABI version, those ABI changes then
+ become a formal part of the new ABI and the requirement to preserve ABI
+ compatibility with the last major ABI version is then dropped.
+
+ - The responsibility for removing redundant ABI compatibility code rests
+ with the original contributor of the ABI changes, failing that, then with
+ the contributor's company and then finally with the maintainer.
+
+.. _forward-only:
+
+.. Note::
+
+ Note that forward-only compatibility is offered for those changes made
+ between major ABI versions. As a library's soname can only describe
+ compatibility with the last major ABI version, until the next major ABI
+ version is declared, these changes therefore cannot be resolved as a runtime
+ dependency through the soname. Therefore any application wishing to make use
+ of these ABI changes can only ensure that its runtime dependencies are met
+ through Operating System package versioning.
+
+.. _hw_rqmts:
+
+.. Note::
Updates to the minimum hardware requirements, which drop support for hardware
which was previously supported, should be treated as an ABI change, and
- follow the relevant deprecation policy procedures as above: 3 acks and
- announcement at least one release in advance.
+ follow the relevant deprecation policy procedures as above: 3 acks, technical
+ board approval and announcement at least one release in advance.
+
+.. _abi_breakages:
+
+ABI Breakages
+~~~~~~~~~~~~~
+
+For those ABI changes that are too significant to reasonably maintain multiple
+symbol versions, there is an amended process. In these cases, ABIs may be
+updated without the requirement of backward compatibility being provided. These
+changes must follow the `same process :ref:`described above <abi_changes>` as non-breaking
+changes, however with the following additional requirements:
+
+#. ABI breaking changes (including an alternative map file) can be included with
+ deprecation notice, in wrapped way by the ``RTE_NEXT_ABI`` option, to provide
+ more details about oncoming changes. ``RTE_NEXT_ABI`` wrapper will be removed
+ at the declaration of the next major ABI version.
+
+#. Once approved, and after the deprecation notice has been observed these
+ changes will form part of the next declared major ABI version.
+
+Examples of ABI Changes
+~~~~~~~~~~~~~~~~~~~~~~~
+
+The following are examples of allowable ABI changes occurring between
+declarations of major ABI versions.
+
+* The DPDK 19.11 release defines the function ``rte_foo()``, which forms part
+ of the major ABI version ``20``.
+
+* DPDK 20.02 release defines a new function ``rte_foo(uint8_t bar)``, and
+ this is not a problem as long as the symbol ``rte_foo@DPDK20`` is
+ preserved through :ref:`abi_versioning`.
+
+ - The new function may be marked with the ``__rte_experimental`` tag for a
+ number of releases, as described in the section :ref:`experimental_apis`.
+
+ - Once ``rte_foo(uint8_t bar)`` becomes non-experimental, ``rte_foo()`` is
+ then declared as ``__rte_deprecated``, with an associated deprecation notice
+ provided.
+
+* DPDK 19.11 is not re-released to include ``rte_foo(uint8_t bar)``, the new
+ version of ``rte_foo`` only exists from DPDK 20.02 onwards as described in the
+ :ref:`note on forward-only compatibility<forward-only>`.
+
+* DPDK 20.02 release defines the experimental function ``__rte_experimental
+ rte_baz()``. This function may or may not exist in the DPDK 20.05 release.
+
+* An application ``dPacket`` wishes to use ``rte_foo(uint8_t bar)``, before the
+ declaration of the DPDK ``21`` major ABI version. The application can only
+ ensure its runtime dependencies are met by specifying ``DPDK (>= 20.2)`` as
+ an explicit package dependency, as the soname may only indicate the
+ supported major ABI version.
+
+* At the release of DPDK 20.11, the function ``rte_foo(uint8_t bar)`` becomes
+ formally part of the new major ABI version DPDK 21.0 and ``rte_foo()`` may be
+ removed.
+
+.. _deprecation_notices:
Examples of Deprecation Notices
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -104,46 +254,42 @@ Examples of Deprecation Notices
The following are some examples of ABI deprecation notices which would be
added to the Release Notes:
-* The Macro ``#RTE_FOO`` is deprecated and will be removed with version 2.0,
- to be replaced with the inline function ``rte_foo()``.
+* The Macro ``#RTE_FOO`` is deprecated and will be removed with ABI version
+ 21, to be replaced with the inline function ``rte_foo()``.
* The function ``rte_mbuf_grok()`` has been updated to include a new parameter
- in version 2.0. Backwards compatibility will be maintained for this function
- until the release of version 2.1
+ in version 20.2. Backwards compatibility will be maintained for this function
+ until the release of the new DPDK major ABI version 21, in DPDK version
+ 20.11.
-* The members of ``struct rte_foo`` have been reorganized in release 2.0 for
+* The members of ``struct rte_foo`` have been reorganized in DPDK 20.02 for
performance reasons. Existing binary applications will have backwards
- compatibility in release 2.0, while newly built binaries will need to
- reference the new structure variant ``struct rte_foo2``. Compatibility will
- be removed in release 2.2, and all applications will require updating and
+ compatibility in release 20.02, while newly built binaries will need to
+ reference the new structure variant ``struct rte_foo2``. Compatibility will be
+ removed in release 20.11, and all applications will require updating and
rebuilding to the new structure at that time, which will be renamed to the
original ``struct rte_foo``.
* Significant ABI changes are planned for the ``librte_dostuff`` library. The
- upcoming release 2.0 will not contain these changes, but release 2.1 will,
+ upcoming release 20.02 will not contain these changes, but release 20.11 will,
and no backwards compatibility is planned due to the extensive nature of
- these changes. Binaries using this library built prior to version 2.1 will
+ these changes. Binaries using this library built prior to ABI version 21 will
require updating and recompilation.
-New API replacing previous one
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If a new API proposed functionally replaces an existing one, when the new API
-becomes non-experimental then the old one is marked with ``__rte_deprecated``.
-Deprecated APIs are removed completely just after the next LTS.
+.. _experimental_apis:
-Reminder that old API should follow deprecation process to be removed.
+Experimental
+------------
+APIs
+~~~~
-Experimental APIs
------------------
-
-APIs marked as ``experimental`` are not considered part of the ABI and may
-change without warning at any time. Since changes to APIs are most likely
-immediately after their introduction, as users begin to take advantage of
-those new APIs and start finding issues with them, new DPDK APIs will be
-automatically marked as ``experimental`` to allow for a period of stabilization
-before they become part of a tracked ABI.
+APIs marked as ``experimental`` are not considered part of an ABI version and
+may change without warning at any time. Since changes to APIs are most likely
+immediately after their introduction, as users begin to take advantage of those
+new APIs and start finding issues with them, new DPDK APIs will be automatically
+marked as ``experimental`` to allow for a period of stabilization before they
+become part of a tracked ABI version.
Note that marking an API as experimental is a multi step process.
To mark an API as experimental, the symbols which are desired to be exported
@@ -161,7 +307,16 @@ In addition to tagging the code with ``__rte_experimental``,
the doxygen markup must also contain the EXPERIMENTAL string,
and the MAINTAINERS file should note the EXPERIMENTAL libraries.
-For removing the experimental tag associated with an API, deprecation notice
-is not required. Though, an API should remain in experimental state for at least
-one release. Thereafter, normal process of posting patch for review to mailing
-list can be followed.
+For removing the experimental tag associated with an API, a deprecation notice
+is not required. However, an API should remain in the experimental state for at
+least one release. Thereafter, the normal process of posting a patch for review
+to the mailing list can be followed.
+
+Libraries
+~~~~~~~~~
+
+Libraries marked as ``experimental`` are not considered part of an ABI version
+at all, and may change without warning at any time. Experimental libraries
+always have a major version of ``0`` to indicate they exist outside of
+:ref:`abi_versioning`, with the minor version incremented with each ABI change
+to the library.
diff --git a/doc/guides/contributing/img/abi_stability_policy.svg b/doc/guides/contributing/img/abi_stability_policy.svg
new file mode 100644
index 0000000..4fd4007
--- /dev/null
+++ b/doc/guides/contributing/img/abi_stability_policy.svg
@@ -0,0 +1,1059 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="1237.4869"
+ height="481.37463"
+ version="1.1"
+ viewBox="0 0 1237.4869 481.37463"
+ xml:space="preserve"
+ id="svg7800"
+ sodipodi:docname="abi_stability_policy.svg"
+ inkscape:version="0.92.2 (5c3e80d, 2017-08-06)"><metadata
+ id="metadata7804"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview7802"
+ showgrid="false"
+ inkscape:zoom="0.8875"
+ inkscape:cx="840.50495"
+ inkscape:cy="179.36692"
+ inkscape:window-x="-8"
+ inkscape:window-y="-8"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="svg7800" /><defs
+ id="defs7394"><clipPath
+ id="clipPath3975"><path
+ d="M 0,1.2207e-4 H 960 V 540.00012 H 0 Z"
+ id="path7226"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4003"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7229"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4025"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7232"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4037"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7235"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4049"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7238"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4061"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7241"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4073"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7244"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4085"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7247"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4097"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7250"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4109"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7253"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4121"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7256"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4133"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7259"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4145"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7262"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4157"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7265"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4169"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7268"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4181"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7271"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4193"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7274"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4205"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7277"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4217"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7280"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4229"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7283"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4241"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7286"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4253"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7289"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4265"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7292"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4277"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7295"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4289"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7298"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4301"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7301"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4313"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7304"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4327"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7307"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4339"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7310"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4351"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7313"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4363"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7316"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4375"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7319"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4389"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7322"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4403"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7325"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4417"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7328"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4429"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7331"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4441"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7334"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4453"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7337"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4477"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7340"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4489"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7343"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4501"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7346"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4513"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7349"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4525"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7352"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4537"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7355"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4549"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7358"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4561"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7361"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4573"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7364"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4589"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7367"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4601"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7370"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4615"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7373"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4629"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7376"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4641"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7379"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4653"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7382"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4673"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7385"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4685"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7388"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath4699"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path7391"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath></defs><g
+ id="g7406"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><path
+ style="fill:#44546a"
+ inkscape:connector-curvature="0"
+ id="path7400"
+ d="m 161.83,180.57 773.79,4.78 c 0.82,0.01 1.49,0.68 1.49,1.51 -0.01,0.83 -0.68,1.5 -1.51,1.49 l -773.79,-4.78 c -0.83,-0.01 -1.5,-0.68 -1.49,-1.51 0.01,-0.83 0.68,-1.5 1.51,-1.49 z m 772.3,1.77 8.97,4.56 -9.03,4.44 z" /><path
+ style="fill:#00b050;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path7402"
+ d="m 173.28,182.22 c 0,4.67 3.36,8.46 7.5,8.46 4.14,0 7.5,-3.79 7.5,-8.46 0,-4.67 -3.36,-8.46 -7.5,-8.46 -4.14,0 -7.5,3.79 -7.5,8.46 z" /><path
+ style="fill:#00b050;fill-rule:evenodd"
+ inkscape:connector-curvature="0"
+ id="path7404"
+ d="m 612.24,183.78 c 0,4.67 3.36,8.46 7.5,8.46 4.14,0 7.5,-3.79 7.5,-8.46 0,-4.67 -3.36,-8.46 -7.5,-8.46 -4.14,0 -7.5,3.79 -7.5,8.46 z" /></g><g
+ style="fill:#ff0000;fill-rule:evenodd"
+ id="g7420"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><path
+ inkscape:connector-curvature="0"
+ id="path7408"
+ d="m 228.12,182.22 c 0,4.67 3.36,8.46 7.5,8.46 4.14,0 7.5,-3.79 7.5,-8.46 0,-4.67 -3.36,-8.46 -7.5,-8.46 -4.14,0 -7.5,3.79 -7.5,8.46 z" /><path
+ inkscape:connector-curvature="0"
+ id="path7410"
+ d="m 282.96,182.22 c 0,4.67 3.36,8.46 7.5,8.46 4.14,0 7.5,-3.79 7.5,-8.46 0,-4.67 -3.36,-8.46 -7.5,-8.46 -4.14,0 -7.5,3.79 -7.5,8.46 z" /><path
+ inkscape:connector-curvature="0"
+ id="path7412"
+ d="m 337.8,182.22 c 0,4.67 3.38,8.46 7.56,8.46 4.18,0 7.56,-3.79 7.56,-8.46 0,-4.67 -3.38,-8.46 -7.56,-8.46 -4.18,0 -7.56,3.79 -7.56,8.46 z" /><path
+ inkscape:connector-curvature="0"
+ id="path7414"
+ d="m 447.6,182.22 c 0,4.67 3.36,8.46 7.5,8.46 4.14,0 7.5,-3.79 7.5,-8.46 0,-4.67 -3.36,-8.46 -7.5,-8.46 -4.14,0 -7.5,3.79 -7.5,8.46 z" /><path
+ inkscape:connector-curvature="0"
+ id="path7416"
+ d="m 502.44,182.34 c 0,4.67 3.38,8.46 7.56,8.46 4.18,0 7.56,-3.79 7.56,-8.46 0,-4.67 -3.38,-8.46 -7.56,-8.46 -4.18,0 -7.56,3.79 -7.56,8.46 z" /><path
+ inkscape:connector-curvature="0"
+ id="path7418"
+ d="m 557.28,182.34 c 0,4.67 3.38,8.46 7.56,8.46 4.18,0 7.56,-3.79 7.56,-8.46 0,-4.67 -3.38,-8.46 -7.56,-8.46 -4.18,0 -7.56,3.79 -7.56,8.46 z" /></g><g
+ id="g7426"
+ clip-path="url(#clipPath4003)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7424"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,152.98,149.45)"><tspan
+ id="tspan7422"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 24.10668 31.22496">v19.11</tspan></text>
+</g><path
+ style="fill:#00b050;fill-rule:evenodd;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7428"
+ d="m 499.42541,379.9105 c 0,-6.22651 4.47989,-11.27972 9.99975,-11.27972 5.51986,0 9.99975,5.05321 9.99975,11.27972 0,6.22651 -4.47989,11.27972 -9.99975,11.27972 -5.51986,0 -9.99975,-5.05321 -9.99975,-11.27972 z" /><path
+ style="fill:#00b050;fill-rule:evenodd;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7430"
+ d="m 1084.6908,373.67065 c 0,-6.22651 4.4799,-11.27971 9.9997,-11.27971 5.5199,0 9.9998,5.0532 9.9998,11.27971 0,6.22652 -4.4799,11.27972 -9.9998,11.27972 -5.5198,0 -9.9997,-5.0532 -9.9997,-11.27972 z" /><g
+ style="fill:#ff0000;fill-rule:evenodd"
+ id="g7438"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><path
+ inkscape:connector-curvature="0"
+ id="path7432"
+ d="m 667.08,185.4 c 0,4.64 3.36,8.4 7.5,8.4 4.14,0 7.5,-3.76 7.5,-8.4 0,-4.64 -3.36,-8.4 -7.5,-8.4 -4.14,0 -7.5,3.76 -7.5,8.4 z" /><path
+ inkscape:connector-curvature="0"
+ id="path7434"
+ d="m 721.92,185.58 c 0,4.67 3.38,8.46 7.56,8.46 4.18,0 7.56,-3.79 7.56,-8.46 0,-4.67 -3.38,-8.46 -7.56,-8.46 -4.18,0 -7.56,3.79 -7.56,8.46 z" /><path
+ inkscape:connector-curvature="0"
+ id="path7436"
+ d="m 776.76,185.58 c 0,4.67 3.38,8.46 7.56,8.46 4.18,0 7.56,-3.79 7.56,-8.46 0,-4.67 -3.38,-8.46 -7.56,-8.46 -4.18,0 -7.56,3.79 -7.56,8.46 z" /></g><g
+ id="g7444"
+ clip-path="url(#clipPath4025)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7442"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,210.14,150.1)"><tspan
+ id="tspan7440"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 24.10668 31.22496">v20.02</tspan></text>
+</g><g
+ id="g7450"
+ clip-path="url(#clipPath4037)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7448"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,265.01,150.1)"><tspan
+ id="tspan7446"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 24.10668 31.22496">v20.05</tspan></text>
+</g><g
+ id="g7456"
+ clip-path="url(#clipPath4049)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7454"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,319.9,150.77)"><tspan
+ id="tspan7452"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 24.10668 31.22496">v20.08</tspan></text>
+</g><g
+ id="g7462"
+ clip-path="url(#clipPath4061)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7460"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,375,150.94)"><tspan
+ id="tspan7458"
+ y="0"
+ x="0 7.9180322 14.992224 22.066416 25.652737 32.726929">V20.11</tspan></text>
+</g><g
+ id="g7468"
+ clip-path="url(#clipPath4073)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7466"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,429.17,150.94)"><tspan
+ id="tspan7464"
+ y="0"
+ x="0 6.3569279 13.445184 20.519377 24.105696 31.179888">v21.02</tspan></text>
+</g><g
+ id="g7474"
+ clip-path="url(#clipPath4085)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7472"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,483,150.55)"><tspan
+ id="tspan7470"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 24.10668 31.22496">v21.05</tspan></text>
+</g><g
+ id="g7480"
+ clip-path="url(#clipPath4097)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7478"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,537.38,150.82)"><tspan
+ id="tspan7476"
+ y="0"
+ x="0 6.3569279 13.445184 20.519377 24.105696 31.179888">v21.08</tspan></text>
+</g><g
+ id="g7486"
+ clip-path="url(#clipPath4109)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7484"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,592.27,150.82)"><tspan
+ id="tspan7482"
+ y="0"
+ x="0 6.3569279 13.445184 20.519377 24.105696 31.179888">v21.11</tspan></text>
+</g><g
+ id="g7492"
+ clip-path="url(#clipPath4121)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7490"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,647.14,151.46)"><tspan
+ id="tspan7488"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 24.10668 31.22496">v22.02</tspan></text>
+</g><g
+ id="g7498"
+ clip-path="url(#clipPath4133)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7496"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,702.24,151.63)"><tspan
+ id="tspan7494"
+ y="0"
+ x="0 7.96068 14.99472 22.113001 25.651079 32.76936">V22.05</tspan></text>
+</g><g
+ id="g7504"
+ clip-path="url(#clipPath4145)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7502"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,756.43,151.63)"><tspan
+ id="tspan7500"
+ y="0"
+ x="0 7.96068 14.99472 22.113001 25.651079 32.76936">V22.08</tspan></text>
+</g><g
+ id="g7510"
+ clip-path="url(#clipPath4157)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7508"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,811.99,151.63)"><tspan
+ id="tspan7506"
+ y="0"
+ x="0 7.96068 14.99472 22.113001 25.651079 32.76936">V22.11</tspan></text>
+</g><g
+ id="g7516"
+ clip-path="url(#clipPath4169)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7514"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,105.82,214.18)"><tspan
+ id="tspan7512"
+ y="0"
+ x="0 6.3460798 13.46436">v20</tspan></text>
+</g><g
+ id="g7522"
+ clip-path="url(#clipPath4181)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7520"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,105.5,247.68)"><tspan
+ id="tspan7518"
+ y="0"
+ x="0 6.3569279 13.445184">v21</tspan></text>
+</g><g
+ id="g7528"
+ clip-path="url(#clipPath4193)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7526"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,228.79,214.51)"><tspan
+ id="tspan7524"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7534"
+ clip-path="url(#clipPath4205)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7532"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,283.8,214.51)"><tspan
+ id="tspan7530"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7540"
+ clip-path="url(#clipPath4217)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7538"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,337.68,214.51)"><tspan
+ id="tspan7536"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7546"
+ clip-path="url(#clipPath4229)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7544"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,611.66,285.79)"><tspan
+ id="tspan7542"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7552"
+ clip-path="url(#clipPath4241)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7550"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,666.65,285.79)"><tspan
+ id="tspan7548"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7558"
+ clip-path="url(#clipPath4253)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7556"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,719.4,285.79)"><tspan
+ id="tspan7554"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7564"
+ clip-path="url(#clipPath4265)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7562"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,775.56,285.79)"><tspan
+ id="tspan7560"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7570"
+ clip-path="url(#clipPath4277)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7568"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,398.54,249.22)"><tspan
+ id="tspan7566"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7576"
+ clip-path="url(#clipPath4289)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7574"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,453.53,249.22)"><tspan
+ id="tspan7572"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7582"
+ clip-path="url(#clipPath4301)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7580"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,507.43,249.22)"><tspan
+ id="tspan7578"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7588"
+ clip-path="url(#clipPath4313)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7586"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,561.05,249.22)"><tspan
+ id="tspan7584"
+ y="0"
+ x="0">√</tspan></text>
+</g><path
+ style="fill:#44546a;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7590"
+ d="m 217.67245,474.73479 v -25.14603 c 0,-1.10664 -0.89331,-1.99995 -1.99995,-1.99995 -1.10664,0 -1.99995,0.89331 -1.99995,1.99995 v 25.14603 c 0,1.09331 0.89331,1.99995 1.99995,1.99995 1.10664,0 1.99995,-0.90664 1.99995,-1.99995 z m 3.9999,-23.14608 -5.99985,-11.9997 -5.99985,11.9997 z" /><g
+ id="g7596"
+ clip-path="url(#clipPath4327)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7594"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,170.83,214.51)"><tspan
+ id="tspan7592"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7602"
+ clip-path="url(#clipPath4339)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-weight:bold;font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7600"
+ font-weight="bold"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,23.4,272.33)"><tspan
+ id="tspan7598"
+ y="0"
+ x="0 8.5227842 16.412687 20.167776">ABI </tspan></text>
+</g><g
+ id="g7608"
+ clip-path="url(#clipPath4351)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-weight:bold;font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7606"
+ font-weight="bold"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,46.68,272.33)"><tspan
+ id="tspan7604"
+ y="0"
+ x="0 7.566432 14.640624 19.563025 25.174561 28.662432 36.228863">Version</tspan></text>
+</g><g
+ id="g7614"
+ clip-path="url(#clipPath4363)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-weight:bold;font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7612"
+ font-weight="bold"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,17.64,255.5)"><tspan
+ id="tspan7610"
+ y="0"
+ x="0 7.4271598 14.98068 26.395201 33.934681 40.80024 45.700199 49.154041 56.7216 60.175442 63.671398 67.125237 72.053284">Compatibility</tspan></text>
+</g><g
+ id="g7620"
+ clip-path="url(#clipPath4375)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7618"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,191.28,116.86)"><tspan
+ id="tspan7616"
+ y="0"
+ x="0 6.3460798 13.46436">v20</tspan></text>
+</g><path
+ style="fill:#44546a;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7622"
+ d="m 511.7451,474.89479 v -25.14604 c 0,-1.10664 -0.89331,-1.99995 -1.99995,-1.99995 -1.10664,0 -1.99995,0.89331 -1.99995,1.99995 v 25.14604 c 0,1.09331 0.89331,1.99995 1.99995,1.99995 1.10664,0 1.99995,-0.90664 1.99995,-1.99995 z m 3.9999,-23.14609 -5.99985,-11.9997 -5.99985,11.9997 z" /><g
+ id="g7628"
+ clip-path="url(#clipPath4389)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7626"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,407.06,115.63)"><tspan
+ id="tspan7624"
+ y="0"
+ x="0 6.3460798 13.46436">v21</tspan></text>
+</g><path
+ style="fill:#44546a;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7630"
+ d="m 804.53778,476.01476 v -25.14604 c 0,-1.10664 -0.89331,-1.99995 -1.99995,-1.99995 -1.10664,0 -1.99995,0.89331 -1.99995,1.99995 v 25.14604 c 0,1.09331 0.89331,1.99995 1.99995,1.99995 1.10664,0 1.99995,-0.90664 1.99995,-1.99995 z m 3.9999,-23.14609 -5.99985,-11.9997 -5.99985,11.9997 z" /><g
+ id="g7636"
+ clip-path="url(#clipPath4403)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7634"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,626.66,114.74)"><tspan
+ id="tspan7632"
+ y="0"
+ x="0 6.3460798 13.46436">v22</tspan></text>
+</g><path
+ style="fill:#44546a;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7638"
+ d="m 1098.2904,479.37468 v -25.14604 c 0,-1.10664 -0.8933,-1.99995 -1.9999,-1.99995 -1.1067,0 -2,0.89331 -2,1.99995 v 25.14604 c 0,1.0933 0.8933,1.99995 2,1.99995 1.1066,0 1.9999,-0.90665 1.9999,-1.99995 z m 3.9999,-23.14609 -5.9998,-11.9997 -5.9999,11.9997 z" /><g
+ id="g7644"
+ clip-path="url(#clipPath4417)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7642"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,846.96,112.22)"><tspan
+ id="tspan7640"
+ y="0"
+ x="0 6.3569279 13.445184">v23</tspan></text>
+</g><g
+ id="g7650"
+ clip-path="url(#clipPath4429)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7648"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,832.87,318.46)"><tspan
+ id="tspan7646"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7656"
+ clip-path="url(#clipPath4441)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7654"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,105.5,285.67)"><tspan
+ id="tspan7652"
+ y="0"
+ x="0 6.3460798 13.46436">v22</tspan></text>
+</g><g
+ id="g7662"
+ clip-path="url(#clipPath4453)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7660"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,104.93,319.87)"><tspan
+ id="tspan7658"
+ y="0"
+ x="0 6.3460798 13.46436">v23</tspan></text>
+</g><path
+ style="fill:none;stroke:#5b9bd5;stroke-width:0.63998401;stroke-miterlimit:10;stroke-dasharray:2.559936, 1.919952"
+ inkscape:connector-curvature="0"
+ id="path7664"
+ stroke-miterlimit="10"
+ d="m 1104.7569,213.75465 -934.60326,0.39999" /><path
+ style="fill:none;stroke:#5b9bd5;stroke-width:0.63998401;stroke-miterlimit:10;stroke-dasharray:2.559936, 1.919952"
+ inkscape:connector-curvature="0"
+ id="path7666"
+ stroke-miterlimit="10"
+ d="M 1105.3969,255.35361 170.79362,255.7536" /><path
+ style="fill:none;stroke:#5b9bd5;stroke-width:0.63998401;stroke-miterlimit:10;stroke-dasharray:2.559936, 1.919952"
+ inkscape:connector-curvature="0"
+ id="path7668"
+ stroke-miterlimit="10"
+ d="M 1105.3969,299.35251 170.79362,299.7525" /><g
+ id="g7674"
+ clip-path="url(#clipPath4477)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.06400013px;font-family:Calibri;fill:#8497b0"
+ id="text7672"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,283.8,251.38)"><tspan
+ id="tspan7670"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7680"
+ clip-path="url(#clipPath4489)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#8497b0"
+ id="text7678"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,339.5,251.95)"><tspan
+ id="tspan7676"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7686"
+ clip-path="url(#clipPath4501)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#d0cece"
+ id="text7684"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,229.8,250.63)"><tspan
+ id="tspan7682"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7692"
+ clip-path="url(#clipPath4513)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#d0cece"
+ id="text7690"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,453.53,286.63)"><tspan
+ id="tspan7688"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7698"
+ clip-path="url(#clipPath4525)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#8497b0"
+ id="text7696"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,507.43,286.63)"><tspan
+ id="tspan7694"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7704"
+ clip-path="url(#clipPath4537)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#8497b0"
+ id="text7702"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,561.05,286.63)"><tspan
+ id="tspan7700"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7710"
+ clip-path="url(#clipPath4549)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#d0cece"
+ id="text7708"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,667.39,318.89)"><tspan
+ id="tspan7706"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7716"
+ clip-path="url(#clipPath4561)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#8497b0"
+ id="text7714"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,720.14,318.89)"><tspan
+ id="tspan7712"
+ y="0"
+ x="0">√</tspan></text>
+</g><g
+ id="g7722"
+ clip-path="url(#clipPath4573)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#8497b0"
+ id="text7720"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,776.3,318.89)"><tspan
+ id="tspan7718"
+ y="0"
+ x="0">√</tspan></text>
+</g><path
+ style="fill:#7030a0;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7724"
+ d="m 211.36594,305.0057 2.18661,-227.154316 c 0.0133,-1.0933 -0.87997,-1.99995 -1.98661,-2.01328 -1.09331,-0.0133 -1.99995,0.87998 -2.01329,1.98662 l -2.18661,227.140986 c -0.0133,1.10663 0.87998,2.01328 1.98662,2.02661 1.10664,0.0133 1.99995,-0.87998 2.01328,-1.98662 z m -7.97313,-2.07994 5.87985,12.06636 6.11985,-11.94637 z" /><path
+ style="fill:#7030a0;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7726"
+ d="M 289.03067,238.94069 V 107.43731 c 0,-1.10664 -0.89331,-1.99995 -1.99995,-1.99995 -1.10664,0 -1.99995,0.89331 -1.99995,1.99995 v 131.50338 c 0,1.09331 0.89331,1.99995 1.99995,1.99995 1.10664,0 1.99995,-0.90664 1.99995,-1.99995 z m -7.9998,-1.99995 5.99985,11.9997 5.99985,-11.9997 z" /><g
+ id="g7732"
+ clip-path="url(#clipPath4589)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7730"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,164.59,422.74)"><tspan
+ id="tspan7728"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 23.75568 31.88484 39.578758 43.06068 46.065239 49.294441 54.784081 57.957119 65.271957 72.263878 78.118561 81.347763 88.072922 92.762283 99.754204 107.04096 110.38248 117.10764 120.33684 123.56604 130.17888 137.50777 144.49968 151.78644 155.12796 165.16656 168.43788 173.14128 180.44208 183.67128 190.01736 197.13564 204.19775 207.77795 214.89624 221.94432 225.17352 229.9752 236.70036">v20 ABI is declared aligned with v19.11 LTS</tspan></text>
+</g><g
+ id="g7738"
+ clip-path="url(#clipPath4601)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.06400013px;font-family:Calibri;fill:#000000"
+ id="text7736"
+ font-size="14.064px"
+ transform="matrix(1,0,0,-1,222.12,398.3)"><tspan
+ id="tspan7734"
+ y="0"
+ x="0 6.3569279 13.445184 20.519377 23.740032 29.014032 35.385025 46.537777 53.851055 61.262783 64.497505 70.038719 73.034355 79.771011 84.440254 91.401939 94.622589 101.35925 108.65846 115.97174 122.93343 130.2467 133.59393 140.3306 147.62981 154.94308 158.16374 164.52068 171.60893 178.68312 181.90378 187.17778 193.54877 204.70152 212.0148 219.42653 222.66125 228.20247 231.32468 238.06133 242.73058 249.69226 252.81447 263.98129 271.39301 278.77661 282.01132 286.30084 289.53555 296.53943 303.82458 307.34061 310.51904 316.0462 323.3595 330.67276 337.98605 345.39777 350.30612 355.01755 358.13977 362.21832 369.63004 374.53839 377.5762 383.93314 391.02139 398.09558 401.2178 409.36084 417.03979 420.51361 423.6358 429.40204 436.81378 444.04266 448.75412 451.98883 459.28806 466.60132 473.56302 479.06204">v21 symbols are added and v20 symbols are modified, support for v20 ABI continues.</tspan></text>
+</g><path
+ style="fill:#7030a0;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7740"
+ d="m 510.78512,258.56686 -0.31999,-126.17017 c 0,-1.09331 0.89331,-1.99995 1.99995,-1.99995 1.09331,0 1.99995,0.89331 1.99995,1.99995 l 0.31999,126.15684 c 0,1.10664 -0.89331,2.01328 -1.99995,2.01328 -1.0933,0 -1.99995,-0.89331 -1.99995,-1.99995 z m 7.9998,-2.01328 -5.97318,12.01303 -6.02652,-11.98636 z" /><g
+ id="g7746"
+ clip-path="url(#clipPath4615)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7744"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,388.51,373.39)"><tspan
+ id="tspan7742"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 23.75568 31.88484 39.578758 43.06068 46.065239 49.294441 54.784081 57.957119 65.271957 72.263878 78.118561 81.347763 88.072922 92.762283 99.754204 107.04096 110.38248 117.10764 120.33684 123.56604 130.17888 137.50777 144.49968 151.78644 155.12796 165.16656 168.43788 173.14128 180.44208 183.67128 190.01736 197.13564 204.19775 207.77795 214.89624 221.94432 225.17352 229.9752 236.70036 243.14471 246.65472 249.78564 254.46095 261.45288 272.58661 279.31177 282.54095 289.86984 293.09903 300.47003 307.02673 310.36823 316.71432 323.83261 330.89471 334.12393 339.40295 345.76309 356.92487 364.23972 371.63879 374.91013 380.39975 383.4324 390.15756 394.83289 401.8248 404.99783 409.71527 416.70721 427.84091 435.23999 441.51587 448.50781 455.79456">v21 ABI is declared aligned with v20.11 LTS, remaining v20 symbols are removed.</tspan></text>
+</g><path
+ style="fill:none;stroke:#7030a0;stroke-width:2.07994795;stroke-miterlimit:10"
+ inkscape:connector-curvature="0"
+ id="path7748"
+ stroke-miterlimit="10"
+ d="M 278.23094,342.95142 H 449.58665 V 261.03347 H 278.23094 Z" /><g
+ id="g7754"
+ clip-path="url(#clipPath4629)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-weight:bold;font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7752"
+ font-weight="bold"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,23.616,114.74)"><tspan
+ id="tspan7750"
+ y="0"
+ x="0 8.5082397 16.4268 20.17548 23.26428 30.817801 37.879921 42.821999 48.423962 51.93396 59.48748 67.026962">ABI Versions</tspan></text>
+</g><g
+ id="g7760"
+ clip-path="url(#clipPath4641)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-weight:bold;font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7758"
+ font-weight="bold"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,20.064,150.17)"><tspan
+ id="tspan7756"
+ y="0"
+ x="0 8.8451996 16.31448 25.159679 32.839561 36.0126 43.67844 50.740559 54.222481 61.284599 68.248444 73.850403 80.954643">DPDK Releases</tspan></text>
+</g><g
+ id="g7766"
+ clip-path="url(#clipPath4653)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7764"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,444,346.1)"><tspan
+ id="tspan7762"
+ y="0"
+ x="0 6.3460798 13.46436 20.52648 23.75568 29.034719 35.39484 46.556641 53.871479 61.270561 64.541878 70.031517 73.064163 79.789322 84.464638 91.456558 94.629601 101.35476 108.72576 116.02656 123.01848 130.30524 133.64676 140.37192 147.68677 155.0016 158.2308 164.57687 171.69516 178.75728 181.98648 187.26552 193.62564 204.78745 212.10228 219.50136 222.77267 228.26231 231.43536 238.16052 242.80775 249.79968 252.88847 264.05029 271.44937 278.82037 282.04956 286.33176 289.60309 296.595 303.88177 307.39175 310.56479 316.1106 323.42545 330.74026 338.05511 345.45419 350.39627 355.09967 358.20251 362.28815 369.68723 374.62933 377.63388 383.97995 391.09824 398.16037 401.27725 409.4064 417.10031 420.58224 423.69913 429.45551 436.85461 444.09924 448.80264 452.03183 459.33264 466.64749 473.6394 479.12903 488.81665 492.43896">v22 symbols are added and v21 symbols are modified, support for v21 ABI continues…..</tspan></text>
+</g><path
+ style="fill:#7030a0;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7768"
+ d="m 583.39664,198.26171 -0.13333,-30.49257 c 0,-1.10664 0.89331,-2.01329 1.98662,-2.01329 1.10664,0 2.01328,0.89331 2.01328,1.98662 l 0.13333,30.49257 c 0,1.10664 -0.89331,2.01328 -1.99995,2.01328 -1.0933,0 -1.99995,-0.89331 -1.99995,-1.98661 z m 7.98647,-2.03995 -5.94652,12.02636 -6.05318,-11.97303 z" /><path
+ style="fill:none;stroke:#7030a0;stroke-width:2.07994795;stroke-miterlimit:10"
+ inkscape:connector-curvature="0"
+ id="path7770"
+ stroke-miterlimit="10"
+ d="M 571.18361,299.43251 H 742.37933 V 212.87467 H 571.18361 Z" /><path
+ style="fill:#00b050;fill-rule:evenodd;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7772"
+ d="m 933.01457,30.959224 c 0,-6.22651 4.50655,-11.27972 10.07975,-11.27972 5.57319,0 10.07974,5.05321 10.07974,11.27972 0,6.22651 -4.50655,11.27972 -10.07974,11.27972 -5.5732,0 -10.07975,-5.05321 -10.07975,-11.27972 z" /><path
+ style="fill:#ff0000;fill-rule:evenodd;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7774"
+ d="m 1081.3309,29.759254 c 0,-6.18651 4.4798,-11.19972 9.9997,-11.19972 5.5199,0 9.9998,5.01321 9.9998,11.19972 0,6.18651 -4.4799,11.19972 -9.9998,11.19972 -5.5199,0 -9.9997,-5.01321 -9.9997,-11.19972 z" /><g
+ id="g7780"
+ clip-path="url(#clipPath4673)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7778"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,744.89,439.54)"><tspan
+ id="tspan7776"
+ y="0"
+ x="0 4.8016801 11.52684 17.971201 21.144239 28.5714 35.56332 38.792519 45.728279 52.453442 57.943081">LTS Release</tspan></text>
+</g><g
+ id="g7786"
+ clip-path="url(#clipPath4685)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7784"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,856.06,439.75)"><tspan
+ id="tspan7782"
+ y="0"
+ x="0 12.0042 15.2334 22.562281 29.961361 34.903439 38.020321 45.461521 52.453442 55.68264 62.618401 69.343559 74.833199">Minor Release</tspan></text>
+</g><path
+ style="fill:#44546a;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path7788"
+ d="m 779.25841,46.265514 v -25.14604 c 0,-1.10664 -0.89331,-1.99995 -1.99995,-1.99995 -1.10664,0 -1.99995,0.89331 -1.99995,1.99995 v 25.14604 c 0,1.0933 0.89331,1.99995 1.99995,1.99995 1.10664,0 1.99995,-0.90665 1.99995,-1.99995 z m 3.9999,-23.14609 -5.99985,-11.9997 -5.99985,11.9997 z" /><g
+ id="g7794"
+ clip-path="url(#clipPath4699)"
+ transform="matrix(1.3333,0,0,-1.3333,-24.241503,623.02442)"><text
+ style="font-size:14.03999996px;font-family:Calibri;fill:#000000"
+ id="text7792"
+ font-size="14.04px"
+ transform="matrix(1,0,0,-1,622.34,439.54)"><tspan
+ id="tspan7790"
+ y="0"
+ x="0 8.1291599 15.82308 19.305 22.309561 29.512079 36.504002 41.151241 46.640881 49.870079 57.339359">ABI Version</tspan></text>
+</g><path
+ style="fill:none;stroke:#002060;stroke-width:1.27996802;stroke-miterlimit:10"
+ inkscape:connector-curvature="0"
+ id="path7796"
+ stroke-miterlimit="10"
+ d="M 763.41881,62.078444 H 1236.847 V 0.63998401 H 763.41881 Z" /></svg>
\ No newline at end of file
diff --git a/doc/guides/contributing/img/what_is_an_abi.svg b/doc/guides/contributing/img/what_is_an_abi.svg
new file mode 100644
index 0000000..fd3d993
--- /dev/null
+++ b/doc/guides/contributing/img/what_is_an_abi.svg
@@ -0,0 +1,382 @@
+<?xml version="1.0" encoding="UTF-8" standalone="no"?>
+<svg
+ xmlns:dc="http://purl.org/dc/elements/1.1/"
+ xmlns:cc="http://creativecommons.org/ns#"
+ xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
+ xmlns:svg="http://www.w3.org/2000/svg"
+ xmlns="http://www.w3.org/2000/svg"
+ xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
+ xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
+ width="970.69568"
+ height="522.22693"
+ version="1.1"
+ viewBox="0 0 970.69568 522.22693"
+ xml:space="preserve"
+ id="svg8399"
+ sodipodi:docname="what_is_an_abi.svg"
+ inkscape:version="0.92.2 (5c3e80d, 2017-08-06)"><metadata
+ id="metadata8403"><rdf:RDF><cc:Work
+ rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
+ rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><sodipodi:namedview
+ pagecolor="#ffffff"
+ bordercolor="#666666"
+ borderopacity="1"
+ objecttolerance="10"
+ gridtolerance="10"
+ guidetolerance="10"
+ inkscape:pageopacity="0"
+ inkscape:pageshadow="2"
+ inkscape:window-width="1920"
+ inkscape:window-height="1017"
+ id="namedview8401"
+ showgrid="false"
+ inkscape:zoom="0.62755727"
+ inkscape:cx="820.83951"
+ inkscape:cy="-47.473217"
+ inkscape:window-x="-8"
+ inkscape:window-y="-8"
+ inkscape:window-maximized="1"
+ inkscape:current-layer="svg8399" /><defs
+ id="defs8269"><clipPath
+ id="clipPath26"><path
+ d="M 0,1.2207e-4 H 960 V 540.00012 H 0 Z"
+ id="path8206"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><radialGradient
+ id="radialGradient40"
+ cx="0"
+ cy="0"
+ r="1"
+ gradientTransform="matrix(386.44367,-1.3123672e-5,-1.3123672e-5,-386.44367,470.30824,246.15384)"
+ gradientUnits="userSpaceOnUse"><stop
+ stop-color="#f9d8e2"
+ offset="0"
+ id="stop8209" /><stop
+ stop-color="#fff"
+ offset=".74"
+ id="stop8211" /><stop
+ stop-color="#fff"
+ offset=".83"
+ id="stop8213" /><stop
+ stop-color="#fff"
+ offset="1"
+ id="stop8215" /></radialGradient><clipPath
+ id="clipPath56"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8218"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath68"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8221"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath82"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8224"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath96"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8227"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath108"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8230"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath120"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8233"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath132"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8236"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath144"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8239"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath156"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8242"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath168"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8245"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath180"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8248"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath192"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8251"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath204"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8254"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath216"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8257"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath228"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8260"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath240"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8263"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath><clipPath
+ id="clipPath260"><path
+ d="M 1.4305e-5,0 H 960.00001 V 540 H 1.4305e-5 Z"
+ id="path8266"
+ inkscape:connector-curvature="0"
+ style="clip-rule:evenodd" /></clipPath></defs><path
+ inkscape:connector-curvature="0"
+ style="fill:url(#radialGradient40);fill-rule:evenodd;stroke-width:1.33329999"
+ id="path8275"
+ d="m 116.15709,143.06309 c 0,-28.46596 23.07942,-51.545378 51.54538,-51.545378 h 605.21154 c 28.46595,0 51.54537,23.079418 51.54537,51.545378 V 349.2446 c 0,28.46595 -23.07942,51.54538 -51.54537,51.54538 H 167.70247 c -28.46595,0 -51.54538,-23.07943 -51.54538,-51.54538 z" /><path
+ style="fill:#00b050;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path8277"
+ d="m 478.70803,73.758152 0.58665,373.057338 c 0,1.67996 -1.35997,3.03993 -3.03992,3.03993 -1.67996,0.0133 -3.03993,-1.34663 -3.03993,-3.02659 L 472.62818,73.758152 c 0,-1.67995 1.35997,-3.03992 3.03992,-3.03992 1.67996,0 3.03993,1.35997 3.03993,3.03992 z m 6.65317,370.004088 -9.09311,18.25287 -9.14644,-18.22621 z" /><path
+ style="fill:none;stroke:#7030a0;stroke-width:6.07984781;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10"
+ inkscape:connector-curvature="0"
+ id="path8279"
+ stroke-miterlimit="10"
+ d="m 3.0399239,186.92866 c 0,-36.70575 29.7459201,-66.45167 66.4516701,-66.45167 H 778.00721 c 36.70575,0 66.45167,29.74592 66.45167,66.45167 v 265.80669 c 0,36.70574 -29.74592,66.45167 -66.45167,66.45167 H 69.491594 c -36.70575,0 -66.4516701,-29.74593 -66.4516701,-66.45167 z" /><path
+ style="fill:none;stroke:#3b3059;stroke-width:6.07984781;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10"
+ inkscape:connector-curvature="0"
+ id="path8281"
+ stroke-miterlimit="10"
+ d="m 101.27746,71.464882 c 0,-37.78572 30.63924,-68.4249581 68.42496,-68.4249581 h 729.52846 c 37.7857,0 68.4249,30.6392381 68.4249,68.4249581 V 345.1647 c 0,37.78572 -30.6392,68.42496 -68.4249,68.42496 H 169.70242 c -37.78572,0 -68.42496,-30.63924 -68.42496,-68.42496 z" /><g
+ id="g8287"
+ clip-path="url(#clipPath56)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:32.06399918px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8285"
+ font-size="32.064px"
+ transform="matrix(1,0,0,-1,409.78,93.312)"><tspan
+ id="tspan8283"
+ y="0"
+ x="0 23.855616 42.837505 66.693123">DPDK</tspan></text>
+</g><g
+ id="g8293"
+ clip-path="url(#clipPath68)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:32.06399918px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8291"
+ font-size="32.064px"
+ transform="matrix(1,0,0,-1,358.03,435.43)"><tspan
+ id="tspan8289"
+ y="0"
+ x="0 23.72736 45.595009 67.462654 73.875458 80.160004 100.90541 122.80512 133.54655 139.95937 160.96127">Application</tspan></text>
+</g><path
+ style="fill:#f9d8e2;fill-opacity:0.70196001;fill-rule:evenodd;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path8295"
+ d="M 424.30939,345.59136 H 531.18672 V 277.91305 H 424.30939 Z" /><g
+ id="g8301"
+ clip-path="url(#clipPath82)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:32.04000092px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8299"
+ font-size="32.04px"
+ transform="matrix(1,0,0,-1,432.96,231.41)"><tspan
+ id="tspan8297"
+ y="0"
+ x="0 23.7096 42.67728">API</tspan></text>
+</g><path
+ style="fill:#f9d8e2;fill-opacity:0.70196001;fill-rule:evenodd;stroke-width:1.33329999"
+ inkscape:connector-curvature="0"
+ id="path8303"
+ d="m 422.38944,213.91465 h 107.19732 v -67.8383 H 422.38944 Z" /><g
+ id="g8309"
+ clip-path="url(#clipPath96)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:32.04000092px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8307"
+ font-size="32.04px"
+ transform="matrix(1,0,0,-1,431.54,330.29)"><tspan
+ id="tspan8305"
+ y="0"
+ x="0 23.7096 42.100559">ABI</tspan></text>
+</g><g
+ id="g8315"
+ clip-path="url(#clipPath108)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8313"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,221.78,293.23)"><tspan
+ id="tspan8311"
+ y="0"
+ x="0 9.4483204 14.25228 24.706079 35.447159 40.203239 51.10392 66.106323 81.076797 84.332642 94.068237">Programming</tspan></text>
+</g><g
+ id="g8321"
+ clip-path="url(#clipPath120)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.98400021px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8319"
+ font-size="15.984px"
+ transform="matrix(1,0,0,-1,221.78,274.03)"><tspan
+ id="tspan8317"
+ y="0"
+ x="0 7.320672 18.237743 27.987984 38.633327 48.351601 59.268673 69.945984">Language</tspan></text>
+</g><g
+ id="g8327"
+ clip-path="url(#clipPath132)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8325"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,221.78,254.81)"><tspan
+ id="tspan8323"
+ y="0"
+ x="0 7.6767602 17.38044 27.116039 37.442162 42.708961 45.93288 56.386681 66.122276">Functions</tspan></text>
+</g><g
+ id="g8333"
+ clip-path="url(#clipPath144)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8331"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,221.78,235.61)"><tspan
+ id="tspan8329"
+ y="0"
+ x="0 11.87424 22.77492 28.073641 38.974319 44.273041 52.891441 63.776161 74.150162">Datatypes</tspan></text>
+</g><g
+ id="g8339"
+ clip-path="url(#clipPath156)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8337"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,221.78,216.41)"><tspan
+ id="tspan8335"
+ y="0"
+ x="0 9.6877203 20.06172 25.312559 35.016239 39.820202 49.555801 54.216122 60.823559 69.441963 80.326683 90.700684">Return Types</tspan></text>
+</g><g
+ id="g8345"
+ clip-path="url(#clipPath168)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8343"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,221.78,197.21)"><tspan
+ id="tspan8341"
+ y="0"
+ x="0 12.97548 23.429279 33.164879 39.357361 44.640121 55.540798 65.276398 70.559158">Constants</tspan></text>
+</g><g
+ id="g8351"
+ clip-path="url(#clipPath180)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8349"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,221.78,178.01)"><tspan
+ id="tspan8347"
+ y="0"
+ x="0">…</tspan></text>
+</g><g
+ id="g8357"
+ clip-path="url(#clipPath192)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8355"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,546.38,354.12)"><tspan
+ id="tspan8353"
+ y="0"
+ x="0 3.8304 13.566 19.75848 25.07316 29.877119 39.580799 49.906921 55.189678 58.413601 68.867401 78.602997 83.2314 89.423882 99.797882">Instruction set</tspan></text>
+</g><g
+ id="g8363"
+ clip-path="url(#clipPath204)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.98400021px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8361"
+ font-size="15.984px"
+ transform="matrix(1,0,0,-1,546.38,332.88)"><tspan
+ id="tspan8359"
+ y="0"
+ x="0 8.5674238 16.239744 26.517456 36.859104 46.577377 51.836113 62.753185 73.654274 77.026894 87.352562 91.892014 103.99191 108.33955 115.66022 118.85703 128.60727 136.63123 147.02083">Executable & Linker</tspan></text>
+</g><g
+ id="g8369"
+ clip-path="url(#clipPath216)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8367"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,546.38,313.66)"><tspan
+ id="tspan8365"
+ y="0"
+ x="0 7.6767602 18.13056 22.934521 37.904999 48.805679">Format</tspan></text>
+</g><g
+ id="g8375"
+ clip-path="url(#clipPath228)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8373"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,546.38,292.42)"><tspan
+ id="tspan8371"
+ y="0"
+ x="0 12.97548 23.87616 27.22776 30.579359 33.80328 43.538879 54.200161 58.39764 71.373123 81.82692 91.562523 100.6278 110.95392 120.68952 125.95632 129.18024 139.63403 149.36964 155.56212">Calling Conventions.</tspan></text>
+</g><g
+ id="g8381"
+ clip-path="url(#clipPath240)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8379"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,546.38,271.3)"><tspan
+ id="tspan8377"
+ y="0"
+ x="0">…</tspan></text>
+</g><path
+ style="fill:none;stroke:#ffffff;stroke-width:6.07984781;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:18.239544, 24.319392"
+ inkscape:connector-curvature="0"
+ id="path8383"
+ stroke-miterlimit="10"
+ d="M 122.71693,120.47699 H 782.84709" /><path
+ style="fill:none;stroke:#ffffff;stroke-width:6.07984781;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10;stroke-dasharray:18.239544, 24.319392"
+ inkscape:connector-curvature="0"
+ id="path8385"
+ stroke-miterlimit="10"
+ d="M 177.27556,413.58966 H 837.40573" /><g
+ id="g8391"
+ clip-path="url(#clipPath260)"
+ transform="matrix(1.3333,0,0,-1.3333,-143.35642,633.10417)"><text
+ style="font-style:italic;font-size:15.96000004px;font-family:'Century Gothic';fill:#3b3059"
+ id="text8389"
+ font-style="italic"
+ font-size="15.96px"
+ transform="matrix(1,0,0,-1,483.19,405.82)"><tspan
+ id="tspan8387"
+ y="0"
+ x="0 5.0114398 14.71512 24.45072 34.77684 40.299 43.522919 53.976719 63.712318 68.13324 78.459358 89.360039 92.583961 95.807877">function calls</tspan></text>
+</g><path
+ style="fill:none;stroke:#3b3059;stroke-width:0.95997602;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10"
+ inkscape:connector-curvature="0"
+ id="path8393"
+ stroke-miterlimit="10"
+ d="m 574.38564,303.03242 c -11.93304,0 -21.59946,-1.61329 -21.59946,-3.59991 V 164.62255 c 0,-1.98662 -9.66643,-3.59991 -21.59946,-3.59991 11.93303,0 21.59946,-1.61329 21.59946,-3.59991 v -18.30621 c 0,-1.98662 9.66642,-3.59991 21.59946,-3.59991" /><path
+ style="fill:none;stroke:#3b3059;stroke-width:0.95997602;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:10"
+ inkscape:connector-curvature="0"
+ id="path8395"
+ stroke-miterlimit="10"
+ d="m 372.63068,389.43026 c 13.293,0 24.0794,-1.79995 24.0794,-4.01323 v -91.53105 c 0,-2.21327 10.78639,-4.01323 24.0794,-4.01323 -13.29301,0 -24.0794,-1.79995 -24.0794,-4.01323 v -65.3717 c 0,-2.21328 -10.7864,-4.01323 -24.0794,-4.01323" /></svg>
\ No newline at end of file
diff --git a/doc/guides/contributing/stable.rst b/doc/guides/contributing/stable.rst
index 6a5eee9..2b563d4 100644
--- a/doc/guides/contributing/stable.rst
+++ b/doc/guides/contributing/stable.rst
@@ -1,7 +1,7 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright 2018 The DPDK contributors
-.. stable_lts_releases:
+.. _stable_lts_releases:
DPDK Stable Releases and Long Term Support
==========================================
@@ -53,6 +53,9 @@ year's November (X.11) release will be maintained as an LTS for 2 years.
After the X.11 release, an LTS branch will be created for it at
http://git.dpdk.org/dpdk-stable where bugfixes will be backported to.
+An LTS release may align with the declaration of a new major ABI version;
+please read the :ref:`abi_policy` for more information.
+
It is anticipated that there will be at least 4 releases per year of the LTS
or approximately 1 every 3 months. However, the cadence can be shorter or
longer depending on the number and criticality of the backported
@@ -119,10 +122,3 @@ A Stable Release will be released by:
list.
Stable releases are available on the `dpdk.org download page <http://core.dpdk.org/download/>`_.
-
-
-ABI
----
-
-The Stable Release should not be seen as a way of breaking or circumventing
-the DPDK ABI policy.
--
2.7.4
* [dpdk-dev] [PATCH v7 1/4] doc: separate versioning.rst into version and policy
2019-10-25 16:28 10% [dpdk-dev] [PATCH v7 0/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
@ 2019-10-25 16:28 13% ` Ray Kinsella
2019-10-25 16:28 23% ` [dpdk-dev] [PATCH v7 2/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
` (2 subsequent siblings)
3 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 16:28 UTC (permalink / raw)
To: dev
Cc: mdr, thomas, stephen, bruce.richardson, ferruh.yigit,
konstantin.ananyev, jerinj, olivier.matz, nhorman,
maxime.coquelin, john.mcnamara, marko.kovacevic, hemant.agrawal,
ktraynor, aconole
Separate versioning.rst into abi versioning and abi policy guidance, in
preparation for adding more detail to the abi policy.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
doc/guides/contributing/abi_policy.rst | 167 ++++++++
doc/guides/contributing/abi_versioning.rst | 427 +++++++++++++++++++++
doc/guides/contributing/index.rst | 3 +-
doc/guides/contributing/versioning.rst | 591 -----------------------------
4 files changed, 596 insertions(+), 592 deletions(-)
create mode 100644 doc/guides/contributing/abi_policy.rst
create mode 100644 doc/guides/contributing/abi_versioning.rst
delete mode 100644 doc/guides/contributing/versioning.rst
diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
new file mode 100644
index 0000000..d4f4e9f
--- /dev/null
+++ b/doc/guides/contributing/abi_policy.rst
@@ -0,0 +1,167 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2018 The DPDK contributors
+
+DPDK ABI/API policy
+===================
+
+Description
+-----------
+
+This document details some methods for handling ABI management in the DPDK.
+
+General Guidelines
+------------------
+
+#. Whenever possible, ABI should be preserved
+#. ABI/API may be changed with a deprecation process
+#. The modification of symbols can generally be managed with versioning
+#. Libraries or APIs marked in ``experimental`` state may change without constraint
+#. New APIs will be marked as ``experimental`` for at least one release to allow
+ any issues found by users of the new API to be fixed quickly
+#. The addition of symbols is generally not problematic
+#. The removal of symbols generally is an ABI break and requires bumping of the
+ LIBABIVER macro
+#. Updates to the minimum hardware requirements, which drop support for hardware which
+ was previously supported, should be treated as an ABI change.
+
+What is an ABI
+~~~~~~~~~~~~~~
+
+An ABI (Application Binary Interface) is the set of runtime interfaces exposed
+by a library. It is similar to an API (Application Programming Interface) but
+is the result of compilation. It is also effectively cloned when applications
+link to dynamic libraries. That is to say when an application is compiled to
+link against dynamic libraries, it is assumed that the ABI remains constant
+between the time the application is compiled/linked, and the time that it runs.
+Therefore, in the case of dynamic linking, it is critical that an ABI is
+preserved, or (when modified), done in such a way that the application is unable
+to behave improperly or in an unexpected fashion.
+
+
+ABI/API Deprecation
+-------------------
+
+The DPDK ABI policy
+~~~~~~~~~~~~~~~~~~~
+
+ABI versions are set at the time of major release labeling, and the ABI may
+change multiple times, without warning, between the last release label and the
+HEAD label of the git tree.
+
+ABI versions, once released, are available until such time as their
+deprecation has been noted in the Release Notes for at least one major release
+cycle. For example consider the case where the ABI for DPDK 2.0 has been
+shipped and then a decision is made to modify it during the development of
+DPDK 2.1. The decision will be recorded in the Release Notes for the DPDK 2.1
+release and the modification will be made available in the DPDK 2.2 release.
+
+ABI versions may be deprecated in whole or in part as needed by a given
+update.
+
+Some ABI changes may be too significant to reasonably maintain multiple
+versions. In those cases, ABIs may be updated without backward compatibility
+being provided. The requirements for doing so are:
+
+#. At least 3 acknowledgments of the need to do so must be made on the
+ dpdk.org mailing list.
+
+ - The acknowledgment of the maintainer of the component is mandatory, or if
+ no maintainer is available for the component, the tree/sub-tree maintainer
+ for that component must acknowledge the ABI change instead.
+
+ - It is also recommended that acknowledgments from different "areas of
+ interest" be sought for each deprecation, for example: from NIC vendors,
+ CPU vendors, end-users, etc.
+
+#. The changes (including an alternative map file) can be included with the
+ deprecation notice, wrapped by the ``RTE_NEXT_ABI`` option, to provide
+ more details about the upcoming changes.
+ The ``RTE_NEXT_ABI`` wrapper will be removed when it becomes the default ABI.
+ The preferred way to provide this information is to send the feature
+ as a separate patch and reference it in the deprecation notice.
+
+#. A full deprecation cycle, as explained above, must be made to offer
+ downstream consumers sufficient warning of the change.
+
+Note that the above process for ABI deprecation should not be undertaken
+lightly. ABI stability is extremely important for downstream consumers of the
+DPDK, especially when distributed in shared object form. Every effort should
+be made to preserve the ABI whenever possible. The ABI should only be changed
+for significant reasons, such as performance enhancements. ABI breakage due to
+changes such as reorganizing public structure fields for aesthetic or
+readability purposes should be avoided.
+
+.. note::
+
+ Updates to the minimum hardware requirements, which drop support for hardware
+ which was previously supported, should be treated as an ABI change, and
+ follow the relevant deprecation policy procedures as above: 3 acks and
+ announcement at least one release in advance.
+
+Examples of Deprecation Notices
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following are some examples of ABI deprecation notices which would be
+added to the Release Notes:
+
+* The Macro ``#RTE_FOO`` is deprecated and will be removed with version 2.0,
+ to be replaced with the inline function ``rte_foo()``.
+
+* The function ``rte_mbuf_grok()`` has been updated to include a new parameter
+ in version 2.0. Backwards compatibility will be maintained for this function
+ until the release of version 2.1
+
+* The members of ``struct rte_foo`` have been reorganized in release 2.0 for
+ performance reasons. Existing binary applications will have backwards
+ compatibility in release 2.0, while newly built binaries will need to
+ reference the new structure variant ``struct rte_foo2``. Compatibility will
+ be removed in release 2.2, and all applications will require updating and
+ rebuilding to the new structure at that time, which will be renamed to the
+ original ``struct rte_foo``.
+
+* Significant ABI changes are planned for the ``librte_dostuff`` library. The
+ upcoming release 2.0 will not contain these changes, but release 2.1 will,
+ and no backwards compatibility is planned due to the extensive nature of
+ these changes. Binaries using this library built prior to version 2.1 will
+ require updating and recompilation.
+
+New API replacing previous one
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If a newly proposed API functionally replaces an existing one, when the new API
+becomes non-experimental then the old one is marked with ``__rte_deprecated``.
+Deprecated APIs are removed completely just after the next LTS.
+
+As a reminder, the old API must follow the deprecation process above before it is removed.
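The deprecation pattern described above can be sketched in plain C. This is an illustrative sketch, not DPDK code: ``rte_foo_old``/``rte_foo_new`` are hypothetical names, and the local define stands in for the attribute that ``rte_compat.h`` supplies.

```c
/* Stand-in for the macro provided by rte_compat.h; the function
 * names below are hypothetical, for illustration only. */
#define __rte_deprecated __attribute__((deprecated))

/* The replacement API, now past its experimental period. */
int rte_foo_new(int x)
{
	return x + 1;
}

/* The old API is kept for one full deprecation cycle: existing
 * callers still compile, link and run, but see a compile-time
 * warning until the symbol is removed just after the next LTS. */
__rte_deprecated int rte_foo_old(int x)
{
	return rte_foo_new(x);
}
```

Callers of ``rte_foo_old()`` keep working during the cycle; the deprecation warning is the signal to migrate to ``rte_foo_new()``.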
+
+
+Experimental APIs
+-----------------
+
+APIs marked as ``experimental`` are not considered part of the ABI and may
+change without warning at any time. Since changes to APIs are most likely
+immediately after their introduction, as users begin to take advantage of
+those new APIs and start finding issues with them, new DPDK APIs will be
+automatically marked as ``experimental`` to allow for a period of stabilization
+before they become part of a tracked ABI.
+
+Note that marking an API as experimental is a multi-step process.
+To mark an API as experimental, the symbols which are desired to be exported
+must be placed in an EXPERIMENTAL version block in the corresponding libraries'
+version map script.
+Secondly, the corresponding prototypes of those exported functions (in the
+development header files), must be marked with the ``__rte_experimental`` tag
+(see ``rte_compat.h``).
+The DPDK build makefiles perform a check to ensure that the map file and the
+C code reflect the same list of symbols.
+This check can be circumvented by defining ``ALLOW_EXPERIMENTAL_API``
+during compilation in the corresponding library Makefile.
+
+In addition to tagging the code with ``__rte_experimental``,
+the doxygen markup must also contain the EXPERIMENTAL string,
+and the MAINTAINERS file should note the EXPERIMENTAL libraries.
+
+Removing the experimental tag associated with an API does not require a
+deprecation notice. However, an API should remain in the experimental state for
+at least one release. Thereafter, the normal process of posting a patch for
+review to the mailing list can be followed.
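For illustration, an ``EXPERIMENTAL`` version block in a library's ``rte_<library>_version.map`` script takes roughly the following shape (the symbol name is a hypothetical example):

```
EXPERIMENTAL {
	global:

	rte_foo_do_experimental_thing;
};
```

Symbols listed here are exported but excluded from ABI stability guarantees until they are moved into a numbered ``DPDK_x.y`` version node.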
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
new file mode 100644
index 0000000..53e6ac0
--- /dev/null
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -0,0 +1,427 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+ Copyright 2018 The DPDK contributors
+
+.. _library_versioning:
+
+Library versioning
+------------------
+
+Downstreams might want to provide different DPDK releases at the same time to
+support multiple consumers of DPDK linked against older and newer sonames.
+
+Also due to the interdependencies that DPDK libraries can have applications
+might end up with an executable space in which multiple versions of a library
+are mapped by ld.so.
+
+Think of LibA that got an ABI bump and LibB that did not get an ABI bump but is
+depending on LibA.
+
+.. note::
+
+ Application
+ \-> LibA.old
+ \-> LibB.new -> LibA.new
+
+This is a conflict which can be avoided by setting ``CONFIG_RTE_MAJOR_ABI``.
+If set, the value of ``CONFIG_RTE_MAJOR_ABI`` overrides the per-library
+versions defined in each library's ``LIBABIVER``.
+An example might be ``CONFIG_RTE_MAJOR_ABI=16.11``, which will name all
+libraries ``librte<?>.so.16.11`` instead of ``librte<?>.so.<LIBABIVER>``.
+
+
+ABI versioning
+--------------
+
+Versioning Macros
+~~~~~~~~~~~~~~~~~
+
+When a symbol is exported from a library to provide an API, it also provides a
+calling convention (ABI) that is embodied in its name, return type and
+arguments. Occasionally that function may need to change to accommodate new
+functionality or behavior. When that occurs, it is desirable to allow for
+backward compatibility for a time with older binaries that are dynamically
+linked to the DPDK.
+
+To support backward compatibility the ``rte_compat.h``
+header file provides macros to use when updating exported functions. These
+macros are used in conjunction with the ``rte_<library>_version.map`` file for
+a given library to allow multiple versions of a symbol to exist in a shared
+library so that older binaries need not be immediately recompiled.
+
+The macros exported are:
+
+* ``VERSION_SYMBOL(b, e, n)``: Creates a symbol version table entry binding
+ versioned symbol ``b@DPDK_n`` to the internal function ``b_e``.
+
+* ``BIND_DEFAULT_SYMBOL(b, e, n)``: Creates a symbol version entry instructing
+ the linker to bind references to symbol ``b`` to the internal symbol
+ ``b_e``.
+
+* ``MAP_STATIC_SYMBOL(f, p)``: Declare the prototype ``f``, and map it to the
+ fully qualified function ``p``, so that if a symbol becomes versioned, it
+ can still be mapped back to the public symbol name.
+
+Examples of ABI Macro use
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Updating a public API
+_____________________
+
+Assume we have a function as follows
+
+.. code-block:: c
+
+ /*
+ * Create an acl context object for apps to
+ * manipulate
+ */
+ struct rte_acl_ctx *
+ rte_acl_create(const struct rte_acl_param *param)
+ {
+ ...
+ }
+
+
+Assume that struct rte_acl_ctx is a private structure, and that a developer
+wishes to enhance the acl api so that a debugging flag can be enabled on a
+per-context basis. This requires an addition to the structure (which, being
+private, is safe), but it also requires modifying the code as follows
+
+.. code-block:: c
+
+ /*
+ * Create an acl context object for apps to
+ * manipulate
+ */
+ struct rte_acl_ctx *
+ rte_acl_create(const struct rte_acl_param *param, int debug)
+ {
+ ...
+ }
+
+
+Note also that, being a public function, the header file prototype must also be
+changed, as must all the call sites, to reflect the new ABI footprint. We will
+maintain previous ABI versions that are accessible only to previously compiled
+binaries
+
+The addition of a parameter to the function is ABI breaking as the function is
+public, and existing applications may use it in its current form. However, the
+compatibility macros in DPDK allow a developer to use symbol versioning so that
+multiple functions can be mapped to the same public symbol based on when an
+application was linked to it. To see how this is done, we start with the
+requisite libraries version map file. Initially the version map file for the
+acl library looks like this
+
+.. code-block:: none
+
+ DPDK_2.0 {
+ global:
+
+ rte_acl_add_rules;
+ rte_acl_build;
+ rte_acl_classify;
+ rte_acl_classify_alg;
+ rte_acl_classify_scalar;
+ rte_acl_create;
+ rte_acl_dump;
+ rte_acl_find_existing;
+ rte_acl_free;
+ rte_acl_ipv4vlan_add_rules;
+ rte_acl_ipv4vlan_build;
+ rte_acl_list_dump;
+ rte_acl_reset;
+ rte_acl_reset_rules;
+ rte_acl_set_ctx_classify;
+
+ local: *;
+ };
+
+This file needs to be modified as follows
+
+.. code-block:: none
+
+ DPDK_2.0 {
+ global:
+
+ rte_acl_add_rules;
+ rte_acl_build;
+ rte_acl_classify;
+ rte_acl_classify_alg;
+ rte_acl_classify_scalar;
+ rte_acl_create;
+ rte_acl_dump;
+ rte_acl_find_existing;
+ rte_acl_free;
+ rte_acl_ipv4vlan_add_rules;
+ rte_acl_ipv4vlan_build;
+ rte_acl_list_dump;
+ rte_acl_reset;
+ rte_acl_reset_rules;
+ rte_acl_set_ctx_classify;
+
+ local: *;
+ };
+
+ DPDK_2.1 {
+ global:
+ rte_acl_create;
+
+ } DPDK_2.0;
+
+The addition of the new block tells the linker that a new version node is
+available (DPDK_2.1), which contains the symbol rte_acl_create, and inherits the
+symbols from the DPDK_2.0 node. This list is directly translated into a list of
+exported symbols when DPDK is compiled as a shared library
+
+Next, we need to specify in the code which functions map to the rte_acl_create
+symbol at which versions. First, at the site of the initial symbol definition,
+we need to update the function so that it is uniquely named, and not in conflict
+with the public symbol name
+
+.. code-block:: c
+
+ struct rte_acl_ctx *
+ -rte_acl_create(const struct rte_acl_param *param)
+ +rte_acl_create_v20(const struct rte_acl_param *param)
+ {
+ size_t sz;
+ struct rte_acl_ctx *ctx;
+ ...
+
+Note that the base name of the symbol was kept intact, as this is conducive to
+the macros used for versioning symbols. That is our next step, mapping this new
+symbol name to the initial symbol name at version node 2.0. Immediately after
+the function, we add this line of code
+
+.. code-block:: c
+
+ VERSION_SYMBOL(rte_acl_create, _v20, 2.0);
+
+Remember to also add the rte_compat.h header to the requisite C file where
+these changes are being made. The above macro instructs the linker to create a
+new symbol ``rte_acl_create@DPDK_2.0``, which matches the symbol created in older
+builds, but now points to the above newly named function. We have now mapped
+the original rte_acl_create symbol to the original function (but with a new
+name)
+
+Next, we need to create the 2.1 version of the symbol. We create a new function
+name, with a different suffix, and implement it appropriately
+
+.. code-block:: c
+
+ struct rte_acl_ctx *
+ rte_acl_create_v21(const struct rte_acl_param *param, int debug)
+ {
+ struct rte_acl_ctx *ctx = rte_acl_create_v20(param);
+
+ ctx->debug = debug;
+
+ return ctx;
+ }
+
+This code serves as our new API call. It's the same as our old call, but adds
+the new parameter in place. Next we need to map this function to the symbol
+``rte_acl_create@DPDK_2.1``. To do this, we modify the public prototype of the call
+in the header file, adding the macro there to inform all including applications,
+that on re-link, the default rte_acl_create symbol should point to this
+function. Note that we could do this by simply naming the function above
+rte_acl_create, and the linker would choose the most recent version tag to apply
+in the version script, but we can also do this in the header file
+
+.. code-block:: c
+
+ struct rte_acl_ctx *
+ -rte_acl_create(const struct rte_acl_param *param);
+ +rte_acl_create(const struct rte_acl_param *param, int debug);
+ +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 2.1);
+
+The BIND_DEFAULT_SYMBOL macro explicitly tells applications that include this
+header, to link to the rte_acl_create_v21 function and apply the DPDK_2.1
+version node to it. This method is more explicit and flexible than just
+re-implementing the exact symbol name, and allows for other features (such as
+linking to the old symbol version by default, when the new ABI is to be opt-in
+for a period).
+
+There is one last thing to do. Note that we've taken what was a public symbol,
+and duplicated it into two uniquely and differently named symbols. We've then
+mapped each of those back to the public symbol ``rte_acl_create`` with different
+version tags. This only applies to dynamic linking, as static linking has no
+notion of versioning. That leaves this code in a position of no longer having a
+symbol simply named ``rte_acl_create`` and a static build will fail on that
+missing symbol.
+
+To correct this, we can simply map a function of our choosing back to the public
+symbol in the static build with the ``MAP_STATIC_SYMBOL`` macro. Generally the
+assumption is that the most recent version of the symbol is the one you want to
+map. So, back in the C file, immediately after ``rte_acl_create_v21`` is
+defined, we add this
+
+.. code-block:: c
+
+ struct rte_acl_ctx *
+ rte_acl_create_v21(const struct rte_acl_param *param, int debug)
+ {
+ ...
+ }
+ MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v21);
+
+That tells the compiler that, when building a static library, any calls to the
+symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v21``
+
+That's it, on the next shared library rebuild, there will be two versions of
+rte_acl_create, an old DPDK_2.0 version, used by previously built applications,
+and a new DPDK_2.1 version, used by newly built applications.
+
+
+Deprecating part of a public API
+________________________________
+
+Let's assume that you've done the above update, and after a few releases have
+passed you decide you would like to retire the old version of the function.
+After having gone through the ABI deprecation announcement process, removal is
+easy. Start by removing the symbol from the requisite version map file:
+
+.. code-block:: none
+
+ DPDK_2.0 {
+ global:
+
+ rte_acl_add_rules;
+ rte_acl_build;
+ rte_acl_classify;
+ rte_acl_classify_alg;
+ rte_acl_classify_scalar;
+ rte_acl_dump;
+ - rte_acl_create
+ rte_acl_find_existing;
+ rte_acl_free;
+ rte_acl_ipv4vlan_add_rules;
+ rte_acl_ipv4vlan_build;
+ rte_acl_list_dump;
+ rte_acl_reset;
+ rte_acl_reset_rules;
+ rte_acl_set_ctx_classify;
+
+ local: *;
+ };
+
+ DPDK_2.1 {
+ global:
+ rte_acl_create;
+ } DPDK_2.0;
+
+
+Next remove the corresponding versioned export.
+
+.. code-block:: c
+
+ -VERSION_SYMBOL(rte_acl_create, _v20, 2.0);
+
+
+Note that the internal function definition could also be removed, but it's used
+in our example by the newer version _v21, so we leave it in place. This is a
+coding style choice.
+
+Lastly, we need to bump the LIBABIVER number for this library in the Makefile to
+indicate to applications doing dynamic linking that this is a later, and
+possibly incompatible library version:
+
+.. code-block:: c
+
+ -LIBABIVER := 1
+ +LIBABIVER := 2
+
+Deprecating an entire ABI version
+_________________________________
+
+While removing a symbol from an ABI may be useful, it is often more practical
+to remove an entire version node at once. If a version node completely
+specifies an API, then removing part of it typically makes it incomplete. In
+those cases it is better to remove the entire node.
+
+To do this, start by modifying the version map file, such that all symbols from
+the node to be removed are merged into the next node in the map
+
+In the case of our map above, it would transform to look as follows
+
+.. code-block:: none
+
+ DPDK_2.1 {
+ global:
+
+ rte_acl_add_rules;
+ rte_acl_build;
+ rte_acl_classify;
+ rte_acl_classify_alg;
+ rte_acl_classify_scalar;
+ rte_acl_dump;
+ rte_acl_create;
+ rte_acl_find_existing;
+ rte_acl_free;
+ rte_acl_ipv4vlan_add_rules;
+ rte_acl_ipv4vlan_build;
+ rte_acl_list_dump;
+ rte_acl_reset;
+ rte_acl_reset_rules;
+ rte_acl_set_ctx_classify;
+
+ local: *;
+ };
+
+Then any uses of BIND_DEFAULT_SYMBOL that pointed to the old node should be
+updated to point to the new version node in any header files for all affected
+symbols.
+
+.. code-block:: c
+
+ -BIND_DEFAULT_SYMBOL(rte_acl_create, _v20, 2.0);
+ +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 2.1);
+
+Lastly, any VERSION_SYMBOL macros that point to the old version node should be
+removed, taking care to keep, where needed, old code in place to support newer
+versions of the symbol.
+
+
+Running the ABI Validator
+-------------------------
+
+The ``devtools`` directory in the DPDK source tree contains a utility program,
+``validate-abi.sh``, for validating the DPDK ABI based on the Linux `ABI
+Compliance Checker
+<http://ispras.linuxbase.org/index.php/ABI_compliance_checker>`_.
+
+This has a dependency on the ``abi-compliance-checker`` and ``abi-dumper``
+utilities which can be installed via a package manager. For example::
+
+ sudo yum install abi-compliance-checker
+ sudo yum install abi-dumper
+
+The syntax of the ``validate-abi.sh`` utility is::
+
+ ./devtools/validate-abi.sh <REV1> <REV2>
+
+Where ``REV1`` and ``REV2`` are valid gitrevisions(7)
+https://www.kernel.org/pub/software/scm/git/docs/gitrevisions.html
+on the local repo.
+
+For example::
+
+ # Check between the previous and latest commit:
+ ./devtools/validate-abi.sh HEAD~1 HEAD
+
+ # Check on a specific compilation target:
+ ./devtools/validate-abi.sh -t x86_64-native-linux-gcc HEAD~1 HEAD
+
+ # Check between two tags:
+ ./devtools/validate-abi.sh v2.0.0 v2.1.0
+
+ # Check between git master and local topic-branch "vhost-hacking":
+ ./devtools/validate-abi.sh master vhost-hacking
+
+After the validation script completes (it can take a while since it needs to
+compile both tags) it will create compatibility reports in the
+``./abi-check/compat_report`` directory. Listed incompatibilities can be found
+as follows::
+
+ grep -lr Incompatible abi-check/compat_reports/
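The final ``grep`` can be tried against a synthetic report tree. This is a runnable sketch: the directory layout and verdict strings below are illustrative only; real reports are generated by the script under ``./abi-check/compat_reports/``.

```shell
# Build a tiny stand-in for the compat_reports tree.
tmp=$(mktemp -d)
mkdir -p "$tmp/compat_reports/librte_acl" "$tmp/compat_reports/librte_eal"
echo "Verdict: Compatible"   > "$tmp/compat_reports/librte_acl/compat_report.html"
echo "Verdict: Incompatible" > "$tmp/compat_reports/librte_eal/compat_report.html"

# -l prints only matching file names, -r recurses into the tree,
# so only the report flagging an incompatibility is listed.
hits=$(grep -lr Incompatible "$tmp/compat_reports/")
echo "$hits"

rm -rf "$tmp"
```

Each listed file is a per-library report worth opening to see which symbols changed.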
diff --git a/doc/guides/contributing/index.rst b/doc/guides/contributing/index.rst
index e2608d3..2fefd91 100644
--- a/doc/guides/contributing/index.rst
+++ b/doc/guides/contributing/index.rst
@@ -10,7 +10,8 @@ Contributor's Guidelines
coding_style
design
- versioning
+ abi_policy
+ abi_versioning
documentation
patches
vulnerability
diff --git a/doc/guides/contributing/versioning.rst b/doc/guides/contributing/versioning.rst
deleted file mode 100644
index 3ab2c43..0000000
--- a/doc/guides/contributing/versioning.rst
+++ /dev/null
@@ -1,591 +0,0 @@
-.. SPDX-License-Identifier: BSD-3-Clause
- Copyright 2018 The DPDK contributors
-
-DPDK ABI/API policy
-===================
-
-Description
------------
-
-This document details some methods for handling ABI management in the DPDK.
-
-General Guidelines
-------------------
-
-#. Whenever possible, ABI should be preserved
-#. ABI/API may be changed with a deprecation process
-#. The modification of symbols can generally be managed with versioning
-#. Libraries or APIs marked in ``experimental`` state may change without constraint
-#. New APIs will be marked as ``experimental`` for at least one release to allow
- any issues found by users of the new API to be fixed quickly
-#. The addition of symbols is generally not problematic
-#. The removal of symbols generally is an ABI break and requires bumping of the
- LIBABIVER macro
-#. Updates to the minimum hardware requirements, which drop support for hardware which
- was previously supported, should be treated as an ABI change.
-
-What is an ABI
-~~~~~~~~~~~~~~
-
-An ABI (Application Binary Interface) is the set of runtime interfaces exposed
-by a library. It is similar to an API (Application Programming Interface) but
-is the result of compilation. It is also effectively cloned when applications
-link to dynamic libraries. That is to say when an application is compiled to
-link against dynamic libraries, it is assumed that the ABI remains constant
-between the time the application is compiled/linked, and the time that it runs.
-Therefore, in the case of dynamic linking, it is critical that an ABI is
-preserved, or (when modified), done in such a way that the application is unable
-to behave improperly or in an unexpected fashion.
-
-
-ABI/API Deprecation
--------------------
-
-The DPDK ABI policy
-~~~~~~~~~~~~~~~~~~~
-
-ABI versions are set at the time of major release labeling, and the ABI may
-change multiple times, without warning, between the last release label and the
-HEAD label of the git tree.
-
-ABI versions, once released, are available until such time as their
-deprecation has been noted in the Release Notes for at least one major release
-cycle. For example consider the case where the ABI for DPDK 2.0 has been
-shipped and then a decision is made to modify it during the development of
-DPDK 2.1. The decision will be recorded in the Release Notes for the DPDK 2.1
-release and the modification will be made available in the DPDK 2.2 release.
-
-ABI versions may be deprecated in whole or in part as needed by a given
-update.
-
-Some ABI changes may be too significant to reasonably maintain multiple
-versions. In those cases ABI's may be updated without backward compatibility
-being provided. The requirements for doing so are:
-
-#. At least 3 acknowledgments of the need to do so must be made on the
- dpdk.org mailing list.
-
- - The acknowledgment of the maintainer of the component is mandatory, or if
- no maintainer is available for the component, the tree/sub-tree maintainer
- for that component must acknowledge the ABI change instead.
-
- - It is also recommended that acknowledgments from different "areas of
- interest" be sought for each deprecation, for example: from NIC vendors,
- CPU vendors, end-users, etc.
-
-#. The changes (including an alternative map file) can be included with
- deprecation notice, in wrapped way by the ``RTE_NEXT_ABI`` option,
- to provide more details about oncoming changes.
- ``RTE_NEXT_ABI`` wrapper will be removed when it become the default ABI.
- More preferred way to provide this information is sending the feature
- as a separate patch and reference it in deprecation notice.
-
-#. A full deprecation cycle, as explained above, must be made to offer
- downstream consumers sufficient warning of the change.
-
-Note that the above process for ABI deprecation should not be undertaken
-lightly. ABI stability is extremely important for downstream consumers of the
-DPDK, especially when distributed in shared object form. Every effort should
-be made to preserve the ABI whenever possible. The ABI should only be changed
-for significant reasons, such as performance enhancements. ABI breakage due to
-changes such as reorganizing public structure fields for aesthetic or
-readability purposes should be avoided.
-
-.. note::
-
- Updates to the minimum hardware requirements, which drop support for hardware
- which was previously supported, should be treated as an ABI change, and
- follow the relevant deprecation policy procedures as above: 3 acks and
- announcement at least one release in advance.
-
-Examples of Deprecation Notices
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The following are some examples of ABI deprecation notices which would be
-added to the Release Notes:
-
-* The Macro ``#RTE_FOO`` is deprecated and will be removed with version 2.0,
- to be replaced with the inline function ``rte_foo()``.
-
-* The function ``rte_mbuf_grok()`` has been updated to include a new parameter
- in version 2.0. Backwards compatibility will be maintained for this function
- until the release of version 2.1
-
-* The members of ``struct rte_foo`` have been reorganized in release 2.0 for
- performance reasons. Existing binary applications will have backwards
- compatibility in release 2.0, while newly built binaries will need to
- reference the new structure variant ``struct rte_foo2``. Compatibility will
- be removed in release 2.2, and all applications will require updating and
- rebuilding to the new structure at that time, which will be renamed to the
- original ``struct rte_foo``.
-
-* Significant ABI changes are planned for the ``librte_dostuff`` library. The
- upcoming release 2.0 will not contain these changes, but release 2.1 will,
- and no backwards compatibility is planned due to the extensive nature of
- these changes. Binaries using this library built prior to version 2.1 will
- require updating and recompilation.
-
-New API replacing previous one
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-If a new API proposed functionally replaces an existing one, when the new API
-becomes non-experimental then the old one is marked with ``__rte_deprecated``.
-Deprecated APIs are removed completely just after the next LTS.
-
-Reminder that old API should follow deprecation process to be removed.
-
-
-Experimental APIs
------------------
-
-APIs marked as ``experimental`` are not considered part of the ABI and may
-change without warning at any time. Since changes to APIs are most likely
-immediately after their introduction, as users begin to take advantage of
-those new APIs and start finding issues with them, new DPDK APIs will be
-automatically marked as ``experimental`` to allow for a period of stabilization
-before they become part of a tracked ABI.
-
-Note that marking an API as experimental is a multi step process.
-To mark an API as experimental, the symbols which are desired to be exported
-must be placed in an EXPERIMENTAL version block in the corresponding libraries'
-version map script.
-Secondly, the corresponding prototypes of those exported functions (in the
-development header files), must be marked with the ``__rte_experimental`` tag
-(see ``rte_compat.h``).
-The DPDK build makefiles perform a check to ensure that the map file and the
-C code reflect the same list of symbols.
-This check can be circumvented by defining ``ALLOW_EXPERIMENTAL_API``
-during compilation in the corresponding library Makefile.
-
-In addition to tagging the code with ``__rte_experimental``,
-the doxygen markup must also contain the EXPERIMENTAL string,
-and the MAINTAINERS file should note the EXPERIMENTAL libraries.
-
-For removing the experimental tag associated with an API, deprecation notice
-is not required. Though, an API should remain in experimental state for at least
-one release. Thereafter, normal process of posting patch for review to mailing
-list can be followed.
-
-
-Library versioning
-------------------
-
-Downstreams might want to provide different DPDK releases at the same time to
-support multiple consumers of DPDK linked against older and newer sonames.
-
-Also due to the interdependencies that DPDK libraries can have applications
-might end up with an executable space in which multiple versions of a library
-are mapped by ld.so.
-
-Think of LibA that got an ABI bump and LibB that did not get an ABI bump but is
-depending on LibA.
-
-.. note::
-
- Application
- \-> LibA.old
- \-> LibB.new -> LibA.new
-
-That is a conflict which can be avoided by setting ``CONFIG_RTE_MAJOR_ABI``.
-If set, the value of ``CONFIG_RTE_MAJOR_ABI`` overrides the per-library
-versions otherwise defined in each library's ``LIBABIVER``.
-An example might be ``CONFIG_RTE_MAJOR_ABI=16.11``, which will make all libraries
-``librte<?>.so.16.11`` instead of ``librte<?>.so.<LIBABIVER>``.
-
-
-ABI versioning
---------------
-
-Versioning Macros
-~~~~~~~~~~~~~~~~~
-
-When a symbol is exported from a library to provide an API, it also provides a
-calling convention (ABI) that is embodied in its name, return type and
-arguments. Occasionally that function may need to change to accommodate new
-functionality or behavior. When that occurs, it is desirable to allow for
-backward compatibility for a time with older binaries that are dynamically
-linked to the DPDK.
-
-To support backward compatibility the ``rte_compat.h``
-header file provides macros to use when updating exported functions. These
-macros are used in conjunction with the ``rte_<library>_version.map`` file for
-a given library to allow multiple versions of a symbol to exist in a shared
-library so that older binaries need not be immediately recompiled.
-
-The macros exported are:
-
-* ``VERSION_SYMBOL(b, e, n)``: Creates a symbol version table entry binding
- versioned symbol ``b@DPDK_n`` to the internal function ``b_e``.
-
-* ``BIND_DEFAULT_SYMBOL(b, e, n)``: Creates a symbol version entry instructing
- the linker to bind references to symbol ``b`` to the internal symbol
- ``b_e``.
-
-* ``MAP_STATIC_SYMBOL(f, p)``: Declare the prototype ``f``, and map it to the
- fully qualified function ``p``, so that if a symbol becomes versioned, it
- can still be mapped back to the public symbol name.
-
-Examples of ABI Macro use
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Updating a public API
-_____________________
-
-Assume we have a function as follows
-
-.. code-block:: c
-
- /*
- * Create an acl context object for apps to
- * manipulate
- */
- struct rte_acl_ctx *
- rte_acl_create(const struct rte_acl_param *param)
- {
- ...
- }
-
-
-Assume that struct rte_acl_ctx is a private structure, and that a developer
-wishes to enhance the acl api so that a debugging flag can be enabled on a
-per-context basis. This requires an addition to the structure (which, being
-private, is safe), but it also requires modifying the code as follows
-
-.. code-block:: c
-
- /*
- * Create an acl context object for apps to
- * manipulate
- */
- struct rte_acl_ctx *
- rte_acl_create(const struct rte_acl_param *param, int debug)
- {
- ...
- }
-
-
-Note also that, being a public function, the header file prototype must also be
-changed, as must all the call sites, to reflect the new ABI footprint. We will
-maintain previous ABI versions that are accessible only to previously compiled
-binaries.
-
-The addition of a parameter to the function is ABI breaking as the function is
-public, and existing applications may use it in its current form. However, the
-compatibility macros in DPDK allow a developer to use symbol versioning so that
-multiple functions can be mapped to the same public symbol based on when an
-application was linked to it. To see how this is done, we start with the
-requisite library's version map file. Initially the version map file for the
-acl library looks like this
-
-.. code-block:: none
-
- DPDK_2.0 {
- global:
-
- rte_acl_add_rules;
- rte_acl_build;
- rte_acl_classify;
- rte_acl_classify_alg;
- rte_acl_classify_scalar;
- rte_acl_create;
- rte_acl_dump;
- rte_acl_find_existing;
- rte_acl_free;
- rte_acl_ipv4vlan_add_rules;
- rte_acl_ipv4vlan_build;
- rte_acl_list_dump;
- rte_acl_reset;
- rte_acl_reset_rules;
- rte_acl_set_ctx_classify;
-
- local: *;
- };
-
-This file needs to be modified as follows
-
-.. code-block:: none
-
- DPDK_2.0 {
- global:
-
- rte_acl_add_rules;
- rte_acl_build;
- rte_acl_classify;
- rte_acl_classify_alg;
- rte_acl_classify_scalar;
- rte_acl_create;
- rte_acl_dump;
- rte_acl_find_existing;
- rte_acl_free;
- rte_acl_ipv4vlan_add_rules;
- rte_acl_ipv4vlan_build;
- rte_acl_list_dump;
- rte_acl_reset;
- rte_acl_reset_rules;
- rte_acl_set_ctx_classify;
-
- local: *;
- };
-
- DPDK_2.1 {
- global:
- rte_acl_create;
-
- } DPDK_2.0;
-
-The addition of the new block tells the linker that a new version node is
-available (DPDK_2.1), which contains the symbol rte_acl_create, and inherits the
-symbols from the DPDK_2.0 node. This list is directly translated into a list of
-exported symbols when DPDK is compiled as a shared library
-
-Next, we need to specify in the code which functions map to the rte_acl_create
-symbol at which versions. First, at the site of the initial symbol definition,
-we need to update the function so that it is uniquely named, and not in conflict
-with the public symbol name
-
-.. code-block:: c
-
- struct rte_acl_ctx *
- -rte_acl_create(const struct rte_acl_param *param)
- +rte_acl_create_v20(const struct rte_acl_param *param)
- {
- size_t sz;
- struct rte_acl_ctx *ctx;
- ...
-
-Note that the base name of the symbol was kept intact, as this is conducive to
-the macros used for versioning symbols. That is our next step, mapping this new
-symbol name to the initial symbol name at version node 2.0. Immediately after
-the function, we add this line of code
-
-.. code-block:: c
-
- VERSION_SYMBOL(rte_acl_create, _v20, 2.0);
-
-Remember to also add the rte_compat.h header to the requisite C file where
-these changes are being made. The above macro instructs the linker to create a
-new symbol ``rte_acl_create@DPDK_2.0``, which matches the symbol created in older
-builds, but now points to the above newly named function. We have now mapped
-the original rte_acl_create symbol to the original function (but with a new
-name)
-
-Next, we need to create the 2.1 version of the symbol. We create a new function
-name, with a different suffix, and implement it appropriately
-
-.. code-block:: c
-
- struct rte_acl_ctx *
- rte_acl_create_v21(const struct rte_acl_param *param, int debug)
- {
- struct rte_acl_ctx *ctx = rte_acl_create_v20(param);
-
- ctx->debug = debug;
-
- return ctx;
- }
-
-This code serves as our new API call. It's the same as our old call, but adds
-the new parameter in place. Next we need to map this function to the symbol
-``rte_acl_create@DPDK_2.1``. To do this, we modify the public prototype of the call
-in the header file, adding the macro there to inform all including applications
-that, on re-link, the default rte_acl_create symbol should point to this
-function. Note that we could do this by simply naming the function above
-rte_acl_create, and the linker would choose the most recent version tag to apply
-in the version script, but we can also do this in the header file
-
-.. code-block:: c
-
- struct rte_acl_ctx *
- -rte_acl_create(const struct rte_acl_param *param);
- +rte_acl_create(const struct rte_acl_param *param, int debug);
- +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 2.1);
-
-The BIND_DEFAULT_SYMBOL macro explicitly tells applications that include this
-header to link to the rte_acl_create_v21 function and apply the DPDK_2.1
-version node to it. This method is more explicit and flexible than just
-re-implementing the exact symbol name, and allows for other features (such as
-linking to the old symbol version by default, when the new ABI is to be opt-in
-for a period).
-
-One last thing we need to do. Note that we've taken what was a public symbol,
-and duplicated it into two uniquely and differently named symbols. We've then
-mapped each of those back to the public symbol ``rte_acl_create`` with different
-version tags. This only applies to dynamic linking, as static linking has no
-notion of versioning. That leaves this code in a position of no longer having a
-symbol simply named ``rte_acl_create`` and a static build will fail on that
-missing symbol.
-
-To correct this, we can simply map a function of our choosing back to the public
-symbol in the static build with the ``MAP_STATIC_SYMBOL`` macro. Generally the
-assumption is that the most recent version of the symbol is the one you want to
-map. So, back in the C file, immediately after ``rte_acl_create_v21`` is
-defined, we add this
-
-.. code-block:: c
-
- struct rte_acl_ctx *
- rte_acl_create_v21(const struct rte_acl_param *param, int debug)
- {
- ...
- }
- MAP_STATIC_SYMBOL(struct rte_acl_ctx *rte_acl_create(const struct rte_acl_param *param, int debug), rte_acl_create_v21);
-
-That tells the compiler that, when building a static library, any calls to the
-symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v21``
-
-That's it, on the next shared library rebuild, there will be two versions of
-rte_acl_create, an old DPDK_2.0 version, used by previously built applications,
-and a new DPDK_2.1 version, used by future built applications.
-
-
-Deprecating part of a public API
-________________________________
-
-Let's assume that you've done the above update, and after a few releases have
-passed you decide you would like to retire the old version of the function.
-After having gone through the ABI deprecation announcement process, removal is
-easy. Start by removing the symbol from the requisite version map file:
-
-.. code-block:: none
-
- DPDK_2.0 {
- global:
-
- rte_acl_add_rules;
- rte_acl_build;
- rte_acl_classify;
- rte_acl_classify_alg;
- rte_acl_classify_scalar;
- rte_acl_dump;
- - rte_acl_create
- rte_acl_find_existing;
- rte_acl_free;
- rte_acl_ipv4vlan_add_rules;
- rte_acl_ipv4vlan_build;
- rte_acl_list_dump;
- rte_acl_reset;
- rte_acl_reset_rules;
- rte_acl_set_ctx_classify;
-
- local: *;
- };
-
- DPDK_2.1 {
- global:
- rte_acl_create;
- } DPDK_2.0;
-
-
-Next remove the corresponding versioned export.
-
-.. code-block:: c
-
- -VERSION_SYMBOL(rte_acl_create, _v20, 2.0);
-
-
-Note that the internal function definition could also be removed, but it's used
-in our example by the newer version _v21, so we leave it in place. This is a
-coding style choice.
-
-Lastly, we need to bump the LIBABIVER number for this library in the Makefile to
-indicate to applications doing dynamic linking that this is a later, and
-possibly incompatible library version:
-
-.. code-block:: c
-
- -LIBABIVER := 1
- +LIBABIVER := 2
-
-Deprecating an entire ABI version
-_________________________________
-
-While removing a symbol from an ABI may be useful, it is often more practical
-to remove an entire version node at once. If a version node completely
-specifies an API, then removing part of it typically makes it incomplete. In
-those cases it is better to remove the entire node.
-
-To do this, start by modifying the version map file, such that all symbols from
-the node to be removed are merged into the next node in the map.
-
-In the case of our map above, it would transform to look as follows
-
-.. code-block:: none
-
- DPDK_2.1 {
- global:
-
- rte_acl_add_rules;
- rte_acl_build;
- rte_acl_classify;
- rte_acl_classify_alg;
- rte_acl_classify_scalar;
- rte_acl_dump;
- rte_acl_create;
- rte_acl_find_existing;
- rte_acl_free;
- rte_acl_ipv4vlan_add_rules;
- rte_acl_ipv4vlan_build;
- rte_acl_list_dump;
- rte_acl_reset;
- rte_acl_reset_rules;
- rte_acl_set_ctx_classify;
-
- local: *;
- };
-
-Then any uses of BIND_DEFAULT_SYMBOL that pointed to the old node should be
-updated to point to the new version node in any header files for all affected
-symbols.
-
-.. code-block:: c
-
- -BIND_DEFAULT_SYMBOL(rte_acl_create, _v20, 2.0);
- +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 2.1);
-
-Lastly, any VERSION_SYMBOL macros that point to the old version node should be
-removed, taking care to keep, where needed, old code in place to support newer
-versions of the symbol.
-
-
-Running the ABI Validator
--------------------------
-
-The ``devtools`` directory in the DPDK source tree contains a utility program,
-``validate-abi.sh``, for validating the DPDK ABI based on the Linux `ABI
-Compliance Checker
-<http://ispras.linuxbase.org/index.php/ABI_compliance_checker>`_.
-
-This has a dependency on the ``abi-compliance-checker`` and ``abi-dumper``
-utilities which can be installed via a package manager. For example::
-
- sudo yum install abi-compliance-checker
- sudo yum install abi-dumper
-
-The syntax of the ``validate-abi.sh`` utility is::
-
- ./devtools/validate-abi.sh <REV1> <REV2>
-
-Where ``REV1`` and ``REV2`` are valid `gitrevisions(7)
-<https://www.kernel.org/pub/software/scm/git/docs/gitrevisions.html>`_
-on the local repo.
-
-For example::
-
- # Check between the previous and latest commit:
- ./devtools/validate-abi.sh HEAD~1 HEAD
-
- # Check on a specific compilation target:
- ./devtools/validate-abi.sh -t x86_64-native-linux-gcc HEAD~1 HEAD
-
- # Check between two tags:
- ./devtools/validate-abi.sh v2.0.0 v2.1.0
-
- # Check between git master and local topic-branch "vhost-hacking":
- ./devtools/validate-abi.sh master vhost-hacking
-
-After the validation script completes (it can take a while since it needs to
-compile both tags) it will create compatibility reports in the
-``./abi-check/compat_report`` directory. Listed incompatibilities can be found
-as follows::
-
- grep -lr Incompatible abi-check/compat_reports/
--
2.7.4
^ permalink raw reply [relevance 13%]
* [dpdk-dev] [PATCH v7 3/4] doc: updates to versioning guide for abi versions
2019-10-25 16:28 10% [dpdk-dev] [PATCH v7 0/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
2019-10-25 16:28 13% ` [dpdk-dev] [PATCH v7 1/4] doc: separate versioning.rst into version and policy Ray Kinsella
2019-10-25 16:28 23% ` [dpdk-dev] [PATCH v7 2/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
@ 2019-10-25 16:28 30% ` Ray Kinsella
2019-10-25 16:28 13% ` [dpdk-dev] [PATCH v7 4/4] doc: add maintainer for abi policy Ray Kinsella
3 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 16:28 UTC (permalink / raw)
To: dev
Cc: mdr, thomas, stephen, bruce.richardson, ferruh.yigit,
konstantin.ananyev, jerinj, olivier.matz, nhorman,
maxime.coquelin, john.mcnamara, marko.kovacevic, hemant.agrawal,
ktraynor, aconole
Updates to the ABI versioning guide to account for the changes to the DPDK
ABI/API policy. Fixes for references to the ABI versioning and policy guides.
Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
---
doc/guides/contributing/abi_versioning.rst | 248 +++++++++++++++++++----------
doc/guides/contributing/patches.rst | 6 +-
doc/guides/contributing/stable.rst | 2 +-
doc/guides/rel_notes/deprecation.rst | 5 +-
4 files changed, 175 insertions(+), 86 deletions(-)
diff --git a/doc/guides/contributing/abi_versioning.rst b/doc/guides/contributing/abi_versioning.rst
index 53e6ac0..a7cf1b9 100644
--- a/doc/guides/contributing/abi_versioning.rst
+++ b/doc/guides/contributing/abi_versioning.rst
@@ -1,44 +1,134 @@
.. SPDX-License-Identifier: BSD-3-Clause
Copyright 2018 The DPDK contributors
-.. library_versioning:
+.. _abi_versioning:
-Library versioning
+ABI Versioning
+==============
+
+This document details the mechanics of ABI version management in DPDK.
+
+.. _what_is_soname:
+
+What is a library's soname?
+---------------------------
+
+System libraries usually adopt the familiar major and minor version naming
+convention, where major versions (e.g. ``librte_eal 20.x, 21.x``) are presumed
+to be ABI incompatible with each other and minor versions (e.g. ``librte_eal
+20.1, 20.2``) are presumed to be ABI compatible. A library's `soname
+<https://en.wikipedia.org/wiki/Soname>`_ is typically used to provide backward
+compatibility information about a given library, describing the lowest common
+denominator ABI supported by the library. The soname, or logical name for the
+library, is typically composed of the library's name and major version, e.g.
+``librte_eal.so.20``.
+
+During an application's build process, a library's soname is noted as a runtime
+dependency of the application. This information is then used by the `dynamic
+linker <https://en.wikipedia.org/wiki/Dynamic_linker>`_ when resolving the
+application's dependencies at runtime, to load a library supporting the correct
+ABI version. The library loaded at runtime may therefore be a minor revision
+supporting the same major ABI version (e.g. ``librte_eal.20.2``) as the library
+used to link the application (e.g. ``librte_eal.20.0``).
+
+.. _major_abi_versions:
+
+Major ABI versions
------------------
-Downstreams might want to provide different DPDK releases at the same time to
-support multiple consumers of DPDK linked against older and newer sonames.
+An ABI version change to a given library, especially in core libraries such as
+``librte_mbuf``, may cause an implicit ripple effect on the ABI of its
+consuming libraries, causing ABI breakages. There may however be no explicit
+reason to bump a dependent library's ABI version, as there may have been no
+obvious change to the dependent library's API, even though the library's ABI
+compatibility will have been broken.
+
+This interdependence of DPDK libraries means that ABI versioning of libraries
+is more manageable at a project level, with all project libraries sharing a
+**single ABI version**. In addition, the need to maintain a stable ABI for some
+number of releases as described in the section :doc:`abi_policy`, means
+that ABI version increments need to be carefully planned and managed at a
+project level.
+
+Major ABI versions are therefore typically declared aligned with an LTS release
+and then supported for some number of subsequent releases, shared across all
+libraries. This means that a single project-level ABI version, reflected in all
+individual libraries' sonames, library filenames and associated version maps,
+persists over multiple releases.
+
+.. code-block:: none
+
+ $ head ./lib/librte_acl/rte_acl_version.map
+ DPDK_20 {
+ global:
+ ...
-Also due to the interdependencies that DPDK libraries can have applications
-might end up with an executable space in which multiple versions of a library
-are mapped by ld.so.
+ $ head ./lib/librte_eal/rte_eal_version.map
+ DPDK_20 {
+ global:
+ ...
-Think of LibA that got an ABI bump and LibB that did not get an ABI bump but is
-depending on LibA.
+When an ABI change is made between major ABI versions to a given library, a new
+section is added to that library's version map describing the impending new ABI
+version, as described in the section :ref:`example_abi_macro_usage`. The
+library's soname and filename however do not change, e.g. ``libacl.so.20``, as
+ABI compatibility with the last major ABI version continues to be preserved for
+that library.
-.. note::
+.. code-block:: none
- Application
- \-> LibA.old
- \-> LibB.new -> LibA.new
+ $ head ./lib/librte_acl/rte_acl_version.map
+ DPDK_20 {
+ global:
+ ...
-That is a conflict which can be avoided by setting ``CONFIG_RTE_MAJOR_ABI``.
-If set, the value of ``CONFIG_RTE_MAJOR_ABI`` overwrites all - otherwise per
-library - versions defined in the libraries ``LIBABIVER``.
-An example might be ``CONFIG_RTE_MAJOR_ABI=16.11`` which will make all libraries
-``librte<?>.so.16.11`` instead of ``librte<?>.so.<LIBABIVER>``.
+ DPDK_21 {
+ global:
+
+ } DPDK_20;
+ ...
+ $ head ./lib/librte_eal/rte_eal_version.map
+ DPDK_20 {
+ global:
+ ...
+
+However, when a new ABI version is declared, for example DPDK ``21``, old
+deprecated functions may be safely removed at this point and the entire old
+major ABI version removed; see the section :ref:`deprecating_entire_abi` on
+how this may be done.
+
+.. code-block:: none
+
+ $ head ./lib/librte_acl/rte_acl_version.map
+ DPDK_21 {
+ global:
+ ...
+
+ $ head ./lib/librte_eal/rte_eal_version.map
+ DPDK_21 {
+ global:
+ ...
+
+At the same time, the major ABI version is changed atomically across all
+libraries by incrementing the major version in each individual library's
+soname, e.g. ``libacl.so.21``. This is done by bumping the LIBABIVER number in
+each library's Makefile to indicate to dynamically linking applications that
+this is a later, and possibly incompatible, library version:
+
+.. code-block:: c
+
+ -LIBABIVER := 20
+ +LIBABIVER := 21
-ABI versioning
---------------
Versioning Macros
-~~~~~~~~~~~~~~~~~
+-----------------
When a symbol is exported from a library to provide an API, it also provides a
calling convention (ABI) that is embodied in its name, return type and
arguments. Occasionally that function may need to change to accommodate new
-functionality or behavior. When that occurs, it is desirable to allow for
+functionality or behavior. When that occurs, it may be required to allow for
backward compatibility for a time with older binaries that are dynamically
linked to the DPDK.
@@ -61,8 +151,10 @@ The macros exported are:
fully qualified function ``p``, so that if a symbol becomes versioned, it
can still be mapped back to the public symbol name.
+.. _example_abi_macro_usage:
+
Examples of ABI Macro use
-^^^^^^^^^^^^^^^^^^^^^^^^^
+~~~~~~~~~~~~~~~~~~~~~~~~~
Updating a public API
_____________________
@@ -106,16 +198,16 @@ maintain previous ABI versions that are accessible only to previously compiled
binaries
The addition of a parameter to the function is ABI breaking as the function is
-public, and existing application may use it in its current form. However, the
+public, and existing applications may use it in its current form. However, the
compatibility macros in DPDK allow a developer to use symbol versioning so that
multiple functions can be mapped to the same public symbol based on when an
-application was linked to it. To see how this is done, we start with the
-requisite libraries version map file. Initially the version map file for the
-acl library looks like this
+application was linked to it. To see how this is done, we start with the
+requisite library's version map file. Initially the version map file for the acl
+library looks like this
.. code-block:: none
- DPDK_2.0 {
+ DPDK_20 {
global:
rte_acl_add_rules;
@@ -141,7 +233,7 @@ This file needs to be modified as follows
.. code-block:: none
- DPDK_2.0 {
+ DPDK_20 {
global:
rte_acl_add_rules;
@@ -163,16 +255,16 @@ This file needs to be modified as follows
local: *;
};
- DPDK_2.1 {
+ DPDK_21 {
global:
rte_acl_create;
- } DPDK_2.0;
+ } DPDK_20;
The addition of the new block tells the linker that a new version node is
-available (DPDK_2.1), which contains the symbol rte_acl_create, and inherits the
-symbols from the DPDK_2.0 node. This list is directly translated into a list of
-exported symbols when DPDK is compiled as a shared library
+available (DPDK_21), which contains the symbol rte_acl_create, and inherits
+the symbols from the DPDK_20 node. This list is directly translated into a
+list of exported symbols when DPDK is compiled as a shared library
Next, we need to specify in the code which functions map to the rte_acl_create
symbol at which versions. First, at the site of the initial symbol definition,
@@ -191,22 +283,22 @@ with the public symbol name
Note that the base name of the symbol was kept intact, as this is conducive to
the macros used for versioning symbols. That is our next step, mapping this new
-symbol name to the initial symbol name at version node 2.0. Immediately after
+symbol name to the initial symbol name at version node 20. Immediately after
the function, we add this line of code
.. code-block:: c
- VERSION_SYMBOL(rte_acl_create, _v20, 2.0);
+ VERSION_SYMBOL(rte_acl_create, _v20, 20);
Remember to also add the rte_compat.h header to the requisite C file where
-these changes are being made. The above macro instructs the linker to create a
-new symbol ``rte_acl_create@DPDK_2.0``, which matches the symbol created in older
-builds, but now points to the above newly named function. We have now mapped
-the original rte_acl_create symbol to the original function (but with a new
-name)
+these changes are being made. The above macro instructs the linker to create a
+new symbol ``rte_acl_create@DPDK_20``, which matches the symbol created in
+older builds, but now points to the above newly named function. We have now
+mapped the original rte_acl_create symbol to the original function (but with a
+new name)
-Next, we need to create the 2.1 version of the symbol. We create a new function
-name, with a different suffix, and implement it appropriately
+Next, we need to create the 21 version of the symbol. We create a new function
+name, with a different suffix, and implement it appropriately
.. code-block:: c
@@ -220,12 +312,12 @@ name, with a different suffix, and implement it appropriately
return ctx;
}
-This code serves as our new API call. Its the same as our old call, but adds
-the new parameter in place. Next we need to map this function to the symbol
-``rte_acl_create@DPDK_2.1``. To do this, we modify the public prototype of the call
-in the header file, adding the macro there to inform all including applications,
-that on re-link, the default rte_acl_create symbol should point to this
-function. Note that we could do this by simply naming the function above
+This code serves as our new API call. It's the same as our old call, but adds the
+new parameter in place. Next we need to map this function to the symbol
+``rte_acl_create@DPDK_21``. To do this, we modify the public prototype of the
+call in the header file, adding the macro there to inform all including
+applications, that on re-link, the default rte_acl_create symbol should point to
+this function. Note that we could do this by simply naming the function above
rte_acl_create, and the linker would choose the most recent version tag to apply
in the version script, but we can also do this in the header file
@@ -233,11 +325,11 @@ in the version script, but we can also do this in the header file
struct rte_acl_ctx *
-rte_acl_create(const struct rte_acl_param *param);
- +rte_acl_create(const struct rte_acl_param *param, int debug);
- +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 2.1);
+ +rte_acl_create_v21(const struct rte_acl_param *param, int debug);
+ +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
The BIND_DEFAULT_SYMBOL macro explicitly tells applications that include this
-header, to link to the rte_acl_create_v21 function and apply the DPDK_2.1
+header, to link to the rte_acl_create_v21 function and apply the DPDK_21
version node to it. This method is more explicit and flexible than just
re-implementing the exact symbol name, and allows for other features (such as
linking to the old symbol version by default, when the new ABI is to be opt-in
@@ -257,6 +349,7 @@ assumption is that the most recent version of the symbol is the one you want to
map. So, back in the C file, immediately after ``rte_acl_create_v21`` is
defined, we add this
+
.. code-block:: c
struct rte_acl_ctx *
@@ -270,21 +363,22 @@ That tells the compiler that, when building a static library, any calls to the
symbol ``rte_acl_create`` should be linked to ``rte_acl_create_v21``
That's it, on the next shared library rebuild, there will be two versions of
-rte_acl_create, an old DPDK_2.0 version, used by previously built applications,
-and a new DPDK_2.1 version, used by future built applications.
+rte_acl_create, an old DPDK_20 version, used by previously built applications,
+and a new DPDK_21 version, used by future built applications.
Deprecating part of a public API
________________________________
-Lets assume that you've done the above update, and after a few releases have
-passed you decide you would like to retire the old version of the function.
-After having gone through the ABI deprecation announcement process, removal is
-easy. Start by removing the symbol from the requisite version map file:
+Let's assume that you've done the above update, and in preparation for the next
+major ABI version you decide you would like to retire the old version of the
+function. After having gone through the ABI deprecation announcement process,
+removal is easy. Start by removing the symbol from the requisite version map
+file:
.. code-block:: none
- DPDK_2.0 {
+ DPDK_20 {
global:
rte_acl_add_rules;
@@ -306,48 +400,42 @@ easy. Start by removing the symbol from the requisite version map file:
local: *;
};
- DPDK_2.1 {
+ DPDK_21 {
global:
rte_acl_create;
- } DPDK_2.0;
+ } DPDK_20;
Next remove the corresponding versioned export.
.. code-block:: c
- -VERSION_SYMBOL(rte_acl_create, _v20, 2.0);
+ -VERSION_SYMBOL(rte_acl_create, _v20, 20);
Note that the internal function definition could also be removed, but it's used
-in our example by the newer version _v21, so we leave it in place. This is a
-coding style choice.
-
-Lastly, we need to bump the LIBABIVER number for this library in the Makefile to
-indicate to applications doing dynamic linking that this is a later, and
-possibly incompatible library version:
-
-.. code-block:: c
+in our example by the newer version v21, so we leave it in place and declare it
+as static. This is a coding style choice.
- -LIBABIVER := 1
- +LIBABIVER := 2
+.. _deprecating_entire_abi:
Deprecating an entire ABI version
_________________________________
-While removing a symbol from and ABI may be useful, it is often more practical
-to remove an entire version node at once. If a version node completely
-specifies an API, then removing part of it, typically makes it incomplete. In
-those cases it is better to remove the entire node
+While removing a symbol from an ABI may be useful, it is more practical to
+remove an entire version node at once, as is typically done at the declaration
+of a major ABI version. If a version node completely specifies an API, then
+removing part of it, typically makes it incomplete. In those cases it is better
+to remove the entire node.
To do this, start by modifying the version map file, such that all symbols from
-the node to be removed are merged into the next node in the map
+the node to be removed are merged into the next node in the map.
In the case of our map above, it would transform to look as follows
.. code-block:: none
- DPDK_2.1 {
+ DPDK_21 {
global:
rte_acl_add_rules;
@@ -375,8 +463,8 @@ symbols.
.. code-block:: c
- -BIND_DEFAULT_SYMBOL(rte_acl_create, _v20, 2.0);
- +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 2.1);
+ -BIND_DEFAULT_SYMBOL(rte_acl_create, _v20, 20);
+ +BIND_DEFAULT_SYMBOL(rte_acl_create, _v21, 21);
Lastly, any VERSION_SYMBOL macros that point to the old version node should be
removed, taking care to keep, where needed, old code in place to support newer
diff --git a/doc/guides/contributing/patches.rst b/doc/guides/contributing/patches.rst
index 9e1013b..2140303 100644
--- a/doc/guides/contributing/patches.rst
+++ b/doc/guides/contributing/patches.rst
@@ -156,9 +156,9 @@ Make your planned changes in the cloned ``dpdk`` repo. Here are some guidelines
* For other PMDs and more info, refer to the ``MAINTAINERS`` file.
-* New external functions should be added to the local ``version.map`` file.
- See the :doc:`Guidelines for ABI policy and versioning </contributing/versioning>`.
- New external functions should also be added in alphabetical order.
+* New external functions should be added to the local ``version.map`` file. See
+ the :doc:`ABI policy <abi_policy>` and :ref:`ABI versioning <abi_versioning>`
+ guides. New external functions should also be added in alphabetical order.
* Important changes will require an addition to the release notes in ``doc/guides/rel_notes/``.
See the :ref:`Release Notes section of the Documentation Guidelines <doc_guidelines>` for details.
diff --git a/doc/guides/contributing/stable.rst b/doc/guides/contributing/stable.rst
index 2b563d4..4d38bb8 100644
--- a/doc/guides/contributing/stable.rst
+++ b/doc/guides/contributing/stable.rst
@@ -54,7 +54,7 @@ After the X.11 release, an LTS branch will be created for it at
http://git.dpdk.org/dpdk-stable where bugfixes will be backported to.
A LTS release may align with the declaration of a new major ABI version,
-please read the :ref:`abi_policy` for more information.
+please read the :doc:`abi_policy` for more information.
It is anticipated that there will be at least 4 releases per year of the LTS
or approximately 1 every 3 months. However, the cadence can be shorter or
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 237813b..25f71ad 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -4,8 +4,9 @@
ABI and API Deprecation
=======================
-See the :doc:`guidelines document for details of the ABI policy </contributing/versioning>`.
-API and ABI deprecation notices are to be posted here.
+See the guidelines document for details of the :doc:`ABI policy
+<../contributing/abi_policy>`. API and ABI deprecation notices are to be posted
+here.
Deprecation Notices
--
2.7.4
^ permalink raw reply [relevance 30%]
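The ``BIND_DEFAULT_SYMBOL`` and ``VERSION_SYMBOL`` macros referenced at the top
of this message pair with version nodes declared in a library's ``version.map``.
A minimal sketch of such a script is below; the node and symbol names are
illustrative, not taken from the patch:

```text
DPDK_20 {
	global:
	rte_acl_create;
	local: *;
};

DPDK_21 {
	global:
	rte_acl_create;
} DPDK_20;
```

Here the ``DPDK_21`` node inherits from ``DPDK_20``; ``VERSION_SYMBOL`` keeps
the old implementation reachable under the ``DPDK_20`` node while
``BIND_DEFAULT_SYMBOL`` makes the ``_v21`` implementation the default for newly
linked binaries.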
* [dpdk-dev] [PATCH v7 0/4] doc: changes to abi policy introducing major abi versions
@ 2019-10-25 16:28 10% Ray Kinsella
2019-10-25 16:28 13% ` [dpdk-dev] [PATCH v7 1/4] doc: separate versioning.rst into version and policy Ray Kinsella
` (3 more replies)
0 siblings, 4 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 16:28 UTC (permalink / raw)
To: dev
Cc: mdr, thomas, stephen, bruce.richardson, ferruh.yigit,
konstantin.ananyev, jerinj, olivier.matz, nhorman,
maxime.coquelin, john.mcnamara, marko.kovacevic, hemant.agrawal,
ktraynor, aconole
TL;DR abbreviation:
A major ABI version that all DPDK releases during an agreed period support. ABI
versioning is managed at a project-level, in place of library-level management.
ABI changes to add new features are permitted, as long as ABI compatibility with
the major ABI version is maintained.
Detail:
This patch introduces major ABI versions, released in alignment with the LTS
release and maintained for one year through subsequent releases. The intention
is that the one-year ABI support period will be reviewed after the initial
year, with a view to lengthening the period for the next ABI version.
ABI changes that preserve ABI compatibility with the major ABI version are
permitted in subsequent releases. ABI changes follow similar approval rules as
before with the additional gate of now requiring technical board approval. The
merging and release of ABI breaking changes would now be pushed to the
declaration of the next major ABI version.
This change encourages developers to maintain ABI compatibility with the major
ABI version, by promoting a permissive culture around those changes that
preserve ABI compatibility. This approach begins to align DPDK with those
projects that declare major ABI versions (e.g. version 2.x, 3.x) and support
those versions for some period, typically two years or more.
To provide an example of how this might work in practice:
* DPDK v20 is declared as the supported ABI version for one year, aligned with
the DPDK v19.11 (LTS) release. All library sonames are updated to reflect the
new ABI version, e.g. librte_eal.so.20, librte_acl.so.20...
* DPDK v20.02 .. v20.08 releases are ABI compatible with the DPDK v20 ABI. ABI
changes are permitted from DPDK v20.02 onwards, with the condition that ABI
compatibility with DPDK v20 is preserved.
* DPDK v21 is declared as the new supported ABI version for two years, aligned
with the DPDK v20.11 (LTS) release. The DPDK v20 ABI is now deprecated,
library sonames are updated to v21 and ABI compatibility breaking changes may
be introduced.
v7
* PNGs are now SVG. Some additional clarifications. Fixed typos and grammatical
errors. (as suggested by Thomas Monjalon and David Marchand)
v6
* Added figure to abi_policy.rst, comparing and contrasting the DPDK abi and
api. (as suggested by Aaron Conole)
v5
* Added figure to abi_policy.rst, mapping abi versions and abi compatibility to
DPDK releases. (as suggested by Neil Horman)
v4
* Removed changes to stable.rst, fixed typos and clarified the ABI policy
"warning".
v3
* Added myself as the maintainer of the ABI policy.
* Updated the policy and versioning guides to use the year of the LTS+1 (e.g.
v20), as the abi major version number.
v2
* Restructured the patch into 3 patches:
1. Splits the original versioning document into an ABI policy document
and ABI versioning document.
2. Add changes to the policy document introducing major ABI versions.
3. Fixes up the versioning document in light of major ABI versioning.
* Reduces the initial ABI stability from two years to one year, with a review
after the first year.
* Adds detail around ABI version handling for experimental libraries.
* Adds detail around chain of responsibility for removing deprecated symbols.
Ray Kinsella (4):
doc: separate versioning.rst into version and policy
doc: changes to abi policy introducing major abi versions
doc: updates to versioning guide for abi versions
doc: add maintainer for abi policy
MAINTAINERS | 4 +
doc/guides/contributing/abi_policy.rst | 322 ++++++
doc/guides/contributing/abi_versioning.rst | 515 ++++++++++
.../contributing/img/abi_stability_policy.svg | 1059 ++++++++++++++++++++
doc/guides/contributing/img/what_is_an_abi.svg | 382 +++++++
doc/guides/contributing/index.rst | 3 +-
doc/guides/contributing/patches.rst | 6 +-
doc/guides/contributing/stable.rst | 12 +-
doc/guides/contributing/versioning.rst | 591 -----------
doc/guides/rel_notes/deprecation.rst | 5 +-
10 files changed, 2294 insertions(+), 605 deletions(-)
create mode 100644 doc/guides/contributing/abi_policy.rst
create mode 100644 doc/guides/contributing/abi_versioning.rst
create mode 100644 doc/guides/contributing/img/abi_stability_policy.svg
create mode 100644 doc/guides/contributing/img/what_is_an_abi.svg
delete mode 100644 doc/guides/contributing/versioning.rst
--
2.7.4
^ permalink raw reply [relevance 10%]
* Re: [dpdk-dev] [PATCH 1/2] security: add anti replay window size
2019-10-25 10:00 4% ` Ananyev, Konstantin
@ 2019-10-25 15:56 0% ` Hemant Agrawal
0 siblings, 0 replies; 200+ results
From: Hemant Agrawal @ 2019-10-25 15:56 UTC (permalink / raw)
To: Ananyev, Konstantin, dev, Akhil Goyal, Doherty, Declan
Hi Konstantin,
> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Friday, October 25, 2019 3:30 PM
> To: Hemant Agrawal <hemant.agrawal@nxp.com>; dev@dpdk.org; Akhil
> Goyal <akhil.goyal@nxp.com>; Doherty, Declan <declan.doherty@intel.com>
> Subject: RE: [dpdk-dev] [PATCH 1/2] security: add anti replay window size
> Importance: High
>
> Hi Hemant,
>
> >
> > At present the ipsec xform is missing the important step to configure
> > the anti replay window size.
> > The newly added field will also help to enable or disable anti replay
> > checking, if available in offload, by means of a non-zero or zero
> > value.
>
> +1 for those changes.
> Though AFAIK, it will be an ABI breakage, right?
> So probably deserves changes in release notes.
[Hemant] ok
>
> >
> > Currently similar field is available in rte_ipsec lib for software
> > ipsec usage.
>
> Yep, the only reason it was put here was to avoid ABI breakage within
> rte_security.
> Having it in the rte_security_ipsec_xform makes much more sense.
>
> > The newly introduced field can replace
> > that field as well eventually.
>
> My suggestion would be to update librte_ipsec as part of this patch series.
>
[Hemant] will do it in v2
> >
> > Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> > ---
> > lib/librte_security/rte_security.h | 4 ++++
> > 1 file changed, 4 insertions(+)
> >
> > diff --git a/lib/librte_security/rte_security.h
> > b/lib/librte_security/rte_security.h
> > index aaafdfcd7..195ad5645 100644
> > --- a/lib/librte_security/rte_security.h
> > +++ b/lib/librte_security/rte_security.h
> > @@ -212,6 +212,10 @@ struct rte_security_ipsec_xform {
> > /**< Tunnel parameters, NULL for transport mode */
> > uint64_t esn_soft_limit;
> > /**< ESN for which the overflow event need to be raised */
> > + uint32_t replay_win_sz;
> > + /**< Anti replay window size to enable sequence replay attack
> handling.
> > + * replay checking is disabled if the window size is 0.
> > + */
> > };
>
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI
2019-10-25 13:56 9% ` [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI David Marchand
2019-10-25 15:30 4% ` Burakov, Anatoly
@ 2019-10-25 15:33 4% ` Thomas Monjalon
2019-10-26 18:14 4% ` Kevin Traynor
2 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2019-10-25 15:33 UTC (permalink / raw)
To: David Marchand
Cc: dev, stephen, anatoly.burakov, ktraynor, Neil Horman,
John McNamara, Marko Kovacevic, arybchenko, ferruh.yigit
25/10/2019 15:56, David Marchand:
> A new accessor has been introduced to provide the hidden information.
> This symbol can now be kept internal.
[..]
> +* eal: The ``rte_logs`` struct and global symbol will be made private to
> + remove it from the externally visible ABI and allow it to be updated in the
> + future.
Acked-by: Thomas Monjalon <thomas@monjalon.net>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI
2019-10-25 13:56 9% ` [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI David Marchand
@ 2019-10-25 15:30 4% ` Burakov, Anatoly
2019-10-25 15:33 4% ` Thomas Monjalon
2019-10-26 18:14 4% ` Kevin Traynor
2 siblings, 0 replies; 200+ results
From: Burakov, Anatoly @ 2019-10-25 15:30 UTC (permalink / raw)
To: David Marchand, dev
Cc: stephen, thomas, ktraynor, Neil Horman, John McNamara, Marko Kovacevic
On 25-Oct-19 2:56 PM, David Marchand wrote:
> A new accessor has been introduced to provide the hidden information.
> This symbol can now be kept internal.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> doc/guides/rel_notes/deprecation.rst | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
> index cf7744e..3aa1634 100644
> --- a/doc/guides/rel_notes/deprecation.rst
> +++ b/doc/guides/rel_notes/deprecation.rst
> @@ -34,6 +34,10 @@ Deprecation Notices
>
> + ``rte_eal_devargs_type_count``
>
> +* eal: The ``rte_logs`` struct and global symbol will be made private to
> + remove it from the externally visible ABI and allow it to be updated in the
> + future.
> +
> * vfio: removal of ``rte_vfio_dma_map`` and ``rte_vfio_dma_unmap`` APIs which
> have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
> functions. The due date for the removal targets DPDK 20.02.
>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
--
Thanks,
Anatoly
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 01/12] eal: make lcore config private
2019-10-25 13:56 12% ` [dpdk-dev] [PATCH v3 01/12] eal: make lcore config private David Marchand
@ 2019-10-25 15:18 0% ` Burakov, Anatoly
0 siblings, 0 replies; 200+ results
From: Burakov, Anatoly @ 2019-10-25 15:18 UTC (permalink / raw)
To: David Marchand, dev
Cc: stephen, thomas, ktraynor, Neil Horman, John McNamara,
Marko Kovacevic, Harry van Haaren, Harini Ramakrishnan,
Omar Cardona, Anand Rawat, Ranjit Menon
On 25-Oct-19 2:56 PM, David Marchand wrote:
> From: Stephen Hemminger <stephen@networkplumber.org>
>
> The internal structure of lcore_config does not need to be part of
> visible API/ABI. Make it private to EAL.
>
> Rearrange the structure so it takes less memory (and cache footprint).
>
> Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> ---
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
--
Thanks,
Anatoly
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
` (5 preceding siblings ...)
2019-10-25 13:56 4% ` [dpdk-dev] [PATCH v3 11/12] eal: make the global configuration private David Marchand
@ 2019-10-25 13:56 9% ` David Marchand
2019-10-25 15:30 4% ` Burakov, Anatoly
` (2 more replies)
6 siblings, 3 replies; 200+ results
From: David Marchand @ 2019-10-25 13:56 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, ktraynor, Neil Horman,
John McNamara, Marko Kovacevic
A new accessor has been introduced to provide the hidden information.
This symbol can now be kept internal.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index cf7744e..3aa1634 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,6 +34,10 @@ Deprecation Notices
+ ``rte_eal_devargs_type_count``
+* eal: The ``rte_logs`` struct and global symbol will be made private to
+ remove it from the externally visible ABI and allow it to be updated in the
+ future.
+
* vfio: removal of ``rte_vfio_dma_map`` and ``rte_vfio_dma_unmap`` APIs which
have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
functions. The due date for the removal targets DPDK 20.02.
--
1.8.3.1
^ permalink raw reply [relevance 9%]
* [dpdk-dev] [PATCH v3 11/12] eal: make the global configuration private
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
` (4 preceding siblings ...)
2019-10-25 13:56 3% ` [dpdk-dev] [PATCH v3 09/12] eal: deinline lcore APIs David Marchand
@ 2019-10-25 13:56 4% ` David Marchand
2019-10-25 13:56 9% ` [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI David Marchand
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-25 13:56 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, ktraynor, John McNamara,
Marko Kovacevic
Now that all elements of the rte_config structure have (deinlined)
accessors, we can hide it.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/rel_notes/release_19_11.rst | 3 +++
lib/librte_eal/common/eal_common_mcfg.c | 1 +
lib/librte_eal/common/eal_private.h | 32 ++++++++++++++++++++++++++++++++
lib/librte_eal/common/include/rte_eal.h | 32 --------------------------------
lib/librte_eal/common/malloc_heap.c | 1 +
lib/librte_eal/common/rte_malloc.c | 1 +
lib/librte_eal/rte_eal_version.map | 1 -
7 files changed, 38 insertions(+), 33 deletions(-)
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 8d88257..393fb61 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -320,6 +320,9 @@ ABI Changes
* eal: removed the ``rte_malloc_virt2phy`` function, replaced by
``rte_malloc_virt2iova`` since v17.11.
+* eal: made the ``rte_config`` struct and ``rte_eal_get_configuration``
+ function private.
+
* pci: removed the following deprecated functions since dpdk:
- ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
diff --git a/lib/librte_eal/common/eal_common_mcfg.c b/lib/librte_eal/common/eal_common_mcfg.c
index 0665494..0cf9a62 100644
--- a/lib/librte_eal/common/eal_common_mcfg.c
+++ b/lib/librte_eal/common/eal_common_mcfg.c
@@ -8,6 +8,7 @@
#include "eal_internal_cfg.h"
#include "eal_memcfg.h"
+#include "eal_private.h"
void
eal_mcfg_complete(void)
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index 0e4b033..52eea9a 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -37,6 +37,38 @@ struct lcore_config {
extern struct lcore_config lcore_config[RTE_MAX_LCORE];
/**
+ * The global RTE configuration structure.
+ */
+struct rte_config {
+ uint32_t master_lcore; /**< Id of the master lcore */
+ uint32_t lcore_count; /**< Number of available logical cores. */
+ uint32_t numa_node_count; /**< Number of detected NUMA nodes. */
+ uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
+ uint32_t service_lcore_count;/**< Number of available service cores. */
+ enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
+
+ /** Primary or secondary configuration */
+ enum rte_proc_type_t process_type;
+
+ /** PA or VA mapping mode */
+ enum rte_iova_mode iova_mode;
+
+ /**
+ * Pointer to memory configuration, which may be shared across multiple
+ * DPDK instances
+ */
+ struct rte_mem_config *mem_config;
+} __attribute__((__packed__));
+
+/**
+ * Get the global configuration structure.
+ *
+ * @return
+ * A pointer to the global configuration structure.
+ */
+struct rte_config *rte_eal_get_configuration(void);
+
+/**
* Initialize the memzone subsystem (private to eal).
*
* @return
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index ea3c9df..2f9ed29 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -52,38 +52,6 @@ enum rte_proc_type_t {
};
/**
- * The global RTE configuration structure.
- */
-struct rte_config {
- uint32_t master_lcore; /**< Id of the master lcore */
- uint32_t lcore_count; /**< Number of available logical cores. */
- uint32_t numa_node_count; /**< Number of detected NUMA nodes. */
- uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
- uint32_t service_lcore_count;/**< Number of available service cores. */
- enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
-
- /** Primary or secondary configuration */
- enum rte_proc_type_t process_type;
-
- /** PA or VA mapping mode */
- enum rte_iova_mode iova_mode;
-
- /**
- * Pointer to memory configuration, which may be shared across multiple
- * DPDK instances
- */
- struct rte_mem_config *mem_config;
-} __attribute__((__packed__));
-
-/**
- * Get the global configuration structure.
- *
- * @return
- * A pointer to the global configuration structure.
- */
-struct rte_config *rte_eal_get_configuration(void);
-
-/**
* Get the process type in a multi-process setup
*
* @return
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 634ca21..842eb9d 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -27,6 +27,7 @@
#include "eal_internal_cfg.h"
#include "eal_memalloc.h"
#include "eal_memcfg.h"
+#include "eal_private.h"
#include "malloc_elem.h"
#include "malloc_heap.h"
#include "malloc_mp.h"
diff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c
index fecd9a9..044d3a9 100644
--- a/lib/librte_eal/common/rte_malloc.c
+++ b/lib/librte_eal/common/rte_malloc.c
@@ -26,6 +26,7 @@
#include "malloc_heap.h"
#include "eal_memalloc.h"
#include "eal_memcfg.h"
+#include "eal_private.h"
/* Free the memory space back to heap */
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index d88649e..3478d3b 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -17,7 +17,6 @@ DPDK_2.0 {
rte_dump_tailq;
rte_eal_alarm_cancel;
rte_eal_alarm_set;
- rte_eal_get_configuration;
rte_eal_get_lcore_state;
rte_eal_get_physmem_size;
rte_eal_has_hugepages;
--
1.8.3.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 03/12] eal: remove deprecated malloc virt2phys function
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
2019-10-25 13:56 12% ` [dpdk-dev] [PATCH v3 01/12] eal: make lcore config private David Marchand
2019-10-25 13:56 5% ` [dpdk-dev] [PATCH v3 02/12] eal: remove deprecated CPU flags check function David Marchand
@ 2019-10-25 13:56 5% ` David Marchand
2019-10-25 13:56 4% ` [dpdk-dev] [PATCH v3 06/12] pci: remove deprecated functions David Marchand
` (3 subsequent siblings)
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-25 13:56 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, ktraynor, Neil Horman,
John McNamara, Marko Kovacevic
Remove rte_malloc_virt2phy as announced previously.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_19_11.rst | 3 +++
lib/librte_eal/common/include/rte_malloc.h | 7 -------
3 files changed, 3 insertions(+), 10 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 50ac348..bbd5863 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,9 +34,6 @@ Deprecation Notices
+ ``rte_eal_devargs_type_count``
-* eal: The ``rte_malloc_virt2phy`` function has been deprecated and replaced
- by ``rte_malloc_virt2iova`` since v17.11 and will be removed.
-
* vfio: removal of ``rte_vfio_dma_map`` and ``rte_vfio_dma_unmap`` APIs which
have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
functions. The due date for the removal targets DPDK 20.02.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 0520bc9..b3f7509 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -317,6 +317,9 @@ ABI Changes
* eal: removed the ``rte_cpu_check_supported`` function, replaced by
``rte_cpu_is_supported`` since dpdk v17.08.
+* eal: removed the ``rte_malloc_virt2phy`` function, replaced by
+ ``rte_malloc_virt2iova`` since v17.11.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/include/rte_malloc.h b/lib/librte_eal/common/include/rte_malloc.h
index 3593fb4..42ca051 100644
--- a/lib/librte_eal/common/include/rte_malloc.h
+++ b/lib/librte_eal/common/include/rte_malloc.h
@@ -553,13 +553,6 @@ rte_malloc_set_limit(const char *type, size_t max);
rte_iova_t
rte_malloc_virt2iova(const void *addr);
-__rte_deprecated
-static inline phys_addr_t
-rte_malloc_virt2phy(const void *addr)
-{
- return rte_malloc_virt2iova(addr);
-}
-
#ifdef __cplusplus
}
#endif
--
1.8.3.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v3 09/12] eal: deinline lcore APIs
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
` (3 preceding siblings ...)
2019-10-25 13:56 4% ` [dpdk-dev] [PATCH v3 06/12] pci: remove deprecated functions David Marchand
@ 2019-10-25 13:56 3% ` David Marchand
2019-10-25 13:56 4% ` [dpdk-dev] [PATCH v3 11/12] eal: make the global configuration private David Marchand
2019-10-25 13:56 9% ` [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI David Marchand
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-25 13:56 UTC (permalink / raw)
To: dev; +Cc: stephen, anatoly.burakov, thomas, ktraynor
Those functions are used to set up or to take control decisions.
Move them into the EAL common code and put them directly in the stable
ABI.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
lib/librte_eal/common/eal_common_lcore.c | 38 ++++++++++++++++++++++++++++
lib/librte_eal/common/include/rte_lcore.h | 41 +++----------------------------
lib/librte_eal/rte_eal_version.map | 10 ++++++++
3 files changed, 52 insertions(+), 37 deletions(-)
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index 59a2fd1..b01a210 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -16,6 +16,16 @@
#include "eal_private.h"
#include "eal_thread.h"
+unsigned int rte_get_master_lcore(void)
+{
+ return rte_eal_get_configuration()->master_lcore;
+}
+
+unsigned int rte_lcore_count(void)
+{
+ return rte_eal_get_configuration()->lcore_count;
+}
+
int rte_lcore_index(int lcore_id)
{
if (unlikely(lcore_id >= RTE_MAX_LCORE))
@@ -43,6 +53,34 @@ rte_cpuset_t rte_lcore_cpuset(unsigned int lcore_id)
return lcore_config[lcore_id].cpuset;
}
+int rte_lcore_is_enabled(unsigned int lcore_id)
+{
+ struct rte_config *cfg = rte_eal_get_configuration();
+
+ if (lcore_id >= RTE_MAX_LCORE)
+ return 0;
+ return cfg->lcore_role[lcore_id] == ROLE_RTE;
+}
+
+unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+{
+ i++;
+ if (wrap)
+ i %= RTE_MAX_LCORE;
+
+ while (i < RTE_MAX_LCORE) {
+ if (!rte_lcore_is_enabled(i) ||
+ (skip_master && (i == rte_get_master_lcore()))) {
+ i++;
+ if (wrap)
+ i %= RTE_MAX_LCORE;
+ continue;
+ }
+ break;
+ }
+ return i;
+}
+
unsigned int
rte_lcore_to_socket_id(unsigned int lcore_id)
{
diff --git a/lib/librte_eal/common/include/rte_lcore.h b/lib/librte_eal/common/include/rte_lcore.h
index 0c68391..ea40c25 100644
--- a/lib/librte_eal/common/include/rte_lcore.h
+++ b/lib/librte_eal/common/include/rte_lcore.h
@@ -93,11 +93,7 @@ rte_lcore_id(void)
* @return
* the id of the master lcore
*/
-static inline unsigned
-rte_get_master_lcore(void)
-{
- return rte_eal_get_configuration()->master_lcore;
-}
+unsigned int rte_get_master_lcore(void);
/**
* Return the number of execution units (lcores) on the system.
@@ -105,12 +101,7 @@ rte_get_master_lcore(void)
* @return
* the number of execution units (lcores) on the system.
*/
-static inline unsigned
-rte_lcore_count(void)
-{
- const struct rte_config *cfg = rte_eal_get_configuration();
- return cfg->lcore_count;
-}
+unsigned int rte_lcore_count(void);
/**
* Return the index of the lcore starting from zero.
@@ -215,14 +206,7 @@ rte_lcore_cpuset(unsigned int lcore_id);
* @return
* True if the given lcore is enabled; false otherwise.
*/
-static inline int
-rte_lcore_is_enabled(unsigned int lcore_id)
-{
- struct rte_config *cfg = rte_eal_get_configuration();
- if (lcore_id >= RTE_MAX_LCORE)
- return 0;
- return cfg->lcore_role[lcore_id] == ROLE_RTE;
-}
+int rte_lcore_is_enabled(unsigned int lcore_id);
/**
* Get the next enabled lcore ID.
@@ -237,25 +221,8 @@ rte_lcore_is_enabled(unsigned int lcore_id)
* @return
* The next lcore_id or RTE_MAX_LCORE if not found.
*/
-static inline unsigned int
-rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
-{
- i++;
- if (wrap)
- i %= RTE_MAX_LCORE;
+unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
- while (i < RTE_MAX_LCORE) {
- if (!rte_lcore_is_enabled(i) ||
- (skip_master && (i == rte_get_master_lcore()))) {
- i++;
- if (wrap)
- i %= RTE_MAX_LCORE;
- continue;
- }
- break;
- }
- return i;
-}
/**
* Macro to browse all running lcores.
*/
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 6d7e0e4..d88649e 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -310,6 +310,16 @@ DPDK_19.08 {
} DPDK_19.05;
+DPDK_19.11 {
+ global:
+
+ rte_get_master_lcore;
+ rte_get_next_lcore;
+ rte_lcore_count;
+ rte_lcore_is_enabled;
+
+} DPDK_19.08;
+
EXPERIMENTAL {
global:
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3 06/12] pci: remove deprecated functions
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
` (2 preceding siblings ...)
2019-10-25 13:56 5% ` [dpdk-dev] [PATCH v3 03/12] eal: remove deprecated malloc virt2phys function David Marchand
@ 2019-10-25 13:56 4% ` David Marchand
2019-10-25 13:56 3% ` [dpdk-dev] [PATCH v3 09/12] eal: deinline lcore APIs David Marchand
` (2 subsequent siblings)
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-25 13:56 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, ktraynor, Neil Horman,
John McNamara, Marko Kovacevic, Gaetan Rivet
Those functions have been deprecated since 17.11 and have 1:1
replacements.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/rel_notes/deprecation.rst | 7 -----
doc/guides/rel_notes/release_19_11.rst | 6 +++++
lib/librte_pci/rte_pci.c | 19 --------------
lib/librte_pci/rte_pci.h | 47 ----------------------------------
lib/librte_pci/rte_pci_version.map | 3 ---
5 files changed, 6 insertions(+), 76 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bbd5863..cf7744e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -38,13 +38,6 @@ Deprecation Notices
have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
functions. The due date for the removal targets DPDK 20.02.
-* pci: Several exposed functions are misnamed.
- The following functions are deprecated starting from v17.11 and are replaced:
-
- - ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
- - ``eal_parse_pci_DomBDF`` replaced by ``rte_pci_addr_parse``
- - ``rte_eal_compare_pci_addr`` replaced by ``rte_pci_addr_cmp``
-
* dpaa2: removal of ``rte_dpaa2_memsegs`` structure which has been replaced
by a pa-va search library. This structure was earlier being used for holding
memory segments used by dpaa2 driver for faster pa->va translation. This
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index b3f7509..8d88257 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -320,6 +320,12 @@ ABI Changes
* eal: removed the ``rte_malloc_virt2phy`` function, replaced by
``rte_malloc_virt2iova`` since v17.11.
+* pci: removed the following deprecated functions since dpdk:
+
+ - ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
+ - ``eal_parse_pci_DomBDF`` replaced by ``rte_pci_addr_parse``
+ - ``rte_eal_compare_pci_addr`` replaced by ``rte_pci_addr_cmp``
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_pci/rte_pci.c b/lib/librte_pci/rte_pci.c
index f400178..a753cf3 100644
--- a/lib/librte_pci/rte_pci.c
+++ b/lib/librte_pci/rte_pci.c
@@ -87,18 +87,6 @@ pci_dbdf_parse(const char *input, struct rte_pci_addr *dev_addr)
return 0;
}
-int
-eal_parse_pci_BDF(const char *input, struct rte_pci_addr *dev_addr)
-{
- return pci_bdf_parse(input, dev_addr);
-}
-
-int
-eal_parse_pci_DomBDF(const char *input, struct rte_pci_addr *dev_addr)
-{
- return pci_dbdf_parse(input, dev_addr);
-}
-
void
rte_pci_device_name(const struct rte_pci_addr *addr,
char *output, size_t size)
@@ -110,13 +98,6 @@ rte_pci_device_name(const struct rte_pci_addr *addr,
}
int
-rte_eal_compare_pci_addr(const struct rte_pci_addr *addr,
- const struct rte_pci_addr *addr2)
-{
- return rte_pci_addr_cmp(addr, addr2);
-}
-
-int
rte_pci_addr_cmp(const struct rte_pci_addr *addr,
const struct rte_pci_addr *addr2)
{
diff --git a/lib/librte_pci/rte_pci.h b/lib/librte_pci/rte_pci.h
index eaa9d07..c878914 100644
--- a/lib/librte_pci/rte_pci.h
+++ b/lib/librte_pci/rte_pci.h
@@ -106,37 +106,6 @@ struct mapped_pci_resource {
TAILQ_HEAD(mapped_pci_res_list, mapped_pci_resource);
/**
- * @deprecated
- * Utility function to produce a PCI Bus-Device-Function value
- * given a string representation. Assumes that the BDF is provided without
- * a domain prefix (i.e. domain returned is always 0)
- *
- * @param input
- * The input string to be parsed. Should have the format XX:XX.X
- * @param dev_addr
- * The PCI Bus-Device-Function address to be returned.
- * Domain will always be returned as 0
- * @return
- * 0 on success, negative on error.
- */
-int eal_parse_pci_BDF(const char *input, struct rte_pci_addr *dev_addr);
-
-/**
- * @deprecated
- * Utility function to produce a PCI Bus-Device-Function value
- * given a string representation. Assumes that the BDF is provided including
- * a domain prefix.
- *
- * @param input
- * The input string to be parsed. Should have the format XXXX:XX:XX.X
- * @param dev_addr
- * The PCI Bus-Device-Function address to be returned
- * @return
- * 0 on success, negative on error.
- */
-int eal_parse_pci_DomBDF(const char *input, struct rte_pci_addr *dev_addr);
-
-/**
* Utility function to write a pci device name, this device name can later be
* used to retrieve the corresponding rte_pci_addr using eal_parse_pci_*
* BDF helpers.
@@ -152,22 +121,6 @@ void rte_pci_device_name(const struct rte_pci_addr *addr,
char *output, size_t size);
/**
- * @deprecated
- * Utility function to compare two PCI device addresses.
- *
- * @param addr
- * The PCI Bus-Device-Function address to compare
- * @param addr2
- * The PCI Bus-Device-Function address to compare
- * @return
- * 0 on equal PCI address.
- * Positive on addr is greater than addr2.
- * Negative on addr is less than addr2, or error.
- */
-int rte_eal_compare_pci_addr(const struct rte_pci_addr *addr,
- const struct rte_pci_addr *addr2);
-
-/**
* Utility function to compare two PCI device addresses.
*
* @param addr
diff --git a/lib/librte_pci/rte_pci_version.map b/lib/librte_pci/rte_pci_version.map
index c028027..03790cb 100644
--- a/lib/librte_pci/rte_pci_version.map
+++ b/lib/librte_pci/rte_pci_version.map
@@ -1,11 +1,8 @@
DPDK_17.11 {
global:
- eal_parse_pci_BDF;
- eal_parse_pci_DomBDF;
pci_map_resource;
pci_unmap_resource;
- rte_eal_compare_pci_addr;
rte_pci_addr_cmp;
rte_pci_addr_parse;
rte_pci_device_name;
--
1.8.3.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 02/12] eal: remove deprecated CPU flags check function
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
2019-10-25 13:56 12% ` [dpdk-dev] [PATCH v3 01/12] eal: make lcore config private David Marchand
@ 2019-10-25 13:56 5% ` David Marchand
2019-10-25 13:56 5% ` [dpdk-dev] [PATCH v3 03/12] eal: remove deprecated malloc virt2phys function David Marchand
` (4 subsequent siblings)
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-25 13:56 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, ktraynor, Neil Horman,
John McNamara, Marko Kovacevic
Remove rte_cpu_check_supported as announced previously.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_19_11.rst | 3 +++
lib/librte_eal/common/eal_common_cpuflags.c | 11 -----------
lib/librte_eal/common/include/generic/rte_cpuflags.h | 9 ---------
lib/librte_eal/rte_eal_version.map | 1 -
5 files changed, 3 insertions(+), 24 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e4a33e0..50ac348 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,9 +34,6 @@ Deprecation Notices
+ ``rte_eal_devargs_type_count``
-* eal: The ``rte_cpu_check_supported`` function has been deprecated since
- v17.08 and will be removed.
-
* eal: The ``rte_malloc_virt2phy`` function has been deprecated and replaced
by ``rte_malloc_virt2iova`` since v17.11 and will be removed.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index d2c4e9e..0520bc9 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -314,6 +314,9 @@ ABI Changes
* eal: made the ``lcore_config`` struct and global symbol private.
+* eal: removed the ``rte_cpu_check_supported`` function, replaced by
+ ``rte_cpu_is_supported`` since dpdk v17.08.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/eal_common_cpuflags.c b/lib/librte_eal/common/eal_common_cpuflags.c
index 3a055f7..dc5f75d 100644
--- a/lib/librte_eal/common/eal_common_cpuflags.c
+++ b/lib/librte_eal/common/eal_common_cpuflags.c
@@ -7,17 +7,6 @@
#include <rte_common.h>
#include <rte_cpuflags.h>
-/**
- * Checks if the machine is adequate for running the binary. If it is not, the
- * program exits with status 1.
- */
-void
-rte_cpu_check_supported(void)
-{
- if (!rte_cpu_is_supported())
- exit(1);
-}
-
int
rte_cpu_is_supported(void)
{
diff --git a/lib/librte_eal/common/include/generic/rte_cpuflags.h b/lib/librte_eal/common/include/generic/rte_cpuflags.h
index 156ea00..872f0eb 100644
--- a/lib/librte_eal/common/include/generic/rte_cpuflags.h
+++ b/lib/librte_eal/common/include/generic/rte_cpuflags.h
@@ -49,15 +49,6 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature);
/**
* This function checks that the currently used CPU supports the CPU features
* that were specified at compile time. It is called automatically within the
- * EAL, so does not need to be used by applications.
- */
-__rte_deprecated
-void
-rte_cpu_check_supported(void);
-
-/**
- * This function checks that the currently used CPU supports the CPU features
- * that were specified at compile time. It is called automatically within the
* EAL, so does not need to be used by applications. This version returns a
* result so that decisions may be made (for instance, graceful shutdowns).
*/
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index aeedf39..0887549 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -8,7 +8,6 @@ DPDK_2.0 {
per_lcore__rte_errno;
rte_calloc;
rte_calloc_socket;
- rte_cpu_check_supported;
rte_cpu_get_flag_enabled;
rte_cycles_vmware_tsc_map;
rte_delay_us;
--
1.8.3.1
^ permalink raw reply [relevance 5%]
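With `rte_cpu_check_supported` gone, applications are left with `rte_cpu_is_supported()` and decide themselves how to fail. A minimal caller-side sketch of that replacement pattern (the `cpu_is_supported` stub and `check_cpu_or_fail` helper are illustrative stand-ins, not DPDK API):

```c
#include <stdio.h>

/* Stand-in for rte_cpu_is_supported(): returns nonzero when the running
 * CPU provides every feature the binary was compiled for. */
static int cpu_is_supported(void)
{
	return 1; /* assume a capable CPU for this sketch */
}

/* What callers of the removed rte_cpu_check_supported() migrate to:
 * the application chooses how to fail, instead of exit(1) inside EAL. */
static int check_cpu_or_fail(void)
{
	if (!cpu_is_supported()) {
		fprintf(stderr, "CPU lacks required features\n");
		return -1;
	}
	return 0;
}
```

The point of the API change is visible here: the decision (return an error, log, shut down gracefully) moves out of the library and into the caller.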
* [dpdk-dev] [PATCH v3 01/12] eal: make lcore config private
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
@ 2019-10-25 13:56 12% ` David Marchand
2019-10-25 15:18 0% ` Burakov, Anatoly
2019-10-25 13:56 5% ` [dpdk-dev] [PATCH v3 02/12] eal: remove deprecated CPU flags check function David Marchand
` (5 subsequent siblings)
6 siblings, 1 reply; 200+ results
From: David Marchand @ 2019-10-25 13:56 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, ktraynor, Neil Horman,
John McNamara, Marko Kovacevic, Harry van Haaren,
Harini Ramakrishnan, Omar Cardona, Anand Rawat, Ranjit Menon
From: Stephen Hemminger <stephen@networkplumber.org>
The internal structure of lcore_config does not need to be part of
visible API/ABI. Make it private to EAL.
Rearrange the structure so it takes less memory (and cache footprint).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
Based on Stephen v8: http://patchwork.dpdk.org/patch/60443/
Changes since Stephen v8:
- do not change core_id, socket_id and core_index types,
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_19_11.rst | 2 ++
lib/librte_eal/common/eal_common_launch.c | 2 ++
lib/librte_eal/common/eal_private.h | 25 +++++++++++++++++++++++++
lib/librte_eal/common/include/rte_lcore.h | 24 ------------------------
lib/librte_eal/common/rte_service.c | 2 ++
lib/librte_eal/rte_eal_version.map | 1 -
lib/librte_eal/windows/eal/eal_thread.c | 1 +
8 files changed, 32 insertions(+), 29 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 237813b..e4a33e0 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -23,10 +23,6 @@ Deprecation Notices
* eal: The function ``rte_eal_remote_launch`` will return new error codes
after read or write error on the pipe, instead of calling ``rte_panic``.
-* eal: The ``lcore_config`` struct and global symbol will be made private to
- remove it from the externally visible ABI and allow it to be updated in the
- future.
-
* eal: both declaring and identifying devices will be streamlined in v18.11.
New functions will appear to query a specific port from buses, classes of
device and device drivers. Device declaration will be made coherent with the
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index e77d226..d2c4e9e 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -312,6 +312,8 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* eal: made the ``lcore_config`` struct and global symbol private.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index fe0ba3f..cf52d71 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -15,6 +15,8 @@
#include <rte_per_lcore.h>
#include <rte_lcore.h>
+#include "eal_private.h"
+
/*
* Wait until a lcore finished its job.
*/
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index 798ede5..0e4b033 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -10,6 +10,31 @@
#include <stdio.h>
#include <rte_dev.h>
+#include <rte_lcore.h>
+
+/**
+ * Structure storing internal configuration (per-lcore)
+ */
+struct lcore_config {
+ pthread_t thread_id; /**< pthread identifier */
+ int pipe_master2slave[2]; /**< communication pipe with master */
+ int pipe_slave2master[2]; /**< communication pipe with master */
+
+ lcore_function_t * volatile f; /**< function to call */
+ void * volatile arg; /**< argument of function */
+ volatile int ret; /**< return value of function */
+
+ volatile enum rte_lcore_state_t state; /**< lcore state */
+ unsigned int socket_id; /**< physical socket id for this lcore */
+ unsigned int core_id; /**< core number on socket for this lcore */
+ int core_index; /**< relative index, starting from 0 */
+ uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
+ uint8_t detected; /**< true if lcore was detected */
+
+ rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
+};
+
+extern struct lcore_config lcore_config[RTE_MAX_LCORE];
/**
* Initialize the memzone subsystem (private to eal).
diff --git a/lib/librte_eal/common/include/rte_lcore.h b/lib/librte_eal/common/include/rte_lcore.h
index c86f72e..0c68391 100644
--- a/lib/librte_eal/common/include/rte_lcore.h
+++ b/lib/librte_eal/common/include/rte_lcore.h
@@ -66,30 +66,6 @@ typedef cpuset_t rte_cpuset_t;
} while (0)
#endif
-/**
- * Structure storing internal configuration (per-lcore)
- */
-struct lcore_config {
- unsigned detected; /**< true if lcore was detected */
- pthread_t thread_id; /**< pthread identifier */
- int pipe_master2slave[2]; /**< communication pipe with master */
- int pipe_slave2master[2]; /**< communication pipe with master */
- lcore_function_t * volatile f; /**< function to call */
- void * volatile arg; /**< argument of function */
- volatile int ret; /**< return value of function */
- volatile enum rte_lcore_state_t state; /**< lcore state */
- unsigned socket_id; /**< physical socket id for this lcore */
- unsigned core_id; /**< core number on socket for this lcore */
- int core_index; /**< relative index, starting from 0 */
- rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
- uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
-};
-
-/**
- * Internal configuration (per-lcore)
- */
-extern struct lcore_config lcore_config[RTE_MAX_LCORE];
-
RTE_DECLARE_PER_LCORE(unsigned, _lcore_id); /**< Per thread "lcore id". */
RTE_DECLARE_PER_LCORE(rte_cpuset_t, _cpuset); /**< Per thread "cpuset". */
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index beb9691..79235c0 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -21,6 +21,8 @@
#include <rte_memory.h>
#include <rte_malloc.h>
+#include "eal_private.h"
+
#define RTE_SERVICE_NUM_MAX 64
#define SERVICE_F_REGISTERED (1 << 0)
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 7cbf82d..aeedf39 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -4,7 +4,6 @@ DPDK_2.0 {
__rte_panic;
eal_parse_sysfs_value;
eal_timer_source;
- lcore_config;
per_lcore__lcore_id;
per_lcore__rte_errno;
rte_calloc;
diff --git a/lib/librte_eal/windows/eal/eal_thread.c b/lib/librte_eal/windows/eal/eal_thread.c
index 906502f..0591d4c 100644
--- a/lib/librte_eal/windows/eal/eal_thread.c
+++ b/lib/librte_eal/windows/eal/eal_thread.c
@@ -12,6 +12,7 @@
#include <rte_common.h>
#include <eal_thread.h>
+#include "eal_private.h"
RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
--
1.8.3.1
^ permalink raw reply [relevance 12%]
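The patch above follows the classic opaque-struct pattern: forward-declare the type in the public header, keep the full definition in a private header, and reach fields only through accessors, so the layout can change without an ABI break. A self-contained sketch of the pattern (names are illustrative, not the EAL API):

```c
/* Public-header view: the layout is hidden, so it is free to change. */
struct lcore_cfg;                                   /* opaque handle */
const struct lcore_cfg *lcore_cfg_get(void);
unsigned int lcore_cfg_socket_id(const struct lcore_cfg *cfg);

/* Private (library-internal) definition and storage. */
struct lcore_cfg {
	unsigned int socket_id; /* physical socket id for this lcore */
	unsigned int core_id;   /* core number on socket for this lcore */
};

static struct lcore_cfg g_cfg = { .socket_id = 1, .core_id = 3 };

const struct lcore_cfg *
lcore_cfg_get(void)
{
	return &g_cfg;
}

unsigned int
lcore_cfg_socket_id(const struct lcore_cfg *cfg)
{
	return cfg->socket_id;
}
```

Callers that previously read `lcore_config[i].socket_id` directly would instead go through the accessor, which is exactly what removing the symbol from the version map enforces.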
* [dpdk-dev] [PATCH v3 00/12] EAL and PCI ABI changes for 19.11
2019-10-22 9:32 8% [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11 David Marchand
` (5 preceding siblings ...)
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
@ 2019-10-25 13:55 8% ` David Marchand
2019-10-25 13:56 12% ` [dpdk-dev] [PATCH v3 01/12] eal: make lcore config private David Marchand
` (6 more replies)
6 siblings, 7 replies; 200+ results
From: David Marchand @ 2019-10-25 13:55 UTC (permalink / raw)
To: dev; +Cc: stephen, anatoly.burakov, thomas, ktraynor
Let's prepare for the ABI freeze.
The first patches are about changes that had been announced before.
The malloc_heap structure from the memory subsystem can be hidden.
The PCI library had some forgotten deprecated APIs that are removed with
this series.
rte_logs could be hidden, but I left it exposed for now.
I added an accessor to rte_logs.file, and added a deprecation notice
announcing its removal from the public ABI.
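The accessor idea can be sketched as a getter that hides the internal stream and falls back to a default, so the global struct no longer needs to be exported (names here are illustrative, not the exact API added by the series):

```c
#include <stdio.h>

/* Internal state; no longer needs to be an exported global symbol. */
static FILE *log_stream; /* NULL until the application sets one */

void
log_set_stream(FILE *f)
{
	log_stream = f;
}

/* Callers ask for the current stream instead of reading rte_logs.file. */
FILE *
log_get_stream(void)
{
	return log_stream != NULL ? log_stream : stderr;
}
```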
Changelog since v2:
- dropped patch 8 and added a deprecation notice on rte_logs instead,
Changelog since v1:
- I went a step further, hiding rte_config after de-inlining non critical
functions
--
David Marchand
David Marchand (11):
eal: remove deprecated CPU flags check function
eal: remove deprecated malloc virt2phys function
mem: hide internal heap header
net/bonding: use non deprecated PCI API
pci: remove deprecated functions
log: add log stream accessor
test/mem: remove dependency on EAL internals
eal: deinline lcore APIs
eal: factorize lcore role code
eal: make the global configuration private
doc: announce global logs struct removal from ABI
Stephen Hemminger (1):
eal: make lcore config private
app/test-pmd/testpmd.c | 1 -
app/test/test_memzone.c | 50 +++++++++------
doc/guides/rel_notes/deprecation.rst | 19 +-----
doc/guides/rel_notes/release_19_11.rst | 17 +++++
drivers/common/qat/qat_logs.c | 3 +-
drivers/common/qat/qat_logs.h | 3 +-
drivers/net/bonding/rte_eth_bond_args.c | 5 +-
lib/librte_eal/common/Makefile | 2 +-
lib/librte_eal/common/eal_common_cpuflags.c | 11 ----
lib/librte_eal/common/eal_common_launch.c | 2 +
lib/librte_eal/common/eal_common_lcore.c | 48 ++++++++++++++
lib/librte_eal/common/eal_common_log.c | 33 +++++-----
lib/librte_eal/common/eal_common_mcfg.c | 1 +
lib/librte_eal/common/eal_memcfg.h | 3 +-
lib/librte_eal/common/eal_private.h | 57 +++++++++++++++++
.../common/include/generic/rte_cpuflags.h | 9 ---
lib/librte_eal/common/include/rte_eal.h | 43 -------------
lib/librte_eal/common/include/rte_lcore.h | 73 ++++------------------
lib/librte_eal/common/include/rte_log.h | 13 ++++
lib/librte_eal/common/include/rte_malloc.h | 7 ---
lib/librte_eal/common/include/rte_malloc_heap.h | 35 -----------
lib/librte_eal/common/malloc_heap.c | 1 +
lib/librte_eal/common/malloc_heap.h | 25 +++++++-
lib/librte_eal/common/meson.build | 1 -
lib/librte_eal/common/rte_malloc.c | 1 +
lib/librte_eal/common/rte_service.c | 2 +
lib/librte_eal/freebsd/eal/eal.c | 7 ---
lib/librte_eal/linux/eal/eal.c | 7 ---
lib/librte_eal/rte_eal_version.map | 16 ++++-
lib/librte_eal/windows/eal/eal_thread.c | 1 +
lib/librte_pci/rte_pci.c | 19 ------
lib/librte_pci/rte_pci.h | 47 --------------
lib/librte_pci/rte_pci_version.map | 3 -
33 files changed, 253 insertions(+), 312 deletions(-)
delete mode 100644 lib/librte_eal/common/include/rte_malloc_heap.h
--
1.8.3.1
^ permalink raw reply [relevance 8%]
* Re: [dpdk-dev] [PATCH v6 2/4] doc: changes to abi policy introducing major abi versions
2019-10-24 0:43 11% ` Thomas Monjalon
2019-10-25 9:10 5% ` Ray Kinsella
@ 2019-10-25 12:45 10% ` Ray Kinsella
1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 12:45 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
On 24/10/2019 01:43, Thomas Monjalon wrote:
> 27/09/2019 18:54, Ray Kinsella:
>> This policy change introduces major ABI versions, these are
>> declared every year, typically aligned with the LTS release
>> and are supported by subsequent releases in the following year.
>
> No, the ABI number may stand for more than one year.
ok, I will remove the reference to one year here.
Just on a point of order, what was approved by the technical board was `one year`, initially.
So the ABI Policy at this point in time is stability for `one year`.
I tried to make the `one year` point in as few places as possible,
simply to reduce the rework later when we lengthen the ABI support period.
I also include a note up front, making abundantly clear the intention to lengthen the support period, as follows.
"In 2019, the DPDK community stated it’s intention to move to ABI stable releases, over a number of release cycles. Beginning with maintaining ABI stability through one year of DPDK releases starting from DPDK 19.11. This policy will be reviewed in 2020, with intention of lengthening the stability period."
>
>> This change is intended to improve ABI stabilty for those projects
>> consuming DPDK.
>>
>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>> ---
>> doc/guides/contributing/abi_policy.rst | 321 +++++++++++++++------
>> .../contributing/img/abi_stability_policy.png | Bin 0 -> 61277 bytes
>> doc/guides/contributing/img/what_is_an_abi.png | Bin 0 -> 151683 bytes
>
> As an Open Source project, binary files are rejected :)
> Please provide the image source as SVG if the diagram is really required.
ACK, done
>
> [...]
>> +#. Major ABI versions are declared every **year** and are then supported for one
>> + year, typically aligned with the :ref:`LTS release <stable_lts_releases>`.
>
> As discussed on the cover letter, please avoid making "every year" cadence, the rule.
It's very hard to remove this one; what can we say instead?
#. Major ABI versions are declared on some cadence and are then supported for some
period unknown, typically aligned with the `LTS release <stable_lts_releases>`.
>
>> +#. The ABI version is managed at a project level in DPDK, with the ABI version
>> + reflected in all :ref:`library's soname <what_is_soname>`.
>
> Should we make clear here that an experimental ABI change has no impact
> on the ABI version number?
Absolutely, see four points below.
#. Libraries or APIs marked as :ref:`Experimental <experimental_apis>` are not
considered part of an ABI version and may change without constraint.
>> +#. The ABI should be preserved and not changed lightly. ABI changes must follow
>> + the outlined :ref:`deprecation process <abi_changes>`.
>> +#. The addition of symbols is generally not problematic. The modification of
>> + symbols is managed with :ref:`ABI Versioning <abi_versioning>`.
>> +#. The removal of symbols is considered an :ref:`ABI breakage <abi_breakages>`,
>> + once approved these will form part of the next ABI version.
>> +#. Libraries or APIs marked as :ref:`Experimental <experimental_apis>` are not
>> + considered part of an ABI version and may change without constraint.
>> +#. Updates to the :ref:`minimum hardware requirements <hw_rqmts>`, which drop
>> + support for hardware which was previously supported, should be treated as an
>> + ABI change.
>> +
>> +.. note::
>> +
>> + In 2019, the DPDK community stated it's intention to move to ABI stable
>> + releases, over a number of release cycles. Beginning with maintaining ABI
>> + stability through one year of DPDK releases starting from DPDK 19.11.
>
> There is no verb in this sentence.
ACK, done.
>
>> + This
>> + policy will be reviewed in 2020, with intention of lengthening the stability
>> + period.
>
>> +What is an ABI version?
>> +~~~~~~~~~~~~~~~~~~~~~~~
>> +
>> +An ABI version is an instance of a library's ABI at a specific release. Certain
>> +releases are considered by the community to be milestone releases, the yearly
>> +LTS for example. Supporting those milestone release's ABI for some number of
>> +subsequent releases is desirable to facilitate application upgrade. Those ABI
>> +version's aligned with milestones release are therefore called 'ABI major
>> +versions' and are supported for some number of releases.
>
> If you understand this paragraph, please raise your hand :)
We can simplify as follows.
An ABI version is an instance of a library's ABI at a specific release. Certain
releases are considered to be milestone releases, the yearly LTS release for
example. The ABI of a milestone release may be designated as a 'major ABI
version', where this ABI version is then supported for some number of subsequent
releases.
Major ABI version support in subsequent releases facilitates application
upgrade, by enabling applications built against the milestone release, to
upgrade to subsequent releases of the library without a rebuild.
>
>> +More details on major ABI version can be found in the :ref:`ABI versioning
>> +<major_abi_versions>` guide.
>>
>> The DPDK ABI policy
>> -~~~~~~~~~~~~~~~~~~~
>> +-------------------
>> +
>> +A major ABI version is declared every year, aligned with that year's LTS
>> +release, e.g. v19.11. This ABI version is then supported for one year by all
>> +subsequent releases within that time period, until the next LTS release, e.g.
>> +v20.11.
>
> Again, the "one year" limit should not be documented as a general rule.
As I said above, it's not obvious to me what I would say in its place.
Can we leave it as is until the community agrees to lengthen the period?
>
>> +At the declaration of a major ABI version, major version numbers encoded in
>> +libraries soname's are bumped to indicate the new version, with the minor
>> +version reset to ``0``. An example would be ``librte_eal.so.20.3`` would become
>> +``librte_eal.so.21.0``.
>>
>> +The ABI may then change multiple times, without warning, between the last major
>> +ABI version increment and the HEAD label of the git tree, with the condition
>> +that ABI compatibility with the major ABI version is preserved and therefore
>> +soname's do not change.
>>
>> +Minor versions are incremented to indicate the release of a new ABI compatible
>> +DPDK release, typically the DPDK quarterly releases. An example of this, might
>> +be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
>> +release, following the declaration of the new major ABI version ``20``.
>
> I don't understand the benefit of having a minor ABI version number.
> Can we just have v20 and v21 as we discussed in the techboard?
> Is it because an application linked with v20.2 cannot work with v20.1?
You need to have minor versions for forward compatibility.
So let's say v20 is the major ABI in v19.11.
However, a new function `rte_foobar` gets added in DPDK v20.02.
`rte_foobar` is no longer experimental.
I write a new application `funet` that _needs_ `rte_foobar`.
The only way I can tell I have the right library version,
to satisfy `funet`'s dependencies, is the minor version number.
In this case `funet`s author can also have a reasonable expectation,
that `rte_foobar` will become part of the next `major ABI version`.
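The dependency check described above reduces to a simple rule: within a major ABI version, symbols are only ever added, so a library satisfies an application when the majors match and the library's minor is at least the minor the application was built against. A sketch of that reasoning (illustrative code, not part of DPDK):

```c
/* Returns 1 when a library versioned lib_major.lib_minor can serve an
 * application linked against app_major.app_minor. */
static int
abi_satisfies(unsigned int lib_major, unsigned int lib_minor,
	      unsigned int app_major, unsigned int app_minor)
{
	/* A different major means an incompatible ABI; a lower minor may
	 * lack symbols (e.g. rte_foobar, added only in v20.1). */
	return lib_major == app_major && lib_minor >= app_minor;
}
```

In the `funet` example: a v20.0 library fails the check because `rte_foobar` is missing, while v20.1 and any later v20.x pass.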
>
> If we must have a minor number, I suggest a numbering closer to release numbers:
> release 19.11 -> ABI 19.11
> release 20.02 -> ABI 19.14
> release 20.05 -> ABI 19.17
> release 20.08 -> ABI 19.20
> It shows the month number as if the first year never finishes.
> And when a new ABI is declared, release and ABI versions are the same:
> release 20.11 -> ABI 20.11
What was agreed at the technical board is that DPDK v19.11 is ABI v20.
Minor numbers are usually incremental, and release numbers do not need
to correlate with ABI version numbers.
We previously discussed that
v20.1 = LTS + 1 release = v20.02
v20.2 = LTS + 2 release = v20.05
All that said.
I am very eager not to get into describing release management in the ABI policy.
The ABI policy is hard enough to describe without also detailing DPDK's release management.
>
>
>> +ABI versions, are supported by each release until such time as the next major
>> +ABI version is declared. At that time, the deprecation of the previous major ABI
>> +version will be noted in the Release Notes with guidance on individual symbol
>> +depreciation and upgrade notes provided.
>
> I suggest a rewording:
> "
> An ABI version is supported in all new releases
> until the next major ABI version is declared.
> When changing the major ABI version,
> the release notes give details about all ABI changes.
> "
ACK, happy to change, much of that wording was reused from the old policy.
> [...]
>> + - The acknowledgment of a member of the technical board, as a delegate of the
>> + `technical board <https://core.dpdk.org/techboard/>`_ acknowledging the
>> + need for the ABI change, is also mandatory.
>
> Only one? What about 3 members minimum?
My feeling is that three would become a real headache over time.
It would limit the techboard's ability to scale, with each change requiring three ACKs from them.
I am happy to change it, if you feel strongly.
>
> [...]
>> +#. If a newly proposed API functionally replaces an existing one, when the new
>> + API becomes non-experimental, then the old one is marked with
>> + ``__rte_deprecated``.
>> +
>> + - The depreciated API should follow the notification process to be removed,
>> + see :ref:`deprecation_notices`.
>> +
>> + - At the declaration of the next major ABI version, those ABI changes then
>> + become a formal part of the new ABI and the requirement to preserve ABI
>> + compatibility with the last major ABI version is then dropped.
>> +
>> + - The responsibility for removing redundant ABI compatibility code rests
>> + with the original contributor of the ABI changes, failing that, then with
>> + the contributor's company and then finally with the maintainer.
>
> Having too many responsibles look like nobody is really responsible.
> I would tend to think that only the maintainer is responsible,
> but he can ask for help.
Others had specifically asked that the chain of responsibility be very clear,
so that all the burden for excising redundant code does not fall
automatically on the maintainer.
^ permalink raw reply [relevance 10%]
* Re: [dpdk-dev] [PATCH v6 2/4] doc: changes to abi policy introducing major abi versions
2019-10-15 15:11 5% ` David Marchand
@ 2019-10-25 11:43 5% ` Ray Kinsella
0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 11:43 UTC (permalink / raw)
To: David Marchand
Cc: dev, Thomas Monjalon, Stephen Hemminger, Bruce Richardson, Yigit,
Ferruh, Ananyev, Konstantin, Jerin Jacob Kollanukkaran,
Olivier Matz, Neil Horman, Maxime Coquelin, Mcnamara, John,
Kovacevic, Marko, Hemant Agrawal, Kevin Traynor, Aaron Conole
On 15/10/2019 16:11, David Marchand wrote:
[SNIP]
>>
>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>> ---
>> doc/guides/contributing/abi_policy.rst | 321 +++++++++++++++------
>> .../contributing/img/abi_stability_policy.png | Bin 0 -> 61277 bytes
>> doc/guides/contributing/img/what_is_an_abi.png | Bin 0 -> 151683 bytes
>> doc/guides/contributing/stable.rst | 12 +-
>> 4 files changed, 241 insertions(+), 92 deletions(-)
>> create mode 100644 doc/guides/contributing/img/abi_stability_policy.png
>> create mode 100644 doc/guides/contributing/img/what_is_an_abi.png
>>
>> diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
>> index 55bacb4..8862d24 100644
>> --- a/doc/guides/contributing/abi_policy.rst
>> +++ b/doc/guides/contributing/abi_policy.rst
>> @@ -1,33 +1,46 @@
>> .. SPDX-License-Identifier: BSD-3-Clause
>> - Copyright 2018 The DPDK contributors
>> + Copyright 2019 The DPDK contributors
>>
>> -.. abi_api_policy:
>> +.. _abi_policy:
>>
>> -DPDK ABI/API policy
>> -===================
>> +ABI Policy
>> +==========
>>
>> Description
>> -----------
>>
>> -This document details some methods for handling ABI management in the DPDK.
>> +This document details the management policy that ensures the long-term stability
>> +of the DPDK ABI and API.
>>
>> General Guidelines
>> ------------------
>>
>> -#. Whenever possible, ABI should be preserved
>> -#. ABI/API may be changed with a deprecation process
>> -#. The modification of symbols can generally be managed with versioning
>> -#. Libraries or APIs marked in ``experimental`` state may change without constraint
>> -#. New APIs will be marked as ``experimental`` for at least one release to allow
>> - any issues found by users of the new API to be fixed quickly
>> -#. The addition of symbols is generally not problematic
>> -#. The removal of symbols generally is an ABI break and requires bumping of the
>> - LIBABIVER macro
>> -#. Updates to the minimum hardware requirements, which drop support for hardware which
>> - was previously supported, should be treated as an ABI change.
>> -
>> -What is an ABI
>> -~~~~~~~~~~~~~~
>> +#. Major ABI versions are declared every **year** and are then supported for one
>> + year, typically aligned with the :ref:`LTS release <stable_lts_releases>`.
>> +#. The ABI version is managed at a project level in DPDK, with the ABI version
>> + reflected in all :ref:`library's soname <what_is_soname>`.
>> +#. The ABI should be preserved and not changed lightly. ABI changes must follow
>> + the outlined :ref:`deprecation process <abi_changes>`.
>> +#. The addition of symbols is generally not problematic. The modification of
>> + symbols is managed with :ref:`ABI Versioning <abi_versioning>`.
>> +#. The removal of symbols is considered an :ref:`ABI breakage <abi_breakages>`,
>> + once approved these will form part of the next ABI version.
>> +#. Libraries or APIs marked as :ref:`Experimental <experimental_apis>` are not
>> + considered part of an ABI version and may change without constraint.
>> +#. Updates to the :ref:`minimum hardware requirements <hw_rqmts>`, which drop
>> + support for hardware which was previously supported, should be treated as an
>> + ABI change.
>> +
>> +.. note::
>> +
>> + In 2019, the DPDK community stated it's intention to move to ABI stable
>
> its?
ACK, done
>
>> + releases, over a number of release cycles. Beginning with maintaining ABI
>> + stability through one year of DPDK releases starting from DPDK 19.11. This
>
> sentence without a verb?
ACK, done - rewritten to be clearer.
(maintain is a verb BTW).
>
>> + policy will be reviewed in 2020, with intention of lengthening the stability
>> + period.
>> +
>> +What is an ABI?
>> +~~~~~~~~~~~~~~~
>>
>> An ABI (Application Binary Interface) is the set of runtime interfaces exposed
>> by a library. It is similar to an API (Application Programming Interface) but
>> @@ -39,30 +52,80 @@ Therefore, in the case of dynamic linking, it is critical that an ABI is
>> preserved, or (when modified), done in such a way that the application is unable
>> to behave improperly or in an unexpected fashion.
>>
>> +.. _figure_what_is_an_abi:
>> +
>> +.. figure:: img/what_is_an_abi.*
>> +
>> +*Figure 1. Illustration of DPDK API and ABI .*
>>
>> -ABI/API Deprecation
>> --------------------
>> +
>> +What is an ABI version?
>> +~~~~~~~~~~~~~~~~~~~~~~~
>> +
>> +An ABI version is an instance of a library's ABI at a specific release. Certain
>> +releases are considered by the community to be milestone releases, the yearly
>> +LTS for example. Supporting those milestone release's ABI for some number of
>> +subsequent releases is desirable to facilitate application upgrade. Those ABI
>> +version's aligned with milestones release are therefore called 'ABI major
>
> versions?
ACK, done
> milestone releases
ACK, done
>
>> +versions' and are supported for some number of releases.
>> +
>> +More details on major ABI version can be found in the :ref:`ABI versioning
>> +<major_abi_versions>` guide.
>>
>> The DPDK ABI policy
>> -~~~~~~~~~~~~~~~~~~~
>> +-------------------
>> +
>> +A major ABI version is declared every year, aligned with that year's LTS
>> +release, e.g. v19.11. This ABI version is then supported for one year by all
>> +subsequent releases within that time period, until the next LTS release, e.g.
>> +v20.11.
>> +
>> +At the declaration of a major ABI version, major version numbers encoded in
>> +libraries soname's are bumped to indicate the new version, with the minor
>> +version reset to ``0``. An example would be ``librte_eal.so.20.3`` would become
>> +``librte_eal.so.21.0``.
>>
>> -ABI versions are set at the time of major release labeling, and the ABI may
>> -change multiple times, without warning, between the last release label and the
>> -HEAD label of the git tree.
>> +The ABI may then change multiple times, without warning, between the last major
>> +ABI version increment and the HEAD label of the git tree, with the condition
>> +that ABI compatibility with the major ABI version is preserved and therefore
>> +soname's do not change.
>>
>> -ABI versions, once released, are available until such time as their
>> -deprecation has been noted in the Release Notes for at least one major release
>> -cycle. For example consider the case where the ABI for DPDK 2.0 has been
>> -shipped and then a decision is made to modify it during the development of
>> -DPDK 2.1. The decision will be recorded in the Release Notes for the DPDK 2.1
>> -release and the modification will be made available in the DPDK 2.2 release.
>> +Minor versions are incremented to indicate the release of a new ABI compatible
>> +DPDK release, typically the DPDK quarterly releases. An example of this, might
>> +be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
>> +release, following the declaration of the new major ABI version ``20``.
>>
>> -ABI versions may be deprecated in whole or in part as needed by a given
>> -update.
>> +ABI versions, are supported by each release until such time as the next major
>> +ABI version is declared. At that time, the deprecation of the previous major ABI
>> +version will be noted in the Release Notes with guidance on individual symbol
>> +depreciation and upgrade notes provided.
>
> deprecation?
Gargh ...
ACK, done
>
>
>>
>> -Some ABI changes may be too significant to reasonably maintain multiple
>> -versions. In those cases ABI's may be updated without backward compatibility
>> -being provided. The requirements for doing so are:
>> +.. _figure_abi_stability_policy:
>> +
>> +.. figure:: img/abi_stability_policy.*
>> +
>> +*Figure 2. Mapping of new ABI versions and ABI version compatibility to DPDK
>> +releases.*
>> +
>> +.. _abi_changes:
>> +
>> +ABI Changes
>> +~~~~~~~~~~~
>> +
>> +The ABI may still change after the declaration of a major ABI version, that is
>> +new APIs may be still added or existing APIs may be modified.
>> +
>> +.. Warning::
>> +
>> + Note that, this policy details the method by which the ABI may be changed,
>> + with due regard to preserving compatibility and observing depreciation
>
> deprecation?
ACK, done
>
>> + notices. This process however should not be undertaken lightly, as a general
>> + rule ABI stability is extremely important for downstream consumers of DPDK.
>> + The ABI should only be changed for significant reasons, such as performance
>> + enhancements. ABI breakages due to changes such as reorganizing public
>> + structure fields for aesthetic or readability purposes should be avoided.
>> +
>> +The requirements for changing the ABI are:
>
> [snip]
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH v6 1/4] doc: separate versioning.rst into version and policy
2019-10-21 9:53 0% ` Thomas Monjalon
@ 2019-10-25 11:36 0% ` Ray Kinsella
0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 11:36 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
On 21/10/2019 10:53, Thomas Monjalon wrote:
> 27/09/2019 18:54, Ray Kinsella:
>> Separate versioning.rst into abi versioning and abi policy guidance, in
>> preparation for adding more detail to the abi policy.
>>
>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>> ---
>> --- /dev/null
>> +++ b/doc/guides/contributing/abi_policy.rst
>> @@ -0,0 +1,169 @@
>> +.. SPDX-License-Identifier: BSD-3-Clause
>> + Copyright 2018 The DPDK contributors
>> +
>> +.. abi_api_policy:
>
> No need to add an anchor at the beginning of a file.
> RsT syntax :doc: allows to refer to a .rst file.
ACK, done.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 1/2] security: add anti replay window size
@ 2019-10-25 10:00 4% ` Ananyev, Konstantin
2019-10-25 15:56 0% ` Hemant Agrawal
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2019-10-25 10:00 UTC (permalink / raw)
To: Hemant Agrawal, dev, akhil.goyal, Doherty, Declan
Hi Hemant,
>
> At present the ipsec xfrom is missing the important step
> to configure the anti replay window size.
> The newly added field will also help in to enable or disable
> the anti replay checking, if available in offload by means
> of non-zero or zero value.
+1 for those changes.
Though AFAIK, it will be an ABI breakage, right?
So probably deserves changes in release notes.
>
> Currently similar field is available in rte_ipsec lib for
> software ipsec usage.
Yep, the only thing why it was put here - to avoid ABI breakage
within rte_security.
Having it in the rte_security_ipsec_xform makes much more sense.
>The newly introduced filed can replace
> that field as well eventually.
My suggestion would be to update librte_ipsec as part of these
patch series.
>
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
> lib/librte_security/rte_security.h | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/lib/librte_security/rte_security.h b/lib/librte_security/rte_security.h
> index aaafdfcd7..195ad5645 100644
> --- a/lib/librte_security/rte_security.h
> +++ b/lib/librte_security/rte_security.h
> @@ -212,6 +212,10 @@ struct rte_security_ipsec_xform {
> /**< Tunnel parameters, NULL for transport mode */
> uint64_t esn_soft_limit;
> /**< ESN for which the overflow event need to be raised */
> + uint32_t replay_win_sz;
> + /**< Anti replay window size to enable sequence replay attack handling.
> + * replay checking is disabled if the window size is 0.
> + */
> };
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v7 15/15] sched: remove redundant code
@ 2019-10-25 10:51 4% ` Jasvinder Singh
0 siblings, 0 replies; 200+ results
From: Jasvinder Singh @ 2019-10-25 10:51 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, Lukasz Krakowiak
Remove redundant data structure fields from port level data
structures and update the release notes.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
---
doc/guides/rel_notes/release_19_11.rst | 7 ++++-
lib/librte_sched/rte_sched.c | 42 +-------------------------
lib/librte_sched/rte_sched.h | 22 --------------
3 files changed, 7 insertions(+), 64 deletions(-)
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index f59a28307..524fb338b 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -228,6 +228,11 @@ API Changes
has been introduced in this release is used when all the packets
enqueued in the tx adapter are destined for the same Ethernet port & Tx queue.
+* sched: The pipe nodes configuration parameters such as number of pipes,
+ pipe queue sizes, pipe profiles, etc., are moved from port level structure
+ to subport level. This allows different subports of the same port to
+ have different configuration for the pipe nodes.
+
ABI Changes
-----------
@@ -315,7 +320,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_rcu.so.1
librte_reorder.so.1
librte_ring.so.2
- librte_sched.so.3
+ + librte_sched.so.4
librte_security.so.2
librte_stack.so.1
librte_table.so.3
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 1faa580d0..710ecf65a 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -216,13 +216,6 @@ struct rte_sched_port {
uint32_t mtu;
uint32_t frame_overhead;
int socket;
- uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
- uint32_t n_pipe_profiles;
- uint32_t n_max_pipe_profiles;
- uint32_t pipe_tc_be_rate_max;
-#ifdef RTE_SCHED_RED
- struct rte_red_config red_config[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS];
-#endif
/* Timing */
uint64_t time_cpu_cycles; /* Current CPU time measured in CPU cyles */
@@ -230,48 +223,15 @@ struct rte_sched_port {
uint64_t time; /* Current NIC TX time measured in bytes */
struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
- /* Scheduling loop detection */
- uint32_t pipe_loop;
- uint32_t pipe_exhaustion;
-
- /* Bitmap */
- struct rte_bitmap *bmp;
- uint32_t grinder_base_bmp_pos[RTE_SCHED_PORT_N_GRINDERS] __rte_aligned_16;
-
/* Grinders */
- struct rte_sched_grinder grinder[RTE_SCHED_PORT_N_GRINDERS];
- uint32_t busy_grinders;
struct rte_mbuf **pkts_out;
uint32_t n_pkts_out;
uint32_t subport_id;
- /* Queue base calculation */
- uint32_t qsize_add[RTE_SCHED_QUEUES_PER_PIPE];
- uint32_t qsize_sum;
-
/* Large data structures */
- struct rte_sched_subport *subports[0];
- struct rte_sched_subport *subport;
- struct rte_sched_pipe *pipe;
- struct rte_sched_queue *queue;
- struct rte_sched_queue_extra *queue_extra;
- struct rte_sched_pipe_profile *pipe_profiles;
- uint8_t *bmp_array;
- struct rte_mbuf **queue_array;
- uint8_t memory[0] __rte_cache_aligned;
+ struct rte_sched_subport *subports[0] __rte_cache_aligned;
} __rte_cache_aligned;
-enum rte_sched_port_array {
- e_RTE_SCHED_PORT_ARRAY_SUBPORT = 0,
- e_RTE_SCHED_PORT_ARRAY_PIPE,
- e_RTE_SCHED_PORT_ARRAY_QUEUE,
- e_RTE_SCHED_PORT_ARRAY_QUEUE_EXTRA,
- e_RTE_SCHED_PORT_ARRAY_PIPE_PROFILES,
- e_RTE_SCHED_PORT_ARRAY_BMP_ARRAY,
- e_RTE_SCHED_PORT_ARRAY_QUEUE_ARRAY,
- e_RTE_SCHED_PORT_ARRAY_TOTAL,
-};
-
enum rte_sched_subport_array {
e_RTE_SCHED_SUBPORT_ARRAY_PIPE = 0,
e_RTE_SCHED_SUBPORT_ARRAY_QUEUE,
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 40f02f124..c82c23c14 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -260,28 +260,6 @@ struct rte_sched_port_params {
* the subports of the same port.
*/
uint32_t n_pipes_per_subport;
-
- /** Packet queue size for each traffic class.
- * All the pipes within the same subport share the similar
- * configuration for the queues.
- */
- uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
-
- /** Pipe profile table.
- * Every pipe is configured using one of the profiles from this table.
- */
- struct rte_sched_pipe_params *pipe_profiles;
-
- /** Profiles in the pipe profile table */
- uint32_t n_pipe_profiles;
-
- /** Max profiles allowed in the pipe profile table */
- uint32_t n_max_pipe_profiles;
-
-#ifdef RTE_SCHED_RED
- /** RED parameters */
- struct rte_red_params red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS];
-#endif
};
/*
--
2.21.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2 08/12] log: hide internal log structure
2019-10-24 16:30 0% ` Thomas Monjalon
@ 2019-10-25 9:19 0% ` Kevin Traynor
0 siblings, 0 replies; 200+ results
From: Kevin Traynor @ 2019-10-25 9:19 UTC (permalink / raw)
To: Thomas Monjalon, david.marchand; +Cc: dev, anaotoly.burakov, stephen
On 24/10/2019 17:30, Thomas Monjalon wrote:
> 23/10/2019 20:54, David Marchand:
>> No need to expose rte_logs, hide it and remove it from the current ABI.
>>
>> Signed-off-by: David Marchand <david.marchand@redhat.com>
>> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
> [...]
>> --- a/lib/librte_eal/common/include/rte_log.h
>> +++ b/lib/librte_eal/common/include/rte_log.h
>> -struct rte_log_dynamic_type;
>> -
>> -/** The rte_log structure. */
>> -struct rte_logs {
>> - uint32_t type; /**< Bitfield with enabled logs. */
>> - uint32_t level; /**< Log level. */
>> - FILE *file; /**< Output file set by rte_openlog_stream, or NULL. */
>> - size_t dynamic_types_len;
>> - struct rte_log_dynamic_type *dynamic_types;
>> -};
>
> I like this kind of change, but the FILE stream is available only through
> the new experimental function. It is against the famous Mr Traynor rule:
> we cannot deprecate or remove an old stable symbol if the replacement is experimental.
>
>
For the change
Acked-by: Kevin Traynor <ktraynor@redhat.com>
++ for the rule (although s/we cannot/Thou shall not/ sounds more biblical)
As for accessor function being experimental, it is so simple I don't see
any issue with promoting it now. OTOH, if no one is planning to change
the struct anytime soon, it's probably fine to keep it public and
promote the fn. later.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 2/4] doc: changes to abi policy introducing major abi versions
2019-10-24 0:43 11% ` Thomas Monjalon
@ 2019-10-25 9:10 5% ` Ray Kinsella
2019-10-25 12:45 10% ` Ray Kinsella
1 sibling, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-25 9:10 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
Hi Thomas,
QQ - So is there really a 'no png' rule, because we have lots of them in the documentation?
root@rkinsell-MOBL2:.../rkinsell/dpdk# find doc/ -name "*.png" | wc -l
61
root@rkinsell-MOBL2:.../rkinsell/dpdk# find doc/ -name "*.svg" | wc -l
116
I am looking at recreating the images as SVG, but if it comes down to it - would they be ok to go as PNGs?
Thanks,
Ray K
On 24/10/2019 01:43, Thomas Monjalon wrote:
> 27/09/2019 18:54, Ray Kinsella:
>> This policy change introduces major ABI versions, these are
>> declared every year, typically aligned with the LTS release
>> and are supported by subsequent releases in the following year.
>
> No, the ABI number may stand for more than one year.
>
>> This change is intended to improve ABI stabilty for those projects
>> consuming DPDK.
>>
>> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
>> ---
>> doc/guides/contributing/abi_policy.rst | 321 +++++++++++++++------
>> .../contributing/img/abi_stability_policy.png | Bin 0 -> 61277 bytes
>> doc/guides/contributing/img/what_is_an_abi.png | Bin 0 -> 151683 bytes
>
> As an Open Source project, binary files are rejected :)
> Please provide the image source as SVG if the diagram is really required.
>
> [...]
>> +#. Major ABI versions are declared every **year** and are then supported for one
>> + year, typically aligned with the :ref:`LTS release <stable_lts_releases>`.
>
> As discussed on the cover letter, please avoid making "every year" cadence, the rule.
>
>> +#. The ABI version is managed at a project level in DPDK, with the ABI version
>> + reflected in all :ref:`library's soname <what_is_soname>`.
>
> Should we make clear here that an experimental ABI change has no impact
> on the ABI version number?
>
>> +#. The ABI should be preserved and not changed lightly. ABI changes must follow
>> + the outlined :ref:`deprecation process <abi_changes>`.
>> +#. The addition of symbols is generally not problematic. The modification of
>> + symbols is managed with :ref:`ABI Versioning <abi_versioning>`.
>> +#. The removal of symbols is considered an :ref:`ABI breakage <abi_breakages>`,
>> + once approved these will form part of the next ABI version.
>> +#. Libraries or APIs marked as :ref:`Experimental <experimental_apis>` are not
>> + considered part of an ABI version and may change without constraint.
>> +#. Updates to the :ref:`minimum hardware requirements <hw_rqmts>`, which drop
>> + support for hardware which was previously supported, should be treated as an
>> + ABI change.
>> +
>> +.. note::
>> +
>> + In 2019, the DPDK community stated it's intention to move to ABI stable
>> + releases, over a number of release cycles. Beginning with maintaining ABI
>> + stability through one year of DPDK releases starting from DPDK 19.11.
>
> There is no verb in this sentence.
>
>> + This
>> + policy will be reviewed in 2020, with intention of lengthening the stability
>> + period.
>
>> +What is an ABI version?
>> +~~~~~~~~~~~~~~~~~~~~~~~
>> +
>> +An ABI version is an instance of a library's ABI at a specific release. Certain
>> +releases are considered by the community to be milestone releases, the yearly
>> +LTS for example. Supporting those milestone release's ABI for some number of
>> +subsequent releases is desirable to facilitate application upgrade. Those ABI
>> +version's aligned with milestones release are therefore called 'ABI major
>> +versions' and are supported for some number of releases.
>
> If you understand this paragraph, please raise your hand :)
>
>> +More details on major ABI version can be found in the :ref:`ABI versioning
>> +<major_abi_versions>` guide.
>>
>> The DPDK ABI policy
>> -~~~~~~~~~~~~~~~~~~~
>> +-------------------
>> +
>> +A major ABI version is declared every year, aligned with that year's LTS
>> +release, e.g. v19.11. This ABI version is then supported for one year by all
>> +subsequent releases within that time period, until the next LTS release, e.g.
>> +v20.11.
>
> Again, the "one year" limit should not be documented as a general rule.
>
>> +At the declaration of a major ABI version, major version numbers encoded in
>> +libraries soname's are bumped to indicate the new version, with the minor
>> +version reset to ``0``. An example would be ``librte_eal.so.20.3`` would become
>> +``librte_eal.so.21.0``.
>>
>> +The ABI may then change multiple times, without warning, between the last major
>> +ABI version increment and the HEAD label of the git tree, with the condition
>> +that ABI compatibility with the major ABI version is preserved and therefore
>> +soname's do not change.
>>
>> +Minor versions are incremented to indicate the release of a new ABI compatible
>> +DPDK release, typically the DPDK quarterly releases. An example of this, might
>> +be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
>> +release, following the declaration of the new major ABI version ``20``.
>
> I don't understand the benefit of having a minor ABI version number.
> Can we just have v20 and v21 as we discussed in the techboard?
> Is it because an application linked with v20.2 cannot work with v20.1?
>
> If we must have a minor number, I suggest a numbering closer to release numbers:
> release 19.11 -> ABI 19.11
> release 20.02 -> ABI 19.14
> release 20.05 -> ABI 19.17
> release 20.08 -> ABI 19.20
> It shows the month number as if the first year never finishes.
> And when a new ABI is declared, release and ABI versions are the same:
> release 20.11 -> ABI 20.11
>
>
>> +ABI versions, are supported by each release until such time as the next major
>> +ABI version is declared. At that time, the deprecation of the previous major ABI
>> +version will be noted in the Release Notes with guidance on individual symbol
>> +depreciation and upgrade notes provided.
>
> I suggest a rewording:
> "
> An ABI version is supported in all new releases
> until the next major ABI version is declared.
> When changing the major ABI version,
> the release notes give details about all ABI changes.
> "
>
> [...]
>> + - The acknowledgment of a member of the technical board, as a delegate of the
>> + `technical board <https://core.dpdk.org/techboard/>`_ acknowledging the
>> + need for the ABI change, is also mandatory.
>
> Only one? What about 3 members minimum?
>
> [...]
>> +#. If a newly proposed API functionally replaces an existing one, when the new
>> + API becomes non-experimental, then the old one is marked with
>> + ``__rte_deprecated``.
>> +
>> + - The depreciated API should follow the notification process to be removed,
>> + see :ref:`deprecation_notices`.
>> +
>> + - At the declaration of the next major ABI version, those ABI changes then
>> + become a formal part of the new ABI and the requirement to preserve ABI
>> + compatibility with the last major ABI version is then dropped.
>> +
>> + - The responsibility for removing redundant ABI compatibility code rests
>> + with the original contributor of the ABI changes, failing that, then with
>> + the contributor's company and then finally with the maintainer.
>
> Having too many responsibles look like nobody is really responsible.
> I would tend to think that only the maintainer is responsible,
> but he can ask for help.
>
>
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] Please stop using iopl() in DPDK
@ 2019-10-25 7:22 3% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-25 7:22 UTC (permalink / raw)
To: Andy Lutomirski
Cc: dev, Thomas Gleixner, Peter Zijlstra, LKML, Maxime Coquelin,
Tiwei Bie, Thomas Monjalon
Hello Andy,
On Fri, Oct 25, 2019 at 6:46 AM Andy Lutomirski <luto@kernel.org> wrote:
> Supporting iopl() in the Linux kernel is becoming a maintainability
> problem. As far as I know, DPDK is the only major modern user of
> iopl().
Thanks for reaching out.
Copying our virtio maintainers (Maxime and Tiwei), since they are the
first impacted by such a change.
> After doing some research, DPDK uses direct io port access for only a
> single purpose: accessing legacy virtio configuration structures.
> These structures are mapped in IO space in BAR 0 on legacy virtio
> devices.
>
> There are at least three ways you could avoid using iopl(). Here they
> are in rough order of quality in my opinion:
>
> 1. Change pci_uio_ioport_read() and pci_uio_ioport_write() to use
> read() and write() on resource0 in sysfs.
>
> 2. Use the alternative access mechanism in the virtio legacy spec:
> there is a way to access all of these structures via configuration
> space.
>
> 3. Use ioperm() instead of iopl().
And you come with potential solutions, thanks :-)
We need to look at them and evaluate what is best from our point of view.
See how it impacts our ABI too (we decided on a freeze until 20.11).
> We are considering changes to the kernel that will potentially harm
> the performance of any program that uses iopl(3) -- in particular,
> context switches will become more expensive, and the scheduler might
> need to explicitly penalize such programs to ensure fairness. Using
> ioperm() already hurts performance, and the proposed changes to iopl()
> will make it even worse. Alternatively, the kernel could drop iopl()
> support entirely. I will certainly make a change to allow
> distributions to remove iopl() support entirely from their kernels,
> and I expect that distributions will do this.
>
> Please fix DPDK.
Unfortunately, we are currently closing our rc1 for the 19.11 release.
Not sure who is available, but I suppose we can work on this subject
in the 20.02 release timeframe.
Thanks.
--
David Marchand
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH] cmdline: prefix cmdline numeric enum
@ 2019-10-24 18:09 1% Stephen Hemminger
0 siblings, 0 replies; 200+ results
From: Stephen Hemminger @ 2019-10-24 18:09 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger
The values in an enum are really global names and not specific
to the enum in question. This can lead to namespace conflicts
in applications if a common value is visible.
The DPDK cmdline API has an enum for numeric type values with
names line UINT32 which could be used or defined in a user
application. Change these to be prefixed with the enum name
like other places in DPDK.
Lots of lines changed with no change in actual code
generated.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/test-cmdline/commands.c | 2 +-
app/test-pmd/bpf_cmd.c | 8 +-
app/test-pmd/cmdline.c | 656 +++++++++---------
app/test-pmd/cmdline_mtr.c | 84 +--
app/test-pmd/cmdline_tm.c | 172 ++---
app/test/test_cmdline_num.c | 48 +-
doc/guides/rel_notes/release_19_11.rst | 3 +
examples/ethtool/ethtool-app/ethapp.c | 18 +-
examples/ipsec-secgw/parser.c | 2 +-
examples/qos_sched/cmdline.c | 46 +-
examples/quota_watermark/qwctl/commands.c | 2 +-
.../guest_cli/vm_power_cli_guest.c | 2 +-
examples/vm_power_manager/vm_power_cli.c | 8 +-
lib/librte_cmdline/cmdline_parse_num.c | 40 +-
lib/librte_cmdline/cmdline_parse_num.h | 16 +-
15 files changed, 555 insertions(+), 552 deletions(-)
diff --git a/app/test-cmdline/commands.c b/app/test-cmdline/commands.c
index d81da9665aff..a3d8143d372b 100644
--- a/app/test-cmdline/commands.c
+++ b/app/test-cmdline/commands.c
@@ -191,7 +191,7 @@ cmd_num_parsed(void *parsed_result,
}
cmdline_parse_token_num_t cmd_num_tok =
- TOKEN_NUM_INITIALIZER(struct cmd_num_result, num, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_num_result, num, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_num = {
.f = cmd_num_parsed, /* function to call */
diff --git a/app/test-pmd/bpf_cmd.c b/app/test-pmd/bpf_cmd.c
index 830bfc13a520..465d5c46f0cb 100644
--- a/app/test-pmd/bpf_cmd.c
+++ b/app/test-pmd/bpf_cmd.c
@@ -124,9 +124,9 @@ cmdline_parse_token_string_t cmd_load_bpf_dir =
TOKEN_STRING_INITIALIZER(struct cmd_bpf_ld_result,
dir, "rx#tx");
cmdline_parse_token_num_t cmd_load_bpf_port =
- TOKEN_NUM_INITIALIZER(struct cmd_bpf_ld_result, port, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_bpf_ld_result, port, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_load_bpf_queue =
- TOKEN_NUM_INITIALIZER(struct cmd_bpf_ld_result, queue, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_bpf_ld_result, queue, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_load_bpf_flags =
TOKEN_STRING_INITIALIZER(struct cmd_bpf_ld_result,
flags, NULL);
@@ -180,9 +180,9 @@ cmdline_parse_token_string_t cmd_unload_bpf_dir =
TOKEN_STRING_INITIALIZER(struct cmd_bpf_unld_result,
dir, "rx#tx");
cmdline_parse_token_num_t cmd_unload_bpf_port =
- TOKEN_NUM_INITIALIZER(struct cmd_bpf_unld_result, port, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_bpf_unld_result, port, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_unload_bpf_queue =
- TOKEN_NUM_INITIALIZER(struct cmd_bpf_unld_result, queue, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_bpf_unld_result, queue, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_operate_bpf_unld_parse = {
.f = cmd_operate_bpf_unld_parsed,
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 1bd977f91d87..bda239fc9497 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -1362,7 +1362,7 @@ cmdline_parse_token_string_t cmd_operate_specific_port_port =
name, "start#stop#close#reset");
cmdline_parse_token_num_t cmd_operate_specific_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_operate_specific_port_result,
- value, UINT8);
+ value, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_operate_specific_port = {
.f = cmd_operate_specific_port_parsed,
@@ -1498,7 +1498,7 @@ cmdline_parse_token_string_t cmd_operate_detach_port_keyword =
keyword, "detach");
cmdline_parse_token_num_t cmd_operate_detach_port_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_operate_detach_port_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_operate_detach_port = {
.f = cmd_operate_detach_port_parsed,
@@ -1720,7 +1720,7 @@ cmdline_parse_token_string_t cmd_config_speed_specific_keyword =
TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, keyword,
"config");
cmdline_parse_token_num_t cmd_config_speed_specific_id =
- TOKEN_NUM_INITIALIZER(struct cmd_config_speed_specific, id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_speed_specific, id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_speed_specific_item1 =
TOKEN_STRING_INITIALIZER(struct cmd_config_speed_specific, item1,
"speed");
@@ -1792,7 +1792,7 @@ cmdline_parse_token_string_t cmd_config_loopback_all_item =
TOKEN_STRING_INITIALIZER(struct cmd_config_loopback_all, item,
"loopback");
cmdline_parse_token_num_t cmd_config_loopback_all_mode =
- TOKEN_NUM_INITIALIZER(struct cmd_config_loopback_all, mode, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_loopback_all, mode, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_config_loopback_all = {
.f = cmd_config_loopback_all_parsed,
@@ -1846,13 +1846,13 @@ cmdline_parse_token_string_t cmd_config_loopback_specific_keyword =
"config");
cmdline_parse_token_num_t cmd_config_loopback_specific_id =
TOKEN_NUM_INITIALIZER(struct cmd_config_loopback_specific, port_id,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_loopback_specific_item =
TOKEN_STRING_INITIALIZER(struct cmd_config_loopback_specific, item,
"loopback");
cmdline_parse_token_num_t cmd_config_loopback_specific_mode =
TOKEN_NUM_INITIALIZER(struct cmd_config_loopback_specific, mode,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_inst_t cmd_config_loopback_specific = {
.f = cmd_config_loopback_specific_parsed,
@@ -1942,7 +1942,7 @@ cmdline_parse_token_string_t cmd_config_rx_tx_name =
TOKEN_STRING_INITIALIZER(struct cmd_config_rx_tx, name,
"rxq#txq#rxd#txd");
cmdline_parse_token_num_t cmd_config_rx_tx_value =
- TOKEN_NUM_INITIALIZER(struct cmd_config_rx_tx, value, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rx_tx, value, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_config_rx_tx = {
.f = cmd_config_rx_tx_parsed,
@@ -2024,7 +2024,7 @@ cmdline_parse_token_string_t cmd_config_max_pkt_len_name =
"max-pkt-len");
cmdline_parse_token_num_t cmd_config_max_pkt_len_value =
TOKEN_NUM_INITIALIZER(struct cmd_config_max_pkt_len_result, value,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_inst_t cmd_config_max_pkt_len = {
.f = cmd_config_max_pkt_len_parsed,
@@ -2073,9 +2073,9 @@ cmdline_parse_token_string_t cmd_config_mtu_mtu =
TOKEN_STRING_INITIALIZER(struct cmd_config_mtu_result, keyword,
"mtu");
cmdline_parse_token_num_t cmd_config_mtu_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_config_mtu_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_mtu_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_config_mtu_value =
- TOKEN_NUM_INITIALIZER(struct cmd_config_mtu_result, value, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_mtu_result, value, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_config_mtu = {
.f = cmd_config_mtu_parsed,
@@ -2362,7 +2362,7 @@ cmdline_parse_token_string_t cmd_config_rss_hash_key_config =
TOKEN_STRING_INITIALIZER(struct cmd_config_rss_hash_key, config,
"config");
cmdline_parse_token_num_t cmd_config_rss_hash_key_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_config_rss_hash_key, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rss_hash_key, port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_rss_hash_key_rss_hash_key =
TOKEN_STRING_INITIALIZER(struct cmd_config_rss_hash_key,
rss_hash_key, "rss-hash-key");
@@ -2460,19 +2460,19 @@ cmdline_parse_token_string_t cmd_config_rxtx_ring_size_config =
config, "config");
cmdline_parse_token_num_t cmd_config_rxtx_ring_size_portid =
TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
- portid, UINT16);
+ portid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_rxtx_ring_size_rxtxq =
TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
rxtxq, "rxq#txq");
cmdline_parse_token_num_t cmd_config_rxtx_ring_size_qid =
TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
- qid, UINT16);
+ qid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_rxtx_ring_size_rsize =
TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_ring_size,
rsize, "ring_size");
cmdline_parse_token_num_t cmd_config_rxtx_ring_size_size =
TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_ring_size,
- size, UINT16);
+ size, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_config_rxtx_ring_size = {
.f = cmd_config_rxtx_ring_size_parsed,
@@ -2561,11 +2561,11 @@ cmd_config_rxtx_queue_parsed(void *parsed_result,
cmdline_parse_token_string_t cmd_config_rxtx_queue_port =
TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_queue, port, "port");
cmdline_parse_token_num_t cmd_config_rxtx_queue_portid =
- TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_queue, portid, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_queue, portid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_rxtx_queue_rxtxq =
TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_queue, rxtxq, "rxq#txq");
cmdline_parse_token_num_t cmd_config_rxtx_queue_qid =
- TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_queue, qid, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rxtx_queue, qid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_rxtx_queue_opname =
TOKEN_STRING_INITIALIZER(struct cmd_config_rxtx_queue, opname,
"start#stop");
@@ -2641,13 +2641,13 @@ cmdline_parse_token_string_t cmd_config_deferred_start_rxtx_queue_port =
port, "port");
cmdline_parse_token_num_t cmd_config_deferred_start_rxtx_queue_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_deferred_start_rxtx_queue_rxtxq =
TOKEN_STRING_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
rxtxq, "rxq#txq");
cmdline_parse_token_num_t cmd_config_deferred_start_rxtx_queue_qid =
TOKEN_NUM_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
- qid, UINT16);
+ qid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_deferred_start_rxtx_queue_opname =
TOKEN_STRING_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
opname, "deferred_start");
@@ -2683,11 +2683,11 @@ struct cmd_setup_rxtx_queue {
cmdline_parse_token_string_t cmd_setup_rxtx_queue_port =
TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, port, "port");
cmdline_parse_token_num_t cmd_setup_rxtx_queue_portid =
- TOKEN_NUM_INITIALIZER(struct cmd_setup_rxtx_queue, portid, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_setup_rxtx_queue, portid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_setup_rxtx_queue_rxtxq =
TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, rxtxq, "rxq#txq");
cmdline_parse_token_num_t cmd_setup_rxtx_queue_qid =
- TOKEN_NUM_INITIALIZER(struct cmd_setup_rxtx_queue, qid, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_setup_rxtx_queue, qid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_setup_rxtx_queue_setup =
TOKEN_STRING_INITIALIZER(struct cmd_setup_rxtx_queue, setup, "setup");
@@ -2896,7 +2896,7 @@ cmdline_parse_token_string_t cmd_config_rss_reta_port =
cmdline_parse_token_string_t cmd_config_rss_reta_keyword =
TOKEN_STRING_INITIALIZER(struct cmd_config_rss_reta, keyword, "config");
cmdline_parse_token_num_t cmd_config_rss_reta_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_config_rss_reta, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_rss_reta, port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_rss_reta_name =
TOKEN_STRING_INITIALIZER(struct cmd_config_rss_reta, name, "rss");
cmdline_parse_token_string_t cmd_config_rss_reta_list_name =
@@ -3007,13 +3007,13 @@ cmdline_parse_token_string_t cmd_showport_reta_show =
cmdline_parse_token_string_t cmd_showport_reta_port =
TOKEN_STRING_INITIALIZER(struct cmd_showport_reta, port, "port");
cmdline_parse_token_num_t cmd_showport_reta_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_showport_reta, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_showport_reta, port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_showport_reta_rss =
TOKEN_STRING_INITIALIZER(struct cmd_showport_reta, rss, "rss");
cmdline_parse_token_string_t cmd_showport_reta_reta =
TOKEN_STRING_INITIALIZER(struct cmd_showport_reta, reta, "reta");
cmdline_parse_token_num_t cmd_showport_reta_size =
- TOKEN_NUM_INITIALIZER(struct cmd_showport_reta, size, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_showport_reta, size, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_showport_reta_list_of_items =
TOKEN_STRING_INITIALIZER(struct cmd_showport_reta,
list_of_items, NULL);
@@ -3058,7 +3058,7 @@ cmdline_parse_token_string_t cmd_showport_rss_hash_show =
cmdline_parse_token_string_t cmd_showport_rss_hash_port =
TOKEN_STRING_INITIALIZER(struct cmd_showport_rss_hash, port, "port");
cmdline_parse_token_num_t cmd_showport_rss_hash_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_showport_rss_hash, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_showport_rss_hash, port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_showport_rss_hash_rss_hash =
TOKEN_STRING_INITIALIZER(struct cmd_showport_rss_hash, rss_hash,
"rss-hash");
@@ -3162,7 +3162,7 @@ cmdline_parse_token_string_t cmd_config_dcb_port =
cmdline_parse_token_string_t cmd_config_dcb_config =
TOKEN_STRING_INITIALIZER(struct cmd_config_dcb, config, "config");
cmdline_parse_token_num_t cmd_config_dcb_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_config_dcb, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_dcb, port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_dcb_dcb =
TOKEN_STRING_INITIALIZER(struct cmd_config_dcb, dcb, "dcb");
cmdline_parse_token_string_t cmd_config_dcb_vt =
@@ -3170,7 +3170,7 @@ cmdline_parse_token_string_t cmd_config_dcb_vt =
cmdline_parse_token_string_t cmd_config_dcb_vt_en =
TOKEN_STRING_INITIALIZER(struct cmd_config_dcb, vt_en, "on#off");
cmdline_parse_token_num_t cmd_config_dcb_num_tcs =
- TOKEN_NUM_INITIALIZER(struct cmd_config_dcb, num_tcs, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_dcb, num_tcs, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_config_dcb_pfc=
TOKEN_STRING_INITIALIZER(struct cmd_config_dcb, pfc, "pfc");
cmdline_parse_token_string_t cmd_config_dcb_pfc_en =
@@ -3269,7 +3269,7 @@ cmdline_parse_token_string_t cmd_config_burst_all =
cmdline_parse_token_string_t cmd_config_burst_name =
TOKEN_STRING_INITIALIZER(struct cmd_config_burst, name, "burst");
cmdline_parse_token_num_t cmd_config_burst_value =
- TOKEN_NUM_INITIALIZER(struct cmd_config_burst, value, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_burst, value, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_config_burst = {
.f = cmd_config_burst_parsed,
@@ -3338,7 +3338,7 @@ cmdline_parse_token_string_t cmd_config_thresh_name =
TOKEN_STRING_INITIALIZER(struct cmd_config_thresh, name,
"txpt#txht#txwt#rxpt#rxht#rxwt");
cmdline_parse_token_num_t cmd_config_thresh_value =
- TOKEN_NUM_INITIALIZER(struct cmd_config_thresh, value, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_thresh, value, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_config_thresh = {
.f = cmd_config_thresh_parsed,
@@ -3402,7 +3402,7 @@ cmdline_parse_token_string_t cmd_config_threshold_name =
TOKEN_STRING_INITIALIZER(struct cmd_config_threshold, name,
"txfreet#txrst#rxfreet");
cmdline_parse_token_num_t cmd_config_threshold_value =
- TOKEN_NUM_INITIALIZER(struct cmd_config_threshold, value, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_config_threshold, value, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_config_threshold = {
.f = cmd_config_threshold_parsed,
@@ -3608,7 +3608,7 @@ cmdline_parse_token_string_t cmd_setmask_mask =
TOKEN_STRING_INITIALIZER(struct cmd_setmask_result, mask,
"coremask#portmask");
cmdline_parse_token_num_t cmd_setmask_value =
- TOKEN_NUM_INITIALIZER(struct cmd_setmask_result, hexavalue, UINT64);
+ TOKEN_NUM_INITIALIZER(struct cmd_setmask_result, hexavalue, CMDLINE_UINT64);
cmdline_parse_inst_t cmd_set_fwd_mask = {
.f = cmd_set_mask_parsed,
@@ -3654,7 +3654,7 @@ cmdline_parse_token_string_t cmd_set_what =
TOKEN_STRING_INITIALIZER(struct cmd_set_result, what,
"nbport#nbcore#burst#verbose");
cmdline_parse_token_num_t cmd_set_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_result, value, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_result, value, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_set_numbers = {
.f = cmd_set_parsed,
@@ -3702,7 +3702,7 @@ cmdline_parse_token_string_t cmd_set_log_log =
cmdline_parse_token_string_t cmd_set_log_type =
TOKEN_STRING_INITIALIZER(struct cmd_set_log_result, type, NULL);
cmdline_parse_token_num_t cmd_set_log_level =
- TOKEN_NUM_INITIALIZER(struct cmd_set_log_result, level, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_log_result, level, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_set_log = {
.f = cmd_set_log_parsed,
@@ -3836,7 +3836,7 @@ cmdline_parse_token_string_t cmd_rx_vlan_filter_all_all =
all, "all");
cmdline_parse_token_num_t cmd_rx_vlan_filter_all_portid =
TOKEN_NUM_INITIALIZER(struct cmd_rx_vlan_filter_all_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_rx_vlan_filter_all = {
.f = cmd_rx_vlan_filter_all_parsed,
@@ -3998,10 +3998,10 @@ cmdline_parse_token_string_t cmd_vlan_tpid_what =
what, "tpid");
cmdline_parse_token_num_t cmd_vlan_tpid_tpid =
TOKEN_NUM_INITIALIZER(struct cmd_vlan_tpid_result,
- tp_id, UINT16);
+ tp_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vlan_tpid_portid =
TOKEN_NUM_INITIALIZER(struct cmd_vlan_tpid_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_vlan_tpid = {
.f = cmd_vlan_tpid_parsed,
@@ -4048,10 +4048,10 @@ cmdline_parse_token_string_t cmd_rx_vlan_filter_what =
what, "add#rm");
cmdline_parse_token_num_t cmd_rx_vlan_filter_vlanid =
TOKEN_NUM_INITIALIZER(struct cmd_rx_vlan_filter_result,
- vlan_id, UINT16);
+ vlan_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_rx_vlan_filter_portid =
TOKEN_NUM_INITIALIZER(struct cmd_rx_vlan_filter_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_rx_vlan_filter = {
.f = cmd_rx_vlan_filter_parsed,
@@ -4101,10 +4101,10 @@ cmdline_parse_token_string_t cmd_tx_vlan_set_set =
set, "set");
cmdline_parse_token_num_t cmd_tx_vlan_set_portid =
TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_tx_vlan_set_vlanid =
TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_result,
- vlan_id, UINT16);
+ vlan_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_tx_vlan_set = {
.f = cmd_tx_vlan_set_parsed,
@@ -4155,13 +4155,13 @@ cmdline_parse_token_string_t cmd_tx_vlan_set_qinq_set =
set, "set");
cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_portid =
TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid =
TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
- vlan_id, UINT16);
+ vlan_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_tx_vlan_set_qinq_vlanid_outer =
TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_qinq_result,
- vlan_id_outer, UINT16);
+ vlan_id_outer, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_tx_vlan_set_qinq = {
.f = cmd_tx_vlan_set_qinq_parsed,
@@ -4213,10 +4213,10 @@ cmdline_parse_token_string_t cmd_tx_vlan_set_pvid_pvid =
pvid, "pvid");
cmdline_parse_token_num_t cmd_tx_vlan_set_pvid_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_pvid_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_tx_vlan_set_pvid_vlan_id =
TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_set_pvid_result,
- vlan_id, UINT16);
+ vlan_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_tx_vlan_set_pvid_mode =
TOKEN_STRING_INITIALIZER(struct cmd_tx_vlan_set_pvid_result,
mode, "on#off");
@@ -4268,7 +4268,7 @@ cmdline_parse_token_string_t cmd_tx_vlan_reset_reset =
reset, "reset");
cmdline_parse_token_num_t cmd_tx_vlan_reset_portid =
TOKEN_NUM_INITIALIZER(struct cmd_tx_vlan_reset_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_tx_vlan_reset = {
.f = cmd_tx_vlan_reset_parsed,
@@ -4474,7 +4474,7 @@ cmdline_parse_token_string_t cmd_csum_hwsw =
hwsw, "hw#sw");
cmdline_parse_token_num_t cmd_csum_portid =
TOKEN_NUM_INITIALIZER(struct cmd_csum_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_csum_set = {
.f = cmd_csum_parsed,
@@ -4545,7 +4545,7 @@ cmdline_parse_token_string_t cmd_csum_tunnel_onoff =
onoff, "on#off");
cmdline_parse_token_num_t cmd_csum_tunnel_portid =
TOKEN_NUM_INITIALIZER(struct cmd_csum_tunnel_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_csum_tunnel = {
.f = cmd_csum_tunnel_parsed,
@@ -4633,10 +4633,10 @@ cmdline_parse_token_string_t cmd_tso_set_mode =
mode, "set");
cmdline_parse_token_num_t cmd_tso_set_tso_segsz =
TOKEN_NUM_INITIALIZER(struct cmd_tso_set_result,
- tso_segsz, UINT16);
+ tso_segsz, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_tso_set_portid =
TOKEN_NUM_INITIALIZER(struct cmd_tso_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_tso_set = {
.f = cmd_tso_set_parsed,
@@ -4782,10 +4782,10 @@ cmdline_parse_token_string_t cmd_tunnel_tso_set_mode =
mode, "set");
cmdline_parse_token_num_t cmd_tunnel_tso_set_tso_segsz =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_tso_set_result,
- tso_segsz, UINT16);
+ tso_segsz, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_tunnel_tso_set_portid =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_tso_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_tunnel_tso_set = {
.f = cmd_tunnel_tso_set_parsed,
@@ -4849,7 +4849,7 @@ cmdline_parse_token_string_t cmd_gro_enable_port =
cmd_keyword, "port");
cmdline_parse_token_num_t cmd_gro_enable_pid =
TOKEN_NUM_INITIALIZER(struct cmd_gro_enable_result,
- cmd_pid, UINT16);
+ cmd_pid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_gro_enable_keyword =
TOKEN_STRING_INITIALIZER(struct cmd_gro_enable_result,
cmd_keyword, "gro");
@@ -4899,7 +4899,7 @@ cmdline_parse_token_string_t cmd_gro_show_port =
cmd_port, "port");
cmdline_parse_token_num_t cmd_gro_show_pid =
TOKEN_NUM_INITIALIZER(struct cmd_gro_show_result,
- cmd_pid, UINT16);
+ cmd_pid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_gro_show_keyword =
TOKEN_STRING_INITIALIZER(struct cmd_gro_show_result,
cmd_keyword, "gro");
@@ -4949,7 +4949,7 @@ cmdline_parse_token_string_t cmd_gro_flush_flush =
cmd_flush, "flush");
cmdline_parse_token_num_t cmd_gro_flush_cycles =
TOKEN_NUM_INITIALIZER(struct cmd_gro_flush_result,
- cmd_cycles, UINT8);
+ cmd_cycles, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_gro_flush = {
.f = cmd_gro_flush_parsed,
@@ -4999,7 +4999,7 @@ cmdline_parse_token_string_t cmd_gso_enable_mode =
cmd_mode, "on#off");
cmdline_parse_token_num_t cmd_gso_enable_pid =
TOKEN_NUM_INITIALIZER(struct cmd_gso_enable_result,
- cmd_pid, UINT16);
+ cmd_pid, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_gso_enable = {
.f = cmd_gso_enable_parsed,
@@ -5058,7 +5058,7 @@ cmdline_parse_token_string_t cmd_gso_size_segsz =
cmd_segsz, "segsz");
cmdline_parse_token_num_t cmd_gso_size_size =
TOKEN_NUM_INITIALIZER(struct cmd_gso_size_result,
- cmd_size, UINT16);
+ cmd_size, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_gso_size = {
.f = cmd_gso_size_parsed,
@@ -5116,7 +5116,7 @@ cmdline_parse_token_string_t cmd_gso_show_keyword =
cmd_keyword, "gso");
cmdline_parse_token_num_t cmd_gso_show_pid =
TOKEN_NUM_INITIALIZER(struct cmd_gso_show_result,
- cmd_pid, UINT16);
+ cmd_pid, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_gso_show = {
.f = cmd_gso_show_parsed,
@@ -5259,7 +5259,7 @@ cmdline_parse_token_string_t cmd_setbypass_mode_value =
value, "normal#bypass#isolate");
cmdline_parse_token_num_t cmd_setbypass_mode_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bypass_mode_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_set_bypass_mode = {
.f = cmd_set_bypass_mode_parsed,
@@ -5365,7 +5365,7 @@ cmdline_parse_token_string_t cmd_setbypass_event_mode_value =
mode_value, "normal#bypass#isolate");
cmdline_parse_token_num_t cmd_setbypass_event_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bypass_event_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_set_bypass_event = {
.f = cmd_set_bypass_event_parsed,
@@ -5532,7 +5532,7 @@ cmdline_parse_token_string_t cmd_showbypass_config_config =
config, "config");
cmdline_parse_token_num_t cmd_showbypass_config_port =
TOKEN_NUM_INITIALIZER(struct cmd_show_bypass_config_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_show_bypass_config = {
.f = cmd_show_bypass_config_parsed,
@@ -5581,10 +5581,10 @@ TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_mode_result,
mode, "mode");
cmdline_parse_token_num_t cmd_setbonding_mode_value =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_mode_result,
- value, UINT8);
+ value, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_setbonding_mode_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_mode_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_set_bonding_mode = {
.f = cmd_set_bonding_mode_parsed,
@@ -5658,7 +5658,7 @@ TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_lacp_dedicated_queues_result,
dedicated_queues, "dedicated_queues");
cmdline_parse_token_num_t cmd_setbonding_lacp_dedicated_queues_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_lacp_dedicated_queues_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_setbonding_lacp_dedicated_queues_mode =
TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_lacp_dedicated_queues_result,
mode, "enable#disable");
@@ -5726,7 +5726,7 @@ TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_balance_xmit_policy_result,
balance_xmit_policy, "balance_xmit_policy");
cmdline_parse_token_num_t cmd_setbonding_balance_xmit_policy_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_balance_xmit_policy_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_setbonding_balance_xmit_policy_policy =
TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_balance_xmit_policy_result,
policy, "l2#l23#l34");
@@ -5874,7 +5874,7 @@ TOKEN_STRING_INITIALIZER(struct cmd_show_bonding_config_result,
config, "config");
cmdline_parse_token_num_t cmd_showbonding_config_port =
TOKEN_NUM_INITIALIZER(struct cmd_show_bonding_config_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_show_bonding_config = {
.f = cmd_show_bonding_config_parsed,
@@ -5927,10 +5927,10 @@ TOKEN_STRING_INITIALIZER(struct cmd_set_bonding_primary_result,
primary, "primary");
cmdline_parse_token_num_t cmd_setbonding_primary_slave =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
- slave_id, UINT16);
+ slave_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_setbonding_primary_port =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_primary_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_set_bonding_primary = {
.f = cmd_set_bonding_primary_parsed,
@@ -5985,10 +5985,10 @@ TOKEN_STRING_INITIALIZER(struct cmd_add_bonding_slave_result,
slave, "slave");
cmdline_parse_token_num_t cmd_addbonding_slave_slaveid =
TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
- slave_id, UINT16);
+ slave_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_addbonding_slave_port =
TOKEN_NUM_INITIALIZER(struct cmd_add_bonding_slave_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_add_bonding_slave = {
.f = cmd_add_bonding_slave_parsed,
@@ -6043,10 +6043,10 @@ cmdline_parse_token_string_t cmd_removebonding_slave_slave =
slave, "slave");
cmdline_parse_token_num_t cmd_removebonding_slave_slaveid =
TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
- slave_id, UINT16);
+ slave_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_removebonding_slave_port =
TOKEN_NUM_INITIALIZER(struct cmd_remove_bonding_slave_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_remove_bonding_slave = {
.f = cmd_remove_bonding_slave_parsed,
@@ -6125,10 +6125,10 @@ cmdline_parse_token_string_t cmd_createbonded_device_device =
device, "device");
cmdline_parse_token_num_t cmd_createbonded_device_mode =
TOKEN_NUM_INITIALIZER(struct cmd_create_bonded_device_result,
- mode, UINT8);
+ mode, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_createbonded_device_socket =
TOKEN_NUM_INITIALIZER(struct cmd_create_bonded_device_result,
- socket, UINT8);
+ socket, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_create_bonded_device = {
.f = cmd_create_bonded_device_parsed,
@@ -6181,7 +6181,7 @@ cmdline_parse_token_string_t cmd_set_bond_mac_addr_mac =
"mac_addr");
cmdline_parse_token_num_t cmd_set_bond_mac_addr_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_set_bond_mac_addr_result,
- port_num, UINT16);
+ port_num, CMDLINE_UINT16);
cmdline_parse_token_etheraddr_t cmd_set_bond_mac_addr_addr =
TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_bond_mac_addr_result, address);
@@ -6234,10 +6234,10 @@ cmdline_parse_token_string_t cmd_set_bond_mon_period_mon_period =
mon_period, "mon_period");
cmdline_parse_token_num_t cmd_set_bond_mon_period_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_set_bond_mon_period_result,
- port_num, UINT16);
+ port_num, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_bond_mon_period_period_ms =
TOKEN_NUM_INITIALIZER(struct cmd_set_bond_mon_period_result,
- period_ms, UINT32);
+ period_ms, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_set_bond_mon_period = {
.f = cmd_set_bond_mon_period_parsed,
@@ -6296,7 +6296,7 @@ cmdline_parse_token_string_t cmd_set_bonding_agg_mode_agg_mode =
cmdline_parse_token_num_t cmd_set_bonding_agg_mode_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_set_bonding_agg_mode_policy_result,
- port_num, UINT16);
+ port_num, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_bonding_agg_mode_policy_string =
TOKEN_STRING_INITIALIZER(
@@ -6484,11 +6484,11 @@ cmdline_parse_token_string_t cmd_set_burst_tx_retry_tx =
cmdline_parse_token_string_t cmd_set_burst_tx_retry_delay =
TOKEN_STRING_INITIALIZER(struct cmd_set_burst_tx_retry_result, delay, "delay");
cmdline_parse_token_num_t cmd_set_burst_tx_retry_time =
- TOKEN_NUM_INITIALIZER(struct cmd_set_burst_tx_retry_result, time, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_burst_tx_retry_result, time, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_set_burst_tx_retry_retry =
TOKEN_STRING_INITIALIZER(struct cmd_set_burst_tx_retry_result, retry, "retry");
cmdline_parse_token_num_t cmd_set_burst_tx_retry_retry_num =
- TOKEN_NUM_INITIALIZER(struct cmd_set_burst_tx_retry_result, retry_num, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_burst_tx_retry_result, retry_num, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_set_burst_tx_retry = {
.f = cmd_set_burst_tx_retry_parsed,
@@ -6546,7 +6546,7 @@ cmdline_parse_token_string_t cmd_setpromisc_portall =
"all");
cmdline_parse_token_num_t cmd_setpromisc_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_set_promisc_mode_result, port_num,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_setpromisc_mode =
TOKEN_STRING_INITIALIZER(struct cmd_set_promisc_mode_result, mode,
"on#off");
@@ -6620,7 +6620,7 @@ cmdline_parse_token_string_t cmd_setallmulti_portall =
"all");
cmdline_parse_token_num_t cmd_setallmulti_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_set_allmulti_mode_result, port_num,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_setallmulti_mode =
TOKEN_STRING_INITIALIZER(struct cmd_set_allmulti_mode_result, mode,
"on#off");
@@ -6698,25 +6698,25 @@ cmdline_parse_token_string_t cmd_lfc_set_high_water_str =
hw_str, "high_water");
cmdline_parse_token_num_t cmd_lfc_set_high_water =
TOKEN_NUM_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
- high_water, UINT32);
+ high_water, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_lfc_set_low_water_str =
TOKEN_STRING_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
lw_str, "low_water");
cmdline_parse_token_num_t cmd_lfc_set_low_water =
TOKEN_NUM_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
- low_water, UINT32);
+ low_water, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_lfc_set_pause_time_str =
TOKEN_STRING_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
pt_str, "pause_time");
cmdline_parse_token_num_t cmd_lfc_set_pause_time =
TOKEN_NUM_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
- pause_time, UINT16);
+ pause_time, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_lfc_set_send_xon_str =
TOKEN_STRING_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
xon_str, "send_xon");
cmdline_parse_token_num_t cmd_lfc_set_send_xon =
TOKEN_NUM_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
- send_xon, UINT16);
+ send_xon, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_lfc_set_mac_ctrl_frame_fwd_mode =
TOKEN_STRING_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
mac_ctrl_frame_fwd, "mac_ctrl_frame_fwd");
@@ -6731,7 +6731,7 @@ cmdline_parse_token_string_t cmd_lfc_set_autoneg =
autoneg, "on#off");
cmdline_parse_token_num_t cmd_lfc_set_portid =
TOKEN_NUM_INITIALIZER(struct cmd_link_flow_ctrl_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
/* forward declaration */
static void
@@ -7026,19 +7026,19 @@ cmdline_parse_token_string_t cmd_pfc_set_tx_mode =
tx_pfc_mode, "on#off");
cmdline_parse_token_num_t cmd_pfc_set_high_water =
TOKEN_NUM_INITIALIZER(struct cmd_priority_flow_ctrl_set_result,
- high_water, UINT32);
+ high_water, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_pfc_set_low_water =
TOKEN_NUM_INITIALIZER(struct cmd_priority_flow_ctrl_set_result,
- low_water, UINT32);
+ low_water, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_pfc_set_pause_time =
TOKEN_NUM_INITIALIZER(struct cmd_priority_flow_ctrl_set_result,
- pause_time, UINT16);
+ pause_time, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_pfc_set_priority =
TOKEN_NUM_INITIALIZER(struct cmd_priority_flow_ctrl_set_result,
- priority, UINT8);
+ priority, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_pfc_set_portid =
TOKEN_NUM_INITIALIZER(struct cmd_priority_flow_ctrl_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_priority_flow_control_set = {
.f = cmd_priority_flow_ctrl_set_parsed,
@@ -7176,7 +7176,7 @@ cmdline_parse_token_string_t cmd_start_tx_first_n_tx_first =
tx_first, "tx_first");
cmdline_parse_token_num_t cmd_start_tx_first_n_tx_num =
TOKEN_NUM_INITIALIZER(struct cmd_start_tx_first_n_result,
- tx_num, UINT32);
+ tx_num, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_start_tx_first_n = {
.f = cmd_start_tx_first_n_parsed,
@@ -7207,7 +7207,7 @@ cmdline_parse_token_string_t cmd_set_link_up_link_up =
cmdline_parse_token_string_t cmd_set_link_up_port =
TOKEN_STRING_INITIALIZER(struct cmd_set_link_up_result, port, "port");
cmdline_parse_token_num_t cmd_set_link_up_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_set_link_up_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_link_up_result, port_id, CMDLINE_UINT16);
static void cmd_set_link_up_parsed(__attribute__((unused)) void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -7246,7 +7246,7 @@ cmdline_parse_token_string_t cmd_set_link_down_link_down =
cmdline_parse_token_string_t cmd_set_link_down_port =
TOKEN_STRING_INITIALIZER(struct cmd_set_link_down_result, port, "port");
cmdline_parse_token_num_t cmd_set_link_down_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_set_link_down_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_link_down_result, port_id, CMDLINE_UINT16);
static void cmd_set_link_down_parsed(
__attribute__((unused)) void *parsed_result,
@@ -7433,7 +7433,7 @@ cmdline_parse_token_string_t cmd_showport_what =
TOKEN_STRING_INITIALIZER(struct cmd_showport_result, what,
"info#summary#stats#xstats#fdir#stat_qmap#dcb_tc#cap");
cmdline_parse_token_num_t cmd_showport_portnum =
- TOKEN_NUM_INITIALIZER(struct cmd_showport_result, portnum, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_showport_result, portnum, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_showport = {
.f = cmd_showport_parsed,
@@ -7524,9 +7524,9 @@ cmdline_parse_token_string_t cmd_showqueue_type =
cmdline_parse_token_string_t cmd_showqueue_what =
TOKEN_STRING_INITIALIZER(struct cmd_showqueue_result, what, "info");
cmdline_parse_token_num_t cmd_showqueue_portnum =
- TOKEN_NUM_INITIALIZER(struct cmd_showqueue_result, portnum, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_showqueue_result, portnum, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_showqueue_queuenum =
- TOKEN_NUM_INITIALIZER(struct cmd_showqueue_result, queuenum, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_showqueue_result, queuenum, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_showqueue = {
.f = cmd_showqueue_parsed,
@@ -7607,9 +7607,9 @@ cmdline_parse_token_string_t cmd_read_reg_read =
cmdline_parse_token_string_t cmd_read_reg_reg =
TOKEN_STRING_INITIALIZER(struct cmd_read_reg_result, reg, "reg");
cmdline_parse_token_num_t cmd_read_reg_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_read_reg_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_read_reg_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_read_reg_reg_off =
- TOKEN_NUM_INITIALIZER(struct cmd_read_reg_result, reg_off, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_read_reg_result, reg_off, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_read_reg = {
.f = cmd_read_reg_parsed,
@@ -7652,16 +7652,16 @@ cmdline_parse_token_string_t cmd_read_reg_bit_field_regfield =
regfield, "regfield");
cmdline_parse_token_num_t cmd_read_reg_bit_field_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_field_result, port_id,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_read_reg_bit_field_reg_off =
TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_field_result, reg_off,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_read_reg_bit_field_bit1_pos =
TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_field_result, bit1_pos,
- UINT8);
+ CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_read_reg_bit_field_bit2_pos =
TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_field_result, bit2_pos,
- UINT8);
+ CMDLINE_UINT8);
cmdline_parse_inst_t cmd_read_reg_bit_field = {
.f = cmd_read_reg_bit_field_parsed,
@@ -7703,11 +7703,11 @@ cmdline_parse_token_string_t cmd_read_reg_bit_regbit =
TOKEN_STRING_INITIALIZER(struct cmd_read_reg_bit_result,
regbit, "regbit");
cmdline_parse_token_num_t cmd_read_reg_bit_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_read_reg_bit_reg_off =
- TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_result, reg_off, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_result, reg_off, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_read_reg_bit_bit_pos =
- TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_result, bit_pos, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_read_reg_bit_result, bit_pos, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_read_reg_bit = {
.f = cmd_read_reg_bit_parsed,
@@ -7746,11 +7746,11 @@ cmdline_parse_token_string_t cmd_write_reg_write =
cmdline_parse_token_string_t cmd_write_reg_reg =
TOKEN_STRING_INITIALIZER(struct cmd_write_reg_result, reg, "reg");
cmdline_parse_token_num_t cmd_write_reg_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_write_reg_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_write_reg_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_write_reg_reg_off =
- TOKEN_NUM_INITIALIZER(struct cmd_write_reg_result, reg_off, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_write_reg_result, reg_off, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_write_reg_value =
- TOKEN_NUM_INITIALIZER(struct cmd_write_reg_result, value, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_write_reg_result, value, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_write_reg = {
.f = cmd_write_reg_parsed,
@@ -7795,19 +7795,19 @@ cmdline_parse_token_string_t cmd_write_reg_bit_field_regfield =
regfield, "regfield");
cmdline_parse_token_num_t cmd_write_reg_bit_field_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_field_result, port_id,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_write_reg_bit_field_reg_off =
TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_field_result, reg_off,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_write_reg_bit_field_bit1_pos =
TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_field_result, bit1_pos,
- UINT8);
+ CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_write_reg_bit_field_bit2_pos =
TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_field_result, bit2_pos,
- UINT8);
+ CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_write_reg_bit_field_value =
TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_field_result, value,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_inst_t cmd_write_reg_bit_field = {
.f = cmd_write_reg_bit_field_parsed,
@@ -7853,13 +7853,13 @@ cmdline_parse_token_string_t cmd_write_reg_bit_regbit =
TOKEN_STRING_INITIALIZER(struct cmd_write_reg_bit_result,
regbit, "regbit");
cmdline_parse_token_num_t cmd_write_reg_bit_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_write_reg_bit_reg_off =
- TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_result, reg_off, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_result, reg_off, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_write_reg_bit_bit_pos =
- TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_result, bit_pos, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_result, bit_pos, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_write_reg_bit_value =
- TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_result, value, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_write_reg_bit_result, value, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_write_reg_bit = {
.f = cmd_write_reg_bit_parsed,
@@ -7905,11 +7905,11 @@ cmdline_parse_token_string_t cmd_read_rxd_txd_rxd_txd =
TOKEN_STRING_INITIALIZER(struct cmd_read_rxd_txd_result, rxd_txd,
"rxd#txd");
cmdline_parse_token_num_t cmd_read_rxd_txd_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_read_rxd_txd_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_read_rxd_txd_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_read_rxd_txd_queue_id =
- TOKEN_NUM_INITIALIZER(struct cmd_read_rxd_txd_result, queue_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_read_rxd_txd_result, queue_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_read_rxd_txd_desc_id =
- TOKEN_NUM_INITIALIZER(struct cmd_read_rxd_txd_result, desc_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_read_rxd_txd_result, desc_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_read_rxd_txd = {
.f = cmd_read_rxd_txd_parsed,
@@ -7987,7 +7987,7 @@ cmdline_parse_token_string_t cmd_mac_addr_what =
"add#remove#set");
cmdline_parse_token_num_t cmd_mac_addr_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_mac_addr_result, port_num,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_etheraddr_t cmd_mac_addr_addr =
TOKEN_ETHERADDR_INITIALIZER(struct cmd_mac_addr_result, address);
@@ -8033,7 +8033,7 @@ cmdline_parse_token_string_t cmd_eth_peer_set =
cmdline_parse_token_string_t cmd_eth_peer =
TOKEN_STRING_INITIALIZER(struct cmd_eth_peer_result, eth_peer, "eth-peer");
cmdline_parse_token_num_t cmd_eth_peer_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_eth_peer_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_eth_peer_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_eth_peer_addr =
TOKEN_STRING_INITIALIZER(struct cmd_eth_peer_result, peer_addr, NULL);
@@ -8082,13 +8082,13 @@ cmdline_parse_token_string_t cmd_setqmap_what =
what, "tx#rx");
cmdline_parse_token_num_t cmd_setqmap_portid =
TOKEN_NUM_INITIALIZER(struct cmd_set_qmap_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_setqmap_queueid =
TOKEN_NUM_INITIALIZER(struct cmd_set_qmap_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_setqmap_mapvalue =
TOKEN_NUM_INITIALIZER(struct cmd_set_qmap_result,
- map_value, UINT8);
+ map_value, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_set_qmap = {
.f = cmd_set_qmap_parsed,
@@ -8184,7 +8184,7 @@ cmdline_parse_token_string_t cmd_set_uc_hash_port =
port, "port");
cmdline_parse_token_num_t cmd_set_uc_hash_portid =
TOKEN_NUM_INITIALIZER(struct cmd_set_uc_hash_table,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_uc_hash_what =
TOKEN_STRING_INITIALIZER(struct cmd_set_uc_hash_table,
what, "uta");
@@ -8245,7 +8245,7 @@ cmdline_parse_token_string_t cmd_set_uc_all_hash_port =
port, "port");
cmdline_parse_token_num_t cmd_set_uc_all_hash_portid =
TOKEN_NUM_INITIALIZER(struct cmd_set_uc_all_hash_table,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_uc_all_hash_what =
TOKEN_STRING_INITIALIZER(struct cmd_set_uc_all_hash_table,
what, "uta");
@@ -8337,13 +8337,13 @@ cmdline_parse_token_string_t cmd_set_vf_macvlan_port =
port, "port");
cmdline_parse_token_num_t cmd_set_vf_macvlan_portid =
TOKEN_NUM_INITIALIZER(struct cmd_set_vf_macvlan_filter,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_vf_macvlan_vf =
TOKEN_STRING_INITIALIZER(struct cmd_set_vf_macvlan_filter,
vf, "vf");
cmdline_parse_token_num_t cmd_set_vf_macvlan_vf_id =
TOKEN_NUM_INITIALIZER(struct cmd_set_vf_macvlan_filter,
- vf_id, UINT8);
+ vf_id, CMDLINE_UINT8);
cmdline_parse_token_etheraddr_t cmd_set_vf_macvlan_mac =
TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vf_macvlan_filter,
address);
@@ -8406,13 +8406,13 @@ cmdline_parse_token_string_t cmd_setvf_traffic_port =
port, "port");
cmdline_parse_token_num_t cmd_setvf_traffic_portid =
TOKEN_NUM_INITIALIZER(struct cmd_set_vf_traffic,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_setvf_traffic_vf =
TOKEN_STRING_INITIALIZER(struct cmd_set_vf_traffic,
vf, "vf");
cmdline_parse_token_num_t cmd_setvf_traffic_vfid =
TOKEN_NUM_INITIALIZER(struct cmd_set_vf_traffic,
- vf_id, UINT8);
+ vf_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_setvf_traffic_what =
TOKEN_STRING_INITIALIZER(struct cmd_set_vf_traffic,
what, "tx#rx");
@@ -8494,13 +8494,13 @@ cmdline_parse_token_string_t cmd_set_vf_rxmode_port =
port, "port");
cmdline_parse_token_num_t cmd_set_vf_rxmode_portid =
TOKEN_NUM_INITIALIZER(struct cmd_set_vf_rxmode,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_vf_rxmode_vf =
TOKEN_STRING_INITIALIZER(struct cmd_set_vf_rxmode,
vf, "vf");
cmdline_parse_token_num_t cmd_set_vf_rxmode_vfid =
TOKEN_NUM_INITIALIZER(struct cmd_set_vf_rxmode,
- vf_id, UINT8);
+ vf_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_set_vf_rxmode_what =
TOKEN_STRING_INITIALIZER(struct cmd_set_vf_rxmode,
what, "rxmode");
@@ -8577,13 +8577,13 @@ cmdline_parse_token_string_t cmd_vf_mac_addr_port =
port,"port");
cmdline_parse_token_num_t cmd_vf_mac_addr_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_vf_mac_addr_result,
- port_num, UINT16);
+ port_num, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_vf_mac_addr_vf =
TOKEN_STRING_INITIALIZER(struct cmd_vf_mac_addr_result,
vf,"vf");
cmdline_parse_token_num_t cmd_vf_mac_addr_vfnum =
TOKEN_NUM_INITIALIZER(struct cmd_vf_mac_addr_result,
- vf_num, UINT8);
+ vf_num, CMDLINE_UINT8);
cmdline_parse_token_etheraddr_t cmd_vf_mac_addr_addr =
TOKEN_ETHERADDR_INITIALIZER(struct cmd_vf_mac_addr_result,
address);
@@ -8668,19 +8668,19 @@ cmdline_parse_token_string_t cmd_vf_rx_vlan_filter_what =
what, "add#rm");
cmdline_parse_token_num_t cmd_vf_rx_vlan_filter_vlanid =
TOKEN_NUM_INITIALIZER(struct cmd_vf_rx_vlan_filter,
- vlan_id, UINT16);
+ vlan_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_vf_rx_vlan_filter_port =
TOKEN_STRING_INITIALIZER(struct cmd_vf_rx_vlan_filter,
port, "port");
cmdline_parse_token_num_t cmd_vf_rx_vlan_filter_portid =
TOKEN_NUM_INITIALIZER(struct cmd_vf_rx_vlan_filter,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_vf_rx_vlan_filter_vf =
TOKEN_STRING_INITIALIZER(struct cmd_vf_rx_vlan_filter,
vf, "vf");
cmdline_parse_token_num_t cmd_vf_rx_vlan_filter_vf_mask =
TOKEN_NUM_INITIALIZER(struct cmd_vf_rx_vlan_filter,
- vf_mask, UINT64);
+ vf_mask, CMDLINE_UINT64);
cmdline_parse_inst_t cmd_vf_rxvlan_filter = {
.f = cmd_vf_rx_vlan_filter_parsed,
@@ -8735,19 +8735,19 @@ cmdline_parse_token_string_t cmd_queue_rate_limit_port =
port, "port");
cmdline_parse_token_num_t cmd_queue_rate_limit_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_queue_rate_limit_result,
- port_num, UINT16);
+ port_num, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_queue_rate_limit_queue =
TOKEN_STRING_INITIALIZER(struct cmd_queue_rate_limit_result,
queue, "queue");
cmdline_parse_token_num_t cmd_queue_rate_limit_queuenum =
TOKEN_NUM_INITIALIZER(struct cmd_queue_rate_limit_result,
- queue_num, UINT8);
+ queue_num, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_queue_rate_limit_rate =
TOKEN_STRING_INITIALIZER(struct cmd_queue_rate_limit_result,
rate, "rate");
cmdline_parse_token_num_t cmd_queue_rate_limit_ratenum =
TOKEN_NUM_INITIALIZER(struct cmd_queue_rate_limit_result,
- rate_num, UINT16);
+ rate_num, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_queue_rate_limit = {
.f = cmd_queue_rate_limit_parsed,
@@ -8805,25 +8805,25 @@ cmdline_parse_token_string_t cmd_vf_rate_limit_port =
port, "port");
cmdline_parse_token_num_t cmd_vf_rate_limit_portnum =
TOKEN_NUM_INITIALIZER(struct cmd_vf_rate_limit_result,
- port_num, UINT16);
+ port_num, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_vf_rate_limit_vf =
TOKEN_STRING_INITIALIZER(struct cmd_vf_rate_limit_result,
vf, "vf");
cmdline_parse_token_num_t cmd_vf_rate_limit_vfnum =
TOKEN_NUM_INITIALIZER(struct cmd_vf_rate_limit_result,
- vf_num, UINT8);
+ vf_num, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_vf_rate_limit_rate =
TOKEN_STRING_INITIALIZER(struct cmd_vf_rate_limit_result,
rate, "rate");
cmdline_parse_token_num_t cmd_vf_rate_limit_ratenum =
TOKEN_NUM_INITIALIZER(struct cmd_vf_rate_limit_result,
- rate_num, UINT16);
+ rate_num, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_vf_rate_limit_q_msk =
TOKEN_STRING_INITIALIZER(struct cmd_vf_rate_limit_result,
q_msk, "queue_mask");
cmdline_parse_token_num_t cmd_vf_rate_limit_q_msk_val =
TOKEN_NUM_INITIALIZER(struct cmd_vf_rate_limit_result,
- q_msk_val, UINT64);
+ q_msk_val, CMDLINE_UINT64);
cmdline_parse_inst_t cmd_vf_rate_limit = {
.f = cmd_vf_rate_limit_parsed,
@@ -8945,7 +8945,7 @@ cmdline_parse_token_string_t cmd_tunnel_filter_what =
what, "add#rm");
cmdline_parse_token_num_t cmd_tunnel_filter_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_etheraddr_t cmd_tunnel_filter_outer_mac =
TOKEN_ETHERADDR_INITIALIZER(struct cmd_tunnel_filter_result,
outer_mac);
@@ -8954,7 +8954,7 @@ cmdline_parse_token_etheraddr_t cmd_tunnel_filter_inner_mac =
inner_mac);
cmdline_parse_token_num_t cmd_tunnel_filter_innner_vlan =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
- inner_vlan, UINT16);
+ inner_vlan, CMDLINE_UINT16);
cmdline_parse_token_ipaddr_t cmd_tunnel_filter_ip_value =
TOKEN_IPADDR_INITIALIZER(struct cmd_tunnel_filter_result,
ip_value);
@@ -8968,10 +8968,10 @@ cmdline_parse_token_string_t cmd_tunnel_filter_filter_type =
"imac#omac-imac-tenid");
cmdline_parse_token_num_t cmd_tunnel_filter_tenant_id =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
- tenant_id, UINT32);
+ tenant_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_tunnel_filter_queue_num =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_filter_result,
- queue_num, UINT16);
+ queue_num, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_tunnel_filter = {
.f = cmd_tunnel_filter_parsed,
@@ -9037,10 +9037,10 @@ cmdline_parse_token_string_t cmd_tunnel_udp_config_what =
what, "add#rm");
cmdline_parse_token_num_t cmd_tunnel_udp_config_udp_port =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_udp_config,
- udp_port, UINT16);
+ udp_port, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_tunnel_udp_config_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_tunnel_udp_config,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_tunnel_udp_config = {
.f = cmd_tunnel_udp_config_parsed,
@@ -9110,7 +9110,7 @@ cmdline_parse_token_string_t cmd_config_tunnel_udp_port_config =
"config");
cmdline_parse_token_num_t cmd_config_tunnel_udp_port_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_config_tunnel_udp_port, port_id,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_tunnel_udp_port_tunnel_port =
TOKEN_STRING_INITIALIZER(struct cmd_config_tunnel_udp_port,
udp_tunnel_port,
@@ -9123,7 +9123,7 @@ cmdline_parse_token_string_t cmd_config_tunnel_udp_port_tunnel_type =
"vxlan#geneve#vxlan-gpe");
cmdline_parse_token_num_t cmd_config_tunnel_udp_port_value =
TOKEN_NUM_INITIALIZER(struct cmd_config_tunnel_udp_port, udp_port,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_inst_t cmd_cfg_tunnel_udp_port = {
.f = cmd_cfg_tunnel_udp_port_parsed,
@@ -9172,13 +9172,13 @@ cmdline_parse_token_string_t cmd_global_config_cmd =
"global_config");
cmdline_parse_token_num_t cmd_global_config_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_global_config_result, port_id,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_global_config_type =
TOKEN_STRING_INITIALIZER(struct cmd_global_config_result,
cfg_type, "gre-key-len");
cmdline_parse_token_num_t cmd_global_config_gre_key_len =
TOKEN_NUM_INITIALIZER(struct cmd_global_config_result,
- len, UINT8);
+ len, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_global_config = {
.f = cmd_global_config_parsed,
@@ -9215,13 +9215,13 @@ cmdline_parse_token_string_t cmd_mirror_mask_port =
port, "port");
cmdline_parse_token_num_t cmd_mirror_mask_portid =
TOKEN_NUM_INITIALIZER(struct cmd_set_mirror_mask_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_mirror_mask_mirror =
TOKEN_STRING_INITIALIZER(struct cmd_set_mirror_mask_result,
mirror, "mirror-rule");
cmdline_parse_token_num_t cmd_mirror_mask_ruleid =
TOKEN_NUM_INITIALIZER(struct cmd_set_mirror_mask_result,
- rule_id, UINT8);
+ rule_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_mirror_mask_what =
TOKEN_STRING_INITIALIZER(struct cmd_set_mirror_mask_result,
what, "pool-mirror-up#pool-mirror-down"
@@ -9234,7 +9234,7 @@ cmdline_parse_token_string_t cmd_mirror_mask_dstpool =
dstpool, "dst-pool");
cmdline_parse_token_num_t cmd_mirror_mask_poolid =
TOKEN_NUM_INITIALIZER(struct cmd_set_mirror_mask_result,
- dstpool_id, UINT8);
+ dstpool_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_mirror_mask_on =
TOKEN_STRING_INITIALIZER(struct cmd_set_mirror_mask_result,
on, "on#off");
@@ -9330,13 +9330,13 @@ cmdline_parse_token_string_t cmd_mirror_link_port =
port, "port");
cmdline_parse_token_num_t cmd_mirror_link_portid =
TOKEN_NUM_INITIALIZER(struct cmd_set_mirror_link_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_mirror_link_mirror =
TOKEN_STRING_INITIALIZER(struct cmd_set_mirror_link_result,
mirror, "mirror-rule");
cmdline_parse_token_num_t cmd_mirror_link_ruleid =
TOKEN_NUM_INITIALIZER(struct cmd_set_mirror_link_result,
- rule_id, UINT8);
+ rule_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_mirror_link_what =
TOKEN_STRING_INITIALIZER(struct cmd_set_mirror_link_result,
what, "uplink-mirror#downlink-mirror");
@@ -9345,7 +9345,7 @@ cmdline_parse_token_string_t cmd_mirror_link_dstpool =
dstpool, "dst-pool");
cmdline_parse_token_num_t cmd_mirror_link_poolid =
TOKEN_NUM_INITIALIZER(struct cmd_set_mirror_link_result,
- dstpool_id, UINT8);
+ dstpool_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_mirror_link_on =
TOKEN_STRING_INITIALIZER(struct cmd_set_mirror_link_result,
on, "on#off");
@@ -9416,13 +9416,13 @@ cmdline_parse_token_string_t cmd_rm_mirror_rule_port =
port, "port");
cmdline_parse_token_num_t cmd_rm_mirror_rule_portid =
TOKEN_NUM_INITIALIZER(struct cmd_rm_mirror_rule_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_rm_mirror_rule_mirror =
TOKEN_STRING_INITIALIZER(struct cmd_rm_mirror_rule_result,
mirror, "mirror-rule");
cmdline_parse_token_num_t cmd_rm_mirror_rule_ruleid =
TOKEN_NUM_INITIALIZER(struct cmd_rm_mirror_rule_result,
- rule_id, UINT8);
+ rule_id, CMDLINE_UINT8);
static void
cmd_reset_mirror_rule_parsed(void *parsed_result,
@@ -9615,7 +9615,7 @@ cmdline_parse_token_string_t cmd_syn_filter_filter =
filter, "syn_filter");
cmdline_parse_token_num_t cmd_syn_filter_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_syn_filter_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_syn_filter_ops =
TOKEN_STRING_INITIALIZER(struct cmd_syn_filter_result,
ops, "add#del");
@@ -9630,7 +9630,7 @@ cmdline_parse_token_string_t cmd_syn_filter_queue =
queue, "queue");
cmdline_parse_token_num_t cmd_syn_filter_queue_id =
TOKEN_NUM_INITIALIZER(struct cmd_syn_filter_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_syn_filter = {
.f = cmd_syn_filter_parsed,
@@ -9707,7 +9707,7 @@ cmdline_parse_token_string_t cmd_queue_region_port =
TOKEN_STRING_INITIALIZER(struct cmd_queue_region_result, port, "port");
cmdline_parse_token_num_t cmd_queue_region_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_queue_region_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_queue_region_cmd =
TOKEN_STRING_INITIALIZER(struct cmd_queue_region_result,
cmd, "queue-region");
@@ -9716,19 +9716,19 @@ cmdline_parse_token_string_t cmd_queue_region_id =
region, "region_id");
cmdline_parse_token_num_t cmd_queue_region_index =
TOKEN_NUM_INITIALIZER(struct cmd_queue_region_result,
- region_id, UINT8);
+ region_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_queue_region_queue_start_index =
TOKEN_STRING_INITIALIZER(struct cmd_queue_region_result,
queue_start_index, "queue_start_index");
cmdline_parse_token_num_t cmd_queue_region_queue_id =
TOKEN_NUM_INITIALIZER(struct cmd_queue_region_result,
- queue_id, UINT8);
+ queue_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_queue_region_queue_num =
TOKEN_STRING_INITIALIZER(struct cmd_queue_region_result,
queue_num, "queue_num");
cmdline_parse_token_num_t cmd_queue_region_queue_num_value =
TOKEN_NUM_INITIALIZER(struct cmd_queue_region_result,
- queue_num_value, UINT8);
+ queue_num_value, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_queue_region = {
.f = cmd_queue_region_parsed,
@@ -9807,7 +9807,7 @@ cmdline_parse_token_string_t cmd_region_flowtype_port =
port, "port");
cmdline_parse_token_num_t cmd_region_flowtype_port_index =
TOKEN_NUM_INITIALIZER(struct cmd_region_flowtype_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_region_flowtype_cmd =
TOKEN_STRING_INITIALIZER(struct cmd_region_flowtype_result,
cmd, "queue-region");
@@ -9816,13 +9816,13 @@ cmdline_parse_token_string_t cmd_region_flowtype_index =
region, "region_id");
cmdline_parse_token_num_t cmd_region_flowtype_id =
TOKEN_NUM_INITIALIZER(struct cmd_region_flowtype_result,
- region_id, UINT8);
+ region_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_region_flowtype_flow_index =
TOKEN_STRING_INITIALIZER(struct cmd_region_flowtype_result,
flowtype, "flowtype");
cmdline_parse_token_num_t cmd_region_flowtype_flow_id =
TOKEN_NUM_INITIALIZER(struct cmd_region_flowtype_result,
- flowtype_id, UINT8);
+ flowtype_id, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_region_flowtype = {
.f = cmd_region_flowtype_parsed,
.data = NULL,
@@ -9898,7 +9898,7 @@ cmdline_parse_token_string_t cmd_user_priority_region_port =
port, "port");
cmdline_parse_token_num_t cmd_user_priority_region_port_index =
TOKEN_NUM_INITIALIZER(struct cmd_user_priority_region_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_user_priority_region_cmd =
TOKEN_STRING_INITIALIZER(struct cmd_user_priority_region_result,
cmd, "queue-region");
@@ -9907,13 +9907,13 @@ cmdline_parse_token_string_t cmd_user_priority_region_UP =
user_priority, "UP");
cmdline_parse_token_num_t cmd_user_priority_region_UP_id =
TOKEN_NUM_INITIALIZER(struct cmd_user_priority_region_result,
- user_priority_id, UINT8);
+ user_priority_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_user_priority_region_region =
TOKEN_STRING_INITIALIZER(struct cmd_user_priority_region_result,
region, "region_id");
cmdline_parse_token_num_t cmd_user_priority_region_region_id =
TOKEN_NUM_INITIALIZER(struct cmd_user_priority_region_result,
- region_id, UINT8);
+ region_id, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_user_priority_region = {
.f = cmd_user_priority_region_parsed,
@@ -9991,7 +9991,7 @@ cmdline_parse_token_string_t cmd_flush_queue_region_port =
port, "port");
cmdline_parse_token_num_t cmd_flush_queue_region_port_index =
TOKEN_NUM_INITIALIZER(struct cmd_flush_queue_region_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flush_queue_region_cmd =
TOKEN_STRING_INITIALIZER(struct cmd_flush_queue_region_result,
cmd, "queue-region");
@@ -10072,7 +10072,7 @@ cmdline_parse_token_string_t cmd_show_queue_region_info_port =
port, "port");
cmdline_parse_token_num_t cmd_show_queue_region_info_port_index =
TOKEN_NUM_INITIALIZER(struct cmd_show_queue_region_info,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_show_queue_region_info_cmd =
TOKEN_STRING_INITIALIZER(struct cmd_show_queue_region_info,
cmd, "queue-region");
@@ -10173,7 +10173,7 @@ cmdline_parse_token_string_t cmd_2tuple_filter_filter =
filter, "2tuple_filter");
cmdline_parse_token_num_t cmd_2tuple_filter_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_2tuple_filter_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_2tuple_filter_ops =
TOKEN_STRING_INITIALIZER(struct cmd_2tuple_filter_result,
ops, "add#del");
@@ -10182,37 +10182,37 @@ cmdline_parse_token_string_t cmd_2tuple_filter_dst_port =
dst_port, "dst_port");
cmdline_parse_token_num_t cmd_2tuple_filter_dst_port_value =
TOKEN_NUM_INITIALIZER(struct cmd_2tuple_filter_result,
- dst_port_value, UINT16);
+ dst_port_value, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_2tuple_filter_protocol =
TOKEN_STRING_INITIALIZER(struct cmd_2tuple_filter_result,
protocol, "protocol");
cmdline_parse_token_num_t cmd_2tuple_filter_protocol_value =
TOKEN_NUM_INITIALIZER(struct cmd_2tuple_filter_result,
- protocol_value, UINT8);
+ protocol_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_2tuple_filter_mask =
TOKEN_STRING_INITIALIZER(struct cmd_2tuple_filter_result,
mask, "mask");
cmdline_parse_token_num_t cmd_2tuple_filter_mask_value =
TOKEN_NUM_INITIALIZER(struct cmd_2tuple_filter_result,
- mask_value, INT8);
+ mask_value, CMDLINE_INT8);
cmdline_parse_token_string_t cmd_2tuple_filter_tcp_flags =
TOKEN_STRING_INITIALIZER(struct cmd_2tuple_filter_result,
tcp_flags, "tcp_flags");
cmdline_parse_token_num_t cmd_2tuple_filter_tcp_flags_value =
TOKEN_NUM_INITIALIZER(struct cmd_2tuple_filter_result,
- tcp_flags_value, UINT8);
+ tcp_flags_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_2tuple_filter_priority =
TOKEN_STRING_INITIALIZER(struct cmd_2tuple_filter_result,
priority, "priority");
cmdline_parse_token_num_t cmd_2tuple_filter_priority_value =
TOKEN_NUM_INITIALIZER(struct cmd_2tuple_filter_result,
- priority_value, UINT8);
+ priority_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_2tuple_filter_queue =
TOKEN_STRING_INITIALIZER(struct cmd_2tuple_filter_result,
queue, "queue");
cmdline_parse_token_num_t cmd_2tuple_filter_queue_id =
TOKEN_NUM_INITIALIZER(struct cmd_2tuple_filter_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_2tuple_filter = {
.f = cmd_2tuple_filter_parsed,
@@ -10352,7 +10352,7 @@ cmdline_parse_token_string_t cmd_5tuple_filter_filter =
filter, "5tuple_filter");
cmdline_parse_token_num_t cmd_5tuple_filter_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_5tuple_filter_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_5tuple_filter_ops =
TOKEN_STRING_INITIALIZER(struct cmd_5tuple_filter_result,
ops, "add#del");
@@ -10373,43 +10373,43 @@ cmdline_parse_token_string_t cmd_5tuple_filter_dst_port =
dst_port, "dst_port");
cmdline_parse_token_num_t cmd_5tuple_filter_dst_port_value =
TOKEN_NUM_INITIALIZER(struct cmd_5tuple_filter_result,
- dst_port_value, UINT16);
+ dst_port_value, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_5tuple_filter_src_port =
TOKEN_STRING_INITIALIZER(struct cmd_5tuple_filter_result,
src_port, "src_port");
cmdline_parse_token_num_t cmd_5tuple_filter_src_port_value =
TOKEN_NUM_INITIALIZER(struct cmd_5tuple_filter_result,
- src_port_value, UINT16);
+ src_port_value, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_5tuple_filter_protocol =
TOKEN_STRING_INITIALIZER(struct cmd_5tuple_filter_result,
protocol, "protocol");
cmdline_parse_token_num_t cmd_5tuple_filter_protocol_value =
TOKEN_NUM_INITIALIZER(struct cmd_5tuple_filter_result,
- protocol_value, UINT8);
+ protocol_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_5tuple_filter_mask =
TOKEN_STRING_INITIALIZER(struct cmd_5tuple_filter_result,
mask, "mask");
cmdline_parse_token_num_t cmd_5tuple_filter_mask_value =
TOKEN_NUM_INITIALIZER(struct cmd_5tuple_filter_result,
- mask_value, INT8);
+ mask_value, CMDLINE_INT8);
cmdline_parse_token_string_t cmd_5tuple_filter_tcp_flags =
TOKEN_STRING_INITIALIZER(struct cmd_5tuple_filter_result,
tcp_flags, "tcp_flags");
cmdline_parse_token_num_t cmd_5tuple_filter_tcp_flags_value =
TOKEN_NUM_INITIALIZER(struct cmd_5tuple_filter_result,
- tcp_flags_value, UINT8);
+ tcp_flags_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_5tuple_filter_priority =
TOKEN_STRING_INITIALIZER(struct cmd_5tuple_filter_result,
priority, "priority");
cmdline_parse_token_num_t cmd_5tuple_filter_priority_value =
TOKEN_NUM_INITIALIZER(struct cmd_5tuple_filter_result,
- priority_value, UINT8);
+ priority_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_5tuple_filter_queue =
TOKEN_STRING_INITIALIZER(struct cmd_5tuple_filter_result,
queue, "queue");
cmdline_parse_token_num_t cmd_5tuple_filter_queue_id =
TOKEN_NUM_INITIALIZER(struct cmd_5tuple_filter_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_5tuple_filter = {
.f = cmd_5tuple_filter_parsed,
@@ -10575,7 +10575,7 @@ cmdline_parse_token_string_t cmd_flex_filter_filter =
filter, "flex_filter");
cmdline_parse_token_num_t cmd_flex_filter_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_flex_filter_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flex_filter_ops =
TOKEN_STRING_INITIALIZER(struct cmd_flex_filter_result,
ops, "add#del");
@@ -10584,7 +10584,7 @@ cmdline_parse_token_string_t cmd_flex_filter_len =
len, "len");
cmdline_parse_token_num_t cmd_flex_filter_len_value =
TOKEN_NUM_INITIALIZER(struct cmd_flex_filter_result,
- len_value, UINT8);
+ len_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_flex_filter_bytes =
TOKEN_STRING_INITIALIZER(struct cmd_flex_filter_result,
bytes, "bytes");
@@ -10602,13 +10602,13 @@ cmdline_parse_token_string_t cmd_flex_filter_priority =
priority, "priority");
cmdline_parse_token_num_t cmd_flex_filter_priority_value =
TOKEN_NUM_INITIALIZER(struct cmd_flex_filter_result,
- priority_value, UINT8);
+ priority_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_flex_filter_queue =
TOKEN_STRING_INITIALIZER(struct cmd_flex_filter_result,
queue, "queue");
cmdline_parse_token_num_t cmd_flex_filter_queue_id =
TOKEN_NUM_INITIALIZER(struct cmd_flex_filter_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_flex_filter = {
.f = cmd_flex_filter_parsed,
.data = NULL,
@@ -10654,7 +10654,7 @@ cmdline_parse_token_string_t cmd_ethertype_filter_filter =
filter, "ethertype_filter");
cmdline_parse_token_num_t cmd_ethertype_filter_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_ethertype_filter_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_ethertype_filter_ops =
TOKEN_STRING_INITIALIZER(struct cmd_ethertype_filter_result,
ops, "add#del");
@@ -10669,7 +10669,7 @@ cmdline_parse_token_string_t cmd_ethertype_filter_ethertype =
ethertype, "ethertype");
cmdline_parse_token_num_t cmd_ethertype_filter_ethertype_value =
TOKEN_NUM_INITIALIZER(struct cmd_ethertype_filter_result,
- ethertype_value, UINT16);
+ ethertype_value, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_ethertype_filter_drop =
TOKEN_STRING_INITIALIZER(struct cmd_ethertype_filter_result,
drop, "drop#fwd");
@@ -10678,7 +10678,7 @@ cmdline_parse_token_string_t cmd_ethertype_filter_queue =
queue, "queue");
cmdline_parse_token_num_t cmd_ethertype_filter_queue_id =
TOKEN_NUM_INITIALIZER(struct cmd_ethertype_filter_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
static void
cmd_ethertype_filter_parsed(void *parsed_result,
@@ -11157,7 +11157,7 @@ cmdline_parse_token_string_t cmd_flow_director_filter =
flow_director_filter, "flow_director_filter");
cmdline_parse_token_num_t cmd_flow_director_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_ops =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
ops, "add#del#update");
@@ -11172,7 +11172,7 @@ cmdline_parse_token_string_t cmd_flow_director_ether =
ether, "ether");
cmdline_parse_token_num_t cmd_flow_director_ether_type =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- ether_type, UINT16);
+ ether_type, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_src =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
src, "src");
@@ -11181,7 +11181,7 @@ cmdline_parse_token_ipaddr_t cmd_flow_director_ip_src =
ip_src);
cmdline_parse_token_num_t cmd_flow_director_port_src =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- port_src, UINT16);
+ port_src, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_dst =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
dst, "dst");
@@ -11190,37 +11190,37 @@ cmdline_parse_token_ipaddr_t cmd_flow_director_ip_dst =
ip_dst);
cmdline_parse_token_num_t cmd_flow_director_port_dst =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- port_dst, UINT16);
+ port_dst, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_verify_tag =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
verify_tag, "verify_tag");
cmdline_parse_token_num_t cmd_flow_director_verify_tag_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- verify_tag_value, UINT32);
+ verify_tag_value, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_flow_director_tos =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
tos, "tos");
cmdline_parse_token_num_t cmd_flow_director_tos_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- tos_value, UINT8);
+ tos_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_flow_director_proto =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
proto, "proto");
cmdline_parse_token_num_t cmd_flow_director_proto_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- proto_value, UINT8);
+ proto_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_flow_director_ttl =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
ttl, "ttl");
cmdline_parse_token_num_t cmd_flow_director_ttl_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- ttl_value, UINT8);
+ ttl_value, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_flow_director_vlan =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
vlan, "vlan");
cmdline_parse_token_num_t cmd_flow_director_vlan_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- vlan_value, UINT16);
+ vlan_value, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_flexbytes =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
flexbytes, "flexbytes");
@@ -11238,13 +11238,13 @@ cmdline_parse_token_string_t cmd_flow_director_queue =
queue, "queue");
cmdline_parse_token_num_t cmd_flow_director_queue_id =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_fd_id =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
fd_id, "fd_id");
cmdline_parse_token_num_t cmd_flow_director_fd_id_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- fd_id_value, UINT32);
+ fd_id_value, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_flow_director_mode =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
@@ -11278,7 +11278,7 @@ cmdline_parse_token_string_t cmd_flow_director_tunnel_id =
tunnel_id, "tunnel-id");
cmdline_parse_token_num_t cmd_flow_director_tunnel_id_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_result,
- tunnel_id_value, UINT32);
+ tunnel_id_value, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_flow_director_packet =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_result,
packet, "packet");
@@ -11522,7 +11522,7 @@ cmdline_parse_token_string_t cmd_flush_flow_director_flush =
flush_flow_director, "flush_flow_director");
cmdline_parse_token_num_t cmd_flush_flow_director_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_flush_flow_director_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
static void
cmd_flush_flow_director_parsed(void *parsed_result,
@@ -11640,13 +11640,13 @@ cmdline_parse_token_string_t cmd_flow_director_mask =
flow_director_mask, "flow_director_mask");
cmdline_parse_token_num_t cmd_flow_director_mask_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_mask_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_mask_vlan =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_mask_result,
vlan, "vlan");
cmdline_parse_token_num_t cmd_flow_director_mask_vlan_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_mask_result,
- vlan_mask, UINT16);
+ vlan_mask, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_mask_src =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_mask_result,
src_mask, "src_mask");
@@ -11658,7 +11658,7 @@ cmdline_parse_token_ipaddr_t cmd_flow_director_mask_ipv6_src =
ipv6_src);
cmdline_parse_token_num_t cmd_flow_director_mask_port_src =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_mask_result,
- port_src, UINT16);
+ port_src, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_mask_dst =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_mask_result,
dst_mask, "dst_mask");
@@ -11670,7 +11670,7 @@ cmdline_parse_token_ipaddr_t cmd_flow_director_mask_ipv6_dst =
ipv6_dst);
cmdline_parse_token_num_t cmd_flow_director_mask_port_dst =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_mask_result,
- port_dst, UINT16);
+ port_dst, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_mask_mode =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_mask_result,
@@ -11689,19 +11689,19 @@ cmdline_parse_token_string_t cmd_flow_director_mask_mac =
mac, "mac");
cmdline_parse_token_num_t cmd_flow_director_mask_mac_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_mask_result,
- mac_addr_byte_mask, UINT8);
+ mac_addr_byte_mask, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_flow_director_mask_tunnel_type =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_mask_result,
tunnel_type, "tunnel-type");
cmdline_parse_token_num_t cmd_flow_director_mask_tunnel_type_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_mask_result,
- tunnel_type_mask, UINT8);
+ tunnel_type_mask, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_flow_director_mask_tunnel_id =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_mask_result,
tunnel_id, "tunnel-id");
cmdline_parse_token_num_t cmd_flow_director_mask_tunnel_id_value =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_mask_result,
- tunnel_id_mask, UINT32);
+ tunnel_id_mask, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_set_flow_director_ip_mask = {
.f = cmd_flow_director_mask_parsed,
@@ -11855,7 +11855,7 @@ cmdline_parse_token_string_t cmd_flow_director_flexmask =
"flow_director_flex_mask");
cmdline_parse_token_num_t cmd_flow_director_flexmask_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_flex_mask_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_flexmask_flow =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_flex_mask_result,
flow, "flow");
@@ -11973,7 +11973,7 @@ cmdline_parse_token_string_t cmd_flow_director_flexpayload =
"flow_director_flex_payload");
cmdline_parse_token_num_t cmd_flow_director_flexpayload_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_flow_director_flexpayload_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_flow_director_flexpayload_payload_layer =
TOKEN_STRING_INITIALIZER(struct cmd_flow_director_flexpayload_result,
payload_layer, "raw#l2#l3#l4");
@@ -12041,7 +12041,7 @@ cmdline_parse_token_string_t cmd_get_sym_hash_ena_per_port_all =
get_sym_hash_ena_per_port, "get_sym_hash_ena_per_port");
cmdline_parse_token_num_t cmd_get_sym_hash_ena_per_port_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_get_sym_hash_ena_per_port_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_get_sym_hash_ena_per_port = {
.f = cmd_get_sym_hash_per_port_parsed,
@@ -12097,7 +12097,7 @@ cmdline_parse_token_string_t cmd_set_sym_hash_ena_per_port_all =
set_sym_hash_ena_per_port, "set_sym_hash_ena_per_port");
cmdline_parse_token_num_t cmd_set_sym_hash_ena_per_port_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_set_sym_hash_ena_per_port_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_sym_hash_ena_per_port_enable =
TOKEN_STRING_INITIALIZER(struct cmd_set_sym_hash_ena_per_port_result,
enable, "enable#disable");
@@ -12222,7 +12222,7 @@ cmdline_parse_token_string_t cmd_get_hash_global_config_all =
get_hash_global_config, "get_hash_global_config");
cmdline_parse_token_num_t cmd_get_hash_global_config_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_get_hash_global_config_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_inst_t cmd_get_hash_global_config = {
.f = cmd_get_hash_global_config_parsed,
@@ -12297,7 +12297,7 @@ cmdline_parse_token_string_t cmd_set_hash_global_config_all =
set_hash_global_config, "set_hash_global_config");
cmdline_parse_token_num_t cmd_set_hash_global_config_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_set_hash_global_config_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_hash_global_config_hash_func =
TOKEN_STRING_INITIALIZER(struct cmd_set_hash_global_config_result,
hash_func, "toeplitz#simple_xor#symmetric_toeplitz#default");
@@ -12413,7 +12413,7 @@ cmdline_parse_token_string_t cmd_set_hash_input_set_cmd =
set_hash_input_set, "set_hash_input_set");
cmdline_parse_token_num_t cmd_set_hash_input_set_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_set_hash_input_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_hash_input_set_flow_type =
TOKEN_STRING_INITIALIZER(struct cmd_set_hash_input_set_result,
flow_type, NULL);
@@ -12486,7 +12486,7 @@ cmdline_parse_token_string_t cmd_set_fdir_input_set_cmd =
set_fdir_input_set, "set_fdir_input_set");
cmdline_parse_token_num_t cmd_set_fdir_input_set_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_set_fdir_input_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_fdir_input_set_flow_type =
TOKEN_STRING_INITIALIZER(struct cmd_set_fdir_input_set_result,
flow_type,
@@ -12559,7 +12559,7 @@ cmdline_parse_token_string_t cmd_mcast_addr_what =
TOKEN_STRING_INITIALIZER(struct cmd_mcast_addr_result, what,
"add#remove");
cmdline_parse_token_num_t cmd_mcast_addr_portnum =
- TOKEN_NUM_INITIALIZER(struct cmd_mcast_addr_result, port_num, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_mcast_addr_result, port_num, CMDLINE_UINT16);
cmdline_parse_token_etheraddr_t cmd_mcast_addr_addr =
TOKEN_ETHERADDR_INITIALIZER(struct cmd_mac_addr_result, address);
@@ -12608,7 +12608,7 @@ cmdline_parse_token_string_t cmd_config_l2_tunnel_eth_type_all_str =
cmdline_parse_token_num_t cmd_config_l2_tunnel_eth_type_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_l2_tunnel_eth_type_result,
- id, UINT16);
+ id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_l2_tunnel_eth_type_l2_tunnel =
TOKEN_STRING_INITIALIZER
(struct cmd_config_l2_tunnel_eth_type_result,
@@ -12624,7 +12624,7 @@ cmdline_parse_token_string_t cmd_config_l2_tunnel_eth_type_eth_type =
cmdline_parse_token_num_t cmd_config_l2_tunnel_eth_type_eth_type_val =
TOKEN_NUM_INITIALIZER
(struct cmd_config_l2_tunnel_eth_type_result,
- eth_type_val, UINT16);
+ eth_type_val, CMDLINE_UINT16);
static enum rte_eth_tunnel_type
str2fdir_l2_tunnel_type(char *string)
@@ -12742,7 +12742,7 @@ cmdline_parse_token_string_t cmd_config_l2_tunnel_en_dis_all_str =
cmdline_parse_token_num_t cmd_config_l2_tunnel_en_dis_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_l2_tunnel_en_dis_result,
- id, UINT16);
+ id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_l2_tunnel_en_dis_l2_tunnel =
TOKEN_STRING_INITIALIZER
(struct cmd_config_l2_tunnel_en_dis_result,
@@ -12920,7 +12920,7 @@ cmdline_parse_token_string_t cmd_config_e_tag_port_tag_id =
cmdline_parse_token_num_t cmd_config_e_tag_port_tag_id_val =
TOKEN_NUM_INITIALIZER
(struct cmd_config_e_tag_result,
- port_tag_id_val, UINT32);
+ port_tag_id_val, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_config_e_tag_e_tag_id =
TOKEN_STRING_INITIALIZER
(struct cmd_config_e_tag_result,
@@ -12928,7 +12928,7 @@ cmdline_parse_token_string_t cmd_config_e_tag_e_tag_id =
cmdline_parse_token_num_t cmd_config_e_tag_e_tag_id_val =
TOKEN_NUM_INITIALIZER
(struct cmd_config_e_tag_result,
- e_tag_id_val, UINT16);
+ e_tag_id_val, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_e_tag_dst_pool =
TOKEN_STRING_INITIALIZER
(struct cmd_config_e_tag_result,
@@ -12936,7 +12936,7 @@ cmdline_parse_token_string_t cmd_config_e_tag_dst_pool =
cmdline_parse_token_num_t cmd_config_e_tag_dst_pool_val =
TOKEN_NUM_INITIALIZER
(struct cmd_config_e_tag_result,
- dst_pool_val, UINT8);
+ dst_pool_val, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_config_e_tag_port =
TOKEN_STRING_INITIALIZER
(struct cmd_config_e_tag_result,
@@ -12944,7 +12944,7 @@ cmdline_parse_token_string_t cmd_config_e_tag_port =
cmdline_parse_token_num_t cmd_config_e_tag_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_e_tag_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_e_tag_vf =
TOKEN_STRING_INITIALIZER
(struct cmd_config_e_tag_result,
@@ -12952,7 +12952,7 @@ cmdline_parse_token_string_t cmd_config_e_tag_vf =
cmdline_parse_token_num_t cmd_config_e_tag_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_e_tag_result,
- vf_id, UINT8);
+ vf_id, CMDLINE_UINT8);
/* E-tag insertion configuration */
static void
@@ -13271,11 +13271,11 @@ cmdline_parse_token_string_t cmd_vf_vlan_anti_spoof_antispoof =
cmdline_parse_token_num_t cmd_vf_vlan_anti_spoof_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_vlan_anti_spoof_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_vlan_anti_spoof_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_vlan_anti_spoof_result,
- vf_id, UINT32);
+ vf_id, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_vf_vlan_anti_spoof_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_vf_vlan_anti_spoof_result,
@@ -13377,11 +13377,11 @@ cmdline_parse_token_string_t cmd_vf_mac_anti_spoof_antispoof =
cmdline_parse_token_num_t cmd_vf_mac_anti_spoof_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_mac_anti_spoof_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_mac_anti_spoof_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_mac_anti_spoof_result,
- vf_id, UINT32);
+ vf_id, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_vf_mac_anti_spoof_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_vf_mac_anti_spoof_result,
@@ -13483,11 +13483,11 @@ cmdline_parse_token_string_t cmd_vf_vlan_stripq_stripq =
cmdline_parse_token_num_t cmd_vf_vlan_stripq_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_vlan_stripq_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_vlan_stripq_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_vlan_stripq_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_vf_vlan_stripq_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_vf_vlan_stripq_result,
@@ -13589,15 +13589,15 @@ cmdline_parse_token_string_t cmd_vf_vlan_insert_insert =
cmdline_parse_token_num_t cmd_vf_vlan_insert_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_vlan_insert_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_vlan_insert_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_vlan_insert_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_vlan_insert_vlan_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_vlan_insert_result,
- vlan_id, UINT16);
+ vlan_id, CMDLINE_UINT16);
static void
cmd_set_vf_vlan_insert_parsed(
@@ -13687,7 +13687,7 @@ cmdline_parse_token_string_t cmd_tx_loopback_loopback =
cmdline_parse_token_num_t cmd_tx_loopback_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_tx_loopback_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_tx_loopback_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_tx_loopback_result,
@@ -13787,7 +13787,7 @@ cmdline_parse_token_string_t cmd_all_queues_drop_en_drop =
cmdline_parse_token_num_t cmd_all_queues_drop_en_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_all_queues_drop_en_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_all_queues_drop_en_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_all_queues_drop_en_result,
@@ -13879,11 +13879,11 @@ cmdline_parse_token_string_t cmd_vf_split_drop_en_drop =
cmdline_parse_token_num_t cmd_vf_split_drop_en_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_split_drop_en_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_split_drop_en_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_split_drop_en_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_vf_split_drop_en_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_vf_split_drop_en_result,
@@ -13973,11 +13973,11 @@ cmdline_parse_token_string_t cmd_set_vf_mac_addr_addr =
cmdline_parse_token_num_t cmd_set_vf_mac_addr_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_set_vf_mac_addr_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_vf_mac_addr_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_set_vf_mac_addr_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
cmdline_parse_token_etheraddr_t cmd_set_vf_mac_addr_mac_addr =
TOKEN_ETHERADDR_INITIALIZER(struct cmd_set_vf_mac_addr_result,
mac_addr);
@@ -14074,7 +14074,7 @@ cmdline_parse_token_string_t cmd_macsec_offload_on_offload =
cmdline_parse_token_num_t cmd_macsec_offload_on_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_macsec_offload_on_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_macsec_offload_on_on =
TOKEN_STRING_INITIALIZER
(struct cmd_macsec_offload_on_result,
@@ -14189,7 +14189,7 @@ cmdline_parse_token_string_t cmd_macsec_offload_off_offload =
cmdline_parse_token_num_t cmd_macsec_offload_off_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_macsec_offload_off_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_macsec_offload_off_off =
TOKEN_STRING_INITIALIZER
(struct cmd_macsec_offload_off_result,
@@ -14284,7 +14284,7 @@ cmdline_parse_token_string_t cmd_macsec_sc_tx_rx =
cmdline_parse_token_num_t cmd_macsec_sc_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_macsec_sc_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_etheraddr_t cmd_macsec_sc_mac =
TOKEN_ETHERADDR_INITIALIZER
(struct cmd_macsec_sc_result,
@@ -14292,7 +14292,7 @@ cmdline_parse_token_etheraddr_t cmd_macsec_sc_mac =
cmdline_parse_token_num_t cmd_macsec_sc_pi =
TOKEN_NUM_INITIALIZER
(struct cmd_macsec_sc_result,
- pi, UINT16);
+ pi, CMDLINE_UINT16);
static void
cmd_set_macsec_sc_parsed(
@@ -14376,19 +14376,19 @@ cmdline_parse_token_string_t cmd_macsec_sa_tx_rx =
cmdline_parse_token_num_t cmd_macsec_sa_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_macsec_sa_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_macsec_sa_idx =
TOKEN_NUM_INITIALIZER
(struct cmd_macsec_sa_result,
- idx, UINT8);
+ idx, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_macsec_sa_an =
TOKEN_NUM_INITIALIZER
(struct cmd_macsec_sa_result,
- an, UINT8);
+ an, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_macsec_sa_pn =
TOKEN_NUM_INITIALIZER
(struct cmd_macsec_sa_result,
- pn, UINT32);
+ pn, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_macsec_sa_key =
TOKEN_STRING_INITIALIZER
(struct cmd_macsec_sa_result,
@@ -14496,11 +14496,11 @@ cmdline_parse_token_string_t cmd_vf_promisc_promisc =
cmdline_parse_token_num_t cmd_vf_promisc_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_promisc_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_promisc_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_promisc_result,
- vf_id, UINT32);
+ vf_id, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_vf_promisc_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_vf_promisc_result,
@@ -14586,11 +14586,11 @@ cmdline_parse_token_string_t cmd_vf_allmulti_allmulti =
cmdline_parse_token_num_t cmd_vf_allmulti_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_allmulti_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_allmulti_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_allmulti_result,
- vf_id, UINT32);
+ vf_id, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_vf_allmulti_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_vf_allmulti_result,
@@ -14676,11 +14676,11 @@ cmdline_parse_token_string_t cmd_set_vf_broadcast_broadcast =
cmdline_parse_token_num_t cmd_set_vf_broadcast_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_set_vf_broadcast_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_vf_broadcast_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_set_vf_broadcast_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_vf_broadcast_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_set_vf_broadcast_result,
@@ -14770,11 +14770,11 @@ cmdline_parse_token_string_t cmd_set_vf_vlan_tag_tag =
cmdline_parse_token_num_t cmd_set_vf_vlan_tag_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_set_vf_vlan_tag_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_vf_vlan_tag_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_set_vf_vlan_tag_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_vf_vlan_tag_on_off =
TOKEN_STRING_INITIALIZER
(struct cmd_set_vf_vlan_tag_result,
@@ -14880,19 +14880,19 @@ cmdline_parse_token_string_t cmd_vf_tc_bw_max_bw =
cmdline_parse_token_num_t cmd_vf_tc_bw_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_tc_bw_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_tc_bw_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_tc_bw_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_vf_tc_bw_tc_no =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_tc_bw_result,
- tc_no, UINT8);
+ tc_no, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_vf_tc_bw_bw =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_tc_bw_result,
- bw, UINT32);
+ bw, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_vf_tc_bw_bw_list =
TOKEN_STRING_INITIALIZER
(struct cmd_vf_tc_bw_result,
@@ -14900,7 +14900,7 @@ cmdline_parse_token_string_t cmd_vf_tc_bw_bw_list =
cmdline_parse_token_num_t cmd_vf_tc_bw_tc_map =
TOKEN_NUM_INITIALIZER
(struct cmd_vf_tc_bw_result,
- tc_map, UINT8);
+ tc_map, CMDLINE_UINT8);
/* VF max bandwidth setting */
static void
@@ -15205,7 +15205,7 @@ cmdline_parse_token_string_t cmd_set_port_tm_hierarchy_default_default =
cmdline_parse_token_num_t cmd_set_port_tm_hierarchy_default_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_set_port_tm_hierarchy_default_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
static void cmd_set_port_tm_hierarchy_default_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -15285,27 +15285,27 @@ cmdline_parse_token_string_t cmd_set_vxlan_vni =
TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
"vni");
cmdline_parse_token_num_t cmd_set_vxlan_vni_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, vni, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_set_vxlan_udp_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
"udp-src");
cmdline_parse_token_num_t cmd_set_vxlan_udp_src_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_src, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_vxlan_udp_dst =
TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
"udp-dst");
cmdline_parse_token_num_t cmd_set_vxlan_udp_dst_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, udp_dst, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_vxlan_ip_tos =
TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
"ip-tos");
cmdline_parse_token_num_t cmd_set_vxlan_ip_tos_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tos, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tos, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_set_vxlan_ip_ttl =
TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
"ip-ttl");
cmdline_parse_token_num_t cmd_set_vxlan_ip_ttl_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, ttl, UINT8);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, ttl, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_set_vxlan_ip_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
"ip-src");
@@ -15320,7 +15320,7 @@ cmdline_parse_token_string_t cmd_set_vxlan_vlan =
TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
"vlan-tci");
cmdline_parse_token_num_t cmd_set_vxlan_vlan_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_vxlan_result, tci, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_vxlan_eth_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_vxlan_result, pos_token,
"eth-src");
@@ -15505,7 +15505,7 @@ cmdline_parse_token_string_t cmd_set_nvgre_tni =
TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
"tni");
cmdline_parse_token_num_t cmd_set_nvgre_tni_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tni, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_set_nvgre_ip_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
"ip-src");
@@ -15520,7 +15520,7 @@ cmdline_parse_token_string_t cmd_set_nvgre_vlan =
TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
"vlan-tci");
cmdline_parse_token_num_t cmd_set_nvgre_vlan_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_nvgre_result, tci, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_nvgre_eth_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_nvgre_result, pos_token,
"eth-src");
@@ -15651,7 +15651,7 @@ cmdline_parse_token_string_t cmd_set_l2_encap_vlan =
TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
"vlan-tci");
cmdline_parse_token_num_t cmd_set_l2_encap_vlan_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_l2_encap_result, tci, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_l2_encap_result, tci, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_l2_encap_eth_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_l2_encap_result, pos_token,
"eth-src");
@@ -15811,7 +15811,7 @@ cmdline_parse_token_string_t cmd_set_mplsogre_encap_label =
pos_token, "label");
cmdline_parse_token_num_t cmd_set_mplsogre_encap_label_value =
TOKEN_NUM_INITIALIZER(struct cmd_set_mplsogre_encap_result, label,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_set_mplsogre_encap_ip_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
pos_token, "ip-src");
@@ -15827,7 +15827,7 @@ cmdline_parse_token_string_t cmd_set_mplsogre_encap_vlan =
pos_token, "vlan-tci");
cmdline_parse_token_num_t cmd_set_mplsogre_encap_vlan_value =
TOKEN_NUM_INITIALIZER(struct cmd_set_mplsogre_encap_result, tci,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_mplsogre_encap_eth_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_mplsogre_encap_result,
pos_token, "eth-src");
@@ -16035,19 +16035,19 @@ cmdline_parse_token_string_t cmd_set_mplsoudp_encap_label =
pos_token, "label");
cmdline_parse_token_num_t cmd_set_mplsoudp_encap_label_value =
TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, label,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_set_mplsoudp_encap_udp_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
pos_token, "udp-src");
cmdline_parse_token_num_t cmd_set_mplsoudp_encap_udp_src_value =
TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, udp_src,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_mplsoudp_encap_udp_dst =
TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
pos_token, "udp-dst");
cmdline_parse_token_num_t cmd_set_mplsoudp_encap_udp_dst_value =
TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, udp_dst,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_mplsoudp_encap_ip_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
pos_token, "ip-src");
@@ -16063,7 +16063,7 @@ cmdline_parse_token_string_t cmd_set_mplsoudp_encap_vlan =
pos_token, "vlan-tci");
cmdline_parse_token_num_t cmd_set_mplsoudp_encap_vlan_value =
TOKEN_NUM_INITIALIZER(struct cmd_set_mplsoudp_encap_result, tci,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_set_mplsoudp_encap_eth_src =
TOKEN_STRING_INITIALIZER(struct cmd_set_mplsoudp_encap_result,
pos_token, "eth-src");
@@ -16306,7 +16306,7 @@ cmdline_parse_token_string_t cmd_ddp_add_ddp =
cmdline_parse_token_string_t cmd_ddp_add_add =
TOKEN_STRING_INITIALIZER(struct cmd_ddp_add_result, add, "add");
cmdline_parse_token_num_t cmd_ddp_add_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_ddp_add_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_ddp_add_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_ddp_add_filepath =
TOKEN_STRING_INITIALIZER(struct cmd_ddp_add_result, filepath, NULL);
@@ -16386,7 +16386,7 @@ cmdline_parse_token_string_t cmd_ddp_del_ddp =
cmdline_parse_token_string_t cmd_ddp_del_del =
TOKEN_STRING_INITIALIZER(struct cmd_ddp_del_result, del, "del");
cmdline_parse_token_num_t cmd_ddp_del_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_ddp_del_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_ddp_del_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_ddp_del_filepath =
TOKEN_STRING_INITIALIZER(struct cmd_ddp_del_result, filepath, NULL);
@@ -16693,7 +16693,7 @@ cmdline_parse_token_string_t cmd_ddp_get_list_get =
cmdline_parse_token_string_t cmd_ddp_get_list_list =
TOKEN_STRING_INITIALIZER(struct cmd_ddp_get_list_result, list, "list");
cmdline_parse_token_num_t cmd_ddp_get_list_port_id =
- TOKEN_NUM_INITIALIZER(struct cmd_ddp_get_list_result, port_id, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cmd_ddp_get_list_result, port_id, CMDLINE_UINT16);
static void
cmd_ddp_get_list_parsed(
@@ -16842,13 +16842,13 @@ cmdline_parse_token_string_t cmd_cfg_input_set_cfg =
cfg, "config");
cmdline_parse_token_num_t cmd_cfg_input_set_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_cfg_input_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_cfg_input_set_pctype =
TOKEN_STRING_INITIALIZER(struct cmd_cfg_input_set_result,
pctype, "pctype");
cmdline_parse_token_num_t cmd_cfg_input_set_pctype_id =
TOKEN_NUM_INITIALIZER(struct cmd_cfg_input_set_result,
- pctype_id, UINT8);
+ pctype_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_cfg_input_set_inset_type =
TOKEN_STRING_INITIALIZER(struct cmd_cfg_input_set_result,
inset_type,
@@ -16861,7 +16861,7 @@ cmdline_parse_token_string_t cmd_cfg_input_set_field =
field, "field");
cmdline_parse_token_num_t cmd_cfg_input_set_field_idx =
TOKEN_NUM_INITIALIZER(struct cmd_cfg_input_set_result,
- field_idx, UINT8);
+ field_idx, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_cfg_input_set = {
.f = cmd_cfg_input_set_parsed,
@@ -16943,13 +16943,13 @@ cmdline_parse_token_string_t cmd_clear_input_set_cfg =
cfg, "config");
cmdline_parse_token_num_t cmd_clear_input_set_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_clear_input_set_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_clear_input_set_pctype =
TOKEN_STRING_INITIALIZER(struct cmd_clear_input_set_result,
pctype, "pctype");
cmdline_parse_token_num_t cmd_clear_input_set_pctype_id =
TOKEN_NUM_INITIALIZER(struct cmd_clear_input_set_result,
- pctype_id, UINT8);
+ pctype_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_clear_input_set_inset_type =
TOKEN_STRING_INITIALIZER(struct cmd_clear_input_set_result,
inset_type,
@@ -17006,11 +17006,11 @@ cmdline_parse_token_string_t cmd_show_vf_stats_stats =
cmdline_parse_token_num_t cmd_show_vf_stats_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_show_vf_stats_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_show_vf_stats_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_show_vf_stats_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
static void
cmd_show_vf_stats_parsed(
@@ -17115,11 +17115,11 @@ cmdline_parse_token_string_t cmd_clear_vf_stats_stats =
cmdline_parse_token_num_t cmd_clear_vf_stats_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_clear_vf_stats_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_clear_vf_stats_vf_id =
TOKEN_NUM_INITIALIZER
(struct cmd_clear_vf_stats_result,
- vf_id, UINT16);
+ vf_id, CMDLINE_UINT16);
static void
cmd_clear_vf_stats_parsed(
@@ -17199,7 +17199,7 @@ cmdline_parse_token_string_t cmd_pctype_mapping_reset_config =
cmdline_parse_token_num_t cmd_pctype_mapping_reset_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_pctype_mapping_reset_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_pctype_mapping_reset_pctype =
TOKEN_STRING_INITIALIZER
(struct cmd_pctype_mapping_reset_result,
@@ -17281,7 +17281,7 @@ cmdline_parse_token_string_t cmd_pctype_mapping_get_port =
cmdline_parse_token_num_t cmd_pctype_mapping_get_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_pctype_mapping_get_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_pctype_mapping_get_pctype =
TOKEN_STRING_INITIALIZER
(struct cmd_pctype_mapping_get_result,
@@ -17385,7 +17385,7 @@ cmdline_parse_token_string_t cmd_pctype_mapping_update_config =
cmdline_parse_token_num_t cmd_pctype_mapping_update_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_pctype_mapping_update_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_pctype_mapping_update_pctype =
TOKEN_STRING_INITIALIZER
(struct cmd_pctype_mapping_update_result,
@@ -17405,7 +17405,7 @@ cmdline_parse_token_string_t cmd_pctype_mapping_update_pc_type =
cmdline_parse_token_num_t cmd_pctype_mapping_update_flow_type =
TOKEN_NUM_INITIALIZER
(struct cmd_pctype_mapping_update_result,
- flow_type, UINT16);
+ flow_type, CMDLINE_UINT16);
static void
cmd_pctype_mapping_update_parsed(
@@ -17499,11 +17499,11 @@ cmdline_parse_token_string_t cmd_ptype_mapping_get_get =
cmdline_parse_token_num_t cmd_ptype_mapping_get_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_get_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_ptype_mapping_get_valid_only =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_get_result,
- valid_only, UINT8);
+ valid_only, CMDLINE_UINT8);
static void
cmd_ptype_mapping_get_parsed(
@@ -17596,19 +17596,19 @@ cmdline_parse_token_string_t cmd_ptype_mapping_replace_replace =
cmdline_parse_token_num_t cmd_ptype_mapping_replace_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_replace_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_ptype_mapping_replace_target =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_replace_result,
- target, UINT32);
+ target, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_ptype_mapping_replace_mask =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_replace_result,
- mask, UINT8);
+ mask, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_ptype_mapping_replace_pkt_type =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_replace_result,
- pkt_type, UINT32);
+ pkt_type, CMDLINE_UINT32);
static void
cmd_ptype_mapping_replace_parsed(
@@ -17690,7 +17690,7 @@ cmdline_parse_token_string_t cmd_ptype_mapping_reset_reset =
cmdline_parse_token_num_t cmd_ptype_mapping_reset_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_reset_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
static void
cmd_ptype_mapping_reset_parsed(
@@ -17763,15 +17763,15 @@ cmdline_parse_token_string_t cmd_ptype_mapping_update_update =
cmdline_parse_token_num_t cmd_ptype_mapping_update_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_update_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_ptype_mapping_update_hw_ptype =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_update_result,
- hw_ptype, UINT8);
+ hw_ptype, CMDLINE_UINT8);
cmdline_parse_token_num_t cmd_ptype_mapping_update_sw_ptype =
TOKEN_NUM_INITIALIZER
(struct cmd_ptype_mapping_update_result,
- sw_ptype, UINT32);
+ sw_ptype, CMDLINE_UINT32);
static void
cmd_ptype_mapping_update_parsed(
@@ -17882,7 +17882,7 @@ cmdline_parse_token_string_t cmd_rx_offload_get_capa_port =
cmdline_parse_token_num_t cmd_rx_offload_get_capa_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_rx_offload_get_capa_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_rx_offload_get_capa_rx_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_rx_offload_get_capa_result,
@@ -17979,7 +17979,7 @@ cmdline_parse_token_string_t cmd_rx_offload_get_configuration_port =
cmdline_parse_token_num_t cmd_rx_offload_get_configuration_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_rx_offload_get_configuration_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_rx_offload_get_configuration_rx_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_rx_offload_get_configuration_result,
@@ -18061,7 +18061,7 @@ cmdline_parse_token_string_t cmd_config_per_port_rx_offload_result_config =
cmdline_parse_token_num_t cmd_config_per_port_rx_offload_result_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_per_port_rx_offload_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_per_port_rx_offload_result_rx_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_port_rx_offload_result,
@@ -18183,7 +18183,7 @@ cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_port =
cmdline_parse_token_num_t cmd_config_per_queue_rx_offload_result_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_per_queue_rx_offload_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_rxq =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_queue_rx_offload_result,
@@ -18191,7 +18191,7 @@ cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_rxq =
cmdline_parse_token_num_t cmd_config_per_queue_rx_offload_result_queue_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_per_queue_rx_offload_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_per_queue_rx_offload_result_rxoffload =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_queue_rx_offload_result,
@@ -18292,7 +18292,7 @@ cmdline_parse_token_string_t cmd_tx_offload_get_capa_port =
cmdline_parse_token_num_t cmd_tx_offload_get_capa_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_tx_offload_get_capa_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_tx_offload_get_capa_tx_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_tx_offload_get_capa_result,
@@ -18389,7 +18389,7 @@ cmdline_parse_token_string_t cmd_tx_offload_get_configuration_port =
cmdline_parse_token_num_t cmd_tx_offload_get_configuration_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_tx_offload_get_configuration_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_tx_offload_get_configuration_tx_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_tx_offload_get_configuration_result,
@@ -18471,7 +18471,7 @@ cmdline_parse_token_string_t cmd_config_per_port_tx_offload_result_config =
cmdline_parse_token_num_t cmd_config_per_port_tx_offload_result_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_per_port_tx_offload_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_per_port_tx_offload_result_tx_offload =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_port_tx_offload_result,
@@ -18600,7 +18600,7 @@ cmdline_parse_token_string_t cmd_config_per_queue_tx_offload_result_port =
cmdline_parse_token_num_t cmd_config_per_queue_tx_offload_result_port_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_per_queue_tx_offload_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_per_queue_tx_offload_result_txq =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_queue_tx_offload_result,
@@ -18608,7 +18608,7 @@ cmdline_parse_token_string_t cmd_config_per_queue_tx_offload_result_txq =
cmdline_parse_token_num_t cmd_config_per_queue_tx_offload_result_queue_id =
TOKEN_NUM_INITIALIZER
(struct cmd_config_per_queue_tx_offload_result,
- queue_id, UINT16);
+ queue_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_per_queue_tx_offload_result_txoffload =
TOKEN_STRING_INITIALIZER
(struct cmd_config_per_queue_tx_offload_result,
@@ -18725,13 +18725,13 @@ cmdline_parse_token_string_t cmd_config_tx_metadata_specific_keyword =
keyword, "config");
cmdline_parse_token_num_t cmd_config_tx_metadata_specific_id =
TOKEN_NUM_INITIALIZER(struct cmd_config_tx_metadata_specific_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_config_tx_metadata_specific_item =
TOKEN_STRING_INITIALIZER(struct cmd_config_tx_metadata_specific_result,
item, "tx_metadata");
cmdline_parse_token_num_t cmd_config_tx_metadata_specific_value =
TOKEN_NUM_INITIALIZER(struct cmd_config_tx_metadata_specific_result,
- value, UINT32);
+ value, CMDLINE_UINT32);
cmdline_parse_inst_t cmd_config_tx_metadata_specific = {
.f = cmd_config_tx_metadata_specific_parsed,
@@ -18780,7 +18780,7 @@ cmdline_parse_token_string_t cmd_show_tx_metadata_port =
cmd_port, "port");
cmdline_parse_token_num_t cmd_show_tx_metadata_pid =
TOKEN_NUM_INITIALIZER(struct cmd_show_tx_metadata_result,
- cmd_pid, UINT16);
+ cmd_pid, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_show_tx_metadata_keyword =
TOKEN_STRING_INITIALIZER(struct cmd_show_tx_metadata_result,
cmd_keyword, "tx_metadata");
diff --git a/app/test-pmd/cmdline_mtr.c b/app/test-pmd/cmdline_mtr.c
index ab5c8642dba3..b21e1a633ecb 100644
--- a/app/test-pmd/cmdline_mtr.c
+++ b/app/test-pmd/cmdline_mtr.c
@@ -253,7 +253,7 @@ cmdline_parse_token_string_t cmd_show_port_meter_cap_cap =
struct cmd_show_port_meter_cap_result, cap, "cap");
cmdline_parse_token_num_t cmd_show_port_meter_cap_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_show_port_meter_cap_result, port_id, UINT16);
+ struct cmd_show_port_meter_cap_result, port_id, CMDLINE_UINT16);
static void cmd_show_port_meter_cap_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -359,23 +359,23 @@ cmdline_parse_token_string_t cmd_add_port_meter_profile_srtcm_srtcm_rfc2697 =
cmdline_parse_token_num_t cmd_add_port_meter_profile_srtcm_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_srtcm_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_meter_profile_srtcm_profile_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_srtcm_result,
- profile_id, UINT32);
+ profile_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_meter_profile_srtcm_cir =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_srtcm_result,
- cir, UINT64);
+ cir, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_meter_profile_srtcm_cbs =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_srtcm_result,
- cbs, UINT64);
+ cbs, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_meter_profile_srtcm_ebs =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_srtcm_result,
- ebs, UINT64);
+ ebs, CMDLINE_UINT64);
static void cmd_add_port_meter_profile_srtcm_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -461,27 +461,27 @@ cmdline_parse_token_string_t cmd_add_port_meter_profile_trtcm_trtcm_rfc2698 =
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_profile_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_result,
- profile_id, UINT32);
+ profile_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_cir =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_result,
- cir, UINT64);
+ cir, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_pir =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_result,
- pir, UINT64);
+ pir, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_cbs =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_result,
- cbs, UINT64);
+ cbs, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_pbs =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_result,
- pbs, UINT64);
+ pbs, CMDLINE_UINT64);
static void cmd_add_port_meter_profile_trtcm_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -571,27 +571,27 @@ cmdline_parse_token_string_t
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_rfc4115_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_rfc4115_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_rfc4115_profile_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_rfc4115_result,
- profile_id, UINT32);
+ profile_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_rfc4115_cir =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_rfc4115_result,
- cir, UINT64);
+ cir, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_rfc4115_eir =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_rfc4115_result,
- eir, UINT64);
+ eir, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_rfc4115_cbs =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_rfc4115_result,
- cbs, UINT64);
+ cbs, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_meter_profile_trtcm_rfc4115_ebs =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_meter_profile_trtcm_rfc4115_result,
- ebs, UINT64);
+ ebs, CMDLINE_UINT64);
static void cmd_add_port_meter_profile_trtcm_rfc4115_parsed(
void *parsed_result,
@@ -672,11 +672,11 @@ cmdline_parse_token_string_t cmd_del_port_meter_profile_profile =
cmdline_parse_token_num_t cmd_del_port_meter_profile_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_del_port_meter_profile_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_del_port_meter_profile_profile_id =
TOKEN_NUM_INITIALIZER(
struct cmd_del_port_meter_profile_result,
- profile_id, UINT32);
+ profile_id, CMDLINE_UINT32);
static void cmd_del_port_meter_profile_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -742,13 +742,13 @@ cmdline_parse_token_string_t cmd_create_port_meter_meter =
struct cmd_create_port_meter_result, meter, "meter");
cmdline_parse_token_num_t cmd_create_port_meter_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_create_port_meter_result, port_id, UINT16);
+ struct cmd_create_port_meter_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_create_port_meter_mtr_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_create_port_meter_result, mtr_id, UINT32);
+ struct cmd_create_port_meter_result, mtr_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_create_port_meter_profile_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_create_port_meter_result, profile_id, UINT32);
+ struct cmd_create_port_meter_result, profile_id, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_create_port_meter_meter_enable =
TOKEN_STRING_INITIALIZER(struct cmd_create_port_meter_result,
meter_enable, "yes#no");
@@ -763,10 +763,10 @@ cmdline_parse_token_string_t cmd_create_port_meter_r_action =
r_action, "R#Y#G#D#r#y#g#d");
cmdline_parse_token_num_t cmd_create_port_meter_statistics_mask =
TOKEN_NUM_INITIALIZER(struct cmd_create_port_meter_result,
- statistics_mask, UINT64);
+ statistics_mask, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_create_port_meter_shared =
TOKEN_NUM_INITIALIZER(struct cmd_create_port_meter_result,
- shared, UINT32);
+ shared, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_create_port_meter_input_color =
TOKEN_STRING_INITIALIZER(struct cmd_create_port_meter_result,
meter_input_color, TOKEN_STRING_MULTI);
@@ -866,10 +866,10 @@ cmdline_parse_token_string_t cmd_enable_port_meter_meter =
struct cmd_enable_port_meter_result, meter, "meter");
cmdline_parse_token_num_t cmd_enable_port_meter_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_enable_port_meter_result, port_id, UINT16);
+ struct cmd_enable_port_meter_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_enable_port_meter_mtr_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_enable_port_meter_result, mtr_id, UINT32);
+ struct cmd_enable_port_meter_result, mtr_id, CMDLINE_UINT32);
static void cmd_enable_port_meter_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -927,10 +927,10 @@ cmdline_parse_token_string_t cmd_disable_port_meter_meter =
struct cmd_disable_port_meter_result, meter, "meter");
cmdline_parse_token_num_t cmd_disable_port_meter_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_disable_port_meter_result, port_id, UINT16);
+ struct cmd_disable_port_meter_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_disable_port_meter_mtr_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_disable_port_meter_result, mtr_id, UINT32);
+ struct cmd_disable_port_meter_result, mtr_id, CMDLINE_UINT32);
static void cmd_disable_port_meter_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -988,10 +988,10 @@ cmdline_parse_token_string_t cmd_del_port_meter_meter =
struct cmd_del_port_meter_result, meter, "meter");
cmdline_parse_token_num_t cmd_del_port_meter_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_del_port_meter_result, port_id, UINT16);
+ struct cmd_del_port_meter_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_del_port_meter_mtr_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_del_port_meter_result, mtr_id, UINT32);
+ struct cmd_del_port_meter_result, mtr_id, CMDLINE_UINT32);
static void cmd_del_port_meter_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1054,13 +1054,13 @@ cmdline_parse_token_string_t cmd_set_port_meter_profile_profile =
struct cmd_set_port_meter_profile_result, profile, "profile");
cmdline_parse_token_num_t cmd_set_port_meter_profile_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_set_port_meter_profile_result, port_id, UINT16);
+ struct cmd_set_port_meter_profile_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_port_meter_profile_mtr_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_set_port_meter_profile_result, mtr_id, UINT32);
+ struct cmd_set_port_meter_profile_result, mtr_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_set_port_meter_profile_profile_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_set_port_meter_profile_result, profile_id, UINT32);
+ struct cmd_set_port_meter_profile_result, profile_id, CMDLINE_UINT32);
static void cmd_set_port_meter_profile_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1208,15 +1208,15 @@ cmdline_parse_token_string_t cmd_set_port_meter_policer_action_action =
cmdline_parse_token_num_t cmd_set_port_meter_policer_action_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_set_port_meter_policer_action_result, port_id,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_port_meter_policer_action_mtr_id =
TOKEN_NUM_INITIALIZER(
struct cmd_set_port_meter_policer_action_result, mtr_id,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_set_port_meter_policer_action_action_mask =
TOKEN_NUM_INITIALIZER(
struct cmd_set_port_meter_policer_action_result, action_mask,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_set_port_meter_policer_action_policer_action =
TOKEN_STRING_INITIALIZER(
struct cmd_set_port_meter_policer_action_result,
@@ -1316,14 +1316,14 @@ cmdline_parse_token_string_t cmd_set_port_meter_stats_mask_mask =
struct cmd_set_port_meter_stats_mask_result, mask, "mask");
cmdline_parse_token_num_t cmd_set_port_meter_stats_mask_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_set_port_meter_stats_mask_result, port_id, UINT16);
+ struct cmd_set_port_meter_stats_mask_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_port_meter_stats_mask_mtr_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_set_port_meter_stats_mask_result, mtr_id, UINT32);
+ struct cmd_set_port_meter_stats_mask_result, mtr_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_set_port_meter_stats_mask_stats_mask =
TOKEN_NUM_INITIALIZER(
struct cmd_set_port_meter_stats_mask_result, stats_mask,
- UINT64);
+ CMDLINE_UINT64);
static void cmd_set_port_meter_stats_mask_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1388,10 +1388,10 @@ cmdline_parse_token_string_t cmd_show_port_meter_stats_stats =
struct cmd_show_port_meter_stats_result, stats, "stats");
cmdline_parse_token_num_t cmd_show_port_meter_stats_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_show_port_meter_stats_result, port_id, UINT16);
+ struct cmd_show_port_meter_stats_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_show_port_meter_stats_mtr_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_show_port_meter_stats_result, mtr_id, UINT32);
+ struct cmd_show_port_meter_stats_result, mtr_id, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_show_port_meter_stats_clear =
TOKEN_STRING_INITIALIZER(
struct cmd_show_port_meter_stats_result, clear, "yes#no");
diff --git a/app/test-pmd/cmdline_tm.c b/app/test-pmd/cmdline_tm.c
index d62a4f54439f..506fb9055be0 100644
--- a/app/test-pmd/cmdline_tm.c
+++ b/app/test-pmd/cmdline_tm.c
@@ -217,7 +217,7 @@ cmdline_parse_token_string_t cmd_show_port_tm_cap_cap =
cap, "cap");
cmdline_parse_token_num_t cmd_show_port_tm_cap_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_show_port_tm_cap_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
static void cmd_show_port_tm_cap_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -354,10 +354,10 @@ cmdline_parse_token_string_t cmd_show_port_tm_level_cap_cap =
cap, "cap");
cmdline_parse_token_num_t cmd_show_port_tm_level_cap_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_show_port_tm_level_cap_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_show_port_tm_level_cap_level_id =
TOKEN_NUM_INITIALIZER(struct cmd_show_port_tm_level_cap_result,
- level_id, UINT32);
+ level_id, CMDLINE_UINT32);
static void cmd_show_port_tm_level_cap_parsed(void *parsed_result,
@@ -481,10 +481,10 @@ cmdline_parse_token_string_t cmd_show_port_tm_node_cap_cap =
cap, "cap");
cmdline_parse_token_num_t cmd_show_port_tm_node_cap_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_show_port_tm_node_cap_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_show_port_tm_node_cap_node_id =
TOKEN_NUM_INITIALIZER(struct cmd_show_port_tm_node_cap_result,
- node_id, UINT32);
+ node_id, CMDLINE_UINT32);
static void cmd_show_port_tm_node_cap_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -593,14 +593,14 @@ cmdline_parse_token_string_t cmd_show_port_tm_node_stats_stats =
struct cmd_show_port_tm_node_stats_result, stats, "stats");
cmdline_parse_token_num_t cmd_show_port_tm_node_stats_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_show_port_tm_node_stats_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_show_port_tm_node_stats_node_id =
TOKEN_NUM_INITIALIZER(
struct cmd_show_port_tm_node_stats_result,
- node_id, UINT32);
+ node_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_show_port_tm_node_stats_clear =
TOKEN_NUM_INITIALIZER(
- struct cmd_show_port_tm_node_stats_result, clear, UINT32);
+ struct cmd_show_port_tm_node_stats_result, clear, CMDLINE_UINT32);
static void cmd_show_port_tm_node_stats_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -712,11 +712,11 @@ cmdline_parse_token_string_t cmd_show_port_tm_node_type_type =
cmdline_parse_token_num_t cmd_show_port_tm_node_type_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_show_port_tm_node_type_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_show_port_tm_node_type_node_id =
TOKEN_NUM_INITIALIZER(
struct cmd_show_port_tm_node_type_result,
- node_id, UINT32);
+ node_id, CMDLINE_UINT32);
static void cmd_show_port_tm_node_type_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -804,31 +804,31 @@ cmdline_parse_token_string_t cmd_add_port_tm_node_shaper_profile_profile =
cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shaper_profile_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_shaper_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shaper_profile_result,
- shaper_id, UINT32);
+ shaper_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_cmit_tb_rate =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shaper_profile_result,
- cmit_tb_rate, UINT64);
+ cmit_tb_rate, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_cmit_tb_size =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shaper_profile_result,
- cmit_tb_size, UINT64);
+ cmit_tb_size, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_peak_tb_rate =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shaper_profile_result,
- peak_tb_rate, UINT64);
+ peak_tb_rate, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_peak_tb_size =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shaper_profile_result,
- peak_tb_size, UINT64);
+ peak_tb_size, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_shaper_profile_pktlen_adjust =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shaper_profile_result,
- pktlen_adjust, UINT32);
+ pktlen_adjust, CMDLINE_UINT32);
static void cmd_add_port_tm_node_shaper_profile_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -920,11 +920,11 @@ cmdline_parse_token_string_t cmd_del_port_tm_node_shaper_profile_profile =
cmdline_parse_token_num_t cmd_del_port_tm_node_shaper_profile_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_del_port_tm_node_shaper_profile_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_del_port_tm_node_shaper_profile_shaper_id =
TOKEN_NUM_INITIALIZER(
struct cmd_del_port_tm_node_shaper_profile_result,
- shaper_id, UINT32);
+ shaper_id, CMDLINE_UINT32);
static void cmd_del_port_tm_node_shaper_profile_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1001,15 +1001,15 @@ cmdline_parse_token_string_t cmd_add_port_tm_node_shared_shaper_shaper =
cmdline_parse_token_num_t cmd_add_port_tm_node_shared_shaper_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shared_shaper_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_tm_node_shared_shaper_shared_shaper_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shared_shaper_result,
- shared_shaper_id, UINT32);
+ shared_shaper_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_node_shared_shaper_shaper_profile_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_shared_shaper_result,
- shaper_profile_id, UINT32);
+ shaper_profile_id, CMDLINE_UINT32);
static void cmd_add_port_tm_node_shared_shaper_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1101,11 +1101,11 @@ cmdline_parse_token_string_t cmd_del_port_tm_node_shared_shaper_shaper =
cmdline_parse_token_num_t cmd_del_port_tm_node_shared_shaper_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_del_port_tm_node_shared_shaper_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_del_port_tm_node_shared_shaper_shared_shaper_id =
TOKEN_NUM_INITIALIZER(
struct cmd_del_port_tm_node_shared_shaper_result,
- shared_shaper_id, UINT32);
+ shared_shaper_id, CMDLINE_UINT32);
static void cmd_del_port_tm_node_shared_shaper_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1194,11 +1194,11 @@ cmdline_parse_token_string_t cmd_add_port_tm_node_wred_profile_profile =
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_wred_profile_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- wred_profile_id, UINT32);
+ wred_profile_id, CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_add_port_tm_node_wred_profile_color_g =
TOKEN_STRING_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
@@ -1206,19 +1206,19 @@ cmdline_parse_token_string_t cmd_add_port_tm_node_wred_profile_color_g =
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_min_th_g =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- min_th_g, UINT64);
+ min_th_g, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_max_th_g =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- max_th_g, UINT64);
+ max_th_g, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_maxp_inv_g =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- maxp_inv_g, UINT16);
+ maxp_inv_g, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_wq_log2_g =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- wq_log2_g, UINT16);
+ wq_log2_g, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_add_port_tm_node_wred_profile_color_y =
TOKEN_STRING_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
@@ -1226,19 +1226,19 @@ cmdline_parse_token_string_t cmd_add_port_tm_node_wred_profile_color_y =
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_min_th_y =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- min_th_y, UINT64);
+ min_th_y, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_max_th_y =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- max_th_y, UINT64);
+ max_th_y, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_maxp_inv_y =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- maxp_inv_y, UINT16);
+ maxp_inv_y, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_wq_log2_y =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- wq_log2_y, UINT16);
+ wq_log2_y, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_add_port_tm_node_wred_profile_color_r =
TOKEN_STRING_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
@@ -1246,19 +1246,19 @@ cmdline_parse_token_string_t cmd_add_port_tm_node_wred_profile_color_r =
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_min_th_r =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- min_th_r, UINT64);
+ min_th_r, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_max_th_r =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- max_th_r, UINT64);
+ max_th_r, CMDLINE_UINT64);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_maxp_inv_r =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- maxp_inv_r, UINT16);
+ maxp_inv_r, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_tm_node_wred_profile_wq_log2_r =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_node_wred_profile_result,
- wq_log2_r, UINT16);
+ wq_log2_r, CMDLINE_UINT16);
static void cmd_add_port_tm_node_wred_profile_parsed(void *parsed_result,
@@ -1374,11 +1374,11 @@ cmdline_parse_token_string_t cmd_del_port_tm_node_wred_profile_profile =
cmdline_parse_token_num_t cmd_del_port_tm_node_wred_profile_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_del_port_tm_node_wred_profile_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_del_port_tm_node_wred_profile_wred_profile_id =
TOKEN_NUM_INITIALIZER(
struct cmd_del_port_tm_node_wred_profile_result,
- wred_profile_id, UINT32);
+ wred_profile_id, CMDLINE_UINT32);
static void cmd_del_port_tm_node_wred_profile_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1456,15 +1456,15 @@ cmdline_parse_token_string_t cmd_set_port_tm_node_shaper_profile_profile =
cmdline_parse_token_num_t cmd_set_port_tm_node_shaper_profile_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_set_port_tm_node_shaper_profile_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_port_tm_node_shaper_profile_node_id =
TOKEN_NUM_INITIALIZER(struct cmd_set_port_tm_node_shaper_profile_result,
- node_id, UINT32);
+ node_id, CMDLINE_UINT32);
cmdline_parse_token_num_t
cmd_set_port_tm_node_shaper_shaper_profile_profile_id =
TOKEN_NUM_INITIALIZER(
struct cmd_set_port_tm_node_shaper_profile_result,
- shaper_profile_id, UINT32);
+ shaper_profile_id, CMDLINE_UINT32);
static void cmd_set_port_tm_node_shaper_profile_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1550,31 +1550,31 @@ cmdline_parse_token_string_t cmd_add_port_tm_nonleaf_node_node =
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_add_port_tm_nonleaf_node_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_node_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
- node_id, UINT32);
+ node_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_parent_node_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
- parent_node_id, INT32);
+ parent_node_id, CMDLINE_INT32);
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_priority =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
- priority, UINT32);
+ priority, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_weight =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
- weight, UINT32);
+ weight, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_level_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
- level_id, UINT32);
+ level_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_shaper_profile_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
- shaper_profile_id, INT32);
+ shaper_profile_id, CMDLINE_INT32);
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_n_sp_priorities =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
- n_sp_priorities, UINT32);
+ n_sp_priorities, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_nonleaf_node_stats_mask =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
- stats_mask, UINT64);
+ stats_mask, CMDLINE_UINT64);
cmdline_parse_token_string_t
cmd_add_port_tm_nonleaf_node_multi_shared_shaper_id =
TOKEN_STRING_INITIALIZER(struct cmd_add_port_tm_nonleaf_node_result,
@@ -1708,34 +1708,34 @@ cmdline_parse_token_string_t cmd_add_port_tm_leaf_node_node =
struct cmd_add_port_tm_leaf_node_result, node, "node");
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_node_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- node_id, UINT32);
+ node_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_parent_node_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- parent_node_id, INT32);
+ parent_node_id, CMDLINE_INT32);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_priority =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- priority, UINT32);
+ priority, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_weight =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- weight, UINT32);
+ weight, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_level_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- level_id, UINT32);
+ level_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_shaper_profile_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- shaper_profile_id, INT32);
+ shaper_profile_id, CMDLINE_INT32);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_cman_mode =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- cman_mode, UINT32);
+ cman_mode, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_wred_profile_id =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- wred_profile_id, UINT32);
+ wred_profile_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_add_port_tm_leaf_node_stats_mask =
TOKEN_NUM_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
- stats_mask, UINT64);
+ stats_mask, CMDLINE_UINT64);
cmdline_parse_token_string_t
cmd_add_port_tm_leaf_node_multi_shared_shaper_id =
TOKEN_STRING_INITIALIZER(struct cmd_add_port_tm_leaf_node_result,
@@ -1858,10 +1858,10 @@ cmdline_parse_token_string_t cmd_del_port_tm_node_node =
struct cmd_del_port_tm_node_result, node, "node");
cmdline_parse_token_num_t cmd_del_port_tm_node_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_del_port_tm_node_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_del_port_tm_node_node_id =
TOKEN_NUM_INITIALIZER(struct cmd_del_port_tm_node_result,
- node_id, UINT32);
+ node_id, CMDLINE_UINT32);
static void cmd_del_port_tm_node_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -1936,19 +1936,19 @@ cmdline_parse_token_string_t cmd_set_port_tm_node_parent_parent =
struct cmd_set_port_tm_node_parent_result, parent, "parent");
cmdline_parse_token_num_t cmd_set_port_tm_node_parent_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_set_port_tm_node_parent_result, port_id, UINT16);
+ struct cmd_set_port_tm_node_parent_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_set_port_tm_node_parent_node_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_set_port_tm_node_parent_result, node_id, UINT32);
+ struct cmd_set_port_tm_node_parent_result, node_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_set_port_tm_node_parent_parent_id =
TOKEN_NUM_INITIALIZER(struct cmd_set_port_tm_node_parent_result,
- parent_id, UINT32);
+ parent_id, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_set_port_tm_node_parent_priority =
TOKEN_NUM_INITIALIZER(struct cmd_set_port_tm_node_parent_result,
- priority, UINT32);
+ priority, CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_set_port_tm_node_parent_weight =
TOKEN_NUM_INITIALIZER(struct cmd_set_port_tm_node_parent_result,
- weight, UINT32);
+ weight, CMDLINE_UINT32);
static void cmd_set_port_tm_node_parent_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -2024,10 +2024,10 @@ cmdline_parse_token_string_t cmd_suspend_port_tm_node_node =
struct cmd_suspend_port_tm_node_result, node, "node");
cmdline_parse_token_num_t cmd_suspend_port_tm_node_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_suspend_port_tm_node_result, port_id, UINT16);
+ struct cmd_suspend_port_tm_node_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_suspend_port_tm_node_node_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_suspend_port_tm_node_result, node_id, UINT32);
+ struct cmd_suspend_port_tm_node_result, node_id, CMDLINE_UINT32);
static void cmd_suspend_port_tm_node_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -2089,10 +2089,10 @@ cmdline_parse_token_string_t cmd_resume_port_tm_node_node =
struct cmd_resume_port_tm_node_result, node, "node");
cmdline_parse_token_num_t cmd_resume_port_tm_node_port_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_resume_port_tm_node_result, port_id, UINT16);
+ struct cmd_resume_port_tm_node_result, port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_resume_port_tm_node_node_id =
TOKEN_NUM_INITIALIZER(
- struct cmd_resume_port_tm_node_result, node_id, UINT32);
+ struct cmd_resume_port_tm_node_result, node_id, CMDLINE_UINT32);
static void cmd_resume_port_tm_node_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -2156,7 +2156,7 @@ cmdline_parse_token_string_t cmd_port_tm_hierarchy_commit_commit =
cmdline_parse_token_num_t cmd_port_tm_hierarchy_commit_port_id =
TOKEN_NUM_INITIALIZER(
struct cmd_port_tm_hierarchy_commit_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_port_tm_hierarchy_commit_clean_on_fail =
TOKEN_STRING_INITIALIZER(struct cmd_port_tm_hierarchy_commit_result,
clean_on_fail, "yes#no");
@@ -2236,17 +2236,17 @@ cmdline_parse_token_string_t cmd_port_tm_mark_ip_ecn_ip_ecn =
ip_ecn, "ip_ecn");
cmdline_parse_token_num_t cmd_port_tm_mark_ip_ecn_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_ip_ecn_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_ip_ecn_green =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_ip_ecn_result,
- green, UINT16);
+ green, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_ip_ecn_yellow =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_ip_ecn_result,
- yellow, UINT16);
+ yellow, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_ip_ecn_red =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_ip_ecn_result,
- red, UINT16);
+ red, CMDLINE_UINT16);
static void cmd_port_tm_mark_ip_ecn_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -2323,17 +2323,17 @@ cmdline_parse_token_string_t cmd_port_tm_mark_ip_dscp_ip_dscp =
ip_dscp, "ip_dscp");
cmdline_parse_token_num_t cmd_port_tm_mark_ip_dscp_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_ip_dscp_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_ip_dscp_green =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_ip_dscp_result,
- green, UINT16);
+ green, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_ip_dscp_yellow =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_ip_dscp_result,
- yellow, UINT16);
+ yellow, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_ip_dscp_red =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_ip_dscp_result,
- red, UINT16);
+ red, CMDLINE_UINT16);
static void cmd_port_tm_mark_ip_dscp_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
@@ -2410,17 +2410,17 @@ cmdline_parse_token_string_t cmd_port_tm_mark_vlan_dei_vlan_dei =
vlan_dei, "vlan_dei");
cmdline_parse_token_num_t cmd_port_tm_mark_vlan_dei_port_id =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_vlan_dei_result,
- port_id, UINT16);
+ port_id, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_vlan_dei_green =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_vlan_dei_result,
- green, UINT16);
+ green, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_vlan_dei_yellow =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_vlan_dei_result,
- yellow, UINT16);
+ yellow, CMDLINE_UINT16);
cmdline_parse_token_num_t cmd_port_tm_mark_vlan_dei_red =
TOKEN_NUM_INITIALIZER(struct cmd_port_tm_mark_vlan_dei_result,
- red, UINT16);
+ red, CMDLINE_UINT16);
static void cmd_port_tm_mark_vlan_dei_parsed(void *parsed_result,
__attribute__((unused)) struct cmdline *cl,
diff --git a/app/test/test_cmdline_num.c b/app/test/test_cmdline_num.c
index 4c97caf3d0bf..9e76dadf5d92 100644
--- a/app/test/test_cmdline_num.c
+++ b/app/test/test_cmdline_num.c
@@ -233,31 +233,31 @@ static int
can_parse_unsigned(uint64_t expected_result, enum cmdline_numtype type)
{
switch (type) {
- case UINT8:
+ case CMDLINE_UINT8:
if (expected_result > UINT8_MAX)
return 0;
break;
- case UINT16:
+ case CMDLINE_UINT16:
if (expected_result > UINT16_MAX)
return 0;
break;
- case UINT32:
+ case CMDLINE_UINT32:
if (expected_result > UINT32_MAX)
return 0;
break;
- case INT8:
+ case CMDLINE_INT8:
if (expected_result > INT8_MAX)
return 0;
break;
- case INT16:
+ case CMDLINE_INT16:
if (expected_result > INT16_MAX)
return 0;
break;
- case INT32:
+ case CMDLINE_INT32:
if (expected_result > INT32_MAX)
return 0;
break;
- case INT64:
+ case CMDLINE_INT64:
if (expected_result > INT64_MAX)
return 0;
break;
@@ -271,31 +271,31 @@ static int
can_parse_signed(int64_t expected_result, enum cmdline_numtype type)
{
switch (type) {
- case UINT8:
+ case CMDLINE_UINT8:
if (expected_result > UINT8_MAX || expected_result < 0)
return 0;
break;
- case UINT16:
+ case CMDLINE_UINT16:
if (expected_result > UINT16_MAX || expected_result < 0)
return 0;
break;
- case UINT32:
+ case CMDLINE_UINT32:
if (expected_result > UINT32_MAX || expected_result < 0)
return 0;
break;
- case UINT64:
+ case CMDLINE_UINT64:
if (expected_result < 0)
return 0;
break;
- case INT8:
+ case CMDLINE_INT8:
if (expected_result > INT8_MAX || expected_result < INT8_MIN)
return 0;
break;
- case INT16:
+ case CMDLINE_INT16:
if (expected_result > INT16_MAX || expected_result < INT16_MIN)
return 0;
break;
- case INT32:
+ case CMDLINE_INT32:
if (expected_result > INT32_MAX || expected_result < INT32_MIN)
return 0;
break;
@@ -315,7 +315,7 @@ test_parse_num_invalid_param(void)
int ret = 0;
/* set up a token */
- token.num_data.type = UINT32;
+ token.num_data.type = CMDLINE_UINT32;
/* copy string to buffer */
strlcpy(buf, num_valid_positive_strs[0].str, sizeof(buf));
@@ -388,7 +388,7 @@ test_parse_num_invalid_data(void)
cmdline_parse_token_num_t token;
/* cycle through all possible parsed types */
- for (type = UINT8; type <= INT64; type++) {
+ for (type = CMDLINE_UINT8; type <= CMDLINE_INT64; type++) {
token.num_data.type = type;
/* test full strings */
@@ -427,7 +427,7 @@ test_parse_num_valid(void)
/** valid strings **/
/* cycle through all possible parsed types */
- for (type = UINT8; type <= INT64; type++) {
+ for (type = CMDLINE_UINT8; type <= CMDLINE_INT64; type++) {
token.num_data.type = type;
/* test positive strings */
@@ -481,13 +481,13 @@ test_parse_num_valid(void)
if (ret > 0) {
/* detect negative */
switch (type) {
- case INT8:
+ case CMDLINE_INT8:
result = (int8_t) result;
break;
- case INT16:
+ case CMDLINE_INT16:
result = (int16_t) result;
break;
- case INT32:
+ case CMDLINE_INT32:
result = (int32_t) result;
break;
default:
@@ -505,7 +505,7 @@ test_parse_num_valid(void)
/** garbage strings **/
/* cycle through all possible parsed types */
- for (type = UINT8; type <= INT64; type++) {
+ for (type = CMDLINE_UINT8; type <= CMDLINE_INT64; type++) {
token.num_data.type = type;
/* test positive garbage strings */
@@ -559,15 +559,15 @@ test_parse_num_valid(void)
if (ret > 0) {
/* detect negative */
switch (type) {
- case INT8:
+ case CMDLINE_INT8:
if (result & (INT8_MAX + 1))
result |= 0xFFFFFFFFFFFFFF00ULL;
break;
- case INT16:
+ case CMDLINE_INT16:
if (result & (INT16_MAX + 1))
result |= 0xFFFFFFFFFFFF0000ULL;
break;
- case INT32:
+ case CMDLINE_INT32:
if (result & (INT32_MAX + 1ULL))
result |= 0xFFFFFFFF00000000ULL;
break;
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index f59a283074f6..e4377d42af44 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -228,6 +228,9 @@ API Changes
has been introduced in this release is used when all the packets
enqueued in the tx adapter are destined for the same Ethernet port & Tx queue.
+* cmdline: the cmdline_numtype enum values have been prefixed
+ by ``CMDLINE_`` to avoid conflicts with user code.
+
ABI Changes
-----------
diff --git a/examples/ethtool/ethtool-app/ethapp.c b/examples/ethtool/ethtool-app/ethapp.c
index b6b967118e4c..0e8b2315c467 100644
--- a/examples/ethtool/ethtool-app/ethapp.c
+++ b/examples/ethtool/ethtool-app/ethapp.c
@@ -70,7 +70,7 @@ cmdline_parse_token_string_t pcmd_rxmode_token_cmd =
cmdline_parse_token_string_t pcmd_portstats_token_cmd =
TOKEN_STRING_INITIALIZER(struct pcmd_int_params, cmd, "portstats");
cmdline_parse_token_num_t pcmd_int_token_port =
- TOKEN_NUM_INITIALIZER(struct pcmd_int_params, port, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_int_params, port, CMDLINE_UINT16);
/* Commands taking port id and string */
cmdline_parse_token_string_t pcmd_eeprom_token_cmd =
@@ -84,7 +84,7 @@ cmdline_parse_token_string_t pcmd_regs_token_cmd =
TOKEN_STRING_INITIALIZER(struct pcmd_intstr_params, cmd, "regs");
cmdline_parse_token_num_t pcmd_intstr_token_port =
- TOKEN_NUM_INITIALIZER(struct pcmd_intstr_params, port, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_intstr_params, port, CMDLINE_UINT16);
cmdline_parse_token_string_t pcmd_intstr_token_opt =
TOKEN_STRING_INITIALIZER(struct pcmd_intstr_params, opt, NULL);
@@ -92,7 +92,7 @@ cmdline_parse_token_string_t pcmd_intstr_token_opt =
cmdline_parse_token_string_t pcmd_macaddr_token_cmd =
TOKEN_STRING_INITIALIZER(struct pcmd_intmac_params, cmd, "macaddr");
cmdline_parse_token_num_t pcmd_intmac_token_port =
- TOKEN_NUM_INITIALIZER(struct pcmd_intmac_params, port, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_intmac_params, port, CMDLINE_UINT16);
cmdline_parse_token_etheraddr_t pcmd_intmac_token_mac =
TOKEN_ETHERADDR_INITIALIZER(struct pcmd_intmac_params, mac);
@@ -106,18 +106,18 @@ cmdline_parse_token_string_t pcmd_ringparam_token_cmd =
TOKEN_STRING_INITIALIZER(struct pcmd_intintint_params, cmd,
"ringparam");
cmdline_parse_token_num_t pcmd_intintint_token_port =
- TOKEN_NUM_INITIALIZER(struct pcmd_intintint_params, port, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_intintint_params, port, CMDLINE_UINT16);
cmdline_parse_token_num_t pcmd_intintint_token_tx =
- TOKEN_NUM_INITIALIZER(struct pcmd_intintint_params, tx, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_intintint_params, tx, CMDLINE_UINT16);
cmdline_parse_token_num_t pcmd_intintint_token_rx =
- TOKEN_NUM_INITIALIZER(struct pcmd_intintint_params, rx, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_intintint_params, rx, CMDLINE_UINT16);
/* Pause commands */
cmdline_parse_token_string_t pcmd_pause_token_cmd =
TOKEN_STRING_INITIALIZER(struct pcmd_intstr_params, cmd, "pause");
cmdline_parse_token_num_t pcmd_pause_token_port =
- TOKEN_NUM_INITIALIZER(struct pcmd_intstr_params, port, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_intstr_params, port, CMDLINE_UINT16);
cmdline_parse_token_string_t pcmd_pause_token_opt =
TOKEN_STRING_INITIALIZER(struct pcmd_intstr_params,
opt, "all#tx#rx#none");
@@ -126,11 +126,11 @@ cmdline_parse_token_string_t pcmd_pause_token_opt =
cmdline_parse_token_string_t pcmd_vlan_token_cmd =
TOKEN_STRING_INITIALIZER(struct pcmd_vlan_params, cmd, "vlan");
cmdline_parse_token_num_t pcmd_vlan_token_port =
- TOKEN_NUM_INITIALIZER(struct pcmd_vlan_params, port, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_vlan_params, port, CMDLINE_UINT16);
cmdline_parse_token_string_t pcmd_vlan_token_mode =
TOKEN_STRING_INITIALIZER(struct pcmd_vlan_params, mode, "add#del");
cmdline_parse_token_num_t pcmd_vlan_token_vid =
- TOKEN_NUM_INITIALIZER(struct pcmd_vlan_params, vid, UINT16);
+ TOKEN_NUM_INITIALIZER(struct pcmd_vlan_params, vid, CMDLINE_UINT16);
static void
diff --git a/examples/ipsec-secgw/parser.c b/examples/ipsec-secgw/parser.c
index fc8c238fe5a5..b46d5184b5e0 100644
--- a/examples/ipsec-secgw/parser.c
+++ b/examples/ipsec-secgw/parser.c
@@ -516,7 +516,7 @@ cmdline_parse_token_string_t cfg_add_neigh_start =
cmdline_parse_token_string_t cfg_add_neigh_pstr =
TOKEN_STRING_INITIALIZER(struct cfg_neigh_add_item, pstr, "port");
cmdline_parse_token_num_t cfg_add_neigh_port =
- TOKEN_NUM_INITIALIZER(struct cfg_neigh_add_item, port, UINT16);
+ TOKEN_NUM_INITIALIZER(struct cfg_neigh_add_item, port, CMDLINE_UINT16);
cmdline_parse_token_string_t cfg_add_neigh_mac =
TOKEN_STRING_INITIALIZER(struct cfg_neigh_add_item, mac, NULL);
diff --git a/examples/qos_sched/cmdline.c b/examples/qos_sched/cmdline.c
index 15f51830c160..cd43419a8823 100644
--- a/examples/qos_sched/cmdline.c
+++ b/examples/qos_sched/cmdline.c
@@ -113,7 +113,7 @@ cmdline_parse_token_string_t cmd_setqavg_param_string =
"period#n");
cmdline_parse_token_num_t cmd_setqavg_number =
TOKEN_NUM_INITIALIZER(struct cmd_setqavg_result, number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_inst_t cmd_setqavg = {
.f = cmd_setqavg_parsed,
@@ -188,10 +188,10 @@ cmdline_parse_token_string_t cmd_subportstats_subport_string =
"subport");
cmdline_parse_token_num_t cmd_subportstats_subport_number =
TOKEN_NUM_INITIALIZER(struct cmd_subportstats_result, subport_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_num_t cmd_subportstats_port_number =
TOKEN_NUM_INITIALIZER(struct cmd_subportstats_result, port_number,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_inst_t cmd_subportstats = {
.f = cmd_subportstats_parsed,
@@ -236,19 +236,19 @@ cmdline_parse_token_string_t cmd_pipestats_port_string =
"port");
cmdline_parse_token_num_t cmd_pipestats_port_number =
TOKEN_NUM_INITIALIZER(struct cmd_pipestats_result, port_number,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_pipestats_subport_string =
TOKEN_STRING_INITIALIZER(struct cmd_pipestats_result, subport_string,
"subport");
cmdline_parse_token_num_t cmd_pipestats_subport_number =
TOKEN_NUM_INITIALIZER(struct cmd_pipestats_result, subport_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_pipestats_pipe_string =
TOKEN_STRING_INITIALIZER(struct cmd_pipestats_result, pipe_string,
"pipe");
cmdline_parse_token_num_t cmd_pipestats_pipe_number =
TOKEN_NUM_INITIALIZER(struct cmd_pipestats_result, pipe_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_inst_t cmd_pipestats = {
.f = cmd_pipestats_parsed,
@@ -299,31 +299,31 @@ cmdline_parse_token_string_t cmd_avg_q_port_string =
"port");
cmdline_parse_token_num_t cmd_avg_q_port_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_q_result, port_number,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_avg_q_subport_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_q_result, subport_string,
"subport");
cmdline_parse_token_num_t cmd_avg_q_subport_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_q_result, subport_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_avg_q_pipe_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_q_result, pipe_string,
"pipe");
cmdline_parse_token_num_t cmd_avg_q_pipe_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_q_result, pipe_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_avg_q_tc_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_q_result, tc_string,
"tc");
cmdline_parse_token_num_t cmd_avg_q_tc_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_q_result, tc_number,
- UINT8);
+ CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_avg_q_q_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_q_result, q_string,
"q");
cmdline_parse_token_num_t cmd_avg_q_q_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_q_result, q_number,
- UINT8);
+ CMDLINE_UINT8);
cmdline_parse_inst_t cmd_avg_q = {
.f = cmd_avg_q_parsed,
@@ -376,25 +376,25 @@ cmdline_parse_token_string_t cmd_avg_tcpipe_port_string =
"port");
cmdline_parse_token_num_t cmd_avg_tcpipe_port_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_tcpipe_result, port_number,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_avg_tcpipe_subport_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_tcpipe_result, subport_string,
"subport");
cmdline_parse_token_num_t cmd_avg_tcpipe_subport_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_tcpipe_result, subport_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_avg_tcpipe_pipe_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_tcpipe_result, pipe_string,
"pipe");
cmdline_parse_token_num_t cmd_avg_tcpipe_pipe_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_tcpipe_result, pipe_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_avg_tcpipe_tc_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_tcpipe_result, tc_string,
"tc");
cmdline_parse_token_num_t cmd_avg_tcpipe_tc_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_tcpipe_result, tc_number,
- UINT8);
+ CMDLINE_UINT8);
cmdline_parse_inst_t cmd_avg_tcpipe = {
.f = cmd_avg_tcpipe_parsed,
@@ -443,19 +443,19 @@ cmdline_parse_token_string_t cmd_avg_pipe_port_string =
"port");
cmdline_parse_token_num_t cmd_avg_pipe_port_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_pipe_result, port_number,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_avg_pipe_subport_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_pipe_result, subport_string,
"subport");
cmdline_parse_token_num_t cmd_avg_pipe_subport_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_pipe_result, subport_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_avg_pipe_pipe_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_pipe_result, pipe_string,
"pipe");
cmdline_parse_token_num_t cmd_avg_pipe_pipe_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_pipe_result, pipe_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_inst_t cmd_avg_pipe = {
.f = cmd_avg_pipe_parsed,
@@ -502,19 +502,19 @@ cmdline_parse_token_string_t cmd_avg_tcsubport_port_string =
"port");
cmdline_parse_token_num_t cmd_avg_tcsubport_port_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_tcsubport_result, port_number,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_avg_tcsubport_subport_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_tcsubport_result, subport_string,
"subport");
cmdline_parse_token_num_t cmd_avg_tcsubport_subport_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_tcsubport_result, subport_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_token_string_t cmd_avg_tcsubport_tc_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_tcsubport_result, tc_string,
"tc");
cmdline_parse_token_num_t cmd_avg_tcsubport_tc_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_tcsubport_result, tc_number,
- UINT8);
+ CMDLINE_UINT8);
cmdline_parse_inst_t cmd_avg_tcsubport = {
.f = cmd_avg_tcsubport_parsed,
@@ -559,13 +559,13 @@ cmdline_parse_token_string_t cmd_avg_subport_port_string =
"port");
cmdline_parse_token_num_t cmd_avg_subport_port_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_subport_result, port_number,
- UINT16);
+ CMDLINE_UINT16);
cmdline_parse_token_string_t cmd_avg_subport_subport_string =
TOKEN_STRING_INITIALIZER(struct cmd_avg_subport_result, subport_string,
"subport");
cmdline_parse_token_num_t cmd_avg_subport_subport_number =
TOKEN_NUM_INITIALIZER(struct cmd_avg_subport_result, subport_number,
- UINT32);
+ CMDLINE_UINT32);
cmdline_parse_inst_t cmd_avg_subport = {
.f = cmd_avg_subport_parsed,
diff --git a/examples/quota_watermark/qwctl/commands.c b/examples/quota_watermark/qwctl/commands.c
index a1c646b9fb52..f2579519c277 100644
--- a/examples/quota_watermark/qwctl/commands.c
+++ b/examples/quota_watermark/qwctl/commands.c
@@ -74,7 +74,7 @@ cmdline_parse_token_string_t cmd_set_variable =
TOKEN_STRING_INITIALIZER(struct cmd_set_tokens, variable, NULL);
cmdline_parse_token_num_t cmd_set_value =
- TOKEN_NUM_INITIALIZER(struct cmd_set_tokens, value, UINT32);
+ TOKEN_NUM_INITIALIZER(struct cmd_set_tokens, value, CMDLINE_UINT32);
static void
cmd_set_handler(__attribute__((unused)) void *parsed_result,
diff --git a/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c b/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c
index fe09b0778ac1..a914549ddcb3 100644
--- a/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c
+++ b/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c
@@ -173,7 +173,7 @@ cmdline_parse_token_string_t cmd_set_cpu_freq =
set_cpu_freq, "set_cpu_freq");
cmdline_parse_token_string_t cmd_set_cpu_freq_core_num =
TOKEN_NUM_INITIALIZER(struct cmd_set_cpu_freq_result,
- lcore_id, UINT8);
+ lcore_id, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_set_cpu_freq_cmd_cmd =
TOKEN_STRING_INITIALIZER(struct cmd_set_cpu_freq_result,
cmd, "up#down#min#max#enable_turbo#disable_turbo");
diff --git a/examples/vm_power_manager/vm_power_cli.c b/examples/vm_power_manager/vm_power_cli.c
index 89b000d923a8..4a7fb510a40f 100644
--- a/examples/vm_power_manager/vm_power_cli.c
+++ b/examples/vm_power_manager/vm_power_cli.c
@@ -155,10 +155,10 @@ cmdline_parse_token_string_t cmd_set_pcpu_vm_name =
vm_name, NULL);
cmdline_parse_token_num_t set_pcpu_vcpu =
TOKEN_NUM_INITIALIZER(struct cmd_set_pcpu_result,
- vcpu, UINT8);
+ vcpu, CMDLINE_UINT8);
cmdline_parse_token_num_t set_pcpu_core =
TOKEN_NUM_INITIALIZER(struct cmd_set_pcpu_result,
- core, UINT64);
+ core, CMDLINE_UINT64);
cmdline_parse_inst_t cmd_set_pcpu_set = {
@@ -408,7 +408,7 @@ cmdline_parse_token_string_t cmd_show_cpu_freq =
cmdline_parse_token_num_t cmd_show_cpu_freq_core_num =
TOKEN_NUM_INITIALIZER(struct cmd_show_cpu_freq_result,
- core_num, UINT8);
+ core_num, CMDLINE_UINT8);
cmdline_parse_inst_t cmd_show_cpu_freq_set = {
.f = cmd_show_cpu_freq_parsed,
@@ -457,7 +457,7 @@ cmdline_parse_token_string_t cmd_set_cpu_freq =
set_cpu_freq, "set_cpu_freq");
cmdline_parse_token_num_t cmd_set_cpu_freq_core_num =
TOKEN_NUM_INITIALIZER(struct cmd_set_cpu_freq_result,
- core_num, UINT8);
+ core_num, CMDLINE_UINT8);
cmdline_parse_token_string_t cmd_set_cpu_freq_cmd_cmd =
TOKEN_STRING_INITIALIZER(struct cmd_set_cpu_freq_result,
cmd, "up#down#min#max#enable_turbo#disable_turbo");
diff --git a/lib/librte_cmdline/cmdline_parse_num.c b/lib/librte_cmdline/cmdline_parse_num.c
index 478f181b4443..0d9f63aef797 100644
--- a/lib/librte_cmdline/cmdline_parse_num.c
+++ b/lib/librte_cmdline/cmdline_parse_num.c
@@ -69,23 +69,23 @@ static int
check_res_size(struct cmdline_token_num_data *nd, unsigned ressize)
{
switch (nd->type) {
- case INT8:
- case UINT8:
+ case CMDLINE_INT8:
+ case CMDLINE_UINT8:
if (ressize < sizeof(int8_t))
return -1;
break;
- case INT16:
- case UINT16:
+ case CMDLINE_INT16:
+ case CMDLINE_UINT16:
if (ressize < sizeof(int16_t))
return -1;
break;
- case INT32:
- case UINT32:
+ case CMDLINE_INT32:
+ case CMDLINE_UINT32:
if (ressize < sizeof(int32_t))
return -1;
break;
- case INT64:
- case UINT64:
+ case CMDLINE_INT64:
+ case CMDLINE_UINT64:
if (ressize < sizeof(int64_t))
return -1;
break;
@@ -259,35 +259,35 @@ cmdline_parse_num(cmdline_parse_token_hdr_t *tk, const char *srcbuf, void *res,
case HEX_OK:
case OCTAL_OK:
case BIN_OK:
- if ( nd.type == INT8 && res1 <= INT8_MAX ) {
+ if ( nd.type == CMDLINE_INT8 && res1 <= INT8_MAX ) {
if (res) *(int8_t *)res = (int8_t) res1;
return buf-srcbuf;
}
- else if ( nd.type == INT16 && res1 <= INT16_MAX ) {
+ else if ( nd.type == CMDLINE_INT16 && res1 <= INT16_MAX ) {
if (res) *(int16_t *)res = (int16_t) res1;
return buf-srcbuf;
}
- else if ( nd.type == INT32 && res1 <= INT32_MAX ) {
+ else if ( nd.type == CMDLINE_INT32 && res1 <= INT32_MAX ) {
if (res) *(int32_t *)res = (int32_t) res1;
return buf-srcbuf;
}
- else if ( nd.type == INT64 && res1 <= INT64_MAX ) {
+ else if ( nd.type == CMDLINE_INT64 && res1 <= INT64_MAX ) {
if (res) *(int64_t *)res = (int64_t) res1;
return buf-srcbuf;
}
- else if ( nd.type == UINT8 && res1 <= UINT8_MAX ) {
+ else if ( nd.type == CMDLINE_UINT8 && res1 <= UINT8_MAX ) {
if (res) *(uint8_t *)res = (uint8_t) res1;
return buf-srcbuf;
}
- else if (nd.type == UINT16 && res1 <= UINT16_MAX ) {
+ else if (nd.type == CMDLINE_UINT16 && res1 <= UINT16_MAX ) {
if (res) *(uint16_t *)res = (uint16_t) res1;
return buf-srcbuf;
}
- else if ( nd.type == UINT32 && res1 <= UINT32_MAX ) {
+ else if ( nd.type == CMDLINE_UINT32 && res1 <= UINT32_MAX ) {
if (res) *(uint32_t *)res = (uint32_t) res1;
return buf-srcbuf;
}
- else if ( nd.type == UINT64 ) {
+ else if ( nd.type == CMDLINE_UINT64 ) {
if (res) *(uint64_t *)res = res1;
return buf-srcbuf;
}
@@ -297,19 +297,19 @@ cmdline_parse_num(cmdline_parse_token_hdr_t *tk, const char *srcbuf, void *res,
break;
case DEC_NEG_OK:
- if ( nd.type == INT8 && res1 <= INT8_MAX + 1 ) {
+ if ( nd.type == CMDLINE_INT8 && res1 <= INT8_MAX + 1 ) {
if (res) *(int8_t *)res = (int8_t) (-res1);
return buf-srcbuf;
}
- else if ( nd.type == INT16 && res1 <= (uint16_t)INT16_MAX + 1 ) {
+ else if ( nd.type == CMDLINE_INT16 && res1 <= (uint16_t)INT16_MAX + 1 ) {
if (res) *(int16_t *)res = (int16_t) (-res1);
return buf-srcbuf;
}
- else if ( nd.type == INT32 && res1 <= (uint32_t)INT32_MAX + 1 ) {
+ else if ( nd.type == CMDLINE_INT32 && res1 <= (uint32_t)INT32_MAX + 1 ) {
if (res) *(int32_t *)res = (int32_t) (-res1);
return buf-srcbuf;
}
- else if ( nd.type == INT64 && res1 <= (uint64_t)INT64_MAX + 1 ) {
+ else if ( nd.type == CMDLINE_INT64 && res1 <= (uint64_t)INT64_MAX + 1 ) {
if (res) *(int64_t *)res = (int64_t) (-res1);
return buf-srcbuf;
}
diff --git a/lib/librte_cmdline/cmdline_parse_num.h b/lib/librte_cmdline/cmdline_parse_num.h
index 58b28cad7c41..28b04520f01e 100644
--- a/lib/librte_cmdline/cmdline_parse_num.h
+++ b/lib/librte_cmdline/cmdline_parse_num.h
@@ -14,14 +14,14 @@ extern "C" {
#endif
enum cmdline_numtype {
- UINT8 = 0,
- UINT16,
- UINT32,
- UINT64,
- INT8,
- INT16,
- INT32,
- INT64
+ CMDLINE_UINT8 = 0,
+ CMDLINE_UINT16,
+ CMDLINE_UINT32,
+ CMDLINE_UINT64,
+ CMDLINE_INT8,
+ CMDLINE_INT16,
+ CMDLINE_INT32,
+ CMDLINE_INT64
};
struct cmdline_token_num_data {
--
2.20.1
^ permalink raw reply [relevance 1%]
* [dpdk-dev] [PATCH v6 15/15] sched: remove redundant code
@ 2019-10-24 18:46 4% ` Jasvinder Singh
1 sibling, 0 replies; 200+ results
From: Jasvinder Singh @ 2019-10-24 18:46 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, Lukasz Krakowiak
Remove redundant data structure fields from port level data
structures and update the release notes.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
---
doc/guides/rel_notes/release_19_11.rst | 7 ++++-
lib/librte_sched/rte_sched.c | 42 +-------------------------
lib/librte_sched/rte_sched.h | 22 --------------
3 files changed, 7 insertions(+), 64 deletions(-)
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index f59a28307..524fb338b 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -228,6 +228,11 @@ API Changes
has been introduced in this release is used when all the packets
enqueued in the tx adapter are destined for the same Ethernet port & Tx queue.
+* sched: The pipe nodes configuration parameters such as number of pipes,
+ pipe queue sizes, pipe profiles, etc., are moved from port level structure
+ to subport level. This allows different subports of the same port to
+ have different configuration for the pipe nodes.
+
ABI Changes
-----------
@@ -315,7 +320,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_rcu.so.1
librte_reorder.so.1
librte_ring.so.2
- librte_sched.so.3
+ + librte_sched.so.4
librte_security.so.2
librte_stack.so.1
librte_table.so.3
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 1faa580d0..710ecf65a 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -216,13 +216,6 @@ struct rte_sched_port {
uint32_t mtu;
uint32_t frame_overhead;
int socket;
- uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
- uint32_t n_pipe_profiles;
- uint32_t n_max_pipe_profiles;
- uint32_t pipe_tc_be_rate_max;
-#ifdef RTE_SCHED_RED
- struct rte_red_config red_config[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS];
-#endif
/* Timing */
uint64_t time_cpu_cycles; /* Current CPU time measured in CPU cyles */
@@ -230,48 +223,15 @@ struct rte_sched_port {
uint64_t time; /* Current NIC TX time measured in bytes */
struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
- /* Scheduling loop detection */
- uint32_t pipe_loop;
- uint32_t pipe_exhaustion;
-
- /* Bitmap */
- struct rte_bitmap *bmp;
- uint32_t grinder_base_bmp_pos[RTE_SCHED_PORT_N_GRINDERS] __rte_aligned_16;
-
/* Grinders */
- struct rte_sched_grinder grinder[RTE_SCHED_PORT_N_GRINDERS];
- uint32_t busy_grinders;
struct rte_mbuf **pkts_out;
uint32_t n_pkts_out;
uint32_t subport_id;
- /* Queue base calculation */
- uint32_t qsize_add[RTE_SCHED_QUEUES_PER_PIPE];
- uint32_t qsize_sum;
-
/* Large data structures */
- struct rte_sched_subport *subports[0];
- struct rte_sched_subport *subport;
- struct rte_sched_pipe *pipe;
- struct rte_sched_queue *queue;
- struct rte_sched_queue_extra *queue_extra;
- struct rte_sched_pipe_profile *pipe_profiles;
- uint8_t *bmp_array;
- struct rte_mbuf **queue_array;
- uint8_t memory[0] __rte_cache_aligned;
+ struct rte_sched_subport *subports[0] __rte_cache_aligned;
} __rte_cache_aligned;
-enum rte_sched_port_array {
- e_RTE_SCHED_PORT_ARRAY_SUBPORT = 0,
- e_RTE_SCHED_PORT_ARRAY_PIPE,
- e_RTE_SCHED_PORT_ARRAY_QUEUE,
- e_RTE_SCHED_PORT_ARRAY_QUEUE_EXTRA,
- e_RTE_SCHED_PORT_ARRAY_PIPE_PROFILES,
- e_RTE_SCHED_PORT_ARRAY_BMP_ARRAY,
- e_RTE_SCHED_PORT_ARRAY_QUEUE_ARRAY,
- e_RTE_SCHED_PORT_ARRAY_TOTAL,
-};
-
enum rte_sched_subport_array {
e_RTE_SCHED_SUBPORT_ARRAY_PIPE = 0,
e_RTE_SCHED_SUBPORT_ARRAY_QUEUE,
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 40f02f124..c82c23c14 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -260,28 +260,6 @@ struct rte_sched_port_params {
* the subports of the same port.
*/
uint32_t n_pipes_per_subport;
-
- /** Packet queue size for each traffic class.
- * All the pipes within the same subport share the similar
- * configuration for the queues.
- */
- uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
-
- /** Pipe profile table.
- * Every pipe is configured using one of the profiles from this table.
- */
- struct rte_sched_pipe_params *pipe_profiles;
-
- /** Profiles in the pipe profile table */
- uint32_t n_pipe_profiles;
-
- /** Max profiles allowed in the pipe profile table */
- uint32_t n_max_pipe_profiles;
-
-#ifdef RTE_SCHED_RED
- /** RED parameters */
- struct rte_red_params red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS];
-#endif
};
/*
--
2.21.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3] mbuf: support dynamic fields and flags
2019-10-24 8:13 3% ` [dpdk-dev] [PATCH v3] " Olivier Matz
@ 2019-10-24 16:40 0% ` Thomas Monjalon
0 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2019-10-24 16:40 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Shahaf Shuler, Stephen Hemminger,
Slava Ovsiienko
24/10/2019 10:13, Olivier Matz:
> Many features require to store data inside the mbuf. As the room in mbuf
> structure is limited, it is not possible to have a field for each
> feature. Also, changing fields in the mbuf structure can break the API
> or ABI.
>
> This commit addresses these issues, by enabling the dynamic registration
> of fields or flags:
>
> - a dynamic field is a named area in the rte_mbuf structure, with a
> given size (>= 1 byte) and alignment constraint.
> - a dynamic flag is a named bit in the rte_mbuf structure.
>
> The typical use case is a PMD that registers space for an offload
> feature, when the application requests to enable this feature. As
> the space in mbuf is limited, the space should only be reserved if it
> is going to be used (i.e when the application explicitly asks for it).
>
> The registration can be done at any moment, but it is not possible
> to unregister fields or flags.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
I feel I could merge this patch.
I will hold on for few hours and will proceed.
* Re: [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
` (7 preceding siblings ...)
2019-10-23 21:10 7% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 Stephen Hemminger
@ 2019-10-24 16:37 4% ` Thomas Monjalon
8 siblings, 0 replies; 200+ results
From: Thomas Monjalon @ 2019-10-24 16:37 UTC (permalink / raw)
To: David Marchand; +Cc: dev, stephen, anatoly.burakov
23/10/2019 20:54, David Marchand:
> Let's prepare for the ABI freeze.
>
> The first patches are about changes that had been announced before (with
> a patch from Stephen that I took as it is ready as is from my pov).
>
> The malloc_heap structure from the memory subsystem can be hidden.
> The PCI library had some forgotten deprecated APIs that are removed with
> this series.
>
rte_logs could be hidden, but I am not that comfortable about
> doing it right away: I added an accessor to rte_logs.file, but I am fine
> with dropping the last patch and wait for actually hiding this in the next
> ABI break.
>
> Changelog since v1:
> - I went a step further, hiding rte_config after de-inlining non critical
> functions
>
> Comments?
Except patch 8 (hiding rte_logs),
Acked-by: Thomas Monjalon <thomas@monjalon.net>
* Re: [dpdk-dev] [PATCH v2 08/12] log: hide internal log structure
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 08/12] log: hide internal log structure David Marchand
@ 2019-10-24 16:30 0% ` Thomas Monjalon
2019-10-25 9:19 0% ` Kevin Traynor
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2019-10-24 16:30 UTC (permalink / raw)
To: david.marchand; +Cc: dev, anatoly.burakov, stephen, ktraynor
23/10/2019 20:54, David Marchand:
> No need to expose rte_logs, hide it and remove it from the current ABI.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
[...]
> --- a/lib/librte_eal/common/include/rte_log.h
> +++ b/lib/librte_eal/common/include/rte_log.h
> -struct rte_log_dynamic_type;
> -
> -/** The rte_log structure. */
> -struct rte_logs {
> - uint32_t type; /**< Bitfield with enabled logs. */
> - uint32_t level; /**< Log level. */
> - FILE *file; /**< Output file set by rte_openlog_stream, or NULL. */
> - size_t dynamic_types_len;
> - struct rte_log_dynamic_type *dynamic_types;
> -};
I like this kind of change, but the FILE stream is available only through
the new experimental function. It is against the famous Mr Traynor rule:
we cannot deprecate or remove an old stable symbol if the replacement is experimental.
* Re: [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11
2019-10-24 15:37 4% ` Stephen Hemminger
@ 2019-10-24 16:01 4% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-24 16:01 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Thomas Monjalon, dev, Burakov, Anatoly
On Thu, Oct 24, 2019 at 5:37 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
> > > > rte_logs could be hidden, but I am not that comfortable about
> > > > doing it right away: I added an accessor to rte_logs.file, but I am fine
> > > > with dropping the last patch and wait for actually hiding this in the next
> > > > ABI break.
> > >
> > > 19.11 is an api/abi break so maybe do it now.
> >
> > I went and hid more internals, I did not see an impact on really basic bench.
> >
> > I would appreciate other opinions.
>
> These all look good. There is probably a lot more that could be
> done, adding more accessors in 20.02 could help but more hiding won't happen
> again until 20.11
Yes, I went with the low hanging fruits.
It is a long term effort in any case, when reviewing too.
--
David Marchand
* Re: [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11
2019-10-24 7:32 4% ` David Marchand
@ 2019-10-24 15:37 4% ` Stephen Hemminger
2019-10-24 16:01 4% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2019-10-24 15:37 UTC (permalink / raw)
To: David Marchand; +Cc: Thomas Monjalon, dev, Burakov, Anatoly
On Thu, 24 Oct 2019 09:32:10 +0200
David Marchand <david.marchand@redhat.com> wrote:
> On Wed, Oct 23, 2019 at 11:10 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> >
> > On Wed, 23 Oct 2019 20:54:12 +0200
> > David Marchand <david.marchand@redhat.com> wrote:
> >
> > > Let's prepare for the ABI freeze.
> > >
> > > The first patches are about changes that had been announced before (with
> > > a patch from Stephen that I took as it is ready as is from my pov).
> > >
> > > The malloc_heap structure from the memory subsystem can be hidden.
> > > The PCI library had some forgotten deprecated APIs that are removed with
> > > this series.
> > >
> > > > rte_logs could be hidden, but I am not that comfortable about
> > > doing it right away: I added an accessor to rte_logs.file, but I am fine
> > > with dropping the last patch and wait for actually hiding this in the next
> > > ABI break.
> >
> > 19.11 is an api/abi break so maybe do it now.
>
> Did you look at the 4 new patches too?
>
> Same concern + this was not announced before either.
> I went and hid more internals, I did not see an impact on really basic bench.
>
> I would appreciate other opinions.
>
>
These all look good. There is probably a lot more that could be
done, adding more accessors in 20.02 could help but more hiding won't happen
again until 20.11
* Re: [dpdk-dev] [PATCH v2] ethdev: extend flow metadata
2019-10-24 9:22 0% ` Olivier Matz
@ 2019-10-24 12:30 0% ` Slava Ovsiienko
0 siblings, 0 replies; 200+ results
From: Slava Ovsiienko @ 2019-10-24 12:30 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev, Matan Azrad, Raslan Darawsheh, Thomas Monjalon
Hi Olivier,
> -----Original Message-----
> From: Olivier Matz <olivier.matz@6wind.com>
> Sent: Thursday, October 24, 2019 12:23
> To: Slava Ovsiienko <viacheslavo@mellanox.com>
> Cc: dev@dpdk.org; Matan Azrad <matan@mellanox.com>; Raslan
> Darawsheh <rasland@mellanox.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Subject: Re: [PATCH v2] ethdev: extend flow metadata
>
> Hi Slava,
>
> On Thu, Oct 24, 2019 at 06:49:41AM +0000, Slava Ovsiienko wrote:
> > Hi, Olivier
> >
> > > > [snip]
> > > >
> > > > > > +int
> > > > > > +rte_flow_dynf_metadata_register(void)
> > > > > > +{
> > > > > > + int offset;
> > > > > > + int flag;
> > > > > > +
> > > > > > + static const struct rte_mbuf_dynfield desc_offs = {
> > > > > > + .name = MBUF_DYNF_METADATA_NAME,
> > > > > > + .size = MBUF_DYNF_METADATA_SIZE,
> > > > > > + .align = MBUF_DYNF_METADATA_ALIGN,
> > > > > > + .flags = MBUF_DYNF_METADATA_FLAGS,
> > > > > > + };
> > > > > > + static const struct rte_mbuf_dynflag desc_flag = {
> > > > > > + .name = MBUF_DYNF_METADATA_NAME,
> > > > > > + };
> > > > >
> > > > > I don't see think we need #defines.
> > > > > You can directly use the name, sizeof() and __alignof__() here.
> > > > > If the information is used externally, the structure shall be
> > > > > made global non- static.
> > > >
> > > > The intention was to gather all dynamic fields definitions in one
> > > > place (in rte_mbuf_dyn.h).
> > >
> > > If the dynamic field is only going to be used inside rte_flow, I
> > > think there is no need to expose it in rte_mbuf_dyn.h.
> > > The other reason is I think the #define are just "passthrough", and
> > > do not really bring added value, just an indirection.
> > >
> > > > It would be easy to see all fields in one sight (some might be
> > > > shared, some might be mutual exclusive, estimate mbuf space,
> > > > required by various features, etc.). So, we can't just fill
> > > > structure fields with simple sizeof() and alignof() instead of
> > > > definitions (the field parameters must be defined once).
> > > >
> > > > I do not see the reasons to make table global. I would prefer the
> > > definitions.
> > > > - the definitions are compile time processing (table fields are
> > > > runtime), it provides code optimization and better performance.
> > >
> > > There is indeed no need to make the table global if the field is
> > > private to rte_flow. About better performance, my understanding is
> > > that it would only impact registration, am I missing something?
> >
> > OK, I thought about some opportunity to allow application to register
> > field directly, bypassing rte_flow_dynf_metadata_register(). So either
> > definitions or field description table was supposed to be global.
> > I agree, let's do not complicate the matter, I'll will make global the
> > metadata field name definition only - in the rte_mbuf_dyn.h in order
> > just to have some centralizing point.
>
> By reading your mail, things are also clearer to me about which parts need
> access to this field.
>
> To summarize what I understand:
> - dyn field registration is done in rte_flow lib when configuring
> a flow using META
> - the dynamic field will never be get/set in a mbuf by a PMD or rte_flow
> before a flow using META is added
In testpmd with the current patch - yes, and this is just a sample. The common practice of
enabling metadata may differ. If an application sees some PMD supporting the RX/TX_METADATA
offload and it desires to receive metadata, it registers the dynamic field once.
> One question then: why would you need the dyn field name to be exported?
> Does the PMD need to know if the field is registered with a lookup or
> something like that? If yes, can you detail why?
I think it might happen that the PMD does.
Right now I have an issue with mlx5 PMD compiled as shared library.
The global variables from rte_flow.c are not seen in the PMD (just because I forgot
to add them into the .map file). The way dynamic data are linked is system
dependent and it might be needed to optimize. I mean - in some
cases PMD might need to do lookup explicitly and use local copies
of offset and mask. So' I'd prefer to see field descriptor be
global visible. Yes, PMD can take the offset/flag directly
from the rte_flow variables and cache ones internally,
so global descriptor is just some kind of insurance.
As for the name - it is less critical, it may
be just useful for various log/debug messages and so on. The other
reason to have name definition is to put it in the "centralizing point"
somewhere in the rte_mbuf_dyn.h, to gather all names together and
eliminate the name conflicts (yes, the documented name convention
reduces the risk, but it just convenient to see all fields names
within one sight - it is easy to determine which are supported, etc).
> >
> > > >
> > > > > > +
> > > > > > + offset = rte_mbuf_dynfield_register(&desc_offs);
> > > > > > + if (offset < 0)
> > > > > > + goto error;
> > > > > > + flag = rte_mbuf_dynflag_register(&desc_flag);
> > > > > > + if (flag < 0)
> > > > > > + goto error;
> > > > > > + rte_flow_dynf_metadata_offs = offset;
> > > > > > + rte_flow_dynf_metadata_mask = (1ULL << flag);
> > > > > > + return 0;
> > > > > > +
> > > > > > +error:
> > > > > > + rte_flow_dynf_metadata_offs = -1;
> > > > > > + rte_flow_dynf_metadata_mask = 0ULL;
> > > > > > + return -rte_errno;
> > > > > > +}
> > > > > > +
> > > > > > static int
> > > > > > flow_err(uint16_t port_id, int ret, struct rte_flow_error
> > > > > > *error) { diff --git a/lib/librte_ethdev/rte_flow.h
> > > > > > b/lib/librte_ethdev/rte_flow.h index 391a44a..a27e619 100644
> > > > > > --- a/lib/librte_ethdev/rte_flow.h
> > > > > > +++ b/lib/librte_ethdev/rte_flow.h
> > > > > > @@ -27,6 +27,8 @@
> > > > > > #include <rte_udp.h>
> > > > > > #include <rte_byteorder.h>
> > > > > > #include <rte_esp.h>
> > > > > > +#include <rte_mbuf.h>
> > > > > > +#include <rte_mbuf_dyn.h>
> > > > > >
> > > > > > #ifdef __cplusplus
> > > > > > extern "C" {
> > > > > > @@ -417,7 +419,8 @@ enum rte_flow_item_type {
> > > > > > /**
> > > > > > * [META]
> > > > > > *
> > > > > > - * Matches a metadata value specified in mbuf metadata
> field.
> > > > > > + * Matches a metadata value.
> > > > > > + *
> > > > > > * See struct rte_flow_item_meta.
> > > > > > */
> > > > > > RTE_FLOW_ITEM_TYPE_META,
> > > > > > @@ -1213,9 +1216,17 @@ struct
> > > rte_flow_item_icmp6_nd_opt_tla_eth {
> > > > > > #endif
> > > > > >
> > > > > > /**
> > > > > > - * RTE_FLOW_ITEM_TYPE_META.
> > > > > > + * @warning
> > > > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > > > > + notice
> > > > > > *
> > > > > > - * Matches a specified metadata value.
> > > > > > + * RTE_FLOW_ITEM_TYPE_META
> > > > > > + *
> > > > > > + * Matches a specified metadata value. On egress, metadata
> > > > > > + can be set either by
> > > > > > + * mbuf tx_metadata field with PKT_TX_METADATA flag or
> > > > > > + * RTE_FLOW_ACTION_TYPE_SET_META. On ingress,
> > > > > > + RTE_FLOW_ACTION_TYPE_SET_META sets
> > > > > > + * metadata for a packet and the metadata will be reported
> > > > > > + via mbuf metadata
> > > > > > + * dynamic field with PKT_RX_DYNF_METADATA flag. The dynamic
> > > mbuf
> > > > > > + field must be
> > > > > > + * registered in advance by rte_flow_dynf_metadata_register().
> > > > > > */
> > > > > > struct rte_flow_item_meta {
> > > > > > rte_be32_t data;
> > > > > > @@ -1813,6 +1824,13 @@ enum rte_flow_action_type {
> > > > > > * undefined behavior.
> > > > > > */
> > > > > > RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK,
> > > > > > +
> > > > > > + /**
> > > > > > + * Set metadata on ingress or egress path.
> > > > > > + *
> > > > > > + * See struct rte_flow_action_set_meta.
> > > > > > + */
> > > > > > + RTE_FLOW_ACTION_TYPE_SET_META,
> > > > > > };
> > > > > >
> > > > > > /**
> > > > > > @@ -2300,6 +2318,43 @@ struct rte_flow_action_set_mac {
> > > > > > uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; };
> > > > > >
> > > > > > +/**
> > > > > > + * @warning
> > > > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > > > > +notice
> > > > > > + *
> > > > > > + * RTE_FLOW_ACTION_TYPE_SET_META
> > > > > > + *
> > > > > > + * Set metadata. Metadata set by mbuf tx_metadata field with
> > > > > > + * PKT_TX_METADATA flag on egress will be overridden by this
> action.
> > > > > > +On
> > > > > > + * ingress, the metadata will be carried by mbuf metadata
> > > > > > +dynamic field
> > > > > > + * with PKT_RX_DYNF_METADATA flag if set. The dynamic mbuf
> > > > > > +field must be
> > > > > > + * registered in advance by rte_flow_dynf_metadata_register().
> > > > > > + *
> > > > > > + * Altering partial bits is supported with mask. For bits
> > > > > > +which have never
> > > > > > + * been set, unpredictable value will be seen depending on
> > > > > > +driver
> > > > > > + * implementation. For loopback/hairpin packet, metadata set
> > > > > > +on Rx/Tx may
> > > > > > + * or may not be propagated to the other path depending on HW
> > > > > capability.
> > > > > > + *
> > > > > > + * RTE_FLOW_ITEM_TYPE_META matches metadata.
> > > > > > + */
> > > > > > +struct rte_flow_action_set_meta {
> > > > > > + rte_be32_t data;
> > > > > > + rte_be32_t mask;
> > > > > > +};
> > > > > > +
> > > > > > +/* Mbuf dynamic field offset for metadata. */ extern int
> > > > > > +rte_flow_dynf_metadata_offs;
> > > > > > +
> > > > > > +/* Mbuf dynamic field flag mask for metadata. */ extern
> > > > > > +uint64_t rte_flow_dynf_metadata_mask;
> > > > > > +
> > > > > > +/* Mbuf dynamic field pointer for metadata. */ #define
> > > > > > +RTE_FLOW_DYNF_METADATA(m) \
> > > > > > + RTE_MBUF_DYNFIELD((m), rte_flow_dynf_metadata_offs,
> uint32_t
> > > > > *)
> > > > > > +
> > > > > > +/* Mbuf dynamic flag for metadata. */ #define
> > > > > > +PKT_RX_DYNF_METADATA
> > > > > > +(rte_flow_dynf_metadata_mask)
> > > > > > +
> > > > >
> > > > > I wonder if helpers like this wouldn't be better, because they
> > > > > combine the flag and the field:
> > > > >
> > > > > /**
> > > > > * Set metadata dynamic field and flag in mbuf.
> > > > > *
> > > > > * rte_flow_dynf_metadata_register() must have been called first.
> > > > > */
> > > > > __rte_experimental
> > > > > static inline void rte_mbuf_dyn_metadata_set(struct rte_mbuf *m,
> > > > > uint32_t metadata) {
> > > > > *RTE_MBUF_DYNFIELD(m, rte_flow_dynf_metadata_offs,
> > > > > uint32_t *) = metadata;
> > > > > m->ol_flags |= rte_flow_dynf_metadata_mask; }
> > > > Setting the flag looks redundant.
> > > > What if driver just replaces the metadata and flag is already set?
> > > > The other option - the flags (for set of fields) might be set in
> combinations.
> > > > mbuf field is supposed to be engaged in datapath, performance is
> > > > very critical, adding one more abstraction layer seems not to be
> relevant.
> > >
> > > Ok, that was just a suggestion. Let's use your accessors if you fear
> > > a performance impact.
> > The simple example - mlx5 PMD has the rx_burst routine implemented
> > with vector instructions, and it processes four packets at once. No
> > need to check field availability four times, and storing the
> > metadata is the subject for further optimization with vector instructions.
> > It is a bit difficult to provide common helpers to handle the metadata
> > field due to extremely high optimization requirements.
>
> ok, got it
>
> > > Nevertheless I suggest to use static inline functions in place of
> > > macros if possible. For RTE_MBUF_DYNFIELD(), I used a macro because
> > > it's the only way to provide a type to cast the result. But in your
> > > case, you know it's a uint32_t *.
> > What If one needs to specify the address of field? Macro allows to do
> > that, inline functions - do not. Packets may be processed in bizarre
> > ways, for example in a batch, with vector instructions. OK, I'll
> > provide the set/get routines, but I'm not sure whether will use ones in mlx5
> code.
> > In my opinion it just obscures the field nature. Field is just a
> > field, AFAIU, it is main idea of your patch, the way to handle dynamic
> > field should be close to handling usual static fields, I think. Macro
> > pointer follows this approach, routines - does not.
>
> Well, I just think that:
> rte_mbuf_set_time_stamp(m, 1234);
> is more readable than:
> *RTE_MBUF_TIMESTAMP(m) = 1234;
I implemented these metadata set/get in v3, as you proposed.
But, mlx5 PMD does not use these ones (possibly, I'll refactor some occurrences).
BTW, I did not find any rte_mbuf_set_xxxx() implemented - did I miss something?
Should we start with metadata field specifically? 😊
>
> Anyway, in your case, if you need to use vector instructions in the PMD, I
> guess you will directly use the offset.
Right.
>
> > > > Also, metadata is not feature of mbuf. It should have rte_flow prefix.
> > >
> > > Yes, sure. The example derives from a test I've done, and I forgot
> > > to change it.
> > >
> > >
> > > > > /**
> > > > > * Get metadata dynamic field value in mbuf.
> > > > > *
> > > > > * rte_flow_dynf_metadata_register() must have been called first.
> > > > > */
> > > > > __rte_experimental
> > > > > static inline int rte_mbuf_dyn_metadata_get(const struct rte_mbuf
> *m,
> > > > > uint32_t *metadata) {
> > > > > if ((m->ol_flags & rte_flow_dynf_metadata_mask) == 0)
> > > > > return -1;
> > > > What if metadata is 0xFFFFFFFF ?
> > > > The checking of availability might embrace larger code block, so
> > > > this might be not the best place to check availability.
> > > >
> > > > > *metadata = *RTE_MBUF_DYNFIELD(m,
> > > rte_flow_dynf_metadata_offs,
> > > > > uint32_t *);
> > > > > return 0;
> > > > > }
> > > > >
> > > > > /**
> > > > > * Delete the metadata dynamic flag in mbuf.
> > > > > *
> > > > > * rte_flow_dynf_metadata_register() must have been called first.
> > > > > */
> > > > > __rte_experimental
> > > > > static inline void rte_mbuf_dyn_metadata_del(struct rte_mbuf *m) {
> > > > > m->ol_flags &= ~rte_flow_dynf_metadata_mask; }
> > > > >
> > > > Sorry, I do not see the practical usecase for these helpers. In my
> > > > opinion it
> > > is just some kind of obscuration.
> > > > They do replace the very simple code and introduce some risk of
> > > performance impact.
> > > >
> > > > >
> > > > > > /*
> > > > > > * Definition of a single action.
> > > > > > *
> > > > > > @@ -2533,6 +2588,32 @@ enum rte_flow_conv_op { };
> > > > > >
> > > > > > /**
> > > > > > + * Check if mbuf dynamic field for metadata is registered.
> > > > > > + *
> > > > > > + * @return
> > > > > > + * True if registered, false otherwise.
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +static inline int
> > > > > > +rte_flow_dynf_metadata_avail(void) {
> > > > > > + return !!rte_flow_dynf_metadata_mask; }
> > > > >
> > > > > _registered() instead of _avail() ?
> > > > Accepted, sounds better.
> >
> > Hmm, I changed my opinion - we already have
> > rte_flow_dynf_metadata_register(void). Is it OK to have
> > rte_flow_dynf_metadata_registerED(void) ?
> > It would be easy to mistype.
>
> what about xxx_is_registered() ?
It seems to be not much better, sorry ☹
> if you feel it's too long, ok, let's keep avail()
Actually, I tend to complete with "_available", but it is really long.
> >
> > > >
> > > > >
> > > > > > +
> > > > > > +/**
> > > > > > + * Register mbuf dynamic field and flag for metadata.
> > > > > > + *
> > > > > > + * This function must be called prior to use SET_META action
> > > > > > +in order to
> > > > > > + * register the dynamic mbuf field. Otherwise, the data
> > > > > > +cannot be delivered to
> > > > > > + * application.
> > > > > > + *
> > > > > > + * @return
> > > > > > + * 0 on success, a negative errno value otherwise and rte_errno is
> > > set.
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +int
> > > > > > +rte_flow_dynf_metadata_register(void);
> > > > > > +
> > > > > > +/**
> > > > > > * Check whether a flow rule can be created on a given port.
> > > > > > *
> > > > > > * The flow rule is validated for correctness and whether it
> > > > > > could be accepted diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > > > b/lib/librte_mbuf/rte_mbuf_dyn.h index 6e2c816..4ff33ac 100644
> > > > > > --- a/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > > > +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > > > @@ -160,4 +160,12 @@ int rte_mbuf_dynflag_lookup(const char
> > > *name,
> > > > > > */
> > > > > > #define RTE_MBUF_DYNFIELD(m, offset, type)
> > > > > > ((type)((uintptr_t)(m)
> > > > > > +
> > > > > > (offset)))
> > > > > >
> > > > > > +/**
> > > > > > + * Flow metadata dynamic field definitions.
> > > > > > + */
> > > > > > +#define MBUF_DYNF_METADATA_NAME "flow-metadata"
> > > > > > +#define MBUF_DYNF_METADATA_SIZE sizeof(uint32_t) #define
> > > > > > +MBUF_DYNF_METADATA_ALIGN __alignof__(uint32_t) #define
> > > > > > +MBUF_DYNF_METADATA_FLAGS 0
> > > > >
> > > > > If this flag is only to be used in rte_flow, it can stay in rte_flow.
> > > > > The name should follow the function name conventions, I suggest
> > > > > "rte_flow_metadata".
> > > >
> > > > The definitions:
> > > > MBUF_DYNF_METADATA_NAME,
> > > > MBUF_DYNF_METADATA_SIZE,
> > > > MBUF_DYNF_METADATA_ALIGN
> > > > are global. rte_flow proposes only a minimal set to check and
> > > > access the metadata. By knowing the field names applications would
> > > > have the more flexibility in processing the fields, for example it
> > > > allows to optimize the handling of multiple dynamic fields . The
> > > > definition of metadata size allows to generate optimized code:
> > > > #if MBUF_DYNF_METADATA_SIZE == sizeof(uint32)
> > > > *RTE_MBUF_DYNFIELD(m) = get_metadata_32bit() #else
> > > > *RTE_MBUF_DYNFIELD(m) = get_metadata_64bit() #endif
> > >
> > > I don't see any reason why the same dynamic field could have
> > > different sizes, I even think it could be dangerous. Your accessors
> > > suppose that the metadata is a uint32_t. Having a compile-time
> > > option for that does not look desirable.
> >
> > I tried to provide maximal flexibility and It was just an example of
> > the thing we could do with global definitions. If you think we do not
> > need it - OK, let's do things simpler.
> >
> > >
> > > Just a side note: we have to take care when adding a new *public*
> > > dynamic field that it won't change in the future: the accessors are
> > > macros or static inline functions, so they are embedded in the binaries.
> > > This is probably something we should discuss and may not be when
> > > updating the dpdk (as shared lib).
> >
> > Yes, agree, defines just will not work correct in correct way and even break
> an ABI.
> > As we decided - global metadata defines MBUF_DYNF_METADATA_xxxx
> should
> > be removed.
> >
> > >
> > > > MBUF_DYNF_METADATA_FLAGS flag is not used by rte_flow, this flag
> > > > is related exclusively to dynamic mbuf " Reserved for future use, must
> be 0".
> > > > Would you like to drop this definition?
> > > >
> > > > >
> > > > > If the flag is going to be used in several places in dpdk
> > > > > (rte_flow, pmd, app, ...), I wonder if it shouldn't be defined
> > > > > it in rte_mbuf_dyn.c. I
> > > mean:
> > > > >
> > > > > ====
> > > > > /* rte_mbuf_dyn.c */
> > > > > const struct rte_mbuf_dynfield rte_mbuf_dynfield_flow_metadata = {
> > > > > ...
> > > > > };
> > > > In this case we would make this descriptor global.
> > > > It is no needed, because there Is no supposed any usage, but by
> > > > rte_flow_dynf_metadata_register() only. The
> > >
> > > Yes, in my example I wasn't sure it was going to be private to
> > > rte_flow (see "If the flag is going to be used in several places in
> > > dpdk (rte_flow, pmd, app, ...)").
> > >
> > > So yes, I agree the struct should remain private.
> > OK.
> >
> > >
> > >
> > > > > int rte_mbuf_dynfield_flow_metadata_offset = -1; const struct
> > > > > rte_mbuf_dynflag rte_mbuf_dynflag_flow_metadata = {
> > > > > ...
> > > > > };
> > > > > int rte_mbuf_dynflag_flow_metadata_bitnum = -1;
> > > > >
> > > > > int rte_mbuf_dyn_flow_metadata_register(void)
> > > > > {
> > > > > ...
> > > > > }
> > > > >
> > > > > /* rte_mbuf_dyn.h */
> > > > > extern const struct rte_mbuf_dynfield
> > > > > rte_mbuf_dynfield_flow_metadata; extern int
> > > > > rte_mbuf_dynfield_flow_metadata_offset;
> > > > > extern const struct rte_mbuf_dynflag
> > > > > rte_mbuf_dynflag_flow_metadata; extern int
> > > > > rte_mbuf_dynflag_flow_metadata_bitnum;
> > > > >
> > > > > ...helpers to set/get metadata...
> > > > > ===
> > > > >
> > > > > Centralizing the definitions of non-private dynamic fields/flags
> > > > > in rte_mbuf_dyn may help other people to reuse a field that is
> > > > > well described if it match their use-case.
> > > >
> > > > Yes, centralizing is important, that's why MBUF_DYNF_METADATA_xxx
> > > > placed in rte_mbuf_dyn.h. Do you think we should share the
> > > > descriptors
> > > either?
> > > > I have no idea why someone (but rte_flow_dynf_metadata_register())
> > > > might register metadata field directly.
> > >
> > > If the field is private to rte_flow, yes, there is no need to share
> > > the "struct rte_mbuf_dynfield". Even the
> > > rte_flow_dynf_metadata_register() could be marked as internal, right?
> > rte_flow_dynf_metadata_register() is intended to be called by application.
> > Some applications might wish to engage metadata feature, some ones -
> not.
> >
> > >
> > > One more question: I see the registration is done by
> > > parse_vc_action_set_meta(). My understanding is that this function
> > > is not in datapath, and is called when configuring rte_flow. Do you
> confirm?
> > Rather it is called to configure application in general. If user sets
> > metadata (by issuing the appropriate command) it is assumed he/she
> > would like the metadata on Rx side either. This is just for test
> > purposes and it is not brilliant example of
> rte_flow_dynf_metadata_register() use case.
> >
> >
> > >
> > > > > In your case, what is carried by metadata? Could it be reused by
> > > > > others? I think some more description is needed.
> > > > In my case, metadata is just an opaque rte_flow related 32-bit
> > > > unsigned value provided by
> > > > mlx5 hardrware in rx datapath. I have no guess whether someone
> > > > wishes
> > > to reuse.
> > >
> > > What is the user supposed to do with this value? If it is
> > > hw-specific data, I think the name of the mbuf field should include
> > > "MLX", and it should be described.
> >
> > Metadata are not HW specific at all - they neither control HW nor are
> > produced by HW (abstracting from the fact that the flow engine is
> > implemented in HW).
> > Metadata are some opaque data, it is some kind of link between data
> > path and flow space. With metadata application may provide some per
> > packet information to flow engine and get back some information from
> flow engine.
> > it is generic concept, supposed to be neither HW-related nor vendor
> specific.
>
> ok, understood, it's like a mark or tag.
>
> > > Are these rte_flow actions somehow specific to mellanox drivers ?
> >
> > AFAIK, currently it is going to be supported by mlx5 PMD only, but
> > concept is common and is not vendor specific.
> >
> > >
> > > > Brief summary of your comment (just to make sure I understood your
> > > > proposal in the correct way):
> > > > 1. drop all definitions MBUF_DYNF_METADATA_xxx, leave
> > > > MBUF_DYNF_METADATA_NAME only 2. move the descriptor const
> struct
> > > > rte_mbuf_dynfield desc_offs = {} to rte_mbuf_dyn.c and make it
> > > > global 3. provide helpers to access metadata
> > > >
> > > > [1] and [2] look OK in general. Although I think these ones make the
> > > > code less flexible and restrict the potential compile-time options.
> > > > For now it is rather a theoretical question; if you insist on your
> > > > approach, please let me know, I'll address [1] and [2] and update my
> > > > patch.
> > >
> > > [1] I think the #define only adds an indirection, and I didn't see any
> > > perf constraint here.
> > > [2] My previous comment was surely not clear, sorry. The code can stay
> > > in rte_flow.
> > >
> > > > As for [3] - IMHO, the extra abstraction layer is not useful, and
> > > > might be
> > > even harmful.
> > > > I tend not to complicate the code, at least, for now.
> > >
> > > [3] ok for me
> > >
> > >
> > > Thanks,
> > > Olivier
> >
With best regards, Slava
^ permalink raw reply [relevance 0%]
* [dpdk-dev] DPDK Release Status Meeting 24/10/2019
@ 2019-10-24 11:44 4% Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2019-10-24 11:44 UTC (permalink / raw)
To: dpdk-dev; +Cc: Thomas Monjalon
Minutes 24 October 2019
-----------------------
Agenda:
* Release Dates
* Subtrees
* Opens
Participants:
* Debian/Microsoft
* Intel
* Marvell
* Mellanox
* NXP
* Red Hat
Release Dates
-------------
* v19.11 dates:
* Integration/Merge/RC1 date pushed to *Sunday 27 October*
* RC2 Friday 8 November
* RC3 Friday 15 November
* Release Friday 22 November
* Proposed dates for 20.02 release on the mail list, please comment
https://mails.dpdk.org/archives/dev/2019-September/143311.html
These dates may be affected by the 19.11 delays, please review again.
Subtrees
--------
* main
* Some features/patches will be pushed to rc2, rc1 won't be feature complete
* Some optimization/good-to-have patches can be postponed to the next release,
to reduce the stress in the current release, which is big, already late and
the start of a new ABI policy.
* eal, ABI, LTO patches are on the queue for review
* Better to get following eal patchset for rc1, more review requested
https://patchwork.dpdk.org/project/dpdk/list/?series=7024
* Dynamic mbuf, RIB & FIB libs may be merged for rc1
* next-net
* ~50 patches in backlog
* Planning to get as much ethdev changes for rc1, remaining ones:
* hairpin, Rx offloads,
* planning to get to rc1 if all goes OK
* extend flow metadata,
* depends on dynamic mbuf; the patch may go in depending on whether
dynamic mbuf goes into the main repo or not
* flow tag, LRO
* won't go in unless reviewed/acked on time for rc1
* PMD and testpmd patches will be merged on a best-effort basis for rc1, the
rest will be pushed to rc2
* A new PMD sent after v1 deadline, ionic, postponed to next release
* next-net-crypto
* Pull request has been sent, some patches are postponed to rc2
* security library, CPU Crypto change is not concluded in Tech Board;
since it is a library change, if not merged for rc1 it will be postponed
to the next release.
* ipsec-secgw changes are trending to be postponed to next release
* armv8 crypto PMD discussion is still going on in the mail list
https://mails.dpdk.org/archives/dev/2019-October/146813.html
* next-net-eventdev
* Some more patches are in the tree waiting for pull
* l2fwd-event app can be pushed to rc2
(update the existing l3fwd for eventdev pushed to next release)
* next-net-virtio
* Will merge Marvin's vhost patch for rc1
* Maxim's (own) Virtio vDPA set pushed to next release
* next-net-intel
* ipn3ke PMD still at risk for rc1, can be considered for rc2
* LTS
* v18.11.3-rc2 under test
* Target release date is tomorrow (25 October)
Opens
-----
* Luca reported a build error on master, from bnx2x
https://build.opensuse.org/package/live_build_log/home:bluca:dpdk/dpdk/Debian_10/aarch64
DPDK Release Status Meetings
============================
The DPDK Release Status Meeting is intended for DPDK Committers to discuss
the status of the master tree and sub-trees, and for project managers to
track progress or milestone dates.
The meeting occurs on Thursdays at 8:30 UTC. If you wish to attend just
send an email to "John McNamara <john.mcnamara@intel.com>" for the invite.
* Re: [dpdk-dev] [PATCH v9 00/13] vhost packed ring performance optimization
2019-10-24 16:08 3% ` [dpdk-dev] [PATCH v9 " Marvin Liu
@ 2019-10-24 10:18 0% ` Maxime Coquelin
0 siblings, 0 replies; 200+ results
From: Maxime Coquelin @ 2019-10-24 10:18 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie, zhihong.wang, stephen, gavin.hu; +Cc: dev
On 10/24/19 6:08 PM, Marvin Liu wrote:
> Packed ring has a more compact ring format and thus can significantly
> reduce the number of cache misses. It can lead to better performance.
> This has been proved in the virtio user driver; on a normal E5 Xeon CPU,
> single core performance can rise by 12%.
>
> http://mails.dpdk.org/archives/dev/2018-April/095470.html
>
> However, vhost performance with the packed ring was decreased.
> Through analysis, most of the extra cost was from calculating each
> descriptor flag, which depends on the ring wrap counter. Moreover, both
> frontend and backend need to write same descriptors which will cause
> cache contention. Especially when doing vhost enqueue function, virtio
> refill packed ring function may write same cache line when vhost doing
> enqueue function. This kind of extra cache cost will reduce the benefit
> of reducing cache misses.
>
> For optimizing vhost packed ring performance, vhost enqueue and dequeue
> function will be split into fast and normal path.
>
> Several methods will be taken in fast path:
> Handle descriptors in one cache line by batch.
> Split loop function into more pieces and unroll them.
> Prerequisite check whether I/O space can be copied directly into mbuf
> space and vice versa.
> Prerequisite check whether descriptor mapping is successful.
> Distinguish vhost used ring update function by enqueue and dequeue
> function.
> Buffer dequeue used descriptors as many as possible.
> Update enqueue used descriptors by cache line.
>
> After all these methods are done, single core vhost PvP performance with
> 64B packets on Xeon 8180 can be boosted by 35%.
>
> v9:
> - Fix clang build error
>
> v8:
> - Allocate mbuf by virtio_dev_pktmbuf_alloc
>
> v7:
> - Rebase code
> - Rename unroll macro and definitions
> - Calculate flags when doing single dequeue
>
> v6:
> - Fix dequeue zcopy result check
>
> v5:
> - Remove disable sw prefetch as performance impact is small
> - Change unroll pragma macro format
> - Rename shadow counter elements names
> - Clean dequeue update check condition
> - Add inline functions replace of duplicated code
> - Unify code style
>
> v4:
> - Support meson build
> - Remove memory region cache for no clear performance gain and ABI break
> - Not assume ring size is power of two
>
> v3:
> - Check available index overflow
> - Remove dequeue remained descs number check
> - Remove changes in split ring datapath
> - Call memory write barriers once when updating used flags
> - Rename some functions and macros
> - Code style optimization
>
> v2:
> - Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
> - Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
> - Optimize dequeue used ring update when in_order negotiated
>
>
> Marvin Liu (13):
> vhost: add packed ring indexes increasing function
> vhost: add packed ring single enqueue
> vhost: try to unroll for each loop
> vhost: add packed ring batch enqueue
> vhost: add packed ring single dequeue
> vhost: add packed ring batch dequeue
> vhost: flush enqueue updates by cacheline
> vhost: flush batched enqueue descs directly
> vhost: buffer packed ring dequeue updates
> vhost: optimize packed ring enqueue
> vhost: add packed ring zcopy batch and single dequeue
> vhost: optimize packed ring dequeue
> vhost: optimize packed ring dequeue when in-order
>
> lib/librte_vhost/Makefile | 18 +
> lib/librte_vhost/meson.build | 7 +
> lib/librte_vhost/vhost.h | 57 ++
> lib/librte_vhost/virtio_net.c | 948 +++++++++++++++++++++++++++-------
> 4 files changed, 837 insertions(+), 193 deletions(-)
>
Applied to dpdk-next-virtio/master.
Thanks,
Maxime
* [dpdk-dev] [PATCH v5 09/10] build: change ABI version to 20.0
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (8 preceding siblings ...)
2019-10-24 9:46 3% ` [dpdk-dev] [PATCH v5 08/10] drivers/octeontx: add missing public symbol Anatoly Burakov
@ 2019-10-24 9:46 2% ` Anatoly Burakov
2019-10-24 9:46 23% ` [dpdk-dev] [PATCH v5 10/10] buildtools: add ABI versioning check script Anatoly Burakov
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Pawel Modrak, Nicolas Chautru, Hemant Agrawal, Sachin Saxena,
Rosen Xu, Stephen Hemminger, Anoob Joseph, Tomasz Duszynski,
Liron Himi, Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru,
Lee Daly, Fiona Trahe, Ashish Gupta, Sunila Sahu, Declan Doherty,
Pablo de Lara, Gagandeep Singh, Ravi Kumar, Akhil Goyal,
Michael Shamis, Nagadheeraj Rottela, Srikanth Jampala, Fan Zhang,
Jay Zhou, Nipun Gupta, Mattias Rönnblom, Pavan Nikhilesh,
Liang Ma, Peter Mccarthy, Harry van Haaren, Artem V. Andreev,
Andrew Rybchenko, Olivier Matz, Gage Eads, John W. Linville,
Xiaolong Ye, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Igor Russkikh, Pavel Belous, Allain Legacy, Matt Peters,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Chas Williams, Rahul Lakkireddy, Wenzhuo Lu, Marcin Wojtas,
Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Xiao Wang, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Wei Hu (Xavier), Min Hu (Connor),
Yisen Zhuang, Beilei Xing, Jingjing Wu, Qiming Yang,
Konstantin Ananyev, Ferruh Yigit, Shijith Thotton,
Srisivasubramanian Srinivasan, Jakub Grajciar, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Zyta Szpak,
K. Y. Srinivasan, Haiyang Zhang, Rastislav Cernay, Jan Remes,
Alejandro Lucero, Tetsuya Mukawa, Kiran Kumar K,
Bruce Richardson, Jasvinder Singh, Cristian Dumitrescu,
Keith Wiles, Maciej Czekaj, Maxime Coquelin, Tiwei Bie,
Zhihong Wang, Yong Wang, Tianfei zhang, Xiaoyun Li, Satha Rao,
Shreyansh Jain, David Hunt, Byron Marohn, Yipeng Wang,
Thomas Monjalon, Bernard Iremonger, Jiayu Hu, Sameh Gobriel,
Reshma Pattan, Vladimir Medvedkin, Honnappa Nagarahalli,
Kevin Laatz, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, ray.kinsella, david.marchand
From: Pawel Modrak <pawelx.modrak@intel.com>
Merge all versions in linker version script files to DPDK_20.0.
This commit was generated by running the following command:
:~/DPDK$ buildtools/update-abi.sh 20.0
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
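For illustration, the consolidation done by the update-abi.sh script collapses the per-release version nodes of each linker version script into a single DPDK_20.0 node. A hypothetical minimal before/after (the rte_foo_* symbol names are made up, not symbols from this patch):

```
/* before: one node per release, chained by inheritance */
DPDK_17.08 {
	global:
	rte_foo_create;
	local: *;
};

DPDK_18.11 {
	global:
	rte_foo_destroy;
} DPDK_17.08;

/* after: a single consolidated node with sorted symbols */
DPDK_20.0 {
	global:
	rte_foo_create;
	rte_foo_destroy;
	local: *;
};
```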
.../rte_pmd_bbdev_fpga_lte_fec_version.map | 8 +-
.../null/rte_pmd_bbdev_null_version.map | 2 +-
.../rte_pmd_bbdev_turbo_sw_version.map | 2 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 115 +++----
drivers/bus/fslmc/rte_bus_fslmc_version.map | 154 ++++-----
drivers/bus/ifpga/rte_bus_ifpga_version.map | 14 +-
drivers/bus/pci/rte_bus_pci_version.map | 2 +-
drivers/bus/vdev/rte_bus_vdev_version.map | 12 +-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 12 +-
drivers/common/cpt/rte_common_cpt_version.map | 4 +-
.../common/dpaax/rte_common_dpaax_version.map | 4 +-
.../common/mvep/rte_common_mvep_version.map | 6 +-
.../octeontx/rte_common_octeontx_version.map | 6 +-
.../rte_common_octeontx2_version.map | 16 +-
.../compress/isal/rte_pmd_isal_version.map | 2 +-
.../rte_pmd_octeontx_compress_version.map | 2 +-
drivers/compress/qat/rte_pmd_qat_version.map | 2 +-
.../compress/zlib/rte_pmd_zlib_version.map | 2 +-
.../aesni_gcm/rte_pmd_aesni_gcm_version.map | 2 +-
.../aesni_mb/rte_pmd_aesni_mb_version.map | 2 +-
.../crypto/armv8/rte_pmd_armv8_version.map | 2 +-
.../caam_jr/rte_pmd_caam_jr_version.map | 3 +-
drivers/crypto/ccp/rte_pmd_ccp_version.map | 3 +-
.../dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 10 +-
.../dpaa_sec/rte_pmd_dpaa_sec_version.map | 10 +-
.../crypto/kasumi/rte_pmd_kasumi_version.map | 2 +-
.../crypto/mvsam/rte_pmd_mvsam_version.map | 2 +-
.../crypto/nitrox/rte_pmd_nitrox_version.map | 2 +-
.../null/rte_pmd_null_crypto_version.map | 2 +-
.../rte_pmd_octeontx_crypto_version.map | 3 +-
.../openssl/rte_pmd_openssl_version.map | 2 +-
.../rte_pmd_crypto_scheduler_version.map | 19 +-
.../crypto/snow3g/rte_pmd_snow3g_version.map | 2 +-
.../virtio/rte_pmd_virtio_crypto_version.map | 2 +-
drivers/crypto/zuc/rte_pmd_zuc_version.map | 2 +-
.../event/dpaa/rte_pmd_dpaa_event_version.map | 3 +-
.../dpaa2/rte_pmd_dpaa2_event_version.map | 2 +-
.../event/dsw/rte_pmd_dsw_event_version.map | 2 +-
.../rte_pmd_octeontx_event_version.map | 2 +-
.../rte_pmd_octeontx2_event_version.map | 3 +-
.../event/opdl/rte_pmd_opdl_event_version.map | 2 +-
.../rte_pmd_skeleton_event_version.map | 3 +-
drivers/event/sw/rte_pmd_sw_event_version.map | 2 +-
.../bucket/rte_mempool_bucket_version.map | 3 +-
.../mempool/dpaa/rte_mempool_dpaa_version.map | 2 +-
.../dpaa2/rte_mempool_dpaa2_version.map | 12 +-
.../octeontx/rte_mempool_octeontx_version.map | 2 +-
.../rte_mempool_octeontx2_version.map | 4 +-
.../mempool/ring/rte_mempool_ring_version.map | 3 +-
.../stack/rte_mempool_stack_version.map | 3 +-
.../af_packet/rte_pmd_af_packet_version.map | 3 +-
drivers/net/af_xdp/rte_pmd_af_xdp_version.map | 2 +-
drivers/net/ark/rte_pmd_ark_version.map | 5 +-
.../net/atlantic/rte_pmd_atlantic_version.map | 4 +-
drivers/net/avp/rte_pmd_avp_version.map | 2 +-
drivers/net/axgbe/rte_pmd_axgbe_version.map | 2 +-
drivers/net/bnx2x/rte_pmd_bnx2x_version.map | 3 +-
drivers/net/bnxt/rte_pmd_bnxt_version.map | 4 +-
drivers/net/bonding/rte_pmd_bond_version.map | 47 +--
drivers/net/cxgbe/rte_pmd_cxgbe_version.map | 3 +-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 11 +-
drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +-
drivers/net/e1000/rte_pmd_e1000_version.map | 3 +-
drivers/net/ena/rte_pmd_ena_version.map | 3 +-
drivers/net/enetc/rte_pmd_enetc_version.map | 3 +-
drivers/net/enic/rte_pmd_enic_version.map | 3 +-
.../net/failsafe/rte_pmd_failsafe_version.map | 3 +-
drivers/net/fm10k/rte_pmd_fm10k_version.map | 3 +-
drivers/net/hinic/rte_pmd_hinic_version.map | 3 +-
drivers/net/hns3/rte_pmd_hns3_version.map | 4 +-
drivers/net/i40e/rte_pmd_i40e_version.map | 65 ++--
drivers/net/iavf/rte_pmd_iavf_version.map | 3 +-
drivers/net/ice/rte_pmd_ice_version.map | 3 +-
drivers/net/ifc/rte_pmd_ifc_version.map | 3 +-
drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 3 +-
drivers/net/ixgbe/rte_pmd_ixgbe_version.map | 62 ++--
drivers/net/kni/rte_pmd_kni_version.map | 3 +-
.../net/liquidio/rte_pmd_liquidio_version.map | 3 +-
drivers/net/memif/rte_pmd_memif_version.map | 5 +-
drivers/net/mlx4/rte_pmd_mlx4_version.map | 3 +-
drivers/net/mlx5/rte_pmd_mlx5_version.map | 2 +-
drivers/net/mvneta/rte_pmd_mvneta_version.map | 2 +-
drivers/net/mvpp2/rte_pmd_mvpp2_version.map | 2 +-
drivers/net/netvsc/rte_pmd_netvsc_version.map | 4 +-
drivers/net/nfb/rte_pmd_nfb_version.map | 3 +-
drivers/net/nfp/rte_pmd_nfp_version.map | 2 +-
drivers/net/null/rte_pmd_null_version.map | 3 +-
.../net/octeontx/rte_pmd_octeontx_version.map | 10 +-
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +-
drivers/net/pcap/rte_pmd_pcap_version.map | 3 +-
drivers/net/qede/rte_pmd_qede_version.map | 3 +-
drivers/net/ring/rte_pmd_ring_version.map | 10 +-
drivers/net/sfc/rte_pmd_sfc_version.map | 3 +-
.../net/softnic/rte_pmd_softnic_version.map | 2 +-
.../net/szedata2/rte_pmd_szedata2_version.map | 2 +-
drivers/net/tap/rte_pmd_tap_version.map | 3 +-
.../net/thunderx/rte_pmd_thunderx_version.map | 3 +-
.../rte_pmd_vdev_netvsc_version.map | 3 +-
drivers/net/vhost/rte_pmd_vhost_version.map | 11 +-
drivers/net/virtio/rte_pmd_virtio_version.map | 3 +-
.../net/vmxnet3/rte_pmd_vmxnet3_version.map | 3 +-
.../rte_rawdev_dpaa2_cmdif_version.map | 3 +-
.../rte_rawdev_dpaa2_qdma_version.map | 4 +-
.../raw/ifpga/rte_rawdev_ifpga_version.map | 3 +-
drivers/raw/ioat/rte_rawdev_ioat_version.map | 3 +-
drivers/raw/ntb/rte_rawdev_ntb_version.map | 5 +-
.../rte_rawdev_octeontx2_dma_version.map | 3 +-
.../skeleton/rte_rawdev_skeleton_version.map | 3 +-
lib/librte_acl/rte_acl_version.map | 2 +-
lib/librte_bbdev/rte_bbdev_version.map | 4 +
.../rte_bitratestats_version.map | 2 +-
lib/librte_bpf/rte_bpf_version.map | 4 +
lib/librte_cfgfile/rte_cfgfile_version.map | 34 +-
lib/librte_cmdline/rte_cmdline_version.map | 10 +-
.../rte_compressdev_version.map | 4 +
.../rte_cryptodev_version.map | 102 ++----
.../rte_distributor_version.map | 4 +-
lib/librte_eal/rte_eal_version.map | 310 +++++++-----------
lib/librte_efd/rte_efd_version.map | 2 +-
lib/librte_ethdev/rte_ethdev_version.map | 160 +++------
lib/librte_eventdev/rte_eventdev_version.map | 130 +++-----
.../rte_flow_classify_version.map | 4 +
lib/librte_gro/rte_gro_version.map | 2 +-
lib/librte_gso/rte_gso_version.map | 2 +-
lib/librte_hash/rte_hash_version.map | 43 +--
lib/librte_ip_frag/rte_ip_frag_version.map | 10 +-
lib/librte_ipsec/rte_ipsec_version.map | 4 +
lib/librte_jobstats/rte_jobstats_version.map | 10 +-
lib/librte_kni/rte_kni_version.map | 2 +-
lib/librte_kvargs/rte_kvargs_version.map | 4 +-
.../rte_latencystats_version.map | 2 +-
lib/librte_lpm/rte_lpm_version.map | 39 +--
lib/librte_mbuf/rte_mbuf_version.map | 49 +--
lib/librte_member/rte_member_version.map | 2 +-
lib/librte_mempool/rte_mempool_version.map | 44 +--
lib/librte_meter/rte_meter_version.map | 13 +-
lib/librte_metrics/rte_metrics_version.map | 2 +-
lib/librte_net/rte_net_version.map | 23 +-
lib/librte_pci/rte_pci_version.map | 2 +-
lib/librte_pdump/rte_pdump_version.map | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 36 +-
lib/librte_port/rte_port_version.map | 64 +---
lib/librte_power/rte_power_version.map | 24 +-
lib/librte_rawdev/rte_rawdev_version.map | 4 +-
lib/librte_rcu/rte_rcu_version.map | 4 +
lib/librte_reorder/rte_reorder_version.map | 8 +-
lib/librte_ring/rte_ring_version.map | 10 +-
lib/librte_sched/rte_sched_version.map | 14 +-
lib/librte_security/rte_security_version.map | 2 +-
lib/librte_stack/rte_stack_version.map | 4 +
lib/librte_table/rte_table_version.map | 2 +-
.../rte_telemetry_version.map | 4 +
lib/librte_timer/rte_timer_version.map | 12 +-
lib/librte_vhost/rte_vhost_version.map | 52 +--
154 files changed, 724 insertions(+), 1406 deletions(-)
diff --git a/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map b/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
index f64b0f9c27..6bcea2cc7f 100644
--- a/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
+++ b/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
@@ -1,10 +1,10 @@
-DPDK_19.08 {
- local: *;
+DPDK_20.0 {
+ local: *;
};
EXPERIMENTAL {
- global:
+ global:
- fpga_lte_fec_configure;
+ fpga_lte_fec_configure;
};
diff --git a/drivers/baseband/null/rte_pmd_bbdev_null_version.map b/drivers/baseband/null/rte_pmd_bbdev_null_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/baseband/null/rte_pmd_bbdev_null_version.map
+++ b/drivers/baseband/null/rte_pmd_bbdev_null_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map b/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
+++ b/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index a221522c23..9ab8c76eef 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
bman_acquire;
@@ -8,127 +8,94 @@ DPDK_17.11 {
bman_new_pool;
bman_query_free_buffers;
bman_release;
+ bman_thread_irq;
+ dpaa_logtype_eventdev;
dpaa_logtype_mempool;
dpaa_logtype_pmd;
dpaa_netcfg;
+ dpaa_svr_family;
fman_ccsr_map_fd;
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
fman_if_clear_mac_addr;
fman_if_disable_rx;
- fman_if_enable_rx;
fman_if_discard_rx_errors;
- fman_if_get_fc_threshold;
+ fman_if_enable_rx;
fman_if_get_fc_quanta;
+ fman_if_get_fc_threshold;
fman_if_get_fdoff;
+ fman_if_get_sg_enable;
fman_if_loopback_disable;
fman_if_loopback_enable;
fman_if_promiscuous_disable;
fman_if_promiscuous_enable;
fman_if_reset_mcast_filter_table;
fman_if_set_bp;
- fman_if_set_fc_threshold;
fman_if_set_fc_quanta;
+ fman_if_set_fc_threshold;
fman_if_set_fdoff;
fman_if_set_ic_params;
fman_if_set_maxfrm;
fman_if_set_mcast_filter_table;
+ fman_if_set_sg;
fman_if_stats_get;
fman_if_stats_get_all;
fman_if_stats_reset;
fman_ip_rev;
+ fsl_qman_fq_portal_create;
netcfg_acquire;
netcfg_release;
of_find_compatible_node;
+ of_get_mac_address;
of_get_property;
+ per_lcore_dpaa_io;
+ per_lcore_held_bufs;
qm_channel_caam;
+ qm_channel_pool1;
+ qman_alloc_cgrid_range;
+ qman_alloc_pool_range;
+ qman_clear_irq;
+ qman_create_cgr;
qman_create_fq;
+ qman_dca_index;
+ qman_delete_cgr;
qman_dequeue;
qman_dqrr_consume;
qman_enqueue;
qman_enqueue_multi;
+ qman_enqueue_multi_fq;
qman_fq_fqid;
+ qman_fq_portal_irqsource_add;
+ qman_fq_portal_irqsource_remove;
+ qman_fq_portal_thread_irq;
qman_fq_state;
qman_global_init;
qman_init_fq;
- qman_poll_dqrr;
- qman_query_fq_np;
- qman_set_vdq;
- qman_reserve_fqid_range;
- qman_volatile_dequeue;
- rte_dpaa_driver_register;
- rte_dpaa_driver_unregister;
- rte_dpaa_mem_ptov;
- rte_dpaa_portal_init;
-
- local: *;
-};
-
-DPDK_18.02 {
- global:
-
- dpaa_logtype_eventdev;
- dpaa_svr_family;
- per_lcore_dpaa_io;
- per_lcore_held_bufs;
- qm_channel_pool1;
- qman_alloc_cgrid_range;
- qman_alloc_pool_range;
- qman_create_cgr;
- qman_dca_index;
- qman_delete_cgr;
- qman_enqueue_multi_fq;
+ qman_irqsource_add;
+ qman_irqsource_remove;
qman_modify_cgr;
qman_oos_fq;
+ qman_poll_dqrr;
qman_portal_dequeue;
qman_portal_poll_rx;
qman_query_fq_frm_cnt;
+ qman_query_fq_np;
qman_release_cgrid_range;
+ qman_reserve_fqid_range;
qman_retire_fq;
+ qman_set_fq_lookup_table;
+ qman_set_vdq;
qman_static_dequeue_add;
- rte_dpaa_portal_fq_close;
- rte_dpaa_portal_fq_init;
-
-} DPDK_17.11;
-
-DPDK_18.08 {
- global:
-
- fman_if_get_sg_enable;
- fman_if_set_sg;
- of_get_mac_address;
-
-} DPDK_18.02;
-
-DPDK_18.11 {
- global:
-
- bman_thread_irq;
- fman_if_get_sg_enable;
- fman_if_set_sg;
- qman_clear_irq;
-
- qman_irqsource_add;
- qman_irqsource_remove;
qman_thread_fd;
qman_thread_irq;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- qman_set_fq_lookup_table;
-
-} DPDK_18.11;
-
-DPDK_19.11 {
- global:
-
- fsl_qman_fq_portal_create;
- qman_fq_portal_irqsource_add;
- qman_fq_portal_irqsource_remove;
- qman_fq_portal_thread_irq;
-
-} DPDK_19.05;
+ qman_volatile_dequeue;
+ rte_dpaa_driver_register;
+ rte_dpaa_driver_unregister;
+ rte_dpaa_mem_ptov;
+ rte_dpaa_portal_fq_close;
+ rte_dpaa_portal_fq_init;
+ rte_dpaa_portal_init;
+
+ local: *;
+};
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 4da787236b..fe45575046 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,32 +1,67 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
+ dpaa2_affine_qbman_ethrx_swp;
dpaa2_affine_qbman_swp;
dpaa2_alloc_dpbp_dev;
dpaa2_alloc_dq_storage;
+ dpaa2_dpbp_supported;
+ dpaa2_dqrr_size;
+ dpaa2_eqcr_size;
dpaa2_free_dpbp_dev;
dpaa2_free_dq_storage;
+ dpaa2_free_eq_descriptors;
+ dpaa2_get_qbman_swp;
+ dpaa2_io_portal;
+ dpaa2_svr_family;
+ dpaa2_virt_mode;
dpbp_disable;
dpbp_enable;
dpbp_get_attributes;
dpbp_get_num_free_bufs;
dpbp_open;
dpbp_reset;
+ dpci_get_opr;
+ dpci_set_opr;
+ dpci_set_rx_queue;
+ dpcon_get_attributes;
+ dpcon_open;
+ dpdmai_close;
+ dpdmai_disable;
+ dpdmai_enable;
+ dpdmai_get_attributes;
+ dpdmai_get_rx_queue;
+ dpdmai_get_tx_queue;
+ dpdmai_open;
+ dpdmai_set_rx_queue;
+ dpio_add_static_dequeue_channel;
dpio_close;
dpio_disable;
dpio_enable;
dpio_get_attributes;
dpio_open;
+ dpio_remove_static_dequeue_channel;
dpio_reset;
dpio_set_stashing_destination;
+ mc_get_soc_version;
+ mc_get_version;
mc_send_command;
per_lcore__dpaa2_io;
+ per_lcore_dpaa2_held_bufs;
qbman_check_command_complete;
+ qbman_check_new_result;
qbman_eq_desc_clear;
+ qbman_eq_desc_set_dca;
qbman_eq_desc_set_fq;
qbman_eq_desc_set_no_orp;
+ qbman_eq_desc_set_orp;
qbman_eq_desc_set_qd;
qbman_eq_desc_set_response;
+ qbman_eq_desc_set_token;
+ qbman_fq_query_state;
+ qbman_fq_state_frame_count;
+ qbman_get_dqrr_from_idx;
+ qbman_get_dqrr_idx;
qbman_pull_desc_clear;
qbman_pull_desc_set_fq;
qbman_pull_desc_set_numframes;
@@ -35,112 +70,43 @@ DPDK_17.05 {
qbman_release_desc_set_bpid;
qbman_result_DQ_fd;
qbman_result_DQ_flags;
- qbman_result_has_new_result;
- qbman_swp_acquire;
- qbman_swp_pull;
- qbman_swp_release;
- rte_fslmc_driver_register;
- rte_fslmc_driver_unregister;
- rte_fslmc_vfio_dmamap;
- rte_mcp_ptr_list;
-
- local: *;
-};
-
-DPDK_17.08 {
- global:
-
- dpaa2_io_portal;
- dpaa2_get_qbman_swp;
- dpci_set_rx_queue;
- dpcon_open;
- dpcon_get_attributes;
- dpio_add_static_dequeue_channel;
- dpio_remove_static_dequeue_channel;
- mc_get_soc_version;
- mc_get_version;
- qbman_check_new_result;
- qbman_eq_desc_set_dca;
- qbman_get_dqrr_from_idx;
- qbman_get_dqrr_idx;
qbman_result_DQ_fqd_ctx;
+ qbman_result_DQ_odpid;
+ qbman_result_DQ_seqnum;
qbman_result_SCN_state;
+ qbman_result_eqresp_fd;
+ qbman_result_eqresp_rc;
+ qbman_result_eqresp_rspid;
+ qbman_result_eqresp_set_rspid;
+ qbman_result_has_new_result;
+ qbman_swp_acquire;
qbman_swp_dqrr_consume;
+ qbman_swp_dqrr_idx_consume;
qbman_swp_dqrr_next;
qbman_swp_enqueue_multiple;
qbman_swp_enqueue_multiple_desc;
+ qbman_swp_enqueue_multiple_fd;
qbman_swp_interrupt_clear_status;
+ qbman_swp_prefetch_dqrr_next;
+ qbman_swp_pull;
qbman_swp_push_set;
+ qbman_swp_release;
rte_dpaa2_alloc_dpci_dev;
- rte_fslmc_object_register;
- rte_global_active_dqs_list;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- dpaa2_dpbp_supported;
rte_dpaa2_dev_type;
+ rte_dpaa2_free_dpci_dev;
rte_dpaa2_intr_disable;
rte_dpaa2_intr_enable;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- dpaa2_svr_family;
- dpaa2_virt_mode;
- per_lcore_dpaa2_held_bufs;
- qbman_fq_query_state;
- qbman_fq_state_frame_count;
- qbman_swp_dqrr_idx_consume;
- qbman_swp_prefetch_dqrr_next;
- rte_fslmc_get_device_count;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- dpaa2_affine_qbman_ethrx_swp;
- dpdmai_close;
- dpdmai_disable;
- dpdmai_enable;
- dpdmai_get_attributes;
- dpdmai_get_rx_queue;
- dpdmai_get_tx_queue;
- dpdmai_open;
- dpdmai_set_rx_queue;
- rte_dpaa2_free_dpci_dev;
rte_dpaa2_memsegs;
-
-} DPDK_18.02;
-
-DPDK_18.11 {
- global:
- dpaa2_dqrr_size;
- dpaa2_eqcr_size;
- dpci_get_opr;
- dpci_set_opr;
-
-} DPDK_18.05;
-
-DPDK_19.05 {
- global:
- dpaa2_free_eq_descriptors;
-
- qbman_eq_desc_set_orp;
- qbman_eq_desc_set_token;
- qbman_result_DQ_odpid;
- qbman_result_DQ_seqnum;
- qbman_result_eqresp_fd;
- qbman_result_eqresp_rc;
- qbman_result_eqresp_rspid;
- qbman_result_eqresp_set_rspid;
- qbman_swp_enqueue_multiple_fd;
-} DPDK_18.11;
+ rte_fslmc_driver_register;
+ rte_fslmc_driver_unregister;
+ rte_fslmc_get_device_count;
+ rte_fslmc_object_register;
+ rte_fslmc_vfio_dmamap;
+ rte_global_active_dqs_list;
+ rte_mcp_ptr_list;
+
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/drivers/bus/ifpga/rte_bus_ifpga_version.map b/drivers/bus/ifpga/rte_bus_ifpga_version.map
index 964c9a9c45..05b4a28c1b 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga_version.map
+++ b/drivers/bus/ifpga/rte_bus_ifpga_version.map
@@ -1,17 +1,11 @@
-DPDK_18.05 {
+DPDK_20.0 {
global:
- rte_ifpga_get_integer32_arg;
- rte_ifpga_get_string_arg;
rte_ifpga_driver_register;
rte_ifpga_driver_unregister;
+ rte_ifpga_find_afu_by_name;
+ rte_ifpga_get_integer32_arg;
+ rte_ifpga_get_string_arg;
local: *;
};
-
-DPDK_19.05 {
- global:
-
- rte_ifpga_find_afu_by_name;
-
-} DPDK_18.05;
diff --git a/drivers/bus/pci/rte_bus_pci_version.map b/drivers/bus/pci/rte_bus_pci_version.map
index 27e9c4f101..012d817e14 100644
--- a/drivers/bus/pci/rte_bus_pci_version.map
+++ b/drivers/bus/pci/rte_bus_pci_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_pci_dump;
diff --git a/drivers/bus/vdev/rte_bus_vdev_version.map b/drivers/bus/vdev/rte_bus_vdev_version.map
index 590cf9b437..5abb10ecb0 100644
--- a/drivers/bus/vdev/rte_bus_vdev_version.map
+++ b/drivers/bus/vdev/rte_bus_vdev_version.map
@@ -1,18 +1,12 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
+ rte_vdev_add_custom_scan;
rte_vdev_init;
rte_vdev_register;
+ rte_vdev_remove_custom_scan;
rte_vdev_uninit;
rte_vdev_unregister;
local: *;
};
-
-DPDK_18.02 {
- global:
-
- rte_vdev_add_custom_scan;
- rte_vdev_remove_custom_scan;
-
-} DPDK_17.11;
diff --git a/drivers/bus/vmbus/rte_bus_vmbus_version.map b/drivers/bus/vmbus/rte_bus_vmbus_version.map
index ae231ad329..cbaaebc06c 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus_version.map
+++ b/drivers/bus/vmbus/rte_bus_vmbus_version.map
@@ -1,6 +1,4 @@
-/* SPDX-License-Identifier: BSD-3-Clause */
-
-DPDK_18.08 {
+DPDK_20.0 {
global:
rte_vmbus_chan_close;
@@ -20,6 +18,7 @@ DPDK_18.08 {
rte_vmbus_probe;
rte_vmbus_register;
rte_vmbus_scan;
+ rte_vmbus_set_latency;
rte_vmbus_sub_channel_index;
rte_vmbus_subchan_open;
rte_vmbus_unmap_device;
@@ -27,10 +26,3 @@ DPDK_18.08 {
local: *;
};
-
-DPDK_18.11 {
- global:
-
- rte_vmbus_set_latency;
-
-} DPDK_18.08;
diff --git a/drivers/common/cpt/rte_common_cpt_version.map b/drivers/common/cpt/rte_common_cpt_version.map
index dec614f0de..79fa5751bc 100644
--- a/drivers/common/cpt/rte_common_cpt_version.map
+++ b/drivers/common/cpt/rte_common_cpt_version.map
@@ -1,6 +1,8 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
cpt_pmd_ops_helper_get_mlen_direct_mode;
cpt_pmd_ops_helper_get_mlen_sg_mode;
+
+ local: *;
};
diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
index 8131c9e305..45d62aea9d 100644
--- a/drivers/common/dpaax/rte_common_dpaax_version.map
+++ b/drivers/common/dpaax/rte_common_dpaax_version.map
@@ -1,11 +1,11 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
- dpaax_iova_table_update;
dpaax_iova_table_depopulate;
dpaax_iova_table_dump;
dpaax_iova_table_p;
dpaax_iova_table_populate;
+ dpaax_iova_table_update;
local: *;
};
diff --git a/drivers/common/mvep/rte_common_mvep_version.map b/drivers/common/mvep/rte_common_mvep_version.map
index c71722d79f..030928439d 100644
--- a/drivers/common/mvep/rte_common_mvep_version.map
+++ b/drivers/common/mvep/rte_common_mvep_version.map
@@ -1,6 +1,8 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
- rte_mvep_init;
rte_mvep_deinit;
+ rte_mvep_init;
+
+ local: *;
};
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index a9b3cff9bc..c15fb89112 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,8 +1,10 @@
-DPDK_18.05 {
+DPDK_20.0 {
global:
octeontx_logtype_mbox;
+ octeontx_mbox_send;
octeontx_mbox_set_ram_mbox_base;
octeontx_mbox_set_reg;
- octeontx_mbox_send;
+
+ local: *;
};
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 4400120da0..adad21a2d6 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -1,39 +1,35 @@
-DPDK_19.08 {
+DPDK_20.0 {
global:
otx2_dev_active_vfs;
otx2_dev_fini;
otx2_dev_priv_init;
-
+ otx2_disable_irqs;
+ otx2_intra_dev_get_cfg;
otx2_logtype_base;
otx2_logtype_dpi;
otx2_logtype_mbox;
+ otx2_logtype_nix;
otx2_logtype_npa;
otx2_logtype_npc;
- otx2_logtype_nix;
otx2_logtype_sso;
- otx2_logtype_tm;
otx2_logtype_tim;
-
+ otx2_logtype_tm;
otx2_mbox_alloc_msg_rsp;
otx2_mbox_get_rsp;
otx2_mbox_get_rsp_tmo;
otx2_mbox_id2name;
otx2_mbox_msg_send;
otx2_mbox_wait_for_rsp;
-
- otx2_intra_dev_get_cfg;
otx2_npa_lf_active;
otx2_npa_lf_obj_get;
otx2_npa_lf_obj_ref;
otx2_npa_pf_func_get;
otx2_npa_set_defaults;
+ otx2_register_irq;
otx2_sso_pf_func_get;
otx2_sso_pf_func_set;
-
- otx2_disable_irqs;
otx2_unregister_irq;
- otx2_register_irq;
local: *;
};
diff --git a/drivers/compress/isal/rte_pmd_isal_version.map b/drivers/compress/isal/rte_pmd_isal_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/compress/isal/rte_pmd_isal_version.map
+++ b/drivers/compress/isal/rte_pmd_isal_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map b/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
+++ b/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/qat/rte_pmd_qat_version.map b/drivers/compress/qat/rte_pmd_qat_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/qat/rte_pmd_qat_version.map
+++ b/drivers/compress/qat/rte_pmd_qat_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/zlib/rte_pmd_zlib_version.map b/drivers/compress/zlib/rte_pmd_zlib_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/zlib/rte_pmd_zlib_version.map
+++ b/drivers/compress/zlib/rte_pmd_zlib_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map b/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
+++ b/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/armv8/rte_pmd_armv8_version.map b/drivers/crypto/armv8/rte_pmd_armv8_version.map
index 1f84b68a83..f9f17e4f6e 100644
--- a/drivers/crypto/armv8/rte_pmd_armv8_version.map
+++ b/drivers/crypto/armv8/rte_pmd_armv8_version.map
@@ -1,3 +1,3 @@
-DPDK_17.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map b/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
+++ b/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/ccp/rte_pmd_ccp_version.map b/drivers/crypto/ccp/rte_pmd_ccp_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/crypto/ccp/rte_pmd_ccp_version.map
+++ b/drivers/crypto/ccp/rte_pmd_ccp_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 0bfb986d0b..5952d645fd 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,12 +1,8 @@
-DPDK_17.05 {
-
- local: *;
-};
-
-DPDK_18.11 {
+DPDK_20.0 {
global:
dpaa2_sec_eventq_attach;
dpaa2_sec_eventq_detach;
-} DPDK_17.05;
+ local: *;
+};
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index cc7f2162e0..8580fa13db 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -1,12 +1,8 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_19.11 {
+DPDK_20.0 {
global:
dpaa_sec_eventq_attach;
dpaa_sec_eventq_detach;
-} DPDK_17.11;
+ local: *;
+};
diff --git a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
index 8ffeca934e..f9f17e4f6e 100644
--- a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
+++ b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
@@ -1,3 +1,3 @@
-DPDK_16.07 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/mvsam/rte_pmd_mvsam_version.map b/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
+++ b/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/nitrox/rte_pmd_nitrox_version.map b/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
index 406964d1fc..f9f17e4f6e 100644
--- a/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
+++ b/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
@@ -1,3 +1,3 @@
-DPDK_19.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/null/rte_pmd_null_crypto_version.map b/drivers/crypto/null/rte_pmd_null_crypto_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/null/rte_pmd_null_crypto_version.map
+++ b/drivers/crypto/null/rte_pmd_null_crypto_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map b/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
+++ b/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/openssl/rte_pmd_openssl_version.map b/drivers/crypto/openssl/rte_pmd_openssl_version.map
index cc5829e30b..f9f17e4f6e 100644
--- a/drivers/crypto/openssl/rte_pmd_openssl_version.map
+++ b/drivers/crypto/openssl/rte_pmd_openssl_version.map
@@ -1,3 +1,3 @@
-DPDK_16.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
index 5c43127cf2..077afedce7 100644
--- a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -1,21 +1,16 @@
-DPDK_17.02 {
+DPDK_20.0 {
global:
rte_cryptodev_scheduler_load_user_scheduler;
- rte_cryptodev_scheduler_slave_attach;
- rte_cryptodev_scheduler_slave_detach;
- rte_cryptodev_scheduler_ordering_set;
- rte_cryptodev_scheduler_ordering_get;
-
-};
-
-DPDK_17.05 {
- global:
-
rte_cryptodev_scheduler_mode_get;
rte_cryptodev_scheduler_mode_set;
rte_cryptodev_scheduler_option_get;
rte_cryptodev_scheduler_option_set;
+ rte_cryptodev_scheduler_ordering_get;
+ rte_cryptodev_scheduler_ordering_set;
+ rte_cryptodev_scheduler_slave_attach;
+ rte_cryptodev_scheduler_slave_detach;
rte_cryptodev_scheduler_slaves_get;
-} DPDK_17.02;
+ local: *;
+};
diff --git a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
+++ b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map b/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
+++ b/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/zuc/rte_pmd_zuc_version.map b/drivers/crypto/zuc/rte_pmd_zuc_version.map
index cc5829e30b..f9f17e4f6e 100644
--- a/drivers/crypto/zuc/rte_pmd_zuc_version.map
+++ b/drivers/crypto/zuc/rte_pmd_zuc_version.map
@@ -1,3 +1,3 @@
-DPDK_16.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
+++ b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map b/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
index 1c0b7559dc..f9f17e4f6e 100644
--- a/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
+++ b/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dsw/rte_pmd_dsw_event_version.map b/drivers/event/dsw/rte_pmd_dsw_event_version.map
index 24bd5cdb35..f9f17e4f6e 100644
--- a/drivers/event/dsw/rte_pmd_dsw_event_version.map
+++ b/drivers/event/dsw/rte_pmd_dsw_event_version.map
@@ -1,3 +1,3 @@
-DPDK_18.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/octeontx/rte_pmd_octeontx_event_version.map b/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
+++ b/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
index 41c65c8c9c..f9f17e4f6e 100644
--- a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
+++ b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
+DPDK_20.0 {
local: *;
};
-
diff --git a/drivers/event/opdl/rte_pmd_opdl_event_version.map b/drivers/event/opdl/rte_pmd_opdl_event_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/event/opdl/rte_pmd_opdl_event_version.map
+++ b/drivers/event/opdl/rte_pmd_opdl_event_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
+++ b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/sw/rte_pmd_sw_event_version.map b/drivers/event/sw/rte_pmd_sw_event_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/event/sw/rte_pmd_sw_event_version.map
+++ b/drivers/event/sw/rte_pmd_sw_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket_version.map
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
index 60bf50b2d1..9eebaf7ffd 100644
--- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_dpaa_bpid_info;
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
index b45e7a9ac1..cd4bc88273 100644
--- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -1,16 +1,10 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_dpaa2_bpid_info;
rte_dpaa2_mbuf_alloc_bulk;
-
- local: *;
-};
-
-DPDK_18.05 {
- global:
-
rte_dpaa2_mbuf_from_buf_addr;
rte_dpaa2_mbuf_pool_bpid;
-} DPDK_17.05;
+ local: *;
+};
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
index d703368c31..d4f81aed8e 100644
--- a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
+++ b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
@@ -1,8 +1,8 @@
-DPDK_19.08 {
+DPDK_20.0 {
global:
- otx2_npa_lf_init;
otx2_npa_lf_fini;
+ otx2_npa_lf_init;
local: *;
};
diff --git a/drivers/mempool/ring/rte_mempool_ring_version.map b/drivers/mempool/ring/rte_mempool_ring_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/mempool/ring/rte_mempool_ring_version.map
+++ b/drivers/mempool/ring/rte_mempool_ring_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/stack/rte_mempool_stack_version.map b/drivers/mempool/stack/rte_mempool_stack_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/mempool/stack/rte_mempool_stack_version.map
+++ b/drivers/mempool/stack/rte_mempool_stack_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/af_packet/rte_pmd_af_packet_version.map b/drivers/net/af_packet/rte_pmd_af_packet_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/af_packet/rte_pmd_af_packet_version.map
+++ b/drivers/net/af_packet/rte_pmd_af_packet_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/af_xdp/rte_pmd_af_xdp_version.map b/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
index c6db030fe6..f9f17e4f6e 100644
--- a/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
+++ b/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
@@ -1,3 +1,3 @@
-DPDK_19.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ark/rte_pmd_ark_version.map b/drivers/net/ark/rte_pmd_ark_version.map
index 1062e0429f..f9f17e4f6e 100644
--- a/drivers/net/ark/rte_pmd_ark_version.map
+++ b/drivers/net/ark/rte_pmd_ark_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
- local: *;
-
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/atlantic/rte_pmd_atlantic_version.map b/drivers/net/atlantic/rte_pmd_atlantic_version.map
index b16faa999f..9b04838d84 100644
--- a/drivers/net/atlantic/rte_pmd_atlantic_version.map
+++ b/drivers/net/atlantic/rte_pmd_atlantic_version.map
@@ -1,5 +1,4 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
@@ -13,4 +12,3 @@ EXPERIMENTAL {
rte_pmd_atl_macsec_select_txsa;
rte_pmd_atl_macsec_select_rxsa;
};
-
diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/net/avp/rte_pmd_avp_version.map
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/axgbe/rte_pmd_axgbe_version.map b/drivers/net/axgbe/rte_pmd_axgbe_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/net/axgbe/rte_pmd_axgbe_version.map
+++ b/drivers/net/axgbe/rte_pmd_axgbe_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/bnx2x/rte_pmd_bnx2x_version.map b/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
index bd8138a034..f9f17e4f6e 100644
--- a/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
+++ b/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
@@ -1,4 +1,3 @@
-DPDK_2.1 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/bnxt/rte_pmd_bnxt_version.map b/drivers/net/bnxt/rte_pmd_bnxt_version.map
index 4750d40ad6..bb52562347 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt_version.map
+++ b/drivers/net/bnxt/rte_pmd_bnxt_version.map
@@ -1,4 +1,4 @@
-DPDK_17.08 {
+DPDK_20.0 {
global:
rte_pmd_bnxt_get_vf_rx_status;
@@ -10,13 +10,13 @@ DPDK_17.08 {
rte_pmd_bnxt_set_tx_loopback;
rte_pmd_bnxt_set_vf_mac_addr;
rte_pmd_bnxt_set_vf_mac_anti_spoof;
+ rte_pmd_bnxt_set_vf_persist_stats;
rte_pmd_bnxt_set_vf_rate_limit;
rte_pmd_bnxt_set_vf_rxmode;
rte_pmd_bnxt_set_vf_vlan_anti_spoof;
rte_pmd_bnxt_set_vf_vlan_filter;
rte_pmd_bnxt_set_vf_vlan_insert;
rte_pmd_bnxt_set_vf_vlan_stripq;
- rte_pmd_bnxt_set_vf_persist_stats;
local: *;
};
diff --git a/drivers/net/bonding/rte_pmd_bond_version.map b/drivers/net/bonding/rte_pmd_bond_version.map
index 00d955c481..270c7d5d55 100644
--- a/drivers/net/bonding/rte_pmd_bond_version.map
+++ b/drivers/net/bonding/rte_pmd_bond_version.map
@@ -1,9 +1,21 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_eth_bond_8023ad_agg_selection_get;
+ rte_eth_bond_8023ad_agg_selection_set;
+ rte_eth_bond_8023ad_conf_get;
+ rte_eth_bond_8023ad_dedicated_queues_disable;
+ rte_eth_bond_8023ad_dedicated_queues_enable;
+ rte_eth_bond_8023ad_ext_collect;
+ rte_eth_bond_8023ad_ext_collect_get;
+ rte_eth_bond_8023ad_ext_distrib;
+ rte_eth_bond_8023ad_ext_distrib_get;
+ rte_eth_bond_8023ad_ext_slowtx;
+ rte_eth_bond_8023ad_setup;
rte_eth_bond_8023ad_slave_info;
rte_eth_bond_active_slaves_get;
rte_eth_bond_create;
+ rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
rte_eth_bond_mac_address_reset;
rte_eth_bond_mac_address_set;
@@ -19,36 +31,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_2.1 {
- global:
-
- rte_eth_bond_free;
-
-} DPDK_2.0;
-
-DPDK_16.04 {
-};
-
-DPDK_16.07 {
- global:
-
- rte_eth_bond_8023ad_ext_collect;
- rte_eth_bond_8023ad_ext_collect_get;
- rte_eth_bond_8023ad_ext_distrib;
- rte_eth_bond_8023ad_ext_distrib_get;
- rte_eth_bond_8023ad_ext_slowtx;
-
-} DPDK_16.04;
-
-DPDK_17.08 {
- global:
-
- rte_eth_bond_8023ad_dedicated_queues_enable;
- rte_eth_bond_8023ad_dedicated_queues_disable;
- rte_eth_bond_8023ad_agg_selection_get;
- rte_eth_bond_8023ad_agg_selection_set;
- rte_eth_bond_8023ad_conf_get;
- rte_eth_bond_8023ad_setup;
-
-} DPDK_16.07;
diff --git a/drivers/net/cxgbe/rte_pmd_cxgbe_version.map b/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
index bd8138a034..f9f17e4f6e 100644
--- a/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
+++ b/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
@@ -1,4 +1,3 @@
-DPDK_2.1 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index 8cb4500b51..f403a1526d 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -1,12 +1,9 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_18.08 {
+DPDK_20.0 {
global:
dpaa_eth_eventq_attach;
dpaa_eth_eventq_detach;
rte_pmd_dpaa_set_tx_loopback;
-} DPDK_17.11;
+
+ local: *;
+};
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
index d1b4cdb232..f2bb793319 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
@@ -1,15 +1,11 @@
-DPDK_17.05 {
-
- local: *;
-};
-
-DPDK_17.11 {
+DPDK_20.0 {
global:
dpaa2_eth_eventq_attach;
dpaa2_eth_eventq_detach;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
@@ -17,4 +13,4 @@ EXPERIMENTAL {
rte_pmd_dpaa2_mux_flow_create;
rte_pmd_dpaa2_set_custom_hash;
rte_pmd_dpaa2_set_timestamp;
-} DPDK_17.11;
+};
diff --git a/drivers/net/e1000/rte_pmd_e1000_version.map b/drivers/net/e1000/rte_pmd_e1000_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/e1000/rte_pmd_e1000_version.map
+++ b/drivers/net/e1000/rte_pmd_e1000_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ena/rte_pmd_ena_version.map b/drivers/net/ena/rte_pmd_ena_version.map
index 349c6e1c22..f9f17e4f6e 100644
--- a/drivers/net/ena/rte_pmd_ena_version.map
+++ b/drivers/net/ena/rte_pmd_ena_version.map
@@ -1,4 +1,3 @@
-DPDK_16.04 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/enetc/rte_pmd_enetc_version.map b/drivers/net/enetc/rte_pmd_enetc_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/net/enetc/rte_pmd_enetc_version.map
+++ b/drivers/net/enetc/rte_pmd_enetc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/enic/rte_pmd_enic_version.map b/drivers/net/enic/rte_pmd_enic_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/enic/rte_pmd_enic_version.map
+++ b/drivers/net/enic/rte_pmd_enic_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/failsafe/rte_pmd_failsafe_version.map b/drivers/net/failsafe/rte_pmd_failsafe_version.map
index b6d2840be4..f9f17e4f6e 100644
--- a/drivers/net/failsafe/rte_pmd_failsafe_version.map
+++ b/drivers/net/failsafe/rte_pmd_failsafe_version.map
@@ -1,4 +1,3 @@
-DPDK_17.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/fm10k/rte_pmd_fm10k_version.map b/drivers/net/fm10k/rte_pmd_fm10k_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/fm10k/rte_pmd_fm10k_version.map
+++ b/drivers/net/fm10k/rte_pmd_fm10k_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/hinic/rte_pmd_hinic_version.map b/drivers/net/hinic/rte_pmd_hinic_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/net/hinic/rte_pmd_hinic_version.map
+++ b/drivers/net/hinic/rte_pmd_hinic_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map b/drivers/net/hns3/rte_pmd_hns3_version.map
index 35e5f2debb..f9f17e4f6e 100644
--- a/drivers/net/hns3/rte_pmd_hns3_version.map
+++ b/drivers/net/hns3/rte_pmd_hns3_version.map
@@ -1,3 +1,3 @@
-DPDK_19.11 {
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/i40e/rte_pmd_i40e_version.map b/drivers/net/i40e/rte_pmd_i40e_version.map
index cccd5768c2..a80e69b93e 100644
--- a/drivers/net/i40e/rte_pmd_i40e_version.map
+++ b/drivers/net/i40e/rte_pmd_i40e_version.map
@@ -1,23 +1,34 @@
-DPDK_2.0 {
-
- local: *;
-};
-
-DPDK_17.02 {
+DPDK_20.0 {
global:
+ rte_pmd_i40e_add_vf_mac_addr;
+ rte_pmd_i40e_flow_add_del_packet_template;
+ rte_pmd_i40e_flow_type_mapping_get;
+ rte_pmd_i40e_flow_type_mapping_reset;
+ rte_pmd_i40e_flow_type_mapping_update;
+ rte_pmd_i40e_get_ddp_info;
+ rte_pmd_i40e_get_ddp_list;
rte_pmd_i40e_get_vf_stats;
+ rte_pmd_i40e_inset_get;
+ rte_pmd_i40e_inset_set;
rte_pmd_i40e_ping_vfs;
+ rte_pmd_i40e_process_ddp_package;
rte_pmd_i40e_ptype_mapping_get;
rte_pmd_i40e_ptype_mapping_replace;
rte_pmd_i40e_ptype_mapping_reset;
rte_pmd_i40e_ptype_mapping_update;
+ rte_pmd_i40e_query_vfid_by_mac;
rte_pmd_i40e_reset_vf_stats;
+ rte_pmd_i40e_rss_queue_region_conf;
+ rte_pmd_i40e_set_tc_strict_prio;
rte_pmd_i40e_set_tx_loopback;
rte_pmd_i40e_set_vf_broadcast;
rte_pmd_i40e_set_vf_mac_addr;
rte_pmd_i40e_set_vf_mac_anti_spoof;
+ rte_pmd_i40e_set_vf_max_bw;
rte_pmd_i40e_set_vf_multicast_promisc;
+ rte_pmd_i40e_set_vf_tc_bw_alloc;
+ rte_pmd_i40e_set_vf_tc_max_bw;
rte_pmd_i40e_set_vf_unicast_promisc;
rte_pmd_i40e_set_vf_vlan_anti_spoof;
rte_pmd_i40e_set_vf_vlan_filter;
@@ -25,43 +36,5 @@ DPDK_17.02 {
rte_pmd_i40e_set_vf_vlan_stripq;
rte_pmd_i40e_set_vf_vlan_tag;
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_pmd_i40e_set_tc_strict_prio;
- rte_pmd_i40e_set_vf_max_bw;
- rte_pmd_i40e_set_vf_tc_bw_alloc;
- rte_pmd_i40e_set_vf_tc_max_bw;
- rte_pmd_i40e_process_ddp_package;
- rte_pmd_i40e_get_ddp_list;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_pmd_i40e_get_ddp_info;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_pmd_i40e_add_vf_mac_addr;
- rte_pmd_i40e_flow_add_del_packet_template;
- rte_pmd_i40e_flow_type_mapping_update;
- rte_pmd_i40e_flow_type_mapping_get;
- rte_pmd_i40e_flow_type_mapping_reset;
- rte_pmd_i40e_query_vfid_by_mac;
- rte_pmd_i40e_rss_queue_region_conf;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_pmd_i40e_inset_get;
- rte_pmd_i40e_inset_set;
-} DPDK_17.11;
\ No newline at end of file
+ local: *;
+};
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
index 7b23b609da..f9f17e4f6e 100644
--- a/drivers/net/ice/rte_pmd_ice_version.map
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -1,4 +1,3 @@
-DPDK_19.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ifc/rte_pmd_ifc_version.map b/drivers/net/ifc/rte_pmd_ifc_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/net/ifc/rte_pmd_ifc_version.map
+++ b/drivers/net/ifc/rte_pmd_ifc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
index fc8c95e919..f9f17e4f6e 100644
--- a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
+++ b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
@@ -1,4 +1,3 @@
-DPDK_19.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe_version.map b/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
index c814f96d72..21534dbc3d 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
@@ -1,57 +1,39 @@
-DPDK_2.0 {
-
- local: *;
-};
-
-DPDK_16.11 {
- global:
-
- rte_pmd_ixgbe_set_all_queues_drop_en;
- rte_pmd_ixgbe_set_tx_loopback;
- rte_pmd_ixgbe_set_vf_mac_addr;
- rte_pmd_ixgbe_set_vf_mac_anti_spoof;
- rte_pmd_ixgbe_set_vf_split_drop_en;
- rte_pmd_ixgbe_set_vf_vlan_anti_spoof;
- rte_pmd_ixgbe_set_vf_vlan_insert;
- rte_pmd_ixgbe_set_vf_vlan_stripq;
-} DPDK_2.0;
-
-DPDK_17.02 {
+DPDK_20.0 {
global:
+ rte_pmd_ixgbe_bypass_event_show;
+ rte_pmd_ixgbe_bypass_event_store;
+ rte_pmd_ixgbe_bypass_init;
+ rte_pmd_ixgbe_bypass_state_set;
+ rte_pmd_ixgbe_bypass_state_show;
+ rte_pmd_ixgbe_bypass_ver_show;
+ rte_pmd_ixgbe_bypass_wd_reset;
+ rte_pmd_ixgbe_bypass_wd_timeout_show;
+ rte_pmd_ixgbe_bypass_wd_timeout_store;
rte_pmd_ixgbe_macsec_config_rxsc;
rte_pmd_ixgbe_macsec_config_txsc;
rte_pmd_ixgbe_macsec_disable;
rte_pmd_ixgbe_macsec_enable;
rte_pmd_ixgbe_macsec_select_rxsa;
rte_pmd_ixgbe_macsec_select_txsa;
+ rte_pmd_ixgbe_ping_vf;
+ rte_pmd_ixgbe_set_all_queues_drop_en;
+ rte_pmd_ixgbe_set_tc_bw_alloc;
+ rte_pmd_ixgbe_set_tx_loopback;
+ rte_pmd_ixgbe_set_vf_mac_addr;
+ rte_pmd_ixgbe_set_vf_mac_anti_spoof;
rte_pmd_ixgbe_set_vf_rate_limit;
rte_pmd_ixgbe_set_vf_rx;
rte_pmd_ixgbe_set_vf_rxmode;
+ rte_pmd_ixgbe_set_vf_split_drop_en;
rte_pmd_ixgbe_set_vf_tx;
+ rte_pmd_ixgbe_set_vf_vlan_anti_spoof;
rte_pmd_ixgbe_set_vf_vlan_filter;
-} DPDK_16.11;
+ rte_pmd_ixgbe_set_vf_vlan_insert;
+ rte_pmd_ixgbe_set_vf_vlan_stripq;
-DPDK_17.05 {
- global:
-
- rte_pmd_ixgbe_ping_vf;
- rte_pmd_ixgbe_set_tc_bw_alloc;
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_pmd_ixgbe_bypass_event_show;
- rte_pmd_ixgbe_bypass_event_store;
- rte_pmd_ixgbe_bypass_init;
- rte_pmd_ixgbe_bypass_state_set;
- rte_pmd_ixgbe_bypass_state_show;
- rte_pmd_ixgbe_bypass_ver_show;
- rte_pmd_ixgbe_bypass_wd_reset;
- rte_pmd_ixgbe_bypass_wd_timeout_show;
- rte_pmd_ixgbe_bypass_wd_timeout_store;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/drivers/net/kni/rte_pmd_kni_version.map b/drivers/net/kni/rte_pmd_kni_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/net/kni/rte_pmd_kni_version.map
+++ b/drivers/net/kni/rte_pmd_kni_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/liquidio/rte_pmd_liquidio_version.map b/drivers/net/liquidio/rte_pmd_liquidio_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/net/liquidio/rte_pmd_liquidio_version.map
+++ b/drivers/net/liquidio/rte_pmd_liquidio_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/memif/rte_pmd_memif_version.map b/drivers/net/memif/rte_pmd_memif_version.map
index 8861484fb3..f9f17e4f6e 100644
--- a/drivers/net/memif/rte_pmd_memif_version.map
+++ b/drivers/net/memif/rte_pmd_memif_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/mlx4/rte_pmd_mlx4_version.map b/drivers/net/mlx4/rte_pmd_mlx4_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/mlx4/rte_pmd_mlx4_version.map
+++ b/drivers/net/mlx4/rte_pmd_mlx4_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mlx5/rte_pmd_mlx5_version.map b/drivers/net/mlx5/rte_pmd_mlx5_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5_version.map
+++ b/drivers/net/mlx5/rte_pmd_mlx5_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mvneta/rte_pmd_mvneta_version.map b/drivers/net/mvneta/rte_pmd_mvneta_version.map
index 24bd5cdb35..f9f17e4f6e 100644
--- a/drivers/net/mvneta/rte_pmd_mvneta_version.map
+++ b/drivers/net/mvneta/rte_pmd_mvneta_version.map
@@ -1,3 +1,3 @@
-DPDK_18.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mvpp2/rte_pmd_mvpp2_version.map b/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
+++ b/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/netvsc/rte_pmd_netvsc_version.map b/drivers/net/netvsc/rte_pmd_netvsc_version.map
index d534019a6b..f9f17e4f6e 100644
--- a/drivers/net/netvsc/rte_pmd_netvsc_version.map
+++ b/drivers/net/netvsc/rte_pmd_netvsc_version.map
@@ -1,5 +1,3 @@
-/* SPDX-License-Identifier: BSD-3-Clause */
-
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/nfb/rte_pmd_nfb_version.map b/drivers/net/nfb/rte_pmd_nfb_version.map
index fc8c95e919..f9f17e4f6e 100644
--- a/drivers/net/nfb/rte_pmd_nfb_version.map
+++ b/drivers/net/nfb/rte_pmd_nfb_version.map
@@ -1,4 +1,3 @@
-DPDK_19.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/nfp/rte_pmd_nfp_version.map b/drivers/net/nfp/rte_pmd_nfp_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/nfp/rte_pmd_nfp_version.map
+++ b/drivers/net/nfp/rte_pmd_nfp_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/null/rte_pmd_null_version.map b/drivers/net/null/rte_pmd_null_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/null/rte_pmd_null_version.map
+++ b/drivers/net/null/rte_pmd_null_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/octeontx/rte_pmd_octeontx_version.map b/drivers/net/octeontx/rte_pmd_octeontx_version.map
index a3161b14d0..f7cae02fac 100644
--- a/drivers/net/octeontx/rte_pmd_octeontx_version.map
+++ b/drivers/net/octeontx/rte_pmd_octeontx_version.map
@@ -1,11 +1,7 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_18.02 {
+DPDK_20.0 {
global:
rte_octeontx_pchan_map;
-} DPDK_17.11;
+ local: *;
+};
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/pcap/rte_pmd_pcap_version.map b/drivers/net/pcap/rte_pmd_pcap_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/pcap/rte_pmd_pcap_version.map
+++ b/drivers/net/pcap/rte_pmd_pcap_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/qede/rte_pmd_qede_version.map b/drivers/net/qede/rte_pmd_qede_version.map
index 349c6e1c22..f9f17e4f6e 100644
--- a/drivers/net/qede/rte_pmd_qede_version.map
+++ b/drivers/net/qede/rte_pmd_qede_version.map
@@ -1,4 +1,3 @@
-DPDK_16.04 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ring/rte_pmd_ring_version.map b/drivers/net/ring/rte_pmd_ring_version.map
index 1f785d9409..ebb6be2733 100644
--- a/drivers/net/ring/rte_pmd_ring_version.map
+++ b/drivers/net/ring/rte_pmd_ring_version.map
@@ -1,14 +1,8 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_eth_from_ring;
rte_eth_from_rings;
local: *;
};
-
-DPDK_2.2 {
- global:
-
- rte_eth_from_ring;
-
-} DPDK_2.0;
diff --git a/drivers/net/sfc/rte_pmd_sfc_version.map b/drivers/net/sfc/rte_pmd_sfc_version.map
index 31eca32ebe..f9f17e4f6e 100644
--- a/drivers/net/sfc/rte_pmd_sfc_version.map
+++ b/drivers/net/sfc/rte_pmd_sfc_version.map
@@ -1,4 +1,3 @@
-DPDK_17.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/softnic/rte_pmd_softnic_version.map b/drivers/net/softnic/rte_pmd_softnic_version.map
index bc44b06f98..50f113d5a2 100644
--- a/drivers/net/softnic/rte_pmd_softnic_version.map
+++ b/drivers/net/softnic/rte_pmd_softnic_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_pmd_softnic_run;
diff --git a/drivers/net/szedata2/rte_pmd_szedata2_version.map b/drivers/net/szedata2/rte_pmd_szedata2_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/szedata2/rte_pmd_szedata2_version.map
+++ b/drivers/net/szedata2/rte_pmd_szedata2_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/tap/rte_pmd_tap_version.map b/drivers/net/tap/rte_pmd_tap_version.map
index 31eca32ebe..f9f17e4f6e 100644
--- a/drivers/net/tap/rte_pmd_tap_version.map
+++ b/drivers/net/tap/rte_pmd_tap_version.map
@@ -1,4 +1,3 @@
-DPDK_17.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_version.map b/drivers/net/thunderx/rte_pmd_thunderx_version.map
index 1901bcb3b3..f9f17e4f6e 100644
--- a/drivers/net/thunderx/rte_pmd_thunderx_version.map
+++ b/drivers/net/thunderx/rte_pmd_thunderx_version.map
@@ -1,4 +1,3 @@
-DPDK_16.07 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map b/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
+++ b/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vhost/rte_pmd_vhost_version.map b/drivers/net/vhost/rte_pmd_vhost_version.map
index 695db85749..16b591ccc4 100644
--- a/drivers/net/vhost/rte_pmd_vhost_version.map
+++ b/drivers/net/vhost/rte_pmd_vhost_version.map
@@ -1,13 +1,8 @@
-DPDK_16.04 {
+DPDK_20.0 {
global:
rte_eth_vhost_get_queue_event;
-
- local: *;
-};
-
-DPDK_16.11 {
- global:
-
rte_eth_vhost_get_vid_from_port_id;
+
+ local: *;
};
diff --git a/drivers/net/virtio/rte_pmd_virtio_version.map b/drivers/net/virtio/rte_pmd_virtio_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/virtio/rte_pmd_virtio_version.map
+++ b/drivers/net/virtio/rte_pmd_virtio_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map b/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
+++ b/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map b/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
+++ b/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map b/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
index d16a136fc8..ca6a0d7626 100644
--- a/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
+++ b/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
@@ -1,4 +1,4 @@
-DPDK_19.05 {
+DPDK_20.0 {
global:
rte_qdma_attr_get;
@@ -9,9 +9,9 @@ DPDK_19.05 {
rte_qdma_start;
rte_qdma_stop;
rte_qdma_vq_create;
- rte_qdma_vq_destroy;
rte_qdma_vq_dequeue;
rte_qdma_vq_dequeue_multi;
+ rte_qdma_vq_destroy;
rte_qdma_vq_enqueue;
rte_qdma_vq_enqueue_multi;
rte_qdma_vq_stats;
diff --git a/drivers/raw/ifpga/rte_rawdev_ifpga_version.map b/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
+++ b/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/ioat/rte_rawdev_ioat_version.map b/drivers/raw/ioat/rte_rawdev_ioat_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/raw/ioat/rte_rawdev_ioat_version.map
+++ b/drivers/raw/ioat/rte_rawdev_ioat_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/ntb/rte_rawdev_ntb_version.map b/drivers/raw/ntb/rte_rawdev_ntb_version.map
index 8861484fb3..f9f17e4f6e 100644
--- a/drivers/raw/ntb/rte_rawdev_ntb_version.map
+++ b/drivers/raw/ntb/rte_rawdev_ntb_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map b/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
+++ b/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/skeleton/rte_rawdev_skeleton_version.map b/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
+++ b/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
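Every file touched above is a GNU ld version script, and the change applied to each driver is mechanical: the chain of per-release version nodes (each closing with `} DPDK_x.y;` to name its predecessor) is collapsed into a single `DPDK_20.0` node, with the surviving symbols sorted alphabetically. As a minimal sketch of that pattern (the `rte_foo_*` names are placeholders, not real DPDK symbols):

```
/* before: version nodes accumulated release by release, chained together */
DPDK_2.0 {
	global:
		rte_foo_create;
	local: *;
};

DPDK_17.05 {
	global:
		rte_foo_destroy;
} DPDK_2.0;

/* after: one consolidated node; "local: *" keeps everything else hidden */
DPDK_20.0 {
	global:
		rte_foo_create;
		rte_foo_destroy;

	local: *;
};
```

Files that exported nothing keep only the `local: *;` catch-all under the new node, which is why so many of the hunks above reduce to a one-line rename.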
diff --git a/lib/librte_acl/rte_acl_version.map b/lib/librte_acl/rte_acl_version.map
index b09370a104..c3daca8115 100644
--- a/lib/librte_acl/rte_acl_version.map
+++ b/lib/librte_acl/rte_acl_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_acl_add_rules;
diff --git a/lib/librte_bbdev/rte_bbdev_version.map b/lib/librte_bbdev/rte_bbdev_version.map
index 3624eb1cb4..45b560dbe7 100644
--- a/lib/librte_bbdev/rte_bbdev_version.map
+++ b/lib/librte_bbdev/rte_bbdev_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_bitratestats/rte_bitratestats_version.map b/lib/librte_bitratestats/rte_bitratestats_version.map
index fe7454452d..88fc2912db 100644
--- a/lib/librte_bitratestats/rte_bitratestats_version.map
+++ b/lib/librte_bitratestats/rte_bitratestats_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_stats_bitrate_calc;
diff --git a/lib/librte_bpf/rte_bpf_version.map b/lib/librte_bpf/rte_bpf_version.map
index a203e088ea..e1ec43faa0 100644
--- a/lib/librte_bpf/rte_bpf_version.map
+++ b/lib/librte_bpf/rte_bpf_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_cfgfile/rte_cfgfile_version.map b/lib/librte_cfgfile/rte_cfgfile_version.map
index a0a11cea8d..906eee96bf 100644
--- a/lib/librte_cfgfile/rte_cfgfile_version.map
+++ b/lib/librte_cfgfile/rte_cfgfile_version.map
@@ -1,40 +1,22 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_cfgfile_add_entry;
+ rte_cfgfile_add_section;
rte_cfgfile_close;
+ rte_cfgfile_create;
rte_cfgfile_get_entry;
rte_cfgfile_has_entry;
rte_cfgfile_has_section;
rte_cfgfile_load;
+ rte_cfgfile_load_with_params;
rte_cfgfile_num_sections;
+ rte_cfgfile_save;
rte_cfgfile_section_entries;
+ rte_cfgfile_section_entries_by_index;
rte_cfgfile_section_num_entries;
rte_cfgfile_sections;
+ rte_cfgfile_set_entry;
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_cfgfile_section_entries_by_index;
-
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_cfgfile_load_with_params;
-
-} DPDK_16.04;
-
-DPDK_17.11 {
- global:
-
- rte_cfgfile_add_entry;
- rte_cfgfile_add_section;
- rte_cfgfile_create;
- rte_cfgfile_save;
- rte_cfgfile_set_entry;
-
-} DPDK_17.05;
diff --git a/lib/librte_cmdline/rte_cmdline_version.map b/lib/librte_cmdline/rte_cmdline_version.map
index 04bcb387f2..95fce812ff 100644
--- a/lib/librte_cmdline/rte_cmdline_version.map
+++ b/lib/librte_cmdline/rte_cmdline_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
cirbuf_add_buf_head;
@@ -40,6 +40,7 @@ DPDK_2.0 {
cmdline_parse_num;
cmdline_parse_portlist;
cmdline_parse_string;
+ cmdline_poll;
cmdline_printf;
cmdline_quit;
cmdline_set_prompt;
@@ -68,10 +69,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_2.1 {
- global:
-
- cmdline_poll;
-
-} DPDK_2.0;
diff --git a/lib/librte_compressdev/rte_compressdev_version.map b/lib/librte_compressdev/rte_compressdev_version.map
index e2a108b650..cfcd50ac1c 100644
--- a/lib/librte_compressdev/rte_compressdev_version.map
+++ b/lib/librte_compressdev/rte_compressdev_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 3deb265ac2..1dd1e259a0 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -1,92 +1,62 @@
-DPDK_16.04 {
+DPDK_20.0 {
global:
- rte_cryptodevs;
+ rte_crypto_aead_algorithm_strings;
+ rte_crypto_aead_operation_strings;
+ rte_crypto_auth_algorithm_strings;
+ rte_crypto_auth_operation_strings;
+ rte_crypto_cipher_algorithm_strings;
+ rte_crypto_cipher_operation_strings;
+ rte_crypto_op_pool_create;
+ rte_cryptodev_allocate_driver;
rte_cryptodev_callback_register;
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
- rte_cryptodev_count;
rte_cryptodev_configure;
+ rte_cryptodev_count;
+ rte_cryptodev_device_count_by_driver;
+ rte_cryptodev_devices_get;
+ rte_cryptodev_driver_id_get;
+ rte_cryptodev_driver_name_get;
+ rte_cryptodev_get_aead_algo_enum;
+ rte_cryptodev_get_auth_algo_enum;
+ rte_cryptodev_get_cipher_algo_enum;
rte_cryptodev_get_dev_id;
rte_cryptodev_get_feature_name;
+ rte_cryptodev_get_sec_ctx;
rte_cryptodev_info_get;
+ rte_cryptodev_name_get;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
+ rte_cryptodev_pmd_create;
+ rte_cryptodev_pmd_create_dev_name;
+ rte_cryptodev_pmd_destroy;
+ rte_cryptodev_pmd_get_dev;
+ rte_cryptodev_pmd_get_named_dev;
+ rte_cryptodev_pmd_is_valid_dev;
+ rte_cryptodev_pmd_parse_input_args;
rte_cryptodev_pmd_release_device;
- rte_cryptodev_sym_session_create;
- rte_cryptodev_sym_session_free;
+ rte_cryptodev_queue_pair_count;
+ rte_cryptodev_queue_pair_setup;
rte_cryptodev_socket_id;
rte_cryptodev_start;
rte_cryptodev_stats_get;
rte_cryptodev_stats_reset;
rte_cryptodev_stop;
- rte_cryptodev_queue_pair_count;
- rte_cryptodev_queue_pair_setup;
- rte_crypto_op_pool_create;
-
- local: *;
-};
-
-DPDK_17.02 {
- global:
-
- rte_cryptodev_devices_get;
- rte_cryptodev_pmd_create_dev_name;
- rte_cryptodev_pmd_get_dev;
- rte_cryptodev_pmd_get_named_dev;
- rte_cryptodev_pmd_is_valid_dev;
+ rte_cryptodev_sym_capability_check_aead;
rte_cryptodev_sym_capability_check_auth;
rte_cryptodev_sym_capability_check_cipher;
rte_cryptodev_sym_capability_get;
- rte_crypto_auth_algorithm_strings;
- rte_crypto_auth_operation_strings;
- rte_crypto_cipher_algorithm_strings;
- rte_crypto_cipher_operation_strings;
-
-} DPDK_16.04;
-
-DPDK_17.05 {
- global:
-
- rte_cryptodev_get_auth_algo_enum;
- rte_cryptodev_get_cipher_algo_enum;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_cryptodev_allocate_driver;
- rte_cryptodev_device_count_by_driver;
- rte_cryptodev_driver_id_get;
- rte_cryptodev_driver_name_get;
- rte_cryptodev_get_aead_algo_enum;
- rte_cryptodev_sym_capability_check_aead;
- rte_cryptodev_sym_session_init;
- rte_cryptodev_sym_session_clear;
- rte_crypto_aead_algorithm_strings;
- rte_crypto_aead_operation_strings;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_cryptodev_get_sec_ctx;
- rte_cryptodev_name_get;
- rte_cryptodev_pmd_create;
- rte_cryptodev_pmd_destroy;
- rte_cryptodev_pmd_parse_input_args;
-
-} DPDK_17.08;
-
-DPDK_18.05 {
- global:
-
rte_cryptodev_sym_get_header_session_size;
rte_cryptodev_sym_get_private_session_size;
+ rte_cryptodev_sym_session_clear;
+ rte_cryptodev_sym_session_create;
+ rte_cryptodev_sym_session_free;
+ rte_cryptodev_sym_session_init;
+ rte_cryptodevs;
-} DPDK_17.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_distributor/rte_distributor_version.map b/lib/librte_distributor/rte_distributor_version.map
index 00e26b4804..1b7c643005 100644
--- a/lib/librte_distributor/rte_distributor_version.map
+++ b/lib/librte_distributor/rte_distributor_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_distributor_clear_returns;
@@ -10,4 +10,6 @@ DPDK_17.05 {
rte_distributor_request_pkt;
rte_distributor_return_pkt;
rte_distributor_returned_pkts;
+
+ local: *;
};
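The effect of such a script can be checked outside DPDK with a toy shared object. This is only an illustrative sketch: the file names and the `rte_demo_fn`/`helper` symbols below are made up, and it assumes a host with `cc` and `readelf` available.

```shell
# Write a one-node version script in the consolidated DPDK_20.0 style.
cat > demo.map <<'EOF'
DPDK_20.0 {
	global:
		rte_demo_fn;
	local: *;
};
EOF

# Two symbols: one listed under "global:", one that "local: *" should hide.
cat > demo.c <<'EOF'
int rte_demo_fn(void) { return 0; }
int helper(void) { return 1; }
EOF

cc -shared -fPIC -Wl,--version-script=demo.map demo.c -o libdemo.so

# rte_demo_fn is exported tagged with the DPDK_20.0 version node;
# helper does not appear in the dynamic symbol table at all.
readelf --dyn-syms -W libdemo.so | grep rte_demo_fn
```

The same inspection on a built DPDK library shows why this patch matters for ABI checks: after the rename, every exported stable symbol carries the one `DPDK_20.0` tag instead of whichever release it happened to be introduced in.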
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 7cbf82d37b..8c41999317 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
__rte_panic;
@@ -7,46 +7,111 @@ DPDK_2.0 {
lcore_config;
per_lcore__lcore_id;
per_lcore__rte_errno;
+ rte_bus_dump;
+ rte_bus_find;
+ rte_bus_find_by_device;
+ rte_bus_find_by_name;
+ rte_bus_get_iommu_class;
+ rte_bus_probe;
+ rte_bus_register;
+ rte_bus_scan;
+ rte_bus_unregister;
rte_calloc;
rte_calloc_socket;
rte_cpu_check_supported;
rte_cpu_get_flag_enabled;
+ rte_cpu_get_flag_name;
+ rte_cpu_is_supported;
+ rte_ctrl_thread_create;
rte_cycles_vmware_tsc_map;
rte_delay_us;
+ rte_delay_us_block;
+ rte_delay_us_callback_register;
+ rte_dev_is_probed;
+ rte_dev_probe;
+ rte_dev_remove;
+ rte_devargs_add;
+ rte_devargs_dump;
+ rte_devargs_insert;
+ rte_devargs_next;
+ rte_devargs_parse;
+ rte_devargs_parsef;
+ rte_devargs_remove;
+ rte_devargs_type_count;
rte_dump_physmem_layout;
rte_dump_registers;
rte_dump_stack;
rte_dump_tailq;
rte_eal_alarm_cancel;
rte_eal_alarm_set;
+ rte_eal_cleanup;
+ rte_eal_create_uio_dev;
rte_eal_get_configuration;
rte_eal_get_lcore_state;
rte_eal_get_physmem_size;
+ rte_eal_get_runtime_dir;
rte_eal_has_hugepages;
+ rte_eal_has_pci;
+ rte_eal_hotplug_add;
+ rte_eal_hotplug_remove;
rte_eal_hpet_init;
rte_eal_init;
rte_eal_iopl_init;
+ rte_eal_iova_mode;
rte_eal_lcore_role;
+ rte_eal_mbuf_user_pool_ops;
rte_eal_mp_remote_launch;
rte_eal_mp_wait_lcore;
+ rte_eal_primary_proc_alive;
rte_eal_process_type;
rte_eal_remote_launch;
rte_eal_tailq_lookup;
rte_eal_tailq_register;
+ rte_eal_using_phys_addrs;
+ rte_eal_vfio_intr_mode;
rte_eal_wait_lcore;
+ rte_epoll_ctl;
+ rte_epoll_wait;
rte_exit;
rte_free;
rte_get_hpet_cycles;
rte_get_hpet_hz;
rte_get_tsc_hz;
rte_hexdump;
+ rte_hypervisor_get;
+ rte_hypervisor_get_name;
+ rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
+ rte_intr_cap_multiple;
rte_intr_disable;
+ rte_intr_dp_is_en;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
rte_intr_enable;
+ rte_intr_free_epoll_fd;
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_keepalive_create;
+ rte_keepalive_dispatch_pings;
+ rte_keepalive_mark_alive;
+ rte_keepalive_mark_sleep;
+ rte_keepalive_register_core;
+ rte_keepalive_register_relay_callback;
+ rte_lcore_has_role;
+ rte_lcore_index;
+ rte_lcore_to_socket_id;
rte_log;
rte_log_cur_msg_loglevel;
rte_log_cur_msg_logtype;
+ rte_log_dump;
+ rte_log_get_global_level;
+ rte_log_get_level;
+ rte_log_register;
+ rte_log_set_global_level;
+ rte_log_set_level;
+ rte_log_set_level_pattern;
+ rte_log_set_level_regexp;
rte_logs;
rte_malloc;
rte_malloc_dump_stats;
@@ -54,155 +119,38 @@ DPDK_2.0 {
rte_malloc_set_limit;
rte_malloc_socket;
rte_malloc_validate;
+ rte_malloc_virt2iova;
+ rte_mcfg_mem_read_lock;
+ rte_mcfg_mem_read_unlock;
+ rte_mcfg_mem_write_lock;
+ rte_mcfg_mem_write_unlock;
+ rte_mcfg_mempool_read_lock;
+ rte_mcfg_mempool_read_unlock;
+ rte_mcfg_mempool_write_lock;
+ rte_mcfg_mempool_write_unlock;
+ rte_mcfg_tailq_read_lock;
+ rte_mcfg_tailq_read_unlock;
+ rte_mcfg_tailq_write_lock;
+ rte_mcfg_tailq_write_unlock;
rte_mem_lock_page;
+ rte_mem_virt2iova;
rte_mem_virt2phy;
rte_memdump;
rte_memory_get_nchannel;
rte_memory_get_nrank;
rte_memzone_dump;
+ rte_memzone_free;
rte_memzone_lookup;
rte_memzone_reserve;
rte_memzone_reserve_aligned;
rte_memzone_reserve_bounded;
rte_memzone_walk;
rte_openlog_stream;
+ rte_rand;
rte_realloc;
- rte_set_application_usage_hook;
- rte_socket_id;
- rte_strerror;
- rte_strsplit;
- rte_sys_gettid;
- rte_thread_get_affinity;
- rte_thread_set_affinity;
- rte_vlog;
- rte_zmalloc;
- rte_zmalloc_socket;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_epoll_ctl;
- rte_epoll_wait;
- rte_intr_allow_others;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
- rte_memzone_free;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
- rte_intr_cap_multiple;
- rte_keepalive_create;
- rte_keepalive_dispatch_pings;
- rte_keepalive_mark_alive;
- rte_keepalive_register_core;
-
-} DPDK_2.1;
-
-DPDK_16.04 {
- global:
-
- rte_cpu_get_flag_name;
- rte_eal_primary_proc_alive;
-
-} DPDK_2.2;
-
-DPDK_16.07 {
- global:
-
- rte_keepalive_mark_sleep;
- rte_keepalive_register_relay_callback;
- rte_rtm_supported;
- rte_thread_setname;
-
-} DPDK_16.04;
-
-DPDK_16.11 {
- global:
-
- rte_delay_us_block;
- rte_delay_us_callback_register;
-
-} DPDK_16.07;
-
-DPDK_17.02 {
- global:
-
- rte_bus_dump;
- rte_bus_probe;
- rte_bus_register;
- rte_bus_scan;
- rte_bus_unregister;
-
-} DPDK_16.11;
-
-DPDK_17.05 {
- global:
-
- rte_cpu_is_supported;
- rte_intr_free_epoll_fd;
- rte_log_dump;
- rte_log_get_global_level;
- rte_log_register;
- rte_log_set_global_level;
- rte_log_set_level;
- rte_log_set_level_regexp;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_bus_find;
- rte_bus_find_by_device;
- rte_bus_find_by_name;
- rte_log_get_level;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_eal_create_uio_dev;
- rte_bus_get_iommu_class;
- rte_eal_has_pci;
- rte_eal_iova_mode;
- rte_eal_using_phys_addrs;
- rte_eal_vfio_intr_mode;
- rte_lcore_has_role;
- rte_malloc_virt2iova;
- rte_mem_virt2iova;
- rte_vfio_enable;
- rte_vfio_is_enabled;
- rte_vfio_noiommu_is_enabled;
- rte_vfio_release_device;
- rte_vfio_setup_device;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_hypervisor_get;
- rte_hypervisor_get_name;
- rte_vfio_clear_group;
rte_reciprocal_value;
rte_reciprocal_value_u64;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_log_set_level_pattern;
+ rte_rtm_supported;
rte_service_attr_get;
rte_service_attr_reset_all;
rte_service_component_register;
@@ -215,6 +163,8 @@ DPDK_18.05 {
rte_service_get_count;
rte_service_get_name;
rte_service_lcore_add;
+ rte_service_lcore_attr_get;
+ rte_service_lcore_attr_reset_all;
rte_service_lcore_count;
rte_service_lcore_count_services;
rte_service_lcore_del;
@@ -224,6 +174,7 @@ DPDK_18.05 {
rte_service_lcore_stop;
rte_service_map_lcore_get;
rte_service_map_lcore_set;
+ rte_service_may_be_active;
rte_service_probe_capability;
rte_service_run_iter_on_app_lcore;
rte_service_runstate_get;
@@ -231,17 +182,23 @@ DPDK_18.05 {
rte_service_set_runstate_mapped_check;
rte_service_set_stats_enable;
rte_service_start_with_defaults;
-
-} DPDK_18.02;
-
-DPDK_18.08 {
- global:
-
- rte_eal_mbuf_user_pool_ops;
+ rte_set_application_usage_hook;
+ rte_socket_count;
+ rte_socket_id;
+ rte_socket_id_by_idx;
+ rte_srand;
+ rte_strerror;
+ rte_strscpy;
+ rte_strsplit;
+ rte_sys_gettid;
+ rte_thread_get_affinity;
+ rte_thread_set_affinity;
+ rte_thread_setname;
rte_uuid_compare;
rte_uuid_is_null;
rte_uuid_parse;
rte_uuid_unparse;
+ rte_vfio_clear_group;
rte_vfio_container_create;
rte_vfio_container_destroy;
rte_vfio_container_dma_map;
@@ -250,67 +207,20 @@ DPDK_18.08 {
rte_vfio_container_group_unbind;
rte_vfio_dma_map;
rte_vfio_dma_unmap;
+ rte_vfio_enable;
rte_vfio_get_container_fd;
rte_vfio_get_group_fd;
rte_vfio_get_group_num;
-
-} DPDK_18.05;
-
-DPDK_18.11 {
- global:
-
- rte_dev_probe;
- rte_dev_remove;
- rte_eal_get_runtime_dir;
- rte_eal_hotplug_add;
- rte_eal_hotplug_remove;
- rte_strscpy;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- rte_ctrl_thread_create;
- rte_dev_is_probed;
- rte_devargs_add;
- rte_devargs_dump;
- rte_devargs_insert;
- rte_devargs_next;
- rte_devargs_parse;
- rte_devargs_parsef;
- rte_devargs_remove;
- rte_devargs_type_count;
- rte_eal_cleanup;
- rte_socket_count;
- rte_socket_id_by_idx;
-
-} DPDK_18.11;
-
-DPDK_19.08 {
- global:
-
- rte_lcore_index;
- rte_lcore_to_socket_id;
- rte_mcfg_mem_read_lock;
- rte_mcfg_mem_read_unlock;
- rte_mcfg_mem_write_lock;
- rte_mcfg_mem_write_unlock;
- rte_mcfg_mempool_read_lock;
- rte_mcfg_mempool_read_unlock;
- rte_mcfg_mempool_write_lock;
- rte_mcfg_mempool_write_unlock;
- rte_mcfg_tailq_read_lock;
- rte_mcfg_tailq_read_unlock;
- rte_mcfg_tailq_write_lock;
- rte_mcfg_tailq_write_unlock;
- rte_rand;
- rte_service_lcore_attr_get;
- rte_service_lcore_attr_reset_all;
- rte_service_may_be_active;
- rte_srand;
-
-} DPDK_19.05;
+ rte_vfio_is_enabled;
+ rte_vfio_noiommu_is_enabled;
+ rte_vfio_release_device;
+ rte_vfio_setup_device;
+ rte_vlog;
+ rte_zmalloc;
+ rte_zmalloc_socket;
+
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_efd/rte_efd_version.map b/lib/librte_efd/rte_efd_version.map
index ae60a64178..e010eecfe4 100644
--- a/lib/librte_efd/rte_efd_version.map
+++ b/lib/librte_efd/rte_efd_version.map
@@ -1,4 +1,4 @@
-DPDK_17.02 {
+DPDK_20.0 {
global:
rte_efd_create;
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index 6df42a47b8..9e1dbdebb4 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -1,35 +1,53 @@
-DPDK_2.2 {
+DPDK_20.0 {
global:
+ _rte_eth_dev_callback_process;
+ _rte_eth_dev_reset;
+ rte_eth_add_first_rx_callback;
rte_eth_add_rx_callback;
rte_eth_add_tx_callback;
rte_eth_allmulticast_disable;
rte_eth_allmulticast_enable;
rte_eth_allmulticast_get;
+ rte_eth_dev_adjust_nb_rx_tx_desc;
rte_eth_dev_allocate;
rte_eth_dev_allocated;
+ rte_eth_dev_attach_secondary;
rte_eth_dev_callback_register;
rte_eth_dev_callback_unregister;
rte_eth_dev_close;
rte_eth_dev_configure;
rte_eth_dev_count;
+ rte_eth_dev_count_avail;
+ rte_eth_dev_count_total;
rte_eth_dev_default_mac_addr_set;
+ rte_eth_dev_filter_ctrl;
rte_eth_dev_filter_supported;
rte_eth_dev_flow_ctrl_get;
rte_eth_dev_flow_ctrl_set;
+ rte_eth_dev_fw_version_get;
rte_eth_dev_get_dcb_info;
rte_eth_dev_get_eeprom;
rte_eth_dev_get_eeprom_length;
rte_eth_dev_get_mtu;
+ rte_eth_dev_get_name_by_port;
+ rte_eth_dev_get_port_by_name;
rte_eth_dev_get_reg_info;
+ rte_eth_dev_get_sec_ctx;
+ rte_eth_dev_get_supported_ptypes;
rte_eth_dev_get_vlan_offload;
- rte_eth_devices;
rte_eth_dev_info_get;
rte_eth_dev_is_valid_port;
+ rte_eth_dev_l2_tunnel_eth_type_conf;
+ rte_eth_dev_l2_tunnel_offload_set;
+ rte_eth_dev_logtype;
rte_eth_dev_mac_addr_add;
rte_eth_dev_mac_addr_remove;
+ rte_eth_dev_pool_ops_supported;
rte_eth_dev_priority_flow_ctrl_set;
+ rte_eth_dev_probing_finish;
rte_eth_dev_release_port;
+ rte_eth_dev_reset;
rte_eth_dev_rss_hash_conf_get;
rte_eth_dev_rss_hash_update;
rte_eth_dev_rss_reta_query;
@@ -38,6 +56,7 @@ DPDK_2.2 {
rte_eth_dev_rx_intr_ctl_q;
rte_eth_dev_rx_intr_disable;
rte_eth_dev_rx_intr_enable;
+ rte_eth_dev_rx_offload_name;
rte_eth_dev_rx_queue_start;
rte_eth_dev_rx_queue_stop;
rte_eth_dev_set_eeprom;
@@ -47,18 +66,28 @@ DPDK_2.2 {
rte_eth_dev_set_mtu;
rte_eth_dev_set_rx_queue_stats_mapping;
rte_eth_dev_set_tx_queue_stats_mapping;
+ rte_eth_dev_set_vlan_ether_type;
rte_eth_dev_set_vlan_offload;
rte_eth_dev_set_vlan_pvid;
rte_eth_dev_set_vlan_strip_on_queue;
rte_eth_dev_socket_id;
rte_eth_dev_start;
rte_eth_dev_stop;
+ rte_eth_dev_tx_offload_name;
rte_eth_dev_tx_queue_start;
rte_eth_dev_tx_queue_stop;
rte_eth_dev_uc_all_hash_table_set;
rte_eth_dev_uc_hash_table_set;
+ rte_eth_dev_udp_tunnel_port_add;
+ rte_eth_dev_udp_tunnel_port_delete;
rte_eth_dev_vlan_filter;
+ rte_eth_devices;
rte_eth_dma_zone_reserve;
+ rte_eth_find_next;
+ rte_eth_find_next_owned_by;
+ rte_eth_iterator_cleanup;
+ rte_eth_iterator_init;
+ rte_eth_iterator_next;
rte_eth_led_off;
rte_eth_led_on;
rte_eth_link;
@@ -75,6 +104,7 @@ DPDK_2.2 {
rte_eth_rx_queue_info_get;
rte_eth_rx_queue_setup;
rte_eth_set_queue_rate_limit;
+ rte_eth_speed_bitflag;
rte_eth_stats;
rte_eth_stats_get;
rte_eth_stats_reset;
@@ -85,66 +115,27 @@ DPDK_2.2 {
rte_eth_timesync_read_time;
rte_eth_timesync_read_tx_timestamp;
rte_eth_timesync_write_time;
- rte_eth_tx_queue_info_get;
- rte_eth_tx_queue_setup;
- rte_eth_xstats_get;
- rte_eth_xstats_reset;
-
- local: *;
-};
-
-DPDK_16.04 {
- global:
-
- rte_eth_dev_get_supported_ptypes;
- rte_eth_dev_l2_tunnel_eth_type_conf;
- rte_eth_dev_l2_tunnel_offload_set;
- rte_eth_dev_set_vlan_ether_type;
- rte_eth_dev_udp_tunnel_port_add;
- rte_eth_dev_udp_tunnel_port_delete;
- rte_eth_speed_bitflag;
rte_eth_tx_buffer_count_callback;
rte_eth_tx_buffer_drop_callback;
rte_eth_tx_buffer_init;
rte_eth_tx_buffer_set_err_callback;
-
-} DPDK_2.2;
-
-DPDK_16.07 {
- global:
-
- rte_eth_add_first_rx_callback;
- rte_eth_dev_get_name_by_port;
- rte_eth_dev_get_port_by_name;
- rte_eth_xstats_get_names;
-
-} DPDK_16.04;
-
-DPDK_17.02 {
- global:
-
- _rte_eth_dev_reset;
- rte_eth_dev_fw_version_get;
-
-} DPDK_16.07;
-
-DPDK_17.05 {
- global:
-
- rte_eth_dev_attach_secondary;
- rte_eth_find_next;
rte_eth_tx_done_cleanup;
+ rte_eth_tx_queue_info_get;
+ rte_eth_tx_queue_setup;
+ rte_eth_xstats_get;
rte_eth_xstats_get_by_id;
rte_eth_xstats_get_id_by_name;
+ rte_eth_xstats_get_names;
rte_eth_xstats_get_names_by_id;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- _rte_eth_dev_callback_process;
- rte_eth_dev_adjust_nb_rx_tx_desc;
+ rte_eth_xstats_reset;
+ rte_flow_copy;
+ rte_flow_create;
+ rte_flow_destroy;
+ rte_flow_error_set;
+ rte_flow_flush;
+ rte_flow_isolate;
+ rte_flow_query;
+ rte_flow_validate;
rte_tm_capabilities_get;
rte_tm_get_number_of_leaf_nodes;
rte_tm_hierarchy_commit;
@@ -176,65 +167,8 @@ DPDK_17.08 {
rte_tm_wred_profile_add;
rte_tm_wred_profile_delete;
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_eth_dev_get_sec_ctx;
- rte_eth_dev_pool_ops_supported;
- rte_eth_dev_reset;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_eth_dev_filter_ctrl;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_eth_dev_count_avail;
- rte_eth_dev_probing_finish;
- rte_eth_find_next_owned_by;
- rte_flow_copy;
- rte_flow_create;
- rte_flow_destroy;
- rte_flow_error_set;
- rte_flow_flush;
- rte_flow_isolate;
- rte_flow_query;
- rte_flow_validate;
-
-} DPDK_18.02;
-
-DPDK_18.08 {
- global:
-
- rte_eth_dev_logtype;
-
-} DPDK_18.05;
-
-DPDK_18.11 {
- global:
-
- rte_eth_dev_rx_offload_name;
- rte_eth_dev_tx_offload_name;
- rte_eth_iterator_cleanup;
- rte_eth_iterator_init;
- rte_eth_iterator_next;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- rte_eth_dev_count_total;
-
-} DPDK_18.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 76b3021d3a..edfc15282d 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -1,61 +1,38 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
- rte_eventdevs;
-
+ rte_event_crypto_adapter_caps_get;
+ rte_event_crypto_adapter_create;
+ rte_event_crypto_adapter_create_ext;
+ rte_event_crypto_adapter_event_port_get;
+ rte_event_crypto_adapter_free;
+ rte_event_crypto_adapter_queue_pair_add;
+ rte_event_crypto_adapter_queue_pair_del;
+ rte_event_crypto_adapter_service_id_get;
+ rte_event_crypto_adapter_start;
+ rte_event_crypto_adapter_stats_get;
+ rte_event_crypto_adapter_stats_reset;
+ rte_event_crypto_adapter_stop;
+ rte_event_dequeue_timeout_ticks;
+ rte_event_dev_attr_get;
+ rte_event_dev_close;
+ rte_event_dev_configure;
rte_event_dev_count;
+ rte_event_dev_dump;
rte_event_dev_get_dev_id;
- rte_event_dev_socket_id;
rte_event_dev_info_get;
- rte_event_dev_configure;
+ rte_event_dev_selftest;
+ rte_event_dev_service_id_get;
+ rte_event_dev_socket_id;
rte_event_dev_start;
rte_event_dev_stop;
- rte_event_dev_close;
- rte_event_dev_dump;
+ rte_event_dev_stop_flush_callback_register;
rte_event_dev_xstats_by_name_get;
rte_event_dev_xstats_get;
rte_event_dev_xstats_names_get;
rte_event_dev_xstats_reset;
-
- rte_event_port_default_conf_get;
- rte_event_port_setup;
- rte_event_port_link;
- rte_event_port_unlink;
- rte_event_port_links_get;
-
- rte_event_queue_default_conf_get;
- rte_event_queue_setup;
-
- rte_event_dequeue_timeout_ticks;
-
- rte_event_pmd_allocate;
- rte_event_pmd_release;
- rte_event_pmd_vdev_init;
- rte_event_pmd_vdev_uninit;
- rte_event_pmd_pci_probe;
- rte_event_pmd_pci_remove;
-
- local: *;
-};
-
-DPDK_17.08 {
- global:
-
- rte_event_ring_create;
- rte_event_ring_free;
- rte_event_ring_init;
- rte_event_ring_lookup;
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_event_dev_attr_get;
- rte_event_dev_service_id_get;
- rte_event_port_attr_get;
- rte_event_queue_attr_get;
-
rte_event_eth_rx_adapter_caps_get;
+ rte_event_eth_rx_adapter_cb_register;
rte_event_eth_rx_adapter_create;
rte_event_eth_rx_adapter_create_ext;
rte_event_eth_rx_adapter_free;
@@ -63,38 +40,9 @@ DPDK_17.11 {
rte_event_eth_rx_adapter_queue_del;
rte_event_eth_rx_adapter_service_id_get;
rte_event_eth_rx_adapter_start;
+ rte_event_eth_rx_adapter_stats_get;
rte_event_eth_rx_adapter_stats_reset;
rte_event_eth_rx_adapter_stop;
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_event_dev_selftest;
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_event_dev_stop_flush_callback_register;
-} DPDK_18.02;
-
-DPDK_19.05 {
- global:
-
- rte_event_crypto_adapter_caps_get;
- rte_event_crypto_adapter_create;
- rte_event_crypto_adapter_create_ext;
- rte_event_crypto_adapter_event_port_get;
- rte_event_crypto_adapter_free;
- rte_event_crypto_adapter_queue_pair_add;
- rte_event_crypto_adapter_queue_pair_del;
- rte_event_crypto_adapter_service_id_get;
- rte_event_crypto_adapter_start;
- rte_event_crypto_adapter_stats_get;
- rte_event_crypto_adapter_stats_reset;
- rte_event_crypto_adapter_stop;
- rte_event_port_unlinks_in_progress;
rte_event_eth_tx_adapter_caps_get;
rte_event_eth_tx_adapter_create;
rte_event_eth_tx_adapter_create_ext;
@@ -107,6 +55,26 @@ DPDK_19.05 {
rte_event_eth_tx_adapter_stats_get;
rte_event_eth_tx_adapter_stats_reset;
rte_event_eth_tx_adapter_stop;
+ rte_event_pmd_allocate;
+ rte_event_pmd_pci_probe;
+ rte_event_pmd_pci_remove;
+ rte_event_pmd_release;
+ rte_event_pmd_vdev_init;
+ rte_event_pmd_vdev_uninit;
+ rte_event_port_attr_get;
+ rte_event_port_default_conf_get;
+ rte_event_port_link;
+ rte_event_port_links_get;
+ rte_event_port_setup;
+ rte_event_port_unlink;
+ rte_event_port_unlinks_in_progress;
+ rte_event_queue_attr_get;
+ rte_event_queue_default_conf_get;
+ rte_event_queue_setup;
+ rte_event_ring_create;
+ rte_event_ring_free;
+ rte_event_ring_init;
+ rte_event_ring_lookup;
rte_event_timer_adapter_caps_get;
rte_event_timer_adapter_create;
rte_event_timer_adapter_create_ext;
@@ -121,11 +89,7 @@ DPDK_19.05 {
rte_event_timer_arm_burst;
rte_event_timer_arm_tmo_tick_burst;
rte_event_timer_cancel_burst;
-} DPDK_18.05;
+ rte_eventdevs;
-DPDK_19.08 {
- global:
-
- rte_event_eth_rx_adapter_cb_register;
- rte_event_eth_rx_adapter_stats_get;
-} DPDK_19.05;
+ local: *;
+};
diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map
index 49bc25c6a0..001ff660e3 100644
--- a/lib/librte_flow_classify/rte_flow_classify_version.map
+++ b/lib/librte_flow_classify/rte_flow_classify_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_gro/rte_gro_version.map b/lib/librte_gro/rte_gro_version.map
index 1606b6dc72..9f6fe79e57 100644
--- a/lib/librte_gro/rte_gro_version.map
+++ b/lib/librte_gro/rte_gro_version.map
@@ -1,4 +1,4 @@
-DPDK_17.08 {
+DPDK_20.0 {
global:
rte_gro_ctx_create;
diff --git a/lib/librte_gso/rte_gso_version.map b/lib/librte_gso/rte_gso_version.map
index e1fd453edb..8505a59c27 100644
--- a/lib/librte_gso/rte_gso_version.map
+++ b/lib/librte_gso/rte_gso_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_gso_segment;
diff --git a/lib/librte_hash/rte_hash_version.map b/lib/librte_hash/rte_hash_version.map
index 734ae28b04..138c130c1b 100644
--- a/lib/librte_hash/rte_hash_version.map
+++ b/lib/librte_hash/rte_hash_version.map
@@ -1,58 +1,33 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_fbk_hash_create;
rte_fbk_hash_find_existing;
rte_fbk_hash_free;
rte_hash_add_key;
+ rte_hash_add_key_data;
rte_hash_add_key_with_hash;
+ rte_hash_add_key_with_hash_data;
+ rte_hash_count;
rte_hash_create;
rte_hash_del_key;
rte_hash_del_key_with_hash;
rte_hash_find_existing;
rte_hash_free;
+ rte_hash_get_key_with_position;
rte_hash_hash;
+ rte_hash_iterate;
rte_hash_lookup;
rte_hash_lookup_bulk;
- rte_hash_lookup_with_hash;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_hash_add_key_data;
- rte_hash_add_key_with_hash_data;
- rte_hash_iterate;
rte_hash_lookup_bulk_data;
rte_hash_lookup_data;
+ rte_hash_lookup_with_hash;
rte_hash_lookup_with_hash_data;
rte_hash_reset;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
rte_hash_set_cmp_func;
-} DPDK_2.1;
-
-DPDK_16.07 {
- global:
-
- rte_hash_get_key_with_position;
-
-} DPDK_2.2;
-
-
-DPDK_18.08 {
- global:
-
- rte_hash_count;
-
-} DPDK_16.07;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_ip_frag/rte_ip_frag_version.map b/lib/librte_ip_frag/rte_ip_frag_version.map
index a193007c61..5dd34f828c 100644
--- a/lib/librte_ip_frag/rte_ip_frag_version.map
+++ b/lib/librte_ip_frag/rte_ip_frag_version.map
@@ -1,8 +1,9 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_ip_frag_free_death_row;
rte_ip_frag_table_create;
+ rte_ip_frag_table_destroy;
rte_ip_frag_table_statistics_dump;
rte_ipv4_frag_reassemble_packet;
rte_ipv4_fragment_packet;
@@ -12,13 +13,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_17.08 {
- global:
-
- rte_ip_frag_table_destroy;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index ee9f1961b0..3723b812fc 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_jobstats/rte_jobstats_version.map b/lib/librte_jobstats/rte_jobstats_version.map
index f89441438e..dbd2664ae2 100644
--- a/lib/librte_jobstats/rte_jobstats_version.map
+++ b/lib/librte_jobstats/rte_jobstats_version.map
@@ -1,6 +1,7 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_jobstats_abort;
rte_jobstats_context_finish;
rte_jobstats_context_init;
rte_jobstats_context_reset;
@@ -17,10 +18,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_jobstats_abort;
-
-} DPDK_2.0;
diff --git a/lib/librte_kni/rte_kni_version.map b/lib/librte_kni/rte_kni_version.map
index c877dc6aaa..9cd3cedc54 100644
--- a/lib/librte_kni/rte_kni_version.map
+++ b/lib/librte_kni/rte_kni_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_kni_alloc;
diff --git a/lib/librte_kvargs/rte_kvargs_version.map b/lib/librte_kvargs/rte_kvargs_version.map
index 8f4b4e3f8f..3ba0f4b59c 100644
--- a/lib/librte_kvargs/rte_kvargs_version.map
+++ b/lib/librte_kvargs/rte_kvargs_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_kvargs_count;
@@ -15,4 +15,4 @@ EXPERIMENTAL {
rte_kvargs_parse_delim;
rte_kvargs_strcmp;
-} DPDK_2.0;
+};
diff --git a/lib/librte_latencystats/rte_latencystats_version.map b/lib/librte_latencystats/rte_latencystats_version.map
index ac8403e821..e04e63463f 100644
--- a/lib/librte_latencystats/rte_latencystats_version.map
+++ b/lib/librte_latencystats/rte_latencystats_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_latencystats_get;
diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
index 90beac853d..500f58b806 100644
--- a/lib/librte_lpm/rte_lpm_version.map
+++ b/lib/librte_lpm/rte_lpm_version.map
@@ -1,13 +1,6 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
- rte_lpm_add;
- rte_lpm_create;
- rte_lpm_delete;
- rte_lpm_delete_all;
- rte_lpm_find_existing;
- rte_lpm_free;
- rte_lpm_is_rule_present;
rte_lpm6_add;
rte_lpm6_create;
rte_lpm6_delete;
@@ -18,29 +11,13 @@ DPDK_2.0 {
rte_lpm6_is_rule_present;
rte_lpm6_lookup;
rte_lpm6_lookup_bulk_func;
+ rte_lpm_add;
+ rte_lpm_create;
+ rte_lpm_delete;
+ rte_lpm_delete_all;
+ rte_lpm_find_existing;
+ rte_lpm_free;
+ rte_lpm_is_rule_present;
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_lpm_add;
- rte_lpm_find_existing;
- rte_lpm_create;
- rte_lpm_free;
- rte_lpm_is_rule_present;
- rte_lpm_delete;
- rte_lpm_delete_all;
-
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_lpm6_add;
- rte_lpm6_is_rule_present;
- rte_lpm6_lookup;
- rte_lpm6_lookup_bulk_func;
-
-} DPDK_16.04;
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index a4f41d7fd3..12c5e2d519 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -1,26 +1,7 @@
-DPDK_2.0 {
- global:
-
- rte_get_rx_ol_flag_name;
- rte_get_tx_ol_flag_name;
- rte_mbuf_sanity_check;
- rte_pktmbuf_dump;
- rte_pktmbuf_init;
- rte_pktmbuf_pool_init;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_pktmbuf_pool_create;
-
-} DPDK_2.0;
-
-DPDK_16.11 {
+DPDK_20.0 {
global:
+ __rte_pktmbuf_linearize;
__rte_pktmbuf_read;
rte_get_ptype_inner_l2_name;
rte_get_ptype_inner_l3_name;
@@ -31,28 +12,24 @@ DPDK_16.11 {
rte_get_ptype_name;
rte_get_ptype_tunnel_name;
rte_get_rx_ol_flag_list;
+ rte_get_rx_ol_flag_name;
rte_get_tx_ol_flag_list;
-
-} DPDK_2.1;
-
-DPDK_18.08 {
- global:
-
+ rte_get_tx_ol_flag_name;
rte_mbuf_best_mempool_ops;
rte_mbuf_platform_mempool_ops;
+ rte_mbuf_sanity_check;
rte_mbuf_set_platform_mempool_ops;
rte_mbuf_set_user_mempool_ops;
rte_mbuf_user_mempool_ops;
- rte_pktmbuf_pool_create_by_ops;
-} DPDK_16.11;
-
-DPDK_19.11 {
- global:
-
- __rte_pktmbuf_linearize;
rte_pktmbuf_clone;
+ rte_pktmbuf_dump;
+ rte_pktmbuf_init;
+ rte_pktmbuf_pool_create;
+ rte_pktmbuf_pool_create_by_ops;
+ rte_pktmbuf_pool_init;
-} DPDK_18.08;
+ local: *;
+};
EXPERIMENTAL {
global:
@@ -61,4 +38,4 @@ EXPERIMENTAL {
rte_pktmbuf_copy;
rte_pktmbuf_free_bulk;
-} DPDK_18.08;
+};
diff --git a/lib/librte_member/rte_member_version.map b/lib/librte_member/rte_member_version.map
index 019e4cd962..87780ae611 100644
--- a/lib/librte_member/rte_member_version.map
+++ b/lib/librte_member/rte_member_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_member_add;
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4607..6a425d203a 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -1,57 +1,39 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_mempool_audit;
- rte_mempool_calc_obj_size;
- rte_mempool_create;
- rte_mempool_dump;
- rte_mempool_list_dump;
- rte_mempool_lookup;
- rte_mempool_walk;
-
- local: *;
-};
-
-DPDK_16.07 {
- global:
-
rte_mempool_avail_count;
rte_mempool_cache_create;
rte_mempool_cache_flush;
rte_mempool_cache_free;
+ rte_mempool_calc_obj_size;
rte_mempool_check_cookies;
+ rte_mempool_contig_blocks_check_cookies;
+ rte_mempool_create;
rte_mempool_create_empty;
rte_mempool_default_cache;
+ rte_mempool_dump;
rte_mempool_free;
rte_mempool_generic_get;
rte_mempool_generic_put;
rte_mempool_in_use_count;
+ rte_mempool_list_dump;
+ rte_mempool_lookup;
rte_mempool_mem_iter;
rte_mempool_obj_iter;
+ rte_mempool_op_calc_mem_size_default;
+ rte_mempool_op_populate_default;
rte_mempool_ops_table;
rte_mempool_populate_anon;
rte_mempool_populate_default;
+ rte_mempool_populate_iova;
rte_mempool_populate_virt;
rte_mempool_register_ops;
rte_mempool_set_ops_byname;
+ rte_mempool_walk;
-} DPDK_2.0;
-
-DPDK_17.11 {
- global:
-
- rte_mempool_populate_iova;
-
-} DPDK_16.07;
-
-DPDK_18.05 {
- global:
-
- rte_mempool_contig_blocks_check_cookies;
- rte_mempool_op_calc_mem_size_default;
- rte_mempool_op_populate_default;
-
-} DPDK_17.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_meter/rte_meter_version.map b/lib/librte_meter/rte_meter_version.map
index 4b460d5803..46410b0369 100644
--- a/lib/librte_meter/rte_meter_version.map
+++ b/lib/librte_meter/rte_meter_version.map
@@ -1,21 +1,16 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_meter_srtcm_color_aware_check;
rte_meter_srtcm_color_blind_check;
rte_meter_srtcm_config;
+ rte_meter_srtcm_profile_config;
rte_meter_trtcm_color_aware_check;
rte_meter_trtcm_color_blind_check;
rte_meter_trtcm_config;
-
- local: *;
-};
-
-DPDK_18.08 {
- global:
-
- rte_meter_srtcm_profile_config;
rte_meter_trtcm_profile_config;
+
+ local: *;
};
EXPERIMENTAL {
diff --git a/lib/librte_metrics/rte_metrics_version.map b/lib/librte_metrics/rte_metrics_version.map
index 6ac99a44a1..85663f356e 100644
--- a/lib/librte_metrics/rte_metrics_version.map
+++ b/lib/librte_metrics/rte_metrics_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_metrics_get_names;
diff --git a/lib/librte_net/rte_net_version.map b/lib/librte_net/rte_net_version.map
index fffc4a3723..8a4e75a3a0 100644
--- a/lib/librte_net/rte_net_version.map
+++ b/lib/librte_net/rte_net_version.map
@@ -1,25 +1,14 @@
-DPDK_16.11 {
- global:
- rte_net_get_ptype;
-
- local: *;
-};
-
-DPDK_17.05 {
- global:
-
- rte_net_crc_calc;
- rte_net_crc_set_alg;
-
-} DPDK_16.11;
-
-DPDK_19.08 {
+DPDK_20.0 {
global:
rte_eth_random_addr;
rte_ether_format_addr;
+ rte_net_crc_calc;
+ rte_net_crc_set_alg;
+ rte_net_get_ptype;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_pci/rte_pci_version.map b/lib/librte_pci/rte_pci_version.map
index c0280277bb..539785f5f4 100644
--- a/lib/librte_pci/rte_pci_version.map
+++ b/lib/librte_pci/rte_pci_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
eal_parse_pci_BDF;
diff --git a/lib/librte_pdump/rte_pdump_version.map b/lib/librte_pdump/rte_pdump_version.map
index 3e744f3012..6d02ccce6d 100644
--- a/lib/librte_pdump/rte_pdump_version.map
+++ b/lib/librte_pdump/rte_pdump_version.map
@@ -1,4 +1,4 @@
-DPDK_16.07 {
+DPDK_20.0 {
global:
rte_pdump_disable;
diff --git a/lib/librte_pipeline/rte_pipeline_version.map b/lib/librte_pipeline/rte_pipeline_version.map
index 420f065d6e..64d38afecd 100644
--- a/lib/librte_pipeline/rte_pipeline_version.map
+++ b/lib/librte_pipeline/rte_pipeline_version.map
@@ -1,6 +1,8 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_pipeline_ah_packet_drop;
+ rte_pipeline_ah_packet_hijack;
rte_pipeline_check;
rte_pipeline_create;
rte_pipeline_flush;
@@ -9,42 +11,22 @@ DPDK_2.0 {
rte_pipeline_port_in_create;
rte_pipeline_port_in_disable;
rte_pipeline_port_in_enable;
+ rte_pipeline_port_in_stats_read;
rte_pipeline_port_out_create;
rte_pipeline_port_out_packet_insert;
+ rte_pipeline_port_out_stats_read;
rte_pipeline_run;
rte_pipeline_table_create;
rte_pipeline_table_default_entry_add;
rte_pipeline_table_default_entry_delete;
rte_pipeline_table_entry_add;
- rte_pipeline_table_entry_delete;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_pipeline_port_in_stats_read;
- rte_pipeline_port_out_stats_read;
- rte_pipeline_table_stats_read;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
rte_pipeline_table_entry_add_bulk;
+ rte_pipeline_table_entry_delete;
rte_pipeline_table_entry_delete_bulk;
+ rte_pipeline_table_stats_read;
-} DPDK_2.1;
-
-DPDK_16.04 {
- global:
-
- rte_pipeline_ah_packet_hijack;
- rte_pipeline_ah_packet_drop;
-
-} DPDK_2.2;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_port/rte_port_version.map b/lib/librte_port/rte_port_version.map
index 609bcec3ff..db1b8681d9 100644
--- a/lib/librte_port/rte_port_version.map
+++ b/lib/librte_port/rte_port_version.map
@@ -1,62 +1,32 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_port_ethdev_reader_ops;
+ rte_port_ethdev_writer_nodrop_ops;
rte_port_ethdev_writer_ops;
+ rte_port_fd_reader_ops;
+ rte_port_fd_writer_nodrop_ops;
+ rte_port_fd_writer_ops;
+ rte_port_kni_reader_ops;
+ rte_port_kni_writer_nodrop_ops;
+ rte_port_kni_writer_ops;
+ rte_port_ring_multi_reader_ops;
+ rte_port_ring_multi_writer_nodrop_ops;
+ rte_port_ring_multi_writer_ops;
rte_port_ring_reader_ipv4_frag_ops;
+ rte_port_ring_reader_ipv6_frag_ops;
rte_port_ring_reader_ops;
rte_port_ring_writer_ipv4_ras_ops;
+ rte_port_ring_writer_ipv6_ras_ops;
+ rte_port_ring_writer_nodrop_ops;
rte_port_ring_writer_ops;
rte_port_sched_reader_ops;
rte_port_sched_writer_ops;
rte_port_sink_ops;
rte_port_source_ops;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_port_ethdev_writer_nodrop_ops;
- rte_port_ring_reader_ipv6_frag_ops;
- rte_port_ring_writer_ipv6_ras_ops;
- rte_port_ring_writer_nodrop_ops;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
- rte_port_ring_multi_reader_ops;
- rte_port_ring_multi_writer_ops;
- rte_port_ring_multi_writer_nodrop_ops;
-
-} DPDK_2.1;
-
-DPDK_16.07 {
- global:
-
- rte_port_kni_reader_ops;
- rte_port_kni_writer_ops;
- rte_port_kni_writer_nodrop_ops;
-
-} DPDK_2.2;
-
-DPDK_16.11 {
- global:
-
- rte_port_fd_reader_ops;
- rte_port_fd_writer_ops;
- rte_port_fd_writer_nodrop_ops;
-
-} DPDK_16.07;
-
-DPDK_18.11 {
- global:
-
rte_port_sym_crypto_reader_ops;
- rte_port_sym_crypto_writer_ops;
rte_port_sym_crypto_writer_nodrop_ops;
+ rte_port_sym_crypto_writer_ops;
-} DPDK_16.11;
+ local: *;
+};
diff --git a/lib/librte_power/rte_power_version.map b/lib/librte_power/rte_power_version.map
index 042917360e..a94ab30c3d 100644
--- a/lib/librte_power/rte_power_version.map
+++ b/lib/librte_power/rte_power_version.map
@@ -1,39 +1,27 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_power_exit;
+ rte_power_freq_disable_turbo;
rte_power_freq_down;
+ rte_power_freq_enable_turbo;
rte_power_freq_max;
rte_power_freq_min;
rte_power_freq_up;
rte_power_freqs;
+ rte_power_get_capabilities;
rte_power_get_env;
rte_power_get_freq;
+ rte_power_guest_channel_send_msg;
rte_power_init;
rte_power_set_env;
rte_power_set_freq;
+ rte_power_turbo_status;
rte_power_unset_env;
local: *;
};
-DPDK_17.11 {
- global:
-
- rte_power_guest_channel_send_msg;
- rte_power_freq_disable_turbo;
- rte_power_freq_enable_turbo;
- rte_power_turbo_status;
-
-} DPDK_2.0;
-
-DPDK_18.08 {
- global:
-
- rte_power_get_capabilities;
-
-} DPDK_17.11;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_rawdev/rte_rawdev_version.map b/lib/librte_rawdev/rte_rawdev_version.map
index b61dbff11c..d847c9e0d3 100644
--- a/lib/librte_rawdev/rte_rawdev_version.map
+++ b/lib/librte_rawdev/rte_rawdev_version.map
@@ -1,4 +1,4 @@
-DPDK_18.08 {
+DPDK_20.0 {
global:
rte_rawdev_close;
@@ -17,8 +17,8 @@ DPDK_18.08 {
rte_rawdev_pmd_release;
rte_rawdev_queue_conf_get;
rte_rawdev_queue_count;
- rte_rawdev_queue_setup;
rte_rawdev_queue_release;
+ rte_rawdev_queue_setup;
rte_rawdev_reset;
rte_rawdev_selftest;
rte_rawdev_set_attr;
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
index f8b9ef2abb..787e51ef27 100644
--- a/lib/librte_rcu/rte_rcu_version.map
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_reorder/rte_reorder_version.map b/lib/librte_reorder/rte_reorder_version.map
index 0a8a54de83..cf444062df 100644
--- a/lib/librte_reorder/rte_reorder_version.map
+++ b/lib/librte_reorder/rte_reorder_version.map
@@ -1,13 +1,13 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_reorder_create;
- rte_reorder_init;
+ rte_reorder_drain;
rte_reorder_find_existing;
- rte_reorder_reset;
rte_reorder_free;
+ rte_reorder_init;
rte_reorder_insert;
- rte_reorder_drain;
+ rte_reorder_reset;
local: *;
};
diff --git a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map
index 510c1386e0..89d84bcf48 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -1,8 +1,9 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_ring_create;
rte_ring_dump;
+ rte_ring_free;
rte_ring_get_memsize;
rte_ring_init;
rte_ring_list_dump;
@@ -11,13 +12,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_2.2 {
- global:
-
- rte_ring_free;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_sched/rte_sched_version.map b/lib/librte_sched/rte_sched_version.map
index 729588794e..1b48bfbf36 100644
--- a/lib/librte_sched/rte_sched_version.map
+++ b/lib/librte_sched/rte_sched_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_approx;
@@ -14,6 +14,9 @@ DPDK_2.0 {
rte_sched_port_enqueue;
rte_sched_port_free;
rte_sched_port_get_memory_footprint;
+ rte_sched_port_pkt_read_color;
+ rte_sched_port_pkt_read_tree_path;
+ rte_sched_port_pkt_write;
rte_sched_queue_read_stats;
rte_sched_subport_config;
rte_sched_subport_read_stats;
@@ -21,15 +24,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_2.1 {
- global:
-
- rte_sched_port_pkt_write;
- rte_sched_port_pkt_read_tree_path;
- rte_sched_port_pkt_read_color;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
index 53267bf3cc..b07314bbf4 100644
--- a/lib/librte_security/rte_security_version.map
+++ b/lib/librte_security/rte_security_version.map
@@ -1,4 +1,4 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
rte_security_attach_session;
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
index 6662679c36..adbb7be9d9 100644
--- a/lib/librte_stack/rte_stack_version.map
+++ b/lib/librte_stack/rte_stack_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_table/rte_table_version.map b/lib/librte_table/rte_table_version.map
index 6237252bec..40f72b1fe8 100644
--- a/lib/librte_table/rte_table_version.map
+++ b/lib/librte_table/rte_table_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_table_acl_ops;
diff --git a/lib/librte_telemetry/rte_telemetry_version.map b/lib/librte_telemetry/rte_telemetry_version.map
index fa62d7718c..c1f4613af5 100644
--- a/lib/librte_telemetry/rte_telemetry_version.map
+++ b/lib/librte_telemetry/rte_telemetry_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_timer/rte_timer_version.map b/lib/librte_timer/rte_timer_version.map
index 72f75c8181..2a59d3f081 100644
--- a/lib/librte_timer/rte_timer_version.map
+++ b/lib/librte_timer/rte_timer_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_timer_dump_stats;
@@ -14,16 +14,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_19.05 {
- global:
-
- rte_timer_dump_stats;
- rte_timer_manage;
- rte_timer_reset;
- rte_timer_stop;
- rte_timer_subsystem_init;
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 5f1d4a75c2..8e9ffac2c2 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -1,64 +1,34 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_vhost_avail_entries;
rte_vhost_dequeue_burst;
rte_vhost_driver_callback_register;
- rte_vhost_driver_register;
- rte_vhost_enable_guest_notification;
- rte_vhost_enqueue_burst;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_vhost_driver_unregister;
-
-} DPDK_2.0;
-
-DPDK_16.07 {
- global:
-
- rte_vhost_avail_entries;
- rte_vhost_get_ifname;
- rte_vhost_get_numa_node;
- rte_vhost_get_queue_num;
-
-} DPDK_2.1;
-
-DPDK_17.05 {
- global:
-
rte_vhost_driver_disable_features;
rte_vhost_driver_enable_features;
rte_vhost_driver_get_features;
+ rte_vhost_driver_register;
rte_vhost_driver_set_features;
rte_vhost_driver_start;
+ rte_vhost_driver_unregister;
+ rte_vhost_enable_guest_notification;
+ rte_vhost_enqueue_burst;
+ rte_vhost_get_ifname;
rte_vhost_get_mem_table;
rte_vhost_get_mtu;
rte_vhost_get_negotiated_features;
+ rte_vhost_get_numa_node;
+ rte_vhost_get_queue_num;
rte_vhost_get_vhost_vring;
rte_vhost_get_vring_num;
rte_vhost_gpa_to_vva;
rte_vhost_log_used_vring;
rte_vhost_log_write;
-
-} DPDK_16.07;
-
-DPDK_17.08 {
- global:
-
rte_vhost_rx_queue_count;
-
-} DPDK_17.05;
-
-DPDK_18.02 {
- global:
-
rte_vhost_vring_call;
-} DPDK_17.08;
+ local: *;
+};
EXPERIMENTAL {
global:
--
2.17.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v5 10/10] buildtools: add ABI versioning check script
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (9 preceding siblings ...)
2019-10-24 9:46 2% ` [dpdk-dev] [PATCH v5 09/10] build: change ABI version to 20.0 Anatoly Burakov
@ 2019-10-24 9:46 23% ` Anatoly Burakov
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, john.mcnamara, ray.kinsella, bruce.richardson,
thomas, david.marchand, Pawel Modrak
From: Marcin Baran <marcinx.baran@intel.com>
Add a shell script that checks whether built libraries are
versioned with the expected ABI (the current ABI, the current ABI + 1,
or EXPERIMENTAL).
The following command was used to verify the current source tree
(assuming the build directory is ./build):
find ./build/lib ./build/drivers -name \*.so \
-exec ./buildtools/check-abi-version.sh {} \; -print
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v2:
- Moved this to the end of the patchset
- Fixed a bug where ABI symbols were not found because the .so
did not declare any public symbols
buildtools/check-abi-version.sh | 54 +++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
create mode 100755 buildtools/check-abi-version.sh
diff --git a/buildtools/check-abi-version.sh b/buildtools/check-abi-version.sh
new file mode 100755
index 0000000000..29aea97735
--- /dev/null
+++ b/buildtools/check-abi-version.sh
@@ -0,0 +1,54 @@
+#!/bin/sh
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+# Check whether library symbols have correct
+# version (provided ABI number or provided ABI
+# number + 1 or EXPERIMENTAL).
+# Args:
+# $1: path of the library .so file
+# $2: ABI major version number to check
+# (defaults to ABI_VERSION file value)
+
+if [ -z "$1" ]; then
+ echo "Script checks whether library symbols have"
+ echo "correct version (ABI_VER/ABI_VER+1/EXPERIMENTAL)"
+ echo "Usage:"
+ echo " $0 SO_FILE_PATH [ABI_VER]"
+ exit 1
+fi
+
+LIB="$1"
+DEFAULT_ABI=$(cat "$(dirname \
+ $(readlink -f $0))/../config/ABI_VERSION" | \
+ cut -d'.' -f 1)
+ABIVER="DPDK_${2-$DEFAULT_ABI}"
+NEXT_ABIVER="DPDK_$((${2-$DEFAULT_ABI}+1))"
+
+ret=0
+
+# get output of objdump
+OBJ_DUMP_OUTPUT=`objdump -TC --section=.text ${LIB} 2>&1 | grep ".text"`
+
+# there may not be any .text sections in the .so file, in which case exit early
+echo "${OBJ_DUMP_OUTPUT}" | grep "not found in any input file" -q
+if [ "$?" -eq 0 ]; then
+ exit 0
+fi
+
+# we have symbols, so let's see if the versions are correct
+for SYM in `echo "${OBJ_DUMP_OUTPUT}" | awk '{print $(NF-1) "-" $NF}'`
+do
+ version=$(echo $SYM | cut -d'-' -f 1)
+ symbol=$(echo $SYM | cut -d'-' -f 2)
+ case $version in (*"$ABIVER"*|*"$NEXT_ABIVER"*|"EXPERIMENTAL")
+ ;;
+ (*)
+ echo "Warning: symbol $symbol ($version) should be annotated " \
+ "as ABI version $ABIVER / $NEXT_ABIVER, or EXPERIMENTAL."
+ ret=1
+ ;;
+ esac
+done
+
+exit $ret
--
2.17.1
* [dpdk-dev] [PATCH v5 08/10] drivers/octeontx: add missing public symbol
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (7 preceding siblings ...)
2019-10-24 9:46 6% ` [dpdk-dev] [PATCH v5 07/10] distributor: rename v2.0 ABI to _single suffix Anatoly Burakov
@ 2019-10-24 9:46 3% ` Anatoly Burakov
2019-10-24 9:46 2% ` [dpdk-dev] [PATCH v5 09/10] build: change ABI version to 20.0 Anatoly Burakov
2019-10-24 9:46 23% ` [dpdk-dev] [PATCH v5 10/10] buildtools: add ABI versioning check script Anatoly Burakov
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Jerin Jacob, john.mcnamara, ray.kinsella, bruce.richardson,
thomas, david.marchand, pbhagavatula, stable
The logtype symbol was missing from the .map file. Add it.
Fixes: d8dd31652cf4 ("common/octeontx: move mbox to common folder")
Cc: pbhagavatula@caviumnetworks.com
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v2:
- add this patch to avoid compile breakage when bumping ABI
drivers/common/octeontx/rte_common_octeontx_version.map | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index f04b3b7f8a..a9b3cff9bc 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,6 +1,7 @@
DPDK_18.05 {
global:
+ octeontx_logtype_mbox;
octeontx_mbox_set_ram_mbox_base;
octeontx_mbox_set_reg;
octeontx_mbox_send;
--
2.17.1
* [dpdk-dev] [PATCH v5 07/10] distributor: rename v2.0 ABI to _single suffix
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (6 preceding siblings ...)
2019-10-24 9:46 4% ` [dpdk-dev] [PATCH v5 06/10] distributor: " Anatoly Burakov
@ 2019-10-24 9:46 6% ` Anatoly Burakov
2019-10-24 9:46 3% ` [dpdk-dev] [PATCH v5 08/10] drivers/octeontx: add missing public symbol Anatoly Burakov
` (2 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, David Hunt, john.mcnamara, ray.kinsella,
bruce.richardson, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
The original ABI versioning was slightly misleading in that the
DPDK 2.0 ABI was really a single mode for the distributor, and is
used as such throughout the distributor code.
Fix this by renaming all _v20 APIs to _single APIs and removing
symbol versioning.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
---
Notes:
v4:
- Changed it back to how it was with v2
- Removed remaining v2.0 symbols
v3:
- Removed single mode from distributor as per Dave's comments
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_distributor/Makefile | 2 +-
lib/librte_distributor/meson.build | 2 +-
lib/librte_distributor/rte_distributor.c | 24 +++++-----
.../rte_distributor_private.h | 10 ++--
...ributor_v20.c => rte_distributor_single.c} | 48 +++++++++----------
...ributor_v20.h => rte_distributor_single.h} | 26 +++++-----
.../rte_distributor_version.map | 18 +------
7 files changed, 58 insertions(+), 72 deletions(-)
rename lib/librte_distributor/{rte_distributor_v20.c => rte_distributor_single.c} (87%)
rename lib/librte_distributor/{rte_distributor_v20.h => rte_distributor_single.h} (89%)
diff --git a/lib/librte_distributor/Makefile b/lib/librte_distributor/Makefile
index 0ef80dcff4..d9d0089166 100644
--- a/lib/librte_distributor/Makefile
+++ b/lib/librte_distributor/Makefile
@@ -15,7 +15,7 @@ EXPORT_MAP := rte_distributor_version.map
LIBABIVER := 1
# all source are stored in SRCS-y
-SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) := rte_distributor_v20.c
+SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) := rte_distributor_single.c
SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += rte_distributor.c
ifeq ($(CONFIG_RTE_ARCH_X86),y)
SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += rte_distributor_match_sse.c
diff --git a/lib/librte_distributor/meson.build b/lib/librte_distributor/meson.build
index dba7e3b2aa..bd12ddb2f1 100644
--- a/lib/librte_distributor/meson.build
+++ b/lib/librte_distributor/meson.build
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017 Intel Corporation
-sources = files('rte_distributor.c', 'rte_distributor_v20.c')
+sources = files('rte_distributor.c', 'rte_distributor_single.c')
if arch_subdir == 'x86'
sources += files('rte_distributor_match_sse.c')
else
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index ca3f21b833..b4fc0bfead 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -18,7 +18,7 @@
#include "rte_distributor_private.h"
#include "rte_distributor.h"
-#include "rte_distributor_v20.h"
+#include "rte_distributor_single.h"
TAILQ_HEAD(rte_dist_burst_list, rte_distributor);
@@ -42,7 +42,7 @@ rte_distributor_request_pkt(struct rte_distributor *d,
volatile int64_t *retptr64;
if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
- rte_distributor_request_pkt_v20(d->d_v20,
+ rte_distributor_request_pkt_single(d->d_single,
worker_id, oldpkt[0]);
return;
}
@@ -88,7 +88,8 @@ rte_distributor_poll_pkt(struct rte_distributor *d,
unsigned int i;
if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
- pkts[0] = rte_distributor_poll_pkt_v20(d->d_v20, worker_id);
+ pkts[0] = rte_distributor_poll_pkt_single(d->d_single,
+ worker_id);
return (pkts[0]) ? 1 : 0;
}
@@ -123,7 +124,7 @@ rte_distributor_get_pkt(struct rte_distributor *d,
if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
if (return_count <= 1) {
- pkts[0] = rte_distributor_get_pkt_v20(d->d_v20,
+ pkts[0] = rte_distributor_get_pkt_single(d->d_single,
worker_id, oldpkt[0]);
return (pkts[0]) ? 1 : 0;
} else
@@ -153,7 +154,7 @@ rte_distributor_return_pkt(struct rte_distributor *d,
if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
if (num == 1)
- return rte_distributor_return_pkt_v20(d->d_v20,
+ return rte_distributor_return_pkt_single(d->d_single,
worker_id, oldpkt[0]);
else
return -EINVAL;
@@ -330,7 +331,8 @@ rte_distributor_process(struct rte_distributor *d,
if (d->alg_type == RTE_DIST_ALG_SINGLE) {
/* Call the old API */
- return rte_distributor_process_v20(d->d_v20, mbufs, num_mbufs);
+ return rte_distributor_process_single(d->d_single,
+ mbufs, num_mbufs);
}
if (unlikely(num_mbufs == 0)) {
@@ -464,7 +466,7 @@ rte_distributor_returned_pkts(struct rte_distributor *d,
if (d->alg_type == RTE_DIST_ALG_SINGLE) {
/* Call the old API */
- return rte_distributor_returned_pkts_v20(d->d_v20,
+ return rte_distributor_returned_pkts_single(d->d_single,
mbufs, max_mbufs);
}
@@ -507,7 +509,7 @@ rte_distributor_flush(struct rte_distributor *d)
if (d->alg_type == RTE_DIST_ALG_SINGLE) {
/* Call the old API */
- return rte_distributor_flush_v20(d->d_v20);
+ return rte_distributor_flush_single(d->d_single);
}
flushed = total_outstanding(d);
@@ -538,7 +540,7 @@ rte_distributor_clear_returns(struct rte_distributor *d)
if (d->alg_type == RTE_DIST_ALG_SINGLE) {
/* Call the old API */
- rte_distributor_clear_returns_v20(d->d_v20);
+ rte_distributor_clear_returns_single(d->d_single);
return;
}
@@ -578,9 +580,9 @@ rte_distributor_create(const char *name,
rte_errno = ENOMEM;
return NULL;
}
- d->d_v20 = rte_distributor_create_v20(name,
+ d->d_single = rte_distributor_create_single(name,
socket_id, num_workers);
- if (d->d_v20 == NULL) {
+ if (d->d_single == NULL) {
free(d);
/* rte_errno will have been set */
return NULL;
diff --git a/lib/librte_distributor/rte_distributor_private.h b/lib/librte_distributor/rte_distributor_private.h
index 33cd89410c..bdb62b6e92 100644
--- a/lib/librte_distributor/rte_distributor_private.h
+++ b/lib/librte_distributor/rte_distributor_private.h
@@ -55,7 +55,7 @@ extern "C" {
* the next cache line to worker 0, we pad this out to three cache lines.
* Only 64-bits of the memory is actually used though.
*/
-union rte_distributor_buffer_v20 {
+union rte_distributor_buffer_single {
volatile int64_t bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
} __rte_cache_aligned;
@@ -80,8 +80,8 @@ struct rte_distributor_returned_pkts {
struct rte_mbuf *mbufs[RTE_DISTRIB_MAX_RETURNS];
};
-struct rte_distributor_v20 {
- TAILQ_ENTRY(rte_distributor_v20) next; /**< Next in list. */
+struct rte_distributor_single {
+ TAILQ_ENTRY(rte_distributor_single) next; /**< Next in list. */
char name[RTE_DISTRIBUTOR_NAMESIZE]; /**< Name of the ring. */
unsigned int num_workers; /**< Number of workers polling */
@@ -96,7 +96,7 @@ struct rte_distributor_v20 {
struct rte_distributor_backlog backlog[RTE_DISTRIB_MAX_WORKERS];
- union rte_distributor_buffer_v20 bufs[RTE_DISTRIB_MAX_WORKERS];
+ union rte_distributor_buffer_single bufs[RTE_DISTRIB_MAX_WORKERS];
struct rte_distributor_returned_pkts returns;
};
@@ -154,7 +154,7 @@ struct rte_distributor {
enum rte_distributor_match_function dist_match_fn;
- struct rte_distributor_v20 *d_v20;
+ struct rte_distributor_single *d_single;
};
void
diff --git a/lib/librte_distributor/rte_distributor_v20.c b/lib/librte_distributor/rte_distributor_single.c
similarity index 87%
rename from lib/librte_distributor/rte_distributor_v20.c
rename to lib/librte_distributor/rte_distributor_single.c
index 14ee0360ec..9a6ef826c9 100644
--- a/lib/librte_distributor/rte_distributor_v20.c
+++ b/lib/librte_distributor/rte_distributor_single.c
@@ -15,10 +15,10 @@
#include <rte_pause.h>
#include <rte_tailq.h>
-#include "rte_distributor_v20.h"
+#include "rte_distributor_single.h"
#include "rte_distributor_private.h"
-TAILQ_HEAD(rte_distributor_list, rte_distributor_v20);
+TAILQ_HEAD(rte_distributor_list, rte_distributor_single);
static struct rte_tailq_elem rte_distributor_tailq = {
.name = "RTE_DISTRIBUTOR",
@@ -28,10 +28,10 @@ EAL_REGISTER_TAILQ(rte_distributor_tailq)
/**** APIs called by workers ****/
void
-rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_request_pkt_single(struct rte_distributor_single *d,
unsigned worker_id, struct rte_mbuf *oldpkt)
{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
+ union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
while (unlikely(buf->bufptr64 & RTE_DISTRIB_FLAGS_MASK))
@@ -40,10 +40,10 @@ rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
}
struct rte_mbuf *
-rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_poll_pkt_single(struct rte_distributor_single *d,
unsigned worker_id)
{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
+ union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
if (buf->bufptr64 & RTE_DISTRIB_GET_BUF)
return NULL;
@@ -53,21 +53,21 @@ rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
}
struct rte_mbuf *
-rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_get_pkt_single(struct rte_distributor_single *d,
unsigned worker_id, struct rte_mbuf *oldpkt)
{
struct rte_mbuf *ret;
- rte_distributor_request_pkt_v20(d, worker_id, oldpkt);
- while ((ret = rte_distributor_poll_pkt_v20(d, worker_id)) == NULL)
+ rte_distributor_request_pkt_single(d, worker_id, oldpkt);
+ while ((ret = rte_distributor_poll_pkt_single(d, worker_id)) == NULL)
rte_pause();
return ret;
}
int
-rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_return_pkt_single(struct rte_distributor_single *d,
unsigned worker_id, struct rte_mbuf *oldpkt)
{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
+ union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
buf->bufptr64 = req;
@@ -98,7 +98,7 @@ backlog_pop(struct rte_distributor_backlog *bl)
/* stores a packet returned from a worker inside the returns array */
static inline void
-store_return(uintptr_t oldbuf, struct rte_distributor_v20 *d,
+store_return(uintptr_t oldbuf, struct rte_distributor_single *d,
unsigned *ret_start, unsigned *ret_count)
{
/* store returns in a circular buffer - code is branch-free */
@@ -109,7 +109,7 @@ store_return(uintptr_t oldbuf, struct rte_distributor_v20 *d,
}
static inline void
-handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
+handle_worker_shutdown(struct rte_distributor_single *d, unsigned int wkr)
{
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
@@ -139,7 +139,7 @@ handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
* Note that the tags were set before first level call
* to rte_distributor_process.
*/
- rte_distributor_process_v20(d, pkts, i);
+ rte_distributor_process_single(d, pkts, i);
bl->count = bl->start = 0;
}
}
@@ -149,7 +149,7 @@ handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
* to do a partial flush.
*/
static int
-process_returns(struct rte_distributor_v20 *d)
+process_returns(struct rte_distributor_single *d)
{
unsigned wkr;
unsigned flushed = 0;
@@ -188,7 +188,7 @@ process_returns(struct rte_distributor_v20 *d)
/* process a set of packets to distribute them to workers */
int
-rte_distributor_process_v20(struct rte_distributor_v20 *d,
+rte_distributor_process_single(struct rte_distributor_single *d,
struct rte_mbuf **mbufs, unsigned num_mbufs)
{
unsigned next_idx = 0;
@@ -292,7 +292,7 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
/* return to the caller, packets returned from workers */
int
-rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
+rte_distributor_returned_pkts_single(struct rte_distributor_single *d,
struct rte_mbuf **mbufs, unsigned max_mbufs)
{
struct rte_distributor_returned_pkts *returns = &d->returns;
@@ -314,7 +314,7 @@ rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
* being worked on or queued up in a backlog.
*/
static inline unsigned
-total_outstanding(const struct rte_distributor_v20 *d)
+total_outstanding(const struct rte_distributor_single *d)
{
unsigned wkr, total_outstanding;
@@ -329,19 +329,19 @@ total_outstanding(const struct rte_distributor_v20 *d)
/* flush the distributor, so that there are no outstanding packets in flight or
* queued up. */
int
-rte_distributor_flush_v20(struct rte_distributor_v20 *d)
+rte_distributor_flush_single(struct rte_distributor_single *d)
{
const unsigned flushed = total_outstanding(d);
while (total_outstanding(d) > 0)
- rte_distributor_process_v20(d, NULL, 0);
+ rte_distributor_process_single(d, NULL, 0);
return flushed;
}
/* clears the internal returns array in the distributor */
void
-rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d)
+rte_distributor_clear_returns_single(struct rte_distributor_single *d)
{
d->returns.start = d->returns.count = 0;
#ifndef __OPTIMIZE__
@@ -350,12 +350,12 @@ rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d)
}
/* creates a distributor instance */
-struct rte_distributor_v20 *
-rte_distributor_create_v20(const char *name,
+struct rte_distributor_single *
+rte_distributor_create_single(const char *name,
unsigned socket_id,
unsigned num_workers)
{
- struct rte_distributor_v20 *d;
+ struct rte_distributor_single *d;
struct rte_distributor_list *distributor_list;
char mz_name[RTE_MEMZONE_NAMESIZE];
const struct rte_memzone *mz;
diff --git a/lib/librte_distributor/rte_distributor_v20.h b/lib/librte_distributor/rte_distributor_single.h
similarity index 89%
rename from lib/librte_distributor/rte_distributor_v20.h
rename to lib/librte_distributor/rte_distributor_single.h
index 12865658ba..2f80aa43d1 100644
--- a/lib/librte_distributor/rte_distributor_v20.h
+++ b/lib/librte_distributor/rte_distributor_single.h
@@ -2,8 +2,8 @@
* Copyright(c) 2010-2014 Intel Corporation
*/
-#ifndef _RTE_DISTRIB_V20_H_
-#define _RTE_DISTRIB_V20_H_
+#ifndef _RTE_DISTRIB_SINGLE_H_
+#define _RTE_DISTRIB_SINGLE_H_
/**
* @file
@@ -19,7 +19,7 @@ extern "C" {
#define RTE_DISTRIBUTOR_NAMESIZE 32 /**< Length of name for instance */
-struct rte_distributor_v20;
+struct rte_distributor_single;
struct rte_mbuf;
/**
@@ -38,8 +38,8 @@ struct rte_mbuf;
* @return
* The newly created distributor instance
*/
-struct rte_distributor_v20 *
-rte_distributor_create_v20(const char *name, unsigned int socket_id,
+struct rte_distributor_single *
+rte_distributor_create_single(const char *name, unsigned int socket_id,
unsigned int num_workers);
/* *** APIS to be called on the distributor lcore *** */
@@ -74,7 +74,7 @@ rte_distributor_create_v20(const char *name, unsigned int socket_id,
* The number of mbufs processed.
*/
int
-rte_distributor_process_v20(struct rte_distributor_v20 *d,
+rte_distributor_process_single(struct rte_distributor_single *d,
struct rte_mbuf **mbufs, unsigned int num_mbufs);
/**
@@ -92,7 +92,7 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
* The number of mbufs returned in the mbufs array.
*/
int
-rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
+rte_distributor_returned_pkts_single(struct rte_distributor_single *d,
struct rte_mbuf **mbufs, unsigned int max_mbufs);
/**
@@ -107,7 +107,7 @@ rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
* The number of queued/in-flight packets that were completed by this call.
*/
int
-rte_distributor_flush_v20(struct rte_distributor_v20 *d);
+rte_distributor_flush_single(struct rte_distributor_single *d);
/**
* Clears the array of returned packets used as the source for the
@@ -119,7 +119,7 @@ rte_distributor_flush_v20(struct rte_distributor_v20 *d);
* The distributor instance to be used
*/
void
-rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d);
+rte_distributor_clear_returns_single(struct rte_distributor_single *d);
/* *** APIS to be called on the worker lcores *** */
/*
@@ -148,7 +148,7 @@ rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d);
* A new packet to be processed by the worker thread.
*/
struct rte_mbuf *
-rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_get_pkt_single(struct rte_distributor_single *d,
unsigned int worker_id, struct rte_mbuf *oldpkt);
/**
@@ -164,7 +164,7 @@ rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
* The previous packet being processed by the worker
*/
int
-rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_return_pkt_single(struct rte_distributor_single *d,
unsigned int worker_id, struct rte_mbuf *mbuf);
/**
@@ -188,7 +188,7 @@ rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
* The previous packet, if any, being processed by the worker
*/
void
-rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_request_pkt_single(struct rte_distributor_single *d,
unsigned int worker_id, struct rte_mbuf *oldpkt);
/**
@@ -208,7 +208,7 @@ rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
* packet is yet available.
*/
struct rte_mbuf *
-rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_poll_pkt_single(struct rte_distributor_single *d,
unsigned int worker_id);
#ifdef __cplusplus
diff --git a/lib/librte_distributor/rte_distributor_version.map b/lib/librte_distributor/rte_distributor_version.map
index 3a285b394e..00e26b4804 100644
--- a/lib/librte_distributor/rte_distributor_version.map
+++ b/lib/librte_distributor/rte_distributor_version.map
@@ -1,19 +1,3 @@
-DPDK_2.0 {
- global:
-
- rte_distributor_clear_returns;
- rte_distributor_create;
- rte_distributor_flush;
- rte_distributor_get_pkt;
- rte_distributor_poll_pkt;
- rte_distributor_process;
- rte_distributor_request_pkt;
- rte_distributor_return_pkt;
- rte_distributor_returned_pkts;
-
- local: *;
-};
-
DPDK_17.05 {
global:
@@ -26,4 +10,4 @@ DPDK_17.05 {
rte_distributor_request_pkt;
rte_distributor_return_pkt;
rte_distributor_returned_pkts;
-} DPDK_2.0;
+};
--
2.17.1
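For out-of-tree code that called the legacy single-packet API by its versioned names, the rename above is purely mechanical: only identifiers change, not behavior. Below is a minimal migration sketch, assuming GNU sed; `app.c` and its contents are hypothetical, while the identifier renames mirror the ones performed in this patch:

```shell
# Create a hypothetical caller that still uses the old _v20 names.
cat > app.c <<'EOF'
#include "rte_distributor_v20.h"

static struct rte_distributor_v20 *d;

void drain(void)
{
	rte_distributor_flush_v20(d);
	rte_distributor_clear_returns_v20(d);
}
EOF

# Rewrite every rte_distributor*_v20 identifier to its _single spelling.
# The same substitution also turns the rte_distributor_v20.h include
# into rte_distributor_single.h, since the header name contains the
# identifier being renamed.
sed -i 's/\(rte_distributor[a-z_]*\)_v20/\1_single/g' app.c

cat app.c
```

One substitution covers the struct, the buffer union, the header name, and all the worker/distributor functions, which is why the patch itself is almost entirely a search-and-replace diff.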
* [dpdk-dev] [PATCH v5 06/10] distributor: remove deprecated code
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (5 preceding siblings ...)
2019-10-24 9:46 2% ` [dpdk-dev] [PATCH v5 05/10] lpm: " Anatoly Burakov
@ 2019-10-24 9:46 4% ` Anatoly Burakov
2019-10-24 9:46 6% ` [dpdk-dev] [PATCH v5 07/10] distributor: rename v2.0 ABI to _single suffix Anatoly Burakov
` (3 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, David Hunt, john.mcnamara, ray.kinsella,
bruce.richardson, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: David Hunt <david.hunt@intel.com>
---
Notes:
v5:
- Fixed shared library linking error due to versioning still enabled
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_distributor/rte_distributor.c | 56 +++--------------
.../rte_distributor_v1705.h | 61 -------------------
lib/librte_distributor/rte_distributor_v20.c | 9 ---
3 files changed, 9 insertions(+), 117 deletions(-)
delete mode 100644 lib/librte_distributor/rte_distributor_v1705.h
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index 21eb1fb0a1..ca3f21b833 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -19,7 +19,6 @@
#include "rte_distributor_private.h"
#include "rte_distributor.h"
#include "rte_distributor_v20.h"
-#include "rte_distributor_v1705.h"
TAILQ_HEAD(rte_dist_burst_list, rte_distributor);
@@ -33,7 +32,7 @@ EAL_REGISTER_TAILQ(rte_dist_burst_tailq)
/**** Burst Packet APIs called by workers ****/
void
-rte_distributor_request_pkt_v1705(struct rte_distributor *d,
+rte_distributor_request_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **oldpkt,
unsigned int count)
{
@@ -78,14 +77,9 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
*/
*retptr64 |= RTE_DISTRIB_GET_BUF;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_request_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(void rte_distributor_request_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt,
- unsigned int count),
- rte_distributor_request_pkt_v1705);
int
-rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
+rte_distributor_poll_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **pkts)
{
struct rte_distributor_buffer *buf = &d->bufs[worker_id];
@@ -119,13 +113,9 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
return count;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_poll_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_poll_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts),
- rte_distributor_poll_pkt_v1705);
int
-rte_distributor_get_pkt_v1705(struct rte_distributor *d,
+rte_distributor_get_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **pkts,
struct rte_mbuf **oldpkt, unsigned int return_count)
{
@@ -153,14 +143,9 @@ rte_distributor_get_pkt_v1705(struct rte_distributor *d,
}
return count;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_get_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_get_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts,
- struct rte_mbuf **oldpkt, unsigned int return_count),
- rte_distributor_get_pkt_v1705);
int
-rte_distributor_return_pkt_v1705(struct rte_distributor *d,
+rte_distributor_return_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **oldpkt, int num)
{
struct rte_distributor_buffer *buf = &d->bufs[worker_id];
@@ -187,10 +172,6 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_return_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_return_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt, int num),
- rte_distributor_return_pkt_v1705);
/**** APIs called on distributor core ***/
@@ -336,7 +317,7 @@ release(struct rte_distributor *d, unsigned int wkr)
/* process a set of packets to distribute them to workers */
int
-rte_distributor_process_v1705(struct rte_distributor *d,
+rte_distributor_process(struct rte_distributor *d,
struct rte_mbuf **mbufs, unsigned int num_mbufs)
{
unsigned int next_idx = 0;
@@ -470,14 +451,10 @@ rte_distributor_process_v1705(struct rte_distributor *d,
return num_mbufs;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_process, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_process(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs),
- rte_distributor_process_v1705);
/* return to the caller, packets returned from workers */
int
-rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
+rte_distributor_returned_pkts(struct rte_distributor *d,
struct rte_mbuf **mbufs, unsigned int max_mbufs)
{
struct rte_distributor_returned_pkts *returns = &d->returns;
@@ -502,10 +479,6 @@ rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
return retval;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_returned_pkts, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_returned_pkts(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs),
- rte_distributor_returned_pkts_v1705);
/*
* Return the number of packets in-flight in a distributor, i.e. packets
@@ -527,7 +500,7 @@ total_outstanding(const struct rte_distributor *d)
* queued up.
*/
int
-rte_distributor_flush_v1705(struct rte_distributor *d)
+rte_distributor_flush(struct rte_distributor *d)
{
unsigned int flushed;
unsigned int wkr;
@@ -556,13 +529,10 @@ rte_distributor_flush_v1705(struct rte_distributor *d)
return flushed;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_flush, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_flush(struct rte_distributor *d),
- rte_distributor_flush_v1705);
/* clears the internal returns array in the distributor */
void
-rte_distributor_clear_returns_v1705(struct rte_distributor *d)
+rte_distributor_clear_returns(struct rte_distributor *d)
{
unsigned int wkr;
@@ -576,13 +546,10 @@ rte_distributor_clear_returns_v1705(struct rte_distributor *d)
for (wkr = 0; wkr < d->num_workers; wkr++)
d->bufs[wkr].retptr64[0] = 0;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_clear_returns, _v1705, 17.05);
-MAP_STATIC_SYMBOL(void rte_distributor_clear_returns(struct rte_distributor *d),
- rte_distributor_clear_returns_v1705);
/* creates a distributor instance */
struct rte_distributor *
-rte_distributor_create_v1705(const char *name,
+rte_distributor_create(const char *name,
unsigned int socket_id,
unsigned int num_workers,
unsigned int alg_type)
@@ -656,8 +623,3 @@ rte_distributor_create_v1705(const char *name,
return d;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_create, _v1705, 17.05);
-MAP_STATIC_SYMBOL(struct rte_distributor *rte_distributor_create(
- const char *name, unsigned int socket_id,
- unsigned int num_workers, unsigned int alg_type),
- rte_distributor_create_v1705);
diff --git a/lib/librte_distributor/rte_distributor_v1705.h b/lib/librte_distributor/rte_distributor_v1705.h
deleted file mode 100644
index df4d9e8150..0000000000
--- a/lib/librte_distributor/rte_distributor_v1705.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Intel Corporation
- */
-
-#ifndef _RTE_DISTRIB_V1705_H_
-#define _RTE_DISTRIB_V1705_H_
-
-/**
- * @file
- * RTE distributor
- *
- * The distributor is a component which is designed to pass packets
- * one-at-a-time to workers, with dynamic load balancing.
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-struct rte_distributor *
-rte_distributor_create_v1705(const char *name, unsigned int socket_id,
- unsigned int num_workers,
- unsigned int alg_type);
-
-int
-rte_distributor_process_v1705(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs);
-
-int
-rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs);
-
-int
-rte_distributor_flush_v1705(struct rte_distributor *d);
-
-void
-rte_distributor_clear_returns_v1705(struct rte_distributor *d);
-
-int
-rte_distributor_get_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts,
- struct rte_mbuf **oldpkt, unsigned int retcount);
-
-int
-rte_distributor_return_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt, int num);
-
-void
-rte_distributor_request_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt,
- unsigned int count);
-
-int
-rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **mbufs);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
diff --git a/lib/librte_distributor/rte_distributor_v20.c b/lib/librte_distributor/rte_distributor_v20.c
index cdc0969a89..14ee0360ec 100644
--- a/lib/librte_distributor/rte_distributor_v20.c
+++ b/lib/librte_distributor/rte_distributor_v20.c
@@ -38,7 +38,6 @@ rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
rte_pause();
buf->bufptr64 = req;
}
-VERSION_SYMBOL(rte_distributor_request_pkt, _v20, 2.0);
struct rte_mbuf *
rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
@@ -52,7 +51,6 @@ rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
int64_t ret = buf->bufptr64 >> RTE_DISTRIB_FLAG_BITS;
return (struct rte_mbuf *)((uintptr_t)ret);
}
-VERSION_SYMBOL(rte_distributor_poll_pkt, _v20, 2.0);
struct rte_mbuf *
rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
@@ -64,7 +62,6 @@ rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
rte_pause();
return ret;
}
-VERSION_SYMBOL(rte_distributor_get_pkt, _v20, 2.0);
int
rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
@@ -76,7 +73,6 @@ rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
buf->bufptr64 = req;
return 0;
}
-VERSION_SYMBOL(rte_distributor_return_pkt, _v20, 2.0);
/**** APIs called on distributor core ***/
@@ -293,7 +289,6 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
d->returns.count = ret_count;
return num_mbufs;
}
-VERSION_SYMBOL(rte_distributor_process, _v20, 2.0);
/* return to the caller, packets returned from workers */
int
@@ -314,7 +309,6 @@ rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
return retval;
}
-VERSION_SYMBOL(rte_distributor_returned_pkts, _v20, 2.0);
/* return the number of packets in-flight in a distributor, i.e. packets
* being worked on or queued up in a backlog.
@@ -344,7 +338,6 @@ rte_distributor_flush_v20(struct rte_distributor_v20 *d)
return flushed;
}
-VERSION_SYMBOL(rte_distributor_flush, _v20, 2.0);
/* clears the internal returns array in the distributor */
void
@@ -355,7 +348,6 @@ rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d)
memset(d->returns.mbufs, 0, sizeof(d->returns.mbufs));
#endif
}
-VERSION_SYMBOL(rte_distributor_clear_returns, _v20, 2.0);
/* creates a distributor instance */
struct rte_distributor_v20 *
@@ -399,4 +391,3 @@ rte_distributor_create_v20(const char *name,
return d;
}
-VERSION_SYMBOL(rte_distributor_create, _v20, 2.0);
--
2.17.1
* [dpdk-dev] [PATCH v5 05/10] lpm: remove deprecated code
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (4 preceding siblings ...)
2019-10-24 9:46 4% ` [dpdk-dev] [PATCH v5 04/10] timer: remove deprecated code Anatoly Burakov
@ 2019-10-24 9:46 2% ` Anatoly Burakov
2019-10-24 9:46 4% ` [dpdk-dev] [PATCH v5 06/10] distributor: " Anatoly Burakov
` (4 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Bruce Richardson, Vladimir Medvedkin,
john.mcnamara, ray.kinsella, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_lpm/rte_lpm.c | 996 ++------------------------------------
lib/librte_lpm/rte_lpm.h | 88 ----
lib/librte_lpm/rte_lpm6.c | 132 +----
lib/librte_lpm/rte_lpm6.h | 25 -
4 files changed, 48 insertions(+), 1193 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 3a929a1b16..2687564194 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -89,34 +89,8 @@ depth_to_range(uint8_t depth)
/*
* Find an existing lpm table and return a pointer to it.
*/
-struct rte_lpm_v20 *
-rte_lpm_find_existing_v20(const char *name)
-{
- struct rte_lpm_v20 *l = NULL;
- struct rte_tailq_entry *te;
- struct rte_lpm_list *lpm_list;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- rte_mcfg_tailq_read_lock();
- TAILQ_FOREACH(te, lpm_list, next) {
- l = te->data;
- if (strncmp(name, l->name, RTE_LPM_NAMESIZE) == 0)
- break;
- }
- rte_mcfg_tailq_read_unlock();
-
- if (te == NULL) {
- rte_errno = ENOENT;
- return NULL;
- }
-
- return l;
-}
-VERSION_SYMBOL(rte_lpm_find_existing, _v20, 2.0);
-
struct rte_lpm *
-rte_lpm_find_existing_v1604(const char *name)
+rte_lpm_find_existing(const char *name)
{
struct rte_lpm *l = NULL;
struct rte_tailq_entry *te;
@@ -139,88 +113,12 @@ rte_lpm_find_existing_v1604(const char *name)
return l;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_find_existing, _v1604, 16.04);
-MAP_STATIC_SYMBOL(struct rte_lpm *rte_lpm_find_existing(const char *name),
- rte_lpm_find_existing_v1604);
/*
* Allocates memory for LPM object
*/
-struct rte_lpm_v20 *
-rte_lpm_create_v20(const char *name, int socket_id, int max_rules,
- __rte_unused int flags)
-{
- char mem_name[RTE_LPM_NAMESIZE];
- struct rte_lpm_v20 *lpm = NULL;
- struct rte_tailq_entry *te;
- uint32_t mem_size;
- struct rte_lpm_list *lpm_list;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry_v20) != 2);
-
- /* Check user arguments. */
- if ((name == NULL) || (socket_id < -1) || (max_rules == 0)) {
- rte_errno = EINVAL;
- return NULL;
- }
-
- snprintf(mem_name, sizeof(mem_name), "LPM_%s", name);
-
- /* Determine the amount of memory to allocate. */
- mem_size = sizeof(*lpm) + (sizeof(lpm->rules_tbl[0]) * max_rules);
-
- rte_mcfg_tailq_write_lock();
-
- /* guarantee there's no existing */
- TAILQ_FOREACH(te, lpm_list, next) {
- lpm = te->data;
- if (strncmp(name, lpm->name, RTE_LPM_NAMESIZE) == 0)
- break;
- }
-
- if (te != NULL) {
- lpm = NULL;
- rte_errno = EEXIST;
- goto exit;
- }
-
- /* allocate tailq entry */
- te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0);
- if (te == NULL) {
- RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n");
- rte_errno = ENOMEM;
- goto exit;
- }
-
- /* Allocate memory to store the LPM data structures. */
- lpm = rte_zmalloc_socket(mem_name, mem_size,
- RTE_CACHE_LINE_SIZE, socket_id);
- if (lpm == NULL) {
- RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
- rte_free(te);
- rte_errno = ENOMEM;
- goto exit;
- }
-
- /* Save user arguments. */
- lpm->max_rules = max_rules;
- strlcpy(lpm->name, name, sizeof(lpm->name));
-
- te->data = lpm;
-
- TAILQ_INSERT_TAIL(lpm_list, te, next);
-
-exit:
- rte_mcfg_tailq_write_unlock();
-
- return lpm;
-}
-VERSION_SYMBOL(rte_lpm_create, _v20, 2.0);
-
struct rte_lpm *
-rte_lpm_create_v1604(const char *name, int socket_id,
+rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config)
{
char mem_name[RTE_LPM_NAMESIZE];
@@ -320,45 +218,12 @@ rte_lpm_create_v1604(const char *name, int socket_id,
return lpm;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_create, _v1604, 16.04);
-MAP_STATIC_SYMBOL(
- struct rte_lpm *rte_lpm_create(const char *name, int socket_id,
- const struct rte_lpm_config *config), rte_lpm_create_v1604);
/*
* Deallocates memory for given LPM table.
*/
void
-rte_lpm_free_v20(struct rte_lpm_v20 *lpm)
-{
- struct rte_lpm_list *lpm_list;
- struct rte_tailq_entry *te;
-
- /* Check user arguments. */
- if (lpm == NULL)
- return;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- rte_mcfg_tailq_write_lock();
-
- /* find our tailq entry */
- TAILQ_FOREACH(te, lpm_list, next) {
- if (te->data == (void *) lpm)
- break;
- }
- if (te != NULL)
- TAILQ_REMOVE(lpm_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- rte_free(lpm);
- rte_free(te);
-}
-VERSION_SYMBOL(rte_lpm_free, _v20, 2.0);
-
-void
-rte_lpm_free_v1604(struct rte_lpm *lpm)
+rte_lpm_free(struct rte_lpm *lpm)
{
struct rte_lpm_list *lpm_list;
struct rte_tailq_entry *te;
@@ -386,9 +251,6 @@ rte_lpm_free_v1604(struct rte_lpm *lpm)
rte_free(lpm);
rte_free(te);
}
-BIND_DEFAULT_SYMBOL(rte_lpm_free, _v1604, 16.04);
-MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
- rte_lpm_free_v1604);
/*
* Adds a rule to the rule table.
@@ -401,79 +263,7 @@ MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t rule_gindex, rule_index, last_rule;
- int i;
-
- VERIFY_DEPTH(depth);
-
- /* Scan through rule group to see if rule already exists. */
- if (lpm->rule_info[depth - 1].used_rules > 0) {
-
- /* rule_gindex stands for rule group index. */
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- /* Initialise rule_index to point to start of rule group. */
- rule_index = rule_gindex;
- /* Last rule = Last used rule in this rule group. */
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- for (; rule_index < last_rule; rule_index++) {
-
- /* If rule already exists update its next_hop and return. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked) {
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- return rule_index;
- }
- }
-
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
- } else {
- /* Calculate the position in which the rule will be stored. */
- rule_index = 0;
-
- for (i = depth - 1; i > 0; i--) {
- if (lpm->rule_info[i - 1].used_rules > 0) {
- rule_index = lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules;
- break;
- }
- }
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
-
- lpm->rule_info[depth - 1].first_rule = rule_index;
- }
-
- /* Make room for the new rule in the array. */
- for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
- if (lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
- return -ENOSPC;
-
- if (lpm->rule_info[i - 1].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules]
- = lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
- lpm->rule_info[i - 1].first_rule++;
- }
- }
-
- /* Add the new rule. */
- lpm->rules_tbl[rule_index].ip = ip_masked;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- /* Increment the used rules counter for this rule group. */
- lpm->rule_info[depth - 1].used_rules++;
-
- return rule_index;
-}
-
-static int32_t
-rule_add_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
uint32_t rule_gindex, rule_index, last_rule;
@@ -549,30 +339,7 @@ rule_add_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static void
-rule_delete_v20(struct rte_lpm_v20 *lpm, int32_t rule_index, uint8_t depth)
-{
- int i;
-
- VERIFY_DEPTH(depth);
-
- lpm->rules_tbl[rule_index] =
- lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
- + lpm->rule_info[depth - 1].used_rules - 1];
-
- for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
- if (lpm->rule_info[i].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
- lpm->rules_tbl[lpm->rule_info[i].first_rule
- + lpm->rule_info[i].used_rules - 1];
- lpm->rule_info[i].first_rule--;
- }
- }
-
- lpm->rule_info[depth - 1].used_rules--;
-}
-
-static void
-rule_delete_v1604(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
+rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
{
int i;
@@ -599,28 +366,7 @@ rule_delete_v1604(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_find_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth)
-{
- uint32_t rule_gindex, last_rule, rule_index;
-
- VERIFY_DEPTH(depth);
-
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- /* Scan used rules at given depth to find rule. */
- for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
- /* If rule is found return the rule index. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked)
- return rule_index;
- }
-
- /* If rule is not found return -EINVAL. */
- return -EINVAL;
-}
-
-static int32_t
-rule_find_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
{
uint32_t rule_gindex, last_rule, rule_index;
@@ -644,42 +390,7 @@ rule_find_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
* Find, clean and allocate a tbl8.
*/
static int32_t
-tbl8_alloc_v20(struct rte_lpm_tbl_entry_v20 *tbl8)
-{
- uint32_t group_idx; /* tbl8 group index. */
- struct rte_lpm_tbl_entry_v20 *tbl8_entry;
-
- /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
- for (group_idx = 0; group_idx < RTE_LPM_TBL8_NUM_GROUPS;
- group_idx++) {
- tbl8_entry = &tbl8[group_idx * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
- /* If a free tbl8 group is found clean it and set as VALID. */
- if (!tbl8_entry->valid_group) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = VALID,
- };
- new_tbl8_entry.next_hop = 0;
-
- memset(&tbl8_entry[0], 0,
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
- sizeof(tbl8_entry[0]));
-
- __atomic_store(tbl8_entry, &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- /* Return group index for allocated tbl8 group. */
- return group_idx;
- }
- }
-
- /* If there are no tbl8 groups free then return error. */
- return -ENOSPC;
-}
-
-static int32_t
-tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
+tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
{
uint32_t group_idx; /* tbl8 group index. */
struct rte_lpm_tbl_entry *tbl8_entry;
@@ -713,22 +424,7 @@ tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
}
static void
-tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start)
-{
- /* Set tbl8 group invalid*/
- struct rte_lpm_tbl_entry_v20 zero_tbl8_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = INVALID,
- };
- zero_tbl8_entry.next_hop = 0;
-
- __atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
- __ATOMIC_RELAXED);
-}
-
-static void
-tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
{
/* Set tbl8 group invalid*/
struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
@@ -738,78 +434,7 @@ tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
}
static __rte_noinline int32_t
-add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
-
- /* Calculate the index into Table24. */
- tbl24_index = ip >> 8;
- tbl24_range = depth_to_range(depth);
-
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
- /*
- * For invalid OR valid and non-extended tbl 24 entries set
- * entry.
- */
- if (!lpm->tbl24[i].valid || (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth)) {
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .valid = VALID,
- .valid_group = 0,
- .depth = depth,
- };
- new_tbl24_entry.next_hop = next_hop;
-
- /* Setting tbl24 entry in one go to avoid race
- * conditions
- */
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- continue;
- }
-
- if (lpm->tbl24[i].valid_group == 1) {
- /* If tbl24 entry is valid and extended calculate the
- * index into tbl8.
- */
- tbl8_index = lpm->tbl24[i].group_idx *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < tbl8_group_end; j++) {
- if (!lpm->tbl8[j].valid ||
- lpm->tbl8[j].depth <= depth) {
- struct rte_lpm_tbl_entry_v20
- new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = depth,
- };
- new_tbl8_entry.next_hop = next_hop;
-
- /*
- * Setting tbl8 entry in one go to avoid
- * race conditions
- */
- __atomic_store(&lpm->tbl8[j],
- &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- continue;
- }
- }
- }
- }
-
- return 0;
-}
-
-static __rte_noinline int32_t
-add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -881,150 +506,7 @@ add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
static __rte_noinline int32_t
-add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t tbl24_index;
- int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
- tbl8_range, i;
-
- tbl24_index = (ip_masked >> 8);
- tbl8_range = depth_to_range(depth);
-
- if (!lpm->tbl24[tbl24_index].valid) {
- /* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v20(lpm->tbl8);
-
- /* Check tbl8 allocation was successful. */
- if (tbl8_group_index < 0) {
- return tbl8_group_index;
- }
-
- /* Find index into tbl8 and range. */
- tbl8_index = (tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES) +
- (ip_masked & 0xFF);
-
- /* Set tbl8 entry. */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- /*
- * Update tbl24 entry to point to new tbl8 entry. Note: The
- * ext_flag and tbl8_index need to be updated simultaneously,
- * so assign whole structure in one go
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .group_idx = (uint8_t)tbl8_group_index,
- .valid = VALID,
- .valid_group = 1,
- .depth = 0,
- };
-
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- } /* If valid entry but not extended calculate the index into Table8. */
- else if (lpm->tbl24[tbl24_index].valid_group == 0) {
- /* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v20(lpm->tbl8);
-
- if (tbl8_group_index < 0) {
- return tbl8_group_index;
- }
-
- tbl8_group_start = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_group_start +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- /* Populate new tbl8 with tbl24 value. */
- for (i = tbl8_group_start; i < tbl8_group_end; i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = lpm->tbl24[tbl24_index].depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop =
- lpm->tbl24[tbl24_index].next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
-
- /* Insert new rule into the tbl8 entry. */
- for (i = tbl8_index; i < tbl8_index + tbl8_range; i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- /*
- * Update tbl24 entry to point to new tbl8 entry. Note: The
- * ext_flag and tbl8_index need to be updated simultaneously,
- * so assign whole structure in one go.
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .group_idx = (uint8_t)tbl8_group_index,
- .valid = VALID,
- .valid_group = 1,
- .depth = 0,
- };
-
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- } else { /*
- * If it is valid, extended entry calculate the index into tbl8.
- */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
- tbl8_group_start = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
-
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-
- if (!lpm->tbl8[i].valid ||
- lpm->tbl8[i].depth <= depth) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- /*
- * Setting tbl8 entry in one go to avoid race
- * condition
- */
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- continue;
- }
- }
- }
-
- return 0;
-}
-
-static __rte_noinline int32_t
-add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -1037,7 +519,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
if (!lpm->tbl24[tbl24_index].valid) {
/* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+ tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
/* Check tbl8 allocation was successful. */
if (tbl8_group_index < 0) {
@@ -1083,7 +565,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
} /* If valid entry but not extended calculate the index into Table8. */
else if (lpm->tbl24[tbl24_index].valid_group == 0) {
/* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+ tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
if (tbl8_group_index < 0) {
return tbl8_group_index;
@@ -1177,48 +659,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* Add a route
*/
int
-rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
-{
- int32_t rule_index, status = 0;
- uint32_t ip_masked;
-
- /* Check user arguments. */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
- return -EINVAL;
-
- ip_masked = ip & depth_to_mask(depth);
-
- /* Add the rule to the rule table. */
- rule_index = rule_add_v20(lpm, ip_masked, depth, next_hop);
-
- /* If the is no space available for new rule return error. */
- if (rule_index < 0) {
- return rule_index;
- }
-
- if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small_v20(lpm, ip_masked, depth, next_hop);
- } else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big_v20(lpm, ip_masked, depth, next_hop);
-
- /*
- * If add fails due to exhaustion of tbl8 extensions delete
- * rule that was added to rule table.
- */
- if (status < 0) {
- rule_delete_v20(lpm, rule_index, depth);
-
- return status;
- }
- }
-
- return 0;
-}
-VERSION_SYMBOL(rte_lpm_add, _v20, 2.0);
-
-int
-rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
int32_t rule_index, status = 0;
@@ -1231,7 +672,7 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
ip_masked = ip & depth_to_mask(depth);
/* Add the rule to the rule table. */
- rule_index = rule_add_v1604(lpm, ip_masked, depth, next_hop);
+ rule_index = rule_add(lpm, ip_masked, depth, next_hop);
/* If the is no space available for new rule return error. */
if (rule_index < 0) {
@@ -1239,16 +680,16 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small_v1604(lpm, ip_masked, depth, next_hop);
+ status = add_depth_small(lpm, ip_masked, depth, next_hop);
} else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big_v1604(lpm, ip_masked, depth, next_hop);
+ status = add_depth_big(lpm, ip_masked, depth, next_hop);
/*
* If add fails due to exhaustion of tbl8 extensions delete
* rule that was added to rule table.
*/
if (status < 0) {
- rule_delete_v1604(lpm, rule_index, depth);
+ rule_delete(lpm, rule_index, depth);
return status;
}
@@ -1256,42 +697,12 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_add, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth, uint32_t next_hop), rte_lpm_add_v1604);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
-{
- uint32_t ip_masked;
- int32_t rule_index;
-
- /* Check user arguments. */
- if ((lpm == NULL) ||
- (next_hop == NULL) ||
- (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
- return -EINVAL;
-
- /* Look for the rule using rule_find. */
- ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find_v20(lpm, ip_masked, depth);
-
- if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
- return 1;
- }
-
- /* If rule is not found return 0. */
- return 0;
-}
-VERSION_SYMBOL(rte_lpm_is_rule_present, _v20, 2.0);
-
-int
-rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop)
{
uint32_t ip_masked;
@@ -1305,7 +716,7 @@ uint32_t *next_hop)
/* Look for the rule using rule_find. */
ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find_v1604(lpm, ip_masked, depth);
+ rule_index = rule_find(lpm, ip_masked, depth);
if (rule_index >= 0) {
*next_hop = lpm->rules_tbl[rule_index].next_hop;
@@ -1315,12 +726,9 @@ uint32_t *next_hop)
/* If rule is not found return 0. */
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_is_rule_present, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth, uint32_t *next_hop), rte_lpm_is_rule_present_v1604);
static int32_t
-find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
+find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint8_t *sub_rule_depth)
{
int32_t rule_index;
@@ -1330,7 +738,7 @@ find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
ip_masked = ip & depth_to_mask(prev_depth);
- rule_index = rule_find_v20(lpm, ip_masked, prev_depth);
+ rule_index = rule_find(lpm, ip_masked, prev_depth);
if (rule_index >= 0) {
*sub_rule_depth = prev_depth;
@@ -1342,133 +750,7 @@ find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
}
static int32_t
-find_previous_rule_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t *sub_rule_depth)
-{
- int32_t rule_index;
- uint32_t ip_masked;
- uint8_t prev_depth;
-
- for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
- ip_masked = ip & depth_to_mask(prev_depth);
-
- rule_index = rule_find_v1604(lpm, ip_masked, prev_depth);
-
- if (rule_index >= 0) {
- *sub_rule_depth = prev_depth;
- return rule_index;
- }
- }
-
- return -1;
-}
-
-static int32_t
-delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
-{
- uint32_t tbl24_range, tbl24_index, tbl8_group_index, tbl8_index, i, j;
-
- /* Calculate the range and index into Table24. */
- tbl24_range = depth_to_range(depth);
- tbl24_index = (ip_masked >> 8);
-
- /*
- * Firstly check the sub_rule_index. A -1 indicates no replacement rule
- * and a positive number indicates a sub_rule_index.
- */
- if (sub_rule_index < 0) {
- /*
- * If no replacement rule exists then invalidate entries
- * associated with this rule.
- */
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- struct rte_lpm_tbl_entry_v20
- zero_tbl24_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = 0,
- };
- zero_tbl24_entry.next_hop = 0;
- __atomic_store(&lpm->tbl24[i],
- &zero_tbl24_entry, __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
- /*
- * If TBL24 entry is extended, then there has
- * to be a rule with depth >= 25 in the
- * associated TBL8 group.
- */
-
- tbl8_group_index = lpm->tbl24[i].group_idx;
- tbl8_index = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < (tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
- lpm->tbl8[j].valid = INVALID;
- }
- }
- }
- } else {
- /*
- * If a replacement rule exists then modify entries
- * associated with this rule.
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
- .valid = VALID,
- .valid_group = 0,
- .depth = sub_rule_depth,
- };
-
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = sub_rule_depth,
- };
- new_tbl8_entry.next_hop =
- lpm->rules_tbl[sub_rule_index].next_hop;
-
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
- __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
- /*
- * If TBL24 entry is extended, then there has
- * to be a rule with depth >= 25 in the
- * associated TBL8 group.
- */
-
- tbl8_group_index = lpm->tbl24[i].group_idx;
- tbl8_index = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < (tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
- __atomic_store(&lpm->tbl8[j],
- &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
- }
- }
- }
-
- return 0;
-}
-
-static int32_t
-delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1575,7 +857,7 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* thus can be recycled
*/
static int32_t
-tbl8_recycle_check_v20(struct rte_lpm_tbl_entry_v20 *tbl8,
+tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8,
uint32_t tbl8_group_start)
{
uint32_t tbl8_group_end, i;
@@ -1622,140 +904,7 @@ tbl8_recycle_check_v20(struct rte_lpm_tbl_entry_v20 *tbl8,
}
static int32_t
-tbl8_recycle_check_v1604(struct rte_lpm_tbl_entry *tbl8,
- uint32_t tbl8_group_start)
-{
- uint32_t tbl8_group_end, i;
- tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- /*
- * Check the first entry of the given tbl8. If it is invalid we know
- * this tbl8 does not contain any rule with a depth < RTE_LPM_MAX_DEPTH
- * (As they would affect all entries in a tbl8) and thus this table
- * can not be recycled.
- */
- if (tbl8[tbl8_group_start].valid) {
- /*
- * If first entry is valid check if the depth is less than 24
- * and if so check the rest of the entries to verify that they
- * are all of this depth.
- */
- if (tbl8[tbl8_group_start].depth <= MAX_DEPTH_TBL24) {
- for (i = (tbl8_group_start + 1); i < tbl8_group_end;
- i++) {
-
- if (tbl8[i].depth !=
- tbl8[tbl8_group_start].depth) {
-
- return -EEXIST;
- }
- }
- /* If all entries are the same return the tb8 index */
- return tbl8_group_start;
- }
-
- return -EEXIST;
- }
- /*
- * If the first entry is invalid check if the rest of the entries in
- * the tbl8 are invalid.
- */
- for (i = (tbl8_group_start + 1); i < tbl8_group_end; i++) {
- if (tbl8[i].valid)
- return -EEXIST;
- }
- /* If no valid entries are found then return -EINVAL. */
- return -EINVAL;
-}
-
-static int32_t
-delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
-{
- uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
- tbl8_range, i;
- int32_t tbl8_recycle_index;
-
- /*
- * Calculate the index into tbl24 and range. Note: All depths larger
- * than MAX_DEPTH_TBL24 are associated with only one tbl24 entry.
- */
- tbl24_index = ip_masked >> 8;
-
- /* Calculate the index into tbl8 and range. */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
- tbl8_group_start = tbl8_group_index * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
- tbl8_range = depth_to_range(depth);
-
- if (sub_rule_index < 0) {
- /*
- * Loop through the range of entries on tbl8 for which the
- * rule_to_delete must be removed or modified.
- */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- lpm->tbl8[i].valid = INVALID;
- }
- } else {
- /* Set new tbl8 entry. */
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = sub_rule_depth,
- .valid_group = lpm->tbl8[tbl8_group_start].valid_group,
- };
-
- new_tbl8_entry.next_hop =
- lpm->rules_tbl[sub_rule_index].next_hop;
- /*
- * Loop through the range of entries on tbl8 for which the
- * rule_to_delete must be modified.
- */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
- }
-
- /*
- * Check if there are any valid entries in this tbl8 group. If all
- * tbl8 entries are invalid we can free the tbl8 and invalidate the
- * associated tbl24 entry.
- */
-
- tbl8_recycle_index = tbl8_recycle_check_v20(lpm->tbl8, tbl8_group_start);
-
- if (tbl8_recycle_index == -EINVAL) {
- /* Set tbl24 before freeing tbl8 to avoid race condition.
- * Prevent the free of the tbl8 group from hoisting.
- */
- lpm->tbl24[tbl24_index].valid = 0;
- __atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v20(lpm->tbl8, tbl8_group_start);
- } else if (tbl8_recycle_index > -1) {
- /* Update tbl24 entry. */
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
- .valid = VALID,
- .valid_group = 0,
- .depth = lpm->tbl8[tbl8_recycle_index].depth,
- };
-
- /* Set tbl24 before freeing tbl8 to avoid race condition.
- * Prevent the free of the tbl8 group from hoisting.
- */
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELAXED);
- __atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v20(lpm->tbl8, tbl8_group_start);
- }
-
- return 0;
-}
-
-static int32_t
-delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1810,7 +959,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* associated tbl24 entry.
*/
- tbl8_recycle_index = tbl8_recycle_check_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_recycle_index = tbl8_recycle_check(lpm->tbl8, tbl8_group_start);
if (tbl8_recycle_index == -EINVAL) {
/* Set tbl24 before freeing tbl8 to avoid race condition.
@@ -1818,7 +967,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
*/
lpm->tbl24[tbl24_index].valid = 0;
__atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_free(lpm->tbl8, tbl8_group_start);
} else if (tbl8_recycle_index > -1) {
/* Update tbl24 entry. */
struct rte_lpm_tbl_entry new_tbl24_entry = {
@@ -1834,7 +983,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
__ATOMIC_RELAXED);
__atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_free(lpm->tbl8, tbl8_group_start);
}
#undef group_idx
return 0;
@@ -1844,7 +993,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* Deletes a rule
*/
int
-rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
{
int32_t rule_to_delete_index, sub_rule_index;
uint32_t ip_masked;
@@ -1863,7 +1012,7 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
* Find the index of the input rule, that needs to be deleted, in the
* rule table.
*/
- rule_to_delete_index = rule_find_v20(lpm, ip_masked, depth);
+ rule_to_delete_index = rule_find(lpm, ip_masked, depth);
/*
* Check if rule_to_delete_index was found. If no rule was found the
@@ -1873,7 +1022,7 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
return -EINVAL;
/* Delete the rule from the rule table. */
- rule_delete_v20(lpm, rule_to_delete_index, depth);
+ rule_delete(lpm, rule_to_delete_index, depth);
/*
* Find rule to replace the rule_to_delete. If there is no rule to
@@ -1881,100 +1030,26 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
* entries associated with this rule.
*/
sub_rule_depth = 0;
- sub_rule_index = find_previous_rule_v20(lpm, ip, depth, &sub_rule_depth);
+ sub_rule_index = find_previous_rule(lpm, ip, depth, &sub_rule_depth);
/*
* If the input depth value is less than 25 use function
* delete_depth_small otherwise use delete_depth_big.
*/
if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small_v20(lpm, ip_masked, depth,
+ return delete_depth_small(lpm, ip_masked, depth,
sub_rule_index, sub_rule_depth);
} else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big_v20(lpm, ip_masked, depth, sub_rule_index,
+ return delete_depth_big(lpm, ip_masked, depth, sub_rule_index,
sub_rule_depth);
}
}
-VERSION_SYMBOL(rte_lpm_delete, _v20, 2.0);
-
-int
-rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
-{
- int32_t rule_to_delete_index, sub_rule_index;
- uint32_t ip_masked;
- uint8_t sub_rule_depth;
- /*
- * Check input arguments. Note: IP must be a positive integer of 32
- * bits in length therefore it need not be checked.
- */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH)) {
- return -EINVAL;
- }
-
- ip_masked = ip & depth_to_mask(depth);
-
- /*
- * Find the index of the input rule, that needs to be deleted, in the
- * rule table.
- */
- rule_to_delete_index = rule_find_v1604(lpm, ip_masked, depth);
-
- /*
- * Check if rule_to_delete_index was found. If no rule was found the
- * function rule_find returns -EINVAL.
- */
- if (rule_to_delete_index < 0)
- return -EINVAL;
-
- /* Delete the rule from the rule table. */
- rule_delete_v1604(lpm, rule_to_delete_index, depth);
-
- /*
- * Find rule to replace the rule_to_delete. If there is no rule to
- * replace the rule_to_delete we return -1 and invalidate the table
- * entries associated with this rule.
- */
- sub_rule_depth = 0;
- sub_rule_index = find_previous_rule_v1604(lpm, ip, depth, &sub_rule_depth);
-
- /*
- * If the input depth value is less than 25 use function
- * delete_depth_small otherwise use delete_depth_big.
- */
- if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small_v1604(lpm, ip_masked, depth,
- sub_rule_index, sub_rule_depth);
- } else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big_v1604(lpm, ip_masked, depth, sub_rule_index,
- sub_rule_depth);
- }
-}
-BIND_DEFAULT_SYMBOL(rte_lpm_delete, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth), rte_lpm_delete_v1604);
/*
* Delete all rules from the LPM table.
*/
void
-rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm)
-{
- /* Zero rule information. */
- memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
-
- /* Zero tbl24. */
- memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
-
- /* Zero tbl8. */
- memset(lpm->tbl8, 0, sizeof(lpm->tbl8));
-
- /* Delete all rules form the rules table. */
- memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
-}
-VERSION_SYMBOL(rte_lpm_delete_all, _v20, 2.0);
-
-void
-rte_lpm_delete_all_v1604(struct rte_lpm *lpm)
+rte_lpm_delete_all(struct rte_lpm *lpm)
{
/* Zero rule information. */
memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
@@ -1989,6 +1064,3 @@ rte_lpm_delete_all_v1604(struct rte_lpm *lpm)
/* Delete all rules form the rules table. */
memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
}
-BIND_DEFAULT_SYMBOL(rte_lpm_delete_all, _v1604, 16.04);
-MAP_STATIC_SYMBOL(void rte_lpm_delete_all(struct rte_lpm *lpm),
- rte_lpm_delete_all_v1604);
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 906ec44830..ca9627a141 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -65,31 +65,6 @@ extern "C" {
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
-__extension__
-struct rte_lpm_tbl_entry_v20 {
- /**
- * Stores Next hop (tbl8 or tbl24 when valid_group is not set) or
- * a group index pointing to a tbl8 structure (tbl24 only, when
- * valid_group is set)
- */
- RTE_STD_C11
- union {
- uint8_t next_hop;
- uint8_t group_idx;
- };
- /* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- /**
- * For tbl24:
- * - valid_group == 0: entry stores a next hop
- * - valid_group == 1: entry stores a group_index pointing to a tbl8
- * For tbl8:
- * - valid_group indicates whether the current tbl8 is in use or not
- */
- uint8_t valid_group :1;
- uint8_t depth :6; /**< Rule depth. */
-} __rte_aligned(sizeof(uint16_t));
-
__extension__
struct rte_lpm_tbl_entry {
/**
@@ -112,16 +87,6 @@ struct rte_lpm_tbl_entry {
};
#else
-__extension__
-struct rte_lpm_tbl_entry_v20 {
- uint8_t depth :6;
- uint8_t valid_group :1;
- uint8_t valid :1;
- union {
- uint8_t group_idx;
- uint8_t next_hop;
- };
-} __rte_aligned(sizeof(uint16_t));
__extension__
struct rte_lpm_tbl_entry {
@@ -142,11 +107,6 @@ struct rte_lpm_config {
};
/** @internal Rule structure. */
-struct rte_lpm_rule_v20 {
- uint32_t ip; /**< Rule IP address. */
- uint8_t next_hop; /**< Rule next hop. */
-};
-
struct rte_lpm_rule {
uint32_t ip; /**< Rule IP address. */
uint32_t next_hop; /**< Rule next hop. */
@@ -159,21 +119,6 @@ struct rte_lpm_rule_info {
};
/** @internal LPM structure. */
-struct rte_lpm_v20 {
- /* LPM metadata. */
- char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
- uint32_t max_rules; /**< Max. balanced rules per lpm. */
- struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
-
- /* LPM Tables. */
- struct rte_lpm_tbl_entry_v20 tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
- __rte_cache_aligned; /**< LPM tbl24 table. */
- struct rte_lpm_tbl_entry_v20 tbl8[RTE_LPM_TBL8_NUM_ENTRIES]
- __rte_cache_aligned; /**< LPM tbl8 table. */
- struct rte_lpm_rule_v20 rules_tbl[]
- __rte_cache_aligned; /**< LPM rules. */
-};
-
struct rte_lpm {
/* LPM metadata. */
char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
@@ -210,11 +155,6 @@ struct rte_lpm {
struct rte_lpm *
rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config);
-struct rte_lpm_v20 *
-rte_lpm_create_v20(const char *name, int socket_id, int max_rules, int flags);
-struct rte_lpm *
-rte_lpm_create_v1604(const char *name, int socket_id,
- const struct rte_lpm_config *config);
/**
* Find an existing LPM object and return a pointer to it.
@@ -228,10 +168,6 @@ rte_lpm_create_v1604(const char *name, int socket_id,
*/
struct rte_lpm *
rte_lpm_find_existing(const char *name);
-struct rte_lpm_v20 *
-rte_lpm_find_existing_v20(const char *name);
-struct rte_lpm *
-rte_lpm_find_existing_v1604(const char *name);
/**
* Free an LPM object.
@@ -243,10 +179,6 @@ rte_lpm_find_existing_v1604(const char *name);
*/
void
rte_lpm_free(struct rte_lpm *lpm);
-void
-rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
-void
-rte_lpm_free_v1604(struct rte_lpm *lpm);
/**
* Add a rule to the LPM table.
@@ -264,12 +196,6 @@ rte_lpm_free_v1604(struct rte_lpm *lpm);
*/
int
rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
-int
-rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop);
-int
-rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -289,12 +215,6 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop);
-int
-rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
-int
-rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -310,10 +230,6 @@ uint32_t *next_hop);
*/
int
rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
-int
-rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth);
-int
-rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
/**
* Delete all rules from the LPM table.
@@ -323,10 +239,6 @@ rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
*/
void
rte_lpm_delete_all(struct rte_lpm *lpm);
-void
-rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm);
-void
-rte_lpm_delete_all_v1604(struct rte_lpm *lpm);
/**
* Lookup an IP into the LPM table.
diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 9b8aeb9721..b981e40714 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -808,18 +808,6 @@ add_step(struct rte_lpm6 *lpm, struct rte_lpm6_tbl_entry *tbl,
return 1;
}
-/*
- * Add a route
- */
-int
-rte_lpm6_add_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop)
-{
- return rte_lpm6_add_v1705(lpm, ip, depth, next_hop);
-}
-VERSION_SYMBOL(rte_lpm6_add, _v20, 2.0);
-
-
/*
* Simulate adding a route to LPM
*
@@ -841,7 +829,7 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
/* Inspect the first three bytes through tbl24 on the first step. */
ret = simulate_add_step(lpm, lpm->tbl24, &tbl_next, masked_ip,
- ADD_FIRST_BYTE, 1, depth, &need_tbl_nb);
+ ADD_FIRST_BYTE, 1, depth, &need_tbl_nb);
total_need_tbl_nb = need_tbl_nb;
/*
* Inspect one by one the rest of the bytes until
@@ -850,7 +838,7 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
for (i = ADD_FIRST_BYTE; i < RTE_LPM6_IPV6_ADDR_SIZE && ret == 1; i++) {
tbl = tbl_next;
ret = simulate_add_step(lpm, tbl, &tbl_next, masked_ip, 1,
- (uint8_t)(i+1), depth, &need_tbl_nb);
+ (uint8_t)(i + 1), depth, &need_tbl_nb);
total_need_tbl_nb += need_tbl_nb;
}
@@ -861,9 +849,12 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
return 0;
}
+/*
+ * Add a route
+ */
int
-rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t next_hop)
+rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+ uint32_t next_hop)
{
struct rte_lpm6_tbl_entry *tbl;
struct rte_lpm6_tbl_entry *tbl_next = NULL;
@@ -895,8 +886,8 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
/* Inspect the first three bytes through tbl24 on the first step. */
tbl = lpm->tbl24;
status = add_step(lpm, tbl, TBL24_IND, &tbl_next, &tbl_next_num,
- masked_ip, ADD_FIRST_BYTE, 1, depth, next_hop,
- is_new_rule);
+ masked_ip, ADD_FIRST_BYTE, 1, depth, next_hop,
+ is_new_rule);
assert(status >= 0);
/*
@@ -906,17 +897,13 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
for (i = ADD_FIRST_BYTE; i < RTE_LPM6_IPV6_ADDR_SIZE && status == 1; i++) {
tbl = tbl_next;
status = add_step(lpm, tbl, tbl_next_num, &tbl_next,
- &tbl_next_num, masked_ip, 1, (uint8_t)(i+1),
- depth, next_hop, is_new_rule);
+ &tbl_next_num, masked_ip, 1, (uint8_t)(i + 1),
+ depth, next_hop, is_new_rule);
assert(status >= 0);
}
return status;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_add, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip,
- uint8_t depth, uint32_t next_hop),
- rte_lpm6_add_v1705);
/*
* Takes a pointer to a table entry and inspect one level.
@@ -955,25 +942,7 @@ lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
* Looks up an IP
*/
int
-rte_lpm6_lookup_v20(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop)
-{
- uint32_t next_hop32 = 0;
- int32_t status;
-
- /* DEBUG: Check user input arguments. */
- if (next_hop == NULL)
- return -EINVAL;
-
- status = rte_lpm6_lookup_v1705(lpm, ip, &next_hop32);
- if (status == 0)
- *next_hop = (uint8_t)next_hop32;
-
- return status;
-}
-VERSION_SYMBOL(rte_lpm6_lookup, _v20, 2.0);
-
-int
-rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
+rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip,
uint32_t *next_hop)
{
const struct rte_lpm6_tbl_entry *tbl;
@@ -1000,56 +969,12 @@ rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
return status;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_lookup, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip,
- uint32_t *next_hop), rte_lpm6_lookup_v1705);
/*
* Looks up a group of IP addresses
*/
int
-rte_lpm6_lookup_bulk_func_v20(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t * next_hops, unsigned n)
-{
- unsigned i;
- const struct rte_lpm6_tbl_entry *tbl;
- const struct rte_lpm6_tbl_entry *tbl_next = NULL;
- uint32_t tbl24_index, next_hop;
- uint8_t first_byte;
- int status;
-
- /* DEBUG: Check user input arguments. */
- if ((lpm == NULL) || (ips == NULL) || (next_hops == NULL))
- return -EINVAL;
-
- for (i = 0; i < n; i++) {
- first_byte = LOOKUP_FIRST_BYTE;
- tbl24_index = (ips[i][0] << BYTES2_SIZE) |
- (ips[i][1] << BYTE_SIZE) | ips[i][2];
-
- /* Calculate pointer to the first entry to be inspected */
- tbl = &lpm->tbl24[tbl24_index];
-
- do {
- /* Continue inspecting following levels until success or failure */
- status = lookup_step(lpm, tbl, &tbl_next, ips[i], first_byte++,
- &next_hop);
- tbl = tbl_next;
- } while (status == 1);
-
- if (status < 0)
- next_hops[i] = -1;
- else
- next_hops[i] = (int16_t)next_hop;
- }
-
- return 0;
-}
-VERSION_SYMBOL(rte_lpm6_lookup_bulk_func, _v20, 2.0);
-
-int
-rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
+rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
int32_t *next_hops, unsigned int n)
{
@@ -1089,37 +1014,12 @@ rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_lookup_bulk_func, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int32_t *next_hops, unsigned int n),
- rte_lpm6_lookup_bulk_func_v1705);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm6_is_rule_present_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t *next_hop)
-{
- uint32_t next_hop32 = 0;
- int32_t status;
-
- /* DEBUG: Check user input arguments. */
- if (next_hop == NULL)
- return -EINVAL;
-
- status = rte_lpm6_is_rule_present_v1705(lpm, ip, depth, &next_hop32);
- if (status > 0)
- *next_hop = (uint8_t)next_hop32;
-
- return status;
-
-}
-VERSION_SYMBOL(rte_lpm6_is_rule_present, _v20, 2.0);
-
-int
-rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t *next_hop)
{
uint8_t masked_ip[RTE_LPM6_IPV6_ADDR_SIZE];
@@ -1135,10 +1035,6 @@ rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
return rule_find(lpm, masked_ip, depth, next_hop);
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_is_rule_present, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_is_rule_present(struct rte_lpm6 *lpm,
- uint8_t *ip, uint8_t depth, uint32_t *next_hop),
- rte_lpm6_is_rule_present_v1705);
/*
* Delete a rule from the rule table.
diff --git a/lib/librte_lpm/rte_lpm6.h b/lib/librte_lpm/rte_lpm6.h
index 5d59ccb1fe..37dfb20249 100644
--- a/lib/librte_lpm/rte_lpm6.h
+++ b/lib/librte_lpm/rte_lpm6.h
@@ -96,12 +96,6 @@ rte_lpm6_free(struct rte_lpm6 *lpm);
int
rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t next_hop);
-int
-rte_lpm6_add_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop);
-int
-rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -121,12 +115,6 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
int
rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t *next_hop);
-int
-rte_lpm6_is_rule_present_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t *next_hop);
-int
-rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -184,11 +172,6 @@ rte_lpm6_delete_all(struct rte_lpm6 *lpm);
*/
int
rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint32_t *next_hop);
-int
-rte_lpm6_lookup_v20(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop);
-int
-rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
- uint32_t *next_hop);
/**
* Lookup multiple IP addresses in an LPM table.
@@ -210,14 +193,6 @@ int
rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
int32_t *next_hops, unsigned int n);
-int
-rte_lpm6_lookup_bulk_func_v20(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t *next_hops, unsigned int n);
-int
-rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int32_t *next_hops, unsigned int n);
#ifdef __cplusplus
}
--
2.17.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v5 04/10] timer: remove deprecated code
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (3 preceding siblings ...)
2019-10-24 9:46 23% ` [dpdk-dev] [PATCH v5 03/10] buildtools: add ABI update shell script Anatoly Burakov
@ 2019-10-24 9:46 4% ` Anatoly Burakov
2019-10-24 9:46 2% ` [dpdk-dev] [PATCH v5 05/10] lpm: " Anatoly Burakov
` (5 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, ray.kinsella, bruce.richardson, thomas,
david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of the ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_timer/rte_timer.c | 90 ++----------------------------------
lib/librte_timer/rte_timer.h | 15 ------
2 files changed, 5 insertions(+), 100 deletions(-)
diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c
index bdcf05d06b..de6959b809 100644
--- a/lib/librte_timer/rte_timer.c
+++ b/lib/librte_timer/rte_timer.c
@@ -68,9 +68,6 @@ static struct rte_timer_data *rte_timer_data_arr;
static const uint32_t default_data_id;
static uint32_t rte_timer_subsystem_initialized;
-/* For maintaining older interfaces for a period */
-static struct rte_timer_data default_timer_data;
-
/* when debug is enabled, store some statistics */
#ifdef RTE_LIBRTE_TIMER_DEBUG
#define __TIMER_STAT_ADD(priv_timer, name, n) do { \
@@ -131,22 +128,6 @@ rte_timer_data_dealloc(uint32_t id)
return 0;
}
-void
-rte_timer_subsystem_init_v20(void)
-{
- unsigned lcore_id;
- struct priv_timer *priv_timer = default_timer_data.priv_timer;
-
- /* since priv_timer is static, it's zeroed by default, so only init some
- * fields.
- */
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id ++) {
- rte_spinlock_init(&priv_timer[lcore_id].list_lock);
- priv_timer[lcore_id].prev_lcore = lcore_id;
- }
-}
-VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
-
/* Init the timer library. Allocate an array of timer data structs in shared
* memory, and allocate the zeroth entry for use with original timer
* APIs. Since the intersection of the sets of lcore ids in primary and
@@ -154,7 +135,7 @@ VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
* multiple processes.
*/
int
-rte_timer_subsystem_init_v1905(void)
+rte_timer_subsystem_init(void)
{
const struct rte_memzone *mz;
struct rte_timer_data *data;
@@ -209,9 +190,6 @@ rte_timer_subsystem_init_v1905(void)
return 0;
}
-MAP_STATIC_SYMBOL(int rte_timer_subsystem_init(void),
- rte_timer_subsystem_init_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_subsystem_init, _v1905, 19.05);
void
rte_timer_subsystem_finalize(void)
@@ -552,42 +530,13 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,
/* Reset and start the timer associated with the timer handle tim */
int
-rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg)
-{
- uint64_t cur_time = rte_get_timer_cycles();
- uint64_t period;
-
- if (unlikely((tim_lcore != (unsigned)LCORE_ID_ANY) &&
- !(rte_lcore_is_enabled(tim_lcore) ||
- rte_lcore_has_role(tim_lcore, ROLE_SERVICE))))
- return -1;
-
- if (type == PERIODICAL)
- period = ticks;
- else
- period = 0;
-
- return __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,
- fct, arg, 0, &default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_reset, _v20, 2.0);
-
-int
-rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
+rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned int tim_lcore,
rte_timer_cb_t fct, void *arg)
{
return rte_timer_alt_reset(default_data_id, tim, ticks, type,
tim_lcore, fct, arg);
}
-MAP_STATIC_SYMBOL(int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type,
- unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg),
- rte_timer_reset_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_reset, _v1905, 19.05);
int
rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
@@ -658,20 +607,10 @@ __rte_timer_stop(struct rte_timer *tim, int local_is_locked,
/* Stop the timer associated with the timer handle tim */
int
-rte_timer_stop_v20(struct rte_timer *tim)
-{
- return __rte_timer_stop(tim, 0, &default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_stop, _v20, 2.0);
-
-int
-rte_timer_stop_v1905(struct rte_timer *tim)
+rte_timer_stop(struct rte_timer *tim)
{
return rte_timer_alt_stop(default_data_id, tim);
}
-MAP_STATIC_SYMBOL(int rte_timer_stop(struct rte_timer *tim),
- rte_timer_stop_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_stop, _v1905, 19.05);
int
rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
@@ -817,15 +756,8 @@ __rte_timer_manage(struct rte_timer_data *timer_data)
priv_timer[lcore_id].running_tim = NULL;
}
-void
-rte_timer_manage_v20(void)
-{
- __rte_timer_manage(&default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_manage, _v20, 2.0);
-
int
-rte_timer_manage_v1905(void)
+rte_timer_manage(void)
{
struct rte_timer_data *timer_data;
@@ -835,8 +767,6 @@ rte_timer_manage_v1905(void)
return 0;
}
-MAP_STATIC_SYMBOL(int rte_timer_manage(void), rte_timer_manage_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_manage, _v1905, 19.05);
int
rte_timer_alt_manage(uint32_t timer_data_id,
@@ -1074,21 +1004,11 @@ __rte_timer_dump_stats(struct rte_timer_data *timer_data __rte_unused, FILE *f)
#endif
}
-void
-rte_timer_dump_stats_v20(FILE *f)
-{
- __rte_timer_dump_stats(&default_timer_data, f);
-}
-VERSION_SYMBOL(rte_timer_dump_stats, _v20, 2.0);
-
int
-rte_timer_dump_stats_v1905(FILE *f)
+rte_timer_dump_stats(FILE *f)
{
return rte_timer_alt_dump_stats(default_data_id, f);
}
-MAP_STATIC_SYMBOL(int rte_timer_dump_stats(FILE *f),
- rte_timer_dump_stats_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_dump_stats, _v1905, 19.05);
int
rte_timer_alt_dump_stats(uint32_t timer_data_id __rte_unused, FILE *f)
diff --git a/lib/librte_timer/rte_timer.h b/lib/librte_timer/rte_timer.h
index 05d287d8f2..9dc5fc3092 100644
--- a/lib/librte_timer/rte_timer.h
+++ b/lib/librte_timer/rte_timer.h
@@ -181,8 +181,6 @@ int rte_timer_data_dealloc(uint32_t id);
* subsystem
*/
int rte_timer_subsystem_init(void);
-int rte_timer_subsystem_init_v1905(void);
-void rte_timer_subsystem_init_v20(void);
/**
* @warning
@@ -250,13 +248,6 @@ void rte_timer_init(struct rte_timer *tim);
int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned tim_lcore,
rte_timer_cb_t fct, void *arg);
-int rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg);
-int rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg);
-
/**
* Loop until rte_timer_reset() succeeds.
@@ -313,8 +304,6 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
* - (-1): The timer is in the RUNNING or CONFIG state.
*/
int rte_timer_stop(struct rte_timer *tim);
-int rte_timer_stop_v1905(struct rte_timer *tim);
-int rte_timer_stop_v20(struct rte_timer *tim);
/**
* Loop until rte_timer_stop() succeeds.
@@ -358,8 +347,6 @@ int rte_timer_pending(struct rte_timer *tim);
* - -EINVAL: timer subsystem not yet initialized
*/
int rte_timer_manage(void);
-int rte_timer_manage_v1905(void);
-void rte_timer_manage_v20(void);
/**
* Dump statistics about timers.
@@ -371,8 +358,6 @@ void rte_timer_manage_v20(void);
* - -EINVAL: timer subsystem not yet initialized
*/
int rte_timer_dump_stats(FILE *f);
-int rte_timer_dump_stats_v1905(FILE *f);
-void rte_timer_dump_stats_v20(FILE *f);
/**
* @warning
--
2.17.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v5 03/10] buildtools: add ABI update shell script
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (2 preceding siblings ...)
2019-10-24 9:46 14% ` [dpdk-dev] [PATCH v5 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
@ 2019-10-24 9:46 23% ` Anatoly Burakov
2019-10-24 9:46 4% ` [dpdk-dev] [PATCH v5 04/10] timer: remove deprecated code Anatoly Burakov
` (6 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, ray.kinsella, bruce.richardson, thomas, david.marchand
In order to facilitate mass updating of version files, add a shell
script that recurses into the lib/ and drivers/ directories and
calls the ABI version update script.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v3:
- Switch to sh rather than bash, and remove bash-isms
- Address review comments
v2:
- Add this patch to split the shell script from previous commit
- Fixup miscellaneous bugs
buildtools/update-abi.sh | 42 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
create mode 100755 buildtools/update-abi.sh
diff --git a/buildtools/update-abi.sh b/buildtools/update-abi.sh
new file mode 100755
index 0000000000..89ba5804a6
--- /dev/null
+++ b/buildtools/update-abi.sh
@@ -0,0 +1,42 @@
+#!/bin/sh
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+abi_version=$1
+abi_version_file="./config/ABI_VERSION"
+update_path="lib drivers"
+
+if [ -z "$1" ]; then
+ # output to stderr
+ >&2 echo "Please provide ABI version"
+ exit 1
+fi
+
+# check version string format
+echo $abi_version | grep -q -e "^[[:digit:]]\{1,2\}\.[[:digit:]]\{1,2\}$"
+if [ "$?" -ne 0 ]; then
+ # output to stderr
+ >&2 echo "ABI version must be formatted as MAJOR.MINOR version"
+ exit 1
+fi
+
+if [ -n "$2" ]; then
+ abi_version_file=$2
+fi
+
+if [ -n "$3" ]; then
+ # drop $1 and $2
+ shift 2
+ # assign all other arguments as update paths
+ update_path=$@
+fi
+
+echo "New ABI version:" $abi_version
+echo "ABI_VERSION path:" $abi_version_file
+echo "Path to update:" $update_path
+
+echo $abi_version > $abi_version_file
+
+find $update_path -name \*version.map -exec \
+ ./buildtools/update_version_map_abi.py {} \
+ $abi_version \; -print
--
2.17.1
^ permalink raw reply [relevance 23%]
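As an aside, the MAJOR.MINOR format check that update-abi.sh performs with grep can be sketched in Python as follows. This is an illustration only, not part of the patch, and the candidate strings below are made up:

```python
import re

def is_valid_abi_version(version):
    # Same MAJOR.MINOR pattern the shell script checks with grep:
    # one or two digits, a dot, one or two digits, nothing else.
    return re.match(r"^\d{1,2}\.\d{1,2}$", version) is not None

for candidate in ("20.0", "19.11", "20", "v20.0", "20.0.1"):
    print(candidate, is_valid_abi_version(candidate))
```

Only "20.0" and "19.11" pass; a bare major, a "v" prefix, or a patch level are all rejected, matching the `^[[:digit:]]\{1,2\}\.[[:digit:]]\{1,2\}$` expression in the script.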
* [dpdk-dev] [PATCH v5 02/10] buildtools: add script for updating symbols abi version
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
2019-10-24 9:46 8% ` [dpdk-dev] [PATCH v5 " Anatoly Burakov
2019-10-24 9:46 7% ` [dpdk-dev] [PATCH v5 01/10] config: change ABI versioning to global Anatoly Burakov
@ 2019-10-24 9:46 14% ` Anatoly Burakov
2019-10-24 9:46 23% ` [dpdk-dev] [PATCH v5 03/10] buildtools: add ABI update shell script Anatoly Burakov
` (7 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Pawel Modrak, john.mcnamara, ray.kinsella, bruce.richardson,
thomas, david.marchand
From: Pawel Modrak <pawelx.modrak@intel.com>
Add a script that automatically merges all stable ABIs under one
ABI section with the new version, while leaving the experimental
section exactly as it is.
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v3:
- Add comments to regex patterns
v2:
- Reworked script to be pep8-compliant and more reliable
buildtools/update_version_map_abi.py | 170 +++++++++++++++++++++++++++
1 file changed, 170 insertions(+)
create mode 100755 buildtools/update_version_map_abi.py
diff --git a/buildtools/update_version_map_abi.py b/buildtools/update_version_map_abi.py
new file mode 100755
index 0000000000..50283e6a3d
--- /dev/null
+++ b/buildtools/update_version_map_abi.py
@@ -0,0 +1,170 @@
+#!/usr/bin/env python
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+"""
+A Python program to update the ABI version and function names in a DPDK
+lib_*_version.map file. Called from the buildtools/update_abi.sh utility.
+"""
+
+from __future__ import print_function
+import argparse
+import sys
+import re
+
+
+def __parse_map_file(f_in):
+ # match function name, followed by semicolon, followed by EOL, optionally
+ # with whitespace in between each item
+ func_line_regex = re.compile(r"\s*"
+ r"(?P<func>[a-zA-Z_0-9]+)"
+ r"\s*"
+ r";"
+ r"\s*"
+ r"$")
+ # match section name, followed by opening bracket, followed by EOL,
+ # optionally with whitespace in between each item
+ section_begin_regex = re.compile(r"\s*"
+ r"(?P<version>[a-zA-Z0-9_\.]+)"
+ r"\s*"
+ r"{"
+ r"\s*"
+ r"$")
+ # match closing bracket, optionally followed by section name (for when we
+ # inherit from another ABI version), followed by semicolon, followed by
+ # EOL, optionally with whitespace in between each item
+ section_end_regex = re.compile(r"\s*"
+ r"}"
+ r"\s*"
+ r"(?P<parent>[a-zA-Z0-9_\.]+)?"
+ r"\s*"
+ r";"
+ r"\s*"
+ r"$")
+
+ # for stable ABI, we don't care about which version introduced which
+ # function, we just flatten the list. there are dupes in certain files, so
+ # use a set instead of a list
+ stable_lines = set()
+ # copy experimental section as is
+ experimental_lines = []
+ is_experimental = False
+
+ # gather all functions
+ for line in f_in:
+ # clean up the line
+ line = line.strip('\n').strip()
+
+ # is this an end of section?
+ match = section_end_regex.match(line)
+ if match:
+ # whatever section this was, it's not active any more
+ is_experimental = False
+ continue
+
+ # if we're in the middle of experimental section, we need to copy
+ # the section verbatim, so just add the line
+ if is_experimental:
+ experimental_lines += [line]
+ continue
+
+ # skip empty lines
+ if not line:
+ continue
+
+ # is this a beginning of a new section?
+ match = section_begin_regex.match(line)
+ if match:
+ cur_section = match.group("version")
+ # is it experimental?
+ is_experimental = cur_section == "EXPERIMENTAL"
+ continue
+
+ # is this a function?
+ match = func_line_regex.match(line)
+ if match:
+ stable_lines.add(match.group("func"))
+
+ return stable_lines, experimental_lines
+
+
+def __regenerate_map_file(f_out, abi_version, stable_lines,
+ experimental_lines):
+ # print ABI version header
+ print("DPDK_{} {{".format(abi_version), file=f_out)
+
+ if stable_lines:
+ # print global section
+ print("\tglobal:", file=f_out)
+ # blank line
+ print(file=f_out)
+
+ # print all stable lines, alphabetically sorted
+ for line in sorted(stable_lines):
+ print("\t{};".format(line), file=f_out)
+
+ # another blank line
+ print(file=f_out)
+
+ # print local section
+ print("\tlocal: *;", file=f_out)
+
+ # end stable version
+ print("};", file=f_out)
+
+ # do we have experimental lines?
+ if not experimental_lines:
+ return
+
+ # another blank line
+ print(file=f_out)
+
+ # start experimental section
+ print("EXPERIMENTAL {", file=f_out)
+
+ # print all experimental lines as they were
+ for line in experimental_lines:
+ # don't print empty whitespace
+ if not line:
+ print("", file=f_out)
+ else:
+ print("\t{}".format(line), file=f_out)
+
+ # end section
+ print("};", file=f_out)
+
+
+def __main():
+ arg_parser = argparse.ArgumentParser(
+ description='Merge versions in linker version script.')
+
+ arg_parser.add_argument("map_file", type=str,
+ help='path to linker version script file '
+ '(pattern: *version.map)')
+ arg_parser.add_argument("abi_version", type=str,
+ help='target ABI version (pattern: MAJOR.MINOR)')
+
+ parsed = arg_parser.parse_args()
+
+ if not parsed.map_file.endswith('version.map'):
+ print("Invalid input file: {}".format(parsed.map_file),
+ file=sys.stderr)
+ arg_parser.print_help()
+ sys.exit(1)
+
+ if not re.match(r"\d{1,2}\.\d{1,2}", parsed.abi_version):
+ print("Invalid ABI version: {}".format(parsed.abi_version),
+ file=sys.stderr)
+ arg_parser.print_help()
+ sys.exit(1)
+
+ with open(parsed.map_file) as f_in:
+ stable_lines, experimental_lines = __parse_map_file(f_in)
+
+ with open(parsed.map_file, 'w') as f_out:
+ __regenerate_map_file(f_out, parsed.abi_version, stable_lines,
+ experimental_lines)
+
+
+if __name__ == "__main__":
+ __main()
--
2.17.1
^ permalink raw reply [relevance 14%]
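To illustrate the transformation the script above performs, here is a rough Python sketch of the flatten-and-regenerate logic, run on a hypothetical map file with two stable sections and one experimental section. The regexes are simplified versions of the ones in the patch; this is not the patch code itself:

```python
import re

# Hypothetical input: two stable ABI sections plus an experimental one.
OLD_MAP = """\
DPDK_17.05 {
    global:
    rte_lpm6_add;
    local: *;
};

DPDK_18.08 {
    global:
    rte_lpm6_lookup;
} DPDK_17.05;

EXPERIMENTAL {
    global:
    rte_lpm6_frob;
};
"""

def flatten(map_text, abi_version):
    # Simplified versions of the regexes used by the script above.
    func_re = re.compile(r"\s*(?P<func>\w+)\s*;\s*$")
    begin_re = re.compile(r"\s*(?P<version>[\w.]+)\s*\{\s*$")
    end_re = re.compile(r"\s*\}\s*(?P<parent>[\w.]+)?\s*;\s*$")

    stable = set()        # all stable functions, merged across sections
    experimental = []     # experimental section copied verbatim
    in_experimental = False
    for line in map_text.splitlines():
        line = line.strip()
        if end_re.match(line):
            in_experimental = False
            continue
        if in_experimental:
            experimental.append(line)
            continue
        m = begin_re.match(line)
        if m:
            in_experimental = m.group("version") == "EXPERIMENTAL"
            continue
        m = func_re.match(line)
        if m:
            stable.add(m.group("func"))

    # Regenerate: one stable section under the new ABI version.
    out = ["DPDK_{} {{".format(abi_version), "\tglobal:", ""]
    out += ["\t{};".format(func) for func in sorted(stable)]
    out += ["", "\tlocal: *;", "};"]
    if experimental:
        out += ["", "EXPERIMENTAL {"]
        out += ["\t{}".format(line) if line else "" for line in experimental]
        out.append("};")
    return "\n".join(out)

print(flatten(OLD_MAP, "20.0"))
```

The result keeps every stable symbol under a single DPDK_20.0 section, drops the per-release section names, and reprints the EXPERIMENTAL section unchanged.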
* [dpdk-dev] [PATCH v5 01/10] config: change ABI versioning to global
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
2019-10-24 9:46 8% ` [dpdk-dev] [PATCH v5 " Anatoly Burakov
@ 2019-10-24 9:46 7% ` Anatoly Burakov
2019-10-24 9:46 14% ` [dpdk-dev] [PATCH v5 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
` (8 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Thomas Monjalon, Bruce Richardson, john.mcnamara,
ray.kinsella, david.marchand, Pawel Modrak
From: Marcin Baran <marcinx.baran@intel.com>
As per the new ABI policy, all of the libraries are now versioned
using one global ABI version. Changes in this patch implement the
necessary steps to enable that.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v3:
- Removed Windows support from Makefile changes
- Removed unneeded path conversions from meson files
buildtools/meson.build | 2 ++
config/ABI_VERSION | 1 +
config/meson.build | 4 +++-
drivers/meson.build | 20 ++++++++++++--------
lib/meson.build | 18 +++++++++++-------
meson_options.txt | 2 --
mk/rte.lib.mk | 13 ++++---------
7 files changed, 33 insertions(+), 27 deletions(-)
create mode 100644 config/ABI_VERSION
diff --git a/buildtools/meson.build b/buildtools/meson.build
index 32c79c1308..78ce69977d 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -12,3 +12,5 @@ if python3.found()
else
map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
endif
+
+is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
diff --git a/config/ABI_VERSION b/config/ABI_VERSION
new file mode 100644
index 0000000000..9a7c1e503f
--- /dev/null
+++ b/config/ABI_VERSION
@@ -0,0 +1 @@
+20.0
diff --git a/config/meson.build b/config/meson.build
index acacba704a..40ad34345f 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -18,6 +18,8 @@ endforeach
# depending on the configuration options
pver = meson.project_version().split('.')
major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
+abi_version = run_command(find_program('cat', 'more'),
+ files('ABI_VERSION')).stdout().strip()
# extract all version information into the build configuration
dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
@@ -37,7 +39,7 @@ endif
pmd_subdir_opt = get_option('drivers_install_subdir')
if pmd_subdir_opt.contains('<VERSION>')
- pmd_subdir_opt = major_version.join(pmd_subdir_opt.split('<VERSION>'))
+ pmd_subdir_opt = abi_version.join(pmd_subdir_opt.split('<VERSION>'))
endif
driver_install_path = join_paths(get_option('libdir'), pmd_subdir_opt)
eal_pmd_path = join_paths(get_option('prefix'), driver_install_path)
diff --git a/drivers/meson.build b/drivers/meson.build
index 4a1cb8b5be..1c1190053e 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -119,12 +119,19 @@ foreach class:dpdk_driver_classes
output: out_filename,
depends: [pmdinfogen, tmp_lib])
- if get_option('per_library_versions')
- lib_version = '@0@.1'.format(version)
- so_version = '@0@'.format(version)
+ version_map = '@0@/@1@/@2@_version.map'.format(
+ meson.current_source_dir(),
+ drv_path, lib_name)
+
+ is_experimental = run_command(is_experimental_cmd,
+ files(version_map)).returncode()
+
+ if is_experimental != 0
+ lib_version = '0.1'
+ so_version = '0'
else
- lib_version = major_version
- so_version = major_version
+ lib_version = abi_version
+ so_version = abi_version
endif
# now build the static driver
@@ -137,9 +144,6 @@ foreach class:dpdk_driver_classes
install: true)
# now build the shared driver
- version_map = '@0@/@1@/@2@_version.map'.format(
- meson.current_source_dir(),
- drv_path, lib_name)
shared_lib = shared_library(lib_name,
sources,
objects: objs,
diff --git a/lib/meson.build b/lib/meson.build
index 8ea3671c04..6302c0b680 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -100,12 +100,18 @@ foreach l:libraries
cflags += '-DALLOW_EXPERIMENTAL_API'
endif
- if get_option('per_library_versions')
- lib_version = '@0@.1'.format(version)
- so_version = '@0@'.format(version)
+ version_map = '@0@/@1@/rte_@2@_version.map'.format(
+ meson.current_source_dir(), dir_name, name)
+
+ is_experimental = run_command(is_experimental_cmd,
+ files(version_map)).returncode()
+
+ if is_experimental != 0
+ lib_version = '0.1'
+ so_version = '0'
else
- lib_version = major_version
- so_version = major_version
+ lib_version = abi_version
+ so_version = abi_version
endif
# first build static lib
@@ -123,8 +129,6 @@ foreach l:libraries
# then use pre-build objects to build shared lib
sources = []
objs += static_lib.extract_all_objects(recursive: false)
- version_map = '@0@/@1@/rte_@2@_version.map'.format(
- meson.current_source_dir(), dir_name, name)
implib = dir_name + '.dll.a'
def_file = custom_target(name + '_def',
diff --git a/meson_options.txt b/meson_options.txt
index 89650b0e9c..da6a7f0302 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -30,8 +30,6 @@ option('max_lcores', type: 'integer', value: 128,
description: 'maximum number of cores/threads supported by EAL')
option('max_numa_nodes', type: 'integer', value: 4,
description: 'maximum number of NUMA nodes supported by EAL')
-option('per_library_versions', type: 'boolean', value: true,
- description: 'true: each lib gets its own version number, false: DPDK version used for each lib')
option('tests', type: 'boolean', value: true,
description: 'build unit tests')
option('use_hpet', type: 'boolean', value: false,
diff --git a/mk/rte.lib.mk b/mk/rte.lib.mk
index 4df8849a08..e1ea292b6e 100644
--- a/mk/rte.lib.mk
+++ b/mk/rte.lib.mk
@@ -11,20 +11,15 @@ EXTLIB_BUILD ?= n
# VPATH contains at least SRCDIR
VPATH += $(SRCDIR)
-ifneq ($(CONFIG_RTE_MAJOR_ABI),)
-ifneq ($(LIBABIVER),)
-LIBABIVER := $(CONFIG_RTE_MAJOR_ABI)
-endif
+ifneq ($(shell grep "^DPDK_" $(SRCDIR)/$(EXPORT_MAP)),)
+LIBABIVER := $(shell cat $(RTE_SRCDIR)/config/ABI_VERSION)
+else
+LIBABIVER := 0
endif
ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
LIB := $(patsubst %.a,%.so.$(LIBABIVER),$(LIB))
ifeq ($(EXTLIB_BUILD),n)
-ifeq ($(CONFIG_RTE_MAJOR_ABI),)
-ifeq ($(CONFIG_RTE_NEXT_ABI),y)
-LIB := $(LIB).1
-endif
-endif
CPU_LDFLAGS += --version-script=$(SRCDIR)/$(EXPORT_MAP)
endif
endif
--
2.17.1
^ permalink raw reply [relevance 7%]
* [dpdk-dev] [PATCH v5 00/10] Implement the new ABI policy and add helper scripts
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
@ 2019-10-24 9:46 8% ` Anatoly Burakov
2019-10-24 9:46 7% ` [dpdk-dev] [PATCH v5 01/10] config: change ABI versioning to global Anatoly Burakov
` (9 subsequent siblings)
10 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-24 9:46 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, ray.kinsella, bruce.richardson, thomas, david.marchand
This patchset prepares the codebase for the new ABI policy and
adds a few helper scripts.
There are two new scripts for managing ABI versions added. The
first one is a Python script that will read in a .map file,
flatten it and update the ABI version to the ABI version
specified on the command-line.
The second one is a shell script that will run the above-mentioned
Python script recursively over the source tree and set the ABI
version to either that which is defined in config/ABI_VERSION, or
a user-specified one.
Example of its usage: buildtools/update-abi.sh 20.0
This will recurse into lib/ and drivers/ directory and update
whatever .map files it can find.
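The flattening step can be illustrated with a short sketch (a hypothetical, simplified stand-in for update_version_map_abi.py - the regex-based parsing and the function name are assumptions, not the actual script):

```python
import re

def flatten_version_map(text, abi_version):
    """Collapse all non-EXPERIMENTAL version nodes of a GNU linker
    version script into a single DPDK_<abi_version> node."""
    symbols, experimental = [], []
    # A node looks like: NAME { global: sym1; ... local: *; } [PARENT];
    for m in re.finditer(r'([\w.]+)\s*\{([^}]*)\}[^;]*;', text):
        name, body = m.group(1), m.group(2)
        body = body.replace('global:', '').replace('local:', '')
        syms = [s.strip() for s in body.split(';')
                if s.strip() and s.strip() != '*']
        (experimental if name == 'EXPERIMENTAL' else symbols).extend(syms)
    out = 'DPDK_%s {\n\tglobal:\n\n' % abi_version
    out += ''.join('\t%s;\n' % s for s in sorted(set(symbols)))
    out += '\n\tlocal: *;\n};\n'
    if experimental:
        out += '\nEXPERIMENTAL {\n\tglobal:\n\n'
        out += ''.join('\t%s;\n' % s for s in sorted(set(experimental)))
        out += '};\n'
    return out
```

The real script also handles comments and preserves more of the file layout; this only shows the merge of historical version nodes into one node carrying the new ABI version.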
The other shell script that's added is one that can take in a .so
file and ensure that its declared public ABI matches either
current ABI, next ABI, or EXPERIMENTAL. This was moved to the
last commit because it made no sense to have it beforehand.
The source tree was verified to follow the new ABI policy using
the following command (assuming built binaries are in build/):
find ./build/lib ./build/drivers -name \*.so \
-exec ./buildtools/check-abi-version.sh {} \; -print
This returns 0.
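The check performed by check-abi-version.sh can be sketched as follows (a hypothetical simplification: in the real script the symbol/version pairs come from inspecting the .so with objdump/readelf, and the symbol names below are made up for illustration):

```python
def check_abi_versions(symbol_versions, abi_version):
    """Given (symbol, version-tag) pairs from a shared object, return
    the pairs whose tag is neither the expected ABI nor EXPERIMENTAL."""
    allowed = {'DPDK_%s' % abi_version, 'EXPERIMENTAL'}
    return [(sym, ver) for sym, ver in symbol_versions if ver not in allowed]
```

An empty result corresponds to the exit code 0 reported by the find command above.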
Changes since v4:
- Fixed shared library build issue for distributor
Changes since v3:
- Put distributor code back and cleaned it up
- Rebased on latest master and regenerated commit 9
Changes since v2:
- Addressed Bruce's review comments
- Removed single distributor mode as per Dave's suggestion
Changes since v1:
- Reordered patchset to have removal of old ABIs before introducing
the new one to avoid compile breakages between patches
- Added a new patch fixing missing symbol in octeontx common
- Split script commits into multiple commits and reordered them
- Re-generated the ABI bump commit
- Verified all scripts to work
Anatoly Burakov (2):
buildtools: add ABI update shell script
drivers/octeontx: add missing public symbol
Marcin Baran (6):
config: change ABI versioning to global
timer: remove deprecated code
lpm: remove deprecated code
distributor: remove deprecated code
distributor: rename v2.0 ABI to _single suffix
buildtools: add ABI versioning check script
Pawel Modrak (2):
buildtools: add script for updating symbols abi version
build: change ABI version to 20.0
buildtools/check-abi-version.sh | 54 +
buildtools/meson.build | 2 +
buildtools/update-abi.sh | 42 +
buildtools/update_version_map_abi.py | 170 +++
config/ABI_VERSION | 1 +
config/meson.build | 4 +-
.../rte_pmd_bbdev_fpga_lte_fec_version.map | 8 +-
.../null/rte_pmd_bbdev_null_version.map | 2 +-
.../rte_pmd_bbdev_turbo_sw_version.map | 2 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 115 +-
drivers/bus/fslmc/rte_bus_fslmc_version.map | 154 ++-
drivers/bus/ifpga/rte_bus_ifpga_version.map | 14 +-
drivers/bus/pci/rte_bus_pci_version.map | 2 +-
drivers/bus/vdev/rte_bus_vdev_version.map | 12 +-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 12 +-
drivers/common/cpt/rte_common_cpt_version.map | 4 +-
.../common/dpaax/rte_common_dpaax_version.map | 4 +-
.../common/mvep/rte_common_mvep_version.map | 6 +-
.../octeontx/rte_common_octeontx_version.map | 7 +-
.../rte_common_octeontx2_version.map | 16 +-
.../compress/isal/rte_pmd_isal_version.map | 2 +-
.../rte_pmd_octeontx_compress_version.map | 2 +-
drivers/compress/qat/rte_pmd_qat_version.map | 2 +-
.../compress/zlib/rte_pmd_zlib_version.map | 2 +-
.../aesni_gcm/rte_pmd_aesni_gcm_version.map | 2 +-
.../aesni_mb/rte_pmd_aesni_mb_version.map | 2 +-
.../crypto/armv8/rte_pmd_armv8_version.map | 2 +-
.../caam_jr/rte_pmd_caam_jr_version.map | 3 +-
drivers/crypto/ccp/rte_pmd_ccp_version.map | 3 +-
.../dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 10 +-
.../dpaa_sec/rte_pmd_dpaa_sec_version.map | 10 +-
.../crypto/kasumi/rte_pmd_kasumi_version.map | 2 +-
.../crypto/mvsam/rte_pmd_mvsam_version.map | 2 +-
.../crypto/nitrox/rte_pmd_nitrox_version.map | 2 +-
.../null/rte_pmd_null_crypto_version.map | 2 +-
.../rte_pmd_octeontx_crypto_version.map | 3 +-
.../openssl/rte_pmd_openssl_version.map | 2 +-
.../rte_pmd_crypto_scheduler_version.map | 19 +-
.../crypto/snow3g/rte_pmd_snow3g_version.map | 2 +-
.../virtio/rte_pmd_virtio_crypto_version.map | 2 +-
drivers/crypto/zuc/rte_pmd_zuc_version.map | 2 +-
.../event/dpaa/rte_pmd_dpaa_event_version.map | 3 +-
.../dpaa2/rte_pmd_dpaa2_event_version.map | 2 +-
.../event/dsw/rte_pmd_dsw_event_version.map | 2 +-
.../rte_pmd_octeontx_event_version.map | 2 +-
.../rte_pmd_octeontx2_event_version.map | 3 +-
.../event/opdl/rte_pmd_opdl_event_version.map | 2 +-
.../rte_pmd_skeleton_event_version.map | 3 +-
drivers/event/sw/rte_pmd_sw_event_version.map | 2 +-
.../bucket/rte_mempool_bucket_version.map | 3 +-
.../mempool/dpaa/rte_mempool_dpaa_version.map | 2 +-
.../dpaa2/rte_mempool_dpaa2_version.map | 12 +-
.../octeontx/rte_mempool_octeontx_version.map | 2 +-
.../rte_mempool_octeontx2_version.map | 4 +-
.../mempool/ring/rte_mempool_ring_version.map | 3 +-
.../stack/rte_mempool_stack_version.map | 3 +-
drivers/meson.build | 20 +-
.../af_packet/rte_pmd_af_packet_version.map | 3 +-
drivers/net/af_xdp/rte_pmd_af_xdp_version.map | 2 +-
drivers/net/ark/rte_pmd_ark_version.map | 5 +-
.../net/atlantic/rte_pmd_atlantic_version.map | 4 +-
drivers/net/avp/rte_pmd_avp_version.map | 2 +-
drivers/net/axgbe/rte_pmd_axgbe_version.map | 2 +-
drivers/net/bnx2x/rte_pmd_bnx2x_version.map | 3 +-
drivers/net/bnxt/rte_pmd_bnxt_version.map | 4 +-
drivers/net/bonding/rte_pmd_bond_version.map | 47 +-
drivers/net/cxgbe/rte_pmd_cxgbe_version.map | 3 +-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 11 +-
drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +-
drivers/net/e1000/rte_pmd_e1000_version.map | 3 +-
drivers/net/ena/rte_pmd_ena_version.map | 3 +-
drivers/net/enetc/rte_pmd_enetc_version.map | 3 +-
drivers/net/enic/rte_pmd_enic_version.map | 3 +-
.../net/failsafe/rte_pmd_failsafe_version.map | 3 +-
drivers/net/fm10k/rte_pmd_fm10k_version.map | 3 +-
drivers/net/hinic/rte_pmd_hinic_version.map | 3 +-
drivers/net/hns3/rte_pmd_hns3_version.map | 4 +-
drivers/net/i40e/rte_pmd_i40e_version.map | 65 +-
drivers/net/iavf/rte_pmd_iavf_version.map | 3 +-
drivers/net/ice/rte_pmd_ice_version.map | 3 +-
drivers/net/ifc/rte_pmd_ifc_version.map | 3 +-
drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 3 +-
drivers/net/ixgbe/rte_pmd_ixgbe_version.map | 62 +-
drivers/net/kni/rte_pmd_kni_version.map | 3 +-
.../net/liquidio/rte_pmd_liquidio_version.map | 3 +-
drivers/net/memif/rte_pmd_memif_version.map | 5 +-
drivers/net/mlx4/rte_pmd_mlx4_version.map | 3 +-
drivers/net/mlx5/rte_pmd_mlx5_version.map | 2 +-
drivers/net/mvneta/rte_pmd_mvneta_version.map | 2 +-
drivers/net/mvpp2/rte_pmd_mvpp2_version.map | 2 +-
drivers/net/netvsc/rte_pmd_netvsc_version.map | 4 +-
drivers/net/nfb/rte_pmd_nfb_version.map | 3 +-
drivers/net/nfp/rte_pmd_nfp_version.map | 2 +-
drivers/net/null/rte_pmd_null_version.map | 3 +-
.../net/octeontx/rte_pmd_octeontx_version.map | 10 +-
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +-
drivers/net/pcap/rte_pmd_pcap_version.map | 3 +-
drivers/net/qede/rte_pmd_qede_version.map | 3 +-
drivers/net/ring/rte_pmd_ring_version.map | 10 +-
drivers/net/sfc/rte_pmd_sfc_version.map | 3 +-
.../net/softnic/rte_pmd_softnic_version.map | 2 +-
.../net/szedata2/rte_pmd_szedata2_version.map | 2 +-
drivers/net/tap/rte_pmd_tap_version.map | 3 +-
.../net/thunderx/rte_pmd_thunderx_version.map | 3 +-
.../rte_pmd_vdev_netvsc_version.map | 3 +-
drivers/net/vhost/rte_pmd_vhost_version.map | 11 +-
drivers/net/virtio/rte_pmd_virtio_version.map | 3 +-
.../net/vmxnet3/rte_pmd_vmxnet3_version.map | 3 +-
.../rte_rawdev_dpaa2_cmdif_version.map | 3 +-
.../rte_rawdev_dpaa2_qdma_version.map | 4 +-
.../raw/ifpga/rte_rawdev_ifpga_version.map | 3 +-
drivers/raw/ioat/rte_rawdev_ioat_version.map | 3 +-
drivers/raw/ntb/rte_rawdev_ntb_version.map | 5 +-
.../rte_rawdev_octeontx2_dma_version.map | 3 +-
.../skeleton/rte_rawdev_skeleton_version.map | 3 +-
lib/librte_acl/rte_acl_version.map | 2 +-
lib/librte_bbdev/rte_bbdev_version.map | 4 +
.../rte_bitratestats_version.map | 2 +-
lib/librte_bpf/rte_bpf_version.map | 4 +
lib/librte_cfgfile/rte_cfgfile_version.map | 34 +-
lib/librte_cmdline/rte_cmdline_version.map | 10 +-
.../rte_compressdev_version.map | 4 +
.../rte_cryptodev_version.map | 102 +-
lib/librte_distributor/Makefile | 2 +-
lib/librte_distributor/meson.build | 2 +-
lib/librte_distributor/rte_distributor.c | 80 +-
.../rte_distributor_private.h | 10 +-
...ributor_v20.c => rte_distributor_single.c} | 57 +-
...ributor_v20.h => rte_distributor_single.h} | 26 +-
.../rte_distributor_v1705.h | 61 --
.../rte_distributor_version.map | 16 +-
lib/librte_eal/rte_eal_version.map | 310 ++----
lib/librte_efd/rte_efd_version.map | 2 +-
lib/librte_ethdev/rte_ethdev_version.map | 160 +--
lib/librte_eventdev/rte_eventdev_version.map | 130 +--
.../rte_flow_classify_version.map | 4 +
lib/librte_gro/rte_gro_version.map | 2 +-
lib/librte_gso/rte_gso_version.map | 2 +-
lib/librte_hash/rte_hash_version.map | 43 +-
lib/librte_ip_frag/rte_ip_frag_version.map | 10 +-
lib/librte_ipsec/rte_ipsec_version.map | 4 +
lib/librte_jobstats/rte_jobstats_version.map | 10 +-
lib/librte_kni/rte_kni_version.map | 2 +-
lib/librte_kvargs/rte_kvargs_version.map | 4 +-
.../rte_latencystats_version.map | 2 +-
lib/librte_lpm/rte_lpm.c | 996 +-----------------
lib/librte_lpm/rte_lpm.h | 88 --
lib/librte_lpm/rte_lpm6.c | 132 +--
lib/librte_lpm/rte_lpm6.h | 25 -
lib/librte_lpm/rte_lpm_version.map | 39 +-
lib/librte_mbuf/rte_mbuf_version.map | 49 +-
lib/librte_member/rte_member_version.map | 2 +-
lib/librte_mempool/rte_mempool_version.map | 44 +-
lib/librte_meter/rte_meter_version.map | 13 +-
lib/librte_metrics/rte_metrics_version.map | 2 +-
lib/librte_net/rte_net_version.map | 23 +-
lib/librte_pci/rte_pci_version.map | 2 +-
lib/librte_pdump/rte_pdump_version.map | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 36 +-
lib/librte_port/rte_port_version.map | 64 +-
lib/librte_power/rte_power_version.map | 24 +-
lib/librte_rawdev/rte_rawdev_version.map | 4 +-
lib/librte_rcu/rte_rcu_version.map | 4 +
lib/librte_reorder/rte_reorder_version.map | 8 +-
lib/librte_ring/rte_ring_version.map | 10 +-
lib/librte_sched/rte_sched_version.map | 14 +-
lib/librte_security/rte_security_version.map | 2 +-
lib/librte_stack/rte_stack_version.map | 4 +
lib/librte_table/rte_table_version.map | 2 +-
.../rte_telemetry_version.map | 4 +
lib/librte_timer/rte_timer.c | 90 +-
lib/librte_timer/rte_timer.h | 15 -
lib/librte_timer/rte_timer_version.map | 12 +-
lib/librte_vhost/rte_vhost_version.map | 52 +-
lib/meson.build | 18 +-
meson_options.txt | 2 -
mk/rte.lib.mk | 13 +-
177 files changed, 1141 insertions(+), 2912 deletions(-)
create mode 100755 buildtools/check-abi-version.sh
create mode 100755 buildtools/update-abi.sh
create mode 100755 buildtools/update_version_map_abi.py
create mode 100644 config/ABI_VERSION
rename lib/librte_distributor/{rte_distributor_v20.c => rte_distributor_single.c} (84%)
rename lib/librte_distributor/{rte_distributor_v20.h => rte_distributor_single.h} (89%)
delete mode 100644 lib/librte_distributor/rte_distributor_v1705.h
--
2.17.1
^ permalink raw reply [relevance 8%]
* Re: [dpdk-dev] [PATCH v2] ethdev: extend flow metadata
2019-10-24 6:49 3% ` Slava Ovsiienko
@ 2019-10-24 9:22 0% ` Olivier Matz
2019-10-24 12:30 0% ` Slava Ovsiienko
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2019-10-24 9:22 UTC (permalink / raw)
To: Slava Ovsiienko; +Cc: dev, Matan Azrad, Raslan Darawsheh, Thomas Monjalon
Hi Slava,
On Thu, Oct 24, 2019 at 06:49:41AM +0000, Slava Ovsiienko wrote:
> Hi, Olivier
>
> > > [snip]
> > >
> > > > > +int
> > > > > +rte_flow_dynf_metadata_register(void)
> > > > > +{
> > > > > + int offset;
> > > > > + int flag;
> > > > > +
> > > > > + static const struct rte_mbuf_dynfield desc_offs = {
> > > > > + .name = MBUF_DYNF_METADATA_NAME,
> > > > > + .size = MBUF_DYNF_METADATA_SIZE,
> > > > > + .align = MBUF_DYNF_METADATA_ALIGN,
> > > > > + .flags = MBUF_DYNF_METADATA_FLAGS,
> > > > > + };
> > > > > + static const struct rte_mbuf_dynflag desc_flag = {
> > > > > + .name = MBUF_DYNF_METADATA_NAME,
> > > > > + };
> > > >
> > > > I don't see think we need #defines.
> > > > You can directly use the name, sizeof() and __alignof__() here.
> > > > If the information is used externally, the structure shall be made
> > > > global non- static.
> > >
> > > The intention was to gather all dynamic field definitions in one
> > > place (in rte_mbuf_dyn.h).
> >
> > If the dynamic field is only going to be used inside rte_flow, I think there is no
> > need to expose it in rte_mbuf_dyn.h.
> > The other reason is I think the #define are just "passthrough", and do not
> > really bring added value, just an indirection.
> >
> > > It would be easy to see all fields at a glance (some might be shared,
> > > some might be mutually exclusive, estimate the mbuf space required by
> > > various features, etc.). So, we can't just fill structure fields with
> > > simple sizeof() and alignof() instead of definitions (the field
> > > parameters must be defined once).
> > >
> > > I do not see the reasons to make table global. I would prefer the
> > definitions.
> > > - the definitions are compile time processing (table fields are
> > > runtime), it provides code optimization and better performance.
> >
> > There is indeed no need to make the table global if the field is private to
> > rte_flow. About better performance, my understanding is that it would only
> > impact registration, am I missing something?
>
> OK, I thought about some opportunity to allow the application to register the
> field directly, bypassing rte_flow_dynf_metadata_register(). So either the
> definitions or the field description table were supposed to be global.
> I agree, let's not complicate the matter; I will make global the
> metadata field name definition only - in rte_mbuf_dyn.h, in order
> just to have some centralizing point.
By reading your mail, things are also clearer to me about which
parts need access to this field.
To summarize what I understand:
- dyn field registration is done in rte_flow lib when configuring
a flow using META
- the dynamic field will never be get/set in a mbuf by a PMD or rte_flow
before a flow using META is added
One question then: why would you need the dyn field name to be exported?
Does the PMD need to know if the field is registered with a lookup or
something like that? If yes, can you detail why?
>
> > >
> > > > > +
> > > > > + offset = rte_mbuf_dynfield_register(&desc_offs);
> > > > > + if (offset < 0)
> > > > > + goto error;
> > > > > + flag = rte_mbuf_dynflag_register(&desc_flag);
> > > > > + if (flag < 0)
> > > > > + goto error;
> > > > > + rte_flow_dynf_metadata_offs = offset;
> > > > > + rte_flow_dynf_metadata_mask = (1ULL << flag);
> > > > > + return 0;
> > > > > +
> > > > > +error:
> > > > > + rte_flow_dynf_metadata_offs = -1;
> > > > > + rte_flow_dynf_metadata_mask = 0ULL;
> > > > > + return -rte_errno;
> > > > > +}
> > > > > +
> > > > > static int
> > > > > flow_err(uint16_t port_id, int ret, struct rte_flow_error *error)
> > > > > { diff --git a/lib/librte_ethdev/rte_flow.h
> > > > > b/lib/librte_ethdev/rte_flow.h index 391a44a..a27e619 100644
> > > > > --- a/lib/librte_ethdev/rte_flow.h
> > > > > +++ b/lib/librte_ethdev/rte_flow.h
> > > > > @@ -27,6 +27,8 @@
> > > > > #include <rte_udp.h>
> > > > > #include <rte_byteorder.h>
> > > > > #include <rte_esp.h>
> > > > > +#include <rte_mbuf.h>
> > > > > +#include <rte_mbuf_dyn.h>
> > > > >
> > > > > #ifdef __cplusplus
> > > > > extern "C" {
> > > > > @@ -417,7 +419,8 @@ enum rte_flow_item_type {
> > > > > /**
> > > > > * [META]
> > > > > *
> > > > > - * Matches a metadata value specified in mbuf metadata field.
> > > > > + * Matches a metadata value.
> > > > > + *
> > > > > * See struct rte_flow_item_meta.
> > > > > */
> > > > > RTE_FLOW_ITEM_TYPE_META,
> > > > > @@ -1213,9 +1216,17 @@ struct
> > rte_flow_item_icmp6_nd_opt_tla_eth {
> > > > > #endif
> > > > >
> > > > > /**
> > > > > - * RTE_FLOW_ITEM_TYPE_META.
> > > > > + * @warning
> > > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > > > + notice
> > > > > *
> > > > > - * Matches a specified metadata value.
> > > > > + * RTE_FLOW_ITEM_TYPE_META
> > > > > + *
> > > > > + * Matches a specified metadata value. On egress, metadata can be
> > > > > + set either by
> > > > > + * mbuf tx_metadata field with PKT_TX_METADATA flag or
> > > > > + * RTE_FLOW_ACTION_TYPE_SET_META. On ingress,
> > > > > + RTE_FLOW_ACTION_TYPE_SET_META sets
> > > > > + * metadata for a packet and the metadata will be reported via
> > > > > + mbuf metadata
> > > > > + * dynamic field with PKT_RX_DYNF_METADATA flag. The dynamic
> > mbuf
> > > > > + field must be
> > > > > + * registered in advance by rte_flow_dynf_metadata_register().
> > > > > */
> > > > > struct rte_flow_item_meta {
> > > > > rte_be32_t data;
> > > > > @@ -1813,6 +1824,13 @@ enum rte_flow_action_type {
> > > > > * undefined behavior.
> > > > > */
> > > > > RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK,
> > > > > +
> > > > > + /**
> > > > > + * Set metadata on ingress or egress path.
> > > > > + *
> > > > > + * See struct rte_flow_action_set_meta.
> > > > > + */
> > > > > + RTE_FLOW_ACTION_TYPE_SET_META,
> > > > > };
> > > > >
> > > > > /**
> > > > > @@ -2300,6 +2318,43 @@ struct rte_flow_action_set_mac {
> > > > > uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; };
> > > > >
> > > > > +/**
> > > > > + * @warning
> > > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > > > +notice
> > > > > + *
> > > > > + * RTE_FLOW_ACTION_TYPE_SET_META
> > > > > + *
> > > > > + * Set metadata. Metadata set by mbuf tx_metadata field with
> > > > > + * PKT_TX_METADATA flag on egress will be overridden by this action.
> > > > > +On
> > > > > + * ingress, the metadata will be carried by mbuf metadata dynamic
> > > > > +field
> > > > > + * with PKT_RX_DYNF_METADATA flag if set. The dynamic mbuf field
> > > > > +must be
> > > > > + * registered in advance by rte_flow_dynf_metadata_register().
> > > > > + *
> > > > > + * Altering partial bits is supported with mask. For bits which
> > > > > +have never
> > > > > + * been set, unpredictable value will be seen depending on driver
> > > > > + * implementation. For loopback/hairpin packet, metadata set on
> > > > > +Rx/Tx may
> > > > > + * or may not be propagated to the other path depending on HW
> > > > capability.
> > > > > + *
> > > > > + * RTE_FLOW_ITEM_TYPE_META matches metadata.
> > > > > + */
> > > > > +struct rte_flow_action_set_meta {
> > > > > + rte_be32_t data;
> > > > > + rte_be32_t mask;
> > > > > +};
> > > > > +
> > > > > +/* Mbuf dynamic field offset for metadata. */ extern int
> > > > > +rte_flow_dynf_metadata_offs;
> > > > > +
> > > > > +/* Mbuf dynamic field flag mask for metadata. */ extern uint64_t
> > > > > +rte_flow_dynf_metadata_mask;
> > > > > +
> > > > > +/* Mbuf dynamic field pointer for metadata. */ #define
> > > > > +RTE_FLOW_DYNF_METADATA(m) \
> > > > > + RTE_MBUF_DYNFIELD((m), rte_flow_dynf_metadata_offs, uint32_t
> > > > *)
> > > > > +
> > > > > +/* Mbuf dynamic flag for metadata. */ #define
> > > > > +PKT_RX_DYNF_METADATA
> > > > > +(rte_flow_dynf_metadata_mask)
> > > > > +
> > > >
> > > > I wonder if helpers like this wouldn't be better, because they
> > > > combine the flag and the field:
> > > >
> > > > /**
> > > > * Set metadata dynamic field and flag in mbuf.
> > > > *
> > > > * rte_flow_dynf_metadata_register() must have been called first.
> > > > */
> > > > __rte_experimental
> > > > static inline void rte_mbuf_dyn_metadata_set(struct rte_mbuf *m,
> > > > uint32_t metadata) {
> > > > *RTE_MBUF_DYNFIELD(m, rte_flow_dynf_metadata_offs,
> > > > uint32_t *) = metadata;
> > > > m->ol_flags |= rte_flow_dynf_metadata_mask; }
> > > Setting the flag looks redundant.
> > > What if the driver just replaces the metadata and the flag is already set?
> > > The other option - the flags (for a set of fields) might be set in combination.
> > > The mbuf field is supposed to be engaged in the datapath, performance is very
> > > critical, and adding one more abstraction layer does not seem warranted.
> >
> > Ok, that was just a suggestion. Let's use your accessors if you fear a
> > performance impact.
> The simple example - mlx5 PMD has the rx_burst routine implemented
> with vector instructions, and it processes four packets at once. No need
> to check field availability four times, and storing the metadata
> is a subject for further optimization with vector instructions.
> It is a bit difficult to provide common helpers to handle the metadata
> field due to extremely high optimization requirements.
ok, got it
> > Nevertheless I suggest to use static inline functions in place of macros if
> > possible. For RTE_MBUF_DYNFIELD(), I used a macro because it's the only
> > way to provide a type to cast the result. But in your case, you know it's a
> > uint32_t *.
> What if one needs to specify the address of the field? A macro allows that,
> inline functions do not. Packets may be processed in unusual ways,
> for example in a batch, with vector instructions. OK, I'll provide
> the set/get routines, but I'm not sure whether I will use them in the mlx5 code.
> In my opinion it just obscures the nature of the field. A field is just a field; AFAIU
> that is the main idea of your patch - the way to handle a dynamic field should be close
> to handling usual static fields, I think. The macro pointer follows this approach,
> the routines do not.
Well, I just think that:
rte_mbuf_set_timestamp(m, 1234);
is more readable than:
*RTE_MBUF_TIMESTAMP(m) = 1234;
Anyway, in your case, if you need to use vector instructions in the PMD,
I guess you will directly use the offset.
> > > Also, metadata is not feature of mbuf. It should have rte_flow prefix.
> >
> > Yes, sure. The example derives from a test I've done, and I forgot to change
> > it.
> >
> >
> > > > /**
> > > > * Get metadata dynamic field value in mbuf.
> > > > *
> > > > * rte_flow_dynf_metadata_register() must have been called first.
> > > > */
> > > > __rte_experimental
> > > > static inline int rte_mbuf_dyn_metadata_get(const struct rte_mbuf *m,
> > > > uint32_t *metadata) {
> > > > if ((m->ol_flags & rte_flow_dynf_metadata_mask) == 0)
> > > > return -1;
> > > What if the metadata is 0xFFFFFFFF?
> > > The availability check might span a larger code block, so this
> > > might not be the best place to check availability.
> > >
> > > > *metadata = *RTE_MBUF_DYNFIELD(m,
> > rte_flow_dynf_metadata_offs,
> > > > uint32_t *);
> > > > return 0;
> > > > }
> > > >
> > > > /**
> > > > * Delete the metadata dynamic flag in mbuf.
> > > > *
> > > > * rte_flow_dynf_metadata_register() must have been called first.
> > > > */
> > > > __rte_experimental
> > > > static inline void rte_mbuf_dyn_metadata_del(struct rte_mbuf *m) {
> > > > m->ol_flags &= ~rte_flow_dynf_metadata_mask; }
> > > >
> > > Sorry, I do not see the practical use case for these helpers. In my opinion it
> > is just some kind of obscuration.
> > > They do replace the very simple code and introduce some risk of
> > performance impact.
> > >
> > > >
> > > > > /*
> > > > > * Definition of a single action.
> > > > > *
> > > > > @@ -2533,6 +2588,32 @@ enum rte_flow_conv_op { };
> > > > >
> > > > > /**
> > > > > + * Check if mbuf dynamic field for metadata is registered.
> > > > > + *
> > > > > + * @return
> > > > > + * True if registered, false otherwise.
> > > > > + */
> > > > > +__rte_experimental
> > > > > +static inline int
> > > > > +rte_flow_dynf_metadata_avail(void) {
> > > > > + return !!rte_flow_dynf_metadata_mask; }
> > > >
> > > > _registered() instead of _avail() ?
> > > Accepted, sounds better.
>
> Hmm, I changed my opinion - we already have
> rte_flow_dynf_metadata_register(void). Is it OK to have
> rte_flow_dynf_metadata_registerED(void) ?
> It would be easy to mistype.
what about xxx_is_registered() ?
if you feel it's too long, ok, let's keep avail()
>
> > >
> > > >
> > > > > +
> > > > > +/**
> > > > > + * Register mbuf dynamic field and flag for metadata.
> > > > > + *
> > > > > + * This function must be called prior to use SET_META action in
> > > > > +order to
> > > > > + * register the dynamic mbuf field. Otherwise, the data cannot be
> > > > > +delivered to
> > > > > + * application.
> > > > > + *
> > > > > + * @return
> > > > > + * 0 on success, a negative errno value otherwise and rte_errno is
> > set.
> > > > > + */
> > > > > +__rte_experimental
> > > > > +int
> > > > > +rte_flow_dynf_metadata_register(void);
> > > > > +
> > > > > +/**
> > > > > * Check whether a flow rule can be created on a given port.
> > > > > *
> > > > > * The flow rule is validated for correctness and whether it
> > > > > could be accepted diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > > b/lib/librte_mbuf/rte_mbuf_dyn.h index 6e2c816..4ff33ac 100644
> > > > > --- a/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > > +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > > @@ -160,4 +160,12 @@ int rte_mbuf_dynflag_lookup(const char
> > *name,
> > > > > */
> > > > > #define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m)
> > > > > +
> > > > > (offset)))
> > > > >
> > > > > +/**
> > > > > + * Flow metadata dynamic field definitions.
> > > > > + */
> > > > > +#define MBUF_DYNF_METADATA_NAME "flow-metadata"
> > > > > +#define MBUF_DYNF_METADATA_SIZE sizeof(uint32_t) #define
> > > > > +MBUF_DYNF_METADATA_ALIGN __alignof__(uint32_t) #define
> > > > > +MBUF_DYNF_METADATA_FLAGS 0
> > > >
> > > > If this flag is only to be used in rte_flow, it can stay in rte_flow.
> > > > The name should follow the function name conventions, I suggest
> > > > "rte_flow_metadata".
> > >
> > > The definitions:
> > > MBUF_DYNF_METADATA_NAME,
> > > MBUF_DYNF_METADATA_SIZE,
> > > MBUF_DYNF_METADATA_ALIGN
> > > are global. rte_flow proposes only minimal set tyo check and access
> > > the metadata. By knowing the field names applications would have the
> > > more flexibility in processing the fields, for example it allows to
> > > optimize the handling of multiple dynamic fields . The definition of
> > > metadata size allows to generate optimized code:
> > > #if MBUF_DYNF_METADATA_SIZE == sizeof(uint32)
> > > *RTE_MBUF_DYNFIELD(m) = get_metadata_32bit()
> > > #else
> > > *RTE_MBUF_DYNFIELD(m) = get_metadata_64bit()
> > > #endif
> >
> > I don't see any reason why the same dynamic field could have different sizes,
> > I even think it could be dangerous. Your accessors suppose that the
> > metadata is a uint32_t. Having a compile-time option for that does not look
> > desirable.
>
> I tried to provide maximal flexibility, and it was just an example of the thing
> we could do with global definitions. If you think we do not need it - OK,
> let's do things simpler.
>
> >
> > Just a side note: we have to take care when adding a new *public* dynamic
> > field that it won't change in the future: the accessors are macros or static
> > inline functions, so they are embedded in the binaries.
> > This is probably something we should discuss and may not be when updating
> > the dpdk (as shared lib).
>
> Yes, agreed - defines just will not work correctly and could even break the ABI.
> As we decided - global metadata defines MBUF_DYNF_METADATA_xxxx
> should be removed.
>
> >
> > > MBUF_DYNF_METADATA_FLAGS flag is not used by rte_flow, this flag is
> > > related exclusively to dynamic mbuf " Reserved for future use, must be 0".
> > > Would you like to drop this definition?
> > >
> > > >
> > > > If the flag is going to be used in several places in dpdk (rte_flow,
> > > > pmd, app, ...), I wonder if it shouldn't be defined it in rte_mbuf_dyn.c. I
> > mean:
> > > >
> > > > ====
> > > > /* rte_mbuf_dyn.c */
> > > > const struct rte_mbuf_dynfield rte_mbuf_dynfield_flow_metadata = {
> > > > ...
> > > > };
> > > In this case we would make this descriptor global.
> > > It is not needed, because no usage is supposed except by
> > > rte_flow_dynf_metadata_register() only. The
> >
> > Yes, in my example I wasn't sure it was going to be private to rte_flow (see
> > "If the flag is going to be used in several places in dpdk (rte_flow, pmd, app,
> > ...)").
> >
> > So yes, I agree the struct should remain private.
> OK.
>
> >
> >
> > > > int rte_mbuf_dynfield_flow_metadata_offset = -1; const struct
> > > > rte_mbuf_dynflag rte_mbuf_dynflag_flow_metadata = {
> > > > ...
> > > > };
> > > > int rte_mbuf_dynflag_flow_metadata_bitnum = -1;
> > > >
> > > > int rte_mbuf_dyn_flow_metadata_register(void)
> > > > {
> > > > ...
> > > > }
> > > >
> > > > /* rte_mbuf_dyn.h */
> > > > extern const struct rte_mbuf_dynfield
> > > > rte_mbuf_dynfield_flow_metadata; extern int
> > > > rte_mbuf_dynfield_flow_metadata_offset;
> > > > extern const struct rte_mbuf_dynflag rte_mbuf_dynflag_flow_metadata;
> > > > extern int rte_mbuf_dynflag_flow_metadata_bitnum;
> > > >
> > > > ...helpers to set/get metadata...
> > > > ===
> > > >
> > > > Centralizing the definitions of non-private dynamic fields/flags in
> > > > rte_mbuf_dyn may help other people to reuse a field that is well
> > > > described if it match their use-case.
> > >
> > > Yes, centralizing is important, that's why MBUF_DYNF_METADATA_xxx
> > > placed in rte_mbuf_dyn.h. Do you think we should share the descriptors
> > either?
> > > I have no idea why someone (but rte_flow_dynf_metadata_register())
> > > might register metadata field directly.
> >
> > If the field is private to rte_flow, yes, there is no need to share the "struct
> > rte_mbuf_dynfield". Even the rte_flow_dynf_metadata_register() could be
> > marked as internal, right?
> rte_flow_dynf_metadata_register() is intended to be called by application.
> Some applications might wish to engage metadata feature, some ones - not.
>
> >
> > One more question: I see the registration is done by
> > parse_vc_action_set_meta(). My understanding is that this function is not in
> > datapath, and is called when configuring rte_flow. Do you confirm?
> Rather, it is called to configure the application in general. If the user sets metadata
> (by issuing the appropriate command), it is assumed he/she would like
> the metadata on the Rx side as well. This is just for test purposes and it is not a brilliant
> example of an rte_flow_dynf_metadata_register() use case.
>
>
> >
> > > > In your case, what is carried by metadata? Could it be reused by
> > > > others? I think some more description is needed.
> > > In my case, metadata is just an opaque rte_flow-related 32-bit unsigned
> > > value provided by
> > > mlx5 hardware in the Rx datapath. I cannot guess whether someone wishes
> > to reuse.
> >
> > What is the user supposed to do with this value? If it is hw-specific data, I
> > think the name of the mbuf field should include "MLX", and it should be
> > described.
>
> Metadata are not HW-specific at all - they neither control nor are produced
> by HW (abstracting from the fact that the flow engine is implemented in HW).
> Metadata are some opaque data, a kind of link between the data
> path and the flow space. With metadata the application may provide some per-packet
> information to the flow engine and get back some information from the flow engine.
> It is a generic concept, supposed to be neither HW-related nor vendor-specific.
ok, understood, it's like a mark or tag.
> > Are these rte_flow actions somehow specific to mellanox drivers ?
>
> AFAIK, currently it is going to be supported by mlx5 PMD only,
> but concept is common and is not vendor specific.
>
> >
> > > Brief summary of your comment (just to make sure I understood your
> > proposal correctly):
> > > 1. drop all definitions MBUF_DYNF_METADATA_xxx, leave
> > > MBUF_DYNF_METADATA_NAME only 2. move the descriptor const struct
> > > rte_mbuf_dynfield desc_offs = {} to rte_mbuf_dyn.c and make it global
> > > 3. provide helpers to access metadata
> > >
> > > [1] and [2] look OK in general. Although I think they make the code less
> > flexible and restrict the potential compile-time options.
> > > For now it is a rather theoretical question; if you insist on your
> > > approach, please let me know, and I'll address [1] and [2] and update my
> > patch.
> >
> > [1] I think the #define only adds an indirection, and I didn't see any
> > perf constraint here.
> > [2] My previous comment was surely not clear, sorry. The code can stay
> > in rte_flow.
> >
> > > As for [3] - IMHO, the extra abstraction layer is not useful, and might be
> > even harmful.
> > > I tend not to complicate the code, at least, for now.
> >
> > [3] ok for me
> >
> >
> > Thanks,
> > Olivier
>
> With best regards, Slava
Thanks
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code
2019-10-21 13:24 3% ` Kevin Traynor
@ 2019-10-24 9:07 4% ` Burakov, Anatoly
0 siblings, 0 replies; 200+ results
From: Burakov, Anatoly @ 2019-10-24 9:07 UTC (permalink / raw)
To: Kevin Traynor, dev
Cc: Marcin Baran, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, bruce.richardson, thomas, david.marchand
On 21-Oct-19 2:24 PM, Kevin Traynor wrote:
> On 17/10/2019 15:31, Anatoly Burakov wrote:
>> From: Marcin Baran <marcinx.baran@intel.com>
>>
>> Remove code for old ABI versions ahead of ABI version bump.
>>
>
> I think there needs to be some doc updates for this.
>
> Looking at http://doc.dpdk.org/guides/rel_notes/deprecation.html there
> is nothing saying these functions are deprecated? (probably same issue
> for other 'remove deprecated code' patches)
>
> There should probably be an entry in the API/ABI changes section of the
> release notes too.
>
The new ABI policy implies such deprecation, because everything now
becomes one ABI version across the board. I'm not changing the API -
just removing the old ABI.
Regarding doc patches, I had a chat with John and he agreed that the doc
patches for this can come later.
--
Thanks,
Anatoly
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v8 00/13] vhost packed ring performance optimization
2019-10-24 8:24 0% ` Maxime Coquelin
@ 2019-10-24 8:29 0% ` Liu, Yong
0 siblings, 0 replies; 200+ results
From: Liu, Yong @ 2019-10-24 8:29 UTC (permalink / raw)
To: Maxime Coquelin, Bie, Tiwei, Wang, Zhihong, stephen, gavin.hu; +Cc: dev
Thanks, Maxime. Just sent out v9.
> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Thursday, October 24, 2019 4:25 PM
> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>; Wang,
> Zhihong <zhihong.wang@intel.com>; stephen@networkplumber.org;
> gavin.hu@arm.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v8 00/13] vhost packed ring performance optimization
>
>
>
> On 10/24/19 9:18 AM, Liu, Yong wrote:
> >
> >
> >> -----Original Message-----
> >> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> >> Sent: Thursday, October 24, 2019 2:50 PM
> >> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>;
> Wang,
> >> Zhihong <zhihong.wang@intel.com>; stephen@networkplumber.org;
> >> gavin.hu@arm.com
> >> Cc: dev@dpdk.org
> >> Subject: Re: [PATCH v8 00/13] vhost packed ring performance optimization
> >>
> >> I get some checkpatch warnings, and build fails with clang.
> >> Could you please fix these issues and send v9?
> >>
> >
> >
> > Hi Maxime,
> > The Clang build failure will be fixed in v9. As for the checkpatch warnings, they are due
> to the pragma string inside.
> > The previous version can avoid such warnings, but the format is a little messy,
> as below.
> > I prefer to keep the code clean and more readable. What do you think?
> >
> > +#ifdef UNROLL_PRAGMA_PARAM
> > +#define VHOST_UNROLL_PRAGMA(param) _Pragma(param)
> > +#else
> > +#define VHOST_UNROLL_PRAGMA(param) do {} while (0);
> > +#endif
> >
> > + VHOST_UNROLL_PRAGMA(UNROLL_PRAGMA_PARAM)
> > + for (i = 0; i < PACKED_BATCH_SIZE; i++)
>
> That's less clean indeed. I agree to waive the checkpatch errors.
> Just fix the Clang build for patch 8 and we're good.
>
> Thanks,
> Maxime
>
> > Regards,
> > Marvin
> >
> >> Thanks,
> >> Maxime
> >>
> >> ### [PATCH] vhost: try to unroll for each loop
> >>
> >> WARNING:CAMELCASE: Avoid CamelCase: <_Pragma>
> >> #78: FILE: lib/librte_vhost/vhost.h:47:
> >> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll
> >> 4") \
> >>
> >> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
> >> parenthesis
> >> #78: FILE: lib/librte_vhost/vhost.h:47:
> >> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll
> >> 4") \
> >> + for (iter = val; iter < size; iter++)
> >>
> >> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
> >> parenthesis
> >> #83: FILE: lib/librte_vhost/vhost.h:52:
> >> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("unroll 4")
> \
> >> + for (iter = val; iter < size; iter++)
> >>
> >> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
> >> parenthesis
> >> #88: FILE: lib/librte_vhost/vhost.h:57:
> >> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("unroll (4)")
> \
> >> + for (iter = val; iter < size; iter++)
> >>
> >> total: 3 errors, 1 warnings, 67 lines checked
> >>
>> 0/1 valid patch
>>
>> /tmp/dpdk_build/lib/librte_vhost/virtio_net.c:2065:1:
> >> error: unused function 'free_zmbuf' [-Werror,-Wunused-function]
> >> free_zmbuf(struct vhost_virtqueue *vq)
> >> ^
> >> 1 error generated.
> >> make[5]: *** [virtio_net.o] Error 1
> >> make[4]: *** [librte_vhost] Error 2
> >> make[4]: *** Waiting for unfinished jobs....
> >> make[3]: *** [lib] Error 2
> >> make[2]: *** [all] Error 2
> >> make[1]: *** [pre_install] Error 2
> >> make: *** [install] Error 2
> >>
> >>
> >> On 10/22/19 12:08 AM, Marvin Liu wrote:
> >>> Packed ring has more compact ring format and thus can significantly
> >>> reduce the number of cache miss. It can lead to better performance.
> >>> This has been approved in virtio user driver, on normal E5 Xeon cpu
> >>> single core performance can raise 12%.
> >>>
> >>> http://mails.dpdk.org/archives/dev/2018-April/095470.html
> >>>
> >>> However vhost performance with packed ring performance was decreased.
> >>> Through analysis, mostly extra cost was from the calculating of each
> >>> descriptor flag which depended on ring wrap counter. Moreover, both
> >>> frontend and backend need to write same descriptors which will cause
> >>> cache contention. Especially when doing vhost enqueue function, virtio
> >>> refill packed ring function may write same cache line when vhost doing
> >>> enqueue function. This kind of extra cache cost will reduce the benefit
> >>> of reducing cache misses.
> >>>
> >>> For optimizing vhost packed ring performance, vhost enqueue and dequeue
> >>> function will be split into fast and normal path.
> >>>
> >>> Several methods will be taken in fast path:
> >>> Handle descriptors in one cache line by batch.
> >>> Split loop function into more pieces and unroll them.
> >>> Prerequisite check that whether I/O space can copy directly into mbuf
> >>> space and vice versa.
> >>> Prerequisite check that whether descriptor mapping is successful.
> >>> Distinguish vhost used ring update function by enqueue and dequeue
> >>> function.
> >>> Buffer dequeue used descriptors as many as possible.
> >>> Update enqueue used descriptors by cache line.
> >>>
> >>> After all these methods done, single core vhost PvP performance with
> 64B
> >>> packet on Xeon 8180 can boost 35%.
> >>>
> >>> v8:
> >>> - Allocate mbuf by virtio_dev_pktmbuf_alloc
> >>>
> >>> v7:
> >>> - Rebase code
> >>> - Rename unroll macro and definitions
> >>> - Calculate flags when doing single dequeue
> >>>
> >>> v6:
> >>> - Fix dequeue zcopy result check
> >>>
> >>> v5:
> >>> - Remove disable sw prefetch as performance impact is small
> >>> - Change unroll pragma macro format
> >>> - Rename shadow counter elements names
> >>> - Clean dequeue update check condition
> >>> - Add inline functions replace of duplicated code
> >>> - Unify code style
> >>>
> >>> v4:
> >>> - Support meson build
> >>> - Remove memory region cache for no clear performance gain and ABI
> break
> >>> - Not assume ring size is power of two
> >>>
> >>> v3:
> >>> - Check available index overflow
> >>> - Remove dequeue remained descs number check
> >>> - Remove changes in split ring datapath
> >>> - Call memory write barriers once when updating used flags
> >>> - Rename some functions and macros
> >>> - Code style optimization
> >>>
> >>> v2:
> >>> - Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
> >>> - Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
> >>> - Optimize dequeue used ring update when in_order negotiated
> >>>
> >>>
> >>> Marvin Liu (13):
> >>> vhost: add packed ring indexes increasing function
> >>> vhost: add packed ring single enqueue
> >>> vhost: try to unroll for each loop
> >>> vhost: add packed ring batch enqueue
> >>> vhost: add packed ring single dequeue
> >>> vhost: add packed ring batch dequeue
> >>> vhost: flush enqueue updates by cacheline
> >>> vhost: flush batched enqueue descs directly
> >>> vhost: buffer packed ring dequeue updates
> >>> vhost: optimize packed ring enqueue
> >>> vhost: add packed ring zcopy batch and single dequeue
> >>> vhost: optimize packed ring dequeue
> >>> vhost: optimize packed ring dequeue when in-order
> >>>
> >>> lib/librte_vhost/Makefile | 18 +
> >>> lib/librte_vhost/meson.build | 7 +
> >>> lib/librte_vhost/vhost.h | 57 ++
> >>> lib/librte_vhost/virtio_net.c | 948 +++++++++++++++++++++++++++-------
> >>> 4 files changed, 837 insertions(+), 193 deletions(-)
> >>>
> >
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v9 00/13] vhost packed ring performance optimization
2019-10-21 22:08 3% ` [dpdk-dev] [PATCH v8 " Marvin Liu
2019-10-24 6:49 0% ` Maxime Coquelin
@ 2019-10-24 16:08 3% ` Marvin Liu
2019-10-24 10:18 0% ` Maxime Coquelin
1 sibling, 1 reply; 200+ results
From: Marvin Liu @ 2019-10-24 16:08 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie, zhihong.wang, stephen, gavin.hu
Cc: dev, Marvin Liu
The packed ring has a more compact ring format and thus can significantly
reduce the number of cache misses, which can lead to better performance.
This has been proven in the virtio user driver: on a normal E5 Xeon CPU,
single-core performance can rise by 12%.
http://mails.dpdk.org/archives/dev/2018-April/095470.html
However, vhost performance with the packed ring was decreased.
Through analysis, most of the extra cost came from calculating each
descriptor flag, which depends on the ring wrap counter. Moreover, both
frontend and backend need to write the same descriptors, which causes
cache contention. Especially in the vhost enqueue function, the virtio
packed ring refill function may write the same cache line while vhost is
enqueueing. This kind of extra cache cost reduces the benefit
of reducing cache misses.
To optimize vhost packed ring performance, the vhost enqueue and dequeue
functions will be split into fast and normal paths.
Several methods are taken in the fast path:
Handle descriptors in one cache line by batch.
Split loop functions into more pieces and unroll them.
Check beforehand whether I/O space can be copied directly into mbuf
space and vice versa.
Check beforehand whether descriptor mapping is successful.
Distinguish the vhost used ring update function by enqueue and dequeue
function.
Buffer dequeue used descriptors as many as possible.
Update enqueue used descriptors by cache line.
After all these methods are applied, single-core vhost PvP performance with
64B packets on a Xeon 8180 can be boosted by 35%.
v9:
- Fix clang build error
v8:
- Allocate mbuf by virtio_dev_pktmbuf_alloc
v7:
- Rebase code
- Rename unroll macro and definitions
- Calculate flags when doing single dequeue
v6:
- Fix dequeue zcopy result check
v5:
- Remove disable sw prefetch as performance impact is small
- Change unroll pragma macro format
- Rename shadow counter elements names
- Clean dequeue update check condition
- Add inline functions replace of duplicated code
- Unify code style
v4:
- Support meson build
- Remove memory region cache for no clear performance gain and ABI break
- Not assume ring size is power of two
v3:
- Check available index overflow
- Remove dequeue remained descs number check
- Remove changes in split ring datapath
- Call memory write barriers once when updating used flags
- Rename some functions and macros
- Code style optimization
v2:
- Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
- Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
- Optimize dequeue used ring update when in_order negotiated
Marvin Liu (13):
vhost: add packed ring indexes increasing function
vhost: add packed ring single enqueue
vhost: try to unroll for each loop
vhost: add packed ring batch enqueue
vhost: add packed ring single dequeue
vhost: add packed ring batch dequeue
vhost: flush enqueue updates by cacheline
vhost: flush batched enqueue descs directly
vhost: buffer packed ring dequeue updates
vhost: optimize packed ring enqueue
vhost: add packed ring zcopy batch and single dequeue
vhost: optimize packed ring dequeue
vhost: optimize packed ring dequeue when in-order
lib/librte_vhost/Makefile | 18 +
lib/librte_vhost/meson.build | 7 +
lib/librte_vhost/vhost.h | 57 ++
lib/librte_vhost/virtio_net.c | 948 +++++++++++++++++++++++++++-------
4 files changed, 837 insertions(+), 193 deletions(-)
--
2.17.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v8 00/13] vhost packed ring performance optimization
2019-10-24 7:18 0% ` Liu, Yong
@ 2019-10-24 8:24 0% ` Maxime Coquelin
2019-10-24 8:29 0% ` Liu, Yong
0 siblings, 1 reply; 200+ results
From: Maxime Coquelin @ 2019-10-24 8:24 UTC (permalink / raw)
To: Liu, Yong, Bie, Tiwei, Wang, Zhihong, stephen, gavin.hu; +Cc: dev
On 10/24/19 9:18 AM, Liu, Yong wrote:
>
>
>> -----Original Message-----
>> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
>> Sent: Thursday, October 24, 2019 2:50 PM
>> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>; Wang,
>> Zhihong <zhihong.wang@intel.com>; stephen@networkplumber.org;
>> gavin.hu@arm.com
>> Cc: dev@dpdk.org
>> Subject: Re: [PATCH v8 00/13] vhost packed ring performance optimization
>>
>> I get some checkpatch warnings, and build fails with clang.
>> Could you please fix these issues and send v9?
>>
>
>
> Hi Maxime,
> The Clang build failure will be fixed in v9. As for the checkpatch warnings, they are due to the pragma string inside.
> The previous version can avoid such warnings, but the format is a little messy, as below.
> I prefer to keep the code clean and more readable. What do you think?
>
> +#ifdef UNROLL_PRAGMA_PARAM
> +#define VHOST_UNROLL_PRAGMA(param) _Pragma(param)
> +#else
> +#define VHOST_UNROLL_PRAGMA(param) do {} while (0);
> +#endif
>
> + VHOST_UNROLL_PRAGMA(UNROLL_PRAGMA_PARAM)
> + for (i = 0; i < PACKED_BATCH_SIZE; i++)
That's less clean indeed. I agree to waive the checkpatch errors.
Just fix the Clang build for patch 8 and we're good.
Thanks,
Maxime
> Regards,
> Marvin
>
>> Thanks,
>> Maxime
>>
>> ### [PATCH] vhost: try to unroll for each loop
>>
>> WARNING:CAMELCASE: Avoid CamelCase: <_Pragma>
>> #78: FILE: lib/librte_vhost/vhost.h:47:
>> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll
>> 4") \
>>
>> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
>> parenthesis
>> #78: FILE: lib/librte_vhost/vhost.h:47:
>> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll
>> 4") \
>> + for (iter = val; iter < size; iter++)
>>
>> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
>> parenthesis
>> #83: FILE: lib/librte_vhost/vhost.h:52:
>> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("unroll 4") \
>> + for (iter = val; iter < size; iter++)
>>
>> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
>> parenthesis
>> #88: FILE: lib/librte_vhost/vhost.h:57:
>> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("unroll (4)") \
>> + for (iter = val; iter < size; iter++)
>>
>> total: 3 errors, 1 warnings, 67 lines checked
>>
>> 0/1 valid patch
>>
>> /tmp/dpdk_build/lib/librte_vhost/virtio_net.c:2065:1:
>> error: unused function 'free_zmbuf' [-Werror,-Wunused-function]
>> free_zmbuf(struct vhost_virtqueue *vq)
>> ^
>> 1 error generated.
>> make[5]: *** [virtio_net.o] Error 1
>> make[4]: *** [librte_vhost] Error 2
>> make[4]: *** Waiting for unfinished jobs....
>> make[3]: *** [lib] Error 2
>> make[2]: *** [all] Error 2
>> make[1]: *** [pre_install] Error 2
>> make: *** [install] Error 2
>>
>>
>> On 10/22/19 12:08 AM, Marvin Liu wrote:
>>> Packed ring has more compact ring format and thus can significantly
>>> reduce the number of cache miss. It can lead to better performance.
>>> This has been approved in virtio user driver, on normal E5 Xeon cpu
>>> single core performance can raise 12%.
>>>
>>> http://mails.dpdk.org/archives/dev/2018-April/095470.html
>>>
>>> However vhost performance with packed ring performance was decreased.
>>> Through analysis, mostly extra cost was from the calculating of each
>>> descriptor flag which depended on ring wrap counter. Moreover, both
>>> frontend and backend need to write same descriptors which will cause
>>> cache contention. Especially when doing vhost enqueue function, virtio
>>> refill packed ring function may write same cache line when vhost doing
>>> enqueue function. This kind of extra cache cost will reduce the benefit
>>> of reducing cache misses.
>>>
>>> For optimizing vhost packed ring performance, vhost enqueue and dequeue
>>> function will be split into fast and normal path.
>>>
>>> Several methods will be taken in fast path:
>>> Handle descriptors in one cache line by batch.
>>> Split loop function into more pieces and unroll them.
>>> Prerequisite check that whether I/O space can copy directly into mbuf
>>> space and vice versa.
>>> Prerequisite check that whether descriptor mapping is successful.
>>> Distinguish vhost used ring update function by enqueue and dequeue
>>> function.
>>> Buffer dequeue used descriptors as many as possible.
>>> Update enqueue used descriptors by cache line.
>>>
>>> After all these methods done, single core vhost PvP performance with 64B
>>> packet on Xeon 8180 can boost 35%.
>>>
>>> v8:
>>> - Allocate mbuf by virtio_dev_pktmbuf_alloc
>>>
>>> v7:
>>> - Rebase code
>>> - Rename unroll macro and definitions
>>> - Calculate flags when doing single dequeue
>>>
>>> v6:
>>> - Fix dequeue zcopy result check
>>>
>>> v5:
>>> - Remove disable sw prefetch as performance impact is small
>>> - Change unroll pragma macro format
>>> - Rename shadow counter elements names
>>> - Clean dequeue update check condition
>>> - Add inline functions replace of duplicated code
>>> - Unify code style
>>>
>>> v4:
>>> - Support meson build
>>> - Remove memory region cache for no clear performance gain and ABI break
>>> - Not assume ring size is power of two
>>>
>>> v3:
>>> - Check available index overflow
>>> - Remove dequeue remained descs number check
>>> - Remove changes in split ring datapath
>>> - Call memory write barriers once when updating used flags
>>> - Rename some functions and macros
>>> - Code style optimization
>>>
>>> v2:
>>> - Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
>>> - Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
>>> - Optimize dequeue used ring update when in_order negotiated
>>>
>>>
>>> Marvin Liu (13):
>>> vhost: add packed ring indexes increasing function
>>> vhost: add packed ring single enqueue
>>> vhost: try to unroll for each loop
>>> vhost: add packed ring batch enqueue
>>> vhost: add packed ring single dequeue
>>> vhost: add packed ring batch dequeue
>>> vhost: flush enqueue updates by cacheline
>>> vhost: flush batched enqueue descs directly
>>> vhost: buffer packed ring dequeue updates
>>> vhost: optimize packed ring enqueue
>>> vhost: add packed ring zcopy batch and single dequeue
>>> vhost: optimize packed ring dequeue
>>> vhost: optimize packed ring dequeue when in-order
>>>
>>> lib/librte_vhost/Makefile | 18 +
>>> lib/librte_vhost/meson.build | 7 +
>>> lib/librte_vhost/vhost.h | 57 ++
>>> lib/librte_vhost/virtio_net.c | 948 +++++++++++++++++++++++++++-------
>>> 4 files changed, 837 insertions(+), 193 deletions(-)
>>>
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v3] mbuf: support dynamic fields and flags
2019-10-17 14:42 3% ` [dpdk-dev] [PATCH v2] " Olivier Matz
@ 2019-10-24 8:13 3% ` Olivier Matz
2019-10-24 16:40 0% ` Thomas Monjalon
2019-10-26 12:39 3% ` [dpdk-dev] [PATCH v4] " Olivier Matz
3 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2019-10-24 8:13 UTC (permalink / raw)
To: dev
Cc: Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Shahaf Shuler, Stephen Hemminger,
Thomas Monjalon, Slava Ovsiienko
Many features require storing data inside the mbuf. As the room in the mbuf
structure is limited, it is not possible to have a field for each
feature. Also, changing fields in the mbuf structure can break the API
or ABI.
This commit addresses these issues, by enabling the dynamic registration
of fields or flags:
- a dynamic field is a named area in the rte_mbuf structure, with a
given size (>= 1 byte) and alignment constraint.
- a dynamic flag is a named bit in the rte_mbuf structure.
The typical use case is a PMD that registers space for an offload
feature, when the application requests to enable this feature. As
the space in mbuf is limited, the space should only be reserved if it
is going to be used (i.e. when the application explicitly asks for it).
The registration can be done at any moment, but it is not possible
to unregister fields or flags.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
v3
* define mark_free() macro outside the init_shared_mem() function
(Konstantin)
* better document automatic field placement (Konstantin)
* introduce RTE_SIZEOF_FIELD() to get the size of a field in
a structure (Haiyue)
* fix api doc generation (Slava)
* document dynamic field and flags naming conventions
v2
* Rebase on top of master: solve conflict with Stephen's patchset
(packet copy)
* Add new apis to register a dynamic field/flag at a specific place
* Add a dump function (sugg by David)
* Enhance field registration function to select the best offset, keeping
large aligned zones as much as possible (sugg by Konstantin)
* Use a size_t and unsigned int instead of int when relevant
(sugg by Konstantin)
* Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
(sugg by Konstantin)
* Remove unused argument in private function (sugg by Konstantin)
* Fix and simplify locking (sugg by Konstantin)
* Fix minor typo
rfc -> v1
* Rebase on top of master
* Change registration API to use a structure instead of
variables, getting rid of #defines (Stephen's comment)
* Update flag registration to use a similar API as fields.
* Change max name length from 32 to 64 (sugg. by Thomas)
* Enhance API documentation (Haiyue's and Andrew's comments)
* Add a debug log at registration
* Add some words in release note
* Did some performance tests (sugg. by Andrew):
On my platform, reading a dynamic field takes ~3 cycles more
than a static field, and ~2 cycles more for writing.
app/test/test_mbuf.c | 145 +++++-
doc/guides/rel_notes/release_19_11.rst | 7 +
lib/librte_eal/common/include/rte_common.h | 12 +
lib/librte_mbuf/Makefile | 2 +
lib/librte_mbuf/meson.build | 6 +-
lib/librte_mbuf/rte_mbuf.h | 23 +-
lib/librte_mbuf/rte_mbuf_dyn.c | 553 +++++++++++++++++++++
lib/librte_mbuf/rte_mbuf_dyn.h | 239 +++++++++
lib/librte_mbuf/rte_mbuf_version.map | 7 +
9 files changed, 989 insertions(+), 5 deletions(-)
create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index c21ef64c8..e9be430af 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -29,6 +29,7 @@
#include <rte_random.h>
#include <rte_cycles.h>
#include <rte_malloc.h>
+#include <rte_mbuf_dyn.h>
#include "test.h"
@@ -661,7 +662,6 @@ test_attach_from_different_pool(struct rte_mempool *pktmbuf_pool,
rte_pktmbuf_free(clone2);
return -1;
}
-#undef GOTO_FAIL
/*
* test allocation and free of mbufs
@@ -1449,6 +1449,143 @@ test_tx_offload(void)
return (v1 == v2) ? 0 : -EINVAL;
}
+static int
+test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
+{
+ const struct rte_mbuf_dynfield dynfield = {
+ .name = "test-dynfield",
+ .size = sizeof(uint8_t),
+ .align = __alignof__(uint8_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield2 = {
+ .name = "test-dynfield2",
+ .size = sizeof(uint16_t),
+ .align = __alignof__(uint16_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield3 = {
+ .name = "test-dynfield3",
+ .size = sizeof(uint8_t),
+ .align = __alignof__(uint8_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield_fail_big = {
+ .name = "test-dynfield-fail-big",
+ .size = 256,
+ .align = 1,
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield_fail_align = {
+ .name = "test-dynfield-fail-align",
+ .size = 1,
+ .align = 3,
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag = {
+ .name = "test-dynflag",
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag2 = {
+ .name = "test-dynflag2",
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag3 = {
+ .name = "test-dynflag3",
+ .flags = 0,
+ };
+ struct rte_mbuf *m = NULL;
+ int offset, offset2, offset3;
+ int flag, flag2, flag3;
+ int ret;
+
+ printf("Test mbuf dynamic fields and flags\n");
+ rte_mbuf_dyn_dump(stdout);
+
+ offset = rte_mbuf_dynfield_register(&dynfield);
+ if (offset == -1)
+ GOTO_FAIL("failed to register dynamic field, offset=%d: %s",
+ offset, strerror(errno));
+
+ ret = rte_mbuf_dynfield_register(&dynfield);
+ if (ret != offset)
+ GOTO_FAIL("failed to lookup dynamic field, ret=%d: %s",
+ ret, strerror(errno));
+
+ offset2 = rte_mbuf_dynfield_register(&dynfield2);
+ if (offset2 == -1 || offset2 == offset || (offset2 & 1))
+ GOTO_FAIL("failed to register dynamic field 2, offset2=%d: %s",
+ offset2, strerror(errno));
+
+ offset3 = rte_mbuf_dynfield_register_offset(&dynfield3,
+ offsetof(struct rte_mbuf, dynfield1[1]));
+ if (offset3 != offsetof(struct rte_mbuf, dynfield1[1]))
+ GOTO_FAIL("failed to register dynamic field 3, offset=%d: %s",
+ offset3, strerror(errno));
+
+ printf("dynfield: offset=%d, offset2=%d, offset3=%d\n",
+ offset, offset2, offset3);
+
+ ret = rte_mbuf_dynfield_register(&dynfield_fail_big);
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (too big)");
+
+ ret = rte_mbuf_dynfield_register(&dynfield_fail_align);
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (bad alignment)");
+
+ ret = rte_mbuf_dynfield_register_offset(&dynfield_fail_align,
+ offsetof(struct rte_mbuf, ol_flags));
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (not avail)");
+
+ flag = rte_mbuf_dynflag_register(&dynflag);
+ if (flag == -1)
+ GOTO_FAIL("failed to register dynamic flag, flag=%d: %s",
+ flag, strerror(errno));
+
+ ret = rte_mbuf_dynflag_register(&dynflag);
+ if (ret != flag)
+ GOTO_FAIL("failed to lookup dynamic flag, ret=%d: %s",
+ ret, strerror(errno));
+
+ flag2 = rte_mbuf_dynflag_register(&dynflag2);
+ if (flag2 == -1 || flag2 == flag)
+ GOTO_FAIL("failed to register dynamic flag 2, flag2=%d: %s",
+ flag2, strerror(errno));
+
+ flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
+ rte_bsf64(PKT_LAST_FREE));
+ if (flag3 != rte_bsf64(PKT_LAST_FREE))
+ GOTO_FAIL("failed to register dynamic flag 3, flag2=%d: %s",
+ flag3, strerror(errno));
+
+ printf("dynflag: flag=%d, flag2=%d, flag3=%d\n", flag, flag2, flag3);
+
+ /* set, get dynamic field */
+ m = rte_pktmbuf_alloc(pktmbuf_pool);
+ if (m == NULL)
+ GOTO_FAIL("Cannot allocate mbuf");
+
+ *RTE_MBUF_DYNFIELD(m, offset, uint8_t *) = 1;
+ if (*RTE_MBUF_DYNFIELD(m, offset, uint8_t *) != 1)
+ GOTO_FAIL("failed to read dynamic field");
+ *RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) = 1000;
+ if (*RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) != 1000)
+ GOTO_FAIL("failed to read dynamic field");
+
+ /* set a dynamic flag */
+ m->ol_flags |= (1ULL << flag);
+
+ rte_mbuf_dyn_dump(stdout);
+ rte_pktmbuf_free(m);
+ return 0;
+fail:
+ rte_pktmbuf_free(m);
+ return -1;
+}
+#undef GOTO_FAIL
+
static int
test_mbuf(void)
{
@@ -1468,6 +1605,12 @@ test_mbuf(void)
goto err;
}
+ /* test registration of dynamic fields and flags */
+ if (test_mbuf_dyn(pktmbuf_pool) < 0) {
+ printf("mbuf dynflag test failed\n");
+ goto err;
+ }
+
/* create a specific pktmbuf pool with a priv_size != 0 and no data
* room size */
pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 856088c5c..b7511a6dc 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -21,6 +21,13 @@ DPDK Release 19.11
xdg-open build/doc/html/guides/rel_notes/release_19_11.html
+* **Add support of dynamic fields and flags in mbuf.**
+
+ This new feature adds the ability to dynamically register some room
+ for a field or a flag in the mbuf structure. This is typically used
+ for specific offload features, where adding a static field or flag
+ in the mbuf is not justified.
+
New Features
------------
diff --git a/lib/librte_eal/common/include/rte_common.h b/lib/librte_eal/common/include/rte_common.h
index 05a3a6401..6660c77e4 100644
--- a/lib/librte_eal/common/include/rte_common.h
+++ b/lib/librte_eal/common/include/rte_common.h
@@ -630,6 +630,18 @@ rte_log2_u64(uint64_t v)
})
#endif
+/**
+ * Get the size of a field in a structure.
+ *
+ * @param type
+ * The type of the structure.
+ * @param field
+ * The field in the structure.
+ * @return
+ * The size of the field in the structure, in bytes.
+ */
+#define RTE_SIZEOF_FIELD(type, field) (sizeof(((type *)0)->field))
+
#define _RTE_STR(x) #x
/** Take a macro value and get a string version of it */
#define RTE_STR(x) _RTE_STR(x)
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
index c8f6d2689..5a9bcee73 100644
--- a/lib/librte_mbuf/Makefile
+++ b/lib/librte_mbuf/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 5
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c rte_mbuf_pool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF) += rte_mbuf_dyn.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h rte_mbuf_ptype.h rte_mbuf_pool_ops.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_dyn.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build
index 6cc11ebb4..9137e8f26 100644
--- a/lib/librte_mbuf/meson.build
+++ b/lib/librte_mbuf/meson.build
@@ -2,8 +2,10 @@
# Copyright(c) 2017 Intel Corporation
version = 5
-sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c')
-headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
+sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c',
+ 'rte_mbuf_dyn.c')
+headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h',
+ 'rte_mbuf_dyn.h')
deps += ['mempool']
allow_experimental_apis = true
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index b1a92b17a..7567b6ff3 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -198,9 +198,12 @@ extern "C" {
#define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
#define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
-/* add new RX flags here */
+/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
-/* add new TX flags here */
+#define PKT_FIRST_FREE (1ULL << 23)
+#define PKT_LAST_FREE (1ULL << 39)
+
+/* add new TX flags here, don't forget to update PKT_LAST_FREE */
/**
* Indicate that the metadata field in the mbuf is in use.
@@ -738,6 +741,7 @@ struct rte_mbuf {
*/
struct rte_mbuf_ext_shared_info *shinfo;
+ uint64_t dynfield1[2]; /**< Reserved for dynamic fields. */
} __rte_cache_aligned;
/**
@@ -1684,6 +1688,20 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
*/
#define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
+/**
+ * Copy dynamic fields from msrc to mdst.
+ *
+ * @param mdst
+ * The destination mbuf.
+ * @param msrc
+ * The source mbuf.
+ */
+static inline void
+rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
+{
+ memcpy(&mdst->dynfield1, msrc->dynfield1, sizeof(mdst->dynfield1));
+}
+
/* internal */
static inline void
__rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
@@ -1695,6 +1713,7 @@ __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
mdst->hash = msrc->hash;
mdst->packet_type = msrc->packet_type;
mdst->timestamp = msrc->timestamp;
+ rte_mbuf_dynfield_copy(mdst, msrc);
}
/**
diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
new file mode 100644
index 000000000..d6931f847
--- /dev/null
+++ b/lib/librte_mbuf/rte_mbuf_dyn.c
@@ -0,0 +1,553 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019 6WIND S.A.
+ */
+
+#include <sys/queue.h>
+#include <stdint.h>
+#include <limits.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_tailq.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
+
+struct mbuf_dynfield_elt {
+ TAILQ_ENTRY(mbuf_dynfield_elt) next;
+ struct rte_mbuf_dynfield params;
+ size_t offset;
+};
+TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
+
+static struct rte_tailq_elem mbuf_dynfield_tailq = {
+ .name = "RTE_MBUF_DYNFIELD",
+};
+EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
+
+struct mbuf_dynflag_elt {
+ TAILQ_ENTRY(mbuf_dynflag_elt) next;
+ struct rte_mbuf_dynflag params;
+ unsigned int bitnum;
+};
+TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
+
+static struct rte_tailq_elem mbuf_dynflag_tailq = {
+ .name = "RTE_MBUF_DYNFLAG",
+};
+EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
+
+struct mbuf_dyn_shm {
+ /**
+ * For each mbuf byte, free_space[i] != 0 if space is free.
+ * The value is the size of the biggest aligned element that
+ * can fit in the zone.
+ */
+ uint8_t free_space[sizeof(struct rte_mbuf)];
+ /** Bitfield of available flags. */
+ uint64_t free_flags;
+};
+static struct mbuf_dyn_shm *shm;
+
+/* Set the value of free_space[] according to the size and alignment of
+ * the free areas. This helps to select the best place when reserving a
+ * dynamic field. Assume tailq is locked.
+ */
+static void
+process_score(void)
+{
+ size_t off, align, size, i;
+
+ /* first, erase previous info */
+ for (i = 0; i < sizeof(struct rte_mbuf); i++) {
+ if (shm->free_space[i])
+ shm->free_space[i] = 1;
+ }
+
+ for (off = 0; off < sizeof(struct rte_mbuf); off++) {
+ /* get the size of the free zone */
+ for (size = 0; shm->free_space[off + size]; size++)
+ ;
+ if (size == 0)
+ continue;
+
+ /* get the alignment of biggest object that can fit in
+ * the zone at this offset.
+ */
+ for (align = 1;
+ (off % (align << 1)) == 0 && (align << 1) <= size;
+ align <<= 1)
+ ;
+
+ /* save it in free_space[] */
+ for (i = off; i < off + size; i++)
+ shm->free_space[i] = RTE_MAX(align, shm->free_space[i]);
+ }
+}
+
+/* Mark the area occupied by a mbuf field as available in the shm. */
+#define mark_free(field) \
+ memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
+ 1, sizeof(((struct rte_mbuf *)0)->field))
+
+/* Allocate and initialize the shared memory. Assume tailq is locked */
+static int
+init_shared_mem(void)
+{
+ const struct rte_memzone *mz;
+ uint64_t mask;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
+ sizeof(struct mbuf_dyn_shm),
+ SOCKET_ID_ANY, 0,
+ RTE_CACHE_LINE_SIZE);
+ } else {
+ mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
+ }
+ if (mz == NULL)
+ return -1;
+
+ shm = mz->addr;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ /* init free_space, keep it sync'd with
+ * rte_mbuf_dynfield_copy().
+ */
+ memset(shm, 0, sizeof(*shm));
+ mark_free(dynfield1);
+
+ /* init free_flags */
+ for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
+ shm->free_flags |= mask;
+
+ process_score();
+ }
+
+ return 0;
+}
+
+/* check if this offset can be used */
+static int
+check_offset(size_t offset, size_t size, size_t align)
+{
+ size_t i;
+
+ if ((offset & (align - 1)) != 0)
+ return -1;
+ if (offset + size > sizeof(struct rte_mbuf))
+ return -1;
+
+ for (i = 0; i < size; i++) {
+ if (!shm->free_space[i + offset])
+ return -1;
+ }
+
+ return 0;
+}
+
+/* assume tailq is locked */
+static struct mbuf_dynfield_elt *
+__mbuf_dynfield_lookup(const char *name)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *mbuf_dynfield;
+ struct rte_tailq_entry *te;
+
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+
+ TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
+ mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
+ if (strcmp(name, mbuf_dynfield->params.name) == 0)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return mbuf_dynfield;
+}
+
+int
+rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
+{
+ struct mbuf_dynfield_elt *mbuf_dynfield;
+
+ if (shm == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ rte_mcfg_tailq_read_lock();
+ mbuf_dynfield = __mbuf_dynfield_lookup(name);
+ rte_mcfg_tailq_read_unlock();
+
+ if (mbuf_dynfield == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ if (params != NULL)
+ memcpy(params, &mbuf_dynfield->params, sizeof(*params));
+
+ return mbuf_dynfield->offset;
+}
+
+static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
+ const struct rte_mbuf_dynfield *params2)
+{
+ if (strcmp(params1->name, params2->name))
+ return -1;
+ if (params1->size != params2->size)
+ return -1;
+ if (params1->align != params2->align)
+ return -1;
+ if (params1->flags != params2->flags)
+ return -1;
+ return 0;
+}
+
+/* assume tailq is locked */
+static int
+__rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t req)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
+ struct rte_tailq_entry *te = NULL;
+ unsigned int best_zone = UINT_MAX;
+ size_t i, offset;
+ int ret;
+
+ if (shm == NULL && init_shared_mem() < 0)
+ return -1;
+
+ mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
+ if (mbuf_dynfield != NULL) {
+ if (req != SIZE_MAX && req != mbuf_dynfield->offset) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) < 0) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ return mbuf_dynfield->offset;
+ }
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ rte_errno = EPERM;
+ return -1;
+ }
+
+ if (req == SIZE_MAX) {
+ /* Find the best place to put this field: we search the
+ * lowest value of shm->free_space[offset]: the zones
+ * containing room for larger fields are kept for later.
+ */
+ for (offset = 0;
+ offset < sizeof(struct rte_mbuf);
+ offset++) {
+ if (check_offset(offset, params->size,
+ params->align) == 0 &&
+ shm->free_space[offset] < best_zone) {
+ best_zone = shm->free_space[offset];
+ req = offset;
+ }
+ }
+ if (req == SIZE_MAX) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+ } else {
+ if (check_offset(req, params->size, params->align) < 0) {
+ rte_errno = EBUSY;
+ return -1;
+ }
+ }
+
+ offset = req;
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+
+ te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL)
+ return -1;
+
+ mbuf_dynfield = rte_zmalloc("mbuf_dynfield", sizeof(*mbuf_dynfield), 0);
+ if (mbuf_dynfield == NULL) {
+ rte_free(te);
+ return -1;
+ }
+
+ ret = strlcpy(mbuf_dynfield->params.name, params->name,
+ sizeof(mbuf_dynfield->params.name));
+ if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
+ rte_errno = ENAMETOOLONG;
+ rte_free(mbuf_dynfield);
+ rte_free(te);
+ return -1;
+ }
+ memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
+ mbuf_dynfield->offset = offset;
+ te->data = mbuf_dynfield;
+
+ TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
+
+ for (i = offset; i < offset + params->size; i++)
+ shm->free_space[i] = 0;
+ process_score();
+
+ RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n",
+ params->name, params->size, params->align, params->flags,
+ offset);
+
+ return offset;
+}
+
+int
+rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t req)
+{
+ int ret;
+
+ if (params->size >= sizeof(struct rte_mbuf)) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+ if (!rte_is_power_of_2(params->align)) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+ if (params->flags != 0) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ rte_mcfg_tailq_write_lock();
+ ret = __rte_mbuf_dynfield_register_offset(params, req);
+ rte_mcfg_tailq_write_unlock();
+
+ return ret;
+}
+
+int
+rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
+{
+ return rte_mbuf_dynfield_register_offset(params, SIZE_MAX);
+}
+
+/* assume tailq is locked */
+static struct mbuf_dynflag_elt *
+__mbuf_dynflag_lookup(const char *name)
+{
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *mbuf_dynflag;
+ struct rte_tailq_entry *te;
+
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+
+ TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
+ mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
+ if (strncmp(name, mbuf_dynflag->params.name,
+ RTE_MBUF_DYN_NAMESIZE) == 0)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return mbuf_dynflag;
+}
+
+int
+rte_mbuf_dynflag_lookup(const char *name,
+ struct rte_mbuf_dynflag *params)
+{
+ struct mbuf_dynflag_elt *mbuf_dynflag;
+
+ if (shm == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ rte_mcfg_tailq_read_lock();
+ mbuf_dynflag = __mbuf_dynflag_lookup(name);
+ rte_mcfg_tailq_read_unlock();
+
+ if (mbuf_dynflag == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ if (params != NULL)
+ memcpy(params, &mbuf_dynflag->params, sizeof(*params));
+
+ return mbuf_dynflag->bitnum;
+}
+
+static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
+ const struct rte_mbuf_dynflag *params2)
+{
+ if (strcmp(params1->name, params2->name))
+ return -1;
+ if (params1->flags != params2->flags)
+ return -1;
+ return 0;
+}
+
+/* assume tailq is locked */
+static int
+__rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int req)
+{
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
+ struct rte_tailq_entry *te = NULL;
+ unsigned int bitnum;
+ int ret;
+
+ if (shm == NULL && init_shared_mem() < 0)
+ return -1;
+
+ mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
+ if (mbuf_dynflag != NULL) {
+ if (req != UINT_MAX && req != mbuf_dynflag->bitnum) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ return mbuf_dynflag->bitnum;
+ }
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ rte_errno = EPERM;
+ return -1;
+ }
+
+ if (req == UINT_MAX) {
+ if (shm->free_flags == 0) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+ bitnum = rte_bsf64(shm->free_flags);
+ } else {
+ if ((shm->free_flags & (1ULL << req)) == 0) {
+ rte_errno = EBUSY;
+ return -1;
+ }
+ bitnum = req;
+ }
+
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+
+ te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL)
+ return -1;
+
+ mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag), 0);
+ if (mbuf_dynflag == NULL) {
+ rte_free(te);
+ return -1;
+ }
+
+ ret = strlcpy(mbuf_dynflag->params.name, params->name,
+ sizeof(mbuf_dynflag->params.name));
+ if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
+ rte_free(mbuf_dynflag);
+ rte_free(te);
+ rte_errno = ENAMETOOLONG;
+ return -1;
+ }
+ mbuf_dynflag->bitnum = bitnum;
+ te->data = mbuf_dynflag;
+
+ TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
+
+ shm->free_flags &= ~(1ULL << bitnum);
+
+ RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
+ params->name, params->flags, bitnum);
+
+ return bitnum;
+}
+
+int
+rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int req)
+{
+ int ret;
+
+ if (req >= RTE_SIZEOF_FIELD(struct rte_mbuf, ol_flags) * CHAR_BIT &&
+ req != UINT_MAX) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ rte_mcfg_tailq_write_lock();
+ ret = __rte_mbuf_dynflag_register_bitnum(params, req);
+ rte_mcfg_tailq_write_unlock();
+
+ return ret;
+}
+
+int
+rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
+{
+ return rte_mbuf_dynflag_register_bitnum(params, UINT_MAX);
+}
+
+void rte_mbuf_dyn_dump(FILE *out)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *dynfield;
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *dynflag;
+ struct rte_tailq_entry *te;
+ size_t i;
+
+ rte_mcfg_tailq_write_lock();
+ init_shared_mem();
+ fprintf(out, "Reserved fields:\n");
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+ TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
+ dynfield = (struct mbuf_dynfield_elt *)te->data;
+ fprintf(out, " name=%s offset=%zd size=%zd align=%zd flags=%x\n",
+ dynfield->params.name, dynfield->offset,
+ dynfield->params.size, dynfield->params.align,
+ dynfield->params.flags);
+ }
+ fprintf(out, "Reserved flags:\n");
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+ TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
+ dynflag = (struct mbuf_dynflag_elt *)te->data;
+ fprintf(out, " name=%s bitnum=%u flags=%x\n",
+ dynflag->params.name, dynflag->bitnum,
+ dynflag->params.flags);
+ }
+ fprintf(out, "Free space in mbuf (0 = free, value = zone alignment):\n");
+ for (i = 0; i < sizeof(struct rte_mbuf); i++) {
+ if ((i % 8) == 0)
+ fprintf(out, " %4.4zx: ", i);
+ fprintf(out, "%2.2x%s", shm->free_space[i],
+ (i % 8 != 7) ? " " : "\n");
+ }
+ rte_mcfg_tailq_write_unlock();
+}
diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
new file mode 100644
index 000000000..2e9d418cf
--- /dev/null
+++ b/lib/librte_mbuf/rte_mbuf_dyn.h
@@ -0,0 +1,239 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019 6WIND S.A.
+ */
+
+#ifndef _RTE_MBUF_DYN_H_
+#define _RTE_MBUF_DYN_H_
+
+/**
+ * @file
+ * RTE Mbuf dynamic fields and flags
+ *
+ * Many DPDK features need to store data inside the mbuf. As the room
+ * in mbuf structure is limited, it is not possible to have a field for
+ * each feature. Also, changing fields in the mbuf structure can break
+ * the API or ABI.
+ *
+ * This module addresses this issue, by enabling the dynamic
+ * registration of fields or flags:
+ *
+ * - a dynamic field is a named area in the rte_mbuf structure, with a
+ * given size (>= 1 byte) and alignment constraint.
+ * - a dynamic flag is a named bit in the rte_mbuf structure, stored
+ * in mbuf->ol_flags.
+ *
+ * The placement of the field or flag can be automatic; in that case,
+ * the zones that have the smallest size and alignment constraint are
+ * selected first. Alternatively, a specific field offset or flag bit
+ * number can be requested through the API.
+ *
+ * The typical use case is when a specific offload feature needs to
+ * register a dedicated offload field in the mbuf structure, and adding
+ * a static field or flag is not justified.
+ *
+ * Example of use:
+ *
+ * - A rte_mbuf_dynfield structure is defined, containing the parameters
+ * of the dynamic field to be registered:
+ * const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
+ * - The application initializes the PMD, and asks for this feature
+ * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ * rxconf. This will make the PMD register the field by calling
+ * rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
+ * stores the returned offset.
+ * - The application that uses the offload feature also registers
+ * the field to retrieve the same offset.
+ * - When the PMD receives a packet, it can set the field:
+ * *RTE_MBUF_DYNFIELD(m, offset, <type *>) = value;
+ * - In the main loop, the application can retrieve the value with
+ * the same macro.
+ *
+ * To avoid wasting space, the dynamic fields or flags must only be
+ * reserved on demand, when an application asks for the related feature.
+ *
+ * The registration can be done at any moment, but it is not possible
+ * to unregister fields or flags for now.
+ *
+ * A dynamic field can also be reserved and used by an application
+ * alone, for instance to store a packet mark.
+ *
+ * To avoid namespace collisions, the dynamic mbuf field or flag names
+ * have to be chosen with care. It is advised to use the same
+ * conventions as function names in DPDK:
+ * - "rte_mbuf_dynfield_<name>" if defined in mbuf library
+ * - "rte_<libname>_dynfield_<name>" if defined in another library
+ * - "rte_net_<pmd>_dynfield_<name>" if defined in a PMD
+ * - any name that does not start with "rte_" in an application
+ */
+
+#include <sys/types.h>
+/**
+ * Maximum length of the dynamic field or flag string.
+ */
+#define RTE_MBUF_DYN_NAMESIZE 64
+
+/**
+ * Structure describing the parameters of a mbuf dynamic field.
+ */
+struct rte_mbuf_dynfield {
+ char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the field. */
+ size_t size; /**< The number of bytes to reserve. */
+ size_t align; /**< The alignment constraint (power of 2). */
+ unsigned int flags; /**< Reserved for future use, must be 0. */
+};
+
+/**
+ * Structure describing the parameters of a mbuf dynamic flag.
+ */
+struct rte_mbuf_dynflag {
+ char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the dynamic flag. */
+ unsigned int flags; /**< Reserved for future use, must be 0. */
+};
+
+/**
+ * Register space for a dynamic field in the mbuf structure.
+ *
+ * If the field is already registered (same name and parameters), its
+ * offset is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters (name, size,
+ * alignment constraint and flags).
+ * @return
+ * The offset in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, or flags).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: not enough room in mbuf.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params);
+
+/**
+ * Register space for a dynamic field in the mbuf structure at offset.
+ *
+ * If the field is already registered (same name, parameters and offset),
+ * the offset is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters (name, size,
+ * alignment constraint and flags).
+ * @param offset
+ * The requested offset. Ignored if SIZE_MAX is passed.
+ * @return
+ * The offset in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, flags, or offset).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EBUSY: the requested offset cannot be used.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: not enough room in mbuf.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t offset);
+
+/**
+ * Lookup for a registered dynamic mbuf field.
+ *
+ * @param name
+ * A string identifying the dynamic field.
+ * @param params
+ * If not NULL, and if the lookup is successful, the structure is
+ * filled with the parameters of the dynamic field.
+ * @return
+ * The offset of this field in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - ENOENT: no dynamic field matches this name.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_lookup(const char *name,
+ struct rte_mbuf_dynfield *params);
+
+/**
+ * Register a dynamic flag in the mbuf structure.
+ *
+ * If the flag is already registered (same name and parameters), its
+ * bitnum is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters of the dynamic
+ * flag (name and options).
+ * @return
+ * The number of the reserved bit, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (flags).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: no more flag available.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params);
+
+/**
+ * Register a dynamic flag in the mbuf structure specifying bitnum.
+ *
+ * If the flag is already registered (same name, parameters and bitnum),
+ * the bitnum is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters of the dynamic
+ * flag (name and options).
+ * @param bitnum
+ * The requested bitnum. Ignored if UINT_MAX is passed.
+ * @return
+ * The number of the reserved bit, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (flags or bitnum).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EBUSY: the requested bitnum cannot be used.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: no more flag available.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int bitnum);
+
+/**
+ * Lookup for a registered dynamic mbuf flag.
+ *
+ * @param name
+ * A string identifying the dynamic flag.
+ * @param params
+ * If not NULL, and if the lookup is successful, the structure is
+ * filled with the parameters of the dynamic flag.
+ * @return
+ * The bit number of this flag in mbuf->ol_flags, or -1 on error.
+ * Possible values for rte_errno:
+ * - ENOENT: no dynamic flag matches this name.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_lookup(const char *name,
+ struct rte_mbuf_dynflag *params);
+
+/**
+ * Helper macro to access to a dynamic field.
+ */
+#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
+
+/**
+ * Dump the status of dynamic fields and flags.
+ *
+ * @param out
+ * The stream where the status is displayed.
+ */
+__rte_experimental
+void rte_mbuf_dyn_dump(FILE *out);
+
+/* Placeholder for dynamic fields and flags declarations. */
+
+#endif
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index a4f41d7fd..263dc0a21 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -58,6 +58,13 @@ EXPERIMENTAL {
global:
rte_mbuf_check;
+ rte_mbuf_dynfield_lookup;
+ rte_mbuf_dynfield_register;
+ rte_mbuf_dynfield_register_offset;
+ rte_mbuf_dynflag_lookup;
+ rte_mbuf_dynflag_register;
+ rte_mbuf_dynflag_register_bitnum;
+ rte_mbuf_dyn_dump;
rte_pktmbuf_copy;
rte_pktmbuf_free_bulk;
--
2.20.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-24 7:38 0% ` Slava Ovsiienko
@ 2019-10-24 7:56 0% ` Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2019-10-24 7:56 UTC (permalink / raw)
To: Slava Ovsiienko
Cc: dev, Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
On Thu, Oct 24, 2019 at 07:38:15AM +0000, Slava Ovsiienko wrote:
> Hi,
>
> Doc building failed, it seems the rte_mbuf_dynfield_copy() description should be fixed:
>
> ./lib/librte_mbuf/rte_mbuf.h:1694: warning: argument 'm_dst' of command @param is not found in the argument list of rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> ./lib/librte_mbuf/rte_mbuf.h:1694: warning: argument 'm_src' of command @param is not found in the argument list of rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> ./lib/librte_mbuf/rte_mbuf.h:1694: warning: The following parameters of rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc) are not documented
Thanks for spotting this, I'm adding the fix to the v3.
>
> With best regards,
> Slava
>
> > -----Original Message-----
> > From: dev <dev-bounces@dpdk.org> On Behalf Of Olivier Matz
> > Sent: Thursday, October 17, 2019 17:42
> > To: dev@dpdk.org
> > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Bruce Richardson
> > <bruce.richardson@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> > Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > <keith.wiles@intel.com>; Ananyev, Konstantin
> > <konstantin.ananyev@intel.com>; Morten Brørup
> > <mb@smartsharesystems.com>; Stephen Hemminger
> > <stephen@networkplumber.org>; Thomas Monjalon
> > <thomas@monjalon.net>
> > Subject: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
> >
> > Many features require to store data inside the mbuf. As the room in mbuf
> > structure is limited, it is not possible to have a field for each feature. Also,
> > changing fields in the mbuf structure can break the API or ABI.
> >
> > This commit addresses these issues, by enabling the dynamic registration of
> > fields or flags:
> >
> > - a dynamic field is a named area in the rte_mbuf structure, with a
> > given size (>= 1 byte) and alignment constraint.
> > - a dynamic flag is a named bit in the rte_mbuf structure.
> >
> > The typical use case is a PMD that registers space for an offload feature,
> > when the application requests to enable this feature. As the space in mbuf is
> > limited, the space should only be reserved if it is going to be used (i.e when
> > the application explicitly asks for it).
> >
> > The registration can be done at any moment, but it is not possible to
> > unregister fields or flags for now.
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
> >
> > v2
> >
> > * Rebase on top of master: solve conflict with Stephen's patchset
> > (packet copy)
> > * Add new apis to register a dynamic field/flag at a specific place
> > * Add a dump function (sugg by David)
> > * Enhance field registration function to select the best offset, keeping
> > large aligned zones as much as possible (sugg by Konstantin)
> > * Use a size_t and unsigned int instead of int when relevant
> > (sugg by Konstantin)
> > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > (sugg by Konstantin)
> > * Remove unused argument in private function (sugg by Konstantin)
> > * Fix and simplify locking (sugg by Konstantin)
> > * Fix minor typo
> >
> > rfc -> v1
> >
> > * Rebase on top of master
> > * Change registration API to use a structure instead of
> > variables, getting rid of #defines (Stephen's comment)
> > * Update flag registration to use a similar API as fields.
> > * Change max name length from 32 to 64 (sugg. by Thomas)
> > * Enhance API documentation (Haiyue's and Andrew's comments)
> > * Add a debug log at registration
> > * Add some words in release note
> > * Did some performance tests (sugg. by Andrew):
> > On my platform, reading a dynamic field takes ~3 cycles more
> > than a static field, and ~2 cycles more for writing.
> >
> > app/test/test_mbuf.c | 145 ++++++-
> > doc/guides/rel_notes/release_19_11.rst | 7 +
> > lib/librte_mbuf/Makefile | 2 +
> > lib/librte_mbuf/meson.build | 6 +-
> > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > 8 files changed, 959 insertions(+), 5 deletions(-) create mode 100644
> > lib/librte_mbuf/rte_mbuf_dyn.c create mode 100644
> > lib/librte_mbuf/rte_mbuf_dyn.h
> >
> > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c index
> > b9c2b2500..01cafad59 100644
> > --- a/app/test/test_mbuf.c
> > +++ b/app/test/test_mbuf.c
> > @@ -28,6 +28,7 @@
> > #include <rte_random.h>
> > #include <rte_cycles.h>
> > #include <rte_malloc.h>
> > +#include <rte_mbuf_dyn.h>
> >
> > #include "test.h"
> >
> > @@ -657,7 +658,6 @@ test_attach_from_different_pool(struct
> > rte_mempool *pktmbuf_pool,
> > rte_pktmbuf_free(clone2);
> > return -1;
> > }
> > -#undef GOTO_FAIL
> >
> > /*
> > * test allocation and free of mbufs
> > @@ -1276,6 +1276,143 @@ test_tx_offload(void)
> > return (v1 == v2) ? 0 : -EINVAL;
> > }
> >
> > +static int
> > +test_mbuf_dyn(struct rte_mempool *pktmbuf_pool) {
> > + const struct rte_mbuf_dynfield dynfield = {
> > + .name = "test-dynfield",
> > + .size = sizeof(uint8_t),
> > + .align = __alignof__(uint8_t),
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynfield dynfield2 = {
> > + .name = "test-dynfield2",
> > + .size = sizeof(uint16_t),
> > + .align = __alignof__(uint16_t),
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynfield dynfield3 = {
> > + .name = "test-dynfield3",
> > + .size = sizeof(uint8_t),
> > + .align = __alignof__(uint8_t),
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynfield dynfield_fail_big = {
> > + .name = "test-dynfield-fail-big",
> > + .size = 256,
> > + .align = 1,
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynfield dynfield_fail_align = {
> > + .name = "test-dynfield-fail-align",
> > + .size = 1,
> > + .align = 3,
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynflag dynflag = {
> > + .name = "test-dynflag",
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynflag dynflag2 = {
> > + .name = "test-dynflag2",
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynflag dynflag3 = {
> > + .name = "test-dynflag3",
> > + .flags = 0,
> > + };
> > + struct rte_mbuf *m = NULL;
> > + int offset, offset2, offset3;
> > + int flag, flag2, flag3;
> > + int ret;
> > +
> > + printf("Test mbuf dynamic fields and flags\n");
> > + rte_mbuf_dyn_dump(stdout);
> > +
> > + offset = rte_mbuf_dynfield_register(&dynfield);
> > + if (offset == -1)
> > + GOTO_FAIL("failed to register dynamic field, offset=%d: %s",
> > + offset, strerror(errno));
> > +
> > + ret = rte_mbuf_dynfield_register(&dynfield);
> > + if (ret != offset)
> > + GOTO_FAIL("failed to lookup dynamic field, ret=%d: %s",
> > + ret, strerror(errno));
> > +
> > + offset2 = rte_mbuf_dynfield_register(&dynfield2);
> > + if (offset2 == -1 || offset2 == offset || (offset2 & 1))
> > + GOTO_FAIL("failed to register dynamic field 2, offset2=%d: %s",
> > + offset2, strerror(errno));
> > +
> > + offset3 = rte_mbuf_dynfield_register_offset(&dynfield3,
> > + offsetof(struct rte_mbuf, dynfield1[1]));
> > + if (offset3 != offsetof(struct rte_mbuf, dynfield1[1]))
> > + GOTO_FAIL("failed to register dynamic field 3, offset=%d: %s",
> > + offset3, strerror(errno));
> > +
> > + printf("dynfield: offset=%d, offset2=%d, offset3=%d\n",
> > + offset, offset2, offset3);
> > +
> > + ret = rte_mbuf_dynfield_register(&dynfield_fail_big);
> > + if (ret != -1)
> > + GOTO_FAIL("dynamic field creation should fail (too big)");
> > +
> > + ret = rte_mbuf_dynfield_register(&dynfield_fail_align);
> > + if (ret != -1)
> > + GOTO_FAIL("dynamic field creation should fail (bad alignment)");
> > +
> > + ret = rte_mbuf_dynfield_register_offset(&dynfield_fail_align,
> > + offsetof(struct rte_mbuf, ol_flags));
> > + if (ret != -1)
> > + GOTO_FAIL("dynamic field creation should fail (not avail)");
> > +
> > + flag = rte_mbuf_dynflag_register(&dynflag);
> > + if (flag == -1)
> > + GOTO_FAIL("failed to register dynamic flag, flag=%d: %s",
> > + flag, strerror(errno));
> > +
> > + ret = rte_mbuf_dynflag_register(&dynflag);
> > + if (ret != flag)
> > + GOTO_FAIL("failed to lookup dynamic flag, ret=%d: %s",
> > + ret, strerror(errno));
> > +
> > + flag2 = rte_mbuf_dynflag_register(&dynflag2);
> > + if (flag2 == -1 || flag2 == flag)
> > + GOTO_FAIL("failed to register dynamic flag 2, flag2=%d: %s",
> > + flag2, strerror(errno));
> > +
> > + flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
> > + rte_bsf64(PKT_LAST_FREE));
> > + if (flag3 != rte_bsf64(PKT_LAST_FREE))
> > + GOTO_FAIL("failed to register dynamic flag 3, flag3=%d: %s",
> > + flag3, strerror(errno));
> > +
> > + printf("dynflag: flag=%d, flag2=%d, flag3=%d\n", flag, flag2, flag3);
> > +
> > + /* set, get dynamic field */
> > + m = rte_pktmbuf_alloc(pktmbuf_pool);
> > + if (m == NULL)
> > + GOTO_FAIL("Cannot allocate mbuf");
> > +
> > + *RTE_MBUF_DYNFIELD(m, offset, uint8_t *) = 1;
> > + if (*RTE_MBUF_DYNFIELD(m, offset, uint8_t *) != 1)
> > + GOTO_FAIL("failed to read dynamic field");
> > + *RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) = 1000;
> > + if (*RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) != 1000)
> > + GOTO_FAIL("failed to read dynamic field");
> > +
> > + /* set a dynamic flag */
> > + m->ol_flags |= (1ULL << flag);
> > +
> > + rte_mbuf_dyn_dump(stdout);
> > + rte_pktmbuf_free(m);
> > + return 0;
> > +fail:
> > + rte_pktmbuf_free(m);
> > + return -1;
> > +}
> > +#undef GOTO_FAIL
> > +
> > static int
> > test_mbuf(void)
> > {
> > @@ -1295,6 +1432,12 @@ test_mbuf(void)
> > goto err;
> > }
> >
> > + /* test registration of dynamic fields and flags */
> > + if (test_mbuf_dyn(pktmbuf_pool) < 0) {
> > + printf("mbuf dynflag test failed\n");
> > + goto err;
> > + }
> > +
> > /* create a specific pktmbuf pool with a priv_size != 0 and no data
> > * room size */
> > pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
> > diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
> > index 85953b962..9e9c94554 100644
> > --- a/doc/guides/rel_notes/release_19_11.rst
> > +++ b/doc/guides/rel_notes/release_19_11.rst
> > @@ -21,6 +21,13 @@ DPDK Release 19.11
> >
> > xdg-open build/doc/html/guides/rel_notes/release_19_11.html
> >
> > +* **Add support of dynamic fields and flags in mbuf.**
> > +
> > + This new feature adds the ability to dynamically register some room
> > + for a field or a flag in the mbuf structure. This is typically used
> > + for specific offload features, where adding a static field or flag in
> > + the mbuf is not justified.
> > +
> >
> > New Features
> > ------------
> > diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
> > index c8f6d2689..5a9bcee73 100644
> > --- a/lib/librte_mbuf/Makefile
> > +++ b/lib/librte_mbuf/Makefile
> > @@ -17,8 +17,10 @@ LIBABIVER := 5
> >
> > # all source are stored in SRCS-y
> > SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c rte_mbuf_pool_ops.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_MBUF) += rte_mbuf_dyn.c
> >
> > # install includes
> > SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h rte_mbuf_ptype.h rte_mbuf_pool_ops.h
> > +SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_dyn.h
> >
> > include $(RTE_SDK)/mk/rte.lib.mk
> > diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build
> > index 6cc11ebb4..9137e8f26 100644
> > --- a/lib/librte_mbuf/meson.build
> > +++ b/lib/librte_mbuf/meson.build
> > @@ -2,8 +2,10 @@
> > # Copyright(c) 2017 Intel Corporation
> >
> > version = 5
> > -sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c')
> > -headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
> > +sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c',
> > + 'rte_mbuf_dyn.c')
> > +headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h',
> > + 'rte_mbuf_dyn.h')
> > deps += ['mempool']
> >
> > allow_experimental_apis = true
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index fb0849ac1..5740b1e93 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -198,9 +198,12 @@ extern "C" {
> > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> >
> > -/* add new RX flags here */
> > +/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> >
> > -/* add new TX flags here */
> > +#define PKT_FIRST_FREE (1ULL << 23)
> > +#define PKT_LAST_FREE (1ULL << 39)
> > +
> > +/* add new TX flags here, don't forget to update PKT_LAST_FREE */
> >
> > /**
> > * Indicate that the metadata field in the mbuf is in use.
> > @@ -738,6 +741,7 @@ struct rte_mbuf {
> > */
> > struct rte_mbuf_ext_shared_info *shinfo;
> >
> > + uint64_t dynfield1[2]; /**< Reserved for dynamic fields. */
> > } __rte_cache_aligned;
> >
> > /**
> > @@ -1684,6 +1688,20 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
> > */
> > #define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
> >
> > +/**
> > + * Copy dynamic fields from m_src to m_dst.
> > + *
> > + * @param m_dst
> > + * The destination mbuf.
> > + * @param m_src
> > + * The source mbuf.
> > + */
> > +static inline void
> > +rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> > +{
> > + memcpy(&mdst->dynfield1, msrc->dynfield1, sizeof(mdst->dynfield1));
> > +}
> > +
> > /* internal */
> > static inline void
> > __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> > @@ -1695,6 +1713,7 @@ __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> > mdst->hash = msrc->hash;
> > mdst->packet_type = msrc->packet_type;
> > mdst->timestamp = msrc->timestamp;
> > + rte_mbuf_dynfield_copy(mdst, msrc);
> > }
> >
> > /**
> > diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
> > new file mode 100644
> > index 000000000..9ef235483
> > --- /dev/null
> > +++ b/lib/librte_mbuf/rte_mbuf_dyn.c
> > @@ -0,0 +1,548 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2019 6WIND S.A.
> > + */
> > +
> > +#include <sys/queue.h>
> > +#include <stdint.h>
> > +#include <limits.h>
> > +
> > +#include <rte_common.h>
> > +#include <rte_eal.h>
> > +#include <rte_eal_memconfig.h>
> > +#include <rte_tailq.h>
> > +#include <rte_errno.h>
> > +#include <rte_malloc.h>
> > +#include <rte_string_fns.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_mbuf_dyn.h>
> > +
> > +#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
> > +
> > +struct mbuf_dynfield_elt {
> > + TAILQ_ENTRY(mbuf_dynfield_elt) next;
> > + struct rte_mbuf_dynfield params;
> > + size_t offset;
> > +};
> > +TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
> > +
> > +static struct rte_tailq_elem mbuf_dynfield_tailq = {
> > + .name = "RTE_MBUF_DYNFIELD",
> > +};
> > +EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
> > +
> > +struct mbuf_dynflag_elt {
> > + TAILQ_ENTRY(mbuf_dynflag_elt) next;
> > + struct rte_mbuf_dynflag params;
> > + unsigned int bitnum;
> > +};
> > +TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
> > +
> > +static struct rte_tailq_elem mbuf_dynflag_tailq = {
> > + .name = "RTE_MBUF_DYNFLAG",
> > +};
> > +EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
> > +
> > +struct mbuf_dyn_shm {
> > + /**
> > + * For each mbuf byte, free_space[i] != 0 if space is free.
> > + * The value is the size of the biggest aligned element that
> > + * can fit in the zone.
> > + */
> > + uint8_t free_space[sizeof(struct rte_mbuf)];
> > + /** Bitfield of available flags. */
> > + uint64_t free_flags;
> > +};
> > +static struct mbuf_dyn_shm *shm;
> > +
> > +/* Set the value of free_space[] according to the size and alignment of
> > + * the free areas. This helps to select the best place when reserving a
> > + * dynamic field. Assume tailq is locked.
> > + */
> > +static void
> > +process_score(void)
> > +{
> > + size_t off, align, size, i;
> > +
> > + /* first, erase previous info */
> > + for (i = 0; i < sizeof(struct rte_mbuf); i++) {
> > + if (shm->free_space[i])
> > + shm->free_space[i] = 1;
> > + }
> > +
> > + for (off = 0; off < sizeof(struct rte_mbuf); off++) {
> > + /* get the size of the free zone */
> > + for (size = 0; shm->free_space[off + size]; size++)
> > + ;
> > + if (size == 0)
> > + continue;
> > +
> > + /* get the alignment of biggest object that can fit in
> > + * the zone at this offset.
> > + */
> > + for (align = 1;
> > + (off % (align << 1)) == 0 && (align << 1) <= size;
> > + align <<= 1)
> > + ;
> > +
> > + /* save it in free_space[] */
> > + for (i = off; i < off + size; i++)
> > + shm->free_space[i] = RTE_MAX(align, shm->free_space[i]);
> > + }
> > +}
> > +
> > +/* Allocate and initialize the shared memory. Assume tailq is locked */
> > +static int
> > +init_shared_mem(void)
> > +{
> > + const struct rte_memzone *mz;
> > + uint64_t mask;
> > +
> > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > + mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
> > + sizeof(struct mbuf_dyn_shm),
> > + SOCKET_ID_ANY, 0,
> > + RTE_CACHE_LINE_SIZE);
> > + } else {
> > + mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
> > + }
> > + if (mz == NULL)
> > + return -1;
> > +
> > + shm = mz->addr;
> > +
> > +#define mark_free(field) \
> > + memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
> > + 1, sizeof(((struct rte_mbuf *)0)->field))
> > +
> > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > + /* init free_space, keep it sync'd with
> > + * rte_mbuf_dynfield_copy().
> > + */
> > + memset(shm, 0, sizeof(*shm));
> > + mark_free(dynfield1);
> > +
> > + /* init free_flags */
> > + for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
> > + shm->free_flags |= mask;
> > +
> > + process_score();
> > + }
> > +#undef mark_free
> > +
> > + return 0;
> > +}
> > +
> > +/* check if this offset can be used */
> > +static int
> > +check_offset(size_t offset, size_t size, size_t align)
> > +{
> > + size_t i;
> > +
> > + if ((offset & (align - 1)) != 0)
> > + return -1;
> > + if (offset + size > sizeof(struct rte_mbuf))
> > + return -1;
> > +
> > + for (i = 0; i < size; i++) {
> > + if (!shm->free_space[i + offset])
> > + return -1;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +/* assume tailq is locked */
> > +static struct mbuf_dynfield_elt *
> > +__mbuf_dynfield_lookup(const char *name)
> > +{
> > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > + struct rte_tailq_entry *te;
> > +
> > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > +
> > + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> > + mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
> > + if (strcmp(name, mbuf_dynfield->params.name) == 0)
> > + break;
> > + }
> > +
> > + if (te == NULL) {
> > + rte_errno = ENOENT;
> > + return NULL;
> > + }
> > +
> > + return mbuf_dynfield;
> > +}
> > +
> > +int
> > +rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
> > +{
> > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > +
> > + if (shm == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_read_lock();
> > + mbuf_dynfield = __mbuf_dynfield_lookup(name);
> > + rte_mcfg_tailq_read_unlock();
> > +
> > + if (mbuf_dynfield == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + if (params != NULL)
> > + memcpy(params, &mbuf_dynfield->params, sizeof(*params));
> > +
> > + return mbuf_dynfield->offset;
> > +}
> > +
> > +static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
> > + const struct rte_mbuf_dynfield *params2)
> > +{
> > + if (strcmp(params1->name, params2->name))
> > + return -1;
> > + if (params1->size != params2->size)
> > + return -1;
> > + if (params1->align != params2->align)
> > + return -1;
> > + if (params1->flags != params2->flags)
> > + return -1;
> > + return 0;
> > +}
> > +
> > +/* assume tailq is locked */
> > +static int
> > +__rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> > + size_t req)
> > +{
> > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > + struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
> > + struct rte_tailq_entry *te = NULL;
> > + unsigned int best_zone = UINT_MAX;
> > + size_t i, offset;
> > + int ret;
> > +
> > + if (shm == NULL && init_shared_mem() < 0)
> > + return -1;
> > +
> > + mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
> > + if (mbuf_dynfield != NULL) {
> > + if (req != SIZE_MAX && req != mbuf_dynfield->offset) {
> > + rte_errno = EEXIST;
> > + return -1;
> > + }
> > + if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) < 0) {
> > + rte_errno = EEXIST;
> > + return -1;
> > + }
> > + return mbuf_dynfield->offset;
> > + }
> > +
> > + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> > + rte_errno = EPERM;
> > + return -1;
> > + }
> > +
> > + if (req == SIZE_MAX) {
> > + for (offset = 0;
> > + offset < sizeof(struct rte_mbuf);
> > + offset++) {
> > + if (check_offset(offset, params->size,
> > + params->align) == 0 &&
> > + shm->free_space[offset] < best_zone) {
> > + best_zone = shm->free_space[offset];
> > + req = offset;
> > + }
> > + }
> > + if (req == SIZE_MAX) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > + } else {
> > + if (check_offset(req, params->size, params->align) < 0) {
> > + rte_errno = EBUSY;
> > + return -1;
> > + }
> > + }
> > +
> > + offset = req;
> > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > +
> > + te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
> > + if (te == NULL)
> > + return -1;
> > +
> > + mbuf_dynfield = rte_zmalloc("mbuf_dynfield", sizeof(*mbuf_dynfield), 0);
> > + if (mbuf_dynfield == NULL) {
> > + rte_free(te);
> > + return -1;
> > + }
> > +
> > + ret = strlcpy(mbuf_dynfield->params.name, params->name,
> > + sizeof(mbuf_dynfield->params.name));
> > + if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
> > + rte_errno = ENAMETOOLONG;
> > + rte_free(mbuf_dynfield);
> > + rte_free(te);
> > + return -1;
> > + }
> > + memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
> > + mbuf_dynfield->offset = offset;
> > + te->data = mbuf_dynfield;
> > +
> > + TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
> > +
> > + for (i = offset; i < offset + params->size; i++)
> > + shm->free_space[i] = 0;
> > + process_score();
> > +
> > + RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n",
> > + params->name, params->size, params->align, params->flags,
> > + offset);
> > +
> > + return offset;
> > +}
> > +
> > +int
> > +rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> > + size_t req)
> > +{
> > + int ret;
> > +
> > + if (params->size >= sizeof(struct rte_mbuf)) {
> > + rte_errno = EINVAL;
> > + return -1;
> > + }
> > + if (!rte_is_power_of_2(params->align)) {
> > + rte_errno = EINVAL;
> > + return -1;
> > + }
> > + if (params->flags != 0) {
> > + rte_errno = EINVAL;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_write_lock();
> > + ret = __rte_mbuf_dynfield_register_offset(params, req);
> > + rte_mcfg_tailq_write_unlock();
> > +
> > + return ret;
> > +}
> > +
> > +int
> > +rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
> > +{
> > + return rte_mbuf_dynfield_register_offset(params, SIZE_MAX);
> > +}
> > +
> > +/* assume tailq is locked */
> > +static struct mbuf_dynflag_elt *
> > +__mbuf_dynflag_lookup(const char *name)
> > +{
> > + struct mbuf_dynflag_list *mbuf_dynflag_list;
> > + struct mbuf_dynflag_elt *mbuf_dynflag;
> > + struct rte_tailq_entry *te;
> > +
> > + mbuf_dynflag_list = RTE_TAILQ_CAST(
> > + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> > +
> > + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> > + mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
> > + if (strncmp(name, mbuf_dynflag->params.name,
> > + RTE_MBUF_DYN_NAMESIZE) == 0)
> > + break;
> > + }
> > +
> > + if (te == NULL) {
> > + rte_errno = ENOENT;
> > + return NULL;
> > + }
> > +
> > + return mbuf_dynflag;
> > +}
> > +
> > +int
> > +rte_mbuf_dynflag_lookup(const char *name,
> > + struct rte_mbuf_dynflag *params)
> > +{
> > + struct mbuf_dynflag_elt *mbuf_dynflag;
> > +
> > + if (shm == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_read_lock();
> > + mbuf_dynflag = __mbuf_dynflag_lookup(name);
> > + rte_mcfg_tailq_read_unlock();
> > +
> > + if (mbuf_dynflag == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + if (params != NULL)
> > + memcpy(params, &mbuf_dynflag->params, sizeof(*params));
> > +
> > + return mbuf_dynflag->bitnum;
> > +}
> > +
> > +static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
> > + const struct rte_mbuf_dynflag *params2)
> > +{
> > + if (strcmp(params1->name, params2->name))
> > + return -1;
> > + if (params1->flags != params2->flags)
> > + return -1;
> > + return 0;
> > +}
> > +
> > +/* assume tailq is locked */
> > +static int
> > +__rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > + unsigned int req)
> > +{
> > + struct mbuf_dynflag_list *mbuf_dynflag_list;
> > + struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
> > + struct rte_tailq_entry *te = NULL;
> > + unsigned int bitnum;
> > + int ret;
> > +
> > + if (shm == NULL && init_shared_mem() < 0)
> > + return -1;
> > +
> > + mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
> > + if (mbuf_dynflag != NULL) {
> > + if (req != UINT_MAX && req != mbuf_dynflag->bitnum) {
> > + rte_errno = EEXIST;
> > + return -1;
> > + }
> > + if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0) {
> > + rte_errno = EEXIST;
> > + return -1;
> > + }
> > + return mbuf_dynflag->bitnum;
> > + }
> > +
> > + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> > + rte_errno = EPERM;
> > + return -1;
> > + }
> > +
> > + if (req == UINT_MAX) {
> > + if (shm->free_flags == 0) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > + bitnum = rte_bsf64(shm->free_flags);
> > + } else {
> > + if ((shm->free_flags & (1ULL << req)) == 0) {
> > + rte_errno = EBUSY;
> > + return -1;
> > + }
> > + bitnum = req;
> > + }
> > +
> > + mbuf_dynflag_list = RTE_TAILQ_CAST(
> > + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> > +
> > + te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
> > + if (te == NULL)
> > + return -1;
> > +
> > + mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag), 0);
> > + if (mbuf_dynflag == NULL) {
> > + rte_free(te);
> > + return -1;
> > + }
> > +
> > + ret = strlcpy(mbuf_dynflag->params.name, params->name,
> > + sizeof(mbuf_dynflag->params.name));
> > + if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
> > + rte_free(mbuf_dynflag);
> > + rte_free(te);
> > + rte_errno = ENAMETOOLONG;
> > + return -1;
> > + }
> > + mbuf_dynflag->bitnum = bitnum;
> > + te->data = mbuf_dynflag;
> > +
> > + TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
> > +
> > + shm->free_flags &= ~(1ULL << bitnum);
> > +
> > + RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
> > + params->name, params->flags, bitnum);
> > +
> > + return bitnum;
> > +}
> > +
> > +int
> > +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > + unsigned int req)
> > +{
> > + int ret;
> > +
> > + if (req != UINT_MAX && req >= 64) {
> > + rte_errno = EINVAL;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_write_lock();
> > + ret = __rte_mbuf_dynflag_register_bitnum(params, req);
> > + rte_mcfg_tailq_write_unlock();
> > +
> > + return ret;
> > +}
> > +
> > +int
> > +rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
> > +{
> > + return rte_mbuf_dynflag_register_bitnum(params, UINT_MAX);
> > +}
> > +
> > +void rte_mbuf_dyn_dump(FILE *out)
> > +{
> > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > + struct mbuf_dynfield_elt *dynfield;
> > + struct mbuf_dynflag_list *mbuf_dynflag_list;
> > + struct mbuf_dynflag_elt *dynflag;
> > + struct rte_tailq_entry *te;
> > + size_t i;
> > +
> > + rte_mcfg_tailq_write_lock();
> > + init_shared_mem();
> > + fprintf(out, "Reserved fields:\n");
> > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> > + dynfield = (struct mbuf_dynfield_elt *)te->data;
> > + fprintf(out, " name=%s offset=%zd size=%zd align=%zd flags=%x\n",
> > + dynfield->params.name, dynfield->offset,
> > + dynfield->params.size, dynfield->params.align,
> > + dynfield->params.flags);
> > + }
> > + fprintf(out, "Reserved flags:\n");
> > + mbuf_dynflag_list = RTE_TAILQ_CAST(
> > + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> > + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> > + dynflag = (struct mbuf_dynflag_elt *)te->data;
> > + fprintf(out, " name=%s bitnum=%u flags=%x\n",
> > + dynflag->params.name, dynflag->bitnum,
> > + dynflag->params.flags);
> > + }
> > + fprintf(out, "Free space in mbuf (0 = free, value = zone alignment):\n");
> > + for (i = 0; i < sizeof(struct rte_mbuf); i++) {
> > + if ((i % 8) == 0)
> > + fprintf(out, " %4.4zx: ", i);
> > + fprintf(out, "%2.2x%s", shm->free_space[i],
> > + (i % 8 != 7) ? " " : "\n");
> > + }
> > + rte_mcfg_tailq_write_unlock();
> > +}
> > diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
> > new file mode 100644
> > index 000000000..307613c96
> > --- /dev/null
> > +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> > @@ -0,0 +1,226 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2019 6WIND S.A.
> > + */
> > +
> > +#ifndef _RTE_MBUF_DYN_H_
> > +#define _RTE_MBUF_DYN_H_
> > +
> > +/**
> > + * @file
> > + * RTE Mbuf dynamic fields and flags
> > + *
> > + * Many features require to store data inside the mbuf. As the room in
> > + * mbuf structure is limited, it is not possible to have a field for
> > + * each feature. Also, changing fields in the mbuf structure can break
> > + * the API or ABI.
> > + *
> > + * This module addresses this issue, by enabling the dynamic
> > + * registration of fields or flags:
> > + *
> > + * - a dynamic field is a named area in the rte_mbuf structure, with a
> > + * given size (>= 1 byte) and alignment constraint.
> > + * - a dynamic flag is a named bit in the rte_mbuf structure, stored
> > + * in mbuf->ol_flags.
> > + *
> > + * The typical use case is when a specific offload feature requires to
> > + * register a dedicated offload field in the mbuf structure, and adding
> > + * a static field or flag is not justified.
> > + *
> > + * Example of use:
> > + *
> > + * - A rte_mbuf_dynfield structure is defined, containing the parameters
> > + * of the dynamic field to be registered:
> > + * const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
> > + * - The application initializes the PMD, and asks for this feature
> > + * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
> > + * rxconf. This will make the PMD register the field by calling
> > + * rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
> > + * stores the returned offset.
> > + * - The application that uses the offload feature also registers
> > + * the field to retrieve the same offset.
> > + * - When the PMD receives a packet, it can set the field:
> > + * *RTE_MBUF_DYNFIELD(m, offset, <type *>) = value;
> > + * - In the main loop, the application can retrieve the value with
> > + * the same macro.
> > + *
> > + * To avoid wasting space, the dynamic fields or flags must only be
> > + * reserved on demand, when an application asks for the related feature.
> > + *
> > + * The registration can be done at any moment, but it is not possible
> > + * to unregister fields or flags for now.
> > + *
> > + * A dynamic field can be reserved and used by an application only.
> > + * It can for instance be a packet mark.
> > + */
> > +
> > +#include <sys/types.h>
> > +/**
> > + * Maximum length of the dynamic field or flag string.
> > + */
> > +#define RTE_MBUF_DYN_NAMESIZE 64
> > +
> > +/**
> > + * Structure describing the parameters of a mbuf dynamic field.
> > + */
> > +struct rte_mbuf_dynfield {
> > + char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the field. */
> > + size_t size; /**< The number of bytes to reserve. */
> > + size_t align; /**< The alignment constraint (power of 2). */
> > + unsigned int flags; /**< Reserved for future use, must be 0. */
> > +};
> > +
> > +/**
> > + * Structure describing the parameters of a mbuf dynamic flag.
> > + */
> > +struct rte_mbuf_dynflag {
> > + char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the dynamic flag. */
> > + unsigned int flags; /**< Reserved for future use, must be 0. */
> > +};
> > +
> > +/**
> > + * Register space for a dynamic field in the mbuf structure.
> > + *
> > + * If the field is already registered (same name and parameters), its
> > + * offset is returned.
> > + *
> > + * @param params
> > + * A structure containing the requested parameters (name, size,
> > + * alignment constraint and flags).
> > + * @return
> > + * The offset in the mbuf structure, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - EINVAL: invalid parameters (size, align, or flags).
> > + * - EEXIST: this name is already registered with different parameters.
> > + * - EPERM: called from a secondary process.
> > + * - ENOENT: not enough room in mbuf.
> > + * - ENOMEM: allocation failure.
> > + * - ENAMETOOLONG: name does not end with \0.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params);
> > +
> > +/**
> > + * Register space for a dynamic field in the mbuf structure at offset.
> > + *
> > + * If the field is already registered (same name, parameters and offset),
> > + * the offset is returned.
> > + *
> > + * @param params
> > + * A structure containing the requested parameters (name, size,
> > + * alignment constraint and flags).
> > + * @param offset
> > + * The requested offset. Ignored if SIZE_MAX is passed.
> > + * @return
> > + * The offset in the mbuf structure, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - EINVAL: invalid parameters (size, align, flags, or offset).
> > + * - EEXIST: this name is already registered with different parameters.
> > + * - EBUSY: the requested offset cannot be used.
> > + * - EPERM: called from a secondary process.
> > + * - ENOENT: not enough room in mbuf.
> > + * - ENOMEM: allocation failure.
> > + * - ENAMETOOLONG: name does not end with \0.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> > + size_t offset);
> > +
> > +/**
> > + * Lookup for a registered dynamic mbuf field.
> > + *
> > + * @param name
> > + * A string identifying the dynamic field.
> > + * @param params
> > + * If not NULL, and if the lookup is successful, the structure is
> > + * filled with the parameters of the dynamic field.
> > + * @return
> > + * The offset of this field in the mbuf structure, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - ENOENT: no dynamic field matches this name.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynfield_lookup(const char *name,
> > + struct rte_mbuf_dynfield *params);
> > +
> > +/**
> > + * Register a dynamic flag in the mbuf structure.
> > + *
> > + * If the flag is already registered (same name and parameters), its
> > + * bitnum is returned.
> > + *
> > + * @param params
> > + * A structure containing the requested parameters of the dynamic
> > + * flag (name and options).
> > + * @return
> > + * The number of the reserved bit, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - EINVAL: invalid parameters (size, align, or flags).
> > + * - EEXIST: this name is already registered with different parameters.
> > + * - EPERM: called from a secondary process.
> > + * - ENOENT: no more flag available.
> > + * - ENOMEM: allocation failure.
> > + * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params);
> > +
> > +/**
> > + * Register a dynamic flag in the mbuf structure specifying bitnum.
> > + *
> > + * If the flag is already registered (same name, parameters and bitnum),
> > + * the bitnum is returned.
> > + *
> > + * @param params
> > + * A structure containing the requested parameters of the dynamic
> > + * flag (name and options).
> > + * @param bitnum
> > + * The requested bitnum. Ignored if UINT_MAX is passed.
> > + * @return
> > + * The number of the reserved bit, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - EINVAL: invalid parameters (size, align, or flags).
> > + * - EEXIST: this name is already registered with different parameters.
> > + * - EBUSY: the requested bitnum cannot be used.
> > + * - EPERM: called from a secondary process.
> > + * - ENOENT: no more flag available.
> > + * - ENOMEM: allocation failure.
> > + * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > + unsigned int bitnum);
> > +
> > +/**
> > + * Lookup for a registered dynamic mbuf flag.
> > + *
> > + * @param name
> > + * A string identifying the dynamic flag.
> > + * @param params
> > + * If not NULL, and if the lookup is successful, the structure is
> > + * filled with the parameters of the dynamic flag.
> > + * @return
> > + * The offset of this flag in the mbuf structure, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - ENOENT: no dynamic flag matches this name.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynflag_lookup(const char *name,
> > + struct rte_mbuf_dynflag *params);
> > +
> > +/**
> > + * Helper macro to access to a dynamic field.
> > + */
> > +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> > +
> > +/**
> > + * Dump the status of dynamic fields and flags.
> > + *
> > + * @param out
> > + * The stream where the status is displayed.
> > + */
> > +__rte_experimental
> > +void rte_mbuf_dyn_dump(FILE *out);
> > +
> > +/* Placeholder for dynamic fields and flags declarations. */
> > +
> > +#endif
> > diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
> > index 519fead35..9bf5ca37a 100644
> > --- a/lib/librte_mbuf/rte_mbuf_version.map
> > +++ b/lib/librte_mbuf/rte_mbuf_version.map
> > @@ -58,6 +58,13 @@ EXPERIMENTAL {
> > global:
> >
> > rte_mbuf_check;
> > + rte_mbuf_dynfield_lookup;
> > + rte_mbuf_dynfield_register;
> > + rte_mbuf_dynfield_register_offset;
> > + rte_mbuf_dynflag_lookup;
> > + rte_mbuf_dynflag_register;
> > + rte_mbuf_dynflag_register_bitnum;
> > + rte_mbuf_dyn_dump;
> > rte_pktmbuf_copy;
> >
> > } DPDK_18.08;
> > --
> > 2.20.1
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-17 14:42 3% ` [dpdk-dev] [PATCH v2] " Olivier Matz
` (2 preceding siblings ...)
2019-10-23 12:00 0% ` Shahaf Shuler
@ 2019-10-24 7:38 0% ` Slava Ovsiienko
2019-10-24 7:56 0% ` Olivier Matz
3 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2019-10-24 7:38 UTC (permalink / raw)
To: Olivier Matz, dev
Cc: Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
Hi,
Doc building failed; it seems the rte_mbuf_dynfield_copy() description should be fixed:
./lib/librte_mbuf/rte_mbuf.h:1694: warning: argument 'm_dst' of command @param is not found in the argument list of rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
./lib/librte_mbuf/rte_mbuf.h:1694: warning: argument 'm_src' of command @param is not found in the argument list of rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
./lib/librte_mbuf/rte_mbuf.h:1694: warning: The following parameters of rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc) are not documented
With best regards,
Slava
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Olivier Matz
> Sent: Thursday, October 17, 2019 17:42
> To: dev@dpdk.org
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Bruce Richardson
> <bruce.richardson@intel.com>; Wang, Haiyue <haiyue.wang@intel.com>;
> Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> <keith.wiles@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Morten Brørup
> <mb@smartsharesystems.com>; Stephen Hemminger
> <stephen@networkplumber.org>; Thomas Monjalon
> <thomas@monjalon.net>
> Subject: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
>
> Many features need to store data inside the mbuf. As the room in the mbuf
> structure is limited, it is not possible to have a field for each feature. Also,
> changing fields in the mbuf structure can break the API or ABI.
>
> This commit addresses these issues, by enabling the dynamic registration of
> fields or flags:
>
> - a dynamic field is a named area in the rte_mbuf structure, with a
> given size (>= 1 byte) and alignment constraint.
> - a dynamic flag is a named bit in the rte_mbuf structure.
>
> The typical use case is a PMD that registers space for an offload feature,
> when the application requests to enable this feature. As the space in mbuf is
> limited, the space should only be reserved if it is going to be used (i.e when
> the application explicitly asks for it).
>
> The registration can be done at any moment, but it is not possible to
> unregister fields or flags for now.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>
> v2
>
> * Rebase on top of master: solve conflict with Stephen's patchset
> (packet copy)
> * Add new apis to register a dynamic field/flag at a specific place
> * Add a dump function (sugg by David)
> * Enhance field registration function to select the best offset, keeping
> large aligned zones as much as possible (sugg by Konstantin)
> * Use a size_t and unsigned int instead of int when relevant
> (sugg by Konstantin)
> * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> (sugg by Konstantin)
> * Remove unused argument in private function (sugg by Konstantin)
> * Fix and simplify locking (sugg by Konstantin)
> * Fix minor typo
>
> rfc -> v1
>
> * Rebase on top of master
> * Change registration API to use a structure instead of
> variables, getting rid of #defines (Stephen's comment)
> * Update flag registration to use a similar API as fields.
> * Change max name length from 32 to 64 (sugg. by Thomas)
> * Enhance API documentation (Haiyue's and Andrew's comments)
> * Add a debug log at registration
> * Add some words in release note
> * Did some performance tests (sugg. by Andrew):
> On my platform, reading a dynamic field takes ~3 cycles more
> than a static field, and ~2 cycles more for writing.
>
> app/test/test_mbuf.c | 145 ++++++-
> doc/guides/rel_notes/release_19_11.rst | 7 +
> lib/librte_mbuf/Makefile | 2 +
> lib/librte_mbuf/meson.build | 6 +-
> lib/librte_mbuf/rte_mbuf.h | 23 +-
> lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> lib/librte_mbuf/rte_mbuf_version.map | 7 +
>  8 files changed, 959 insertions(+), 5 deletions(-)
>  create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
>  create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
>
> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c index
> b9c2b2500..01cafad59 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -28,6 +28,7 @@
> #include <rte_random.h>
> #include <rte_cycles.h>
> #include <rte_malloc.h>
> +#include <rte_mbuf_dyn.h>
>
> #include "test.h"
>
> @@ -657,7 +658,6 @@ test_attach_from_different_pool(struct
> rte_mempool *pktmbuf_pool,
> rte_pktmbuf_free(clone2);
> return -1;
> }
> -#undef GOTO_FAIL
>
> /*
> * test allocation and free of mbufs
> @@ -1276,6 +1276,143 @@ test_tx_offload(void)
> return (v1 == v2) ? 0 : -EINVAL;
> }
>
> +static int
> +test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
> +{
> + const struct rte_mbuf_dynfield dynfield = {
> + .name = "test-dynfield",
> + .size = sizeof(uint8_t),
> + .align = __alignof__(uint8_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield2 = {
> + .name = "test-dynfield2",
> + .size = sizeof(uint16_t),
> + .align = __alignof__(uint16_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield3 = {
> + .name = "test-dynfield3",
> + .size = sizeof(uint8_t),
> + .align = __alignof__(uint8_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield_fail_big = {
> + .name = "test-dynfield-fail-big",
> + .size = 256,
> + .align = 1,
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield_fail_align = {
> + .name = "test-dynfield-fail-align",
> + .size = 1,
> + .align = 3,
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag = {
> + .name = "test-dynflag",
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag2 = {
> + .name = "test-dynflag2",
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag3 = {
> + .name = "test-dynflag3",
> + .flags = 0,
> + };
> + struct rte_mbuf *m = NULL;
> + int offset, offset2, offset3;
> + int flag, flag2, flag3;
> + int ret;
> +
> + printf("Test mbuf dynamic fields and flags\n");
> + rte_mbuf_dyn_dump(stdout);
> +
> + offset = rte_mbuf_dynfield_register(&dynfield);
> + if (offset == -1)
> + GOTO_FAIL("failed to register dynamic field, offset=%d: %s",
> + offset, strerror(errno));
> +
> + ret = rte_mbuf_dynfield_register(&dynfield);
> + if (ret != offset)
> + GOTO_FAIL("failed to lookup dynamic field, ret=%d: %s",
> + ret, strerror(errno));
> +
> + offset2 = rte_mbuf_dynfield_register(&dynfield2);
> + if (offset2 == -1 || offset2 == offset || (offset2 & 1))
> + GOTO_FAIL("failed to register dynamic field 2, offset2=%d:
> %s",
> + offset2, strerror(errno));
> +
> + offset3 = rte_mbuf_dynfield_register_offset(&dynfield3,
> + offsetof(struct rte_mbuf, dynfield1[1]));
> + if (offset3 != offsetof(struct rte_mbuf, dynfield1[1]))
> + GOTO_FAIL("failed to register dynamic field 3, offset=%d:
> %s",
> + offset3, strerror(errno));
> +
> + printf("dynfield: offset=%d, offset2=%d, offset3=%d\n",
> + offset, offset2, offset3);
> +
> + ret = rte_mbuf_dynfield_register(&dynfield_fail_big);
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (too big)");
> +
> + ret = rte_mbuf_dynfield_register(&dynfield_fail_align);
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (bad
> alignment)");
> +
> + ret = rte_mbuf_dynfield_register_offset(&dynfield_fail_align,
> + offsetof(struct rte_mbuf, ol_flags));
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (not avail)");
> +
> + flag = rte_mbuf_dynflag_register(&dynflag);
> + if (flag == -1)
> + GOTO_FAIL("failed to register dynamic flag, flag=%d: %s",
> + flag, strerror(errno));
> +
> + ret = rte_mbuf_dynflag_register(&dynflag);
> + if (ret != flag)
> + GOTO_FAIL("failed to lookup dynamic flag, ret=%d: %s",
> + ret, strerror(errno));
> +
> + flag2 = rte_mbuf_dynflag_register(&dynflag2);
> + if (flag2 == -1 || flag2 == flag)
> + GOTO_FAIL("failed to register dynamic flag 2, flag2=%d: %s",
> + flag2, strerror(errno));
> +
> + flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
> + rte_bsf64(PKT_LAST_FREE));
> + if (flag3 != rte_bsf64(PKT_LAST_FREE))
> + GOTO_FAIL("failed to register dynamic flag 3, flag2=%d: %s",
> + flag3, strerror(errno));
> +
> + printf("dynflag: flag=%d, flag2=%d, flag3=%d\n", flag, flag2, flag3);
> +
> + /* set, get dynamic field */
> + m = rte_pktmbuf_alloc(pktmbuf_pool);
> + if (m == NULL)
> + GOTO_FAIL("Cannot allocate mbuf");
> +
> + *RTE_MBUF_DYNFIELD(m, offset, uint8_t *) = 1;
> + if (*RTE_MBUF_DYNFIELD(m, offset, uint8_t *) != 1)
> + GOTO_FAIL("failed to read dynamic field");
> + *RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) = 1000;
> + if (*RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) != 1000)
> + GOTO_FAIL("failed to read dynamic field");
> +
> + /* set a dynamic flag */
> + m->ol_flags |= (1ULL << flag);
> +
> + rte_mbuf_dyn_dump(stdout);
> + rte_pktmbuf_free(m);
> + return 0;
> +fail:
> + rte_pktmbuf_free(m);
> + return -1;
> +}
> +#undef GOTO_FAIL
> +
> static int
> test_mbuf(void)
> {
> @@ -1295,6 +1432,12 @@ test_mbuf(void)
> goto err;
> }
>
> + /* test registration of dynamic fields and flags */
> + if (test_mbuf_dyn(pktmbuf_pool) < 0) {
> + printf("mbuf dynflag test failed\n");
> + goto err;
> + }
> +
> /* create a specific pktmbuf pool with a priv_size != 0 and no data
> * room size */
> pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
> diff --git a/doc/guides/rel_notes/release_19_11.rst
> b/doc/guides/rel_notes/release_19_11.rst
> index 85953b962..9e9c94554 100644
> --- a/doc/guides/rel_notes/release_19_11.rst
> +++ b/doc/guides/rel_notes/release_19_11.rst
> @@ -21,6 +21,13 @@ DPDK Release 19.11
>
> xdg-open build/doc/html/guides/rel_notes/release_19_11.html
>
> +* **Added support for dynamic fields and flags in mbuf.**
> +
> + This new feature adds the ability to dynamically register some room
> + for a field or a flag in the mbuf structure. This is typically used
> + for specific offload features, where adding a static field or flag in
> + the mbuf is not justified.
> +
>
> New Features
> ------------
> diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile index
> c8f6d2689..5a9bcee73 100644
> --- a/lib/librte_mbuf/Makefile
> +++ b/lib/librte_mbuf/Makefile
> @@ -17,8 +17,10 @@ LIBABIVER := 5
>
> # all source are stored in SRCS-y
> SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c
> rte_mbuf_pool_ops.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MBUF) += rte_mbuf_dyn.c
>
> # install includes
> SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h
> rte_mbuf_ptype.h rte_mbuf_pool_ops.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_dyn.h
>
> include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build index
> 6cc11ebb4..9137e8f26 100644
> --- a/lib/librte_mbuf/meson.build
> +++ b/lib/librte_mbuf/meson.build
> @@ -2,8 +2,10 @@
> # Copyright(c) 2017 Intel Corporation
>
> version = 5
> -sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c')
> -headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
> +sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c',
> + 'rte_mbuf_dyn.c')
> +headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h',
> + 'rte_mbuf_dyn.h')
> deps += ['mempool']
>
> allow_experimental_apis = true
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h index
> fb0849ac1..5740b1e93 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -198,9 +198,12 @@ extern "C" {
> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
>
> -/* add new RX flags here */
> +/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>
> -/* add new TX flags here */
> +#define PKT_FIRST_FREE (1ULL << 23)
> +#define PKT_LAST_FREE (1ULL << 39)
> +
> +/* add new TX flags here, don't forget to update PKT_LAST_FREE */
>
> /**
> * Indicate that the metadata field in the mbuf is in use.
> @@ -738,6 +741,7 @@ struct rte_mbuf {
> */
> struct rte_mbuf_ext_shared_info *shinfo;
>
> + uint64_t dynfield1[2]; /**< Reserved for dynamic fields. */
> } __rte_cache_aligned;
>
> /**
> @@ -1684,6 +1688,20 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m,
> void *buf_addr,
> */
> #define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
>
> +/**
> + * Copy dynamic fields from m_src to m_dst.
> + *
> + * @param m_dst
> + * The destination mbuf.
> + * @param m_src
> + * The source mbuf.
> + */
> +static inline void
> +rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> +{
> +	memcpy(&mdst->dynfield1, msrc->dynfield1, sizeof(mdst->dynfield1));
> +}
> +
> /* internal */
> static inline void
> __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf
> *msrc) @@ -1695,6 +1713,7 @@ __rte_pktmbuf_copy_hdr(struct rte_mbuf
> *mdst, const struct rte_mbuf *msrc)
> mdst->hash = msrc->hash;
> mdst->packet_type = msrc->packet_type;
> mdst->timestamp = msrc->timestamp;
> + rte_mbuf_dynfield_copy(mdst, msrc);
> }
>
> /**
> diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c
> b/lib/librte_mbuf/rte_mbuf_dyn.c new file mode 100644 index
> 000000000..9ef235483
> --- /dev/null
> +++ b/lib/librte_mbuf/rte_mbuf_dyn.c
> @@ -0,0 +1,548 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2019 6WIND S.A.
> + */
> +
> +#include <sys/queue.h>
> +#include <stdint.h>
> +#include <limits.h>
> +
> +#include <rte_common.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_tailq.h>
> +#include <rte_errno.h>
> +#include <rte_malloc.h>
> +#include <rte_string_fns.h>
> +#include <rte_mbuf.h>
> +#include <rte_mbuf_dyn.h>
> +
> +#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
> +
> +struct mbuf_dynfield_elt {
> + TAILQ_ENTRY(mbuf_dynfield_elt) next;
> + struct rte_mbuf_dynfield params;
> + size_t offset;
> +};
> +TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
> +
> +static struct rte_tailq_elem mbuf_dynfield_tailq = {
> + .name = "RTE_MBUF_DYNFIELD",
> +};
> +EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
> +
> +struct mbuf_dynflag_elt {
> + TAILQ_ENTRY(mbuf_dynflag_elt) next;
> + struct rte_mbuf_dynflag params;
> + unsigned int bitnum;
> +};
> +TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
> +
> +static struct rte_tailq_elem mbuf_dynflag_tailq = {
> + .name = "RTE_MBUF_DYNFLAG",
> +};
> +EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
> +
> +struct mbuf_dyn_shm {
> + /**
> + * For each mbuf byte, free_space[i] != 0 if space is free.
> + * The value is the size of the biggest aligned element that
> + * can fit in the zone.
> + */
> + uint8_t free_space[sizeof(struct rte_mbuf)];
> + /** Bitfield of available flags. */
> + uint64_t free_flags;
> +};
> +static struct mbuf_dyn_shm *shm;
> +
> +/* Set the value of free_space[] according to the size and alignment of
> + * the free areas. This helps to select the best place when reserving a
> + * dynamic field. Assume tailq is locked.
> + */
> +static void
> +process_score(void)
> +{
> + size_t off, align, size, i;
> +
> + /* first, erase previous info */
> + for (i = 0; i < sizeof(struct rte_mbuf); i++) {
> + if (shm->free_space[i])
> + shm->free_space[i] = 1;
> + }
> +
> + for (off = 0; off < sizeof(struct rte_mbuf); off++) {
> + /* get the size of the free zone */
> + for (size = 0; shm->free_space[off + size]; size++)
> + ;
> + if (size == 0)
> + continue;
> +
> + /* get the alignment of biggest object that can fit in
> + * the zone at this offset.
> + */
> + for (align = 1;
> + (off % (align << 1)) == 0 && (align << 1) <= size;
> + align <<= 1)
> + ;
> +
> + /* save it in free_space[] */
> + for (i = off; i < off + size; i++)
> +			shm->free_space[i] = RTE_MAX(align, shm->free_space[i]);
> + }
> +}
> +
> +/* Allocate and initialize the shared memory. Assume tailq is locked */
> +static int
> +init_shared_mem(void)
> +{
> + const struct rte_memzone *mz;
> + uint64_t mask;
> +
> + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> + mz =
> rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
> + sizeof(struct
> mbuf_dyn_shm),
> + SOCKET_ID_ANY, 0,
> + RTE_CACHE_LINE_SIZE);
> + } else {
> + mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
> + }
> + if (mz == NULL)
> + return -1;
> +
> + shm = mz->addr;
> +
> +#define mark_free(field) \
> + memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
> + 1, sizeof(((struct rte_mbuf *)0)->field))
> +
> + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> + /* init free_space, keep it sync'd with
> + * rte_mbuf_dynfield_copy().
> + */
> + memset(shm, 0, sizeof(*shm));
> + mark_free(dynfield1);
> +
> + /* init free_flags */
> + for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask
> <<= 1)
> + shm->free_flags |= mask;
> +
> + process_score();
> + }
> +#undef mark_free
> +
> + return 0;
> +}
> +
> +/* check if this offset can be used */
> +static int
> +check_offset(size_t offset, size_t size, size_t align)
> +{
> + size_t i;
> +
> + if ((offset & (align - 1)) != 0)
> + return -1;
> + if (offset + size > sizeof(struct rte_mbuf))
> + return -1;
> +
> + for (i = 0; i < size; i++) {
> + if (!shm->free_space[i + offset])
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static struct mbuf_dynfield_elt *
> +__mbuf_dynfield_lookup(const char *name)
> +{
> + struct mbuf_dynfield_list *mbuf_dynfield_list;
> + struct mbuf_dynfield_elt *mbuf_dynfield;
> + struct rte_tailq_entry *te;
> +
> + mbuf_dynfield_list = RTE_TAILQ_CAST(
> + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> +
> + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> + mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
> + if (strcmp(name, mbuf_dynfield->params.name) == 0)
> + break;
> + }
> +
> + if (te == NULL) {
> + rte_errno = ENOENT;
> + return NULL;
> + }
> +
> + return mbuf_dynfield;
> +}
> +
> +int
> +rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
> +{
> + struct mbuf_dynfield_elt *mbuf_dynfield;
> +
> + if (shm == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_read_lock();
> + mbuf_dynfield = __mbuf_dynfield_lookup(name);
> + rte_mcfg_tailq_read_unlock();
> +
> + if (mbuf_dynfield == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + if (params != NULL)
> + memcpy(params, &mbuf_dynfield->params,
> sizeof(*params));
> +
> + return mbuf_dynfield->offset;
> +}
> +
> +static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
> +	const struct rte_mbuf_dynfield *params2)
> +{
> + if (strcmp(params1->name, params2->name))
> + return -1;
> + if (params1->size != params2->size)
> + return -1;
> + if (params1->align != params2->align)
> + return -1;
> + if (params1->flags != params2->flags)
> + return -1;
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static int
> +__rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield
> *params,
> + size_t req)
> +{
> + struct mbuf_dynfield_list *mbuf_dynfield_list;
> + struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
> + struct rte_tailq_entry *te = NULL;
> + unsigned int best_zone = UINT_MAX;
> + size_t i, offset;
> + int ret;
> +
> + if (shm == NULL && init_shared_mem() < 0)
> + return -1;
> +
> + mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
> + if (mbuf_dynfield != NULL) {
> + if (req != SIZE_MAX && req != mbuf_dynfield->offset) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) <
> 0) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + return mbuf_dynfield->offset;
> + }
> +
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> + rte_errno = EPERM;
> + return -1;
> + }
> +
> + if (req == SIZE_MAX) {
> + for (offset = 0;
> + offset < sizeof(struct rte_mbuf);
> + offset++) {
> + if (check_offset(offset, params->size,
> + params->align) == 0 &&
> + shm->free_space[offset] <
> best_zone) {
> + best_zone = shm->free_space[offset];
> + req = offset;
> + }
> + }
> + if (req == SIZE_MAX) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> + } else {
> + if (check_offset(req, params->size, params->align) < 0) {
> + rte_errno = EBUSY;
> + return -1;
> + }
> + }
> +
> + offset = req;
> + mbuf_dynfield_list = RTE_TAILQ_CAST(
> + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> +
> + te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
> + if (te == NULL)
> + return -1;
> +
> + mbuf_dynfield = rte_zmalloc("mbuf_dynfield",
> sizeof(*mbuf_dynfield), 0);
> + if (mbuf_dynfield == NULL) {
> + rte_free(te);
> + return -1;
> + }
> +
> + ret = strlcpy(mbuf_dynfield->params.name, params->name,
> + sizeof(mbuf_dynfield->params.name));
> + if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
> + rte_errno = ENAMETOOLONG;
> + rte_free(mbuf_dynfield);
> + rte_free(te);
> + return -1;
> + }
> +	memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
> + mbuf_dynfield->offset = offset;
> + te->data = mbuf_dynfield;
> +
> + TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
> +
> + for (i = offset; i < offset + params->size; i++)
> + shm->free_space[i] = 0;
> + process_score();
> +
> + RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu,
> al=%zu, fl=0x%x) -> %zd\n",
> + params->name, params->size, params->align, params->flags,
> + offset);
> +
> + return offset;
> +}
> +
> +int
> +rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield
> *params,
> + size_t req)
> +{
> + int ret;
> +
> + if (params->size >= sizeof(struct rte_mbuf)) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> + if (!rte_is_power_of_2(params->align)) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> + if (params->flags != 0) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_write_lock();
> + ret = __rte_mbuf_dynfield_register_offset(params, req);
> + rte_mcfg_tailq_write_unlock();
> +
> + return ret;
> +}
> +
> +int
> +rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
> +{
> +	return rte_mbuf_dynfield_register_offset(params, SIZE_MAX);
> +}
> +
> +/* assume tailq is locked */
> +static struct mbuf_dynflag_elt *
> +__mbuf_dynflag_lookup(const char *name)
> +{
> + struct mbuf_dynflag_list *mbuf_dynflag_list;
> + struct mbuf_dynflag_elt *mbuf_dynflag;
> + struct rte_tailq_entry *te;
> +
> + mbuf_dynflag_list = RTE_TAILQ_CAST(
> + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> +
> + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> + mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
> + if (strncmp(name, mbuf_dynflag->params.name,
> + RTE_MBUF_DYN_NAMESIZE) == 0)
> + break;
> + }
> +
> + if (te == NULL) {
> + rte_errno = ENOENT;
> + return NULL;
> + }
> +
> + return mbuf_dynflag;
> +}
> +
> +int
> +rte_mbuf_dynflag_lookup(const char *name,
> + struct rte_mbuf_dynflag *params)
> +{
> + struct mbuf_dynflag_elt *mbuf_dynflag;
> +
> + if (shm == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_read_lock();
> + mbuf_dynflag = __mbuf_dynflag_lookup(name);
> + rte_mcfg_tailq_read_unlock();
> +
> + if (mbuf_dynflag == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + if (params != NULL)
> + memcpy(params, &mbuf_dynflag->params, sizeof(*params));
> +
> + return mbuf_dynflag->bitnum;
> +}
> +
> +static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
> +	const struct rte_mbuf_dynflag *params2)
> +{
> + if (strcmp(params1->name, params2->name))
> + return -1;
> + if (params1->flags != params2->flags)
> + return -1;
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static int
> +__rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag
> *params,
> + unsigned int req)
> +{
> + struct mbuf_dynflag_list *mbuf_dynflag_list;
> + struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
> + struct rte_tailq_entry *te = NULL;
> + unsigned int bitnum;
> + int ret;
> +
> + if (shm == NULL && init_shared_mem() < 0)
> + return -1;
> +
> + mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
> + if (mbuf_dynflag != NULL) {
> + if (req != UINT_MAX && req != mbuf_dynflag->bitnum) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0)
> {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + return mbuf_dynflag->bitnum;
> + }
> +
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> + rte_errno = EPERM;
> + return -1;
> + }
> +
> + if (req == UINT_MAX) {
> + if (shm->free_flags == 0) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> + bitnum = rte_bsf64(shm->free_flags);
> + } else {
> + if ((shm->free_flags & (1ULL << req)) == 0) {
> + rte_errno = EBUSY;
> + return -1;
> + }
> + bitnum = req;
> + }
> +
> + mbuf_dynflag_list = RTE_TAILQ_CAST(
> + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> +
> + te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
> + if (te == NULL)
> + return -1;
> +
> + mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag),
> 0);
> + if (mbuf_dynflag == NULL) {
> + rte_free(te);
> + return -1;
> + }
> +
> + ret = strlcpy(mbuf_dynflag->params.name, params->name,
> + sizeof(mbuf_dynflag->params.name));
> + if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
> + rte_free(mbuf_dynflag);
> + rte_free(te);
> + rte_errno = ENAMETOOLONG;
> + return -1;
> + }
> + mbuf_dynflag->bitnum = bitnum;
> + te->data = mbuf_dynflag;
> +
> + TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
> +
> + shm->free_flags &= ~(1ULL << bitnum);
> +
> + RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) ->
> %u\n",
> + params->name, params->flags, bitnum);
> +
> + return bitnum;
> +}
> +
> +int
> +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> + unsigned int req)
> +{
> + int ret;
> +
> + if (req != UINT_MAX && req >= 64) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_write_lock();
> + ret = __rte_mbuf_dynflag_register_bitnum(params, req);
> + rte_mcfg_tailq_write_unlock();
> +
> + return ret;
> +}
> +
> +int
> +rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
> +{
> +	return rte_mbuf_dynflag_register_bitnum(params, UINT_MAX);
> +}
> +
> +void rte_mbuf_dyn_dump(FILE *out)
> +{
> + struct mbuf_dynfield_list *mbuf_dynfield_list;
> + struct mbuf_dynfield_elt *dynfield;
> + struct mbuf_dynflag_list *mbuf_dynflag_list;
> + struct mbuf_dynflag_elt *dynflag;
> + struct rte_tailq_entry *te;
> + size_t i;
> +
> + rte_mcfg_tailq_write_lock();
> + init_shared_mem();
> + fprintf(out, "Reserved fields:\n");
> + mbuf_dynfield_list = RTE_TAILQ_CAST(
> + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> + dynfield = (struct mbuf_dynfield_elt *)te->data;
> + fprintf(out, " name=%s offset=%zd size=%zd align=%zd
> flags=%x\n",
> + dynfield->params.name, dynfield->offset,
> + dynfield->params.size, dynfield->params.align,
> + dynfield->params.flags);
> + }
> + fprintf(out, "Reserved flags:\n");
> + mbuf_dynflag_list = RTE_TAILQ_CAST(
> + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> + dynflag = (struct mbuf_dynflag_elt *)te->data;
> + fprintf(out, " name=%s bitnum=%u flags=%x\n",
> + dynflag->params.name, dynflag->bitnum,
> + dynflag->params.flags);
> + }
> + fprintf(out, "Free space in mbuf (0 = free, value = zone
> alignment):\n");
> + for (i = 0; i < sizeof(struct rte_mbuf); i++) {
> + if ((i % 8) == 0)
> + fprintf(out, " %4.4zx: ", i);
> + fprintf(out, "%2.2x%s", shm->free_space[i],
> + (i % 8 != 7) ? " " : "\n");
> + }
> + rte_mcfg_tailq_write_unlock();
> +}
> diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h
> b/lib/librte_mbuf/rte_mbuf_dyn.h new file mode 100644 index
> 000000000..307613c96
> --- /dev/null
> +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> @@ -0,0 +1,226 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2019 6WIND S.A.
> + */
> +
> +#ifndef _RTE_MBUF_DYN_H_
> +#define _RTE_MBUF_DYN_H_
> +
> +/**
> + * @file
> + * RTE Mbuf dynamic fields and flags
> + *
> + * Many features need to store data inside the mbuf. As the room in
> + * mbuf structure is limited, it is not possible to have a field for
> + * each feature. Also, changing fields in the mbuf structure can break
> + * the API or ABI.
> + *
> + * This module addresses this issue, by enabling the dynamic
> + * registration of fields or flags:
> + *
> + * - a dynamic field is a named area in the rte_mbuf structure, with a
> + * given size (>= 1 byte) and alignment constraint.
> + * - a dynamic flag is a named bit in the rte_mbuf structure, stored
> + * in mbuf->ol_flags.
> + *
> + * The typical use case is when a specific offload feature requires to
> + * register a dedicated offload field in the mbuf structure, and adding
> + * a static field or flag is not justified.
> + *
> + * Example of use:
> + *
> + * - A rte_mbuf_dynfield structure is defined, containing the parameters
> + * of the dynamic field to be registered:
> + * const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
> + * - The application initializes the PMD, and asks for this feature
> + * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
> + * rxconf. This will make the PMD register the field by calling
> + * rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
> + * stores the returned offset.
> + * - The application that uses the offload feature also registers
> + * the field to retrieve the same offset.
> + * - When the PMD receives a packet, it can set the field:
> + * *RTE_MBUF_DYNFIELD(m, offset, <type *>) = value;
> + * - In the main loop, the application can retrieve the value with
> + * the same macro.
> + *
> + * To avoid wasting space, the dynamic fields or flags must only be
> + * reserved on demand, when an application asks for the related feature.
> + *
> + * The registration can be done at any moment, but it is not possible
> + * to unregister fields or flags for now.
> + *
> + * A dynamic field can be reserved and used by an application only.
> + * It can for instance be a packet mark.
> + */
> +
> +#include <sys/types.h>
> +/**
> + * Maximum length of the dynamic field or flag string.
> + */
> +#define RTE_MBUF_DYN_NAMESIZE 64
> +
> +/**
> + * Structure describing the parameters of a mbuf dynamic field.
> + */
> +struct rte_mbuf_dynfield {
> + char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the field. */
> + size_t size; /**< The number of bytes to reserve. */
> + size_t align; /**< The alignment constraint (power of 2). */
> +	unsigned int flags; /**< Reserved for future use, must be 0. */
> +};
> +
> +/**
> + * Structure describing the parameters of a mbuf dynamic flag.
> + */
> +struct rte_mbuf_dynflag {
> + char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the dynamic
> flag. */
> +	unsigned int flags; /**< Reserved for future use, must be 0. */
> +};
> +
> +/**
> + * Register space for a dynamic field in the mbuf structure.
> + *
> + * If the field is already registered (same name and parameters), its
> + * offset is returned.
> + *
> + * @param params
> + * A structure containing the requested parameters (name, size,
> + * alignment constraint and flags).
> + * @return
> + * The offset in the mbuf structure, or -1 on error.
> + * Possible values for rte_errno:
> + * - EINVAL: invalid parameters (size, align, or flags).
> + * - EEXIST: this name is already registered with different parameters.
> + * - EPERM: called from a secondary process.
> + * - ENOENT: not enough room in mbuf.
> + * - ENOMEM: allocation failure.
> + * - ENAMETOOLONG: name does not end with \0.
> + */
> +__rte_experimental
> +int rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params);
> +
> +/**
> + * Register space for a dynamic field in the mbuf structure at offset.
> + *
> + * If the field is already registered (same name, parameters and offset),
> + * the offset is returned.
> + *
> + * @param params
> + * A structure containing the requested parameters (name, size,
> + * alignment constraint and flags).
> + * @param offset
> + * The requested offset. Ignored if SIZE_MAX is passed.
> + * @return
> + * The offset in the mbuf structure, or -1 on error.
> + * Possible values for rte_errno:
> + * - EINVAL: invalid parameters (size, align, flags, or offset).
> + * - EEXIST: this name is already registered with different parameters.
> + * - EBUSY: the requested offset cannot be used.
> + * - EPERM: called from a secondary process.
> + * - ENOENT: not enough room in mbuf.
> + * - ENOMEM: allocation failure.
> + * - ENAMETOOLONG: name does not end with \0.
> + */
> +__rte_experimental
> +int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> + size_t offset);
> +
> +/**
> + * Lookup for a registered dynamic mbuf field.
> + *
> + * @param name
> + * A string identifying the dynamic field.
> + * @param params
> + * If not NULL, and if the lookup is successful, the structure is
> + * filled with the parameters of the dynamic field.
> + * @return
> + * The offset of this field in the mbuf structure, or -1 on error.
> + * Possible values for rte_errno:
> + * - ENOENT: no dynamic field matches this name.
> + */
> +__rte_experimental
> +int rte_mbuf_dynfield_lookup(const char *name,
> + struct rte_mbuf_dynfield *params);
> +
> +/**
> + * Register a dynamic flag in the mbuf structure.
> + *
> + * If the flag is already registered (same name and parameters), its
> + * bitnum is returned.
> + *
> + * @param params
> + * A structure containing the requested parameters of the dynamic
> + * flag (name and options).
> + * @return
> + * The number of the reserved bit, or -1 on error.
> + * Possible values for rte_errno:
> + * - EINVAL: invalid parameters (size, align, or flags).
> + * - EEXIST: this name is already registered with different parameters.
> + * - EPERM: called from a secondary process.
> + * - ENOENT: no more flag available.
> + * - ENOMEM: allocation failure.
> + * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
> + */
> +__rte_experimental
> +int rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params);
> +
> +/**
> + * Register a dynamic flag in the mbuf structure specifying bitnum.
> + *
> + * If the flag is already registered (same name, parameters and bitnum),
> + * the bitnum is returned.
> + *
> + * @param params
> + * A structure containing the requested parameters of the dynamic
> + * flag (name and options).
> + * @param bitnum
> + * The requested bitnum. Ignored if UINT_MAX is passed.
> + * @return
> + * The number of the reserved bit, or -1 on error.
> + * Possible values for rte_errno:
> + * - EINVAL: invalid parameters (size, align, or flags).
> + * - EEXIST: this name is already registered with different parameters.
> + * - EBUSY: the requested bitnum cannot be used.
> + * - EPERM: called from a secondary process.
> + * - ENOENT: no more flag available.
> + * - ENOMEM: allocation failure.
> + * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
> + */
> +__rte_experimental
> +int rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> + unsigned int bitnum);
> +
> +/**
> + * Lookup for a registered dynamic mbuf flag.
> + *
> + * @param name
> + * A string identifying the dynamic flag.
> + * @param params
> + * If not NULL, and if the lookup is successful, the structure is
> + * filled with the parameters of the dynamic flag.
> + * @return
> + * The offset of this flag in the mbuf structure, or -1 on error.
> + * Possible values for rte_errno:
> + * - ENOENT: no dynamic flag matches this name.
> + */
> +__rte_experimental
> +int rte_mbuf_dynflag_lookup(const char *name,
> + struct rte_mbuf_dynflag *params);
> +
> +/**
> + * Helper macro to access to a dynamic field.
> + */
> +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> +
> +/**
> + * Dump the status of dynamic fields and flags.
> + *
> + * @param out
> + * The stream where the status is displayed.
> + */
> +__rte_experimental
> +void rte_mbuf_dyn_dump(FILE *out);
> +
> +/* Placeholder for dynamic fields and flags declarations. */
> +
> +#endif
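To make the registration/access API quoted above concrete, here is a minimal, self-contained usage sketch. The `seqn` field name, the fixed offset 64, and the `dynfield_register_stub()` helper are illustrative stand-ins, not part of the patch; only the `RTE_MBUF_DYNFIELD()` macro is copied from it.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* The real macro from the patch above; copied here so the sketch is
 * self-contained. */
#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))

struct rte_mbuf { uint64_t pad[16]; };  /* simplified stand-in mbuf */

static int seqn_offset = -1;            /* offset returned by registration */

/* Stand-in for rte_mbuf_dynfield_register(): hands out a fixed offset
 * instead of scanning for free mbuf space. */
static int
dynfield_register_stub(size_t size, size_t align)
{
	(void)size;
	(void)align;
	seqn_offset = 64;               /* pretend bytes 64..67 are free */
	return seqn_offset;
}

static void
set_seqn(struct rte_mbuf *m, uint32_t seqn)
{
	*RTE_MBUF_DYNFIELD(m, seqn_offset, uint32_t *) = seqn;
}

static uint32_t
get_seqn(const struct rte_mbuf *m)
{
	return *RTE_MBUF_DYNFIELD(m, seqn_offset, uint32_t *);
}
```

In real code the offset comes from rte_mbuf_dynfield_register() once at initialization time, and the datapath then dereferences it exactly as in set_seqn()/get_seqn().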
> diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
> index 519fead35..9bf5ca37a 100644
> --- a/lib/librte_mbuf/rte_mbuf_version.map
> +++ b/lib/librte_mbuf/rte_mbuf_version.map
> @@ -58,6 +58,13 @@ EXPERIMENTAL {
> global:
>
> rte_mbuf_check;
> + rte_mbuf_dynfield_lookup;
> + rte_mbuf_dynfield_register;
> + rte_mbuf_dynfield_register_offset;
> + rte_mbuf_dynflag_lookup;
> + rte_mbuf_dynflag_register;
> + rte_mbuf_dynflag_register_bitnum;
> + rte_mbuf_dyn_dump;
> rte_pktmbuf_copy;
>
> } DPDK_18.08;
> --
> 2.20.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11
2019-10-23 21:10 7% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 Stephen Hemminger
@ 2019-10-24 7:32 4% ` David Marchand
2019-10-24 15:37 4% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2019-10-24 7:32 UTC (permalink / raw)
To: Stephen Hemminger, Thomas Monjalon; +Cc: dev, Burakov, Anatoly
On Wed, Oct 23, 2019 at 11:10 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Wed, 23 Oct 2019 20:54:12 +0200
> David Marchand <david.marchand@redhat.com> wrote:
>
> > Let's prepare for the ABI freeze.
> >
> > The first patches are about changes that had been announced before (with
> > a patch from Stephen that I took as it is ready as is from my pov).
> >
> > The malloc_heap structure from the memory subsystem can be hidden.
> > The PCI library had some forgotten deprecated APIs that are removed with
> > this series.
> >
> > rte_logs could be hidden, but I am not that comfortable about
> > doing it right away: I added an accessor to rte_logs.file, but I am fine
> > with dropping the last patch and waiting to actually hide this in the next
> > ABI break.
>
> 19.11 is an api/abi break so maybe do it now.
Did you look at the 4 new patches too?
Same concern + this was not announced before either.
I went and hid more internals, I did not see an impact on really basic bench.
I would appreciate other opinions.
--
David Marchand
* Re: [dpdk-dev] [PATCH v8 00/13] vhost packed ring performance optimization
2019-10-24 6:49 0% ` Maxime Coquelin
@ 2019-10-24 7:18 0% ` Liu, Yong
2019-10-24 8:24 0% ` Maxime Coquelin
0 siblings, 1 reply; 200+ results
From: Liu, Yong @ 2019-10-24 7:18 UTC (permalink / raw)
To: Maxime Coquelin, Bie, Tiwei, Wang, Zhihong, stephen, gavin.hu; +Cc: dev
> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Thursday, October 24, 2019 2:50 PM
> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>; Wang,
> Zhihong <zhihong.wang@intel.com>; stephen@networkplumber.org;
> gavin.hu@arm.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v8 00/13] vhost packed ring performance optimization
>
> I get some checkpatch warnings, and build fails with clang.
> Could you please fix these issues and send v9?
>
Hi Maxime,
The clang build failure will be fixed in v9. The checkpatch warning is due to the pragma string inside the macro.
The previous version could avoid such a warning, but its format is a little messy, as below.
I prefer to keep the code clean and more readable. What do you think?
+#ifdef UNROLL_PRAGMA_PARAM
+#define VHOST_UNROLL_PRAGMA(param) _Pragma(param)
+#else
+#define VHOST_UNROLL_PRAGMA(param) do {} while (0);
+#endif
+ VHOST_UNROLL_PRAGMA(UNROLL_PRAGMA_PARAM)
+ for (i = 0; i < PACKED_BATCH_SIZE; i++)
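For reference, a self-contained sketch of the compiler-dependent selection that feeds this macro (the guards and the unroll factor of 4 here are assumptions based on this thread; the v9 definitions may name things differently):

```c
#include <assert.h>

/* Pick an unroll pragma string per compiler, as discussed in this thread. */
#if defined(__clang__)
#define UNROLL_PRAGMA_PARAM "unroll 4"
#elif defined(__GNUC__) && (__GNUC__ >= 8)
#define UNROLL_PRAGMA_PARAM "GCC unroll 4"
#endif

#ifdef UNROLL_PRAGMA_PARAM
#define VHOST_UNROLL_PRAGMA(param) _Pragma(param)
#else
#define VHOST_UNROLL_PRAGMA(param) do {} while (0);
#endif

/* A loop that benefits from unrolling; functionally identical with or
 * without the pragma. */
static int
sum4(const int *v)
{
	int i;
	int sum = 0;

	VHOST_UNROLL_PRAGMA(UNROLL_PRAGMA_PARAM)
	for (i = 0; i < 4; i++)
		sum += v[i];
	return sum;
}
```

When UNROLL_PRAGMA_PARAM is undefined (older GCC, other compilers), the macro expands to an empty statement and the loop compiles unchanged, so the same source works everywhere.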
Regards,
Marvin
> Thanks,
> Maxime
>
> ### [PATCH] vhost: try to unroll for each loop
>
> WARNING:CAMELCASE: Avoid CamelCase: <_Pragma>
> #78: FILE: lib/librte_vhost/vhost.h:47:
> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll
> 4") \
>
> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
> parenthesis
> #78: FILE: lib/librte_vhost/vhost.h:47:
> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll
> 4") \
> + for (iter = val; iter < size; iter++)
>
> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
> parenthesis
> #83: FILE: lib/librte_vhost/vhost.h:52:
> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("unroll 4") \
> + for (iter = val; iter < size; iter++)
>
> ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
> parenthesis
> #88: FILE: lib/librte_vhost/vhost.h:57:
> +#define vhost_for_each_try_unroll(iter, val, size) _Pragma("unroll (4)") \
> + for (iter = val; iter < size; iter++)
>
> total: 3 errors, 1 warnings, 67 lines checked
>
0/1 valid patch

/tmp/dpdk_build/lib/librte_vhost/virtio_net.c:2065:1:
error: unused function 'free_zmbuf' [-Werror,-Wunused-function]
> free_zmbuf(struct vhost_virtqueue *vq)
> ^
> 1 error generated.
> make[5]: *** [virtio_net.o] Error 1
> make[4]: *** [librte_vhost] Error 2
> make[4]: *** Waiting for unfinished jobs....
> make[3]: *** [lib] Error 2
> make[2]: *** [all] Error 2
> make[1]: *** [pre_install] Error 2
> make: *** [install] Error 2
>
>
> On 10/22/19 12:08 AM, Marvin Liu wrote:
> > Packed ring has more compact ring format and thus can significantly
> > reduce the number of cache miss. It can lead to better performance.
> > This has been approved in virtio user driver, on normal E5 Xeon cpu
> > single core performance can raise 12%.
> >
> > http://mails.dpdk.org/archives/dev/2018-April/095470.html
> >
> > However vhost performance with packed ring performance was decreased.
> > Through analysis, mostly extra cost was from the calculating of each
> > descriptor flag which depended on ring wrap counter. Moreover, both
> > frontend and backend need to write same descriptors which will cause
> > cache contention. Especially when doing vhost enqueue function, virtio
> > refill packed ring function may write same cache line when vhost doing
> > enqueue function. This kind of extra cache cost will reduce the benefit
> > of reducing cache misses.
> >
> > For optimizing vhost packed ring performance, vhost enqueue and dequeue
> > function will be split into fast and normal paths.
> >
> > Several methods will be taken in fast path:
> > Handle descriptors in one cache line by batch.
> > Split loop function into more pieces and unroll them.
> > Prerequisite check that whether I/O space can copy directly into mbuf
> > space and vice versa.
> > Prerequisite check that whether descriptor mapping is successful.
> > Distinguish vhost used ring update function by enqueue and dequeue
> > function.
> > Buffer dequeue used descriptors as many as possible.
> > Update enqueue used descriptors by cache line.
> >
> > After all these methods done, single core vhost PvP performance with 64B
> > packet on Xeon 8180 can boost 35%.
> >
> > v8:
> > - Allocate mbuf by virtio_dev_pktmbuf_alloc
> >
> > v7:
> > - Rebase code
> > - Rename unroll macro and definitions
> > - Calculate flags when doing single dequeue
> >
> > v6:
> > - Fix dequeue zcopy result check
> >
> > v5:
> > - Remove disable sw prefetch as performance impact is small
> > - Change unroll pragma macro format
> > - Rename shadow counter elements names
> > - Clean dequeue update check condition
> > - Add inline functions replace of duplicated code
> > - Unify code style
> >
> > v4:
> > - Support meson build
> > - Remove memory region cache for no clear performance gain and ABI break
> > - Not assume ring size is power of two
> >
> > v3:
> > - Check available index overflow
> > - Remove dequeue remained descs number check
> > - Remove changes in split ring datapath
> > - Call memory write barriers once when updating used flags
> > - Rename some functions and macros
> > - Code style optimization
> >
> > v2:
> > - Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
> > - Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
> > - Optimize dequeue used ring update when in_order negotiated
> >
> >
> > Marvin Liu (13):
> > vhost: add packed ring indexes increasing function
> > vhost: add packed ring single enqueue
> > vhost: try to unroll for each loop
> > vhost: add packed ring batch enqueue
> > vhost: add packed ring single dequeue
> > vhost: add packed ring batch dequeue
> > vhost: flush enqueue updates by cacheline
> > vhost: flush batched enqueue descs directly
> > vhost: buffer packed ring dequeue updates
> > vhost: optimize packed ring enqueue
> > vhost: add packed ring zcopy batch and single dequeue
> > vhost: optimize packed ring dequeue
> > vhost: optimize packed ring dequeue when in-order
> >
> > lib/librte_vhost/Makefile | 18 +
> > lib/librte_vhost/meson.build | 7 +
> > lib/librte_vhost/vhost.h | 57 ++
> > lib/librte_vhost/virtio_net.c | 948 +++++++++++++++++++++++++++-------
> > 4 files changed, 837 insertions(+), 193 deletions(-)
> >
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-24 4:54 0% ` Shahaf Shuler
@ 2019-10-24 7:07 0% ` Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2019-10-24 7:07 UTC (permalink / raw)
To: Shahaf Shuler
Cc: dev, Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
Hi,
On Thu, Oct 24, 2019 at 04:54:20AM +0000, Shahaf Shuler wrote:
> Wednesday, October 23, 2019 4:34 PM, Olivier Matz:
> > Subject: Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
> >
> > Hi Shahaf,
> >
> > On Wed, Oct 23, 2019 at 12:00:30PM +0000, Shahaf Shuler wrote:
> > > Hi Olivier,
> > >
> > > Thursday, October 17, 2019 5:42 PM, Olivier Matz:
> > > > Subject: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and
> > > > flags
> > > >
> > > > Many features require to store data inside the mbuf. As the room in
> > > > mbuf structure is limited, it is not possible to have a field for
> > > > each feature. Also, changing fields in the mbuf structure can break the
> > API or ABI.
> > > >
> > > > This commit addresses these issues, by enabling the dynamic
> > > > registration of fields or flags:
> > > >
> > > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > > given size (>= 1 byte) and alignment constraint.
> > > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > > >
> > > > The typical use case is a PMD that registers space for an offload
> > > > feature, when the application requests to enable this feature. As
> > > > the space in mbuf is limited, the space should only be reserved if
> > > > it is going to be used (i.e when the application explicitly asks for it).
> > >
> > > According to description, the dynamic field enables custom application and
> > supported PMDs to use the dynamic part of the mbuf for their specific
> > needs.
> > > However the mechanism to report and activate the field/flag registration
> > comes from the general OFFLOAD flags.
> > >
> > > Maybe it will be better to an option to query and select dynamic fields for
> > PMD outside of the standard ethdev offload flags?
> >
> > It is not mandatory to use the ethdev layer to register a dynamic field or flag
> > in the mbuf. It is just the typical use case.
> >
> > It can also be enabled when using a library that have specific needs, for
> > instance, you call rte_reorder_init(), and it will register the sequence number
> > dynamic field.
> >
> > An application that requires a specific mbuf field can also do the registration
> > by itself.
> >
> > In other words, when you initialize a subpart that needs a dynamic field or
> > flag, you have to do the registration there.
> >
>
> I guess my question mainly targets one of the use cases for dynamic mbuf fields, which is vendor-specific offloads.
> In such a case we would like to have dynamic fields/flags negotiated between the application and the PMD.
>
> The question is whether we provide a unified way for the application to query PMD-specific dynamic fields, or we let PMD vendors implement this handshake as they wish (devargs, through PMD doc, etc.).
I have no strong opinion. It can be a PMD-specific API (function or
devargs) to enable the feature.
The only important thing is to not register the field if it won't be
used.
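The pattern described here - register only when the feature is actually going to be used - can be sketched as follows. This is a self-contained mock: `my_dynfield_register()` stands in for rte_mbuf_dynfield_register(), and rte_reorder_init() registering a sequence number field is the real-world analogue mentioned earlier in the thread.

```c
#include <assert.h>

static int reorder_seqn_offset = -1;	/* -1 until the feature is enabled */

/* Stand-in for rte_mbuf_dynfield_register(); always grants offset 64. */
static int
my_dynfield_register(void)
{
	return 64;
}

/* Library init path: mbuf space is reserved here, and only here, so
 * applications that never call this pay nothing. Registration is
 * idempotent: a second call just reuses the stored offset. */
static int
my_reorder_init(void)
{
	if (reorder_seqn_offset < 0) {
		reorder_seqn_offset = my_dynfield_register();
		if (reorder_seqn_offset < 0)
			return -1;	/* no room left in the mbuf */
	}
	return 0;
}
```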
* Re: [dpdk-dev] [PATCH v8 00/13] vhost packed ring performance optimization
2019-10-21 22:08 3% ` [dpdk-dev] [PATCH v8 " Marvin Liu
@ 2019-10-24 6:49 0% ` Maxime Coquelin
2019-10-24 7:18 0% ` Liu, Yong
2019-10-24 16:08 3% ` [dpdk-dev] [PATCH v9 " Marvin Liu
1 sibling, 1 reply; 200+ results
From: Maxime Coquelin @ 2019-10-24 6:49 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie, zhihong.wang, stephen, gavin.hu; +Cc: dev
I get some checkpatch warnings, and build fails with clang.
Could you please fix these issues and send v9?
Thanks,
Maxime
### [PATCH] vhost: try to unroll for each loop
WARNING:CAMELCASE: Avoid CamelCase: <_Pragma>
#78: FILE: lib/librte_vhost/vhost.h:47:
+#define vhost_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll
4") \
ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
parenthesis
#78: FILE: lib/librte_vhost/vhost.h:47:
+#define vhost_for_each_try_unroll(iter, val, size) _Pragma("GCC unroll
4") \
+ for (iter = val; iter < size; iter++)
ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
parenthesis
#83: FILE: lib/librte_vhost/vhost.h:52:
+#define vhost_for_each_try_unroll(iter, val, size) _Pragma("unroll 4") \
+ for (iter = val; iter < size; iter++)
ERROR:COMPLEX_MACRO: Macros with complex values should be enclosed in
parenthesis
#88: FILE: lib/librte_vhost/vhost.h:57:
+#define vhost_for_each_try_unroll(iter, val, size) _Pragma("unroll (4)") \
+ for (iter = val; iter < size; iter++)
total: 3 errors, 1 warnings, 67 lines checked
0/1 valid patch

/tmp/dpdk_build/lib/librte_vhost/virtio_net.c:2065:1:
error: unused function 'free_zmbuf' [-Werror,-Wunused-function]
free_zmbuf(struct vhost_virtqueue *vq)
^
1 error generated.
make[5]: *** [virtio_net.o] Error 1
make[4]: *** [librte_vhost] Error 2
make[4]: *** Waiting for unfinished jobs....
make[3]: *** [lib] Error 2
make[2]: *** [all] Error 2
make[1]: *** [pre_install] Error 2
make: *** [install] Error 2
On 10/22/19 12:08 AM, Marvin Liu wrote:
> Packed ring has more compact ring format and thus can significantly
> reduce the number of cache miss. It can lead to better performance.
> This has been approved in virtio user driver, on normal E5 Xeon cpu
> single core performance can raise 12%.
>
> http://mails.dpdk.org/archives/dev/2018-April/095470.html
>
> However vhost performance with packed ring performance was decreased.
> Through analysis, mostly extra cost was from the calculating of each
> descriptor flag which depended on ring wrap counter. Moreover, both
> frontend and backend need to write same descriptors which will cause
> cache contention. Especially when doing vhost enqueue function, virtio
> refill packed ring function may write same cache line when vhost doing
> enqueue function. This kind of extra cache cost will reduce the benefit
> of reducing cache misses.
>
> For optimizing vhost packed ring performance, vhost enqueue and dequeue
> function will be split into fast and normal paths.
>
> Several methods will be taken in fast path:
> Handle descriptors in one cache line by batch.
> Split loop function into more pieces and unroll them.
> Prerequisite check that whether I/O space can copy directly into mbuf
> space and vice versa.
> Prerequisite check that whether descriptor mapping is successful.
> Distinguish vhost used ring update function by enqueue and dequeue
> function.
> Buffer dequeue used descriptors as many as possible.
> Update enqueue used descriptors by cache line.
>
> After all these methods done, single core vhost PvP performance with 64B
> packet on Xeon 8180 can boost 35%.
>
> v8:
> - Allocate mbuf by virtio_dev_pktmbuf_alloc
>
> v7:
> - Rebase code
> - Rename unroll macro and definitions
> - Calculate flags when doing single dequeue
>
> v6:
> - Fix dequeue zcopy result check
>
> v5:
> - Remove disable sw prefetch as performance impact is small
> - Change unroll pragma macro format
> - Rename shadow counter elements names
> - Clean dequeue update check condition
> - Add inline functions replace of duplicated code
> - Unify code style
>
> v4:
> - Support meson build
> - Remove memory region cache for no clear performance gain and ABI break
> - Not assume ring size is power of two
>
> v3:
> - Check available index overflow
> - Remove dequeue remained descs number check
> - Remove changes in split ring datapath
> - Call memory write barriers once when updating used flags
> - Rename some functions and macros
> - Code style optimization
>
> v2:
> - Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
> - Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
> - Optimize dequeue used ring update when in_order negotiated
>
>
> Marvin Liu (13):
> vhost: add packed ring indexes increasing function
> vhost: add packed ring single enqueue
> vhost: try to unroll for each loop
> vhost: add packed ring batch enqueue
> vhost: add packed ring single dequeue
> vhost: add packed ring batch dequeue
> vhost: flush enqueue updates by cacheline
> vhost: flush batched enqueue descs directly
> vhost: buffer packed ring dequeue updates
> vhost: optimize packed ring enqueue
> vhost: add packed ring zcopy batch and single dequeue
> vhost: optimize packed ring dequeue
> vhost: optimize packed ring dequeue when in-order
>
> lib/librte_vhost/Makefile | 18 +
> lib/librte_vhost/meson.build | 7 +
> lib/librte_vhost/vhost.h | 57 ++
> lib/librte_vhost/virtio_net.c | 948 +++++++++++++++++++++++++++-------
> 4 files changed, 837 insertions(+), 193 deletions(-)
>
* Re: [dpdk-dev] [PATCH v2] ethdev: extend flow metadata
@ 2019-10-24 6:49 3% ` Slava Ovsiienko
2019-10-24 9:22 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Slava Ovsiienko @ 2019-10-24 6:49 UTC (permalink / raw)
To: Olivier Matz; +Cc: dev, Matan Azrad, Raslan Darawsheh, Thomas Monjalon
Hi, Olivier
> > [snip]
> >
> > > > +int
> > > > +rte_flow_dynf_metadata_register(void)
> > > > +{
> > > > + int offset;
> > > > + int flag;
> > > > +
> > > > + static const struct rte_mbuf_dynfield desc_offs = {
> > > > + .name = MBUF_DYNF_METADATA_NAME,
> > > > + .size = MBUF_DYNF_METADATA_SIZE,
> > > > + .align = MBUF_DYNF_METADATA_ALIGN,
> > > > + .flags = MBUF_DYNF_METADATA_FLAGS,
> > > > + };
> > > > + static const struct rte_mbuf_dynflag desc_flag = {
> > > > + .name = MBUF_DYNF_METADATA_NAME,
> > > > + };
> > >
> > > I don't see think we need #defines.
> > > You can directly use the name, sizeof() and __alignof__() here.
> > > If the information is used externally, the structure shall be made
> > > global non- static.
> >
> > The intention was to gather all dynamic fields definitions in one
> > place (in rte_mbuf_dyn.h).
>
> If the dynamic field is only going to be used inside rte_flow, I think there is no
> need to expose it in rte_mbuf_dyn.h.
> The other reason is I think the #define are just "passthrough", and do not
> really bring added value, just an indirection.
>
> > It would be easy to see all fields at a glance (some might be shared,
> > some might be mutually exclusive, estimate the mbuf space required by
> > various features, etc.). So, we can't just fill structure fields with
> > simple sizeof() and alignof() instead of definitions (the field
> > parameters must be defined once).
> >
> > I do not see the reasons to make table global. I would prefer the
> definitions.
> > - the definitions are compile time processing (table fields are
> > runtime), it provides code optimization and better performance.
>
> There is indeed no need to make the table global if the field is private to
> rte_flow. About better performance, my understanding is that it would only
> impact registration, am I missing something?
OK, I thought about allowing the application to register the
field directly, bypassing rte_flow_dynf_metadata_register(). So either
the definitions or the field description table was supposed to be global.
I agree, let's not complicate the matter. I will make global the
metadata field name definition only - in rte_mbuf_dyn.h - just
to have some centralizing point.
> >
> > > > +
> > > > + offset = rte_mbuf_dynfield_register(&desc_offs);
> > > > + if (offset < 0)
> > > > + goto error;
> > > > + flag = rte_mbuf_dynflag_register(&desc_flag);
> > > > + if (flag < 0)
> > > > + goto error;
> > > > + rte_flow_dynf_metadata_offs = offset;
> > > > + rte_flow_dynf_metadata_mask = (1ULL << flag);
> > > > + return 0;
> > > > +
> > > > +error:
> > > > + rte_flow_dynf_metadata_offs = -1;
> > > > + rte_flow_dynf_metadata_mask = 0ULL;
> > > > + return -rte_errno;
> > > > +}
> > > > +
> > > > static int
> > > > flow_err(uint16_t port_id, int ret, struct rte_flow_error *error)
> > > > { diff --git a/lib/librte_ethdev/rte_flow.h
> > > > b/lib/librte_ethdev/rte_flow.h index 391a44a..a27e619 100644
> > > > --- a/lib/librte_ethdev/rte_flow.h
> > > > +++ b/lib/librte_ethdev/rte_flow.h
> > > > @@ -27,6 +27,8 @@
> > > > #include <rte_udp.h>
> > > > #include <rte_byteorder.h>
> > > > #include <rte_esp.h>
> > > > +#include <rte_mbuf.h>
> > > > +#include <rte_mbuf_dyn.h>
> > > >
> > > > #ifdef __cplusplus
> > > > extern "C" {
> > > > @@ -417,7 +419,8 @@ enum rte_flow_item_type {
> > > > /**
> > > > * [META]
> > > > *
> > > > - * Matches a metadata value specified in mbuf metadata field.
> > > > + * Matches a metadata value.
> > > > + *
> > > > * See struct rte_flow_item_meta.
> > > > */
> > > > RTE_FLOW_ITEM_TYPE_META,
> > > > @@ -1213,9 +1216,17 @@ struct
> rte_flow_item_icmp6_nd_opt_tla_eth {
> > > > #endif
> > > >
> > > > /**
> > > > - * RTE_FLOW_ITEM_TYPE_META.
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > > + notice
> > > > *
> > > > - * Matches a specified metadata value.
> > > > + * RTE_FLOW_ITEM_TYPE_META
> > > > + *
> > > > + * Matches a specified metadata value. On egress, metadata can be
> > > > + set either by
> > > > + * mbuf tx_metadata field with PKT_TX_METADATA flag or
> > > > + * RTE_FLOW_ACTION_TYPE_SET_META. On ingress,
> > > > + RTE_FLOW_ACTION_TYPE_SET_META sets
> > > > + * metadata for a packet and the metadata will be reported via
> > > > + mbuf metadata
> > > > + * dynamic field with PKT_RX_DYNF_METADATA flag. The dynamic
> mbuf
> > > > + field must be
> > > > + * registered in advance by rte_flow_dynf_metadata_register().
> > > > */
> > > > struct rte_flow_item_meta {
> > > > rte_be32_t data;
> > > > @@ -1813,6 +1824,13 @@ enum rte_flow_action_type {
> > > > * undefined behavior.
> > > > */
> > > > RTE_FLOW_ACTION_TYPE_DEC_TCP_ACK,
> > > > +
> > > > + /**
> > > > + * Set metadata on ingress or egress path.
> > > > + *
> > > > + * See struct rte_flow_action_set_meta.
> > > > + */
> > > > + RTE_FLOW_ACTION_TYPE_SET_META,
> > > > };
> > > >
> > > > /**
> > > > @@ -2300,6 +2318,43 @@ struct rte_flow_action_set_mac {
> > > > uint8_t mac_addr[RTE_ETHER_ADDR_LEN]; };
> > > >
> > > > +/**
> > > > + * @warning
> > > > + * @b EXPERIMENTAL: this structure may change without prior
> > > > +notice
> > > > + *
> > > > + * RTE_FLOW_ACTION_TYPE_SET_META
> > > > + *
> > > > + * Set metadata. Metadata set by mbuf tx_metadata field with
> > > > + * PKT_TX_METADATA flag on egress will be overridden by this action.
> > > > +On
> > > > + * ingress, the metadata will be carried by mbuf metadata dynamic
> > > > +field
> > > > + * with PKT_RX_DYNF_METADATA flag if set. The dynamic mbuf field
> > > > +must be
> > > > + * registered in advance by rte_flow_dynf_metadata_register().
> > > > + *
> > > > + * Altering partial bits is supported with mask. For bits which
> > > > +have never
> > > > + * been set, unpredictable value will be seen depending on driver
> > > > + * implementation. For loopback/hairpin packet, metadata set on
> > > > +Rx/Tx may
> > > > + * or may not be propagated to the other path depending on HW
> > > capability.
> > > > + *
> > > > + * RTE_FLOW_ITEM_TYPE_META matches metadata.
> > > > + */
> > > > +struct rte_flow_action_set_meta {
> > > > + rte_be32_t data;
> > > > + rte_be32_t mask;
> > > > +};
> > > > +
> > > > +/* Mbuf dynamic field offset for metadata. */ extern int
> > > > +rte_flow_dynf_metadata_offs;
> > > > +
> > > > +/* Mbuf dynamic field flag mask for metadata. */ extern uint64_t
> > > > +rte_flow_dynf_metadata_mask;
> > > > +
> > > > +/* Mbuf dynamic field pointer for metadata. */ #define
> > > > +RTE_FLOW_DYNF_METADATA(m) \
> > > > + RTE_MBUF_DYNFIELD((m), rte_flow_dynf_metadata_offs, uint32_t
> > > *)
> > > > +
> > > > +/* Mbuf dynamic flag for metadata. */ #define
> > > > +PKT_RX_DYNF_METADATA
> > > > +(rte_flow_dynf_metadata_mask)
> > > > +
> > >
> > > I wonder if helpers like this wouldn't be better, because they
> > > combine the flag and the field:
> > >
> > > /**
> > > * Set metadata dynamic field and flag in mbuf.
> > > *
> > > * rte_flow_dynf_metadata_register() must have been called first.
> > > */
> > > __rte_experimental
> > > static inline void rte_mbuf_dyn_metadata_set(struct rte_mbuf *m,
> > > uint32_t metadata) {
> > > *RTE_MBUF_DYNFIELD(m, rte_flow_dynf_metadata_offs,
> > > uint32_t *) = metadata;
> > > m->ol_flags |= rte_flow_dynf_metadata_mask; }
> > Setting the flag looks redundant.
> > What if the driver just replaces the metadata and the flag is already set?
> > The other option - the flags (for a set of fields) might be set in combinations.
> > The mbuf field is supposed to be engaged in the datapath, where performance is
> > very critical, so adding one more abstraction layer does not seem relevant.
>
> Ok, that was just a suggestion. Let's use your accessors if you fear a
> performance impact.
A simple example - the mlx5 PMD has its rx_burst routine implemented
with vector instructions, and it processes four packets at once. There is
no need to check field availability four times, and storing the metadata
is subject to further optimization with vector instructions.
It is a bit difficult to provide common helpers to handle the metadata
field due to the extremely high optimization requirements.
>
> Nevertheless I suggest to use static inline functions in place of macros if
> possible. For RTE_MBUF_DYNFIELD(), I used a macro because it's the only
> way to provide a type to cast the result. But in your case, you know it's a
> uint32_t *.
What if one needs to specify the address of the field? The macro allows that;
inline functions do not. Packets may be processed in unusual ways,
for example in a batch, with vector instructions. OK, I'll provide
the set/get routines, but I'm not sure whether I will use them in the mlx5 code.
In my opinion it just obscures the nature of the field. A field is just a field;
AFAIU that is the main idea of your patch - the way to handle a dynamic field
should be close to handling the usual static fields. The macro pointer follows
this approach; the routines do not.
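The batch case argued above can be sketched like this (the simplified mbuf and fixed offset are assumptions; a vectorized PMD would replace the scalar loop with vector stores over the four addresses):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))

struct rte_mbuf { uint64_t pad[16]; };	/* simplified stand-in mbuf */

static const int meta_offset = 32;	/* assumed registered offset */

/* Accessor style: one packet at a time, field hidden behind set/get. */
static void
meta_set(struct rte_mbuf *m, uint32_t v)
{
	*RTE_MBUF_DYNFIELD(m, meta_offset, uint32_t *) = v;
}

static uint32_t
meta_get(const struct rte_mbuf *m)
{
	return *RTE_MBUF_DYNFIELD(m, meta_offset, uint32_t *);
}

/* Macro-pointer style: the offset is resolved once for the whole batch,
 * and the four stores are a natural target for vector instructions. */
static void
meta_set_batch4(struct rte_mbuf **pkts, const uint32_t *v)
{
	int i;

	for (i = 0; i < 4; i++)
		*RTE_MBUF_DYNFIELD(pkts[i], meta_offset, uint32_t *) = v[i];
}
```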
> > Also, metadata is not a feature of mbuf. It should have an rte_flow prefix.
>
> Yes, sure. The example derives from a test I've done, and I forgot to change
> it.
>
>
> > > /**
> > > * Get metadata dynamic field value in mbuf.
> > > *
> > > * rte_flow_dynf_metadata_register() must have been called first.
> > > */
> > > __rte_experimental
> > > static inline int rte_mbuf_dyn_metadata_get(const struct rte_mbuf *m,
> > > uint32_t *metadata) {
> > > if ((m->ol_flags & rte_flow_dynf_metadata_mask) == 0)
> > > return -1;
> > What if metadata is 0xFFFFFFFF ?
> > The availability check might cover a larger code block, so this
> > might not be the best place to check availability.
> >
> > > *metadata = *RTE_MBUF_DYNFIELD(m,
> rte_flow_dynf_metadata_offs,
> > > uint32_t *);
> > > return 0;
> > > }
> > >
> > > /**
> > > * Delete the metadata dynamic flag in mbuf.
> > > *
> > > * rte_flow_dynf_metadata_register() must have been called first.
> > > */
> > > __rte_experimental
> > > static inline void rte_mbuf_dyn_metadata_del(struct rte_mbuf *m) {
> > > m->ol_flags &= ~rte_flow_dynf_metadata_mask; }
> > >
> > Sorry, I do not see the practical use case for these helpers. In my opinion
> > it is just some kind of obfuscation.
> > They do replace very simple code and introduce some risk of
> > performance impact.
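Slava's 0xFFFFFFFF concern above can be made concrete with a small sketch (invented names, not DPDK code): a getter that returns the value directly has no sentinel left to signal "field not present", because every 32-bit pattern, including UINT32_MAX, is a legal metadata value; the status-plus-out-parameter form quoted earlier keeps the two cases distinct:

```c
#include <assert.h>
#include <stdint.h>

/* Invented dynamic-flag bit: marks that metadata is valid for a packet. */
#define TOY_DYNF_METADATA (1ULL << 40)

struct toy_mbuf {
	uint64_t ol_flags;
	uint32_t metadata;	/* stands in for the registered dynfield */
};

/* Ambiguous: any sentinel collides with a legal metadata value. */
static inline uint32_t toy_md_get_ambiguous(const struct toy_mbuf *m)
{
	if (!(m->ol_flags & TOY_DYNF_METADATA))
		return UINT32_MAX; /* indistinguishable from real 0xFFFFFFFF */
	return m->metadata;
}

/* Unambiguous: status code plus out-parameter. */
static inline int toy_md_get(const struct toy_mbuf *m, uint32_t *out)
{
	if (!(m->ol_flags & TOY_DYNF_METADATA))
		return -1;
	*out = m->metadata;
	return 0;
}
```

With the flag cleared, `toy_md_get_ambiguous()` returns the same value as a packet legitimately carrying 0xFFFFFFFF, which is exactly the ambiguity being pointed out.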
> >
> > >
> > > > /*
> > > > * Definition of a single action.
> > > > *
> > > > @@ -2533,6 +2588,32 @@ enum rte_flow_conv_op { };
> > > >
> > > > /**
> > > > + * Check if mbuf dynamic field for metadata is registered.
> > > > + *
> > > > + * @return
> > > > + * True if registered, false otherwise.
> > > > + */
> > > > +__rte_experimental
> > > > +static inline int
> > > > +rte_flow_dynf_metadata_avail(void) {
> > > > + return !!rte_flow_dynf_metadata_mask; }
> > >
> > > _registered() instead of _avail() ?
> > Accepted, sounds better.
Hmm, I changed my opinion - we already have
rte_flow_dynf_metadata_register(void). Is it OK to have
rte_flow_dynf_metadata_registerED(void) ?
It would be easy to mistype.
> >
> > >
> > > > +
> > > > +/**
> > > > + * Register mbuf dynamic field and flag for metadata.
> > > > + *
> > > > + * This function must be called prior to use SET_META action in
> > > > +order to
> > > > + * register the dynamic mbuf field. Otherwise, the data cannot be
> > > > +delivered to
> > > > + * application.
> > > > + *
> > > > + * @return
> > > > + * 0 on success, a negative errno value otherwise and rte_errno is
> set.
> > > > + */
> > > > +__rte_experimental
> > > > +int
> > > > +rte_flow_dynf_metadata_register(void);
> > > > +
> > > > +/**
> > > > * Check whether a flow rule can be created on a given port.
> > > > *
> > > > * The flow rule is validated for correctness and whether it
> > > > could be accepted diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > b/lib/librte_mbuf/rte_mbuf_dyn.h index 6e2c816..4ff33ac 100644
> > > > --- a/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> > > > @@ -160,4 +160,12 @@ int rte_mbuf_dynflag_lookup(const char
> *name,
> > > > */
> > > > #define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m)
> > > > +
> > > > (offset)))
> > > >
> > > > +/**
> > > > + * Flow metadata dynamic field definitions.
> > > > + */
> > > > +#define MBUF_DYNF_METADATA_NAME "flow-metadata"
> > > > +#define MBUF_DYNF_METADATA_SIZE sizeof(uint32_t) #define
> > > > +MBUF_DYNF_METADATA_ALIGN __alignof__(uint32_t) #define
> > > > +MBUF_DYNF_METADATA_FLAGS 0
> > >
> > > If this flag is only to be used in rte_flow, it can stay in rte_flow.
> > > The name should follow the function name conventions, I suggest
> > > "rte_flow_metadata".
> >
> > The definitions:
> > MBUF_DYNF_METADATA_NAME,
> > MBUF_DYNF_METADATA_SIZE,
> > MBUF_DYNF_METADATA_ALIGN
> > are global. rte_flow proposes only a minimal set to check and access
> > the metadata. By knowing the field names applications would have the
> > more flexibility in processing the fields, for example it allows to
> > optimize the handling of multiple dynamic fields . The definition of
> > metadata size allows to generate optimized code:
> > #if MBUF_DYNF_METADATA_SIZE == sizeof(uint32)
> > *RTE_MBUF_DYNFIELD(m) = get_metadata_32bit() #else
> > *RTE_MBUF_DYNFIELD(m) = get_metadata_64bit() #endif
>
> I don't see any reason why the same dynamic field could have different sizes,
> I even think it could be dangerous. Your accessors suppose that the
> metadata is a uint32_t. Having a compile-time option for that does not look
> desirable.
I tried to provide maximal flexibility, and it was just an example of the things
we could do with global definitions. If you think we do not need it - OK,
let's do things simpler.
>
> Just a side note: we have to take care when adding a new *public* dynamic
> field that it won't change in the future: the accessors are macros or static
> inline functions, so they are embedded in the binaries.
> This is probably something we should discuss and may not be when updating
> the dpdk (as shared lib).
Yes, agree, the defines will not work in a correct way and may even break the ABI.
As we decided, the global metadata defines MBUF_DYNF_METADATA_xxxx
should be removed.
>
> > MBUF_DYNF_METADATA_FLAGS flag is not used by rte_flow, this flag is
> > related exclusively to dynamic mbuf " Reserved for future use, must be 0".
> > Would you like to drop this definition?
> >
> > >
> > > If the flag is going to be used in several places in dpdk (rte_flow,
> > > pmd, app, ...), I wonder if it shouldn't be defined it in rte_mbuf_dyn.c. I
> mean:
> > >
> > > ====
> > > /* rte_mbuf_dyn.c */
> > > const struct rte_mbuf_dynfield rte_mbuf_dynfield_flow_metadata = {
> > > ...
> > > };
> > In this case we would make this descriptor global.
> > It is not needed, because no usage is supposed other than by
> > rte_flow_dynf_metadata_register() only. The
>
> Yes, in my example I wasn't sure it was going to be private to rte_flow (see
> "If the flag is going to be used in several places in dpdk (rte_flow, pmd, app,
> ...)").
>
> So yes, I agree the struct should remain private.
OK.
>
>
> > > int rte_mbuf_dynfield_flow_metadata_offset = -1; const struct
> > > rte_mbuf_dynflag rte_mbuf_dynflag_flow_metadata = {
> > > ...
> > > };
> > > int rte_mbuf_dynflag_flow_metadata_bitnum = -1;
> > >
> > > int rte_mbuf_dyn_flow_metadata_register(void)
> > > {
> > > ...
> > > }
> > >
> > > /* rte_mbuf_dyn.h */
> > > extern const struct rte_mbuf_dynfield
> > > rte_mbuf_dynfield_flow_metadata; extern int
> > > rte_mbuf_dynfield_flow_metadata_offset;
> > > extern const struct rte_mbuf_dynflag rte_mbuf_dynflag_flow_metadata;
> > > extern int rte_mbuf_dynflag_flow_metadata_bitnum;
> > >
> > > ...helpers to set/get metadata...
> > > ===
> > >
> > > Centralizing the definitions of non-private dynamic fields/flags in
> > > rte_mbuf_dyn may help other people to reuse a field that is well
> > > described if it match their use-case.
> >
> > Yes, centralizing is important, that's why MBUF_DYNF_METADATA_xxx
> > placed in rte_mbuf_dyn.h. Do you think we should share the descriptors
> > as well?
> > I have no idea why someone (but rte_flow_dynf_metadata_register())
> > might register metadata field directly.
>
> If the field is private to rte_flow, yes, there is no need to share the "struct
> rte_mbuf_dynfield". Even the rte_flow_dynf_metadata_register() could be
> marked as internal, right?
rte_flow_dynf_metadata_register() is intended to be called by the application.
Some applications might wish to engage the metadata feature, some not.
>
> One more question: I see the registration is done by
> parse_vc_action_set_meta(). My understanding is that this function is not in
> datapath, and is called when configuring rte_flow. Do you confirm?
Rather, it is called to configure the application in general. If the user sets
metadata (by issuing the appropriate command), it is assumed he/she would like
the metadata on the Rx side as well. This is just for test purposes and is not
a brilliant example of an rte_flow_dynf_metadata_register() use case.
>
> > > In your case, what is carried by metadata? Could it be reused by
> > > others? I think some more description is needed.
> > In my case, metadata is just an opaque rte_flow-related 32-bit unsigned
> > value provided by mlx5 hardware in the Rx datapath. I have no guess
> > whether someone wishes to reuse it.
>
> What is the user supposed to do with this value? If it is hw-specific data, I
> think the name of the mbuf field should include "MLX", and it should be
> described.
Metadata are not HW-specific at all: they neither control nor are produced
by the HW (abstracting from the fact that the flow engine is implemented in HW).
Metadata are opaque data, a kind of link between the datapath
and the flow space. With metadata, an application may provide some per-packet
information to the flow engine and get back some information from the flow engine.
It is a generic concept, supposed to be neither HW-related nor vendor-specific.
>
> Are these rte_flow actions somehow specific to mellanox drivers ?
AFAIK, currently it is going to be supported by the mlx5 PMD only,
but the concept is common and is not vendor-specific.
>
> > Brief summary of you comment (just to make sure I understood your
> proposal in correct way):
> > 1. drop all definitions MBUF_DYNF_METADATA_xxx, leave
> > MBUF_DYNF_METADATA_NAME only 2. move the descriptor const struct
> > rte_mbuf_dynfield desc_offs = {} to rte_mbuf_dyn.c and make it global
> > 3. provide helpers to access metadata
> >
> > [1] and [2] look OK in general. Although I think these ones make code less
> flexible, restrict the potential compile time options.
> > For now it is rather theoretical question, if you insist on your
> > approach - please, let me know, I'll address [1] and [2] and update.my
> patch.
>
> [1] I think the #define only adds an indirection, and I didn't see any
> perf constraint here.
> [2] My previous comment was surely not clear, sorry. The code can stay
> in rte_flow.
>
> > As for [3] - IMHO, the extra abstraction layer is not useful, and might be
> even harmful.
> > I tend not to complicate the code, at least, for now.
>
> [3] ok for me
>
>
> Thanks,
> Olivier
With best regards, Slava
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-23 13:33 0% ` Olivier Matz
@ 2019-10-24 4:54 0% ` Shahaf Shuler
2019-10-24 7:07 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Shahaf Shuler @ 2019-10-24 4:54 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
Wednesday, October 23, 2019 4:34 PM, Olivier Matz:
> Subject: Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
>
> Hi Shahaf,
>
> On Wed, Oct 23, 2019 at 12:00:30PM +0000, Shahaf Shuler wrote:
> > Hi Olivier,
> >
> > Thursday, October 17, 2019 5:42 PM, Olivier Matz:
> > > Subject: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and
> > > flags
> > >
> > > Many features require to store data inside the mbuf. As the room in
> > > mbuf structure is limited, it is not possible to have a field for
> > > each feature. Also, changing fields in the mbuf structure can break the
> API or ABI.
> > >
> > > This commit addresses these issues, by enabling the dynamic
> > > registration of fields or flags:
> > >
> > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > given size (>= 1 byte) and alignment constraint.
> > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > >
> > > The typical use case is a PMD that registers space for an offload
> > > feature, when the application requests to enable this feature. As
> > > the space in mbuf is limited, the space should only be reserved if
> > > it is going to be used (i.e when the application explicitly asks for it).
> >
> > According to description, the dynamic field enables custom application and
> supported PMDs to use the dynamic part of the mbuf for their specific
> needs.
> > However the mechanism to report and activate the field/flag registration
> comes from the general OFFLOAD flags.
> >
> > Maybe it will be better to an option to query and select dynamic fields for
> PMD outside of the standard ethdev offload flags?
>
> It is not mandatory to use the ethdev layer to register a dynamic field or flag
> in the mbuf. It is just the typical use case.
>
> It can also be enabled when using a library that have specific needs, for
> instance, you call rte_reorder_init(), and it will register the sequence number
> dynamic field.
>
> An application that requires a specific mbuf field can also do the registration
> by itself.
>
> In other words, when you initialize a subpart that needs a dynamic field or
> flag, you have to do the registration there.
>
I guess my question mainly targets one of the use cases for dynamic mbuf fields, which is vendor-specific offloads.
In such a case we would like to have the dynamic fields/flags negotiated between the application and the PMD.
The question is whether we provide a unified way for the application to query PMD-specific dynamic fields, or let PMD vendors implement this handshake as they wish (devargs, through PMD doc, etc.)
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v6 2/4] doc: changes to abi policy introducing major abi versions
2019-10-15 15:11 5% ` David Marchand
@ 2019-10-24 0:43 11% ` Thomas Monjalon
2019-10-25 9:10 5% ` Ray Kinsella
2019-10-25 12:45 10% ` Ray Kinsella
1 sibling, 2 replies; 200+ results
From: Thomas Monjalon @ 2019-10-24 0:43 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
27/09/2019 18:54, Ray Kinsella:
> This policy change introduces major ABI versions, these are
> declared every year, typically aligned with the LTS release
> and are supported by subsequent releases in the following year.
No, the ABI number may stand for more than one year.
> This change is intended to improve ABI stability for those projects
> consuming DPDK.
>
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> doc/guides/contributing/abi_policy.rst | 321 +++++++++++++++------
> .../contributing/img/abi_stability_policy.png | Bin 0 -> 61277 bytes
> doc/guides/contributing/img/what_is_an_abi.png | Bin 0 -> 151683 bytes
As an Open Source project, binary files are rejected :)
Please provide the image source as SVG if the diagram is really required.
[...]
> +#. Major ABI versions are declared every **year** and are then supported for one
> + year, typically aligned with the :ref:`LTS release <stable_lts_releases>`.
As discussed on the cover letter, please avoid making "every year" cadence, the rule.
> +#. The ABI version is managed at a project level in DPDK, with the ABI version
> + reflected in all :ref:`library's soname <what_is_soname>`.
Should we make clear here that an experimental ABI change has no impact
on the ABI version number?
> +#. The ABI should be preserved and not changed lightly. ABI changes must follow
> + the outlined :ref:`deprecation process <abi_changes>`.
> +#. The addition of symbols is generally not problematic. The modification of
> + symbols is managed with :ref:`ABI Versioning <abi_versioning>`.
> +#. The removal of symbols is considered an :ref:`ABI breakage <abi_breakages>`,
> + once approved these will form part of the next ABI version.
> +#. Libraries or APIs marked as :ref:`Experimental <experimental_apis>` are not
> + considered part of an ABI version and may change without constraint.
> +#. Updates to the :ref:`minimum hardware requirements <hw_rqmts>`, which drop
> + support for hardware which was previously supported, should be treated as an
> + ABI change.
> +
> +.. note::
> +
> + In 2019, the DPDK community stated it's intention to move to ABI stable
> + releases, over a number of release cycles. Beginning with maintaining ABI
> + stability through one year of DPDK releases starting from DPDK 19.11.
There is no verb in this sentence.
> + This
> + policy will be reviewed in 2020, with intention of lengthening the stability
> + period.
> +What is an ABI version?
> +~~~~~~~~~~~~~~~~~~~~~~~
> +
> +An ABI version is an instance of a library's ABI at a specific release. Certain
> +releases are considered by the community to be milestone releases, the yearly
> +LTS for example. Supporting those milestone release's ABI for some number of
> +subsequent releases is desirable to facilitate application upgrade. Those ABI
> +version's aligned with milestones release are therefore called 'ABI major
> +versions' and are supported for some number of releases.
If you understand this paragraph, please raise your hand :)
> +More details on major ABI version can be found in the :ref:`ABI versioning
> +<major_abi_versions>` guide.
>
> The DPDK ABI policy
> -~~~~~~~~~~~~~~~~~~~
> +-------------------
> +
> +A major ABI version is declared every year, aligned with that year's LTS
> +release, e.g. v19.11. This ABI version is then supported for one year by all
> +subsequent releases within that time period, until the next LTS release, e.g.
> +v20.11.
Again, the "one year" limit should not be documented as a general rule.
> +At the declaration of a major ABI version, major version numbers encoded in
> +libraries soname's are bumped to indicate the new version, with the minor
> +version reset to ``0``. An example would be ``librte_eal.so.20.3`` would become
> +``librte_eal.so.21.0``.
>
> +The ABI may then change multiple times, without warning, between the last major
> +ABI version increment and the HEAD label of the git tree, with the condition
> +that ABI compatibility with the major ABI version is preserved and therefore
> +soname's do not change.
>
> +Minor versions are incremented to indicate the release of a new ABI compatible
> +DPDK release, typically the DPDK quarterly releases. An example of this, might
> +be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
> +release, following the declaration of the new major ABI version ``20``.
I don't understand the benefit of having a minor ABI version number.
Can we just have v20 and v21 as we discussed in the techboard?
Is it because an application linked with v20.2 cannot work with v20.1?
If we must have a minor number, I suggest a numbering closer to release numbers:
release 19.11 -> ABI 19.11
release 20.02 -> ABI 19.14
release 20.05 -> ABI 19.17
release 20.08 -> ABI 19.20
It shows the month number as if the first year never finishes.
And when a new ABI is declared, release and ABI versions are the same:
release 20.11 -> ABI 20.11
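The numbering suggested above can be captured in a tiny sketch (a hypothetical helper, not any DPDK API): the ABI major stays at the year in which the major ABI was declared, and the minor is the release month counted as if the first year never finishes:

```c
/* Map a DPDK release y.m to the suggested ABI label, given the year in
 * which the current major ABI was declared (all names are invented). */
static void abi_label(int rel_year, int rel_month, int abi_year,
		      int *abi_major, int *abi_minor)
{
	*abi_major = abi_year;
	*abi_minor = rel_month + 12 * (rel_year - abi_year);
}
```

With abi_year = 19 this reproduces the table above: release 20.02 maps to ABI 19.14, 20.05 to 19.17, and 20.08 to 19.20.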
> +ABI versions, are supported by each release until such time as the next major
> +ABI version is declared. At that time, the deprecation of the previous major ABI
> +version will be noted in the Release Notes with guidance on individual symbol
> deprecation and upgrade notes provided.
I suggest a rewording:
"
An ABI version is supported in all new releases
until the next major ABI version is declared.
When changing the major ABI version,
the release notes give details about all ABI changes.
"
[...]
> + - The acknowledgment of a member of the technical board, as a delegate of the
> + `technical board <https://core.dpdk.org/techboard/>`_ acknowledging the
> + need for the ABI change, is also mandatory.
Only one? What about 3 members minimum?
[...]
> +#. If a newly proposed API functionally replaces an existing one, when the new
> + API becomes non-experimental, then the old one is marked with
> + ``__rte_deprecated``.
> +
> + - The deprecated API should follow the notification process to be removed,
> + see :ref:`deprecation_notices`.
> +
> + - At the declaration of the next major ABI version, those ABI changes then
> + become a formal part of the new ABI and the requirement to preserve ABI
> + compatibility with the last major ABI version is then dropped.
> +
> + - The responsibility for removing redundant ABI compatibility code rests
> + with the original contributor of the ABI changes, failing that, then with
> + the contributor's company and then finally with the maintainer.
Having too many responsible parties looks like nobody is really responsible.
I would tend to think that only the maintainer is responsible,
but he can ask for help.
^ permalink raw reply [relevance 11%]
* Re: [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
` (6 preceding siblings ...)
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 12/12] eal: make the global configuration private David Marchand
@ 2019-10-23 21:10 7% ` Stephen Hemminger
2019-10-24 7:32 4% ` David Marchand
2019-10-24 16:37 4% ` Thomas Monjalon
8 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2019-10-23 21:10 UTC (permalink / raw)
To: David Marchand; +Cc: dev, anatoly.burakov, thomas
On Wed, 23 Oct 2019 20:54:12 +0200
David Marchand <david.marchand@redhat.com> wrote:
> Let's prepare for the ABI freeze.
>
> The first patches are about changes that had been announced before (with
> a patch from Stephen that I took as it is ready as is from my pov).
>
> The malloc_heap structure from the memory subsystem can be hidden.
> The PCI library had some forgotten deprecated APIs that are removed with
> this series.
>
> rte_logs could be hidden, but I am not that confortable about
> doing it right away: I added an accessor to rte_logs.file, but I am fine
> with dropping the last patch and wait for actually hiding this in the next
> ABI break.
19.11 is an api/abi break so maybe do it now.
^ permalink raw reply [relevance 7%]
* [dpdk-dev] [PATCH v2 12/12] eal: make the global configuration private
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
` (5 preceding siblings ...)
2019-10-23 18:54 3% ` [dpdk-dev] [PATCH v2 10/12] eal: deinline lcore APIs David Marchand
@ 2019-10-23 18:54 5% ` David Marchand
2019-10-23 21:10 7% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 Stephen Hemminger
2019-10-24 16:37 4% ` Thomas Monjalon
8 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-23 18:54 UTC (permalink / raw)
To: dev; +Cc: stephen, anatoly.burakov, thomas, John McNamara, Marko Kovacevic
Now that all elements of the rte_config structure have (deinlined)
accessors, we can hide it.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/rel_notes/release_19_11.rst | 3 +++
lib/librte_eal/common/eal_common_mcfg.c | 1 +
lib/librte_eal/common/eal_private.h | 32 ++++++++++++++++++++++++++++++++
lib/librte_eal/common/include/rte_eal.h | 32 --------------------------------
lib/librte_eal/common/malloc_heap.c | 1 +
lib/librte_eal/common/rte_malloc.c | 1 +
lib/librte_eal/rte_eal_version.map | 1 -
7 files changed, 38 insertions(+), 33 deletions(-)
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 082c570..ae0f21e 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -212,6 +212,9 @@ ABI Changes
* eal: made the ``rte_logs`` struct and global symbol private.
+* eal: made the ``rte_config`` struct and ``rte_eal_get_configuration``
+ function private.
+
* pci: removed the following deprecated functions since dpdk:
- ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
diff --git a/lib/librte_eal/common/eal_common_mcfg.c b/lib/librte_eal/common/eal_common_mcfg.c
index 0665494..0cf9a62 100644
--- a/lib/librte_eal/common/eal_common_mcfg.c
+++ b/lib/librte_eal/common/eal_common_mcfg.c
@@ -8,6 +8,7 @@
#include "eal_internal_cfg.h"
#include "eal_memcfg.h"
+#include "eal_private.h"
void
eal_mcfg_complete(void)
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index 0e4b033..52eea9a 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -37,6 +37,38 @@ struct lcore_config {
extern struct lcore_config lcore_config[RTE_MAX_LCORE];
/**
+ * The global RTE configuration structure.
+ */
+struct rte_config {
+ uint32_t master_lcore; /**< Id of the master lcore */
+ uint32_t lcore_count; /**< Number of available logical cores. */
+ uint32_t numa_node_count; /**< Number of detected NUMA nodes. */
+ uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
+ uint32_t service_lcore_count;/**< Number of available service cores. */
+ enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
+
+ /** Primary or secondary configuration */
+ enum rte_proc_type_t process_type;
+
+ /** PA or VA mapping mode */
+ enum rte_iova_mode iova_mode;
+
+ /**
+ * Pointer to memory configuration, which may be shared across multiple
+ * DPDK instances
+ */
+ struct rte_mem_config *mem_config;
+} __attribute__((__packed__));
+
+/**
+ * Get the global configuration structure.
+ *
+ * @return
+ * A pointer to the global configuration structure.
+ */
+struct rte_config *rte_eal_get_configuration(void);
+
+/**
* Initialize the memzone subsystem (private to eal).
*
* @return
diff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h
index ea3c9df..2f9ed29 100644
--- a/lib/librte_eal/common/include/rte_eal.h
+++ b/lib/librte_eal/common/include/rte_eal.h
@@ -52,38 +52,6 @@ enum rte_proc_type_t {
};
/**
- * The global RTE configuration structure.
- */
-struct rte_config {
- uint32_t master_lcore; /**< Id of the master lcore */
- uint32_t lcore_count; /**< Number of available logical cores. */
- uint32_t numa_node_count; /**< Number of detected NUMA nodes. */
- uint32_t numa_nodes[RTE_MAX_NUMA_NODES]; /**< List of detected NUMA nodes. */
- uint32_t service_lcore_count;/**< Number of available service cores. */
- enum rte_lcore_role_t lcore_role[RTE_MAX_LCORE]; /**< State of cores. */
-
- /** Primary or secondary configuration */
- enum rte_proc_type_t process_type;
-
- /** PA or VA mapping mode */
- enum rte_iova_mode iova_mode;
-
- /**
- * Pointer to memory configuration, which may be shared across multiple
- * DPDK instances
- */
- struct rte_mem_config *mem_config;
-} __attribute__((__packed__));
-
-/**
- * Get the global configuration structure.
- *
- * @return
- * A pointer to the global configuration structure.
- */
-struct rte_config *rte_eal_get_configuration(void);
-
-/**
* Get the process type in a multi-process setup
*
* @return
diff --git a/lib/librte_eal/common/malloc_heap.c b/lib/librte_eal/common/malloc_heap.c
index 634ca21..842eb9d 100644
--- a/lib/librte_eal/common/malloc_heap.c
+++ b/lib/librte_eal/common/malloc_heap.c
@@ -27,6 +27,7 @@
#include "eal_internal_cfg.h"
#include "eal_memalloc.h"
#include "eal_memcfg.h"
+#include "eal_private.h"
#include "malloc_elem.h"
#include "malloc_heap.h"
#include "malloc_mp.h"
diff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c
index fecd9a9..044d3a9 100644
--- a/lib/librte_eal/common/rte_malloc.c
+++ b/lib/librte_eal/common/rte_malloc.c
@@ -26,6 +26,7 @@
#include "malloc_heap.h"
#include "eal_memalloc.h"
#include "eal_memcfg.h"
+#include "eal_private.h"
/* Free the memory space back to heap */
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index e7422d4..009641f 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -17,7 +17,6 @@ DPDK_2.0 {
rte_dump_tailq;
rte_eal_alarm_cancel;
rte_eal_alarm_set;
- rte_eal_get_configuration;
rte_eal_get_lcore_state;
rte_eal_get_physmem_size;
rte_eal_has_hugepages;
--
1.8.3.1
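The patch above is an instance of the classic opaque-struct pattern: once every member is reachable only through exported accessors, the struct definition can move to a private header and its layout leaves the visible ABI. A simplified, self-contained illustration (invented names; the real rte_config has more fields than this):

```c
#include <assert.h>

/* --- public header (what applications would see): declaration only --- */
struct app_config;			/* opaque: layout not exposed */
unsigned int app_lcore_count(void);	/* de-inlined accessor */

/* --- library side (the eal_private.h equivalent) --- */
struct app_config {
	unsigned int master_lcore;
	unsigned int lcore_count;	/* fields can now be reordered or
					   extended without breaking apps */
};

static struct app_config app_global_cfg = {
	.master_lcore = 0,
	.lcore_count = 4,
};

unsigned int app_lcore_count(void)
{
	return app_global_cfg.lcore_count;
}
```

An application built against the public declaration never dereferences the struct itself, so changing the private layout later does not require rebuilding it.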
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v2 10/12] eal: deinline lcore APIs
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
` (4 preceding siblings ...)
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 08/12] log: hide internal log structure David Marchand
@ 2019-10-23 18:54 3% ` David Marchand
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 12/12] eal: make the global configuration private David Marchand
` (2 subsequent siblings)
8 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-23 18:54 UTC (permalink / raw)
To: dev; +Cc: stephen, anatoly.burakov, thomas
Those functions are used to setup or take control decisions.
Move them into the EAL common code and put them directly in the stable
ABI.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
lib/librte_eal/common/eal_common_lcore.c | 38 ++++++++++++++++++++++++++++
lib/librte_eal/common/include/rte_lcore.h | 41 +++----------------------------
lib/librte_eal/rte_eal_version.map | 10 ++++++++
3 files changed, 52 insertions(+), 37 deletions(-)
diff --git a/lib/librte_eal/common/eal_common_lcore.c b/lib/librte_eal/common/eal_common_lcore.c
index 38af260..abd2cf8 100644
--- a/lib/librte_eal/common/eal_common_lcore.c
+++ b/lib/librte_eal/common/eal_common_lcore.c
@@ -16,6 +16,16 @@
#include "eal_private.h"
#include "eal_thread.h"
+unsigned int rte_get_master_lcore(void)
+{
+ return rte_eal_get_configuration()->master_lcore;
+}
+
+unsigned int rte_lcore_count(void)
+{
+ return rte_eal_get_configuration()->lcore_count;
+}
+
int rte_lcore_index(int lcore_id)
{
if (unlikely(lcore_id >= RTE_MAX_LCORE))
@@ -43,6 +53,34 @@ rte_cpuset_t rte_lcore_cpuset(unsigned int lcore_id)
return lcore_config[lcore_id].cpuset;
}
+int rte_lcore_is_enabled(unsigned int lcore_id)
+{
+ struct rte_config *cfg = rte_eal_get_configuration();
+
+ if (lcore_id >= RTE_MAX_LCORE)
+ return 0;
+ return cfg->lcore_role[lcore_id] == ROLE_RTE;
+}
+
+unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
+{
+ i++;
+ if (wrap)
+ i %= RTE_MAX_LCORE;
+
+ while (i < RTE_MAX_LCORE) {
+ if (!rte_lcore_is_enabled(i) ||
+ (skip_master && (i == rte_get_master_lcore()))) {
+ i++;
+ if (wrap)
+ i %= RTE_MAX_LCORE;
+ continue;
+ }
+ break;
+ }
+ return i;
+}
+
unsigned int
rte_lcore_to_socket_id(unsigned int lcore_id)
{
diff --git a/lib/librte_eal/common/include/rte_lcore.h b/lib/librte_eal/common/include/rte_lcore.h
index 0c68391..ea40c25 100644
--- a/lib/librte_eal/common/include/rte_lcore.h
+++ b/lib/librte_eal/common/include/rte_lcore.h
@@ -93,11 +93,7 @@ rte_lcore_id(void)
* @return
* the id of the master lcore
*/
-static inline unsigned
-rte_get_master_lcore(void)
-{
- return rte_eal_get_configuration()->master_lcore;
-}
+unsigned int rte_get_master_lcore(void);
/**
* Return the number of execution units (lcores) on the system.
@@ -105,12 +101,7 @@ rte_get_master_lcore(void)
* @return
* the number of execution units (lcores) on the system.
*/
-static inline unsigned
-rte_lcore_count(void)
-{
- const struct rte_config *cfg = rte_eal_get_configuration();
- return cfg->lcore_count;
-}
+unsigned int rte_lcore_count(void);
/**
* Return the index of the lcore starting from zero.
@@ -215,14 +206,7 @@ rte_lcore_cpuset(unsigned int lcore_id);
* @return
* True if the given lcore is enabled; false otherwise.
*/
-static inline int
-rte_lcore_is_enabled(unsigned int lcore_id)
-{
- struct rte_config *cfg = rte_eal_get_configuration();
- if (lcore_id >= RTE_MAX_LCORE)
- return 0;
- return cfg->lcore_role[lcore_id] == ROLE_RTE;
-}
+int rte_lcore_is_enabled(unsigned int lcore_id);
/**
* Get the next enabled lcore ID.
@@ -237,25 +221,8 @@ rte_lcore_is_enabled(unsigned int lcore_id)
* @return
* The next lcore_id or RTE_MAX_LCORE if not found.
*/
-static inline unsigned int
-rte_get_next_lcore(unsigned int i, int skip_master, int wrap)
-{
- i++;
- if (wrap)
- i %= RTE_MAX_LCORE;
+unsigned int rte_get_next_lcore(unsigned int i, int skip_master, int wrap);
- while (i < RTE_MAX_LCORE) {
- if (!rte_lcore_is_enabled(i) ||
- (skip_master && (i == rte_get_master_lcore()))) {
- i++;
- if (wrap)
- i %= RTE_MAX_LCORE;
- continue;
- }
- break;
- }
- return i;
-}
/**
* Macro to browse all running lcores.
*/
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index ca9ace0..e7422d4 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -309,6 +309,16 @@ DPDK_19.08 {
} DPDK_19.05;
+DPDK_19.11 {
+ global:
+
+ rte_get_master_lcore;
+ rte_get_next_lcore;
+ rte_lcore_count;
+ rte_lcore_is_enabled;
+
+} DPDK_19.08;
+
EXPERIMENTAL {
global:
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 08/12] log: hide internal log structure
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
` (3 preceding siblings ...)
2019-10-23 18:54 4% ` [dpdk-dev] [PATCH v2 06/12] pci: remove deprecated functions David Marchand
@ 2019-10-23 18:54 8% ` David Marchand
2019-10-24 16:30 0% ` Thomas Monjalon
2019-10-23 18:54 3% ` [dpdk-dev] [PATCH v2 10/12] eal: deinline lcore APIs David Marchand
` (3 subsequent siblings)
8 siblings, 1 reply; 200+ results
From: David Marchand @ 2019-10-23 18:54 UTC (permalink / raw)
To: dev; +Cc: stephen, anatoly.burakov, thomas, John McNamara, Marko Kovacevic
No need to expose rte_logs, hide it and remove it from the current ABI.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
Changelog since v1:
- updated release notes,
---
doc/guides/rel_notes/release_19_11.rst | 2 ++
lib/librte_eal/common/eal_common_log.c | 23 ++++++++++++++++-------
lib/librte_eal/common/include/rte_log.h | 20 +++-----------------
lib/librte_eal/rte_eal_version.map | 1 -
4 files changed, 21 insertions(+), 25 deletions(-)
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 579311d..082c570 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -210,6 +210,8 @@ ABI Changes
* eal: removed the ``rte_malloc_virt2phy`` function, replaced by
``rte_malloc_virt2iova`` since v17.11.
+* eal: made the ``rte_logs`` struct and global symbol private.
+
* pci: removed the following deprecated functions since dpdk:
- ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
diff --git a/lib/librte_eal/common/eal_common_log.c b/lib/librte_eal/common/eal_common_log.c
index e0a7bef..57d35a4 100644
--- a/lib/librte_eal/common/eal_common_log.c
+++ b/lib/librte_eal/common/eal_common_log.c
@@ -17,13 +17,6 @@
#include "eal_private.h"
-/* global log structure */
-struct rte_logs rte_logs = {
- .type = ~0,
- .level = RTE_LOG_DEBUG,
- .file = NULL,
-};
-
struct rte_eal_opt_loglevel {
/** Next list entry */
TAILQ_ENTRY(rte_eal_opt_loglevel) next;
@@ -58,6 +51,22 @@ struct rte_log_dynamic_type {
uint32_t loglevel;
};
+/** The rte_log structure. */
+struct rte_logs {
+ uint32_t type; /**< Bitfield with enabled logs. */
+ uint32_t level; /**< Log level. */
+ FILE *file; /**< Output file set by rte_openlog_stream, or NULL. */
+ size_t dynamic_types_len;
+ struct rte_log_dynamic_type *dynamic_types;
+};
+
+/* global log structure */
+static struct rte_logs rte_logs = {
+ .type = ~0,
+ .level = RTE_LOG_DEBUG,
+ .file = NULL,
+};
+
/* per core log */
static RTE_DEFINE_PER_LCORE(struct log_cur_msg, log_cur_msg);
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index 1bb0e66..a8d0eb7 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -26,20 +26,6 @@ extern "C" {
#include <rte_config.h>
#include <rte_compat.h>
-struct rte_log_dynamic_type;
-
-/** The rte_log structure. */
-struct rte_logs {
- uint32_t type; /**< Bitfield with enabled logs. */
- uint32_t level; /**< Log level. */
- FILE *file; /**< Output file set by rte_openlog_stream, or NULL. */
- size_t dynamic_types_len;
- struct rte_log_dynamic_type *dynamic_types;
-};
-
-/** Global log information */
-extern struct rte_logs rte_logs;
-
/* SDK log type */
#define RTE_LOGTYPE_EAL 0 /**< Log related to eal. */
#define RTE_LOGTYPE_MALLOC 1 /**< Log related to malloc. */
@@ -260,7 +246,7 @@ void rte_log_dump(FILE *f);
* to rte_openlog_stream().
*
* The level argument determines if the log should be displayed or
- * not, depending on the global rte_logs variable.
+ * not, depending on the global log level and the per logtype level.
*
* The preferred alternative is the RTE_LOG() because it adds the
* level and type in the logged string.
@@ -291,8 +277,8 @@ int rte_log(uint32_t level, uint32_t logtype, const char *format, ...)
* to rte_openlog_stream().
*
* The level argument determines if the log should be displayed or
- * not, depending on the global rte_logs variable. A trailing
- * newline may be added if needed.
+ * not, depending on the global log level and the per logtype level.
+ * A trailing newline may be added if needed.
*
* The preferred alternative is the RTE_LOG() because it adds the
* level and type in the logged string.
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 6d7e0e4..ca9ace0 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -45,7 +45,6 @@ DPDK_2.0 {
rte_log;
rte_log_cur_msg_loglevel;
rte_log_cur_msg_logtype;
- rte_logs;
rte_malloc;
rte_malloc_dump_stats;
rte_malloc_get_socket_stats;
--
1.8.3.1
* [dpdk-dev] [PATCH v2 06/12] pci: remove deprecated functions
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
` (2 preceding siblings ...)
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 03/12] eal: remove deprecated malloc virt2phys function David Marchand
@ 2019-10-23 18:54 4% ` David Marchand
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 08/12] log: hide internal log structure David Marchand
` (4 subsequent siblings)
8 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-23 18:54 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic, Gaetan Rivet
Those functions have been deprecated since 17.11 and have 1:1
replacements.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 7 -----
doc/guides/rel_notes/release_19_11.rst | 6 +++++
lib/librte_pci/rte_pci.c | 19 --------------
lib/librte_pci/rte_pci.h | 47 ----------------------------------
lib/librte_pci/rte_pci_version.map | 3 ---
5 files changed, 6 insertions(+), 76 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bbd5863..cf7744e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -38,13 +38,6 @@ Deprecation Notices
have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
functions. The due date for the removal targets DPDK 20.02.
-* pci: Several exposed functions are misnamed.
- The following functions are deprecated starting from v17.11 and are replaced:
-
- - ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
- - ``eal_parse_pci_DomBDF`` replaced by ``rte_pci_addr_parse``
- - ``rte_eal_compare_pci_addr`` replaced by ``rte_pci_addr_cmp``
-
* dpaa2: removal of ``rte_dpaa2_memsegs`` structure which has been replaced
by a pa-va search library. This structure was earlier being used for holding
memory segments used by dpaa2 driver for faster pa->va translation. This
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 0c61c1c..579311d 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -210,6 +210,12 @@ ABI Changes
* eal: removed the ``rte_malloc_virt2phy`` function, replaced by
``rte_malloc_virt2iova`` since v17.11.
+* pci: removed the following deprecated functions since dpdk:
+
+ - ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
+ - ``eal_parse_pci_DomBDF`` replaced by ``rte_pci_addr_parse``
+ - ``rte_eal_compare_pci_addr`` replaced by ``rte_pci_addr_cmp``
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_pci/rte_pci.c b/lib/librte_pci/rte_pci.c
index f400178..a753cf3 100644
--- a/lib/librte_pci/rte_pci.c
+++ b/lib/librte_pci/rte_pci.c
@@ -87,18 +87,6 @@ pci_dbdf_parse(const char *input, struct rte_pci_addr *dev_addr)
return 0;
}
-int
-eal_parse_pci_BDF(const char *input, struct rte_pci_addr *dev_addr)
-{
- return pci_bdf_parse(input, dev_addr);
-}
-
-int
-eal_parse_pci_DomBDF(const char *input, struct rte_pci_addr *dev_addr)
-{
- return pci_dbdf_parse(input, dev_addr);
-}
-
void
rte_pci_device_name(const struct rte_pci_addr *addr,
char *output, size_t size)
@@ -110,13 +98,6 @@ rte_pci_device_name(const struct rte_pci_addr *addr,
}
int
-rte_eal_compare_pci_addr(const struct rte_pci_addr *addr,
- const struct rte_pci_addr *addr2)
-{
- return rte_pci_addr_cmp(addr, addr2);
-}
-
-int
rte_pci_addr_cmp(const struct rte_pci_addr *addr,
const struct rte_pci_addr *addr2)
{
diff --git a/lib/librte_pci/rte_pci.h b/lib/librte_pci/rte_pci.h
index eaa9d07..c878914 100644
--- a/lib/librte_pci/rte_pci.h
+++ b/lib/librte_pci/rte_pci.h
@@ -106,37 +106,6 @@ struct mapped_pci_resource {
TAILQ_HEAD(mapped_pci_res_list, mapped_pci_resource);
/**
- * @deprecated
- * Utility function to produce a PCI Bus-Device-Function value
- * given a string representation. Assumes that the BDF is provided without
- * a domain prefix (i.e. domain returned is always 0)
- *
- * @param input
- * The input string to be parsed. Should have the format XX:XX.X
- * @param dev_addr
- * The PCI Bus-Device-Function address to be returned.
- * Domain will always be returned as 0
- * @return
- * 0 on success, negative on error.
- */
-int eal_parse_pci_BDF(const char *input, struct rte_pci_addr *dev_addr);
-
-/**
- * @deprecated
- * Utility function to produce a PCI Bus-Device-Function value
- * given a string representation. Assumes that the BDF is provided including
- * a domain prefix.
- *
- * @param input
- * The input string to be parsed. Should have the format XXXX:XX:XX.X
- * @param dev_addr
- * The PCI Bus-Device-Function address to be returned
- * @return
- * 0 on success, negative on error.
- */
-int eal_parse_pci_DomBDF(const char *input, struct rte_pci_addr *dev_addr);
-
-/**
* Utility function to write a pci device name, this device name can later be
* used to retrieve the corresponding rte_pci_addr using eal_parse_pci_*
* BDF helpers.
@@ -152,22 +121,6 @@ void rte_pci_device_name(const struct rte_pci_addr *addr,
char *output, size_t size);
/**
- * @deprecated
- * Utility function to compare two PCI device addresses.
- *
- * @param addr
- * The PCI Bus-Device-Function address to compare
- * @param addr2
- * The PCI Bus-Device-Function address to compare
- * @return
- * 0 on equal PCI address.
- * Positive on addr is greater than addr2.
- * Negative on addr is less than addr2, or error.
- */
-int rte_eal_compare_pci_addr(const struct rte_pci_addr *addr,
- const struct rte_pci_addr *addr2);
-
-/**
* Utility function to compare two PCI device addresses.
*
* @param addr
diff --git a/lib/librte_pci/rte_pci_version.map b/lib/librte_pci/rte_pci_version.map
index c028027..03790cb 100644
--- a/lib/librte_pci/rte_pci_version.map
+++ b/lib/librte_pci/rte_pci_version.map
@@ -1,11 +1,8 @@
DPDK_17.11 {
global:
- eal_parse_pci_BDF;
- eal_parse_pci_DomBDF;
pci_map_resource;
pci_unmap_resource;
- rte_eal_compare_pci_addr;
rte_pci_addr_cmp;
rte_pci_addr_parse;
rte_pci_device_name;
--
1.8.3.1
* [dpdk-dev] [PATCH v2 03/12] eal: remove deprecated malloc virt2phys function
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
2019-10-23 18:54 12% ` [dpdk-dev] [PATCH v2 01/12] eal: make lcore config private David Marchand
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 02/12] eal: remove deprecated CPU flags check function David Marchand
@ 2019-10-23 18:54 5% ` David Marchand
2019-10-23 18:54 4% ` [dpdk-dev] [PATCH v2 06/12] pci: remove deprecated functions David Marchand
` (5 subsequent siblings)
8 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-23 18:54 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic
Remove rte_malloc_virt2phy as announced previously.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_19_11.rst | 3 +++
lib/librte_eal/common/include/rte_malloc.h | 7 -------
3 files changed, 3 insertions(+), 10 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 50ac348..bbd5863 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,9 +34,6 @@ Deprecation Notices
+ ``rte_eal_devargs_type_count``
-* eal: The ``rte_malloc_virt2phy`` function has been deprecated and replaced
- by ``rte_malloc_virt2iova`` since v17.11 and will be removed.
-
* vfio: removal of ``rte_vfio_dma_map`` and ``rte_vfio_dma_unmap`` APIs which
have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
functions. The due date for the removal targets DPDK 20.02.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 8bf2437..0c61c1c 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -207,6 +207,9 @@ ABI Changes
* eal: removed the ``rte_cpu_check_supported`` function, replaced by
``rte_cpu_is_supported`` since dpdk v17.08.
+* eal: removed the ``rte_malloc_virt2phy`` function, replaced by
+ ``rte_malloc_virt2iova`` since v17.11.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/include/rte_malloc.h b/lib/librte_eal/common/include/rte_malloc.h
index 3593fb4..42ca051 100644
--- a/lib/librte_eal/common/include/rte_malloc.h
+++ b/lib/librte_eal/common/include/rte_malloc.h
@@ -553,13 +553,6 @@ rte_malloc_set_limit(const char *type, size_t max);
rte_iova_t
rte_malloc_virt2iova(const void *addr);
-__rte_deprecated
-static inline phys_addr_t
-rte_malloc_virt2phy(const void *addr)
-{
- return rte_malloc_virt2iova(addr);
-}
-
#ifdef __cplusplus
}
#endif
--
1.8.3.1
* [dpdk-dev] [PATCH v2 02/12] eal: remove deprecated CPU flags check function
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
2019-10-23 18:54 12% ` [dpdk-dev] [PATCH v2 01/12] eal: make lcore config private David Marchand
@ 2019-10-23 18:54 5% ` David Marchand
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 03/12] eal: remove deprecated malloc virt2phys function David Marchand
` (6 subsequent siblings)
8 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-23 18:54 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic
Remove rte_cpu_check_supported as announced previously.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_19_11.rst | 3 +++
lib/librte_eal/common/eal_common_cpuflags.c | 11 -----------
lib/librte_eal/common/include/generic/rte_cpuflags.h | 9 ---------
lib/librte_eal/rte_eal_version.map | 1 -
5 files changed, 3 insertions(+), 24 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e4a33e0..50ac348 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,9 +34,6 @@ Deprecation Notices
+ ``rte_eal_devargs_type_count``
-* eal: The ``rte_cpu_check_supported`` function has been deprecated since
- v17.08 and will be removed.
-
* eal: The ``rte_malloc_virt2phy`` function has been deprecated and replaced
by ``rte_malloc_virt2iova`` since v17.11 and will be removed.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index d7e14b4..8bf2437 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -204,6 +204,9 @@ ABI Changes
* eal: made the ``lcore_config`` struct and global symbol private.
+* eal: removed the ``rte_cpu_check_supported`` function, replaced by
+ ``rte_cpu_is_supported`` since dpdk v17.08.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/eal_common_cpuflags.c b/lib/librte_eal/common/eal_common_cpuflags.c
index 3a055f7..dc5f75d 100644
--- a/lib/librte_eal/common/eal_common_cpuflags.c
+++ b/lib/librte_eal/common/eal_common_cpuflags.c
@@ -7,17 +7,6 @@
#include <rte_common.h>
#include <rte_cpuflags.h>
-/**
- * Checks if the machine is adequate for running the binary. If it is not, the
- * program exits with status 1.
- */
-void
-rte_cpu_check_supported(void)
-{
- if (!rte_cpu_is_supported())
- exit(1);
-}
-
int
rte_cpu_is_supported(void)
{
diff --git a/lib/librte_eal/common/include/generic/rte_cpuflags.h b/lib/librte_eal/common/include/generic/rte_cpuflags.h
index 156ea00..872f0eb 100644
--- a/lib/librte_eal/common/include/generic/rte_cpuflags.h
+++ b/lib/librte_eal/common/include/generic/rte_cpuflags.h
@@ -49,15 +49,6 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature);
/**
* This function checks that the currently used CPU supports the CPU features
* that were specified at compile time. It is called automatically within the
- * EAL, so does not need to be used by applications.
- */
-__rte_deprecated
-void
-rte_cpu_check_supported(void);
-
-/**
- * This function checks that the currently used CPU supports the CPU features
- * that were specified at compile time. It is called automatically within the
* EAL, so does not need to be used by applications. This version returns a
* result so that decisions may be made (for instance, graceful shutdowns).
*/
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index aeedf39..0887549 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -8,7 +8,6 @@ DPDK_2.0 {
per_lcore__rte_errno;
rte_calloc;
rte_calloc_socket;
- rte_cpu_check_supported;
rte_cpu_get_flag_enabled;
rte_cycles_vmware_tsc_map;
rte_delay_us;
--
1.8.3.1
* [dpdk-dev] [PATCH v2 01/12] eal: make lcore config private
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
@ 2019-10-23 18:54 12% ` David Marchand
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 02/12] eal: remove deprecated CPU flags check function David Marchand
` (7 subsequent siblings)
8 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-23 18:54 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic, Harry van Haaren, Harini Ramakrishnan,
Omar Cardona, Anand Rawat, Ranjit Menon
From: Stephen Hemminger <stephen@networkplumber.org>
The internal structure of lcore_config does not need to be part of
visible API/ABI. Make it private to EAL.
Rearrange the structure so it takes less memory (and cache footprint).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Based on Stephen v8: http://patchwork.dpdk.org/patch/60443/
Changes since Stephen v8:
- do not change core_id, socket_id and core_index types,
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_19_11.rst | 2 ++
lib/librte_eal/common/eal_common_launch.c | 2 ++
lib/librte_eal/common/eal_private.h | 25 +++++++++++++++++++++++++
lib/librte_eal/common/include/rte_lcore.h | 24 ------------------------
lib/librte_eal/common/rte_service.c | 2 ++
lib/librte_eal/rte_eal_version.map | 1 -
lib/librte_eal/windows/eal/eal_thread.c | 1 +
8 files changed, 32 insertions(+), 29 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 237813b..e4a33e0 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -23,10 +23,6 @@ Deprecation Notices
* eal: The function ``rte_eal_remote_launch`` will return new error codes
after read or write error on the pipe, instead of calling ``rte_panic``.
-* eal: The ``lcore_config`` struct and global symbol will be made private to
- remove it from the externally visible ABI and allow it to be updated in the
- future.
-
* eal: both declaring and identifying devices will be streamlined in v18.11.
New functions will appear to query a specific port from buses, classes of
device and device drivers. Device declaration will be made coherent with the
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 40121b9..d7e14b4 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -202,6 +202,8 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* eal: made the ``lcore_config`` struct and global symbol private.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index fe0ba3f..cf52d71 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -15,6 +15,8 @@
#include <rte_per_lcore.h>
#include <rte_lcore.h>
+#include "eal_private.h"
+
/*
* Wait until a lcore finished its job.
*/
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index 798ede5..0e4b033 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -10,6 +10,31 @@
#include <stdio.h>
#include <rte_dev.h>
+#include <rte_lcore.h>
+
+/**
+ * Structure storing internal configuration (per-lcore)
+ */
+struct lcore_config {
+ pthread_t thread_id; /**< pthread identifier */
+ int pipe_master2slave[2]; /**< communication pipe with master */
+ int pipe_slave2master[2]; /**< communication pipe with master */
+
+ lcore_function_t * volatile f; /**< function to call */
+ void * volatile arg; /**< argument of function */
+ volatile int ret; /**< return value of function */
+
+ volatile enum rte_lcore_state_t state; /**< lcore state */
+ unsigned int socket_id; /**< physical socket id for this lcore */
+ unsigned int core_id; /**< core number on socket for this lcore */
+ int core_index; /**< relative index, starting from 0 */
+ uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
+ uint8_t detected; /**< true if lcore was detected */
+
+ rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
+};
+
+extern struct lcore_config lcore_config[RTE_MAX_LCORE];
/**
* Initialize the memzone subsystem (private to eal).
diff --git a/lib/librte_eal/common/include/rte_lcore.h b/lib/librte_eal/common/include/rte_lcore.h
index c86f72e..0c68391 100644
--- a/lib/librte_eal/common/include/rte_lcore.h
+++ b/lib/librte_eal/common/include/rte_lcore.h
@@ -66,30 +66,6 @@ typedef cpuset_t rte_cpuset_t;
} while (0)
#endif
-/**
- * Structure storing internal configuration (per-lcore)
- */
-struct lcore_config {
- unsigned detected; /**< true if lcore was detected */
- pthread_t thread_id; /**< pthread identifier */
- int pipe_master2slave[2]; /**< communication pipe with master */
- int pipe_slave2master[2]; /**< communication pipe with master */
- lcore_function_t * volatile f; /**< function to call */
- void * volatile arg; /**< argument of function */
- volatile int ret; /**< return value of function */
- volatile enum rte_lcore_state_t state; /**< lcore state */
- unsigned socket_id; /**< physical socket id for this lcore */
- unsigned core_id; /**< core number on socket for this lcore */
- int core_index; /**< relative index, starting from 0 */
- rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
- uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
-};
-
-/**
- * Internal configuration (per-lcore)
- */
-extern struct lcore_config lcore_config[RTE_MAX_LCORE];
-
RTE_DECLARE_PER_LCORE(unsigned, _lcore_id); /**< Per thread "lcore id". */
RTE_DECLARE_PER_LCORE(rte_cpuset_t, _cpuset); /**< Per thread "cpuset". */
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index beb9691..79235c0 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -21,6 +21,8 @@
#include <rte_memory.h>
#include <rte_malloc.h>
+#include "eal_private.h"
+
#define RTE_SERVICE_NUM_MAX 64
#define SERVICE_F_REGISTERED (1 << 0)
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 7cbf82d..aeedf39 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -4,7 +4,6 @@ DPDK_2.0 {
__rte_panic;
eal_parse_sysfs_value;
eal_timer_source;
- lcore_config;
per_lcore__lcore_id;
per_lcore__rte_errno;
rte_calloc;
diff --git a/lib/librte_eal/windows/eal/eal_thread.c b/lib/librte_eal/windows/eal/eal_thread.c
index 906502f..0591d4c 100644
--- a/lib/librte_eal/windows/eal/eal_thread.c
+++ b/lib/librte_eal/windows/eal/eal_thread.c
@@ -12,6 +12,7 @@
#include <rte_common.h>
#include <eal_thread.h>
+#include "eal_private.h"
RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
--
1.8.3.1
* [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11
2019-10-22 9:32 8% [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11 David Marchand
` (4 preceding siblings ...)
2019-10-22 9:32 3% ` [dpdk-dev] [PATCH 8/8] log: hide internal log structure David Marchand
@ 2019-10-23 18:54 8% ` David Marchand
2019-10-23 18:54 12% ` [dpdk-dev] [PATCH v2 01/12] eal: make lcore config private David Marchand
` (8 more replies)
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
6 siblings, 9 replies; 200+ results
From: David Marchand @ 2019-10-23 18:54 UTC (permalink / raw)
To: dev; +Cc: stephen, anatoly.burakov, thomas
Let's prepare for the ABI freeze.
The first patches are about changes that had been announced before
(including a patch from Stephen that I took as-is, since it is ready from my pov).
The malloc_heap structure from the memory subsystem can be hidden.
The PCI library had some forgotten deprecated APIs that are removed with
this series.
rte_logs could be hidden, but I am not that comfortable about doing it
right away: I added an accessor to rte_logs.file, but I am fine with
dropping the last patch and waiting until the next ABI break to actually
hide this.
Changelog since v1:
- I went a step further, hiding rte_config after de-inlining non critical
functions
Comments?
--
David Marchand
David Marchand (11):
eal: remove deprecated CPU flags check function
eal: remove deprecated malloc virt2phys function
mem: hide internal heap header
net/bonding: use non deprecated PCI API
pci: remove deprecated functions
log: add log stream accessor
log: hide internal log structure
test/mem: remove dependency on EAL internals
eal: deinline lcore APIs
eal: factorize lcore role code in common code
eal: make the global configuration private
Stephen Hemminger (1):
eal: make lcore config private
app/test-pmd/testpmd.c | 1 -
app/test/test_memzone.c | 50 +++++++++------
doc/guides/rel_notes/deprecation.rst | 17 -----
doc/guides/rel_notes/release_19_11.rst | 19 ++++++
drivers/common/qat/qat_logs.c | 3 +-
drivers/common/qat/qat_logs.h | 3 +-
drivers/net/bonding/rte_eth_bond_args.c | 5 +-
lib/librte_eal/common/Makefile | 2 +-
lib/librte_eal/common/eal_common_cpuflags.c | 11 ----
lib/librte_eal/common/eal_common_launch.c | 2 +
lib/librte_eal/common/eal_common_lcore.c | 48 ++++++++++++++
lib/librte_eal/common/eal_common_log.c | 56 ++++++++++-------
lib/librte_eal/common/eal_common_mcfg.c | 1 +
lib/librte_eal/common/eal_memcfg.h | 3 +-
lib/librte_eal/common/eal_private.h | 57 +++++++++++++++++
.../common/include/generic/rte_cpuflags.h | 9 ---
lib/librte_eal/common/include/rte_eal.h | 43 -------------
lib/librte_eal/common/include/rte_lcore.h | 73 ++++------------------
lib/librte_eal/common/include/rte_log.h | 33 +++++-----
lib/librte_eal/common/include/rte_malloc.h | 7 ---
lib/librte_eal/common/include/rte_malloc_heap.h | 35 -----------
lib/librte_eal/common/malloc_heap.c | 1 +
lib/librte_eal/common/malloc_heap.h | 25 +++++++-
lib/librte_eal/common/meson.build | 1 -
lib/librte_eal/common/rte_malloc.c | 1 +
lib/librte_eal/common/rte_service.c | 2 +
lib/librte_eal/freebsd/eal/eal.c | 7 ---
lib/librte_eal/linux/eal/eal.c | 7 ---
lib/librte_eal/rte_eal_version.map | 17 +++--
lib/librte_eal/windows/eal/eal_thread.c | 1 +
lib/librte_pci/rte_pci.c | 19 ------
lib/librte_pci/rte_pci.h | 47 --------------
lib/librte_pci/rte_pci_version.map | 3 -
33 files changed, 271 insertions(+), 338 deletions(-)
delete mode 100644 lib/librte_eal/common/include/rte_malloc_heap.h
--
1.8.3.1
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-23 15:00 0% ` Stephen Hemminger
@ 2019-10-23 15:12 0% ` Wang, Haiyue
0 siblings, 0 replies; 200+ results
From: Wang, Haiyue @ 2019-10-23 15:12 UTC (permalink / raw)
To: Stephen Hemminger, Olivier Matz
Cc: Ananyev, Konstantin, dev, Andrew Rybchenko, Richardson, Bruce,
Jerin Jacob Kollanukkaran, Wiles, Keith, Morten Brørup,
Thomas Monjalon
> -----Original Message-----
> From: Stephen Hemminger [mailto:stephen@networkplumber.org]
> Sent: Wednesday, October 23, 2019 23:00
> To: Olivier Matz <olivier.matz@6wind.com>
> Cc: Wang, Haiyue <haiyue.wang@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> dev@dpdk.org; Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> <keith.wiles@intel.com>; Morten Brørup <mb@smartsharesystems.com>; Thomas Monjalon
> <thomas@monjalon.net>
> Subject: Re: [PATCH v2] mbuf: support dynamic fields and flags
>
> On Wed, 23 Oct 2019 12:21:43 +0200
> Olivier Matz <olivier.matz@6wind.com> wrote:
>
> > On Wed, Oct 23, 2019 at 03:16:13AM +0000, Wang, Haiyue wrote:
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Wednesday, October 23, 2019 06:52
> > > > To: Olivier Matz <olivier.matz@6wind.com>; dev@dpdk.org
> > > > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Wang,
> > > > Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > > > <keith.wiles@intel.com>; Morten Brørup <mb@smartsharesystems.com>; Stephen Hemminger
> > > > <stephen@networkplumber.org>; Thomas Monjalon <thomas@monjalon.net>
> > > > Subject: RE: [PATCH v2] mbuf: support dynamic fields and flags
> > > >
> > > >
> > > > > Many features require to store data inside the mbuf. As the room in mbuf
> > > > > structure is limited, it is not possible to have a field for each
> > > > > feature. Also, changing fields in the mbuf structure can break the API
> > > > > or ABI.
> > > > >
> > > > > This commit addresses these issues, by enabling the dynamic registration
> > > > > of fields or flags:
> > > > >
> > > > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > > > given size (>= 1 byte) and alignment constraint.
> > > > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > > > >
> > > > > The typical use case is a PMD that registers space for an offload
> > > > > feature, when the application requests to enable this feature. As
> > > > > the space in mbuf is limited, the space should only be reserved if it
> > > > > is going to be used (i.e when the application explicitly asks for it).
> > > > >
> > > > > The registration can be done at any moment, but it is not possible
> > > > > to unregister fields or flags for now.
> > > > >
> > > > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > > > ---
> > > > >
> > > > > v2
> > > > >
> > > > > * Rebase on top of master: solve conflict with Stephen's patchset
> > > > > (packet copy)
> > > > > * Add new apis to register a dynamic field/flag at a specific place
> > > > > * Add a dump function (sugg by David)
> > > > > * Enhance field registration function to select the best offset, keeping
> > > > > large aligned zones as much as possible (sugg by Konstantin)
> > > > > * Use a size_t and unsigned int instead of int when relevant
> > > > > (sugg by Konstantin)
> > > > > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > > > > (sugg by Konstantin)
> > > > > * Remove unused argument in private function (sugg by Konstantin)
> > > > > * Fix and simplify locking (sugg by Konstantin)
> > > > > * Fix minor typo
> > > > >
> > > > > rfc -> v1
> > > > >
> > > > > * Rebase on top of master
> > > > > * Change registration API to use a structure instead of
> > > > > variables, getting rid of #defines (Stephen's comment)
> > > > > * Update flag registration to use a similar API as fields.
> > > > > * Change max name length from 32 to 64 (sugg. by Thomas)
> > > > > * Enhance API documentation (Haiyue's and Andrew's comments)
> > > > > * Add a debug log at registration
> > > > > * Add some words in release note
> > > > > * Did some performance tests (sugg. by Andrew):
> > > > > On my platform, reading a dynamic field takes ~3 cycles more
> > > > > than a static field, and ~2 cycles more for writing.
> > > > >
> > > > > app/test/test_mbuf.c | 145 ++++++-
> > > > > doc/guides/rel_notes/release_19_11.rst | 7 +
> > > > > lib/librte_mbuf/Makefile | 2 +
> > > > > lib/librte_mbuf/meson.build | 6 +-
> > > > > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > > > > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > > > > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > > > > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > > > > 8 files changed, 959 insertions(+), 5 deletions(-)
> > > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> > > > >
> > > > > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > > > > index b9c2b2500..01cafad59 100644
> > > > > --- a/app/test/test_mbuf.c
> > > > > +++ b/app/test/test_mbuf.c
> > > > > @@ -28,6 +28,7 @@
> > > > > #include <rte_random.h>
> > > > > #include <rte_cycles.h>
> > > > > #include <rte_malloc.h>
> > > > > +#include <rte_mbuf_dyn.h>
> > > > >
> > >
> > > [snip]
> > > > > +int
> > > > > +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > > > > + unsigned int req)
> > > > > +{
> > > > > + int ret;
> > > > > +
> > > > > + if (req != UINT_MAX && req >= 64) {
> > > >
> > > > Might be better to replace 64 with something like sizeof(mbuf->ol_flags) * CHAR_BIT or so.
> > >
> > > Might introduce a new macro like kernel:
> > >
> > > /**
> > > * FIELD_SIZEOF - get the size of a struct's field
> > > * @t: the target struct
> > > * @f: the target struct's field
> > > * Return: the size of @f in the struct definition without having a
> > > * declared instance of @t.
> > > */
> > > #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
> > >
> > > Then: FIELD_SIZEOF(rte_mbuf, ol_flags) * CHAR_BIT
> >
> > Good idea, thanks
> >
>
> Kernel is replacing FIELD_SIZEOF with sizeof_member
Yes, but looks like in 5.5 ? 5.4 hasn't merged. ;-)
https://www.phoronix.com/scan.php?page=news_item&px=Linux-5.4-Size-Of-Member
https://patchwork.kernel.org/patch/11184583/
+/**
+ * sizeof_member(TYPE, MEMBER) - get the size of a struct's member
+ *
+ * @TYPE: the target struct
+ * @MEMBER: the target struct's member
+ *
+ * Return: the size of @MEMBER in the struct definition without having a
+ * declared instance of @TYPE.
+ */
+#define sizeof_member(TYPE, MEMBER) (sizeof(((TYPE *)0)->MEMBER))
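Taken together, the suggestion amounts to something like the following self-contained sketch. Here `struct toy_mbuf` and `bitnum_is_valid()` are illustrative stand-ins (only the `ol_flags` member matters for the bounds check); the `sizeof_member()` macro is the one quoted above:

```c
#include <assert.h>
#include <limits.h>   /* CHAR_BIT */
#include <stdint.h>

/* sizeof_member(): size of a struct's member without needing a
 * declared instance of the struct, as proposed for the kernel;
 * identical in effect to the older FIELD_SIZEOF(). */
#define sizeof_member(TYPE, MEMBER) (sizeof(((TYPE *)0)->MEMBER))

/* Minimal stand-in for struct rte_mbuf: only the member the
 * bitnum bounds check cares about. */
struct toy_mbuf {
	uint64_t ol_flags;  /* offload flags, one bit per feature */
};

/* The suggested rewrite of the rte_mbuf_dynflag_register_bitnum()
 * bounds check: reject any requested bit number that does not fit
 * in ol_flags, without hard-coding the literal 64. */
static inline int bitnum_is_valid(unsigned int req)
{
	return req < sizeof_member(struct toy_mbuf, ol_flags) * CHAR_BIT;
}
```

If `ol_flags` ever changed width, the check would follow automatically, which is the point of the suggestion.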
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-23 10:21 0% ` Olivier Matz
@ 2019-10-23 15:00 0% ` Stephen Hemminger
2019-10-23 15:12 0% ` Wang, Haiyue
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2019-10-23 15:00 UTC (permalink / raw)
To: Olivier Matz
Cc: Wang, Haiyue, Ananyev, Konstantin, dev, Andrew Rybchenko,
Richardson, Bruce, Jerin Jacob Kollanukkaran, Wiles, Keith,
Morten Brørup, Thomas Monjalon
On Wed, 23 Oct 2019 12:21:43 +0200
Olivier Matz <olivier.matz@6wind.com> wrote:
> On Wed, Oct 23, 2019 at 03:16:13AM +0000, Wang, Haiyue wrote:
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Wednesday, October 23, 2019 06:52
> > > To: Olivier Matz <olivier.matz@6wind.com>; dev@dpdk.org
> > > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce <bruce.richardson@intel.com>; Wang,
> > > Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > > <keith.wiles@intel.com>; Morten Brørup <mb@smartsharesystems.com>; Stephen Hemminger
> > > <stephen@networkplumber.org>; Thomas Monjalon <thomas@monjalon.net>
> > > Subject: RE: [PATCH v2] mbuf: support dynamic fields and flags
> > >
> > >
> > > > Many features require to store data inside the mbuf. As the room in mbuf
> > > > structure is limited, it is not possible to have a field for each
> > > > feature. Also, changing fields in the mbuf structure can break the API
> > > > or ABI.
> > > >
> > > > This commit addresses these issues, by enabling the dynamic registration
> > > > of fields or flags:
> > > >
> > > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > > given size (>= 1 byte) and alignment constraint.
> > > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > > >
> > > > The typical use case is a PMD that registers space for an offload
> > > > feature, when the application requests to enable this feature. As
> > > > the space in mbuf is limited, the space should only be reserved if it
> > > > is going to be used (i.e when the application explicitly asks for it).
> > > >
> > > > The registration can be done at any moment, but it is not possible
> > > > to unregister fields or flags for now.
> > > >
> > > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > > ---
> > > >
> > > > v2
> > > >
> > > > * Rebase on top of master: solve conflict with Stephen's patchset
> > > > (packet copy)
> > > > * Add new apis to register a dynamic field/flag at a specific place
> > > > * Add a dump function (sugg by David)
> > > > * Enhance field registration function to select the best offset, keeping
> > > > large aligned zones as much as possible (sugg by Konstantin)
> > > > * Use a size_t and unsigned int instead of int when relevant
> > > > (sugg by Konstantin)
> > > > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > > > (sugg by Konstantin)
> > > > * Remove unused argument in private function (sugg by Konstantin)
> > > > * Fix and simplify locking (sugg by Konstantin)
> > > > * Fix minor typo
> > > >
> > > > rfc -> v1
> > > >
> > > > * Rebase on top of master
> > > > * Change registration API to use a structure instead of
> > > > variables, getting rid of #defines (Stephen's comment)
> > > > * Update flag registration to use a similar API as fields.
> > > > * Change max name length from 32 to 64 (sugg. by Thomas)
> > > > * Enhance API documentation (Haiyue's and Andrew's comments)
> > > > * Add a debug log at registration
> > > > * Add some words in release note
> > > > * Did some performance tests (sugg. by Andrew):
> > > > On my platform, reading a dynamic field takes ~3 cycles more
> > > > than a static field, and ~2 cycles more for writing.
> > > >
> > > > app/test/test_mbuf.c | 145 ++++++-
> > > > doc/guides/rel_notes/release_19_11.rst | 7 +
> > > > lib/librte_mbuf/Makefile | 2 +
> > > > lib/librte_mbuf/meson.build | 6 +-
> > > > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > > > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > > > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > > > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > > > 8 files changed, 959 insertions(+), 5 deletions(-)
> > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> > > >
> > > > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > > > index b9c2b2500..01cafad59 100644
> > > > --- a/app/test/test_mbuf.c
> > > > +++ b/app/test/test_mbuf.c
> > > > @@ -28,6 +28,7 @@
> > > > #include <rte_random.h>
> > > > #include <rte_cycles.h>
> > > > #include <rte_malloc.h>
> > > > +#include <rte_mbuf_dyn.h>
> > > >
> >
> > [snip]
> > > > +int
> > > > +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > > > + unsigned int req)
> > > > +{
> > > > + int ret;
> > > > +
> > > > + if (req != UINT_MAX && req >= 64) {
> > >
> > > Might be better to replace 64 with something like sizeof(mbuf->ol_flags) * CHAR_BIT or so.
> >
> > Might introduce a new macro like kernel:
> >
> > /**
> > * FIELD_SIZEOF - get the size of a struct's field
> > * @t: the target struct
> > * @f: the target struct's field
> > * Return: the size of @f in the struct definition without having a
> > * declared instance of @t.
> > */
> > #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
> >
> > Then: FIELD_SIZEOF(rte_mbuf, ol_flags) * CHAR_BIT
>
> Good idea, thanks
>
Kernel is replacing FIELD_SIZEOF with sizeof_member
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-23 12:00 0% ` Shahaf Shuler
@ 2019-10-23 13:33 0% ` Olivier Matz
2019-10-24 4:54 0% ` Shahaf Shuler
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2019-10-23 13:33 UTC (permalink / raw)
To: Shahaf Shuler
Cc: dev, Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
Hi Shahaf,
On Wed, Oct 23, 2019 at 12:00:30PM +0000, Shahaf Shuler wrote:
> Hi Olivier,
>
> Thursday, October 17, 2019 5:42 PM, Olivier Matz:
> > Subject: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
> >
> > Many features require to store data inside the mbuf. As the room in mbuf
> > structure is limited, it is not possible to have a field for each feature. Also,
> > changing fields in the mbuf structure can break the API or ABI.
> >
> > This commit addresses these issues, by enabling the dynamic registration of
> > fields or flags:
> >
> > - a dynamic field is a named area in the rte_mbuf structure, with a
> > given size (>= 1 byte) and alignment constraint.
> > - a dynamic flag is a named bit in the rte_mbuf structure.
> >
> > The typical use case is a PMD that registers space for an offload feature,
> > when the application requests to enable this feature. As the space in mbuf is
> > limited, the space should only be reserved if it is going to be used (i.e when
> > the application explicitly asks for it).
>
> > According to the description, dynamic fields enable custom applications and supported PMDs to use the dynamic part of the mbuf for their specific needs.
> However the mechanism to report and activate the field/flag registration comes from the general OFFLOAD flags.
>
> > Maybe it would be better to add an option to query and select dynamic fields for a PMD outside of the standard ethdev offload flags?
It is not mandatory to use the ethdev layer to register a dynamic field
or flag in the mbuf. It is just the typical use case.
It can also be enabled when using a library that has specific needs,
for instance, you call rte_reorder_init(), and it will register the
sequence number dynamic field.
An application that requires a specific mbuf field can also do the
registration by itself.
In other words, when you initialize a subpart that needs a dynamic field
or flag, you have to do the registration there.
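The registration pattern described above (any subsystem registers at init time, same name returns the same offset) can be mimicked with a self-contained toy. To be clear, `toy_dynfield_register()`, `TOY_DYNFIELD()` and `struct toy_mbuf` are illustrative stand-ins for `rte_mbuf_dynfield_register()`, `RTE_MBUF_DYNFIELD()` and `struct rte_mbuf`, not the DPDK implementation; the toy also skips the parameter-consistency check the real API performs on re-registration:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define DYNFIELD_AREA 16            /* like mbuf->dynfield1[2] */
#define MAX_FIELDS 8

struct toy_mbuf {
	uint64_t ol_flags;
	uint8_t dynfield1[DYNFIELD_AREA];  /* reserved for dynamic fields */
};

struct toy_dynfield {
	char name[64];
	size_t size;
	size_t offset;
};

static struct toy_dynfield registry[MAX_FIELDS];
static size_t nb_fields;
static size_t next_free;            /* next free byte in the area */

/* Register a named field, or return the already-reserved offset if
 * the same name was registered before. Any library, PMD or
 * application can call this when it is initialized. */
static int toy_dynfield_register(const char *name, size_t size)
{
	size_t i;

	for (i = 0; i < nb_fields; i++)
		if (strcmp(registry[i].name, name) == 0)
			return (int)registry[i].offset;
	if (nb_fields == MAX_FIELDS || next_free + size > DYNFIELD_AREA)
		return -1;              /* no room left */
	snprintf(registry[nb_fields].name, sizeof(registry[0].name),
		 "%s", name);
	registry[nb_fields].size = size;
	registry[nb_fields].offset = next_free;
	next_free += size;
	return (int)registry[nb_fields++].offset;
}

/* Access a registered field in a given mbuf, as RTE_MBUF_DYNFIELD()
 * does: base of the dynamic area plus the registered offset. */
#define TOY_DYNFIELD(m, offset, type) \
	((type)((uint8_t *)(m)->dynfield1 + (offset)))
```

A reordering library, for instance, would register a "seqn" field once at init and then read/write `*TOY_DYNFIELD(m, off, uint32_t *)` per packet; the space is only consumed if some subsystem actually asked for it.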
>
> >
> > The registration can be done at any moment, but it is not possible to
> > unregister fields or flags for now.
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
* Re: [dpdk-dev] [PATCH 0/3] net definitions fixes
2019-10-23 13:00 0% ` David Marchand
@ 2019-10-23 13:19 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2019-10-23 13:19 UTC (permalink / raw)
To: David Marchand; +Cc: dev, Olivier Matz
On 10/23/2019 2:00 PM, David Marchand wrote:
> On Wed, Oct 23, 2019 at 2:57 PM David Marchand
> <david.marchand@redhat.com> wrote:
>>
>> On Wed, Oct 23, 2019 at 2:12 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>>
>>> On 10/23/2019 9:51 AM, David Marchand wrote:
>>>> Small patchset with fixes after inspecting the librte_net.
>>>> I copied stable@dpdk.org in the 2nd patch for information only.
>>>>
>>>
>>> Overall lgtm. And this release seems the one to make these changes, and we
>>> already break the API for net library on this release BUT should we update the
>>> ABIVER for net library?
>>
>> This patchset breaks API by renaming structures/defines and remove
>> some constant defines.
>> ABI should be the same ?
>
> But I suppose adding some words in the release notes can't be wrong.
>
You are right, there is no point in increasing ABIVER since only the API is
changing, +1 to documenting the change in the release notes (also for the previous rte_esp change).
Thanks,
* Re: [dpdk-dev] [PATCH 8/8] log: hide internal log structure
2019-10-22 9:32 3% ` [dpdk-dev] [PATCH 8/8] log: hide internal log structure David Marchand
2019-10-22 16:35 0% ` Stephen Hemminger
@ 2019-10-23 13:02 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: David Marchand @ 2019-10-23 13:02 UTC (permalink / raw)
To: dev; +Cc: Stephen Hemminger, Burakov, Anatoly, Thomas Monjalon
On Tue, Oct 22, 2019 at 11:33 AM David Marchand
<david.marchand@redhat.com> wrote:
>
> No need to expose rte_logs, hide it and remove it from the current ABI.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> lib/librte_eal/common/eal_common_log.c | 23 ++++++++++++++++-------
> lib/librte_eal/common/include/rte_log.h | 20 +++-----------------
> lib/librte_eal/rte_eal_version.map | 1 -
> 3 files changed, 19 insertions(+), 25 deletions(-)
Note to self.
If we go with this patch, an update of the release notes is missing.
--
David Marchand
* Re: [dpdk-dev] [PATCH 0/3] net definitions fixes
2019-10-23 12:57 3% ` David Marchand
@ 2019-10-23 13:00 0% ` David Marchand
2019-10-23 13:19 0% ` Ferruh Yigit
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2019-10-23 13:00 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, Olivier Matz
On Wed, Oct 23, 2019 at 2:57 PM David Marchand
<david.marchand@redhat.com> wrote:
>
> On Wed, Oct 23, 2019 at 2:12 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
> >
> > On 10/23/2019 9:51 AM, David Marchand wrote:
> > > Small patchset with fixes after inspecting the librte_net.
> > > I copied stable@dpdk.org in the 2nd patch for information only.
> > >
> >
> > Overall lgtm. And this release seems the one to make these changes, and we
> > already break the API for net library on this release BUT should we update the
> > ABIVER for net library?
>
> This patchset breaks the API by renaming structures/defines and removing
> some constant defines.
> The ABI should be the same?
But I suppose adding some words in the release notes can't be wrong.
--
David Marchand
* Re: [dpdk-dev] [PATCH 0/3] net definitions fixes
@ 2019-10-23 12:57 3% ` David Marchand
2019-10-23 13:00 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2019-10-23 12:57 UTC (permalink / raw)
To: Ferruh Yigit; +Cc: dev, Olivier Matz
On Wed, Oct 23, 2019 at 2:12 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 10/23/2019 9:51 AM, David Marchand wrote:
> > Small patchset with fixes after inspecting the librte_net.
> > I copied stable@dpdk.org in the 2nd patch for information only.
> >
>
> Overall lgtm. And this release seems the one to make these changes, and we
> already break the API for net library on this release BUT should we update the
> ABIVER for net library?
This patchset breaks the API by renaming structures/defines and removing
some constant defines.
The ABI should be the same?
--
David Marchand
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-17 14:42 3% ` [dpdk-dev] [PATCH v2] " Olivier Matz
2019-10-18 2:47 0% ` Wang, Haiyue
2019-10-22 22:51 0% ` Ananyev, Konstantin
@ 2019-10-23 12:00 0% ` Shahaf Shuler
2019-10-23 13:33 0% ` Olivier Matz
2019-10-24 7:38 0% ` Slava Ovsiienko
3 siblings, 1 reply; 200+ results
From: Shahaf Shuler @ 2019-10-23 12:00 UTC (permalink / raw)
To: Olivier Matz, dev
Cc: Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
Hi Olivier,
Thursday, October 17, 2019 5:42 PM, Olivier Matz:
> Subject: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
>
> Many features require to store data inside the mbuf. As the room in mbuf
> structure is limited, it is not possible to have a field for each feature. Also,
> changing fields in the mbuf structure can break the API or ABI.
>
> This commit addresses these issues, by enabling the dynamic registration of
> fields or flags:
>
> - a dynamic field is a named area in the rte_mbuf structure, with a
> given size (>= 1 byte) and alignment constraint.
> - a dynamic flag is a named bit in the rte_mbuf structure.
>
> The typical use case is a PMD that registers space for an offload feature,
> when the application requests to enable this feature. As the space in mbuf is
> limited, the space should only be reserved if it is going to be used (i.e when
> the application explicitly asks for it).
According to the description, dynamic fields enable custom applications and supported PMDs to use the dynamic part of the mbuf for their specific needs.
However the mechanism to report and activate the field/flag registration comes from the general OFFLOAD flags.
Maybe it would be better to add an option to query and select dynamic fields for a PMD outside of the standard ethdev offload flags?
>
> The registration can be done at any moment, but it is not possible to
> unregister fields or flags for now.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>
> v2
>
> * Rebase on top of master: solve conflict with Stephen's patchset
> (packet copy)
> * Add new apis to register a dynamic field/flag at a specific place
> * Add a dump function (sugg by David)
> * Enhance field registration function to select the best offset, keeping
> large aligned zones as much as possible (sugg by Konstantin)
> * Use a size_t and unsigned int instead of int when relevant
> (sugg by Konstantin)
> * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> (sugg by Konstantin)
> * Remove unused argument in private function (sugg by Konstantin)
> * Fix and simplify locking (sugg by Konstantin)
> * Fix minor typo
>
> rfc -> v1
>
> * Rebase on top of master
> * Change registration API to use a structure instead of
> variables, getting rid of #defines (Stephen's comment)
> * Update flag registration to use a similar API as fields.
> * Change max name length from 32 to 64 (sugg. by Thomas)
> * Enhance API documentation (Haiyue's and Andrew's comments)
> * Add a debug log at registration
> * Add some words in release note
> * Did some performance tests (sugg. by Andrew):
> On my platform, reading a dynamic field takes ~3 cycles more
> than a static field, and ~2 cycles more for writing.
>
> app/test/test_mbuf.c | 145 ++++++-
> doc/guides/rel_notes/release_19_11.rst | 7 +
> lib/librte_mbuf/Makefile | 2 +
> lib/librte_mbuf/meson.build | 6 +-
> lib/librte_mbuf/rte_mbuf.h | 23 +-
> lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> lib/librte_mbuf/rte_mbuf_version.map | 7 +
> 8 files changed, 959 insertions(+), 5 deletions(-) create mode 100644
> lib/librte_mbuf/rte_mbuf_dyn.c create mode 100644
> lib/librte_mbuf/rte_mbuf_dyn.h
>
> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c index
> b9c2b2500..01cafad59 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -28,6 +28,7 @@
> #include <rte_random.h>
> #include <rte_cycles.h>
> #include <rte_malloc.h>
> +#include <rte_mbuf_dyn.h>
>
> #include "test.h"
>
> @@ -657,7 +658,6 @@ test_attach_from_different_pool(struct
> rte_mempool *pktmbuf_pool,
> rte_pktmbuf_free(clone2);
> return -1;
> }
> -#undef GOTO_FAIL
>
> /*
> * test allocation and free of mbufs
> @@ -1276,6 +1276,143 @@ test_tx_offload(void)
> return (v1 == v2) ? 0 : -EINVAL;
> }
>
> +static int
> +test_mbuf_dyn(struct rte_mempool *pktmbuf_pool) {
> + const struct rte_mbuf_dynfield dynfield = {
> + .name = "test-dynfield",
> + .size = sizeof(uint8_t),
> + .align = __alignof__(uint8_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield2 = {
> + .name = "test-dynfield2",
> + .size = sizeof(uint16_t),
> + .align = __alignof__(uint16_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield3 = {
> + .name = "test-dynfield3",
> + .size = sizeof(uint8_t),
> + .align = __alignof__(uint8_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield_fail_big = {
> + .name = "test-dynfield-fail-big",
> + .size = 256,
> + .align = 1,
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield_fail_align = {
> + .name = "test-dynfield-fail-align",
> + .size = 1,
> + .align = 3,
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag = {
> + .name = "test-dynflag",
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag2 = {
> + .name = "test-dynflag2",
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag3 = {
> + .name = "test-dynflag3",
> + .flags = 0,
> + };
> + struct rte_mbuf *m = NULL;
> + int offset, offset2, offset3;
> + int flag, flag2, flag3;
> + int ret;
> +
> + printf("Test mbuf dynamic fields and flags\n");
> + rte_mbuf_dyn_dump(stdout);
> +
> + offset = rte_mbuf_dynfield_register(&dynfield);
> + if (offset == -1)
> + GOTO_FAIL("failed to register dynamic field, offset=%d: %s",
> + offset, strerror(errno));
> +
> + ret = rte_mbuf_dynfield_register(&dynfield);
> + if (ret != offset)
> + GOTO_FAIL("failed to lookup dynamic field, ret=%d: %s",
> + ret, strerror(errno));
> +
> + offset2 = rte_mbuf_dynfield_register(&dynfield2);
> + if (offset2 == -1 || offset2 == offset || (offset2 & 1))
> + GOTO_FAIL("failed to register dynamic field 2, offset2=%d:
> %s",
> + offset2, strerror(errno));
> +
> + offset3 = rte_mbuf_dynfield_register_offset(&dynfield3,
> + offsetof(struct rte_mbuf, dynfield1[1]));
> + if (offset3 != offsetof(struct rte_mbuf, dynfield1[1]))
> + GOTO_FAIL("failed to register dynamic field 3, offset=%d:
> %s",
> + offset3, strerror(errno));
> +
> + printf("dynfield: offset=%d, offset2=%d, offset3=%d\n",
> + offset, offset2, offset3);
> +
> + ret = rte_mbuf_dynfield_register(&dynfield_fail_big);
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (too big)");
> +
> + ret = rte_mbuf_dynfield_register(&dynfield_fail_align);
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (bad
> alignment)");
> +
> + ret = rte_mbuf_dynfield_register_offset(&dynfield_fail_align,
> + offsetof(struct rte_mbuf, ol_flags));
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (not avail)");
> +
> + flag = rte_mbuf_dynflag_register(&dynflag);
> + if (flag == -1)
> + GOTO_FAIL("failed to register dynamic flag, flag=%d: %s",
> + flag, strerror(errno));
> +
> + ret = rte_mbuf_dynflag_register(&dynflag);
> + if (ret != flag)
> + GOTO_FAIL("failed to lookup dynamic flag, ret=%d: %s",
> + ret, strerror(errno));
> +
> + flag2 = rte_mbuf_dynflag_register(&dynflag2);
> + if (flag2 == -1 || flag2 == flag)
> + GOTO_FAIL("failed to register dynamic flag 2, flag2=%d: %s",
> + flag2, strerror(errno));
> +
> + flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
> + rte_bsf64(PKT_LAST_FREE));
> + if (flag3 != rte_bsf64(PKT_LAST_FREE))
> + GOTO_FAIL("failed to register dynamic flag 3, flag2=%d: %s",
> + flag3, strerror(errno));
> +
> + printf("dynflag: flag=%d, flag2=%d, flag3=%d\n", flag, flag2, flag3);
> +
> + /* set, get dynamic field */
> + m = rte_pktmbuf_alloc(pktmbuf_pool);
> + if (m == NULL)
> + GOTO_FAIL("Cannot allocate mbuf");
> +
> + *RTE_MBUF_DYNFIELD(m, offset, uint8_t *) = 1;
> + if (*RTE_MBUF_DYNFIELD(m, offset, uint8_t *) != 1)
> + GOTO_FAIL("failed to read dynamic field");
> + *RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) = 1000;
> + if (*RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) != 1000)
> + GOTO_FAIL("failed to read dynamic field");
> +
> + /* set a dynamic flag */
> + m->ol_flags |= (1ULL << flag);
> +
> + rte_mbuf_dyn_dump(stdout);
> + rte_pktmbuf_free(m);
> + return 0;
> +fail:
> + rte_pktmbuf_free(m);
> + return -1;
> +}
> +#undef GOTO_FAIL
> +
> static int
> test_mbuf(void)
> {
> @@ -1295,6 +1432,12 @@ test_mbuf(void)
> goto err;
> }
>
> + /* test registration of dynamic fields and flags */
> + if (test_mbuf_dyn(pktmbuf_pool) < 0) {
> + printf("mbuf dynflag test failed\n");
> + goto err;
> + }
> +
> /* create a specific pktmbuf pool with a priv_size != 0 and no data
> * room size */
> pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
> diff --git a/doc/guides/rel_notes/release_19_11.rst
> b/doc/guides/rel_notes/release_19_11.rst
> index 85953b962..9e9c94554 100644
> --- a/doc/guides/rel_notes/release_19_11.rst
> +++ b/doc/guides/rel_notes/release_19_11.rst
> @@ -21,6 +21,13 @@ DPDK Release 19.11
>
> xdg-open build/doc/html/guides/rel_notes/release_19_11.html
>
> +* **Add support of dynamic fields and flags in mbuf.**
> +
> + This new feature adds the ability to dynamically register some room
> + for a field or a flag in the mbuf structure. This is typically used
> + for specific offload features, where adding a static field or flag in
> + the mbuf is not justified.
> +
>
> New Features
> ------------
> diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile index
> c8f6d2689..5a9bcee73 100644
> --- a/lib/librte_mbuf/Makefile
> +++ b/lib/librte_mbuf/Makefile
> @@ -17,8 +17,10 @@ LIBABIVER := 5
>
> # all source are stored in SRCS-y
> SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c
> rte_mbuf_pool_ops.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MBUF) += rte_mbuf_dyn.c
>
> # install includes
> SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h
> rte_mbuf_ptype.h rte_mbuf_pool_ops.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_dyn.h
>
> include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build index
> 6cc11ebb4..9137e8f26 100644
> --- a/lib/librte_mbuf/meson.build
> +++ b/lib/librte_mbuf/meson.build
> @@ -2,8 +2,10 @@
> # Copyright(c) 2017 Intel Corporation
>
> version = 5
> -sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c') -
> headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
> +sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c',
> + 'rte_mbuf_dyn.c')
> +headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h',
> + 'rte_mbuf_dyn.h')
> deps += ['mempool']
>
> allow_experimental_apis = true
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h index
> fb0849ac1..5740b1e93 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -198,9 +198,12 @@ extern "C" {
> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
>
> -/* add new RX flags here */
> +/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>
> -/* add new TX flags here */
> +#define PKT_FIRST_FREE (1ULL << 23)
> +#define PKT_LAST_FREE (1ULL << 39)
> +
> +/* add new TX flags here, don't forget to update PKT_LAST_FREE */
>
> /**
> * Indicate that the metadata field in the mbuf is in use.
> @@ -738,6 +741,7 @@ struct rte_mbuf {
> */
> struct rte_mbuf_ext_shared_info *shinfo;
>
> + uint64_t dynfield1[2]; /**< Reserved for dynamic fields. */
> } __rte_cache_aligned;
>
> /**
> @@ -1684,6 +1688,20 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m,
> void *buf_addr,
> */
> #define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
>
> +/**
> + * Copy dynamic fields from m_src to m_dst.
> + *
> + * @param m_dst
> + * The destination mbuf.
> + * @param m_src
> + * The source mbuf.
> + */
> +static inline void
> +rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf
> +*msrc) {
> + memcpy(&mdst->dynfield1, msrc->dynfield1, sizeof(mdst-
> >dynfield1)); }
> +
> /* internal */
> static inline void
> __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf
> *msrc) @@ -1695,6 +1713,7 @@ __rte_pktmbuf_copy_hdr(struct rte_mbuf
> *mdst, const struct rte_mbuf *msrc)
> mdst->hash = msrc->hash;
> mdst->packet_type = msrc->packet_type;
> mdst->timestamp = msrc->timestamp;
> + rte_mbuf_dynfield_copy(mdst, msrc);
> }
>
> /**
> diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c
> b/lib/librte_mbuf/rte_mbuf_dyn.c new file mode 100644 index
> 000000000..9ef235483
> --- /dev/null
> +++ b/lib/librte_mbuf/rte_mbuf_dyn.c
> @@ -0,0 +1,548 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2019 6WIND S.A.
> + */
> +
> +#include <sys/queue.h>
> +#include <stdint.h>
> +#include <limits.h>
> +
> +#include <rte_common.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_tailq.h>
> +#include <rte_errno.h>
> +#include <rte_malloc.h>
> +#include <rte_string_fns.h>
> +#include <rte_mbuf.h>
> +#include <rte_mbuf_dyn.h>
> +
> +#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
> +
> +struct mbuf_dynfield_elt {
> + TAILQ_ENTRY(mbuf_dynfield_elt) next;
> + struct rte_mbuf_dynfield params;
> + size_t offset;
> +};
> +TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
> +
> +static struct rte_tailq_elem mbuf_dynfield_tailq = {
> + .name = "RTE_MBUF_DYNFIELD",
> +};
> +EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
> +
> +struct mbuf_dynflag_elt {
> + TAILQ_ENTRY(mbuf_dynflag_elt) next;
> + struct rte_mbuf_dynflag params;
> + unsigned int bitnum;
> +};
> +TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
> +
> +static struct rte_tailq_elem mbuf_dynflag_tailq = {
> + .name = "RTE_MBUF_DYNFLAG",
> +};
> +EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
> +
> +struct mbuf_dyn_shm {
> + /**
> + * For each mbuf byte, free_space[i] != 0 if space is free.
> + * The value is the size of the biggest aligned element that
> + * can fit in the zone.
> + */
> + uint8_t free_space[sizeof(struct rte_mbuf)];
> + /** Bitfield of available flags. */
> + uint64_t free_flags;
> +};
> +static struct mbuf_dyn_shm *shm;
> +
> +/* Set the value of free_space[] according to the size and alignment of
> + * the free areas. This helps to select the best place when reserving a
> + * dynamic field. Assume tailq is locked.
> + */
> +static void
> +process_score(void)
> +{
> + size_t off, align, size, i;
> +
> + /* first, erase previous info */
> + for (i = 0; i < sizeof(struct rte_mbuf); i++) {
> + if (shm->free_space[i])
> + shm->free_space[i] = 1;
> + }
> +
> + for (off = 0; off < sizeof(struct rte_mbuf); off++) {
> + /* get the size of the free zone */
> + for (size = 0; shm->free_space[off + size]; size++)
> + ;
> + if (size == 0)
> + continue;
> +
> + /* get the alignment of biggest object that can fit in
> + * the zone at this offset.
> + */
> + for (align = 1;
> + (off % (align << 1)) == 0 && (align << 1) <= size;
> + align <<= 1)
> + ;
> +
> + /* save it in free_space[] */
> + for (i = off; i < off + size; i++)
> + shm->free_space[i] = RTE_MAX(align, shm-
> >free_space[i]);
> + }
> +}
> +
> +/* Allocate and initialize the shared memory. Assume tailq is locked */
> +static int
> +init_shared_mem(void)
> +{
> + const struct rte_memzone *mz;
> + uint64_t mask;
> +
> + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> + mz =
> rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
> + sizeof(struct
> mbuf_dyn_shm),
> + SOCKET_ID_ANY, 0,
> + RTE_CACHE_LINE_SIZE);
> + } else {
> + mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
> + }
> + if (mz == NULL)
> + return -1;
> +
> + shm = mz->addr;
> +
> +#define mark_free(field) \
> + memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
> + 1, sizeof(((struct rte_mbuf *)0)->field))
> +
> + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> + /* init free_space, keep it sync'd with
> + * rte_mbuf_dynfield_copy().
> + */
> + memset(shm, 0, sizeof(*shm));
> + mark_free(dynfield1);
> +
> + /* init free_flags */
> + for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask
> <<= 1)
> + shm->free_flags |= mask;
> +
> + process_score();
> + }
> +#undef mark_free
> +
> + return 0;
> +}
> +
> +/* check if this offset can be used */
> +static int
> +check_offset(size_t offset, size_t size, size_t align) {
> + size_t i;
> +
> + if ((offset & (align - 1)) != 0)
> + return -1;
> + if (offset + size > sizeof(struct rte_mbuf))
> + return -1;
> +
> + for (i = 0; i < size; i++) {
> + if (!shm->free_space[i + offset])
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static struct mbuf_dynfield_elt *
> +__mbuf_dynfield_lookup(const char *name)
> +{
> + struct mbuf_dynfield_list *mbuf_dynfield_list;
> + struct mbuf_dynfield_elt *mbuf_dynfield;
> + struct rte_tailq_entry *te;
> +
> + mbuf_dynfield_list = RTE_TAILQ_CAST(
> + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> +
> + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> + mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
> + if (strcmp(name, mbuf_dynfield->params.name) == 0)
> + break;
> + }
> +
> + if (te == NULL) {
> + rte_errno = ENOENT;
> + return NULL;
> + }
> +
> + return mbuf_dynfield;
> +}
> +
> +int
> +rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
> +{
> + struct mbuf_dynfield_elt *mbuf_dynfield;
> +
> + if (shm == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_read_lock();
> + mbuf_dynfield = __mbuf_dynfield_lookup(name);
> + rte_mcfg_tailq_read_unlock();
> +
> + if (mbuf_dynfield == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + if (params != NULL)
> + memcpy(params, &mbuf_dynfield->params, sizeof(*params));
> +
> + return mbuf_dynfield->offset;
> +}
> +
> +static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
> + const struct rte_mbuf_dynfield *params2)
> +{
> + if (strcmp(params1->name, params2->name))
> + return -1;
> + if (params1->size != params2->size)
> + return -1;
> + if (params1->align != params2->align)
> + return -1;
> + if (params1->flags != params2->flags)
> + return -1;
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static int
> +__rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> + size_t req)
> +{
> + struct mbuf_dynfield_list *mbuf_dynfield_list;
> + struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
> + struct rte_tailq_entry *te = NULL;
> + unsigned int best_zone = UINT_MAX;
> + size_t i, offset;
> + int ret;
> +
> + if (shm == NULL && init_shared_mem() < 0)
> + return -1;
> +
> + mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
> + if (mbuf_dynfield != NULL) {
> + if (req != SIZE_MAX && req != mbuf_dynfield->offset) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) < 0) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + return mbuf_dynfield->offset;
> + }
> +
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> + rte_errno = EPERM;
> + return -1;
> + }
> +
> + if (req == SIZE_MAX) {
> + for (offset = 0;
> + offset < sizeof(struct rte_mbuf);
> + offset++) {
> + if (check_offset(offset, params->size, params->align) == 0 &&
> + shm->free_space[offset] < best_zone) {
> + best_zone = shm->free_space[offset];
> + req = offset;
> + }
> + }
> + if (req == SIZE_MAX) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> + } else {
> + if (check_offset(req, params->size, params->align) < 0) {
> + rte_errno = EBUSY;
> + return -1;
> + }
> + }
> +
> + offset = req;
> + mbuf_dynfield_list = RTE_TAILQ_CAST(
> + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> +
> + te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
> + if (te == NULL)
> + return -1;
> +
> + mbuf_dynfield = rte_zmalloc("mbuf_dynfield", sizeof(*mbuf_dynfield), 0);
> + if (mbuf_dynfield == NULL) {
> + rte_free(te);
> + return -1;
> + }
> +
> + ret = strlcpy(mbuf_dynfield->params.name, params->name,
> + sizeof(mbuf_dynfield->params.name));
> + if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
> + rte_errno = ENAMETOOLONG;
> + rte_free(mbuf_dynfield);
> + rte_free(te);
> + return -1;
> + }
> + memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
> + mbuf_dynfield->offset = offset;
> + te->data = mbuf_dynfield;
> +
> + TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
> +
> + for (i = offset; i < offset + params->size; i++)
> + shm->free_space[i] = 0;
> + process_score();
> +
> + RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n",
> + params->name, params->size, params->align, params->flags,
> + offset);
> +
> + return offset;
> +}
> +
> +int
> +rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> + size_t req)
> +{
> + int ret;
> +
> + if (params->size >= sizeof(struct rte_mbuf)) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> + if (!rte_is_power_of_2(params->align)) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> + if (params->flags != 0) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_write_lock();
> + ret = __rte_mbuf_dynfield_register_offset(params, req);
> + rte_mcfg_tailq_write_unlock();
> +
> + return ret;
> +}
> +
> +int
> +rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
> +{
> + return rte_mbuf_dynfield_register_offset(params, SIZE_MAX);
> +}
> +
> +/* assume tailq is locked */
> +static struct mbuf_dynflag_elt *
> +__mbuf_dynflag_lookup(const char *name)
> +{
> + struct mbuf_dynflag_list *mbuf_dynflag_list;
> + struct mbuf_dynflag_elt *mbuf_dynflag;
> + struct rte_tailq_entry *te;
> +
> + mbuf_dynflag_list = RTE_TAILQ_CAST(
> + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> +
> + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> + mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
> + if (strncmp(name, mbuf_dynflag->params.name,
> + RTE_MBUF_DYN_NAMESIZE) == 0)
> + break;
> + }
> +
> + if (te == NULL) {
> + rte_errno = ENOENT;
> + return NULL;
> + }
> +
> + return mbuf_dynflag;
> +}
> +
> +int
> +rte_mbuf_dynflag_lookup(const char *name,
> + struct rte_mbuf_dynflag *params)
> +{
> + struct mbuf_dynflag_elt *mbuf_dynflag;
> +
> + if (shm == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_read_lock();
> + mbuf_dynflag = __mbuf_dynflag_lookup(name);
> + rte_mcfg_tailq_read_unlock();
> +
> + if (mbuf_dynflag == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + if (params != NULL)
> + memcpy(params, &mbuf_dynflag->params, sizeof(*params));
> +
> + return mbuf_dynflag->bitnum;
> +}
> +
> +static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
> + const struct rte_mbuf_dynflag *params2)
> +{
> + if (strcmp(params1->name, params2->name))
> + return -1;
> + if (params1->flags != params2->flags)
> + return -1;
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static int
> +__rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> + unsigned int req)
> +{
> + struct mbuf_dynflag_list *mbuf_dynflag_list;
> + struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
> + struct rte_tailq_entry *te = NULL;
> + unsigned int bitnum;
> + int ret;
> +
> + if (shm == NULL && init_shared_mem() < 0)
> + return -1;
> +
> + mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
> + if (mbuf_dynflag != NULL) {
> + if (req != UINT_MAX && req != mbuf_dynflag->bitnum) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + return mbuf_dynflag->bitnum;
> + }
> +
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> + rte_errno = EPERM;
> + return -1;
> + }
> +
> + if (req == UINT_MAX) {
> + if (shm->free_flags == 0) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> + bitnum = rte_bsf64(shm->free_flags);
> + } else {
> + if ((shm->free_flags & (1ULL << req)) == 0) {
> + rte_errno = EBUSY;
> + return -1;
> + }
> + bitnum = req;
> + }
> +
> + mbuf_dynflag_list = RTE_TAILQ_CAST(
> + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> +
> + te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
> + if (te == NULL)
> + return -1;
> +
> + mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag), 0);
> + if (mbuf_dynflag == NULL) {
> + rte_free(te);
> + return -1;
> + }
> +
> + ret = strlcpy(mbuf_dynflag->params.name, params->name,
> + sizeof(mbuf_dynflag->params.name));
> + if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
> + rte_free(mbuf_dynflag);
> + rte_free(te);
> + rte_errno = ENAMETOOLONG;
> + return -1;
> + }
> + mbuf_dynflag->bitnum = bitnum;
> + te->data = mbuf_dynflag;
> +
> + TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
> +
> + shm->free_flags &= ~(1ULL << bitnum);
> +
> + RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
> + params->name, params->flags, bitnum);
> +
> + return bitnum;
> +}
> +
> +int
> +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> + unsigned int req)
> +{
> + int ret;
> +
> + if (req != UINT_MAX && req >= 64) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_write_lock();
> + ret = __rte_mbuf_dynflag_register_bitnum(params, req);
> + rte_mcfg_tailq_write_unlock();
> +
> + return ret;
> +}
> +
> +int
> +rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
> +{
> + return rte_mbuf_dynflag_register_bitnum(params, UINT_MAX);
> +}
> +
> +void rte_mbuf_dyn_dump(FILE *out)
> +{
> + struct mbuf_dynfield_list *mbuf_dynfield_list;
> + struct mbuf_dynfield_elt *dynfield;
> + struct mbuf_dynflag_list *mbuf_dynflag_list;
> + struct mbuf_dynflag_elt *dynflag;
> + struct rte_tailq_entry *te;
> + size_t i;
> +
> + rte_mcfg_tailq_write_lock();
> + init_shared_mem();
> + fprintf(out, "Reserved fields:\n");
> + mbuf_dynfield_list = RTE_TAILQ_CAST(
> + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> + dynfield = (struct mbuf_dynfield_elt *)te->data;
> + fprintf(out, " name=%s offset=%zd size=%zd align=%zd flags=%x\n",
> + dynfield->params.name, dynfield->offset,
> + dynfield->params.size, dynfield->params.align,
> + dynfield->params.flags);
> + }
> + fprintf(out, "Reserved flags:\n");
> + mbuf_dynflag_list = RTE_TAILQ_CAST(
> + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> + dynflag = (struct mbuf_dynflag_elt *)te->data;
> + fprintf(out, " name=%s bitnum=%u flags=%x\n",
> + dynflag->params.name, dynflag->bitnum,
> + dynflag->params.flags);
> + }
> + fprintf(out, "Free space in mbuf (0 = free, value = zone alignment):\n");
> + for (i = 0; i < sizeof(struct rte_mbuf); i++) {
> + if ((i % 8) == 0)
> + fprintf(out, " %4.4zx: ", i);
> + fprintf(out, "%2.2x%s", shm->free_space[i],
> + (i % 8 != 7) ? " " : "\n");
> + }
> + rte_mcfg_tailq_write_unlock();
> +}
> diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
> new file mode 100644
> index 000000000..307613c96
> --- /dev/null
> +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> @@ -0,0 +1,226 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2019 6WIND S.A.
> + */
> +
> +#ifndef _RTE_MBUF_DYN_H_
> +#define _RTE_MBUF_DYN_H_
> +
> +/**
> + * @file
> + * RTE Mbuf dynamic fields and flags
> + *
> + * Many features require storing data inside the mbuf. As the room in
> + * the mbuf structure is limited, it is not possible to have a field for
> + * each feature. Also, changing fields in the mbuf structure can break
> + * the API or ABI.
> + *
> + * This module addresses this issue, by enabling the dynamic
> + * registration of fields or flags:
> + *
> + * - a dynamic field is a named area in the rte_mbuf structure, with a
> + * given size (>= 1 byte) and alignment constraint.
> + * - a dynamic flag is a named bit in the rte_mbuf structure, stored
> + * in mbuf->ol_flags.
> + *
> + * The typical use case is when a specific offload feature requires
> + * registering a dedicated offload field in the mbuf structure, and adding
> + * a static field or flag is not justified.
> + *
> + * Example of use:
> + *
> + * - A rte_mbuf_dynfield structure is defined, containing the parameters
> + * of the dynamic field to be registered:
> + * const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
> + * - The application initializes the PMD, and asks for this feature
> + * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
> + * rxconf. This makes the PMD register the field by calling
> + * rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
> + * stores the returned offset.
> + * - The application that uses the offload feature also registers
> + * the field to retrieve the same offset.
> + * - When the PMD receives a packet, it can set the field:
> + * *RTE_MBUF_DYNFIELD(m, offset, <type *>) = value;
> + * - In the main loop, the application can retrieve the value with
> + * the same macro.
> + *
> + * To avoid wasting space, the dynamic fields or flags must only be
> + * reserved on demand, when an application asks for the related feature.
> + *
> + * The registration can be done at any moment, but it is not possible
> + * to unregister fields or flags for now.
> + *
> + * A dynamic field can be reserved and used by an application only.
> + * It can for instance be a packet mark.
> + */
> +
> +#include <sys/types.h>
> +/**
> + * Maximum length of the dynamic field or flag string.
> + */
> +#define RTE_MBUF_DYN_NAMESIZE 64
> +
> +/**
> + * Structure describing the parameters of a mbuf dynamic field.
> + */
> +struct rte_mbuf_dynfield {
> + char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the field. */
> + size_t size; /**< The number of bytes to reserve. */
> + size_t align; /**< The alignment constraint (power of 2). */
> + unsigned int flags; /**< Reserved for future use, must be 0. */
> +};
> +
> +/**
> + * Structure describing the parameters of a mbuf dynamic flag.
> + */
> +struct rte_mbuf_dynflag {
> + char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the dynamic flag. */
> + unsigned int flags; /**< Reserved for future use, must be 0. */
> +};
> +
> +/**
> + * Register space for a dynamic field in the mbuf structure.
> + *
> + * If the field is already registered (same name and parameters), its
> + * offset is returned.
> + *
> + * @param params
> + * A structure containing the requested parameters (name, size,
> + * alignment constraint and flags).
> + * @return
> + * The offset in the mbuf structure, or -1 on error.
> + * Possible values for rte_errno:
> + * - EINVAL: invalid parameters (size, align, or flags).
> + * - EEXIST: this name is already registered with different parameters.
> + * - EPERM: called from a secondary process.
> + * - ENOENT: not enough room in mbuf.
> + * - ENOMEM: allocation failure.
> + * - ENAMETOOLONG: name does not end with \0.
> + */
> +__rte_experimental
> +int rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params);
> +
> +/**
> + * Register space for a dynamic field in the mbuf structure at offset.
> + *
> + * If the field is already registered (same name, parameters and offset),
> + * the offset is returned.
> + *
> + * @param params
> + * A structure containing the requested parameters (name, size,
> + * alignment constraint and flags).
> + * @param offset
> + * The requested offset. Ignored if SIZE_MAX is passed.
> + * @return
> + * The offset in the mbuf structure, or -1 on error.
> + * Possible values for rte_errno:
> + * - EINVAL: invalid parameters (size, align, flags, or offset).
> + * - EEXIST: this name is already registered with different parameters.
> + * - EBUSY: the requested offset cannot be used.
> + * - EPERM: called from a secondary process.
> + * - ENOENT: not enough room in mbuf.
> + * - ENOMEM: allocation failure.
> + * - ENAMETOOLONG: name does not end with \0.
> + */
> +__rte_experimental
> +int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> + size_t offset);
> +
> +/**
> + * Lookup for a registered dynamic mbuf field.
> + *
> + * @param name
> + * A string identifying the dynamic field.
> + * @param params
> + * If not NULL, and if the lookup is successful, the structure is
> + * filled with the parameters of the dynamic field.
> + * @return
> + * The offset of this field in the mbuf structure, or -1 on error.
> + * Possible values for rte_errno:
> + * - ENOENT: no dynamic field matches this name.
> + */
> +__rte_experimental
> +int rte_mbuf_dynfield_lookup(const char *name,
> + struct rte_mbuf_dynfield *params);
> +
> +/**
> + * Register a dynamic flag in the mbuf structure.
> + *
> + * If the flag is already registered (same name and parameters), its
> + * bitnum is returned.
> + *
> + * @param params
> + * A structure containing the requested parameters of the dynamic
> + * flag (name and options).
> + * @return
> + * The number of the reserved bit, or -1 on error.
> + * Possible values for rte_errno:
> + * - EINVAL: invalid parameters (size, align, or flags).
> + * - EEXIST: this name is already registered with different parameters.
> + * - EPERM: called from a secondary process.
> + * - ENOENT: no more flag available.
> + * - ENOMEM: allocation failure.
> + * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
> + */
> +__rte_experimental
> +int rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params);
> +
> +/**
> + * Register a dynamic flag in the mbuf structure specifying bitnum.
> + *
> + * If the flag is already registered (same name, parameters and bitnum),
> + * the bitnum is returned.
> + *
> + * @param params
> + * A structure containing the requested parameters of the dynamic
> + * flag (name and options).
> + * @param bitnum
> + * The requested bitnum. Ignored if UINT_MAX is passed.
> + * @return
> + * The number of the reserved bit, or -1 on error.
> + * Possible values for rte_errno:
> + * - EINVAL: invalid parameters (size, align, or flags).
> + * - EEXIST: this name is already registered with different parameters.
> + * - EBUSY: the requested bitnum cannot be used.
> + * - EPERM: called from a secondary process.
> + * - ENOENT: no more flag available.
> + * - ENOMEM: allocation failure.
> + * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
> + */
> +__rte_experimental
> +int rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> + unsigned int bitnum);
> +
> +/**
> + * Lookup for a registered dynamic mbuf flag.
> + *
> + * @param name
> + * A string identifying the dynamic flag.
> + * @param params
> + * If not NULL, and if the lookup is successful, the structure is
> + * filled with the parameters of the dynamic flag.
> + * @return
> + * The bit number of this flag in mbuf->ol_flags, or -1 on error.
> + * Possible values for rte_errno:
> + * - ENOENT: no dynamic flag matches this name.
> + */
> +__rte_experimental
> +int rte_mbuf_dynflag_lookup(const char *name,
> + struct rte_mbuf_dynflag *params);
> +
> +/**
> + * Helper macro to access to a dynamic field.
> + */
> +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> +
> +/**
> + * Dump the status of dynamic fields and flags.
> + *
> + * @param out
> + * The stream where the status is displayed.
> + */
> +__rte_experimental
> +void rte_mbuf_dyn_dump(FILE *out);
> +
> +/* Placeholder for dynamic fields and flags declarations. */
> +
> +#endif
> diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
> index 519fead35..9bf5ca37a 100644
> --- a/lib/librte_mbuf/rte_mbuf_version.map
> +++ b/lib/librte_mbuf/rte_mbuf_version.map
> @@ -58,6 +58,13 @@ EXPERIMENTAL {
> global:
>
> rte_mbuf_check;
> + rte_mbuf_dynfield_lookup;
> + rte_mbuf_dynfield_register;
> + rte_mbuf_dynfield_register_offset;
> + rte_mbuf_dynflag_lookup;
> + rte_mbuf_dynflag_register;
> + rte_mbuf_dynflag_register_bitnum;
> + rte_mbuf_dyn_dump;
> rte_pktmbuf_copy;
>
> } DPDK_18.08;
> --
> 2.20.1
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-23 3:16 0% ` Wang, Haiyue
@ 2019-10-23 10:21 0% ` Olivier Matz
2019-10-23 15:00 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2019-10-23 10:21 UTC (permalink / raw)
To: Wang, Haiyue
Cc: Ananyev, Konstantin, dev, Andrew Rybchenko, Richardson, Bruce,
Jerin Jacob Kollanukkaran, Wiles, Keith, Morten Brørup,
Stephen Hemminger, Thomas Monjalon
On Wed, Oct 23, 2019 at 03:16:13AM +0000, Wang, Haiyue wrote:
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Wednesday, October 23, 2019 06:52
> > To: Olivier Matz <olivier.matz@6wind.com>; dev@dpdk.org
> > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce <bruce.richardson@intel.com>; Wang,
> > Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > <keith.wiles@intel.com>; Morten Brørup <mb@smartsharesystems.com>; Stephen Hemminger
> > <stephen@networkplumber.org>; Thomas Monjalon <thomas@monjalon.net>
> > Subject: RE: [PATCH v2] mbuf: support dynamic fields and flags
> >
> >
> > > Many features require to store data inside the mbuf. As the room in mbuf
> > > structure is limited, it is not possible to have a field for each
> > > feature. Also, changing fields in the mbuf structure can break the API
> > > or ABI.
> > >
> > > This commit addresses these issues, by enabling the dynamic registration
> > > of fields or flags:
> > >
> > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > given size (>= 1 byte) and alignment constraint.
> > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > >
> > > The typical use case is a PMD that registers space for an offload
> > > feature, when the application requests to enable this feature. As
> > > the space in mbuf is limited, the space should only be reserved if it
> > > is going to be used (i.e when the application explicitly asks for it).
> > >
> > > The registration can be done at any moment, but it is not possible
> > > to unregister fields or flags for now.
> > >
> > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > ---
> > >
> > > v2
> > >
> > > * Rebase on top of master: solve conflict with Stephen's patchset
> > > (packet copy)
> > > * Add new apis to register a dynamic field/flag at a specific place
> > > * Add a dump function (sugg by David)
> > > * Enhance field registration function to select the best offset, keeping
> > > large aligned zones as much as possible (sugg by Konstantin)
> > > * Use a size_t and unsigned int instead of int when relevant
> > > (sugg by Konstantin)
> > > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > > (sugg by Konstantin)
> > > * Remove unused argument in private function (sugg by Konstantin)
> > > * Fix and simplify locking (sugg by Konstantin)
> > > * Fix minor typo
> > >
> > > rfc -> v1
> > >
> > > * Rebase on top of master
> > > * Change registration API to use a structure instead of
> > > variables, getting rid of #defines (Stephen's comment)
> > > * Update flag registration to use a similar API as fields.
> > > * Change max name length from 32 to 64 (sugg. by Thomas)
> > > * Enhance API documentation (Haiyue's and Andrew's comments)
> > > * Add a debug log at registration
> > > * Add some words in release note
> > > * Did some performance tests (sugg. by Andrew):
> > > On my platform, reading a dynamic field takes ~3 cycles more
> > > than a static field, and ~2 cycles more for writing.
> > >
> > > app/test/test_mbuf.c | 145 ++++++-
> > > doc/guides/rel_notes/release_19_11.rst | 7 +
> > > lib/librte_mbuf/Makefile | 2 +
> > > lib/librte_mbuf/meson.build | 6 +-
> > > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > > 8 files changed, 959 insertions(+), 5 deletions(-)
> > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> > >
> > > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > > index b9c2b2500..01cafad59 100644
> > > --- a/app/test/test_mbuf.c
> > > +++ b/app/test/test_mbuf.c
> > > @@ -28,6 +28,7 @@
> > > #include <rte_random.h>
> > > #include <rte_cycles.h>
> > > #include <rte_malloc.h>
> > > +#include <rte_mbuf_dyn.h>
> > >
>
> [snip]
> > > +int
> > > +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > > + unsigned int req)
> > > +{
> > > + int ret;
> > > +
> > > + if (req != UINT_MAX && req >= 64) {
> >
> > Might be better to replace 64 with something like sizeof(mbuf->ol_flags) * CHAR_BIT or so.
>
> Might introduce a new macro like kernel:
>
> /**
> * FIELD_SIZEOF - get the size of a struct's field
> * @t: the target struct
> * @f: the target struct's field
> * Return: the size of @f in the struct definition without having a
> * declared instance of @t.
> */
> #define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
>
> Then: FIELD_SIZEOF(rte_mbuf, ol_flags) * CHAR_BIT
Good idea, thanks
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-22 22:51 0% ` Ananyev, Konstantin
2019-10-23 3:16 0% ` Wang, Haiyue
@ 2019-10-23 10:19 0% ` Olivier Matz
1 sibling, 0 replies; 200+ results
From: Olivier Matz @ 2019-10-23 10:19 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: dev, Andrew Rybchenko, Richardson, Bruce, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Morten Brørup,
Stephen Hemminger, Thomas Monjalon
On Tue, Oct 22, 2019 at 10:51:51PM +0000, Ananyev, Konstantin wrote:
>
> > Many features require to store data inside the mbuf. As the room in mbuf
> > structure is limited, it is not possible to have a field for each
> > feature. Also, changing fields in the mbuf structure can break the API
> > or ABI.
> >
> > This commit addresses these issues, by enabling the dynamic registration
> > of fields or flags:
> >
> > - a dynamic field is a named area in the rte_mbuf structure, with a
> > given size (>= 1 byte) and alignment constraint.
> > - a dynamic flag is a named bit in the rte_mbuf structure.
> >
> > The typical use case is a PMD that registers space for an offload
> > feature, when the application requests to enable this feature. As
> > the space in mbuf is limited, the space should only be reserved if it
> > is going to be used (i.e when the application explicitly asks for it).
> >
> > The registration can be done at any moment, but it is not possible
> > to unregister fields or flags for now.
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
> >
> > v2
> >
> > * Rebase on top of master: solve conflict with Stephen's patchset
> > (packet copy)
> > * Add new apis to register a dynamic field/flag at a specific place
> > * Add a dump function (sugg by David)
> > * Enhance field registration function to select the best offset, keeping
> > large aligned zones as much as possible (sugg by Konstantin)
> > * Use a size_t and unsigned int instead of int when relevant
> > (sugg by Konstantin)
> > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > (sugg by Konstantin)
> > * Remove unused argument in private function (sugg by Konstantin)
> > * Fix and simplify locking (sugg by Konstantin)
> > * Fix minor typo
> >
> > rfc -> v1
> >
> > * Rebase on top of master
> > * Change registration API to use a structure instead of
> > variables, getting rid of #defines (Stephen's comment)
> > * Update flag registration to use a similar API as fields.
> > * Change max name length from 32 to 64 (sugg. by Thomas)
> > * Enhance API documentation (Haiyue's and Andrew's comments)
> > * Add a debug log at registration
> > * Add some words in release note
> > * Did some performance tests (sugg. by Andrew):
> > On my platform, reading a dynamic field takes ~3 cycles more
> > than a static field, and ~2 cycles more for writing.
> >
> > app/test/test_mbuf.c | 145 ++++++-
> > doc/guides/rel_notes/release_19_11.rst | 7 +
> > lib/librte_mbuf/Makefile | 2 +
> > lib/librte_mbuf/meson.build | 6 +-
> > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > 8 files changed, 959 insertions(+), 5 deletions(-)
> > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> >
> > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > index b9c2b2500..01cafad59 100644
> > --- a/app/test/test_mbuf.c
> > +++ b/app/test/test_mbuf.c
> > @@ -28,6 +28,7 @@
> > #include <rte_random.h>
> > #include <rte_cycles.h>
> > #include <rte_malloc.h>
> > +#include <rte_mbuf_dyn.h>
> >
> > #include "test.h"
> >
> > @@ -657,7 +658,6 @@ test_attach_from_different_pool(struct rte_mempool *pktmbuf_pool,
> > rte_pktmbuf_free(clone2);
> > return -1;
> > }
> > -#undef GOTO_FAIL
> >
> > /*
> > * test allocation and free of mbufs
> > @@ -1276,6 +1276,143 @@ test_tx_offload(void)
> > return (v1 == v2) ? 0 : -EINVAL;
> > }
> >
> > +static int
> > +test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
> > +{
> > + const struct rte_mbuf_dynfield dynfield = {
> > + .name = "test-dynfield",
> > + .size = sizeof(uint8_t),
> > + .align = __alignof__(uint8_t),
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynfield dynfield2 = {
> > + .name = "test-dynfield2",
> > + .size = sizeof(uint16_t),
> > + .align = __alignof__(uint16_t),
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynfield dynfield3 = {
> > + .name = "test-dynfield3",
> > + .size = sizeof(uint8_t),
> > + .align = __alignof__(uint8_t),
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynfield dynfield_fail_big = {
> > + .name = "test-dynfield-fail-big",
> > + .size = 256,
> > + .align = 1,
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynfield dynfield_fail_align = {
> > + .name = "test-dynfield-fail-align",
> > + .size = 1,
> > + .align = 3,
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynflag dynflag = {
> > + .name = "test-dynflag",
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynflag dynflag2 = {
> > + .name = "test-dynflag2",
> > + .flags = 0,
> > + };
> > + const struct rte_mbuf_dynflag dynflag3 = {
> > + .name = "test-dynflag3",
> > + .flags = 0,
> > + };
> > + struct rte_mbuf *m = NULL;
> > + int offset, offset2, offset3;
> > + int flag, flag2, flag3;
> > + int ret;
> > +
> > + printf("Test mbuf dynamic fields and flags\n");
> > + rte_mbuf_dyn_dump(stdout);
> > +
> > + offset = rte_mbuf_dynfield_register(&dynfield);
> > + if (offset == -1)
> > + GOTO_FAIL("failed to register dynamic field, offset=%d: %s",
> > + offset, strerror(errno));
> > +
> > + ret = rte_mbuf_dynfield_register(&dynfield);
> > + if (ret != offset)
> > + GOTO_FAIL("failed to lookup dynamic field, ret=%d: %s",
> > + ret, strerror(errno));
> > +
> > + offset2 = rte_mbuf_dynfield_register(&dynfield2);
> > + if (offset2 == -1 || offset2 == offset || (offset2 & 1))
> > + GOTO_FAIL("failed to register dynamic field 2, offset2=%d: %s",
> > + offset2, strerror(errno));
> > +
> > + offset3 = rte_mbuf_dynfield_register_offset(&dynfield3,
> > + offsetof(struct rte_mbuf, dynfield1[1]));
> > + if (offset3 != offsetof(struct rte_mbuf, dynfield1[1]))
> > + GOTO_FAIL("failed to register dynamic field 3, offset=%d: %s",
> > + offset3, strerror(errno));
> > +
> > + printf("dynfield: offset=%d, offset2=%d, offset3=%d\n",
> > + offset, offset2, offset3);
> > +
> > + ret = rte_mbuf_dynfield_register(&dynfield_fail_big);
> > + if (ret != -1)
> > + GOTO_FAIL("dynamic field creation should fail (too big)");
> > +
> > + ret = rte_mbuf_dynfield_register(&dynfield_fail_align);
> > + if (ret != -1)
> > + GOTO_FAIL("dynamic field creation should fail (bad alignment)");
> > +
> > + ret = rte_mbuf_dynfield_register_offset(&dynfield_fail_align,
> > + offsetof(struct rte_mbuf, ol_flags));
> > + if (ret != -1)
> > + GOTO_FAIL("dynamic field creation should fail (not avail)");
> > +
> > + flag = rte_mbuf_dynflag_register(&dynflag);
> > + if (flag == -1)
> > + GOTO_FAIL("failed to register dynamic flag, flag=%d: %s",
> > + flag, strerror(errno));
> > +
> > + ret = rte_mbuf_dynflag_register(&dynflag);
> > + if (ret != flag)
> > + GOTO_FAIL("failed to lookup dynamic flag, ret=%d: %s",
> > + ret, strerror(errno));
> > +
> > + flag2 = rte_mbuf_dynflag_register(&dynflag2);
> > + if (flag2 == -1 || flag2 == flag)
> > + GOTO_FAIL("failed to register dynamic flag 2, flag2=%d: %s",
> > + flag2, strerror(errno));
> > +
> > + flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
> > + rte_bsf64(PKT_LAST_FREE));
> > + if (flag3 != rte_bsf64(PKT_LAST_FREE))
> > + GOTO_FAIL("failed to register dynamic flag 3, flag2=%d: %s",
> > + flag3, strerror(errno));
> > +
> > + printf("dynflag: flag=%d, flag2=%d, flag3=%d\n", flag, flag2, flag3);
> > +
> > + /* set, get dynamic field */
> > + m = rte_pktmbuf_alloc(pktmbuf_pool);
> > + if (m == NULL)
> > + GOTO_FAIL("Cannot allocate mbuf");
> > +
> > + *RTE_MBUF_DYNFIELD(m, offset, uint8_t *) = 1;
> > + if (*RTE_MBUF_DYNFIELD(m, offset, uint8_t *) != 1)
> > + GOTO_FAIL("failed to read dynamic field");
> > + *RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) = 1000;
> > + if (*RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) != 1000)
> > + GOTO_FAIL("failed to read dynamic field");
> > +
> > + /* set a dynamic flag */
> > + m->ol_flags |= (1ULL << flag);
> > +
> > + rte_mbuf_dyn_dump(stdout);
> > + rte_pktmbuf_free(m);
> > + return 0;
> > +fail:
> > + rte_pktmbuf_free(m);
> > + return -1;
> > +}
> > +#undef GOTO_FAIL
> > +
> > static int
> > test_mbuf(void)
> > {
> > @@ -1295,6 +1432,12 @@ test_mbuf(void)
> > goto err;
> > }
> >
> > + /* test registration of dynamic fields and flags */
> > + if (test_mbuf_dyn(pktmbuf_pool) < 0) {
> > + printf("mbuf dynflag test failed\n");
> > + goto err;
> > + }
> > +
> > /* create a specific pktmbuf pool with a priv_size != 0 and no data
> > * room size */
> > pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
> > diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
> > index 85953b962..9e9c94554 100644
> > --- a/doc/guides/rel_notes/release_19_11.rst
> > +++ b/doc/guides/rel_notes/release_19_11.rst
> > @@ -21,6 +21,13 @@ DPDK Release 19.11
> >
> > xdg-open build/doc/html/guides/rel_notes/release_19_11.html
> >
> > +* **Add support of dynamic fields and flags in mbuf.**
> > +
> > + This new feature adds the ability to dynamically register some room
> > + for a field or a flag in the mbuf structure. This is typically used
> > + for specific offload features, where adding a static field or flag
> > + in the mbuf is not justified.
> > +
> >
> > New Features
> > ------------
> > diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
> > index c8f6d2689..5a9bcee73 100644
> > --- a/lib/librte_mbuf/Makefile
> > +++ b/lib/librte_mbuf/Makefile
> > @@ -17,8 +17,10 @@ LIBABIVER := 5
> >
> > # all source are stored in SRCS-y
> > SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c rte_mbuf_pool_ops.c
> > +SRCS-$(CONFIG_RTE_LIBRTE_MBUF) += rte_mbuf_dyn.c
> >
> > # install includes
> > SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h rte_mbuf_ptype.h rte_mbuf_pool_ops.h
> > +SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_dyn.h
> >
> > include $(RTE_SDK)/mk/rte.lib.mk
> > diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build
> > index 6cc11ebb4..9137e8f26 100644
> > --- a/lib/librte_mbuf/meson.build
> > +++ b/lib/librte_mbuf/meson.build
> > @@ -2,8 +2,10 @@
> > # Copyright(c) 2017 Intel Corporation
> >
> > version = 5
> > -sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c')
> > -headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
> > +sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c',
> > + 'rte_mbuf_dyn.c')
> > +headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h',
> > + 'rte_mbuf_dyn.h')
> > deps += ['mempool']
> >
> > allow_experimental_apis = true
> > diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> > index fb0849ac1..5740b1e93 100644
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -198,9 +198,12 @@ extern "C" {
> > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> >
> > -/* add new RX flags here */
> > +/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> >
> > -/* add new TX flags here */
> > +#define PKT_FIRST_FREE (1ULL << 23)
> > +#define PKT_LAST_FREE (1ULL << 39)
> > +
> > +/* add new TX flags here, don't forget to update PKT_LAST_FREE */
> >
> > /**
> > * Indicate that the metadata field in the mbuf is in use.
> > @@ -738,6 +741,7 @@ struct rte_mbuf {
> > */
> > struct rte_mbuf_ext_shared_info *shinfo;
> >
> > + uint64_t dynfield1[2]; /**< Reserved for dynamic fields. */
> > } __rte_cache_aligned;
> >
> > /**
> > @@ -1684,6 +1688,20 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
> > */
> > #define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
> >
> > +/**
> > + * Copy dynamic fields from m_src to m_dst.
> > + *
> > + * @param m_dst
> > + * The destination mbuf.
> > + * @param m_src
> > + * The source mbuf.
> > + */
> > +static inline void
> > +rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> > +{
> > + memcpy(&mdst->dynfield1, msrc->dynfield1, sizeof(mdst->dynfield1));
> > +}
> > +
> > /* internal */
> > static inline void
> > __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> > @@ -1695,6 +1713,7 @@ __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> > mdst->hash = msrc->hash;
> > mdst->packet_type = msrc->packet_type;
> > mdst->timestamp = msrc->timestamp;
> > + rte_mbuf_dynfield_copy(mdst, msrc);
> > }
> >
> > /**
> > diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
> > new file mode 100644
> > index 000000000..9ef235483
> > --- /dev/null
> > +++ b/lib/librte_mbuf/rte_mbuf_dyn.c
> > @@ -0,0 +1,548 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2019 6WIND S.A.
> > + */
> > +
> > +#include <sys/queue.h>
> > +#include <stdint.h>
> > +#include <limits.h>
> > +
> > +#include <rte_common.h>
> > +#include <rte_eal.h>
> > +#include <rte_eal_memconfig.h>
> > +#include <rte_tailq.h>
> > +#include <rte_errno.h>
> > +#include <rte_malloc.h>
> > +#include <rte_string_fns.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_mbuf_dyn.h>
> > +
> > +#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
> > +
> > +struct mbuf_dynfield_elt {
> > + TAILQ_ENTRY(mbuf_dynfield_elt) next;
> > + struct rte_mbuf_dynfield params;
> > + size_t offset;
> > +};
> > +TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
> > +
> > +static struct rte_tailq_elem mbuf_dynfield_tailq = {
> > + .name = "RTE_MBUF_DYNFIELD",
> > +};
> > +EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
> > +
> > +struct mbuf_dynflag_elt {
> > + TAILQ_ENTRY(mbuf_dynflag_elt) next;
> > + struct rte_mbuf_dynflag params;
> > + unsigned int bitnum;
> > +};
> > +TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
> > +
> > +static struct rte_tailq_elem mbuf_dynflag_tailq = {
> > + .name = "RTE_MBUF_DYNFLAG",
> > +};
> > +EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
> > +
> > +struct mbuf_dyn_shm {
> > + /**
> > + * For each mbuf byte, free_space[i] != 0 if space is free.
> > + * The value is the size of the biggest aligned element that
> > + * can fit in the zone.
> > + */
> > + uint8_t free_space[sizeof(struct rte_mbuf)];
> > + /** Bitfield of available flags. */
> > + uint64_t free_flags;
> > +};
> > +static struct mbuf_dyn_shm *shm;
> > +
> > +/* Set the value of free_space[] according to the size and alignment of
> > + * the free areas. This helps to select the best place when reserving a
> > + * dynamic field. Assume tailq is locked.
> > + */
> > +static void
> > +process_score(void)
> > +{
> > + size_t off, align, size, i;
> > +
> > + /* first, erase previous info */
> > + for (i = 0; i < sizeof(struct rte_mbuf); i++) {
> > + if (shm->free_space[i])
> > + shm->free_space[i] = 1;
> > + }
> > +
> > + for (off = 0; off < sizeof(struct rte_mbuf); off++) {
> > + /* get the size of the free zone */
> > + for (size = 0; shm->free_space[off + size]; size++)
> > + ;
> > + if (size == 0)
> > + continue;
> > +
> > + /* get the alignment of biggest object that can fit in
> > + * the zone at this offset.
> > + */
> > + for (align = 1;
> > + (off % (align << 1)) == 0 && (align << 1) <= size;
> > + align <<= 1)
> > + ;
> > +
> > + /* save it in free_space[] */
> > + for (i = off; i < off + size; i++)
> > + shm->free_space[i] = RTE_MAX(align, shm->free_space[i]);
> > + }
> > +}
> > +
> > +/* Allocate and initialize the shared memory. Assume tailq is locked */
> > +static int
> > +init_shared_mem(void)
> > +{
> > + const struct rte_memzone *mz;
> > + uint64_t mask;
> > +
> > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > + mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
> > + sizeof(struct mbuf_dyn_shm),
> > + SOCKET_ID_ANY, 0,
> > + RTE_CACHE_LINE_SIZE);
> > + } else {
> > + mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
> > + }
> > + if (mz == NULL)
> > + return -1;
> > +
> > + shm = mz->addr;
> > +
> > +#define mark_free(field) \
> > + memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
> > + 1, sizeof(((struct rte_mbuf *)0)->field))
>
> Still think it would look nicer without multi-line macro defines/undef in the middle of the function.
I rather think that macro helps to make the code more readable, but it's
probably just a matter of taste. Will someone put a contract on me if I
keep it like this? If yes, I'll do the change ;)
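As a side note on the readability question: the macro is only a typed wrapper around memset(). Here is a standalone sketch of the same idea, using a hypothetical struct in place of rte_mbuf (the names fake_mbuf, a and b are illustrative, not DPDK identifiers):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* hypothetical stand-in for struct rte_mbuf */
struct fake_mbuf {
	uint64_t a;            /* some static field */
	uint64_t dynfield1[2]; /* area reserved for dynamic fields */
	uint64_t b;            /* another static field */
};

static uint8_t free_space[sizeof(struct fake_mbuf)];

/* same shape as the macro in init_shared_mem(): mark every byte
 * covered by 'field' as free in the free_space[] map */
#define mark_free(field) \
	memset(&free_space[offsetof(struct fake_mbuf, field)], \
	       1, sizeof(((struct fake_mbuf *)0)->field))
```

The offsetof/sizeof pair keeps the byte range tied to the field declaration, which is the readability argument being made above.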
> > +
> > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > + /* init free_space, keep it sync'd with
> > + * rte_mbuf_dynfield_copy().
> > + */
> > + memset(shm, 0, sizeof(*shm));
> > + mark_free(dynfield1);
> > +
> > + /* init free_flags */
> > + for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
> > + shm->free_flags |= mask;
> > +
> > + process_score();
> > + }
> > +#undef mark_free
> > +
> > + return 0;
> > +}
> > +
> > +/* check if this offset can be used */
> > +static int
> > +check_offset(size_t offset, size_t size, size_t align)
> > +{
> > + size_t i;
> > +
> > + if ((offset & (align - 1)) != 0)
> > + return -1;
> > + if (offset + size > sizeof(struct rte_mbuf))
> > + return -1;
> > +
> > + for (i = 0; i < size; i++) {
> > + if (!shm->free_space[i + offset])
> > + return -1;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +/* assume tailq is locked */
> > +static struct mbuf_dynfield_elt *
> > +__mbuf_dynfield_lookup(const char *name)
> > +{
> > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > + struct rte_tailq_entry *te;
> > +
> > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > +
> > + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> > + mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
> > + if (strcmp(name, mbuf_dynfield->params.name) == 0)
> > + break;
> > + }
> > +
> > + if (te == NULL) {
> > + rte_errno = ENOENT;
> > + return NULL;
> > + }
> > +
> > + return mbuf_dynfield;
> > +}
> > +
> > +int
> > +rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
> > +{
> > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > +
> > + if (shm == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_read_lock();
> > + mbuf_dynfield = __mbuf_dynfield_lookup(name);
> > + rte_mcfg_tailq_read_unlock();
> > +
> > + if (mbuf_dynfield == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + if (params != NULL)
> > + memcpy(params, &mbuf_dynfield->params, sizeof(*params));
> > +
> > + return mbuf_dynfield->offset;
> > +}
> > +
> > +static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
> > + const struct rte_mbuf_dynfield *params2)
> > +{
> > + if (strcmp(params1->name, params2->name))
> > + return -1;
> > + if (params1->size != params2->size)
> > + return -1;
> > + if (params1->align != params2->align)
> > + return -1;
> > + if (params1->flags != params2->flags)
> > + return -1;
> > + return 0;
> > +}
> > +
> > +/* assume tailq is locked */
> > +static int
> > +__rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> > + size_t req)
> > +{
> > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > + struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
> > + struct rte_tailq_entry *te = NULL;
> > + unsigned int best_zone = UINT_MAX;
> > + size_t i, offset;
> > + int ret;
> > +
> > + if (shm == NULL && init_shared_mem() < 0)
> > + return -1;
> > +
> > + mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
> > + if (mbuf_dynfield != NULL) {
> > + if (req != SIZE_MAX && req != mbuf_dynfield->offset) {
> > + rte_errno = EEXIST;
> > + return -1;
> > + }
> > + if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) < 0) {
> > + rte_errno = EEXIST;
> > + return -1;
> > + }
> > + return mbuf_dynfield->offset;
> > + }
> > +
> > + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> > + rte_errno = EPERM;
> > + return -1;
> > + }
> > +
> > + if (req == SIZE_MAX) {
> > + for (offset = 0;
> > + offset < sizeof(struct rte_mbuf);
> > + offset++) {
> > + if (check_offset(offset, params->size,
> > + params->align) == 0 &&
> > + shm->free_space[offset] < best_zone) {
>
> Probably worth to explain a bit more here about best_zone logic -
> trying to find offset with minimal score (minimal continuous length), etc.
Yes, will do.
> > + best_zone = shm->free_space[offset];
> > + req = offset;
> > + }
> > + }
> > + if (req == SIZE_MAX) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > + } else {
> > + if (check_offset(req, params->size, params->align) < 0) {
> > + rte_errno = EBUSY;
> > + return -1;
> > + }
> > + }
> > +
> > + offset = req;
> > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > +
> > + te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
> > + if (te == NULL)
> > + return -1;
> > +
> > + mbuf_dynfield = rte_zmalloc("mbuf_dynfield", sizeof(*mbuf_dynfield), 0);
> > + if (mbuf_dynfield == NULL) {
> > + rte_free(te);
> > + return -1;
> > + }
> > +
> > + ret = strlcpy(mbuf_dynfield->params.name, params->name,
> > + sizeof(mbuf_dynfield->params.name));
> > + if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
> > + rte_errno = ENAMETOOLONG;
> > + rte_free(mbuf_dynfield);
> > + rte_free(te);
> > + return -1;
> > + }
> > + memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
> > + mbuf_dynfield->offset = offset;
> > + te->data = mbuf_dynfield;
> > +
> > + TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
> > +
> > + for (i = offset; i < offset + params->size; i++)
> > + shm->free_space[i] = 0;
> > + process_score();
> > +
> > + RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n",
> > + params->name, params->size, params->align, params->flags,
> > + offset);
> > +
> > + return offset;
> > +}
> > +
> > +int
> > +rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> > + size_t req)
> > +{
> > + int ret;
> > +
> > + if (params->size >= sizeof(struct rte_mbuf)) {
> > + rte_errno = EINVAL;
> > + return -1;
> > + }
> > + if (!rte_is_power_of_2(params->align)) {
> > + rte_errno = EINVAL;
> > + return -1;
> > + }
> > + if (params->flags != 0) {
> > + rte_errno = EINVAL;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_write_lock();
> > + ret = __rte_mbuf_dynfield_register_offset(params, req);
> > + rte_mcfg_tailq_write_unlock();
> > +
> > + return ret;
> > +}
> > +
> > +int
> > +rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
> > +{
> > + return rte_mbuf_dynfield_register_offset(params, SIZE_MAX);
> > +}
> > +
> > +/* assume tailq is locked */
> > +static struct mbuf_dynflag_elt *
> > +__mbuf_dynflag_lookup(const char *name)
> > +{
> > + struct mbuf_dynflag_list *mbuf_dynflag_list;
> > + struct mbuf_dynflag_elt *mbuf_dynflag;
> > + struct rte_tailq_entry *te;
> > +
> > + mbuf_dynflag_list = RTE_TAILQ_CAST(
> > + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> > +
> > + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> > + mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
> > + if (strncmp(name, mbuf_dynflag->params.name,
> > + RTE_MBUF_DYN_NAMESIZE) == 0)
> > + break;
> > + }
> > +
> > + if (te == NULL) {
> > + rte_errno = ENOENT;
> > + return NULL;
> > + }
> > +
> > + return mbuf_dynflag;
> > +}
> > +
> > +int
> > +rte_mbuf_dynflag_lookup(const char *name,
> > + struct rte_mbuf_dynflag *params)
> > +{
> > + struct mbuf_dynflag_elt *mbuf_dynflag;
> > +
> > + if (shm == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_read_lock();
> > + mbuf_dynflag = __mbuf_dynflag_lookup(name);
> > + rte_mcfg_tailq_read_unlock();
> > +
> > + if (mbuf_dynflag == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + if (params != NULL)
> > + memcpy(params, &mbuf_dynflag->params, sizeof(*params));
> > +
> > + return mbuf_dynflag->bitnum;
> > +}
> > +
> > +static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
> > + const struct rte_mbuf_dynflag *params2)
> > +{
> > + if (strcmp(params1->name, params2->name))
> > + return -1;
> > + if (params1->flags != params2->flags)
> > + return -1;
> > + return 0;
> > +}
> > +
> > +/* assume tailq is locked */
> > +static int
> > +__rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > + unsigned int req)
> > +{
> > + struct mbuf_dynflag_list *mbuf_dynflag_list;
> > + struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
> > + struct rte_tailq_entry *te = NULL;
> > + unsigned int bitnum;
> > + int ret;
> > +
> > + if (shm == NULL && init_shared_mem() < 0)
> > + return -1;
> > +
> > + mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
> > + if (mbuf_dynflag != NULL) {
> > + if (req != UINT_MAX && req != mbuf_dynflag->bitnum) {
> > + rte_errno = EEXIST;
> > + return -1;
> > + }
> > + if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0) {
> > + rte_errno = EEXIST;
> > + return -1;
> > + }
> > + return mbuf_dynflag->bitnum;
> > + }
> > +
> > + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> > + rte_errno = EPERM;
> > + return -1;
> > + }
> > +
> > + if (req == UINT_MAX) {
> > + if (shm->free_flags == 0) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > + bitnum = rte_bsf64(shm->free_flags);
> > + } else {
> > + if ((shm->free_flags & (1ULL << req)) == 0) {
> > + rte_errno = EBUSY;
> > + return -1;
> > + }
> > + bitnum = req;
> > + }
> > +
> > + mbuf_dynflag_list = RTE_TAILQ_CAST(
> > + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> > +
> > + te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
> > + if (te == NULL)
> > + return -1;
> > +
> > + mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag), 0);
> > + if (mbuf_dynflag == NULL) {
> > + rte_free(te);
> > + return -1;
> > + }
> > +
> > + ret = strlcpy(mbuf_dynflag->params.name, params->name,
> > + sizeof(mbuf_dynflag->params.name));
> > + if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
> > + rte_free(mbuf_dynflag);
> > + rte_free(te);
> > + rte_errno = ENAMETOOLONG;
> > + return -1;
> > + }
> > + mbuf_dynflag->bitnum = bitnum;
> > + te->data = mbuf_dynflag;
> > +
> > + TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
> > +
> > + shm->free_flags &= ~(1ULL << bitnum);
> > +
> > + RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
> > + params->name, params->flags, bitnum);
> > +
> > + return bitnum;
> > +}
> > +
> > +int
> > +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > + unsigned int req)
> > +{
> > + int ret;
> > +
> > + if (req != UINT_MAX && req >= 64) {
>
> Might be better to replace 64 with something like sizeof(mbuf->ol_flags) * CHAR_BIT or so.
Will do.
> Apart from that:
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Thanks for the review
Olivier
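[Editor's aside: for readers tracing the scoring logic of process_score() in the patch above, the algorithm can be reproduced standalone. This is a hypothetical sketch, not DPDK code: AREA_SIZE stands in for sizeof(struct rte_mbuf), and an explicit bound check is added to the zone-length scan. Each free byte ends up tagged with the size of the biggest aligned element that fits in a zone covering it.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define AREA_SIZE 16 /* stand-in for sizeof(struct rte_mbuf) */

/* free_space[i] != 0 when byte i is free; after scoring, the value is
 * the size of the biggest aligned element that can fit in the zone. */
static uint8_t free_space[AREA_SIZE];

static void
process_score(void)
{
	size_t off, align, size, i;

	/* erase previous scores, keep only the free/busy information */
	for (i = 0; i < AREA_SIZE; i++)
		if (free_space[i])
			free_space[i] = 1;

	for (off = 0; off < AREA_SIZE; off++) {
		/* length of the free zone starting at this offset */
		for (size = 0;
		     off + size < AREA_SIZE && free_space[off + size];
		     size++)
			;
		if (size == 0)
			continue;

		/* biggest power-of-two alignment usable at this offset */
		for (align = 1;
		     (off % (align << 1)) == 0 && (align << 1) <= size;
		     align <<= 1)
			;

		for (i = off; i < off + size; i++)
			if (align > free_space[i])
				free_space[i] = (uint8_t)align;
	}
}
```

For example, with bytes 4..11 free, offset 4 can host at most a 4-byte aligned object (4 is not 8-aligned), so the whole zone scores 4; the register path then prefers offsets with the smallest sufficient score.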
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-22 17:44 0% ` Ananyev, Konstantin
2019-10-22 22:21 0% ` Ananyev, Konstantin
@ 2019-10-23 10:05 0% ` Akhil Goyal
1 sibling, 0 replies; 200+ results
From: Akhil Goyal @ 2019-10-23 10:05 UTC (permalink / raw)
To: Ananyev, Konstantin, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Zhang, Roy Fan, Doherty, Declan
Cc: 'Anoob Joseph', Hemant Agrawal
Hi Konstantin,
>
> Hi Akhil,
>
>
> > > > Added my comments inline with your draft.
> > > > [snip]..
> > > >
> > > > >
> > > > > Ok, then my suggestion:
> > > > > Let's at least write down all points about crypto-dev approach where we
> > > > > disagree and then probably try to resolve them one by one....
> > > > > If we fail to make an agreement/progress in next week or so,
> > > > > (and no more reviews from the community)
> > > > > will have bring that subject to TB meeting to decide.
> > > > > Sounds fair to you?
> > > > Agreed
> > > > >
> > > > > List is below.
> > > > > Please add/correct me, if I missed something.
> > > > >
> > > > > Konstantin
> > > >
> > > > Before going into comparison, we should define the requirement as well.
> > >
> > > Good point.
> > >
> > > > What I understood from the patchset,
> > > > "You need a synchronous API to perform crypto operations on raw data
> using
> > > SW PMDs"
> > > > So,
> > > > - no crypto-ops,
> > > > - no separate enq-deq, only single process API for data path
> > > > - Do not need any value addition to the session parameters.
> > > > (You would need some parameters from the crypto-op which
> > > > Are constant per session and since you wont use crypto-op,
> > > > You need some place to store that)
> > >
> > > Yes, this is correct, I think.
> > >
> > > >
> > > > Now as per your mail, the comparison
> > > > 1. extra input parameters to create/init rte_(cpu)_sym_session.
> > > >
> > > > Will leverage existing 6B gap inside rte_crypto_*_xform between 'algo'
> and
> > > 'key' fields.
> > > > New fields will be optional and would be used by PMD only when cpu-
> crypto
> > > session is requested.
> > > > For lksd-crypto session PMD is free to ignore these fields.
> > > > No ABI breakage is required.
> > > >
> > > > [Akhil] Agreed, no issues.
> > > >
> > > > 2. cpu-crypto create/init.
> > > > a) Our suggestion - introduce new API for that:
> > > > - rte_crypto_cpu_sym_init() that would init completely opaque
> > > rte_crypto_cpu_sym_session.
> > > > - struct rte_crypto_cpu_sym_session_ops {(*process)(...); (*clear);
> > > /*whatever else we'll need *'};
> > > > - rte_crypto_cpu_sym_get_ops(const struct rte_crypto_sym_xform
> > > *xforms)
> > > > that would return const struct rte_crypto_cpu_sym_session_ops
> *based
> > > on input xforms.
> > > > Advantages:
> > > > 1) totally opaque data structure (no ABI breakages in future), PMD
> > > writer is totally free
> > > > with it format and contents.
> > > >
> > > > [Akhil] It will have breakage at some point till we don't hit the union size.
> > >
> > > Not sure, what union you are talking about?
> >
> > Union of xforms in rte_security_session_conf
>
> Hmm, how does it relates here?
> I thought we discussing pure rte_cryptodev_sym_session, no?
>
> >
> > >
> > > > Rather I don't suspect there will be more parameters added.
> > > > Or do we really care about the ABI breakage when the argument is about
> > > > the correct place to add a piece of code or do we really agree to add code
> > > > anywhere just to avoid that breakage.
> > >
> > > I am talking about maintaining it in future.
> > > if your struct is not seen externally, no chances to introduce ABI breakage.
> > >
> > > >
> > > > 2) each session entity is self-contained, user doesn't need to bring along
> > > dev_id etc.
> > > > dev_id is needed only at init stage, after that user will use session ops
> > > to perform
> > > > all operations on that session (process(), clear(), etc.).
> > > >
> > > > [Akhil] There is nothing called as session ops in current DPDK.
> > >
> > > True, but it doesn't mean we can't/shouldn't have it.
> >
> > We can have it if it is not adding complexity for the user. Creating 2 different
> code
> > Paths for user is not desirable for the stack developers.
> >
> > >
> > > > What you are proposing
> > > > is a new concept which doesn't have any extra benefit, rather it is adding
> > > complexity
> > > > to have two different code paths for session create.
> > > >
> > > >
> > > > 3) User can decide does he wants to store ops[] pointer on a per session
> > > basis,
> > > > or on a per group of same sessions, or...
> > > >
> > > > [Akhil] Will the user really care which process API should be called from the
> > > PMD.
> > > > Rather it should be driver's responsibility to store that in the session private
> > > data
> > > > which would be opaque to the user. As per my suggestion same process
> > > function can
> > > > be added to multiple sessions or a single session can be managed inside the
> > > PMD.
> > >
> > > In that case we either need to have a function per session (stored internally),
> > > or make decision (branches) at run-time.
> > > But as I said in other mail - I am ok to add small shim structure here:
> > > either rte_crypto_cpu_sym_session { void *ses; struct
> > > rte_crypto_cpu_sym_session_ops ops; }
> > > or rte_crypto_cpu_sym_session { void *ses; struct
> > > rte_crypto_cpu_sym_session_ops *ops; }
> > > And merge rte_crypto_cpu_sym_init() and rte_crypto_cpu_sym_get_ops()
> into
> > > one (init).
> >
> > Again that will be a separate API call from the user perspective which is not
> good.
> >
> > >
> > > >
> > > >
> > > > 4) No mandatory mempools for private sessions. User can allocate
> > > memory for cpu-crypto
> > > > session whenever he likes.
> > > >
> > > > [Akhil] you mean session private data?
> > >
> > > Yes.
> > >
> > > > You would need that memory anyways, user will be
> > > > allocating that already. You do not need to manage that.
> > >
> > > What I am saying - right now user has no choice but to allocate it via
> mempool.
> > > Which is probably not the best options for all cases.
> > >
> > > >
> > > > Disadvantages:
> > > > 5) Extra changes in control path
> > > > 6) User has to store session_ops pointer explicitly.
> > > >
> > > > [Akhil] More disadvantages:
> > > > - All supporting PMDs will need to maintain TWO types of session for the
> > > > same crypto processing. Suppose a fix or a new feature(or algo) is added,
> PMD
> > > owner
> > > > will need to add code in both the session create APIs. Hence more
> > > maintenance and
> > > > error prone.
> > >
> > > I think majority of code for both paths will be common, plus even we'll reuse
> > > current sym_session_init() -
> > > changes in PMD session_init() code will be unavoidable.
> > > But yes, it will be new entry in devops, that PMD will have to support.
> > > Ok to add it as 7) to the list.
> > >
> > > > - Stacks which will be using these new APIs also need to maintain two
> > > > code path for the same processing while doing session initialization
> > > > for sync and async
> > >
> > > That's the same as #5 above, I think.
> > >
> > > >
> > > >
> > > > b) Your suggestion - reuse existing rte_cryptodev_sym_session_init() and
> > > existing rte_cryptodev_sym_session
> > > > structure.
> > > > Advantages:
> > > > 1) allows to reuse same struct and init/create/clear() functions.
> > > > Probably less changes in control path.
> > > > Disadvantages:
> > > > 2) rte_cryptodev_sym_session. sess_data[] is indexed by driver_id,
> > > which means that
> > > > we can't use the same rte_cryptodev_sym_session to hold private
> > > sessions pointers
> > > > for both sync and async mode for the same device.
> > > > So the only option we have - make PMD devops-
> > > >sym_session_configure()
> > > > always create a session that can work in both cpu and lksd modes.
> > > > For some implementations that would probably mean that under the
> > > hood PMD would create
> > > > 2 different session structs (sync/async) and then use one or another
> > > depending on from what API been called.
> > > > Seems doable, but ...:
> > > > - will contradict with statement from 1:
> > > > " New fields will be optional and would be used by PMD only when
> > > cpu-crypto session is requested."
> > > > Now it becomes mandatory for all apps to specify cpu-crypto
> > > related parameters too,
> > > > even if they don't plan to use that mode - i.e. behavior change,
> > > existing app change.
> > > > - might cause extra space overhead.
> > > >
> > > > [Akhil] It will not contradict with #1, you will only have few checks in the
> > > session init PMD
> > > > Which support this mode, find appropriate values and set the appropriate
> > > process() in it.
> > > > User should be able to call, legacy enq-deq as well as the new process()
> > > without any issue.
> > > > User would be at runtime will be able to change the datapath.
> > > > So this is not a disadvantage, it would be additional flexibility for the user.
> > >
> > > Ok, but that's what I am saying - if PMD would *always* have to create a
> > > session that can handle
> > > both modes (sync/async), then user would *always* have to provide
> parameters
> > > for both modes too.
> > > Otherwise if let say user didn't setup sync specific parameters at all, what
> PMD
> > > should do?
> > > - return with error?
> > > - init session that can be used with async path only?
> > > My current assumption is #1.
> > > If #2, then how user will be able to distinguish is that session valid for both
> > > modes, or only for one?
> >
> > I would say a 3rd option, do nothing if sync params are not set.
> > Probably have a debug print in the PMD(which support sync mode) to specify
> that
> > session is not configured properly for sync mode.
>
> So, just print warning and proceed with init session that can be used with async
> path only?
> Then it sounds the same as #2 above.
> Which actually means that sync mode parameters for sym_session_init()
> becomes optional.
> Then we need an API to provide to the user information what modes
> (sync+async/async only) is supported by that session for given dev_id.
> And user would have to query/retain this information at control-path,
> and store it somewhere in user-space together with session pointer and dev_ids
> to use later at data-path (same as we do now for session type).
> That definitely requires changes in control-path to start using it.
> Plus the fact that this value can differ for different dev_ids for the same session -
> doesn't make things easier here.
An API won't be required to specify that; a feature flag will be sufficient, not a big change
from the application perspective.
Here is some pseudo code just to elaborate my understanding; it will need some
polishing. From the application,
if (dev_info->feature_flags & RTE_CRYPTODEV_FF_SYNC) {
    /* set additional params in crypto xform */
}
Now in the driver,
pmd_sym_session_configure(dev, xform, sess, mempool) {
    ...
    if (dev_info->feature_flags & RTE_CRYPTODEV_FF_SYNC
            && xform-> /* sync params are set */) {
        /* assign the process function pointer in sess->priv_data */
    } /* It may return an error if FF_SYNC is set and params are not correct.
         It would be up to the driver whether it supports both SYNC and ASYNC. */
}
Now the new sync API
pmd_process(...) {
    if (dev_info->feature_flags & RTE_CRYPTODEV_FF_SYNC
            && sess_priv->process != NULL)
        sess_priv->process(...);
    else
        ASSERT("sync mode not configured properly or not supported");
}
In the data path, there is no extra processing happening.
Even in case of your suggestion, you should have these types of error checks;
you cannot blindly trust the application that the pointers are correct.
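The control flow being argued over can be modelled in a few lines of plain C. This is a hypothetical sketch: FF_SYNC, struct session and the callbacks below are illustrative names, not the real cryptodev API. The point it shows is that the data path reduces to a single NULL check on a per-session function pointer set at configure time.

```c
#include <assert.h>
#include <stddef.h>

#define FF_SYNC 0x1 /* stand-in for RTE_CRYPTODEV_FF_SYNC */

struct session {
	/* set at configure time only when the device supports sync mode
	 * and the xform carries the sync parameters */
	int (*process)(struct session *s, const char *data);
};

static int
sync_process(struct session *s, const char *data)
{
	(void)s;
	(void)data;
	return 0; /* pretend the crypto operation succeeded */
}

/* configure: assign the process pointer only when both conditions hold */
static void
session_configure(struct session *s, unsigned int dev_flags,
		int has_sync_params)
{
	s->process = NULL;
	if ((dev_flags & FF_SYNC) && has_sync_params)
		s->process = sync_process;
}

/* data path: one NULL check, no extra processing otherwise */
static int
do_process(struct session *s, const char *data)
{
	if (s->process == NULL)
		return -1; /* sync mode not configured or not supported */
	return s->process(s, data);
}
```

A session configured without sync parameters simply keeps a NULL pointer, so the error surfaces deterministically on the first process() call rather than silently corrupting the fast path.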
>
> > Internally the PMD will not store the process() API in the session priv data
> > And while calling the first packet, devops->process will give an assert that
> session
> > Is not configured for sync mode. The session validation would be done in any
> case
> > your suggestion or mine. So no extra overhead at runtime.
>
> I believe that after session_init() the user should get either an error or
> a valid session handle that can be used at runtime.
> Pushing session validation to runtime doesn't seem like a good idea.
>
It may get a warning from the PMD that FF_SYNC is set but params are not
correct/available. See above.
> >
> > >
> > >
> > > >
> > > >
> > > > 3) not possible to store device (not driver) specific data within the
> > > session, but I think it is not really needed right now.
> > > > So probably minor compared to 2.b.2.
> > > >
> > > > [Akhil] So let's omit this for the current discussion. And I hope we
> > > > can find some way to deal with it.
> > >
> > > I don't think there is an easy way to fix that with existing API.
> > >
> > > >
> > > >
> > > > Actually #3 follows from #2, but decided to have them separated.
> > > >
> > > > 3. process() parameters/behavior
> > > > a) Our suggestion: user stores ptr to session ops (or to (*process)
> > > > itself) and just does:
> > > > session_ops->process(sess, ...);
> > > > Advantages:
> > > > 1) fastest possible execution path
> > > > 2) no need to carry on dev_id for data-path
> > > >
> > > > [Akhil] I don't see any overhead of carrying dev id, at least it would
> > > > be inline with the current DPDK methodology.
> > >
> > > If we'll add process() into rte_cryptodev itself (same as we have
> > > enqueue_burst/dequeue_burst),
> > > then it will be an ABI breakage.
> > > Also there are discussions to get rid of that approach completely:
> > > http://mails.dpdk.org/archives/dev/2019-September/144674.html
> > > So I am not sure this is a recommended way these days.
> >
> > We can either have it in rte_cryptodev or in rte_cryptodev_ops whichever
> > is good for you.
> >
> > Whether it is ABI breakage or not, as per your requirements, this is the correct
> > approach. Do you agree with this or not?
>
> I think it is a possible approach, but not the best one:
> it looks quite flaky to me (see all the uncertainty with sym_session_init
> above), plus it introduces extra overhead at the data-path.
Uncertainties can be handled appropriately using a feature flag,
and as per my understanding there is no extra overhead in the data path.
>
> >
> > Now handling the API/ABI breakage is a separate story. In the 19.11
> > release we are not much concerned about ABI breakages; this was discussed
> > in the community. So adding a new dev_ops wouldn't have been an issue.
> > Now since we are so close to the RC1 deadline, we should come up with
> > some other solution for the next release, maybe having a PMD API in
> > 20.02 and converting it into a formal one in 20.11.
> >
> >
> > >
> > > > What you are suggesting is a new way to get things done without much
> > > > benefit.
> > >
> > > It would help with ABI stability plus better performance; isn't that enough?
> > >
> > > > Also I don't see any performance difference as crypto workload is
> > > > heavier than code cycles, so that won't matter.
> > >
> > > It depends.
> > > Suppose a function call costs you ~30 cycles.
> > > If you have a burst of big packets (let's say crypto for each will take
> > > ~2K cycles) that belong to the same session, then yes, you wouldn't
> > > notice these extra 30 cycles at all.
> > > If you have a burst of small packets (let's say crypto for each will
> > > take ~300 cycles), each belonging to a different session, then it will
> > > cost you ~10% extra.
> >
> > Let us do some profiling on openssl with both the approaches and find out the
> > difference.
> >
> > >
> > > > So IMO, there is no advantage in your suggestion as well.
> > > >
> > > >
> > > > Disadvantages:
> > > > 3) user has to carry on session_ops pointer explicitly
> > > > b) Your suggestion: add (*cpu_process) inside rte_cryptodev_ops and
> > > > then:
> > > > rte_crypto_cpu_sym_process(uint8_t dev_id,
> > > > rte_cryptodev_sym_session *sess, /*data parameters*/) {...
> > > > rte_cryptodevs[dev_id].dev_ops->cpu_process(ses, ...);
> > > > /* and then inside PMD specific process: */
> > > > pmd_private_session = sess->sess_data[this_pmd_driver_id].data;
> > > > /* and then most likely either */
> > > > pmd_private_session->process(pmd_private_session, ...);
> > > > /* or jump based on session/input data */
> > > > Advantages:
> > > > 1) don't see any...
> > > > Disadvantages:
> > > > 2) User has to carry on dev_id inside data-path
> > > > 3) Extra level of indirection (plus data dependency) - both for data
> > > > and instructions.
> > > > Possible slowdown compared to a) (not measured).
> > > >
> > > > Having said all this, if the disagreements cannot be resolved, you
> > > > can go for a PMD API specific to your PMDs,
> > >
> > > I don't think it is good idea.
> > > A PMD specific API is sort of a deprecated path; also there is no clean
> > > way to use it within the libraries.
> >
> > I know that this is a deprecated path; we can use it as long as we are
> > still allowed to break ABI/API.
> >
> > >
> > > > because as per my understanding the solution doesn't look scalable to
> > > > other PMDs.
> > > > Your approach is aligned only to Intel and will not benefit others
> > > > like openssl, which is used by all vendors.
> > >
> > > I feel quite the opposite; from my perspective the majority of SW
> > > backed PMDs will benefit from it.
> > > And I don't see anything Intel specific in my proposals above.
> > > About the openssl PMD: I am not an expert here, but looking at the
> > > code, I think it will fit really well.
> > > Look yourself at its internal functions:
> > > process_openssl_auth_op/process_openssl_crypto_op,
> > > I think they are doing exactly the same - they use a sync API
> > > underneath, and they are session based (AFAIK you don't need any
> > > device/queue data; everything that is needed for crypto/auth is stored
> > > inside the session).
> > >
> > By vendor specific, I mean:
> > - no PMD would like to have 2 different variants of session init APIs
> > for doing the same stuff.
> > - stacks will become vendor specific while using 2 separate session
> > create APIs. No stack would like to support 2 variants of session
> > create: one for HW PMDs and one for SW PMDs.
>
> I think what you refer on has nothing to do with 'vendor specific'.
> I would name it 'extra overhead for PMD and stack writers'.
> Yes, for sure there is extra overhead (as always with new API) -
> for both producer (PMD writer) and consumer (stack writer):
> New function(s) to support, probably more tests to create/run, etc.
> Though this API is optional - if PMD/stack maintainer doesn't see
> value in it, they are free not to support it.
> On the other side, re-using rte_cryptodev_sym_session_init()
> wouldn't help anyway - both data-path and control-path would differ
> from async mode anyway.
> BTW, right now to support different HW flavors
> we do have 4 different control and data-paths for both
> ipsec-secgw and librte_ipsec:
> lksd-none/lksd-proto/inline-crypto/inline-proto.
> And that is considered to be ok.
No, that is not OK. We cannot add new paths for every other case.
Those 4 are controlled using 2 sets of APIs. We should try our best to
have minimum overhead for the application writer. This pain was also
discussed at one of the DPDK conferences as well.
DPDK is not a standalone entity; there are stacks running over it always.
We should not add an API for every other use case when we have an
alternative approach with the existing API set.
Now introducing another one would add to that pain and a lot of work for
both producer and consumer.
It would be interesting to see how much performance difference there will
be between the two approaches. As per my understanding it won't be much
compared to the extra work that you will be inducing.
-Akhil
> Honestly, I don't understand why SW backed implementations
> can't have their own path that would suite them most.
> Konstantin
>
>
>
>
>
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [RFC 6/6] build: add drivers abi checks to meson
2019-10-23 1:07 9% [dpdk-dev] [RFC 0/6] Add ABI compatibility checks to the meson build Kevin Laatz
` (3 preceding siblings ...)
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 5/6] build: add lib abi checks to meson Kevin Laatz
@ 2019-10-23 1:07 14% ` Kevin Laatz
4 siblings, 0 replies; 200+ results
From: Kevin Laatz @ 2019-10-23 1:07 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, thomas, ray.kinsella, Kevin Laatz
This patch adds the ABI compatibility check for the drivers directory to
the meson build. If enabled, the ABI compatibility checks will run for all
.so's in the drivers directory (provided a matching dump file exists). The
build will fail if an ABI incompatibility is detected.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
drivers/meson.build | 17 ++++++++++++++++-
1 file changed, 16 insertions(+), 1 deletion(-)
diff --git a/drivers/meson.build b/drivers/meson.build
index 3202ba00d..0fda5a9e0 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -158,7 +158,9 @@ foreach class:dpdk_driver_classes
version_map, '@INPUT@'],
capture: true,
input: static_lib,
- output: lib_name + '.exp_chk')
+ output: lib_name + '.exp_chk',
+ install: false,
+ build_by_default: get_option('abi_compat_checks'))
endif
shared_lib = shared_library(lib_name,
@@ -183,6 +185,19 @@ foreach class:dpdk_driver_classes
include_directories: includes,
dependencies: static_objs)
+ if is_experimental == 0
+ custom_target('lib' + lib_name + '.abi_chk',
+ command: [abidiff,
+ meson.source_root() + '/drivers/abi/lib'
+ + lib_name + '.dump',
+ '@INPUT@'],
+ input: shared_lib,
+ output: 'lib' + lib_name + '.abi_chk',
+ capture: true,
+ install: false,
+ build_by_default: get_option('abi_compat_checks'))
+ endif
+
dpdk_drivers += static_lib
set_variable('shared_@0@'.format(lib_name), shared_dep)
--
2.17.1
^ permalink raw reply [relevance 14%]
* [dpdk-dev] [RFC 5/6] build: add lib abi checks to meson
2019-10-23 1:07 9% [dpdk-dev] [RFC 0/6] Add ABI compatibility checks to the meson build Kevin Laatz
` (2 preceding siblings ...)
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 4/6] build: add meson option for abi related checks Kevin Laatz
@ 2019-10-23 1:07 14% ` Kevin Laatz
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 6/6] build: add drivers " Kevin Laatz
4 siblings, 0 replies; 200+ results
From: Kevin Laatz @ 2019-10-23 1:07 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, thomas, ray.kinsella, Kevin Laatz
This patch adds the ABI compatibility check for the lib directory to the
meson build. If enabled, the ABI compatibility checks will run for all
.so's in the lib directory (provided a matching dump file exists). The
build will fail if an ABI incompatibility is detected.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
buildtools/meson.build | 4 ++++
lib/meson.build | 17 ++++++++++++++++-
2 files changed, 20 insertions(+), 1 deletion(-)
diff --git a/buildtools/meson.build b/buildtools/meson.build
index 1ec2c2f95..a895c791c 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -7,6 +7,10 @@ pmdinfo = find_program('gen-pmdinfo-cfile.sh')
check_experimental_syms = find_program('check-experimental-syms.sh')
+if get_option('abi_compat_checks')
+ abidiff = find_program('abidiff')
+endif
+
# set up map-to-def script using python, either built-in or external
python3 = import('python').find_installation(required: false)
if python3.found()
diff --git a/lib/meson.build b/lib/meson.build
index 7849ac9f7..da180fb37 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -146,7 +146,9 @@ foreach l:libraries
version_map, '@INPUT@'],
capture: true,
input: static_lib,
- output: name + '.exp_chk')
+ output: name + '.exp_chk',
+ install: false,
+ build_by_default: get_option('abi_compat_checks'))
endif
shared_lib = shared_library(libname,
@@ -164,6 +166,19 @@ foreach l:libraries
include_directories: includes,
dependencies: shared_deps)
+ if is_experimental == 0
+ custom_target(dir_name + '.abi_chk',
+ command: [abidiff,
+ meson.source_root() + '/lib/abi/'
+ + dir_name + '.dump',
+ '@INPUT@'],
+ input: shared_lib,
+ output: dir_name + '.abi_chk',
+ capture: true,
+ install: false,
+ build_by_default: get_option('abi_compat_checks'))
+ endif
+
dpdk_libraries = [shared_lib] + dpdk_libraries
dpdk_static_libraries = [static_lib] + dpdk_static_libraries
endif # sources.length() > 0
--
2.17.1
^ permalink raw reply [relevance 14%]
* [dpdk-dev] [RFC 4/6] build: add meson option for abi related checks
2019-10-23 1:07 9% [dpdk-dev] [RFC 0/6] Add ABI compatibility checks to the meson build Kevin Laatz
2019-10-23 1:07 3% ` [dpdk-dev] [RFC 1/6] build: enable debug info by default in meson builds Kevin Laatz
2019-10-23 1:07 22% ` [dpdk-dev] [RFC 3/6] devtools: add abi dump generation script Kevin Laatz
@ 2019-10-23 1:07 14% ` Kevin Laatz
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 5/6] build: add lib abi checks to meson Kevin Laatz
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 6/6] build: add drivers " Kevin Laatz
4 siblings, 0 replies; 200+ results
From: Kevin Laatz @ 2019-10-23 1:07 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, thomas, ray.kinsella, Kevin Laatz
This patch adds a new meson option for running ABI compatibility checks
during the build. If enabled, the lib and drivers .so files will be
compared against any existing ABI dump files in lib|drivers/abi of the
source directory. If there are any incompatibilities, the build will fail
and display the incompatibility.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
---
meson_options.txt | 2 ++
1 file changed, 2 insertions(+)
diff --git a/meson_options.txt b/meson_options.txt
index 000e38fd9..aefab391a 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -1,5 +1,7 @@
# Please keep these options sorted alphabetically.
+option('abi_compat_checks', type: 'boolean', value: true,
+ description: 'enable abi compatibility checks to run during the build')
option('allow_invalid_socket_id', type: 'boolean', value: false,
description: 'allow out-of-range NUMA socket id\'s for platforms that don\'t report the value correctly')
option('drivers_install_subdir', type: 'string', value: 'dpdk/pmds-<VERSION>',
--
2.17.1
^ permalink raw reply [relevance 14%]
* [dpdk-dev] [RFC 3/6] devtools: add abi dump generation script
2019-10-23 1:07 9% [dpdk-dev] [RFC 0/6] Add ABI compatibility checks to the meson build Kevin Laatz
2019-10-23 1:07 3% ` [dpdk-dev] [RFC 1/6] build: enable debug info by default in meson builds Kevin Laatz
@ 2019-10-23 1:07 22% ` Kevin Laatz
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 4/6] build: add meson option for abi related checks Kevin Laatz
` (2 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Kevin Laatz @ 2019-10-23 1:07 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, thomas, ray.kinsella, Kevin Laatz
This patch adds a script to generate ABI dump files. These files will be
required to perform ABI compatibility checks during the build later in the
patchset. This script should be run on a DPDK version with a stable ABI.
Since this is a tool designed for human use, we simplify it to just work
off a whole build directory, taking the parameter of the builddir and
generating the lib|drivers/abi dir. This is hardcoded into the script since
the meson build expects the .dump files in these directories.
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
devtools/gen-abi-dump.sh | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
create mode 100755 devtools/gen-abi-dump.sh
diff --git a/devtools/gen-abi-dump.sh b/devtools/gen-abi-dump.sh
new file mode 100755
index 000000000..ffedef10c
--- /dev/null
+++ b/devtools/gen-abi-dump.sh
@@ -0,0 +1,24 @@
+#!/bin/sh
+
+builddir=$1
+
+if [ -z "$builddir" ] ; then
+ echo "Usage: $(basename $0) build_dir"
+ exit 1
+fi
+
+if [ ! -d "$builddir" ] ; then
+ echo "Error: build directory, '$builddir', doesn't exist"
+ exit 1
+fi
+
+for d in lib drivers ; do
+ mkdir -p $d/abi
+
+ for f in $builddir/$d/*.so* ; do
+ test -L "$f" && continue
+
+ libname=$(basename $f)
+ abidw --out-file $d/abi/${libname%.so*}.dump $f || exit 1
+ done
+done
--
2.17.1
^ permalink raw reply [relevance 22%]
* [dpdk-dev] [RFC 1/6] build: enable debug info by default in meson builds
2019-10-23 1:07 9% [dpdk-dev] [RFC 0/6] Add ABI compatibility checks to the meson build Kevin Laatz
@ 2019-10-23 1:07 3% ` Kevin Laatz
2019-10-23 1:07 22% ` [dpdk-dev] [RFC 3/6] devtools: add abi dump generation script Kevin Laatz
` (3 subsequent siblings)
4 siblings, 0 replies; 200+ results
From: Kevin Laatz @ 2019-10-23 1:07 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, thomas, ray.kinsella
From: Bruce Richardson <bruce.richardson@intel.com>
We can turn on debug info by default in meson builds, since it has no
performance penalty. This is done by changing the default build type from
"release" to "debugoptimized". Since the latter uses O2, we can use
extra cflags to override that back to O3, which will make little real
difference for actual debugging.
For real debug builds, the user can still do "meson --buildtype=debug ..."
and to remove the debug info "meson --buildtype=release ..." can be used.
These are all standard meson options.
The advantage of having debug builds by default using meson settings is
that we can then add checks for ABI compatibility into each build, and
disable them if we detect that the user has turned off the debug info.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
meson.build | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/meson.build b/meson.build
index c5a3dda26..b77ccd6ef 100644
--- a/meson.build
+++ b/meson.build
@@ -7,10 +7,16 @@ project('DPDK', 'C',
version: run_command(find_program('cat', 'more'),
files('VERSION')).stdout().strip(),
license: 'BSD',
- default_options: ['buildtype=release', 'default_library=static'],
+ default_options: ['buildtype=debugoptimized',
+ 'default_library=static'],
meson_version: '>= 0.47.1'
)
+# for default "debugoptimized" builds override optimization level 2 with 3
+if get_option('buildtype') == 'debugoptimized'
+ add_project_arguments('-O3', language: 'c')
+endif
+
# set up some global vars for compiler, platform, configuration, etc.
cc = meson.get_compiler('c')
dpdk_conf = configuration_data()
--
2.17.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [RFC 0/6] Add ABI compatibility checks to the meson build
@ 2019-10-23 1:07 9% Kevin Laatz
2019-10-23 1:07 3% ` [dpdk-dev] [RFC 1/6] build: enable debug info by default in meson builds Kevin Laatz
` (4 more replies)
0 siblings, 5 replies; 200+ results
From: Kevin Laatz @ 2019-10-23 1:07 UTC (permalink / raw)
To: dev; +Cc: bruce.richardson, thomas, ray.kinsella, Kevin Laatz
With the recent changes made to stabilize ABI versioning in DPDK, it will
become increasingly important to check patches for ABI compatibility. We
propose adding the ABI compatibility checking to be done as part of the
build.
The advantages to adding the ABI compatibility checking to the build are
two-fold. Firstly, developers can easily check their patches to make sure
they don’t break the ABI without adding any extra steps. Secondly, it
makes the integration into existing CI seamless since there are no extra
scripts to make the CI run. The build will run as usual and if an
incompatibility is detected in the ABI, the build will fail and show the
incompatibility. As an added bonus, enabling the ABI compatibility checks
does not impact the build speed.
The proposed solution works as follows:
1. Generate the ABI dump of the baseline. This can be done with the new
script added in this RFC. This step will only need to be done when the
ABI version changes (so once a year) and can be added to master so it
exists by default. This step can be skipped if the dump files for the
baseline already exist.
2. Build with meson. If there is an ABI incompatibility, the build will
fail and print the incompatibility information.
The patches accompanying this RFC add the ABI dump file generating script,
the meson option required to enable/disable the checks, and the required
meson changes to run the compatibility checks during the build.
This patchset depends on:
- The "Implement the new ABI policy and add helper scripts" patchset
(http://patches.dpdk.org/project/dpdk/list/?series=6913).
- The "Add scanning for experimental symbols to meson" patchset
(http://patches.dpdk.org/project/dpdk/list/?series=6744).
- "build: enable extra warnings for meson build" patch
(http://patches.dpdk.org/patch/60622/).
Bruce Richardson (2):
build: enable debug info by default in meson builds
build: use meson warning levels
Kevin Laatz (4):
devtools: add abi dump generation script
build: add meson option for abi related checks
build: add lib abi checks to meson
build: add drivers abi checks to meson
buildtools/meson.build | 4 ++++
config/meson.build | 40 +++++++++++++++++++++-------------------
devtools/gen-abi-dump.sh | 24 ++++++++++++++++++++++++
drivers/meson.build | 17 ++++++++++++++++-
lib/meson.build | 17 ++++++++++++++++-
meson.build | 9 ++++++++-
meson_options.txt | 2 ++
7 files changed, 91 insertions(+), 22 deletions(-)
create mode 100755 devtools/gen-abi-dump.sh
--
2.17.1
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-22 22:51 0% ` Ananyev, Konstantin
@ 2019-10-23 3:16 0% ` Wang, Haiyue
2019-10-23 10:21 0% ` Olivier Matz
2019-10-23 10:19 0% ` Olivier Matz
1 sibling, 1 reply; 200+ results
From: Wang, Haiyue @ 2019-10-23 3:16 UTC (permalink / raw)
To: Ananyev, Konstantin, Olivier Matz, dev
Cc: Andrew Rybchenko, Richardson, Bruce, Jerin Jacob Kollanukkaran,
Wiles, Keith, Morten Brørup, Stephen Hemminger,
Thomas Monjalon
> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Wednesday, October 23, 2019 06:52
> To: Olivier Matz <olivier.matz@6wind.com>; dev@dpdk.org
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce <bruce.richardson@intel.com>; Wang,
> Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> <keith.wiles@intel.com>; Morten Brørup <mb@smartsharesystems.com>; Stephen Hemminger
> <stephen@networkplumber.org>; Thomas Monjalon <thomas@monjalon.net>
> Subject: RE: [PATCH v2] mbuf: support dynamic fields and flags
>
>
> > Many features require to store data inside the mbuf. As the room in mbuf
> > structure is limited, it is not possible to have a field for each
> > feature. Also, changing fields in the mbuf structure can break the API
> > or ABI.
> >
> > This commit addresses these issues, by enabling the dynamic registration
> > of fields or flags:
> >
> > - a dynamic field is a named area in the rte_mbuf structure, with a
> > given size (>= 1 byte) and alignment constraint.
> > - a dynamic flag is a named bit in the rte_mbuf structure.
> >
> > The typical use case is a PMD that registers space for an offload
> > feature, when the application requests to enable this feature. As
> > the space in mbuf is limited, the space should only be reserved if it
> > is going to be used (i.e when the application explicitly asks for it).
> >
> > The registration can be done at any moment, but it is not possible
> > to unregister fields or flags for now.
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
> >
> > v2
> >
> > * Rebase on top of master: solve conflict with Stephen's patchset
> > (packet copy)
> > * Add new apis to register a dynamic field/flag at a specific place
> > * Add a dump function (sugg by David)
> > * Enhance field registration function to select the best offset, keeping
> > large aligned zones as much as possible (sugg by Konstantin)
> > * Use a size_t and unsigned int instead of int when relevant
> > (sugg by Konstantin)
> > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > (sugg by Konstantin)
> > * Remove unused argument in private function (sugg by Konstantin)
> > * Fix and simplify locking (sugg by Konstantin)
> > * Fix minor typo
> >
> > rfc -> v1
> >
> > * Rebase on top of master
> > * Change registration API to use a structure instead of
> > variables, getting rid of #defines (Stephen's comment)
> > * Update flag registration to use a similar API as fields.
> > * Change max name length from 32 to 64 (sugg. by Thomas)
> > * Enhance API documentation (Haiyue's and Andrew's comments)
> > * Add a debug log at registration
> > * Add some words in release note
> > * Did some performance tests (sugg. by Andrew):
> > On my platform, reading a dynamic field takes ~3 cycles more
> > than a static field, and ~2 cycles more for writing.
> >
> > app/test/test_mbuf.c | 145 ++++++-
> > doc/guides/rel_notes/release_19_11.rst | 7 +
> > lib/librte_mbuf/Makefile | 2 +
> > lib/librte_mbuf/meson.build | 6 +-
> > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > 8 files changed, 959 insertions(+), 5 deletions(-)
> > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> >
> > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > index b9c2b2500..01cafad59 100644
> > --- a/app/test/test_mbuf.c
> > +++ b/app/test/test_mbuf.c
> > @@ -28,6 +28,7 @@
> > #include <rte_random.h>
> > #include <rte_cycles.h>
> > #include <rte_malloc.h>
> > +#include <rte_mbuf_dyn.h>
> >
[snip]
> > +int
> > +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> > + unsigned int req)
> > +{
> > + int ret;
> > +
> > + if (req != UINT_MAX && req >= 64) {
>
> Might be better to replace 64 with something like sizeof(mbuf->ol_flags) * CHAR_BIT or so.
Might introduce a new macro like the kernel's:
/**
* FIELD_SIZEOF - get the size of a struct's field
* @t: the target struct
* @f: the target struct's field
* Return: the size of @f in the struct definition without having a
* declared instance of @t.
*/
#define FIELD_SIZEOF(t, f) (sizeof(((t*)0)->f))
Then: FIELD_SIZEOF(rte_mbuf, ol_flags) * CHAR_BIT
> Apart from that:
> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
>
> > + rte_errno = EINVAL;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_write_lock();
> > + ret = __rte_mbuf_dynflag_register_bitnum(params, req);
> > + rte_mcfg_tailq_write_unlock();
> > +
> > + return ret;
> > +}
> > +
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-17 14:42 3% ` [dpdk-dev] [PATCH v2] " Olivier Matz
2019-10-18 2:47 0% ` Wang, Haiyue
@ 2019-10-22 22:51 0% ` Ananyev, Konstantin
2019-10-23 3:16 0% ` Wang, Haiyue
2019-10-23 10:19 0% ` Olivier Matz
2019-10-23 12:00 0% ` Shahaf Shuler
2019-10-24 7:38 0% ` Slava Ovsiienko
3 siblings, 2 replies; 200+ results
From: Ananyev, Konstantin @ 2019-10-22 22:51 UTC (permalink / raw)
To: Olivier Matz, dev
Cc: Andrew Rybchenko, Richardson, Bruce, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Morten Brørup,
Stephen Hemminger, Thomas Monjalon
> Many features require to store data inside the mbuf. As the room in mbuf
> structure is limited, it is not possible to have a field for each
> feature. Also, changing fields in the mbuf structure can break the API
> or ABI.
>
> This commit addresses these issues, by enabling the dynamic registration
> of fields or flags:
>
> - a dynamic field is a named area in the rte_mbuf structure, with a
> given size (>= 1 byte) and alignment constraint.
> - a dynamic flag is a named bit in the rte_mbuf structure.
>
> The typical use case is a PMD that registers space for an offload
> feature, when the application requests to enable this feature. As
> the space in mbuf is limited, the space should only be reserved if it
> is going to be used (i.e when the application explicitly asks for it).
>
> The registration can be done at any moment, but it is not possible
> to unregister fields or flags for now.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>
> v2
>
> * Rebase on top of master: solve conflict with Stephen's patchset
> (packet copy)
> * Add new apis to register a dynamic field/flag at a specific place
> * Add a dump function (sugg by David)
> * Enhance field registration function to select the best offset, keeping
> large aligned zones as much as possible (sugg by Konstantin)
> * Use a size_t and unsigned int instead of int when relevant
> (sugg by Konstantin)
> * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> (sugg by Konstantin)
> * Remove unused argument in private function (sugg by Konstantin)
> * Fix and simplify locking (sugg by Konstantin)
> * Fix minor typo
>
> rfc -> v1
>
> * Rebase on top of master
> * Change registration API to use a structure instead of
> variables, getting rid of #defines (Stephen's comment)
> * Update flag registration to use a similar API as fields.
> * Change max name length from 32 to 64 (sugg. by Thomas)
> * Enhance API documentation (Haiyue's and Andrew's comments)
> * Add a debug log at registration
> * Add some words in release note
> * Did some performance tests (sugg. by Andrew):
> On my platform, reading a dynamic field takes ~3 cycles more
> than a static field, and ~2 cycles more for writing.
>
> app/test/test_mbuf.c | 145 ++++++-
> doc/guides/rel_notes/release_19_11.rst | 7 +
> lib/librte_mbuf/Makefile | 2 +
> lib/librte_mbuf/meson.build | 6 +-
> lib/librte_mbuf/rte_mbuf.h | 23 +-
> lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> lib/librte_mbuf/rte_mbuf_version.map | 7 +
> 8 files changed, 959 insertions(+), 5 deletions(-)
> create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
>
> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> index b9c2b2500..01cafad59 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -28,6 +28,7 @@
> #include <rte_random.h>
> #include <rte_cycles.h>
> #include <rte_malloc.h>
> +#include <rte_mbuf_dyn.h>
>
> #include "test.h"
>
> @@ -657,7 +658,6 @@ test_attach_from_different_pool(struct rte_mempool *pktmbuf_pool,
> rte_pktmbuf_free(clone2);
> return -1;
> }
> -#undef GOTO_FAIL
>
> /*
> * test allocation and free of mbufs
> @@ -1276,6 +1276,143 @@ test_tx_offload(void)
> return (v1 == v2) ? 0 : -EINVAL;
> }
>
> +static int
> +test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
> +{
> + const struct rte_mbuf_dynfield dynfield = {
> + .name = "test-dynfield",
> + .size = sizeof(uint8_t),
> + .align = __alignof__(uint8_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield2 = {
> + .name = "test-dynfield2",
> + .size = sizeof(uint16_t),
> + .align = __alignof__(uint16_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield3 = {
> + .name = "test-dynfield3",
> + .size = sizeof(uint8_t),
> + .align = __alignof__(uint8_t),
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield_fail_big = {
> + .name = "test-dynfield-fail-big",
> + .size = 256,
> + .align = 1,
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynfield dynfield_fail_align = {
> + .name = "test-dynfield-fail-align",
> + .size = 1,
> + .align = 3,
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag = {
> + .name = "test-dynflag",
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag2 = {
> + .name = "test-dynflag2",
> + .flags = 0,
> + };
> + const struct rte_mbuf_dynflag dynflag3 = {
> + .name = "test-dynflag3",
> + .flags = 0,
> + };
> + struct rte_mbuf *m = NULL;
> + int offset, offset2, offset3;
> + int flag, flag2, flag3;
> + int ret;
> +
> + printf("Test mbuf dynamic fields and flags\n");
> + rte_mbuf_dyn_dump(stdout);
> +
> + offset = rte_mbuf_dynfield_register(&dynfield);
> + if (offset == -1)
> + GOTO_FAIL("failed to register dynamic field, offset=%d: %s",
> + offset, strerror(errno));
> +
> + ret = rte_mbuf_dynfield_register(&dynfield);
> + if (ret != offset)
> + GOTO_FAIL("failed to lookup dynamic field, ret=%d: %s",
> + ret, strerror(errno));
> +
> + offset2 = rte_mbuf_dynfield_register(&dynfield2);
> + if (offset2 == -1 || offset2 == offset || (offset2 & 1))
> + GOTO_FAIL("failed to register dynamic field 2, offset2=%d: %s",
> + offset2, strerror(errno));
> +
> + offset3 = rte_mbuf_dynfield_register_offset(&dynfield3,
> + offsetof(struct rte_mbuf, dynfield1[1]));
> + if (offset3 != offsetof(struct rte_mbuf, dynfield1[1]))
> + GOTO_FAIL("failed to register dynamic field 3, offset=%d: %s",
> + offset3, strerror(errno));
> +
> + printf("dynfield: offset=%d, offset2=%d, offset3=%d\n",
> + offset, offset2, offset3);
> +
> + ret = rte_mbuf_dynfield_register(&dynfield_fail_big);
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (too big)");
> +
> + ret = rte_mbuf_dynfield_register(&dynfield_fail_align);
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (bad alignment)");
> +
> + ret = rte_mbuf_dynfield_register_offset(&dynfield_fail_align,
> + offsetof(struct rte_mbuf, ol_flags));
> + if (ret != -1)
> + GOTO_FAIL("dynamic field creation should fail (not avail)");
> +
> + flag = rte_mbuf_dynflag_register(&dynflag);
> + if (flag == -1)
> + GOTO_FAIL("failed to register dynamic flag, flag=%d: %s",
> + flag, strerror(errno));
> +
> + ret = rte_mbuf_dynflag_register(&dynflag);
> + if (ret != flag)
> + GOTO_FAIL("failed to lookup dynamic flag, ret=%d: %s",
> + ret, strerror(errno));
> +
> + flag2 = rte_mbuf_dynflag_register(&dynflag2);
> + if (flag2 == -1 || flag2 == flag)
> + GOTO_FAIL("failed to register dynamic flag 2, flag2=%d: %s",
> + flag2, strerror(errno));
> +
> + flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
> + rte_bsf64(PKT_LAST_FREE));
> + if (flag3 != rte_bsf64(PKT_LAST_FREE))
> + GOTO_FAIL("failed to register dynamic flag 3, flag2=%d: %s",
> + flag3, strerror(errno));
> +
> + printf("dynflag: flag=%d, flag2=%d, flag3=%d\n", flag, flag2, flag3);
> +
> + /* set, get dynamic field */
> + m = rte_pktmbuf_alloc(pktmbuf_pool);
> + if (m == NULL)
> + GOTO_FAIL("Cannot allocate mbuf");
> +
> + *RTE_MBUF_DYNFIELD(m, offset, uint8_t *) = 1;
> + if (*RTE_MBUF_DYNFIELD(m, offset, uint8_t *) != 1)
> + GOTO_FAIL("failed to read dynamic field");
> + *RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) = 1000;
> + if (*RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) != 1000)
> + GOTO_FAIL("failed to read dynamic field");
> +
> + /* set a dynamic flag */
> + m->ol_flags |= (1ULL << flag);
> +
> + rte_mbuf_dyn_dump(stdout);
> + rte_pktmbuf_free(m);
> + return 0;
> +fail:
> + rte_pktmbuf_free(m);
> + return -1;
> +}
> +#undef GOTO_FAIL
> +
> static int
> test_mbuf(void)
> {
> @@ -1295,6 +1432,12 @@ test_mbuf(void)
> goto err;
> }
>
> + /* test registration of dynamic fields and flags */
> + if (test_mbuf_dyn(pktmbuf_pool) < 0) {
> + printf("mbuf dynflag test failed\n");
> + goto err;
> + }
> +
> /* create a specific pktmbuf pool with a priv_size != 0 and no data
> * room size */
> pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
> diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
> index 85953b962..9e9c94554 100644
> --- a/doc/guides/rel_notes/release_19_11.rst
> +++ b/doc/guides/rel_notes/release_19_11.rst
> @@ -21,6 +21,13 @@ DPDK Release 19.11
>
> xdg-open build/doc/html/guides/rel_notes/release_19_11.html
>
> +* **Add support for dynamic fields and flags in mbuf.**
> +
> + This new feature adds the ability to dynamically register some room
> + for a field or a flag in the mbuf structure. This is typically used
> + for specific offload features, where adding a static field or flag
> + in the mbuf is not justified.
> +
>
> New Features
> ------------
> diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
> index c8f6d2689..5a9bcee73 100644
> --- a/lib/librte_mbuf/Makefile
> +++ b/lib/librte_mbuf/Makefile
> @@ -17,8 +17,10 @@ LIBABIVER := 5
>
> # all source are stored in SRCS-y
> SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c rte_mbuf_pool_ops.c
> +SRCS-$(CONFIG_RTE_LIBRTE_MBUF) += rte_mbuf_dyn.c
>
> # install includes
> SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h rte_mbuf_ptype.h rte_mbuf_pool_ops.h
> +SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_dyn.h
>
> include $(RTE_SDK)/mk/rte.lib.mk
> diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build
> index 6cc11ebb4..9137e8f26 100644
> --- a/lib/librte_mbuf/meson.build
> +++ b/lib/librte_mbuf/meson.build
> @@ -2,8 +2,10 @@
> # Copyright(c) 2017 Intel Corporation
>
> version = 5
> -sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c')
> -headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
> +sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c',
> + 'rte_mbuf_dyn.c')
> +headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h',
> + 'rte_mbuf_dyn.h')
> deps += ['mempool']
>
> allow_experimental_apis = true
> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index fb0849ac1..5740b1e93 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -198,9 +198,12 @@ extern "C" {
> #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
>
> -/* add new RX flags here */
> +/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
>
> -/* add new TX flags here */
> +#define PKT_FIRST_FREE (1ULL << 23)
> +#define PKT_LAST_FREE (1ULL << 39)
> +
> +/* add new TX flags here, don't forget to update PKT_LAST_FREE */
>
> /**
> * Indicate that the metadata field in the mbuf is in use.
> @@ -738,6 +741,7 @@ struct rte_mbuf {
> */
> struct rte_mbuf_ext_shared_info *shinfo;
>
> + uint64_t dynfield1[2]; /**< Reserved for dynamic fields. */
> } __rte_cache_aligned;
>
> /**
> @@ -1684,6 +1688,20 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
> */
> #define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
>
> +/**
> + * Copy dynamic fields from m_src to m_dst.
> + *
> + * @param m_dst
> + * The destination mbuf.
> + * @param m_src
> + * The source mbuf.
> + */
> +static inline void
> +rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> +{
> + memcpy(&mdst->dynfield1, msrc->dynfield1, sizeof(mdst->dynfield1));
> +}
> +
> /* internal */
> static inline void
> __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> @@ -1695,6 +1713,7 @@ __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
> mdst->hash = msrc->hash;
> mdst->packet_type = msrc->packet_type;
> mdst->timestamp = msrc->timestamp;
> + rte_mbuf_dynfield_copy(mdst, msrc);
> }
>
> /**
> diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
> new file mode 100644
> index 000000000..9ef235483
> --- /dev/null
> +++ b/lib/librte_mbuf/rte_mbuf_dyn.c
> @@ -0,0 +1,548 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright 2019 6WIND S.A.
> + */
> +
> +#include <sys/queue.h>
> +#include <stdint.h>
> +#include <limits.h>
> +
> +#include <rte_common.h>
> +#include <rte_eal.h>
> +#include <rte_eal_memconfig.h>
> +#include <rte_tailq.h>
> +#include <rte_errno.h>
> +#include <rte_malloc.h>
> +#include <rte_string_fns.h>
> +#include <rte_mbuf.h>
> +#include <rte_mbuf_dyn.h>
> +
> +#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
> +
> +struct mbuf_dynfield_elt {
> + TAILQ_ENTRY(mbuf_dynfield_elt) next;
> + struct rte_mbuf_dynfield params;
> + size_t offset;
> +};
> +TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
> +
> +static struct rte_tailq_elem mbuf_dynfield_tailq = {
> + .name = "RTE_MBUF_DYNFIELD",
> +};
> +EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
> +
> +struct mbuf_dynflag_elt {
> + TAILQ_ENTRY(mbuf_dynflag_elt) next;
> + struct rte_mbuf_dynflag params;
> + unsigned int bitnum;
> +};
> +TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
> +
> +static struct rte_tailq_elem mbuf_dynflag_tailq = {
> + .name = "RTE_MBUF_DYNFLAG",
> +};
> +EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
> +
> +struct mbuf_dyn_shm {
> + /**
> + * For each mbuf byte, free_space[i] != 0 if space is free.
> + * The value is the size of the biggest aligned element that
> + * can fit in the zone.
> + */
> + uint8_t free_space[sizeof(struct rte_mbuf)];
> + /** Bitfield of available flags. */
> + uint64_t free_flags;
> +};
> +static struct mbuf_dyn_shm *shm;
> +
> +/* Set the value of free_space[] according to the size and alignment of
> + * the free areas. This helps to select the best place when reserving a
> + * dynamic field. Assume tailq is locked.
> + */
> +static void
> +process_score(void)
> +{
> + size_t off, align, size, i;
> +
> + /* first, erase previous info */
> + for (i = 0; i < sizeof(struct rte_mbuf); i++) {
> + if (shm->free_space[i])
> + shm->free_space[i] = 1;
> + }
> +
> + for (off = 0; off < sizeof(struct rte_mbuf); off++) {
> + /* get the size of the free zone */
> + for (size = 0; shm->free_space[off + size]; size++)
> + ;
> + if (size == 0)
> + continue;
> +
> + /* get the alignment of biggest object that can fit in
> + * the zone at this offset.
> + */
> + for (align = 1;
> + (off % (align << 1)) == 0 && (align << 1) <= size;
> + align <<= 1)
> + ;
> +
> + /* save it in free_space[] */
> + for (i = off; i < off + size; i++)
> + shm->free_space[i] = RTE_MAX(align, shm->free_space[i]);
> + }
> +}
> +
> +/* Allocate and initialize the shared memory. Assume tailq is locked */
> +static int
> +init_shared_mem(void)
> +{
> + const struct rte_memzone *mz;
> + uint64_t mask;
> +
> + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> + mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
> + sizeof(struct mbuf_dyn_shm),
> + SOCKET_ID_ANY, 0,
> + RTE_CACHE_LINE_SIZE);
> + } else {
> + mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
> + }
> + if (mz == NULL)
> + return -1;
> +
> + shm = mz->addr;
> +
> +#define mark_free(field) \
> + memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
> + 1, sizeof(((struct rte_mbuf *)0)->field))
Still think it would look nicer without multi-line macro defines/undef in the middle of the function.
> +
> + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> + /* init free_space, keep it sync'd with
> + * rte_mbuf_dynfield_copy().
> + */
> + memset(shm, 0, sizeof(*shm));
> + mark_free(dynfield1);
> +
> + /* init free_flags */
> + for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
> + shm->free_flags |= mask;
> +
> + process_score();
> + }
> +#undef mark_free
> +
> + return 0;
> +}
> +
> +/* check if this offset can be used */
> +static int
> +check_offset(size_t offset, size_t size, size_t align)
> +{
> + size_t i;
> +
> + if ((offset & (align - 1)) != 0)
> + return -1;
> + if (offset + size > sizeof(struct rte_mbuf))
> + return -1;
> +
> + for (i = 0; i < size; i++) {
> + if (!shm->free_space[i + offset])
> + return -1;
> + }
> +
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static struct mbuf_dynfield_elt *
> +__mbuf_dynfield_lookup(const char *name)
> +{
> + struct mbuf_dynfield_list *mbuf_dynfield_list;
> + struct mbuf_dynfield_elt *mbuf_dynfield;
> + struct rte_tailq_entry *te;
> +
> + mbuf_dynfield_list = RTE_TAILQ_CAST(
> + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> +
> + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> + mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
> + if (strcmp(name, mbuf_dynfield->params.name) == 0)
> + break;
> + }
> +
> + if (te == NULL) {
> + rte_errno = ENOENT;
> + return NULL;
> + }
> +
> + return mbuf_dynfield;
> +}
> +
> +int
> +rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
> +{
> + struct mbuf_dynfield_elt *mbuf_dynfield;
> +
> + if (shm == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_read_lock();
> + mbuf_dynfield = __mbuf_dynfield_lookup(name);
> + rte_mcfg_tailq_read_unlock();
> +
> + if (mbuf_dynfield == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + if (params != NULL)
> + memcpy(params, &mbuf_dynfield->params, sizeof(*params));
> +
> + return mbuf_dynfield->offset;
> +}
> +
> +static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
> + const struct rte_mbuf_dynfield *params2)
> +{
> + if (strcmp(params1->name, params2->name))
> + return -1;
> + if (params1->size != params2->size)
> + return -1;
> + if (params1->align != params2->align)
> + return -1;
> + if (params1->flags != params2->flags)
> + return -1;
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static int
> +__rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> + size_t req)
> +{
> + struct mbuf_dynfield_list *mbuf_dynfield_list;
> + struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
> + struct rte_tailq_entry *te = NULL;
> + unsigned int best_zone = UINT_MAX;
> + size_t i, offset;
> + int ret;
> +
> + if (shm == NULL && init_shared_mem() < 0)
> + return -1;
> +
> + mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
> + if (mbuf_dynfield != NULL) {
> + if (req != SIZE_MAX && req != mbuf_dynfield->offset) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) < 0) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + return mbuf_dynfield->offset;
> + }
> +
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> + rte_errno = EPERM;
> + return -1;
> + }
> +
> + if (req == SIZE_MAX) {
> + for (offset = 0;
> + offset < sizeof(struct rte_mbuf);
> + offset++) {
> + if (check_offset(offset, params->size,
> + params->align) == 0 &&
> + shm->free_space[offset] < best_zone) {
Probably worth explaining a bit more here about the best_zone logic -
trying to find the offset with the minimal score (minimal contiguous length), etc.
> + best_zone = shm->free_space[offset];
> + req = offset;
> + }
> + }
> + if (req == SIZE_MAX) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> + } else {
> + if (check_offset(req, params->size, params->align) < 0) {
> + rte_errno = EBUSY;
> + return -1;
> + }
> + }
> +
> + offset = req;
> + mbuf_dynfield_list = RTE_TAILQ_CAST(
> + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> +
> + te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
> + if (te == NULL)
> + return -1;
> +
> + mbuf_dynfield = rte_zmalloc("mbuf_dynfield", sizeof(*mbuf_dynfield), 0);
> + if (mbuf_dynfield == NULL) {
> + rte_free(te);
> + return -1;
> + }
> +
> + ret = strlcpy(mbuf_dynfield->params.name, params->name,
> + sizeof(mbuf_dynfield->params.name));
> + if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
> + rte_errno = ENAMETOOLONG;
> + rte_free(mbuf_dynfield);
> + rte_free(te);
> + return -1;
> + }
> + memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
> + mbuf_dynfield->offset = offset;
> + te->data = mbuf_dynfield;
> +
> + TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
> +
> + for (i = offset; i < offset + params->size; i++)
> + shm->free_space[i] = 0;
> + process_score();
> +
> + RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n",
> + params->name, params->size, params->align, params->flags,
> + offset);
> +
> + return offset;
> +}
> +
> +int
> +rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
> + size_t req)
> +{
> + int ret;
> +
> + if (params->size >= sizeof(struct rte_mbuf)) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> + if (!rte_is_power_of_2(params->align)) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> + if (params->flags != 0) {
> + rte_errno = EINVAL;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_write_lock();
> + ret = __rte_mbuf_dynfield_register_offset(params, req);
> + rte_mcfg_tailq_write_unlock();
> +
> + return ret;
> +}
> +
> +int
> +rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
> +{
> + return rte_mbuf_dynfield_register_offset(params, SIZE_MAX);
> +}
> +
> +/* assume tailq is locked */
> +static struct mbuf_dynflag_elt *
> +__mbuf_dynflag_lookup(const char *name)
> +{
> + struct mbuf_dynflag_list *mbuf_dynflag_list;
> + struct mbuf_dynflag_elt *mbuf_dynflag;
> + struct rte_tailq_entry *te;
> +
> + mbuf_dynflag_list = RTE_TAILQ_CAST(
> + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> +
> + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> + mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
> + if (strncmp(name, mbuf_dynflag->params.name,
> + RTE_MBUF_DYN_NAMESIZE) == 0)
> + break;
> + }
> +
> + if (te == NULL) {
> + rte_errno = ENOENT;
> + return NULL;
> + }
> +
> + return mbuf_dynflag;
> +}
> +
> +int
> +rte_mbuf_dynflag_lookup(const char *name,
> + struct rte_mbuf_dynflag *params)
> +{
> + struct mbuf_dynflag_elt *mbuf_dynflag;
> +
> + if (shm == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_read_lock();
> + mbuf_dynflag = __mbuf_dynflag_lookup(name);
> + rte_mcfg_tailq_read_unlock();
> +
> + if (mbuf_dynflag == NULL) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> +
> + if (params != NULL)
> + memcpy(params, &mbuf_dynflag->params, sizeof(*params));
> +
> + return mbuf_dynflag->bitnum;
> +}
> +
> +static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
> + const struct rte_mbuf_dynflag *params2)
> +{
> + if (strcmp(params1->name, params2->name))
> + return -1;
> + if (params1->flags != params2->flags)
> + return -1;
> + return 0;
> +}
> +
> +/* assume tailq is locked */
> +static int
> +__rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> + unsigned int req)
> +{
> + struct mbuf_dynflag_list *mbuf_dynflag_list;
> + struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
> + struct rte_tailq_entry *te = NULL;
> + unsigned int bitnum;
> + int ret;
> +
> + if (shm == NULL && init_shared_mem() < 0)
> + return -1;
> +
> + mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
> + if (mbuf_dynflag != NULL) {
> + if (req != UINT_MAX && req != mbuf_dynflag->bitnum) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0) {
> + rte_errno = EEXIST;
> + return -1;
> + }
> + return mbuf_dynflag->bitnum;
> + }
> +
> + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> + rte_errno = EPERM;
> + return -1;
> + }
> +
> + if (req == UINT_MAX) {
> + if (shm->free_flags == 0) {
> + rte_errno = ENOENT;
> + return -1;
> + }
> + bitnum = rte_bsf64(shm->free_flags);
> + } else {
> + if ((shm->free_flags & (1ULL << req)) == 0) {
> + rte_errno = EBUSY;
> + return -1;
> + }
> + bitnum = req;
> + }
> +
> + mbuf_dynflag_list = RTE_TAILQ_CAST(
> + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> +
> + te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
> + if (te == NULL)
> + return -1;
> +
> + mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag), 0);
> + if (mbuf_dynflag == NULL) {
> + rte_free(te);
> + return -1;
> + }
> +
> + ret = strlcpy(mbuf_dynflag->params.name, params->name,
> + sizeof(mbuf_dynflag->params.name));
> + if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
> + rte_free(mbuf_dynflag);
> + rte_free(te);
> + rte_errno = ENAMETOOLONG;
> + return -1;
> + }
> + mbuf_dynflag->bitnum = bitnum;
> + te->data = mbuf_dynflag;
> +
> + TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
> +
> + shm->free_flags &= ~(1ULL << bitnum);
> +
> + RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
> + params->name, params->flags, bitnum);
> +
> + return bitnum;
> +}
> +
> +int
> +rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
> + unsigned int req)
> +{
> + int ret;
> +
> + if (req != UINT_MAX && req >= 64) {
Might be better to replace 64 with something like sizeof(mbuf->ol_flags) * CHAR_BIT or so.
Apart from that:
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
> + rte_errno = EINVAL;
> + return -1;
> + }
> +
> + rte_mcfg_tailq_write_lock();
> + ret = __rte_mbuf_dynflag_register_bitnum(params, req);
> + rte_mcfg_tailq_write_unlock();
> +
> + return ret;
> +}
> +
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-22 17:44 0% ` Ananyev, Konstantin
@ 2019-10-22 22:21 0% ` Ananyev, Konstantin
2019-10-23 10:05 0% ` Akhil Goyal
1 sibling, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2019-10-22 22:21 UTC (permalink / raw)
To: 'Akhil Goyal', 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Zhang, Roy Fan, Doherty, Declan
Cc: 'Anoob Joseph', 'Hemant Agrawal'
> > > > Added my comments inline with your draft.
> > > > [snip]..
> > > >
> > > > >
> > > > > Ok, then my suggestion:
> > > > > Let's at least write down all points about crypto-dev approach where we
> > > > > disagree and then probably try to resolve them one by one....
> > > > > If we fail to make an agreement/progress in the next week or so
> > > > > (and get no more reviews from the community),
> > > > > we will have to bring that subject to the TB meeting to decide.
> > > > > Sounds fair to you?
> > > > Agreed
> > > > >
> > > > > List is below.
> > > > > Please add/correct me, if I missed something.
> > > > >
> > > > > Konstantin
> > > >
> > > > Before going into comparison, we should define the requirement as well.
> > >
> > > Good point.
> > >
> > > > What I understood from the patchset,
> > > > "You need a synchronous API to perform crypto operations on raw data using
> > > SW PMDs"
> > > > So,
> > > > - no crypto-ops,
> > > > - no separate enq-deq, only single process API for data path
> > > > - Do not need any value addition to the session parameters.
> > > > (You would need some parameters from the crypto-op which
> > > > Are constant per session and since you wont use crypto-op,
> > > > You need some place to store that)
> > >
> > > Yes, this is correct, I think.
> > >
> > > >
> > > > Now as per your mail, the comparison
> > > > 1. extra input parameters to create/init rte_(cpu)_sym_session.
> > > >
> > > > Will leverage existing 6B gap inside rte_crypto_*_xform between 'algo' and
> > > 'key' fields.
> > > > New fields will be optional and would be used by PMD only when cpu-crypto
> > > session is requested.
> > > > For lksd-crypto session PMD is free to ignore these fields.
> > > > No ABI breakage is required.
> > > >
> > > > [Akhil] Agreed, no issues.
> > > >
> > > > 2. cpu-crypto create/init.
> > > > a) Our suggestion - introduce new API for that:
> > > > - rte_crypto_cpu_sym_init() that would init completely opaque
> > > rte_crypto_cpu_sym_session.
> > > > - struct rte_crypto_cpu_sym_session_ops {(*process)(...); (*clear);
> > > /*whatever else we'll need *'};
> > > > - rte_crypto_cpu_sym_get_ops(const struct rte_crypto_sym_xform
> > > *xforms)
> > > > that would return const struct rte_crypto_cpu_sym_session_ops *based
> > > on input xforms.
> > > > Advantages:
> > > > 1) totally opaque data structure (no ABI breakages in future), PMD
> > > writer is totally free
> > > > with it format and contents.
> > > >
> > > > [Akhil] It will have breakage at some point till we don't hit the union size.
> > >
> > > Not sure, what union you are talking about?
> >
> > Union of xforms in rte_security_session_conf
>
> Hmm, how does it relates here?
> I thought we discussing pure rte_cryptodev_sym_session, no?
>
> >
> > >
> > > > Rather I don't suspect there will be more parameters added.
> > > > Or do we really care about the ABI breakage when the argument is about
> > > > the correct place to add a piece of code or do we really agree to add code
> > > > anywhere just to avoid that breakage.
> > >
> > > I am talking about maintaining it in future.
> > > if your struct is not seen externally, no chances to introduce ABI breakage.
> > >
> > > >
> > > > 2) each session entity is self-contained, user doesn't need to bring along
> > > dev_id etc.
> > > > dev_id is needed only at init stage, after that user will use session ops
> > > to perform
> > > > all operations on that session (process(), clear(), etc.).
> > > >
> > > > [Akhil] There is nothing called as session ops in current DPDK.
> > >
> > > True, but it doesn't mean we can't/shouldn't have it.
> >
> > We can have it if it is not adding complexity for the user. Creating 2 different code
> > Paths for user is not desirable for the stack developers.
> >
> > >
> > > > What you are proposing
> > > > is a new concept which doesn't have any extra benefit, rather it is adding
> > > complexity
> > > > to have two different code paths for session create.
> > > >
> > > >
> > > > 3) User can decide does he wants to store ops[] pointer on a per session
> > > basis,
> > > > or on a per group of same sessions, or...
> > > >
> > > > [Akhil] Will the user really care which process API should be called from the
> > > PMD.
> > > > Rather it should be driver's responsibility to store that in the session private
> > > data
> > > > which would be opaque to the user. As per my suggestion same process
> > > function can
> > > > be added to multiple sessions or a single session can be managed inside the
> > > PMD.
> > >
> > > In that case we either need to have a function per session (stored internally),
> > > or make decision (branches) at run-time.
> > > But as I said in other mail - I am ok to add small shim structure here:
> > > either rte_crypto_cpu_sym_session { void *ses; struct
> > > rte_crypto_cpu_sym_session_ops ops; }
> > > or rte_crypto_cpu_sym_session { void *ses; struct
> > > rte_crypto_cpu_sym_session_ops *ops; }
> > > And merge rte_crypto_cpu_sym_init() and rte_crypto_cpu_sym_get_ops() into
> > > one (init).
> >
> > Again that will be a separate API call from the user perspective which is not good.
> >
> > >
> > > >
> > > >
> > > > 4) No mandatory mempools for private sessions. User can allocate
> > > memory for cpu-crypto
> > > > session whenever he likes.
> > > >
> > > > [Akhil] you mean session private data?
> > >
> > > Yes.
> > >
> > > > You would need that memory anyways, user will be
> > > > allocating that already. You do not need to manage that.
> > >
> > > What I am saying - right now user has no choice but to allocate it via mempool.
> > > Which is probably not the best options for all cases.
> > >
> > > >
> > > > Disadvantages:
> > > > 5) Extra changes in control path
> > > > 6) User has to store session_ops pointer explicitly.
> > > >
> > > > [Akhil] More disadvantages:
> > > > - All supporting PMDs will need to maintain TWO types of session for the
> > > > same crypto processing. Suppose a fix or a new feature(or algo) is added, PMD
> > > owner
> > > > will need to add code in both the session create APIs. Hence more
> > > maintenance and
> > > > error prone.
> > >
> > > I think majority of code for both paths will be common, plus even we'll reuse
> > > current sym_session_init() -
> > > changes in PMD session_init() code will be unavoidable.
> > > But yes, it will be new entry in devops, that PMD will have to support.
> > > Ok to add it as 7) to the list.
> > >
> > > > - Stacks which will be using these new APIs also need to maintain two
> > > > code path for the same processing while doing session initialization
> > > > for sync and async
> > >
> > > That's the same as #5 above, I think.
> > >
> > > >
> > > >
> > > > b) Your suggestion - reuse existing rte_cryptodev_sym_session_init() and
> > > existing rte_cryptodev_sym_session
> > > > structure.
> > > > Advantages:
> > > > 1) allows to reuse same struct and init/create/clear() functions.
> > > > Probably less changes in control path.
> > > > Disadvantages:
> > > > 2) rte_cryptodev_sym_session. sess_data[] is indexed by driver_id,
> > > which means that
> > > > we can't use the same rte_cryptodev_sym_session to hold private
> > > sessions pointers
> > > > for both sync and async mode for the same device.
> > > > So the only option we have - make PMD devops-
> > > >sym_session_configure()
> > > > always create a session that can work in both cpu and lksd modes.
> > > > For some implementations that would probably mean that under the
> > > hood PMD would create
> > > > 2 different session structs (sync/async) and then use one or another
> > > depending on from what API been called.
> > > > Seems doable, but ...:
> > > > - will contradict with statement from 1:
> > > > " New fields will be optional and would be used by PMD only when
> > > cpu-crypto session is requested."
> > > > Now it becomes mandatory for all apps to specify cpu-crypto
> > > related parameters too,
> > > > even if they don't plan to use that mode - i.e. behavior change,
> > > existing app change.
> > > > - might cause extra space overhead.
> > > >
> > > > [Akhil] It will not contradict with #1, you will only have few checks in the
> > > session init PMD
> > > > Which support this mode, find appropriate values and set the appropriate
> > > process() in it.
> > > > User should be able to call, legacy enq-deq as well as the new process()
> > > without any issue.
> > > > User would be at runtime will be able to change the datapath.
> > > > So this is not a disadvantage, it would be additional flexibility for the user.
> > >
> > > Ok, but that's what I am saying - if PMD would *always* have to create a
> > > session that can handle
> > > both modes (sync/async), then user would *always* have to provide parameters
> > > for both modes too.
> > > Otherwise if let say user didn't setup sync specific parameters at all, what PMD
> > > should do?
> > > - return with error?
> > > - init session that can be used with async path only?
> > > My current assumption is #1.
> > > If #2, then how user will be able to distinguish is that session valid for both
> > > modes, or only for one?
> >
> > I would say a 3rd option, do nothing if sync params are not set.
> > Probably have a debug print in the PMD(which support sync mode) to specify that
> > session is not configured properly for sync mode.
>
> So, just print warning and proceed with init session that can be used with async path only?
> Then it sounds the same as #2 above.
> Which actually means that sync mode parameters for sym_session_init() becomes optional.
> Then we need an API to provide to the user information what modes
> (sync+async/async only) is supported by that session for given dev_id.
> And user would have to query/retain this information at control-path,
> and store it somewhere in user-space together with session pointer and dev_ids
> to use later at data-path (same as we do now for session type).
> That definitely requires changes in control-path to start using it.
> Plus the fact that this value can differ for different dev_ids for the same session -
> doesn't make things easier here.
>
> > Internally the PMD will not store the process() API in the session priv data
> > And while calling the first packet, devops->process will give an assert that session
> > Is not configured for sync mode. The session validation would be done in any case
> > your suggestion or mine. So no extra overhead at runtime.
>
> I believe that after session_init() user should get either an error or
> valid session handler that he can use at runtime.
> Pushing session validation to runtime doesn't seem like a good idea.
>
> >
> > >
> > >
> > > >
> > > >
> > > > 3) not possible to store device (not driver) specific data within the
> > > session, but I think it is not really needed right now.
> > > > So probably minor compared to 2.b.2.
> > > >
> > > > [Akhil] So lets omit this for current discussion. And I hope we can find some
> > > way to deal with it.
> > >
> > > I don't think there is an easy way to fix that with existing API.
> > >
> > > >
> > > >
> > > > Actually #3 follows from #2, but decided to have them separated.
> > > >
> > > > 3. process() parameters/behavior
> > > > a) Our suggestion: user stores ptr to session ops (or to (*process) itself) and
> > > just does:
> > > > session_ops->process(sess, ...);
> > > > Advantages:
> > > > 1) fastest possible execution path
> > > > 2) no need to carry on dev_id for data-path
> > > >
> > > > [Akhil] I don't see any overhead of carrying dev id, at least it would be in line
> > > with the
> > > > current DPDK methodology.
> > >
> > > If we'll add process() into rte_cryptodev itself (same as we have
> > > enqueue_burst/dequeue_burst),
> > > then it will be an ABI breakage.
> > > Also there are discussions to get rid of that approach completely:
> > > http://mails.dpdk.org/archives/dev/2019-September/144674.html
> > > So I am not sure this is a recommended way these days.
> >
> > We can either have it in rte_cryptodev or in rte_cryptodev_ops whichever
> > is good for you.
> >
> > Whether it is ABI breakage or not, as per your requirements, this is the correct
> > approach. Do you agree with this or not?
>
> I think it is a possible approach, but not the best one:
> it looks quite flakey to me (see all the uncertainty with sym_session_init above),
> plus it introduces extra overhead at the data-path.
>
> >
> > Now handling the API/ABI breakage is a separate story. In the 19.11 release we
> > are not much concerned about the ABI breakages; this was discussed in the
> > community. So adding a new dev_ops wouldn't have been an issue.
> > Now since we are so close to the RC1 deadline, we should come up with some
> > other solution for the next release. Maybe having a PMD API in 20.02 and
> > converting it into a formal one in 20.11.
> >
> >
> > >
> > > > What you are suggesting is a new way to get the things done without much
> > > benefit.
> > >
> > > It would help with ABI stability plus better performance - isn't that enough?
> > >
> > > > Also I don't see any performance difference as crypto workload is heavier than
> > > > code cycles, so that won't matter.
> > >
> > > It depends.
> > > Suppose a function call costs you ~30 cycles.
> > > If you have a burst of big packets (let's say crypto for each will take ~2K cycles) that
> > > belong
> > > to the same session, then yes, you wouldn't notice these extra 30 cycles at all.
> > > If you have a burst of small packets (let's say crypto for each will take ~300 cycles)
> > > each
> > > belongs to a different session, then it will cost you ~10% extra.
> >
> > Let us do some profiling on openssl with both the approaches and find out the
> > difference.
> >
> > >
> > > > So IMO, there is no advantage in your suggestion either.
> > > >
> > > >
> > > > Disadvantages:
> > > > 3) user has to carry on session_ops pointer explicitly
> > > > b) Your suggestion: add (*cpu_process) inside rte_cryptodev_ops and then:
> > > > rte_crypto_cpu_sym_process(uint8_t dev_id, rte_cryptodev_sym_session
> > > *sess, /*data parameters*/) {...
> > > > rte_cryptodevs[dev_id].dev_ops->cpu_process(ses, ...);
> > > > /*and then inside PMD specific process: */
> > > > pmd_private_session = sess->sess_data[this_pmd_driver_id].data;
> > > > /* and then most likely either */
> > > > pmd_private_session->process(pmd_private_session, ...);
> > > > /* or jump based on session/input data */
> > > > Advantages:
> > > > 1) don't see any...
> > > > Disadvantages:
> > > > 2) User has to carry on dev_id inside data-path
> > > > 3) Extra level of indirection (plus data dependency) - both for data and
> > > instructions.
> > > > Possible slowdown compared to a) (not measured).
> > > >
> > > > Having said all this, if the disagreements cannot be resolved, you can go for a
> > > pmd API specific
> > > > to your PMDs,
> > >
> > > I don't think it is good idea.
> > > PMD-specific API is sort of a deprecated path; also there is no clean way to use it
> > > within the libraries.
> >
> > I know that this is a deprecated path; we can use it as long as we are allowed
> > to break ABI/API.
> >
> > >
> > > > because as per my understanding the solution doesn't look scalable to other
> > > PMDs.
> > > > Your approach is aligned only to Intel, and will not benefit others like openssl
> > > which is used by all
> > > > vendors.
> > >
> > > I feel quite the opposite; from my perspective the majority of SW-backed PMDs will
> > > benefit from it.
> > > And I don't see anything Intel specific in my proposals above.
> > > About openssl PMD: I am not an expert here, but looking at the code, I think it
> > > will fit really well.
> > > Look yourself at its internal functions:
> > > process_openssl_auth_op/process_openssl_crypto_op,
> > > I think they are doing exactly the same - they use the sync API underneath, and they are
> > > session based
> > > (AFAIK you don't need any device/queue data, everything that needed for
> > > crypto/auth is stored inside session).
Looked at drivers/crypto/armv8 - same story here, I believe.
> > >
> > By vendor specific, I mean,
> > - no PMD would like to have 2 different variants of session init APIs for doing the same stuff.
> > - stacks will become vendor specific while using 2 separate session create APIs. No stack would
> > like to support 2 variants of session create - one for HW PMDs and one for SW PMDs.
>
> I think what you refer to has nothing to do with 'vendor specific'.
> I would name it 'extra overhead for PMD and stack writers'.
> Yes, for sure there is extra overhead (as always with new API) -
> for both producer (PMD writer) and consumer (stack writer):
> New function(s) to support, probably more tests to create/run, etc.
> Though this API is optional - if PMD/stack maintainer doesn't see
> value in it, they are free not to support it.
> From the other side, re-using rte_cryptodev_sym_session_init()
> wouldn't help anyway - both data-path and control-path would differ
> from async mode anyway.
> BTW, right now to support different HW flavors
> we do have 4 different control and data-paths for both
> ipsec-secgw and librte_ipsec:
> lksd-none/lksd-proto/inline-crypto/inline-proto.
> And that is considered to be ok.
> Honestly, I don't understand why SW backed implementations
> can't have their own path that would suit them best.
> Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-22 13:31 5% ` Akhil Goyal
@ 2019-10-22 17:44 0% ` Ananyev, Konstantin
2019-10-22 22:21 0% ` Ananyev, Konstantin
2019-10-23 10:05 0% ` Akhil Goyal
0 siblings, 2 replies; 200+ results
From: Ananyev, Konstantin @ 2019-10-22 17:44 UTC (permalink / raw)
To: Akhil Goyal, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Zhang, Roy Fan, Doherty, Declan
Cc: 'Anoob Joseph', Hemant Agrawal
Hi Akhil,
> > > Added my comments inline with your draft.
> > > [snip]..
> > >
> > > >
> > > > Ok, then my suggestion:
> > > > Let's at least write down all points about crypto-dev approach where we
> > > > disagree and then probably try to resolve them one by one....
> > > > If we fail to make an agreement/progress in next week or so,
> > > > (and no more reviews from the community)
> > > > will have bring that subject to TB meeting to decide.
> > > > Sounds fair to you?
> > > Agreed
> > > >
> > > > List is below.
> > > > Please add/correct me, if I missed something.
> > > >
> > > > Konstantin
> > >
> > > Before going into comparison, we should define the requirement as well.
> >
> > Good point.
> >
> > > What I understood from the patchset,
> > > "You need a synchronous API to perform crypto operations on raw data using
> > SW PMDs"
> > > So,
> > > - no crypto-ops,
> > > - no separate enq-deq, only single process API for data path
> > > - Do not need any value addition to the session parameters.
> > > (You would need some parameters from the crypto-op which
> > > are constant per session, and since you won't use the crypto-op,
> > > you need some place to store them)
> >
> > Yes, this is correct, I think.
> >
> > >
> > > Now as per your mail, the comparison
> > > 1. extra input parameters to create/init rte_(cpu)_sym_session.
> > >
> > > Will leverage existing 6B gap inside rte_crypto_*_xform between 'algo' and
> > 'key' fields.
> > > New fields will be optional and would be used by PMD only when cpu-crypto
> > session is requested.
> > > For lksd-crypto session PMD is free to ignore these fields.
> > > No ABI breakage is required.
> > >
> > > [Akhil] Agreed, no issues.
> > >
> > > 2. cpu-crypto create/init.
> > > a) Our suggestion - introduce new API for that:
> > > - rte_crypto_cpu_sym_init() that would init completely opaque
> > rte_crypto_cpu_sym_session.
> > > - struct rte_crypto_cpu_sym_session_ops {(*process)(...); (*clear);
> > /*whatever else we'll need *'};
> > > - rte_crypto_cpu_sym_get_ops(const struct rte_crypto_sym_xform
> > *xforms)
> > > that would return const struct rte_crypto_cpu_sym_session_ops *based
> > on input xforms.
> > > Advantages:
> > > 1) totally opaque data structure (no ABI breakages in future), PMD
> > writer is totally free
> > > with its format and contents.
> > >
> > > [Akhil] It will have breakage at some point, once we hit the union size limit.
> >
> > Not sure, what union you are talking about?
>
> Union of xforms in rte_security_session_conf
Hmm, how does it relate here?
I thought we were discussing pure rte_cryptodev_sym_session, no?
>
> >
> > > Rather I don't suspect there will be more parameters added.
> > > Or do we really care about the ABI breakage when the argument is about
> > > the correct place to add a piece of code or do we really agree to add code
> > > anywhere just to avoid that breakage.
> >
> > I am talking about maintaining it in the future.
> > If your struct is not seen externally, there is no chance of introducing ABI breakage.
> >
> > >
> > > 2) each session entity is self-contained, user doesn't need to bring along
> > dev_id etc.
> > > dev_id is needed only at init stage, after that user will use session ops
> > to perform
> > > all operations on that session (process(), clear(), etc.).
> > >
> > > [Akhil] There is nothing called as session ops in current DPDK.
> >
> > True, but it doesn't mean we can't/shouldn't have it.
>
> We can have it if it is not adding complexity for the user. Creating 2 different code
> paths for the user is not desirable for the stack developers.
>
> >
> > > What you are proposing
> > > is a new concept which doesn't have any extra benefit, rather it is adding
> > complexity
> > > to have two different code paths for session create.
> > >
> > >
> > > 3) User can decide does he wants to store ops[] pointer on a per session
> > basis,
> > > or on a per group of same sessions, or...
> > >
> > > [Akhil] Will the user really care which process API should be called from the
> > PMD.
> > > Rather it should be driver's responsibility to store that in the session private
> > data
> > > which would be opaque to the user. As per my suggestion same process
> > function can
> > > be added to multiple sessions or a single session can be managed inside the
> > PMD.
> >
> > In that case we either need to have a function per session (stored internally),
> > or make decision (branches) at run-time.
> > But as I said in other mail - I am ok to add small shim structure here:
> > either rte_crypto_cpu_sym_session { void *ses; struct
> > rte_crypto_cpu_sym_session_ops ops; }
> > or rte_crypto_cpu_sym_session { void *ses; struct
> > rte_crypto_cpu_sym_session_ops *ops; }
> > And merge rte_crypto_cpu_sym_init() and rte_crypto_cpu_sym_get_ops() into
> > one (init).
>
> Again that will be a separate API call from the user perspective which is not good.
>
> >
> > >
> > >
> > > 4) No mandatory mempools for private sessions. User can allocate
> > memory for cpu-crypto
> > > session whenever he likes.
> > >
> > > [Akhil] you mean session private data?
> >
> > Yes.
> >
> > > You would need that memory anyways, user will be
> > > allocating that already. You do not need to manage that.
> >
> > What I am saying - right now user has no choice but to allocate it via mempool.
> > which is probably not the best option for all cases.
> >
> > >
> > > Disadvantages:
> > > 5) Extra changes in control path
> > > 6) User has to store session_ops pointer explicitly.
> > >
> > > [Akhil] More disadvantages:
> > > - All supporting PMDs will need to maintain TWO types of session for the
> > > same crypto processing. Suppose a fix or a new feature(or algo) is added, PMD
> > owner
> > > will need to add code in both the session create APIs. Hence more
> > maintenance and
> > > error prone.
> >
> > I think the majority of code for both paths will be common, plus we'll even reuse
> > the current sym_session_init() -
> > changes in PMD session_init() code will be unavoidable.
> > But yes, it will be new entry in devops, that PMD will have to support.
> > Ok to add it as 7) to the list.
> >
> > > - Stacks which will be using these new APIs also need to maintain two
> > > code path for the same processing while doing session initialization
> > > for sync and async
> >
> > That's the same as #5 above, I think.
> >
> > >
> > >
> > > b) Your suggestion - reuse existing rte_cryptodev_sym_session_init() and
> > existing rte_cryptodev_sym_session
> > > structure.
> > > Advantages:
> > > 1) allows to reuse same struct and init/create/clear() functions.
> > > Probably less changes in control path.
> > > Disadvantages:
> > > 2) rte_cryptodev_sym_session. sess_data[] is indexed by driver_id,
> > which means that
> > > we can't use the same rte_cryptodev_sym_session to hold private
> > sessions pointers
> > > for both sync and async mode for the same device.
> > > So the only option we have - make PMD devops-
> > >sym_session_configure()
> > > always create a session that can work in both cpu and lksd modes.
> > > For some implementations that would probably mean that under the
> > hood PMD would create
> > > 2 different session structs (sync/async) and then use one or another
> > depending on from what API been called.
> > > Seems doable, but ...:
> > > - will contradict with statement from 1:
> > > " New fields will be optional and would be used by PMD only when
> > cpu-crypto session is requested."
> > > Now it becomes mandatory for all apps to specify cpu-crypto
> > related parameters too,
> > > even if they don't plan to use that mode - i.e. behavior change,
> > existing app change.
> > > - might cause extra space overhead.
> > >
> > > [Akhil] It will not contradict with #1, you will only have few checks in the
> > session init PMD
> > > Which support this mode, find appropriate values and set the appropriate
> > process() in it.
> > > User should be able to call, legacy enq-deq as well as the new process()
> > without any issue.
> > > User would be at runtime will be able to change the datapath.
> > > So this is not a disadvantage, it would be additional flexibility for the user.
> >
> > Ok, but that's what I am saying - if PMD would *always* have to create a
> > session that can handle
> > both modes (sync/async), then user would *always* have to provide parameters
> > for both modes too.
> > Otherwise if let say user didn't setup sync specific parameters at all, what PMD
> > should do?
> > - return with error?
> > - init session that can be used with async path only?
> > My current assumption is #1.
> > If #2, then how user will be able to distinguish is that session valid for both
> > modes, or only for one?
>
> I would say a 3rd option, do nothing if sync params are not set.
> Probably have a debug print in the PMD (which supports sync mode) to specify that
> session is not configured properly for sync mode.
So, just print a warning and proceed with initializing a session that can be used with the async path only?
Then it sounds the same as #2 above.
Which actually means that sync mode parameters for sym_session_init() become optional.
Then we need an API to provide the user with information on what modes
(sync+async/async only) are supported by that session for a given dev_id.
And user would have to query/retain this information at control-path,
and store it somewhere in user-space together with session pointer and dev_ids
to use later at data-path (same as we do now for session type).
That definitely requires changes in control-path to start using it.
Plus, the fact that this value can differ between dev_ids for the same session
doesn't make things easier here.
> Internally the PMD will not store the process() API in the session priv data,
> and when the first packet is processed, devops->process will assert that the session
> is not configured for sync mode. The session validation would be done in either case,
> your suggestion or mine. So no extra overhead at runtime.
I believe that after session_init() the user should get either an error or a
valid session handle that they can use at runtime.
Pushing session validation to runtime doesn't seem like a good idea.
>
> >
> >
> > >
> > >
> > > 3) not possible to store device (not driver) specific data within the
> > session, but I think it is not really needed right now.
> > > So probably minor compared to 2.b.2.
> > >
> > > [Akhil] So lets omit this for current discussion. And I hope we can find some
> > way to deal with it.
> >
> > I don't think there is an easy way to fix that with existing API.
> >
> > >
> > >
> > > Actually #3 follows from #2, but decided to have them separated.
> > >
> > > 3. process() parameters/behavior
> > > a) Our suggestion: user stores ptr to session ops (or to (*process) itself) and
> > just does:
> > > session_ops->process(sess, ...);
> > > Advantages:
> > > 1) fastest possible execution path
> > > 2) no need to carry on dev_id for data-path
> > >
> > > [Akhil] I don't see any overhead of carrying dev id, at least it would be in line
> > with the
> > > current DPDK methodology.
> >
> > If we'll add process() into rte_cryptodev itself (same as we have
> > enqueue_burst/dequeue_burst),
> > then it will be an ABI breakage.
> > Also there are discussions to get rid of that approach completely:
> > http://mails.dpdk.org/archives/dev/2019-September/144674.html
> > So I am not sure this is a recommended way these days.
>
> We can either have it in rte_cryptodev or in rte_cryptodev_ops whichever
> is good for you.
>
> Whether it is ABI breakage or not, as per your requirements, this is the correct
> approach. Do you agree with this or not?
I think it is a possible approach, but not the best one:
it looks quite flakey to me (see all the uncertainty with sym_session_init above),
plus it introduces extra overhead at the data-path.
>
> Now handling the API/ABI breakage is a separate story. In the 19.11 release we
> are not much concerned about the ABI breakages; this was discussed in the
> community. So adding a new dev_ops wouldn't have been an issue.
> Now since we are so close to the RC1 deadline, we should come up with some
> other solution for the next release. Maybe having a PMD API in 20.02 and
> converting it into a formal one in 20.11.
>
>
> >
> > > What you are suggesting is a new way to get the things done without much
> > benefit.
> >
> > It would help with ABI stability plus better performance - isn't that enough?
> >
> > > Also I don't see any performance difference as crypto workload is heavier than
> > > code cycles, so that won't matter.
> >
> > It depends.
> > Suppose a function call costs you ~30 cycles.
> > If you have a burst of big packets (let's say crypto for each will take ~2K cycles) that
> > belong
> > to the same session, then yes, you wouldn't notice these extra 30 cycles at all.
> > If you have a burst of small packets (let's say crypto for each will take ~300 cycles)
> > each
> > belongs to a different session, then it will cost you ~10% extra.
>
> Let us do some profiling on openssl with both the approaches and find out the
> difference.
>
> >
> > > So IMO, there is no advantage in your suggestion either.
> > >
> > >
> > > Disadvantages:
> > > 3) user has to carry on session_ops pointer explicitly
> > > b) Your suggestion: add (*cpu_process) inside rte_cryptodev_ops and then:
> > > rte_crypto_cpu_sym_process(uint8_t dev_id, rte_cryptodev_sym_session
> > *sess, /*data parameters*/) {...
> > > rte_cryptodevs[dev_id].dev_ops->cpu_process(ses, ...);
> > > /*and then inside PMD specific process: */
> > > pmd_private_session = sess->sess_data[this_pmd_driver_id].data;
> > > /* and then most likely either */
> > > pmd_private_session->process(pmd_private_session, ...);
> > > /* or jump based on session/input data */
> > > Advantages:
> > > 1) don't see any...
> > > Disadvantages:
> > > 2) User has to carry on dev_id inside data-path
> > > 3) Extra level of indirection (plus data dependency) - both for data and
> > instructions.
> > > Possible slowdown compared to a) (not measured).
> > >
> > > Having said all this, if the disagreements cannot be resolved, you can go for a
> > pmd API specific
> > > to your PMDs,
> >
> > I don't think it is good idea.
> > PMD-specific API is sort of a deprecated path; also there is no clean way to use it
> > within the libraries.
>
> I know that this is a deprecated path; we can use it as long as we are allowed
> to break ABI/API.
>
> >
> > > because as per my understanding the solution doesn't look scalable to other
> > PMDs.
> > > Your approach is aligned only to Intel, and will not benefit others like openssl
> > which is used by all
> > > vendors.
> >
> > I feel quite the opposite; from my perspective the majority of SW-backed PMDs will
> > benefit from it.
> > And I don't see anything Intel specific in my proposals above.
> > About openssl PMD: I am not an expert here, but looking at the code, I think it
> > will fit really well.
> > Look yourself at its internal functions:
> > process_openssl_auth_op/process_openssl_crypto_op,
> > I think they are doing exactly the same - they use the sync API underneath, and they are
> > session based
> > (AFAIK you don't need any device/queue data, everything that needed for
> > crypto/auth is stored inside session).
> >
> By vendor specific, I mean,
> - no PMD would like to have 2 different variants of session init APIs for doing the same stuff.
> - stacks will become vendor specific while using 2 separate session create APIs. No stack would
> like to support 2 variants of session create - one for HW PMDs and one for SW PMDs.
I think what you refer to has nothing to do with 'vendor specific'.
I would name it 'extra overhead for PMD and stack writers'.
Yes, for sure there is extra overhead (as always with new API) -
for both producer (PMD writer) and consumer (stack writer):
New function(s) to support, probably more tests to create/run, etc.
Though this API is optional - if PMD/stack maintainer doesn't see
value in it, they are free not to support it.
From the other side, re-using rte_cryptodev_sym_session_init()
wouldn't help anyway - both data-path and control-path would differ
from async mode anyway.
BTW, right now to support different HW flavors
we do have 4 different control and data-paths for both
ipsec-secgw and librte_ipsec:
lksd-none/lksd-proto/inline-crypto/inline-proto.
And that is considered to be ok.
Honestly, I don't understand why SW backed implementations
can't have their own path that would suit them best.
Konstantin
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v8] eal: make lcore_config private
2019-10-22 16:30 0% ` Stephen Hemminger
@ 2019-10-22 16:49 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-22 16:49 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On Tue, Oct 22, 2019 at 6:30 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
>
> On Tue, 22 Oct 2019 11:05:01 +0200
> David Marchand <david.marchand@redhat.com> wrote:
>
> > On Wed, Oct 2, 2019 at 9:40 PM Stephen Hemminger
> > <stephen@networkplumber.org> wrote:
> > > +struct lcore_config {
> > > + pthread_t thread_id; /**< pthread identifier */
> > > + int pipe_master2slave[2]; /**< communication pipe with master */
> > > + int pipe_slave2master[2]; /**< communication pipe with master */
> > > +
> > > + lcore_function_t * volatile f; /**< function to call */
> > > + void * volatile arg; /**< argument of function */
> > > + volatile int ret; /**< return value of function */
> > > +
> > > + uint32_t core_id; /**< core number on socket for this lcore */
> > > + uint32_t core_index; /**< relative index, starting from 0 */
> > > + uint16_t socket_id; /**< physical socket id for this lcore */
> > > + uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
> > > + uint8_t detected; /**< true if lcore was detected */
> > > + volatile enum rte_lcore_state_t state; /**< lcore state */
> > > + rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
> > > +};
> >
> > There are still changes on the core_id, core_index, socket_id that I
am not comfortable with (at this point).
> >
> > I prepared a series for -rc1 on ABI changes in EAL (that I will send shortly).
> > I took your patch without the changes on core_id, core_index and socket_id.
>
>
> Why, please be more precise.
>
I commented earlier that there were integer conversions with the fields
you changed.
core_id is ok, and a uint32_t would be fine, but this does not change the size.
socket_id needs investigation, but should be safe.
I am nervous about core_index, because it is used as a signed integer.
It looks too dangerous to blindly accept this change when the only
reason is saving space.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 8/8] log: hide internal log structure
2019-10-22 9:32 3% ` [dpdk-dev] [PATCH 8/8] log: hide internal log structure David Marchand
@ 2019-10-22 16:35 0% ` Stephen Hemminger
2019-10-23 13:02 0% ` David Marchand
1 sibling, 0 replies; 200+ results
From: Stephen Hemminger @ 2019-10-22 16:35 UTC (permalink / raw)
To: David Marchand; +Cc: dev, anatoly.burakov, thomas
On Tue, 22 Oct 2019 11:32:41 +0200
David Marchand <david.marchand@redhat.com> wrote:
> No need to expose rte_logs, hide it and remove it from the current ABI.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> ---
> lib/librte_eal/common/eal_common_log.c | 23 ++++++++++++++++-------
> lib/librte_eal/common/include/rte_log.h | 20 +++-----------------
> lib/librte_eal/rte_eal_version.map | 1 -
> 3 files changed, 19 insertions(+), 25 deletions(-)
>
> diff --git a/lib/librte_eal/common/eal_common_log.c b/lib/librte_eal/common/eal_common_log.c
> index cfe9599..3a7ab88 100644
> --- a/lib/librte_eal/common/eal_common_log.c
> +++ b/lib/librte_eal/common/eal_common_log.c
> @@ -17,13 +17,6 @@
>
> #include "eal_private.h"
>
> -/* global log structure */
> -struct rte_logs rte_logs = {
> - .type = ~0,
> - .level = RTE_LOG_DEBUG,
> - .file = NULL,
> -};
> -
> struct rte_eal_opt_loglevel {
> /** Next list entry */
> TAILQ_ENTRY(rte_eal_opt_loglevel) next;
> @@ -58,6 +51,22 @@ struct rte_log_dynamic_type {
> uint32_t loglevel;
> };
>
> +/** The rte_log structure. */
> +struct rte_logs {
> + uint32_t type; /**< Bitfield with enabled logs. */
> + uint32_t level; /**< Log level. */
> + FILE *file; /**< Output file set by rte_openlog_stream, or NULL. */
> + size_t dynamic_types_len;
> + struct rte_log_dynamic_type *dynamic_types;
> +};
> +
> +/* global log structure */
> +static struct rte_logs rte_logs = {
> + .type = ~0,
> + .level = RTE_LOG_DEBUG,
> + .file = NULL,
> +};
> +
> /* per core log */
> static RTE_DEFINE_PER_LCORE(struct log_cur_msg, log_cur_msg);
>
> diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
> index 1bb0e66..a8d0eb7 100644
> --- a/lib/librte_eal/common/include/rte_log.h
> +++ b/lib/librte_eal/common/include/rte_log.h
> @@ -26,20 +26,6 @@ extern "C" {
> #include <rte_config.h>
> #include <rte_compat.h>
>
> -struct rte_log_dynamic_type;
> -
> -/** The rte_log structure. */
> -struct rte_logs {
> - uint32_t type; /**< Bitfield with enabled logs. */
> - uint32_t level; /**< Log level. */
> - FILE *file; /**< Output file set by rte_openlog_stream, or NULL. */
> - size_t dynamic_types_len;
> - struct rte_log_dynamic_type *dynamic_types;
> -};
> -
> -/** Global log information */
> -extern struct rte_logs rte_logs;
> -
> /* SDK log type */
> #define RTE_LOGTYPE_EAL 0 /**< Log related to eal. */
> #define RTE_LOGTYPE_MALLOC 1 /**< Log related to malloc. */
> @@ -260,7 +246,7 @@ void rte_log_dump(FILE *f);
> * to rte_openlog_stream().
> *
> * The level argument determines if the log should be displayed or
> - * not, depending on the global rte_logs variable.
> + * not, depending on the global log level and the per logtype level.
> *
> * The preferred alternative is the RTE_LOG() because it adds the
> * level and type in the logged string.
> @@ -291,8 +277,8 @@ int rte_log(uint32_t level, uint32_t logtype, const char *format, ...)
> * to rte_openlog_stream().
> *
> * The level argument determines if the log should be displayed or
> - * not, depending on the global rte_logs variable. A trailing
> - * newline may be added if needed.
> + * not, depending on the global log level and the per logtype level.
> + * A trailing newline may be added if needed.
> *
> * The preferred alternative is the RTE_LOG() because it adds the
> * level and type in the logged string.
> diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
> index 6d7e0e4..ca9ace0 100644
> --- a/lib/librte_eal/rte_eal_version.map
> +++ b/lib/librte_eal/rte_eal_version.map
> @@ -45,7 +45,6 @@ DPDK_2.0 {
> rte_log;
> rte_log_cur_msg_loglevel;
> rte_log_cur_msg_logtype;
> - rte_logs;
> rte_malloc;
> rte_malloc_dump_stats;
> rte_malloc_get_socket_stats;
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v8] eal: make lcore_config private
2019-10-22 9:05 3% ` David Marchand
@ 2019-10-22 16:30 0% ` Stephen Hemminger
2019-10-22 16:49 0% ` David Marchand
0 siblings, 1 reply; 200+ results
From: Stephen Hemminger @ 2019-10-22 16:30 UTC (permalink / raw)
To: David Marchand; +Cc: dev
On Tue, 22 Oct 2019 11:05:01 +0200
David Marchand <david.marchand@redhat.com> wrote:
> On Wed, Oct 2, 2019 at 9:40 PM Stephen Hemminger
> <stephen@networkplumber.org> wrote:
> > +struct lcore_config {
> > + pthread_t thread_id; /**< pthread identifier */
> > + int pipe_master2slave[2]; /**< communication pipe with master */
> > + int pipe_slave2master[2]; /**< communication pipe with master */
> > +
> > + lcore_function_t * volatile f; /**< function to call */
> > + void * volatile arg; /**< argument of function */
> > + volatile int ret; /**< return value of function */
> > +
> > + uint32_t core_id; /**< core number on socket for this lcore */
> > + uint32_t core_index; /**< relative index, starting from 0 */
> > + uint16_t socket_id; /**< physical socket id for this lcore */
> > + uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
> > + uint8_t detected; /**< true if lcore was detected */
> > + volatile enum rte_lcore_state_t state; /**< lcore state */
> > + rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
> > +};
>
> There are still changes on the core_id, core_index, socket_id that I
> am not comfortable with (at this point).
>
> I prepared a series for -rc1 on ABI changes in EAL (that I will send shortly).
> I took your patch without the changes on core_id, core_index and socket_id.
Why? Please be more precise.
Do you expect to support more than 32 bits' worth of cores?
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-21 13:47 4% ` Ananyev, Konstantin
@ 2019-10-22 13:31 5% ` Akhil Goyal
2019-10-22 17:44 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Akhil Goyal @ 2019-10-22 13:31 UTC (permalink / raw)
To: Ananyev, Konstantin, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Zhang, Roy Fan, Doherty, Declan
Cc: 'Anoob Joseph', Hemant Agrawal
Hi Konstantin,
>
>
> Hi Akhil,
>
>
> > Added my comments inline with your draft.
> > [snip]..
> >
> > >
> > > Ok, then my suggestion:
> > > Let's at least write down all points about crypto-dev approach where we
> > > disagree and then probably try to resolve them one by one....
> > > If we fail to make an agreement/progress in next week or so,
> > > (and no more reviews from the community)
> > > will have bring that subject to TB meeting to decide.
> > > Sounds fair to you?
> > Agreed
> > >
> > > List is below.
> > > Please add/correct me, if I missed something.
> > >
> > > Konstantin
> >
> > Before going into comparison, we should define the requirement as well.
>
> Good point.
>
> > What I understood from the patchset,
> > "You need a synchronous API to perform crypto operations on raw data using
> SW PMDs"
> > So,
> > - no crypto-ops,
> > - no separate enq-deq, only single process API for data path
> > - Do not need any value addition to the session parameters.
> > (You would need some parameters from the crypto-op which
> > Are constant per session and since you wont use crypto-op,
> > You need some place to store that)
>
> Yes, this is correct, I think.
>
> >
> > Now as per your mail, the comparison
> > 1. extra input parameters to create/init rte_(cpu)_sym_session.
> >
> > Will leverage existing 6B gap inside rte_crypto_*_xform between 'algo' and
> 'key' fields.
> > New fields will be optional and would be used by PMD only when cpu-crypto
> session is requested.
> > For lksd-crypto session PMD is free to ignore these fields.
> > No ABI breakage is required.
> >
> > [Akhil] Agreed, no issues.
> >
> > 2. cpu-crypto create/init.
> > a) Our suggestion - introduce new API for that:
> > - rte_crypto_cpu_sym_init() that would init completely opaque
> rte_crypto_cpu_sym_session.
> > - struct rte_crypto_cpu_sym_session_ops {(*process)(...); (*clear);
> /*whatever else we'll need *'};
> > - rte_crypto_cpu_sym_get_ops(const struct rte_crypto_sym_xform
> *xforms)
> > that would return const struct rte_crypto_cpu_sym_session_ops *based
> on input xforms.
> > Advantages:
> > 1) totally opaque data structure (no ABI breakages in future), PMD
> writer is totally free
> > with it format and contents.
> >
> > [Akhil] It will have breakage at some point till we don't hit the union size.
>
> Not sure, what union you are talking about?
Union of xforms in rte_security_session_conf
>
> > Rather I don't suspect there will be more parameters added.
> > Or do we really care about the ABI breakage when the argument is about
> > the correct place to add a piece of code or do we really agree to add code
> > anywhere just to avoid that breakage.
>
> I am talking about maintaining it in future.
> if your struct is not seen externally, no chances to introduce ABI breakage.
>
> >
> > 2) each session entity is self-contained, user doesn't need to bring along
> dev_id etc.
> > dev_id is needed only at init stage, after that user will use session ops
> to perform
> > all operations on that session (process(), clear(), etc.).
> >
> > [Akhil] There is nothing called as session ops in current DPDK.
>
> True, but it doesn't mean we can't/shouldn't have it.
We can have it if it does not add complexity for the user. Creating two different code
paths for the user is not desirable for stack developers.
>
> > What you are proposing
> > is a new concept which doesn't have any extra benefit, rather it is adding
> complexity
> > to have two different code paths for session create.
> >
> >
> > 3) User can decide does he wants to store ops[] pointer on a per session
> basis,
> > or on a per group of same sessions, or...
> >
> > [Akhil] Will the user really care which process API should be called from the
> PMD.
> > Rather it should be driver's responsibility to store that in the session private
> data
> > which would be opaque to the user. As per my suggestion same process
> function can
> > be added to multiple sessions or a single session can be managed inside the
> PMD.
>
> In that case we either need to have a function per session (stored internally),
> or make decision (branches) at run-time.
> But as I said in other mail - I am ok to add small shim structure here:
> either rte_crypto_cpu_sym_session { void *ses; struct
> rte_crypto_cpu_sym_session_ops ops; }
> or rte_crypto_cpu_sym_session { void *ses; struct
> rte_crypto_cpu_sym_session_ops *ops; }
> And merge rte_crypto_cpu_sym_init() and rte_crypto_cpu_sym_get_ops() into
> one (init).
Again that will be a separate API call from the user perspective which is not good.
>
> >
> >
> > 4) No mandatory mempools for private sessions. User can allocate
> memory for cpu-crypto
> > session whenever he likes.
> >
> > [Akhil] you mean session private data?
>
> Yes.
>
> > You would need that memory anyways, user will be
> > allocating that already. You do not need to manage that.
>
> What I am saying - right now user has no choice but to allocate it via mempool.
> Which is probably not the best options for all cases.
>
> >
> > Disadvantages:
> > 5) Extra changes in control path
> > 6) User has to store session_ops pointer explicitly.
> >
> > [Akhil] More disadvantages:
> > - All supporting PMDs will need to maintain TWO types of session for the
> > same crypto processing. Suppose a fix or a new feature(or algo) is added, PMD
> owner
> > will need to add code in both the session create APIs. Hence more
> maintenance and
> > error prone.
>
> I think majority of code for both paths will be common, plus even we'll reuse
> current sym_session_init() -
> changes in PMD session_init() code will be unavoidable.
> But yes, it will be new entry in devops, that PMD will have to support.
> Ok to add it as 7) to the list.
>
> > - Stacks which will be using these new APIs also need to maintain two
> > code path for the same processing while doing session initialization
> > for sync and async
>
> That's the same as #5 above, I think.
>
> >
> >
> > b) Your suggestion - reuse existing rte_cryptodev_sym_session_init() and
> existing rte_cryptodev_sym_session
> > structure.
> > Advantages:
> > 1) allows to reuse same struct and init/create/clear() functions.
> > Probably less changes in control path.
> > Disadvantages:
> > 2) rte_cryptodev_sym_session. sess_data[] is indexed by driver_id,
> which means that
> > we can't use the same rte_cryptodev_sym_session to hold private
> sessions pointers
> > for both sync and async mode for the same device.
> > So the only option we have - make PMD devops-
> >sym_session_configure()
> > always create a session that can work in both cpu and lksd modes.
> > For some implementations that would probably mean that under the
> hood PMD would create
> > 2 different session structs (sync/async) and then use one or another
> depending on from what API been called.
> > Seems doable, but ...:
> > - will contradict with statement from 1:
> > " New fields will be optional and would be used by PMD only when
> cpu-crypto session is requested."
> > Now it becomes mandatory for all apps to specify cpu-crypto
> related parameters too,
> > even if they don't plan to use that mode - i.e. behavior change,
> existing app change.
> > - might cause extra space overhead.
> >
> > [Akhil] It will not contradict with #1, you will only have few checks in the
> session init PMD
> > Which support this mode, find appropriate values and set the appropriate
> process() in it.
> > User should be able to call, legacy enq-deq as well as the new process()
> without any issue.
> > User would be at runtime will be able to change the datapath.
> > So this is not a disadvantage, it would be additional flexibility for the user.
>
> Ok, but that's what I am saying - if PMD would *always* have to create a
> session that can handle
> both modes (sync/async), then user would *always* have to provide parameters
> for both modes too.
> Otherwise if let say user didn't setup sync specific parameters at all, what PMD
> should do?
> - return with error?
> - init session that can be used with async path only?
> My current assumption is #1.
> If #2, then how user will be able to distinguish is that session valid for both
> modes, or only for one?
I would say a 3rd option: do nothing if sync params are not set.
Probably have a debug print in the PMDs which support sync mode to indicate that
the session is not configured properly for sync mode.
Internally the PMD will not store the process() API in the session private data,
and on the first packet devops->process will assert that the session
is not configured for sync mode. The session validation would be done in any case,
with your suggestion or mine, so there is no extra overhead at runtime.
>
>
> >
> >
> > 3) not possible to store device (not driver) specific data within the
> session, but I think it is not really needed right now.
> > So probably minor compared to 2.b.2.
> >
> > [Akhil] So lets omit this for current discussion. And I hope we can find some
> way to deal with it.
>
> I don't think there is an easy way to fix that with existing API.
>
> >
> >
> > Actually #3 follows from #2, but decided to have them separated.
> >
> > 3. process() parameters/behavior
> > a) Our suggestion: user stores ptr to session ops (or to (*process) itself) and
> just does:
> > session_ops->process(sess, ...);
> > Advantages:
> > 1) fastest possible execution path
> > 2) no need to carry on dev_id for data-path
> >
> > [Akhil] I don't see any overhead of carrying dev id, at least it would be inline
> with the
> > current DPDK methodology.
>
> If we'll add process() into rte_cryptodev itself (same as we have
> enqueue_burst/dequeue_burst),
> then it will be an ABI breakage.
> Also there are discussions to get rid of that approach completely:
> http://mails.dpdk.org/archives/dev/2019-September/144674.html
> So I am not sure this is a recommended way these days.
We can have it either in rte_cryptodev or in rte_cryptodev_ops, whichever
works for you.
Whether it is an ABI breakage or not, as per your requirements, this is the correct
approach. Do you agree with this or not?
Handling the API/ABI breakage is a separate story. In the 19.11 release we
are not much concerned about ABI breakages; this was discussed in the
community, so adding a new dev_ops wouldn't have been an issue.
Since we are so close to the RC1 deadline, we should come up with some
other solution for the next release, maybe having a PMD API in 20.02 and
converting it into a formal one in 20.11.
>
> > What you are suggesting is a new way to get the things done without much
> benefit.
>
> Would help with ABI stability plus better performance, isn't it enough?
>
> > Also I don't see any performance difference as crypto workload is heavier than
> > Code cycles, so that wont matter.
>
> It depends.
> Suppose function call costs you ~30 cycles.
> If you have burst of big packets (let say crypto for each will take ~2K cycles) that
> belong
> to the same session, then yes you wouldn't notice these extra 30 cycles at all.
> If you have burst of small packets (let say crypto for each will take ~300 cycles)
> each
> belongs to different session, then it will cost you ~10% extra.
Let us do some profiling on openssl with both approaches and find out the
difference.
>
> > So IMO, there is no advantage in your suggestion as well.
> >
> >
> > Disadvantages:
> > 3) user has to carry on session_ops pointer explicitly
> > b) Your suggestion: add (*cpu_process) inside rte_cryptodev_ops and then:
> > rte_crypto_cpu_sym_process(uint8_t dev_id, rte_cryptodev_sym_session
> *sess, /*data parameters*/) {...
> > rte_cryptodevs[dev_id].dev_ops->cpu_process(ses, ...);
> > /*and then inside PMD specifc process: */
> > pmd_private_session = sess->sess_data[this_pmd_driver_id].data;
> > /* and then most likely either */
> > pmd_private_session->process(pmd_private_session, ...);
> > /* or jump based on session/input data */
> > Advantages:
> > 1) don't see any...
> > Disadvantages:
> > 2) User has to carry on dev_id inside data-path
> > 3) Extra level of indirection (plus data dependency) - both for data and
> instructions.
> > Possible slowdown compared to a) (not measured).
> >
> > Having said all this, if the disagreements cannot be resolved, you can go for a
> pmd API specific
> > to your PMDs,
>
> I don't think it is good idea.
> PMD specific API is sort of deprecated path, also there is no clean way to use it
> within the libraries.
I know that this is a deprecated path, we can use it until we are not allowed
to break ABI/API
>
> > because as per my understanding the solution doesn't look scalable to other
> PMDs.
> > Your approach is aligned only to Intel , will not benefit others like openssl
> which is used by all
> > vendors.
>
> I feel quite opposite, from my perspective majority of SW backed PMDs will
> benefit from it.
> And I don't see anything Intel specific in my proposals above.
> About openssl PMD: I am not an expert here, but looking at the code, I think it
> will fit really well.
> Look yourself at its internal functions:
> process_openssl_auth_op/process_openssl_crypto_op,
> I think they doing exactly the same - they use sync API underneath, and they are
> session based
> (AFAIK you don't need any device/queue data, everything that needed for
> crypto/auth is stored inside session).
>
By vendor specific, I mean:
- no PMD would like to have two different variants of session init APIs for doing the same thing;
- stacks will become vendor specific when using two separate session create APIs. No stack would
like to support two variants of session create: one for HW PMDs and one for SW PMDs.
-Akhil
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [RFC] ethdev: add new fields for max LRO session size
2019-10-18 16:35 0% ` Ferruh Yigit
2019-10-18 18:05 0% ` Ananyev, Konstantin
@ 2019-10-22 12:56 0% ` Andrew Rybchenko
1 sibling, 0 replies; 200+ results
From: Andrew Rybchenko @ 2019-10-22 12:56 UTC (permalink / raw)
To: Ferruh Yigit, Thomas Monjalon, Matan Azrad
Cc: dev, Konstantin Ananyev, Olivier Matz
On 10/18/19 7:35 PM, Ferruh Yigit wrote:
> On 10/2/2019 2:58 PM, Thomas Monjalon wrote:
>> 24/09/2019 14:03, Matan Azrad:
>>> From: Ferruh Yigit
>>>> On 9/15/2019 8:48 AM, Matan Azrad wrote:
>>>>> Hi Ferruh
>>>>>
>>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>>> On 8/29/2019 8:47 AM, Matan Azrad wrote:
>>>>>>> It may be needed by the user to limit the LRO session packet size.
>>>>>>> In order to allow the above limitation, add new Rx configuration for
>>>>>>> the maximum LRO session size.
>>>>>>>
>>>>>>> In addition, Add a new capability to expose the maximum LRO session
>>>>>>> size supported by the port.
>>>>>>>
>>>>>>> Signed-off-by: Matan Azrad <matan@mellanox.com>
>>>>>> Hi Matan,
>>>>>>
>>>>>> Is there any existing user of this new field?
>>>>> All the LRO users need it due to the next reasons:
>>>>>
>>>>> 1. If scatter is enabled - The dpdk user can limit the LRO session size created
>>>> by the HW by this field, if no field like that - there is no way to limit it.
>>>>> 2. No scatter - the dpdk user may want to limit the LRO packet size in order
>>>> to save enough tail-room in the mbuf for its own usage.
>>>>> 3. The limitation of max_rx_pkt_len is not enough - doesn't make sense to
>>>> limit LRO traffic as single packet.
>>>> So should there be more complement patches to this RFC? To update the
>>>> users of the field with the new field.
>>>
>>> We already exposed it as ABI breakage in the last deprecation notice.
>>> We probably cannot complete it for 19.11 version, hopefully for 20.02 it will be completed.
>> We won't break the ABI in 20.02.
>> What should be done in 19.11?
>>
> The ask was to add code that uses new added fields, this patch only adds new
> field to two public ethdev struct.
>
> @Thomas, @Andrew, if this patch doesn't go in this release it will have to
> wait a year. I would like to see the implementation but it is not there, what is
> your comment?
I don't mind accepting it in 19.11, modulo a better description of
what the LRO session length/size is.
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH 6/8] pci: remove deprecated functions
2019-10-22 9:32 8% [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11 David Marchand
` (2 preceding siblings ...)
2019-10-22 9:32 5% ` [dpdk-dev] [PATCH 3/8] eal: remove deprecated malloc virt2phys function David Marchand
@ 2019-10-22 9:32 4% ` David Marchand
2019-10-22 9:32 3% ` [dpdk-dev] [PATCH 8/8] log: hide internal log structure David Marchand
` (2 subsequent siblings)
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-22 9:32 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic, Gaetan Rivet
Those functions have been deprecated since 17.11 and have 1:1
replacement.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 7 -----
doc/guides/rel_notes/release_19_11.rst | 6 +++++
lib/librte_pci/rte_pci.c | 19 --------------
lib/librte_pci/rte_pci.h | 47 ----------------------------------
lib/librte_pci/rte_pci_version.map | 3 ---
5 files changed, 6 insertions(+), 76 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index bbd5863..cf7744e 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -38,13 +38,6 @@ Deprecation Notices
have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
functions. The due date for the removal targets DPDK 20.02.
-* pci: Several exposed functions are misnamed.
- The following functions are deprecated starting from v17.11 and are replaced:
-
- - ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
- - ``eal_parse_pci_DomBDF`` replaced by ``rte_pci_addr_parse``
- - ``rte_eal_compare_pci_addr`` replaced by ``rte_pci_addr_cmp``
-
* dpaa2: removal of ``rte_dpaa2_memsegs`` structure which has been replaced
by a pa-va search library. This structure was earlier being used for holding
memory segments used by dpaa2 driver for faster pa->va translation. This
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 0c61c1c..579311d 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -210,6 +210,12 @@ ABI Changes
* eal: removed the ``rte_malloc_virt2phy`` function, replaced by
``rte_malloc_virt2iova`` since v17.11.
+* pci: removed the following deprecated functions since dpdk:
+
+ - ``eal_parse_pci_BDF`` replaced by ``rte_pci_addr_parse``
+ - ``eal_parse_pci_DomBDF`` replaced by ``rte_pci_addr_parse``
+ - ``rte_eal_compare_pci_addr`` replaced by ``rte_pci_addr_cmp``
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_pci/rte_pci.c b/lib/librte_pci/rte_pci.c
index f400178..a753cf3 100644
--- a/lib/librte_pci/rte_pci.c
+++ b/lib/librte_pci/rte_pci.c
@@ -87,18 +87,6 @@ pci_dbdf_parse(const char *input, struct rte_pci_addr *dev_addr)
return 0;
}
-int
-eal_parse_pci_BDF(const char *input, struct rte_pci_addr *dev_addr)
-{
- return pci_bdf_parse(input, dev_addr);
-}
-
-int
-eal_parse_pci_DomBDF(const char *input, struct rte_pci_addr *dev_addr)
-{
- return pci_dbdf_parse(input, dev_addr);
-}
-
void
rte_pci_device_name(const struct rte_pci_addr *addr,
char *output, size_t size)
@@ -110,13 +98,6 @@ rte_pci_device_name(const struct rte_pci_addr *addr,
}
int
-rte_eal_compare_pci_addr(const struct rte_pci_addr *addr,
- const struct rte_pci_addr *addr2)
-{
- return rte_pci_addr_cmp(addr, addr2);
-}
-
-int
rte_pci_addr_cmp(const struct rte_pci_addr *addr,
const struct rte_pci_addr *addr2)
{
diff --git a/lib/librte_pci/rte_pci.h b/lib/librte_pci/rte_pci.h
index eaa9d07..c878914 100644
--- a/lib/librte_pci/rte_pci.h
+++ b/lib/librte_pci/rte_pci.h
@@ -106,37 +106,6 @@ struct mapped_pci_resource {
TAILQ_HEAD(mapped_pci_res_list, mapped_pci_resource);
/**
- * @deprecated
- * Utility function to produce a PCI Bus-Device-Function value
- * given a string representation. Assumes that the BDF is provided without
- * a domain prefix (i.e. domain returned is always 0)
- *
- * @param input
- * The input string to be parsed. Should have the format XX:XX.X
- * @param dev_addr
- * The PCI Bus-Device-Function address to be returned.
- * Domain will always be returned as 0
- * @return
- * 0 on success, negative on error.
- */
-int eal_parse_pci_BDF(const char *input, struct rte_pci_addr *dev_addr);
-
-/**
- * @deprecated
- * Utility function to produce a PCI Bus-Device-Function value
- * given a string representation. Assumes that the BDF is provided including
- * a domain prefix.
- *
- * @param input
- * The input string to be parsed. Should have the format XXXX:XX:XX.X
- * @param dev_addr
- * The PCI Bus-Device-Function address to be returned
- * @return
- * 0 on success, negative on error.
- */
-int eal_parse_pci_DomBDF(const char *input, struct rte_pci_addr *dev_addr);
-
-/**
* Utility function to write a pci device name, this device name can later be
* used to retrieve the corresponding rte_pci_addr using eal_parse_pci_*
* BDF helpers.
@@ -152,22 +121,6 @@ void rte_pci_device_name(const struct rte_pci_addr *addr,
char *output, size_t size);
/**
- * @deprecated
- * Utility function to compare two PCI device addresses.
- *
- * @param addr
- * The PCI Bus-Device-Function address to compare
- * @param addr2
- * The PCI Bus-Device-Function address to compare
- * @return
- * 0 on equal PCI address.
- * Positive on addr is greater than addr2.
- * Negative on addr is less than addr2, or error.
- */
-int rte_eal_compare_pci_addr(const struct rte_pci_addr *addr,
- const struct rte_pci_addr *addr2);
-
-/**
* Utility function to compare two PCI device addresses.
*
* @param addr
diff --git a/lib/librte_pci/rte_pci_version.map b/lib/librte_pci/rte_pci_version.map
index c028027..03790cb 100644
--- a/lib/librte_pci/rte_pci_version.map
+++ b/lib/librte_pci/rte_pci_version.map
@@ -1,11 +1,8 @@
DPDK_17.11 {
global:
- eal_parse_pci_BDF;
- eal_parse_pci_DomBDF;
pci_map_resource;
pci_unmap_resource;
- rte_eal_compare_pci_addr;
rte_pci_addr_cmp;
rte_pci_addr_parse;
rte_pci_device_name;
--
1.8.3.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH 3/8] eal: remove deprecated malloc virt2phys function
2019-10-22 9:32 8% [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11 David Marchand
2019-10-22 9:32 12% ` [dpdk-dev] [PATCH 1/8] eal: make lcore config private David Marchand
2019-10-22 9:32 5% ` [dpdk-dev] [PATCH 2/8] eal: remove deprecated CPU flags check function David Marchand
@ 2019-10-22 9:32 5% ` David Marchand
2019-10-22 9:32 4% ` [dpdk-dev] [PATCH 6/8] pci: remove deprecated functions David Marchand
` (3 subsequent siblings)
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-22 9:32 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic
Remove rte_malloc_virt2phy as announced previously.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_19_11.rst | 3 +++
lib/librte_eal/common/include/rte_malloc.h | 7 -------
3 files changed, 3 insertions(+), 10 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 50ac348..bbd5863 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,9 +34,6 @@ Deprecation Notices
+ ``rte_eal_devargs_type_count``
-* eal: The ``rte_malloc_virt2phy`` function has been deprecated and replaced
- by ``rte_malloc_virt2iova`` since v17.11 and will be removed.
-
* vfio: removal of ``rte_vfio_dma_map`` and ``rte_vfio_dma_unmap`` APIs which
have been replaced with ``rte_dev_dma_map`` and ``rte_dev_dma_unmap``
functions. The due date for the removal targets DPDK 20.02.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 8bf2437..0c61c1c 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -207,6 +207,9 @@ ABI Changes
* eal: removed the ``rte_cpu_check_supported`` function, replaced by
``rte_cpu_is_supported`` since dpdk v17.08.
+* eal: removed the ``rte_malloc_virt2phy`` function, replaced by
+ ``rte_malloc_virt2iova`` since v17.11.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/include/rte_malloc.h b/lib/librte_eal/common/include/rte_malloc.h
index 3593fb4..42ca051 100644
--- a/lib/librte_eal/common/include/rte_malloc.h
+++ b/lib/librte_eal/common/include/rte_malloc.h
@@ -553,13 +553,6 @@ rte_malloc_set_limit(const char *type, size_t max);
rte_iova_t
rte_malloc_virt2iova(const void *addr);
-__rte_deprecated
-static inline phys_addr_t
-rte_malloc_virt2phy(const void *addr)
-{
- return rte_malloc_virt2iova(addr);
-}
-
#ifdef __cplusplus
}
#endif
--
1.8.3.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH 8/8] log: hide internal log structure
2019-10-22 9:32 8% [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11 David Marchand
` (3 preceding siblings ...)
2019-10-22 9:32 4% ` [dpdk-dev] [PATCH 6/8] pci: remove deprecated functions David Marchand
@ 2019-10-22 9:32 3% ` David Marchand
2019-10-22 16:35 0% ` Stephen Hemminger
2019-10-23 13:02 0% ` David Marchand
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
6 siblings, 2 replies; 200+ results
From: David Marchand @ 2019-10-22 9:32 UTC (permalink / raw)
To: dev; +Cc: stephen, anatoly.burakov, thomas
No need to expose rte_logs, hide it and remove it from the current ABI.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
lib/librte_eal/common/eal_common_log.c | 23 ++++++++++++++++-------
lib/librte_eal/common/include/rte_log.h | 20 +++-----------------
lib/librte_eal/rte_eal_version.map | 1 -
3 files changed, 19 insertions(+), 25 deletions(-)
diff --git a/lib/librte_eal/common/eal_common_log.c b/lib/librte_eal/common/eal_common_log.c
index cfe9599..3a7ab88 100644
--- a/lib/librte_eal/common/eal_common_log.c
+++ b/lib/librte_eal/common/eal_common_log.c
@@ -17,13 +17,6 @@
#include "eal_private.h"
-/* global log structure */
-struct rte_logs rte_logs = {
- .type = ~0,
- .level = RTE_LOG_DEBUG,
- .file = NULL,
-};
-
struct rte_eal_opt_loglevel {
/** Next list entry */
TAILQ_ENTRY(rte_eal_opt_loglevel) next;
@@ -58,6 +51,22 @@ struct rte_log_dynamic_type {
uint32_t loglevel;
};
+/** The rte_log structure. */
+struct rte_logs {
+ uint32_t type; /**< Bitfield with enabled logs. */
+ uint32_t level; /**< Log level. */
+ FILE *file; /**< Output file set by rte_openlog_stream, or NULL. */
+ size_t dynamic_types_len;
+ struct rte_log_dynamic_type *dynamic_types;
+};
+
+/* global log structure */
+static struct rte_logs rte_logs = {
+ .type = ~0,
+ .level = RTE_LOG_DEBUG,
+ .file = NULL,
+};
+
/* per core log */
static RTE_DEFINE_PER_LCORE(struct log_cur_msg, log_cur_msg);
diff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h
index 1bb0e66..a8d0eb7 100644
--- a/lib/librte_eal/common/include/rte_log.h
+++ b/lib/librte_eal/common/include/rte_log.h
@@ -26,20 +26,6 @@ extern "C" {
#include <rte_config.h>
#include <rte_compat.h>
-struct rte_log_dynamic_type;
-
-/** The rte_log structure. */
-struct rte_logs {
- uint32_t type; /**< Bitfield with enabled logs. */
- uint32_t level; /**< Log level. */
- FILE *file; /**< Output file set by rte_openlog_stream, or NULL. */
- size_t dynamic_types_len;
- struct rte_log_dynamic_type *dynamic_types;
-};
-
-/** Global log information */
-extern struct rte_logs rte_logs;
-
/* SDK log type */
#define RTE_LOGTYPE_EAL 0 /**< Log related to eal. */
#define RTE_LOGTYPE_MALLOC 1 /**< Log related to malloc. */
@@ -260,7 +246,7 @@ void rte_log_dump(FILE *f);
* to rte_openlog_stream().
*
* The level argument determines if the log should be displayed or
- * not, depending on the global rte_logs variable.
+ * not, depending on the global log level and the per logtype level.
*
* The preferred alternative is the RTE_LOG() because it adds the
* level and type in the logged string.
@@ -291,8 +277,8 @@ int rte_log(uint32_t level, uint32_t logtype, const char *format, ...)
* to rte_openlog_stream().
*
* The level argument determines if the log should be displayed or
- * not, depending on the global rte_logs variable. A trailing
- * newline may be added if needed.
+ * not, depending on the global log level and the per logtype level.
+ * A trailing newline may be added if needed.
*
* The preferred alternative is the RTE_LOG() because it adds the
* level and type in the logged string.
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 6d7e0e4..ca9ace0 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -45,7 +45,6 @@ DPDK_2.0 {
rte_log;
rte_log_cur_msg_loglevel;
rte_log_cur_msg_logtype;
- rte_logs;
rte_malloc;
rte_malloc_dump_stats;
rte_malloc_get_socket_stats;
--
1.8.3.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH 2/8] eal: remove deprecated CPU flags check function
2019-10-22 9:32 8% [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11 David Marchand
2019-10-22 9:32 12% ` [dpdk-dev] [PATCH 1/8] eal: make lcore config private David Marchand
@ 2019-10-22 9:32 5% ` David Marchand
2019-10-22 9:32 5% ` [dpdk-dev] [PATCH 3/8] eal: remove deprecated malloc virt2phys function David Marchand
` (4 subsequent siblings)
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-22 9:32 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic
Remove rte_cpu_check_supported as announced previously.
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
doc/guides/rel_notes/deprecation.rst | 3 ---
doc/guides/rel_notes/release_19_11.rst | 3 +++
lib/librte_eal/common/eal_common_cpuflags.c | 11 -----------
lib/librte_eal/common/include/generic/rte_cpuflags.h | 9 ---------
lib/librte_eal/rte_eal_version.map | 1 -
5 files changed, 3 insertions(+), 24 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index e4a33e0..50ac348 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -34,9 +34,6 @@ Deprecation Notices
+ ``rte_eal_devargs_type_count``
-* eal: The ``rte_cpu_check_supported`` function has been deprecated since
- v17.08 and will be removed.
-
* eal: The ``rte_malloc_virt2phy`` function has been deprecated and replaced
by ``rte_malloc_virt2iova`` since v17.11 and will be removed.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index d7e14b4..8bf2437 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -204,6 +204,9 @@ ABI Changes
* eal: made the ``lcore_config`` struct and global symbol private.
+* eal: removed the ``rte_cpu_check_supported`` function, replaced by
+ ``rte_cpu_is_supported`` since dpdk v17.08.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/eal_common_cpuflags.c b/lib/librte_eal/common/eal_common_cpuflags.c
index 3a055f7..dc5f75d 100644
--- a/lib/librte_eal/common/eal_common_cpuflags.c
+++ b/lib/librte_eal/common/eal_common_cpuflags.c
@@ -7,17 +7,6 @@
#include <rte_common.h>
#include <rte_cpuflags.h>
-/**
- * Checks if the machine is adequate for running the binary. If it is not, the
- * program exits with status 1.
- */
-void
-rte_cpu_check_supported(void)
-{
- if (!rte_cpu_is_supported())
- exit(1);
-}
-
int
rte_cpu_is_supported(void)
{
diff --git a/lib/librte_eal/common/include/generic/rte_cpuflags.h b/lib/librte_eal/common/include/generic/rte_cpuflags.h
index 156ea00..872f0eb 100644
--- a/lib/librte_eal/common/include/generic/rte_cpuflags.h
+++ b/lib/librte_eal/common/include/generic/rte_cpuflags.h
@@ -49,15 +49,6 @@ rte_cpu_get_flag_enabled(enum rte_cpu_flag_t feature);
/**
* This function checks that the currently used CPU supports the CPU features
* that were specified at compile time. It is called automatically within the
- * EAL, so does not need to be used by applications.
- */
-__rte_deprecated
-void
-rte_cpu_check_supported(void);
-
-/**
- * This function checks that the currently used CPU supports the CPU features
- * that were specified at compile time. It is called automatically within the
* EAL, so does not need to be used by applications. This version returns a
* result so that decisions may be made (for instance, graceful shutdowns).
*/
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index aeedf39..0887549 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -8,7 +8,6 @@ DPDK_2.0 {
per_lcore__rte_errno;
rte_calloc;
rte_calloc_socket;
- rte_cpu_check_supported;
rte_cpu_get_flag_enabled;
rte_cycles_vmware_tsc_map;
rte_delay_us;
--
1.8.3.1
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH 1/8] eal: make lcore config private
2019-10-22 9:32 8% [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11 David Marchand
@ 2019-10-22 9:32 12% ` David Marchand
2019-10-22 9:32 5% ` [dpdk-dev] [PATCH 2/8] eal: remove deprecated CPU flags check function David Marchand
` (5 subsequent siblings)
6 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-22 9:32 UTC (permalink / raw)
To: dev
Cc: stephen, anatoly.burakov, thomas, Neil Horman, John McNamara,
Marko Kovacevic, Harry van Haaren, Harini Ramakrishnan,
Omar Cardona, Anand Rawat, Ranjit Menon
From: Stephen Hemminger <stephen@networkplumber.org>
The internal structure of lcore_config does not need to be part of
visible API/ABI. Make it private to EAL.
Rearrange the structure so it takes less memory (and cache footprint).
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David Marchand <david.marchand@redhat.com>
---
Based on Stephen v8: http://patchwork.dpdk.org/patch/60443/
Changes since Stephen v8:
- do not change core_id, socket_id and core_index types,
---
doc/guides/rel_notes/deprecation.rst | 4 ----
doc/guides/rel_notes/release_19_11.rst | 2 ++
lib/librte_eal/common/eal_common_launch.c | 2 ++
lib/librte_eal/common/eal_private.h | 25 +++++++++++++++++++++++++
lib/librte_eal/common/include/rte_lcore.h | 24 ------------------------
lib/librte_eal/common/rte_service.c | 2 ++
lib/librte_eal/rte_eal_version.map | 1 -
lib/librte_eal/windows/eal/eal_thread.c | 1 +
8 files changed, 32 insertions(+), 29 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 237813b..e4a33e0 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -23,10 +23,6 @@ Deprecation Notices
* eal: The function ``rte_eal_remote_launch`` will return new error codes
after read or write error on the pipe, instead of calling ``rte_panic``.
-* eal: The ``lcore_config`` struct and global symbol will be made private to
- remove it from the externally visible ABI and allow it to be updated in the
- future.
-
* eal: both declaring and identifying devices will be streamlined in v18.11.
New functions will appear to query a specific port from buses, classes of
device and device drivers. Device declaration will be made coherent with the
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 40121b9..d7e14b4 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -202,6 +202,8 @@ ABI Changes
Also, make sure to start the actual text at the margin.
=========================================================
+* eal: made the ``lcore_config`` struct and global symbol private.
+
Shared Library Versions
-----------------------
diff --git a/lib/librte_eal/common/eal_common_launch.c b/lib/librte_eal/common/eal_common_launch.c
index fe0ba3f..cf52d71 100644
--- a/lib/librte_eal/common/eal_common_launch.c
+++ b/lib/librte_eal/common/eal_common_launch.c
@@ -15,6 +15,8 @@
#include <rte_per_lcore.h>
#include <rte_lcore.h>
+#include "eal_private.h"
+
/*
* Wait until a lcore finished its job.
*/
diff --git a/lib/librte_eal/common/eal_private.h b/lib/librte_eal/common/eal_private.h
index 798ede5..0e4b033 100644
--- a/lib/librte_eal/common/eal_private.h
+++ b/lib/librte_eal/common/eal_private.h
@@ -10,6 +10,31 @@
#include <stdio.h>
#include <rte_dev.h>
+#include <rte_lcore.h>
+
+/**
+ * Structure storing internal configuration (per-lcore)
+ */
+struct lcore_config {
+ pthread_t thread_id; /**< pthread identifier */
+ int pipe_master2slave[2]; /**< communication pipe with master */
+ int pipe_slave2master[2]; /**< communication pipe with master */
+
+ lcore_function_t * volatile f; /**< function to call */
+ void * volatile arg; /**< argument of function */
+ volatile int ret; /**< return value of function */
+
+ volatile enum rte_lcore_state_t state; /**< lcore state */
+ unsigned int socket_id; /**< physical socket id for this lcore */
+ unsigned int core_id; /**< core number on socket for this lcore */
+ int core_index; /**< relative index, starting from 0 */
+ uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
+ uint8_t detected; /**< true if lcore was detected */
+
+ rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
+};
+
+extern struct lcore_config lcore_config[RTE_MAX_LCORE];
/**
* Initialize the memzone subsystem (private to eal).
diff --git a/lib/librte_eal/common/include/rte_lcore.h b/lib/librte_eal/common/include/rte_lcore.h
index c86f72e..0c68391 100644
--- a/lib/librte_eal/common/include/rte_lcore.h
+++ b/lib/librte_eal/common/include/rte_lcore.h
@@ -66,30 +66,6 @@ typedef cpuset_t rte_cpuset_t;
} while (0)
#endif
-/**
- * Structure storing internal configuration (per-lcore)
- */
-struct lcore_config {
- unsigned detected; /**< true if lcore was detected */
- pthread_t thread_id; /**< pthread identifier */
- int pipe_master2slave[2]; /**< communication pipe with master */
- int pipe_slave2master[2]; /**< communication pipe with master */
- lcore_function_t * volatile f; /**< function to call */
- void * volatile arg; /**< argument of function */
- volatile int ret; /**< return value of function */
- volatile enum rte_lcore_state_t state; /**< lcore state */
- unsigned socket_id; /**< physical socket id for this lcore */
- unsigned core_id; /**< core number on socket for this lcore */
- int core_index; /**< relative index, starting from 0 */
- rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
- uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
-};
-
-/**
- * Internal configuration (per-lcore)
- */
-extern struct lcore_config lcore_config[RTE_MAX_LCORE];
-
RTE_DECLARE_PER_LCORE(unsigned, _lcore_id); /**< Per thread "lcore id". */
RTE_DECLARE_PER_LCORE(rte_cpuset_t, _cpuset); /**< Per thread "cpuset". */
diff --git a/lib/librte_eal/common/rte_service.c b/lib/librte_eal/common/rte_service.c
index beb9691..79235c0 100644
--- a/lib/librte_eal/common/rte_service.c
+++ b/lib/librte_eal/common/rte_service.c
@@ -21,6 +21,8 @@
#include <rte_memory.h>
#include <rte_malloc.h>
+#include "eal_private.h"
+
#define RTE_SERVICE_NUM_MAX 64
#define SERVICE_F_REGISTERED (1 << 0)
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 7cbf82d..aeedf39 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -4,7 +4,6 @@ DPDK_2.0 {
__rte_panic;
eal_parse_sysfs_value;
eal_timer_source;
- lcore_config;
per_lcore__lcore_id;
per_lcore__rte_errno;
rte_calloc;
diff --git a/lib/librte_eal/windows/eal/eal_thread.c b/lib/librte_eal/windows/eal/eal_thread.c
index 906502f..0591d4c 100644
--- a/lib/librte_eal/windows/eal/eal_thread.c
+++ b/lib/librte_eal/windows/eal/eal_thread.c
@@ -12,6 +12,7 @@
#include <rte_common.h>
#include <eal_thread.h>
+#include "eal_private.h"
RTE_DEFINE_PER_LCORE(unsigned int, _lcore_id) = LCORE_ID_ANY;
--
1.8.3.1
^ permalink raw reply [relevance 12%]
* [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11
@ 2019-10-22 9:32 8% David Marchand
2019-10-22 9:32 12% ` [dpdk-dev] [PATCH 1/8] eal: make lcore config private David Marchand
` (6 more replies)
0 siblings, 7 replies; 200+ results
From: David Marchand @ 2019-10-22 9:32 UTC (permalink / raw)
To: dev; +Cc: stephen, anatoly.burakov, thomas
Let's prepare for the ABI freeze.
The first patches are about changes that had been announced before (with
a patch from Stephen that I took as-is, since it is ready from my pov).
The malloc_heap structure from the memory subsystem can be hidden.
The PCI library had some forgotten deprecated APIs that are removed with
this series.
Finally, rte_logs could be hidden, but I am not that comfortable about
doing it right away: I added an accessor to rte_logs.file, but I am fine
with dropping the last patch and wait for actually hiding this in the next
ABI break.
Comments?
--
David Marchand
David Marchand (7):
eal: remove deprecated CPU flags check function
eal: remove deprecated malloc virt2phys function
mem: hide internal heap header
net/bonding: use non deprecated PCI API
pci: remove deprecated functions
log: add log stream accessor
log: hide internal log structure
Stephen Hemminger (1):
eal: make lcore config private
app/test-pmd/testpmd.c | 1 -
doc/guides/rel_notes/deprecation.rst | 17 -------
doc/guides/rel_notes/release_19_11.rst | 14 +++++
drivers/common/qat/qat_logs.c | 3 +-
drivers/common/qat/qat_logs.h | 3 +-
drivers/net/bonding/rte_eth_bond_args.c | 5 +-
lib/librte_eal/common/Makefile | 2 +-
lib/librte_eal/common/eal_common_cpuflags.c | 11 ----
lib/librte_eal/common/eal_common_launch.c | 2 +
lib/librte_eal/common/eal_common_log.c | 59 ++++++++++++++--------
lib/librte_eal/common/eal_memcfg.h | 3 +-
lib/librte_eal/common/eal_private.h | 25 +++++++++
.../common/include/generic/rte_cpuflags.h | 9 ----
lib/librte_eal/common/include/rte_lcore.h | 24 ---------
lib/librte_eal/common/include/rte_log.h | 33 ++++++------
lib/librte_eal/common/include/rte_malloc.h | 7 ---
lib/librte_eal/common/include/rte_malloc_heap.h | 35 -------------
lib/librte_eal/common/malloc_heap.h | 25 ++++++++-
lib/librte_eal/common/meson.build | 1 -
lib/librte_eal/common/rte_service.c | 2 +
lib/librte_eal/rte_eal_version.map | 6 +--
lib/librte_eal/windows/eal/eal_thread.c | 1 +
lib/librte_pci/rte_pci.c | 19 -------
lib/librte_pci/rte_pci.h | 47 -----------------
lib/librte_pci/rte_pci_version.map | 3 --
25 files changed, 132 insertions(+), 225 deletions(-)
delete mode 100644 lib/librte_eal/common/include/rte_malloc_heap.h
--
1.8.3.1
^ permalink raw reply [relevance 8%]
* Re: [dpdk-dev] [PATCH v8] eal: make lcore_config private
@ 2019-10-22 9:05 3% ` David Marchand
2019-10-22 16:30 0% ` Stephen Hemminger
0 siblings, 1 reply; 200+ results
From: David Marchand @ 2019-10-22 9:05 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: dev
On Wed, Oct 2, 2019 at 9:40 PM Stephen Hemminger
<stephen@networkplumber.org> wrote:
> +struct lcore_config {
> + pthread_t thread_id; /**< pthread identifier */
> + int pipe_master2slave[2]; /**< communication pipe with master */
> + int pipe_slave2master[2]; /**< communication pipe with master */
> +
> + lcore_function_t * volatile f; /**< function to call */
> + void * volatile arg; /**< argument of function */
> + volatile int ret; /**< return value of function */
> +
> + uint32_t core_id; /**< core number on socket for this lcore */
> + uint32_t core_index; /**< relative index, starting from 0 */
> + uint16_t socket_id; /**< physical socket id for this lcore */
> + uint8_t core_role; /**< role of core eg: OFF, RTE, SERVICE */
> + uint8_t detected; /**< true if lcore was detected */
> + volatile enum rte_lcore_state_t state; /**< lcore state */
> + rte_cpuset_t cpuset; /**< cpu set which the lcore affinity to */
> +};
There are still changes on the core_id, core_index, socket_id that I
am not comfortable with (at this point).
I prepared a series for -rc1 on ABI changes in EAL (that I will send shortly).
I took your patch without the changes on core_id, core_index and socket_id.
We can discuss those changes later, thanks.
--
David Marchand
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v6 0/4] doc: changes to abi policy introducing major abi versions
2019-10-21 14:38 5% ` Thomas Monjalon
@ 2019-10-22 8:12 5% ` Ray Kinsella
0 siblings, 0 replies; 200+ results
From: Ray Kinsella @ 2019-10-22 8:12 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
On 21/10/2019 15:38, Thomas Monjalon wrote:
> 21/10/2019 12:10, Ray Kinsella:
>>
>> On 21/10/2019 10:50, Thomas Monjalon wrote:
>>> 27/09/2019 18:54, Ray Kinsella:
>>>> TL;DR abbreviation:
>>>> A major ABI version that all DPDK releases during a one year period
>>>> support. ABI versioning is managed at a project-level, in place of library-level
>>>> management. ABI changes to add new features are permitted, as long as ABI
>>>> compatibility with the major ABI version is maintained.
>>>>
>>>> Detail:
>>>> This patch introduces major ABI versions, supported for one year and released
>>>> aligned with the LTS release. This ABI version is then supported by all
>>>> subsequent releases within that one year period. The intention is that the one
>>>> year support period, will then be reviewed after the initial year with the
>>>> intention of lengthening the support period for the next ABI version.
>>>
>>> For the record, I would prefer a v7 saying it is a fixed period of time,
>>> being one year at first and should be longer next.
>>> Please don't state "supported for one year", which can be understood as a general truth.
>>
>> Well I was very careful to only state an _intention_ to lengthen the fix period,
>> I thought it prudent to avoid words like "should", as nothing is known until the year is behind us.
>>
>> Where I used the word "support", I talk about "abi support".
>> I suggest rewording as follows:-
>>
>> This patch introduces major ABI versions, released aligned with the LTS release,
>> maintained for one year through all subsequent releases within that one year period.
>> The intention is that the one year abi support period, will then be reviewed after
>> the initial year with the intention of lengthening the period for the next ABI version.
>
> Yes, looks better.
>
> I am going to review carefully the series.
>
Ok - I will hold fire on any changes then.
Ray K
^ permalink raw reply [relevance 5%]
* Re: [dpdk-dev] [PATCH v6 0/4] doc: changes to abi policy introducing major abi versions
2019-10-21 10:10 10% ` Ray Kinsella
@ 2019-10-21 14:38 5% ` Thomas Monjalon
2019-10-22 8:12 5% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2019-10-21 14:38 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
21/10/2019 12:10, Ray Kinsella:
>
> On 21/10/2019 10:50, Thomas Monjalon wrote:
> > 27/09/2019 18:54, Ray Kinsella:
> >> TL;DR abbreviation:
> >> A major ABI version that all DPDK releases during a one year period
> >> support. ABI versioning is managed at a project-level, in place of library-level
> >> management. ABI changes to add new features are permitted, as long as ABI
> >> compatibility with the major ABI version is maintained.
> >>
> >> Detail:
> >> This patch introduces major ABI versions, supported for one year and released
> >> aligned with the LTS release. This ABI version is then supported by all
> >> subsequent releases within that one year period. The intention is that the one
> >> year support period, will then be reviewed after the initial year with the
> >> intention of lengthening the support period for the next ABI version.
> >
> > For the record, I would prefer a v7 saying it is a fixed period of time,
> > being one year at first and should be longer next.
> > Please don't state "supported for one year", which can be understood as a general truth.
>
> Well I was very careful to only state an _intention_ to lengthen the fix period,
> I thought it prudent to avoid words like "should", as nothing is known until the year is behind us.
>
> Where I used the word "support", I talk about "abi support".
> I suggest rewording as follows:-
>
> This patch introduces major ABI versions, released aligned with the LTS release,
> maintained for one year through all subsequent releases within that one year period.
> The intention is that the one year abi support period, will then be reviewed after
> the initial year with the intention of lengthening the period for the next ABI version.
Yes, looks better.
I am going to review carefully the series.
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v8 00/13] vhost packed ring performance optimization
2019-10-21 15:40 3% ` [dpdk-dev] [PATCH v7 " Marvin Liu
@ 2019-10-21 22:08 3% ` Marvin Liu
2019-10-24 6:49 0% ` Maxime Coquelin
2019-10-24 16:08 3% ` [dpdk-dev] [PATCH v9 " Marvin Liu
0 siblings, 2 replies; 200+ results
From: Marvin Liu @ 2019-10-21 22:08 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie, zhihong.wang, stephen, gavin.hu
Cc: dev, Marvin Liu
Packed ring has more compact ring format and thus can significantly
reduce the number of cache miss. It can lead to better performance.
This has been proven in the virtio user driver: on a normal E5 Xeon CPU,
single core performance can rise by 12%.
http://mails.dpdk.org/archives/dev/2018-April/095470.html
However, vhost performance with the packed ring was decreased.
Through analysis, most of the extra cost came from calculating each
descriptor flag, which depends on the ring wrap counter. Moreover, both
frontend and backend need to write same descriptors which will cause
cache contention. In particular, the virtio packed ring refill function
may write to the same cache line that the vhost enqueue function is
writing. This kind of extra cache cost will reduce the benefit
of reducing cache misses.
For optimizing vhost packed ring performance, vhost enqueue and dequeue
functions will be split into fast and normal paths.
Several methods will be taken in fast path:
Handle descriptors in one cache line by batch.
Split loop function into more pieces and unroll them.
Prerequisite check whether I/O data can be copied directly into mbuf
space and vice versa.
Prerequisite check whether descriptor mapping is successful.
Distinguish vhost used ring update function by enqueue and dequeue
function.
Buffer dequeue used descriptors as many as possible.
Update enqueue used descriptors by cache line.
After all these methods are applied, single core vhost PvP performance with
64B packets on Xeon 8180 can be boosted by 35%.
v8:
- Allocate mbuf by virtio_dev_pktmbuf_alloc
v7:
- Rebase code
- Rename unroll macro and definitions
- Calculate flags when doing single dequeue
v6:
- Fix dequeue zcopy result check
v5:
- Remove disable sw prefetch as performance impact is small
- Change unroll pragma macro format
- Rename shadow counter elements names
- Clean dequeue update check condition
- Add inline functions replace of duplicated code
- Unify code style
v4:
- Support meson build
- Remove memory region cache for no clear performance gain and ABI break
- Not assume ring size is power of two
v3:
- Check available index overflow
- Remove dequeue remained descs number check
- Remove changes in split ring datapath
- Call memory write barriers once when updating used flags
- Rename some functions and macros
- Code style optimization
v2:
- Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
- Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
- Optimize dequeue used ring update when in_order negotiated
Marvin Liu (13):
vhost: add packed ring indexes increasing function
vhost: add packed ring single enqueue
vhost: try to unroll for each loop
vhost: add packed ring batch enqueue
vhost: add packed ring single dequeue
vhost: add packed ring batch dequeue
vhost: flush enqueue updates by cacheline
vhost: flush batched enqueue descs directly
vhost: buffer packed ring dequeue updates
vhost: optimize packed ring enqueue
vhost: add packed ring zcopy batch and single dequeue
vhost: optimize packed ring dequeue
vhost: optimize packed ring dequeue when in-order
lib/librte_vhost/Makefile | 18 +
lib/librte_vhost/meson.build | 7 +
lib/librte_vhost/vhost.h | 57 ++
lib/librte_vhost/virtio_net.c | 948 +++++++++++++++++++++++++++-------
4 files changed, 837 insertions(+), 193 deletions(-)
--
2.17.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-18 13:17 4% ` Akhil Goyal
@ 2019-10-21 13:47 4% ` Ananyev, Konstantin
2019-10-22 13:31 5% ` Akhil Goyal
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2019-10-21 13:47 UTC (permalink / raw)
To: Akhil Goyal, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Zhang, Roy Fan, Doherty, Declan
Cc: 'Anoob Joseph', Hemant Agrawal
Hi Akhil,
> Added my comments inline with your draft.
> >
> >
> > Hi Akhil,
> >
> > > > BTW, to be honest, I don't consider current rte_cryptodev_sym_session
> > > > construct for multiple device_ids:
> > > > __extension__ struct {
> > > > void *data;
> > > > uint16_t refcnt;
> > > > } sess_data[0];
> > > > /**< Driver specific session material, variable size */
> > > >
> > > Yes I also feel the same. I was also not in favor of this when it was introduced.
> > > Please go ahead and remove this. I have no issues with that.
> >
> > If you are not happy with that structure, and admit there are issues with it,
> > why do you push for reusing it for cpu-crypto API?
> > Why not to take step back, take into account current drawbacks
> > and define something that (hopefully) would suite us better?
> > Again new API will be experimental for some time, so we'll
> > have some opportunity to see does it works and if not fix it.
>
> [Akhil] This structure is serving some use case which is agreed upon in the
> Community, we cannot just remove a feature altogether.
I understand that, but we are not suggesting removing anything that is already here.
We are talking about extending existing/adding new API.
All our debates around how much we can reuse from existing one and what new
needs to be added.
> Rather it is Intel's Use case only.
>
> >
> > About removing data[] from existing rte_cryptodev_sym_session -
> > Personally would like to do that, but the change seems to be too massive.
> > Definitely not ready for such effort right now.
> >
>
> [snip]..
>
> >
> > Ok, then my suggestion:
> > Let's at least write down all points about crypto-dev approach where we
> > disagree and then probably try to resolve them one by one....
> > If we fail to make an agreement/progress in next week or so,
> > (and no more reviews from the community)
> > will have bring that subject to TB meeting to decide.
> > Sounds fair to you?
> Agreed
> >
> > List is below.
> > Please add/correct me, if I missed something.
> >
> > Konstantin
>
> Before going into comparison, we should define the requirement as well.
Good point.
> What I understood from the patchset,
> "You need a synchronous API to perform crypto operations on raw data using SW PMDs"
> So,
> - no crypto-ops,
> - no separate enq-deq, only single process API for data path
> - Do not need any value addition to the session parameters.
> (You would need some parameters from the crypto-op which
> Are constant per session and since you wont use crypto-op,
> You need some place to store that)
Yes, this is correct, I think.
>
> Now as per your mail, the comparison
> 1. extra input parameters to create/init rte_(cpu)_sym_session.
>
> Will leverage existing 6B gap inside rte_crypto_*_xform between 'algo' and 'key' fields.
> New fields will be optional and would be used by PMD only when cpu-crypto session is requested.
> For lksd-crypto session PMD is free to ignore these fields.
> No ABI breakage is required.
>
> [Akhil] Agreed, no issues.
>
> 2. cpu-crypto create/init.
> a) Our suggestion - introduce new API for that:
> - rte_crypto_cpu_sym_init() that would init completely opaque rte_crypto_cpu_sym_session.
> - struct rte_crypto_cpu_sym_session_ops {(*process)(...); (*clear); /*whatever else we'll need *'};
> - rte_crypto_cpu_sym_get_ops(const struct rte_crypto_sym_xform *xforms)
> that would return const struct rte_crypto_cpu_sym_session_ops *based on input xforms.
> Advantages:
> 1) totally opaque data structure (no ABI breakages in future), PMD writer is totally free
> with it format and contents.
>
> [Akhil] It will have breakage at some point, once we hit the union size.
Not sure, what union you are talking about?
> Rather I don't suspect there will be more parameters added.
> Or do we really care about the ABI breakage when the argument is about
> the correct place to add a piece of code or do we really agree to add code
> anywhere just to avoid that breakage.
I am talking about maintaining it in future.
if your struct is not seen externally, no chances to introduce ABI breakage.
>
> 2) each session entity is self-contained, user doesn't need to bring along dev_id etc.
> dev_id is needed only at init stage, after that user will use session ops to perform
> all operations on that session (process(), clear(), etc.).
>
> [Akhil] There is nothing called as session ops in current DPDK.
True, but it doesn't mean we can't/shouldn't have it.
> What you are proposing
> is a new concept which doesn't have any extra benefit, rather it is adding complexity
> to have two different code paths for session create.
>
>
> 3) User can decide does he wants to store ops[] pointer on a per session basis,
> or on a per group of same sessions, or...
>
> [Akhil] Will the user really care which process API should be called from the PMD.
> Rather it should be driver's responsibility to store that in the session private data
> which would be opaque to the user. As per my suggestion same process function can
> be added to multiple sessions or a single session can be managed inside the PMD.
In that case we either need to have a function per session (stored internally),
or make decision (branches) at run-time.
But as I said in other mail - I am ok to add small shim structure here:
either rte_crypto_cpu_sym_session { void *ses; struct rte_crypto_cpu_sym_session_ops ops; }
or rte_crypto_cpu_sym_session { void *ses; struct rte_crypto_cpu_sym_session_ops *ops; }
And merge rte_crypto_cpu_sym_init() and rte_crypto_cpu_sym_get_ops() into one (init).
>
>
> 4) No mandatory mempools for private sessions. User can allocate memory for cpu-crypto
> session whenever he likes.
>
> [Akhil] you mean session private data?
Yes.
> You would need that memory anyways, user will be
> allocating that already. You do not need to manage that.
What I am saying - right now user has no choice but to allocate it via mempool.
Which is probably not the best option for all cases.
>
> Disadvantages:
> 5) Extra changes in control path
> 6) User has to store session_ops pointer explicitly.
>
> [Akhil] More disadvantages:
> - All supporting PMDs will need to maintain TWO types of session for the
> same crypto processing. Suppose a fix or a new feature(or algo) is added, PMD owner
> will need to add code in both the session create APIs. Hence more maintenance and
> error prone.
I think majority of code for both paths will be common, plus even we'll reuse current sym_session_init() -
changes in PMD session_init() code will be unavoidable.
But yes, it will be new entry in devops, that PMD will have to support.
Ok to add it as 7) to the list.
> - Stacks which will be using these new APIs also need to maintain two
> code path for the same processing while doing session initialization
> for sync and async
That's the same as #5 above, I think.
>
>
> b) Your suggestion - reuse existing rte_cryptodev_sym_session_init() and existing rte_cryptodev_sym_session
> structure.
> Advantages:
> 1) allows to reuse same struct and init/create/clear() functions.
> Probably less changes in control path.
> Disadvantages:
> 2) rte_cryptodev_sym_session. sess_data[] is indexed by driver_id, which means that
> we can't use the same rte_cryptodev_sym_session to hold private sessions pointers
> for both sync and async mode for the same device.
> So the only option we have - make PMD devops->sym_session_configure()
> always create a session that can work in both cpu and lksd modes.
> For some implementations that would probably mean that under the hood PMD would create
> 2 different session structs (sync/async) and then use one or another depending on from what API been called.
> Seems doable, but ...:
> - will contradict with statement from 1:
> " New fields will be optional and would be used by PMD only when cpu-crypto session is requested."
> Now it becomes mandatory for all apps to specify cpu-crypto related parameters too,
> even if they don't plan to use that mode - i.e. behavior change, existing app change.
> - might cause extra space overhead.
>
> [Akhil] It will not contradict with #1, you will only have few checks in the session init PMD
> Which support this mode, find appropriate values and set the appropriate process() in it.
> User should be able to call, legacy enq-deq as well as the new process() without any issue.
> User would be at runtime will be able to change the datapath.
> So this is not a disadvantage, it would be additional flexibility for the user.
Ok, but that's what I am saying - if PMD would *always* have to create a session that can handle
both modes (sync/async), then user would *always* have to provide parameters for both modes too.
Otherwise if let say user didn't setup sync specific parameters at all, what PMD should do?
- return with error?
- init session that can be used with async path only?
My current assumption is #1.
If #2, then how user will be able to distinguish is that session valid for both modes, or only for one?
>
>
> 3) not possible to store device (not driver) specific data within the session, but I think it is not really needed right now.
> So probably minor compared to 2.b.2.
>
> [Akhil] So lets omit this for current discussion. And I hope we can find some way to deal with it.
I don't think there is an easy way to fix that with existing API.
>
>
> Actually #3 follows from #2, but decided to have them separated.
>
> 3. process() parameters/behavior
> a) Our suggestion: user stores ptr to session ops (or to (*process) itself) and just does:
> session_ops->process(sess, ...);
> Advantages:
> 1) fastest possible execution path
> 2) no need to carry on dev_id for data-path
>
> [Akhil] I don't see any overhead of carrying dev id, at least it would be inline with the
> current DPDK methodology.
If we'll add process() into rte_cryptodev itself (same as we have enqueue_burst/dequeue_burst),
then it will be an ABI breakage.
Also there are discussions to get rid of that approach completely:
http://mails.dpdk.org/archives/dev/2019-September/144674.html
So I am not sure this is a recommended way these days.
> What you are suggesting is a new way to get the things done without much benefit.
Would help with ABI stability plus better performance, isn't it enough?
> Also I don't see any performance difference as crypto workload is heavier than
> Code cycles, so that wont matter.
It depends.
Suppose function call costs you ~30 cycles.
If you have burst of big packets (let say crypto for each will take ~2K cycles) that belong
to the same session, then yes you wouldn't notice these extra 30 cycles at all.
If you have burst of small packets (let say crypto for each will take ~300 cycles) each
belongs to different session, then it will cost you ~10% extra.
> So IMO, there is no advantage in your suggestion as well.
>
>
> Disadvantages:
> 3) user has to carry on session_ops pointer explicitly
> b) Your suggestion: add (*cpu_process) inside rte_cryptodev_ops and then:
> rte_crypto_cpu_sym_process(uint8_t dev_id, rte_cryptodev_sym_session *sess, /*data parameters*/) {...
> rte_cryptodevs[dev_id].dev_ops->cpu_process(ses, ...);
> /*and then inside PMD specifc process: */
> pmd_private_session = sess->sess_data[this_pmd_driver_id].data;
> /* and then most likely either */
> pmd_private_session->process(pmd_private_session, ...);
> /* or jump based on session/input data */
> Advantages:
> 1) don't see any...
> Disadvantages:
> 2) User has to carry on dev_id inside data-path
> 3) Extra level of indirection (plus data dependency) - both for data and instructions.
> Possible slowdown compared to a) (not measured).
>
> Having said all this, if the disagreements cannot be resolved, you can go for a pmd API specific
> to your PMDs,
I don't think it is a good idea.
A PMD-specific API is a sort of deprecated path; there is also no clean way to use it within the libraries.
> because as per my understanding the solution doesn't look scalable to other PMDs.
> Your approach is aligned only to Intel , will not benefit others like openssl which is used by all
> vendors.
I feel quite the opposite: from my perspective the majority of SW-backed PMDs will benefit from it.
And I don't see anything Intel-specific in my proposals above.
About the openssl PMD: I am not an expert here, but looking at the code, I think it will fit really well.
Look at its internal functions process_openssl_auth_op/process_openssl_crypto_op:
I think they do exactly the same - they use a sync API underneath, and they are session based
(AFAIK you don't need any device/queue data; everything needed for crypto/auth is stored inside the session).
Konstantin
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code Anatoly Burakov
2019-10-17 21:04 0% ` Carrillo, Erik G
@ 2019-10-21 13:24 3% ` Kevin Traynor
2019-10-24 9:07 4% ` Burakov, Anatoly
1 sibling, 1 reply; 200+ results
From: Kevin Traynor @ 2019-10-21 13:24 UTC (permalink / raw)
To: Anatoly Burakov, dev
Cc: Marcin Baran, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, bruce.richardson, thomas, david.marchand
On 17/10/2019 15:31, Anatoly Burakov wrote:
> From: Marcin Baran <marcinx.baran@intel.com>
>
> Remove code for old ABI versions ahead of ABI version bump.
>
I think there needs to be some doc updates for this.
Looking at http://doc.dpdk.org/guides/rel_notes/deprecation.html there
is nothing saying these functions are deprecated? (probably same issue
for other 'remove deprecated code' patches)
There should probably be an entry in the API/ABI changes section of the
release notes too.
> Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>
> Notes:
> v2:
> - Moved this to before ABI version bump to avoid compile breakage
>
> lib/librte_timer/rte_timer.c | 90 ++----------------------------------
> lib/librte_timer/rte_timer.h | 15 ------
> 2 files changed, 5 insertions(+), 100 deletions(-)
>
> diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c
> index bdcf05d06b..de6959b809 100644
> --- a/lib/librte_timer/rte_timer.c
> +++ b/lib/librte_timer/rte_timer.c
> @@ -68,9 +68,6 @@ static struct rte_timer_data *rte_timer_data_arr;
> static const uint32_t default_data_id;
> static uint32_t rte_timer_subsystem_initialized;
>
> -/* For maintaining older interfaces for a period */
> -static struct rte_timer_data default_timer_data;
> -
> /* when debug is enabled, store some statistics */
> #ifdef RTE_LIBRTE_TIMER_DEBUG
> #define __TIMER_STAT_ADD(priv_timer, name, n) do { \
> @@ -131,22 +128,6 @@ rte_timer_data_dealloc(uint32_t id)
> return 0;
> }
>
> -void
> -rte_timer_subsystem_init_v20(void)
> -{
> - unsigned lcore_id;
> - struct priv_timer *priv_timer = default_timer_data.priv_timer;
> -
> - /* since priv_timer is static, it's zeroed by default, so only init some
> - * fields.
> - */
> - for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id ++) {
> - rte_spinlock_init(&priv_timer[lcore_id].list_lock);
> - priv_timer[lcore_id].prev_lcore = lcore_id;
> - }
> -}
> -VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
> -
> /* Init the timer library. Allocate an array of timer data structs in shared
> * memory, and allocate the zeroth entry for use with original timer
> * APIs. Since the intersection of the sets of lcore ids in primary and
> @@ -154,7 +135,7 @@ VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
> * multiple processes.
> */
> int
> -rte_timer_subsystem_init_v1905(void)
> +rte_timer_subsystem_init(void)
> {
> const struct rte_memzone *mz;
> struct rte_timer_data *data;
> @@ -209,9 +190,6 @@ rte_timer_subsystem_init_v1905(void)
>
> return 0;
> }
> -MAP_STATIC_SYMBOL(int rte_timer_subsystem_init(void),
> - rte_timer_subsystem_init_v1905);
> -BIND_DEFAULT_SYMBOL(rte_timer_subsystem_init, _v1905, 19.05);
>
> void
> rte_timer_subsystem_finalize(void)
> @@ -552,42 +530,13 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,
>
> /* Reset and start the timer associated with the timer handle tim */
> int
> -rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
> - enum rte_timer_type type, unsigned int tim_lcore,
> - rte_timer_cb_t fct, void *arg)
> -{
> - uint64_t cur_time = rte_get_timer_cycles();
> - uint64_t period;
> -
> - if (unlikely((tim_lcore != (unsigned)LCORE_ID_ANY) &&
> - !(rte_lcore_is_enabled(tim_lcore) ||
> - rte_lcore_has_role(tim_lcore, ROLE_SERVICE))))
> - return -1;
> -
> - if (type == PERIODICAL)
> - period = ticks;
> - else
> - period = 0;
> -
> - return __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,
> - fct, arg, 0, &default_timer_data);
> -}
> -VERSION_SYMBOL(rte_timer_reset, _v20, 2.0);
> -
> -int
> -rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
> +rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
> enum rte_timer_type type, unsigned int tim_lcore,
> rte_timer_cb_t fct, void *arg)
> {
> return rte_timer_alt_reset(default_data_id, tim, ticks, type,
> tim_lcore, fct, arg);
> }
> -MAP_STATIC_SYMBOL(int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
> - enum rte_timer_type type,
> - unsigned int tim_lcore,
> - rte_timer_cb_t fct, void *arg),
> - rte_timer_reset_v1905);
> -BIND_DEFAULT_SYMBOL(rte_timer_reset, _v1905, 19.05);
>
> int
> rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
> @@ -658,20 +607,10 @@ __rte_timer_stop(struct rte_timer *tim, int local_is_locked,
>
> /* Stop the timer associated with the timer handle tim */
> int
> -rte_timer_stop_v20(struct rte_timer *tim)
> -{
> - return __rte_timer_stop(tim, 0, &default_timer_data);
> -}
> -VERSION_SYMBOL(rte_timer_stop, _v20, 2.0);
> -
> -int
> -rte_timer_stop_v1905(struct rte_timer *tim)
> +rte_timer_stop(struct rte_timer *tim)
> {
> return rte_timer_alt_stop(default_data_id, tim);
> }
> -MAP_STATIC_SYMBOL(int rte_timer_stop(struct rte_timer *tim),
> - rte_timer_stop_v1905);
> -BIND_DEFAULT_SYMBOL(rte_timer_stop, _v1905, 19.05);
>
> int
> rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
> @@ -817,15 +756,8 @@ __rte_timer_manage(struct rte_timer_data *timer_data)
> priv_timer[lcore_id].running_tim = NULL;
> }
>
> -void
> -rte_timer_manage_v20(void)
> -{
> - __rte_timer_manage(&default_timer_data);
> -}
> -VERSION_SYMBOL(rte_timer_manage, _v20, 2.0);
> -
> int
> -rte_timer_manage_v1905(void)
> +rte_timer_manage(void)
> {
> struct rte_timer_data *timer_data;
>
> @@ -835,8 +767,6 @@ rte_timer_manage_v1905(void)
>
> return 0;
> }
> -MAP_STATIC_SYMBOL(int rte_timer_manage(void), rte_timer_manage_v1905);
> -BIND_DEFAULT_SYMBOL(rte_timer_manage, _v1905, 19.05);
>
> int
> rte_timer_alt_manage(uint32_t timer_data_id,
> @@ -1074,21 +1004,11 @@ __rte_timer_dump_stats(struct rte_timer_data *timer_data __rte_unused, FILE *f)
> #endif
> }
>
> -void
> -rte_timer_dump_stats_v20(FILE *f)
> -{
> - __rte_timer_dump_stats(&default_timer_data, f);
> -}
> -VERSION_SYMBOL(rte_timer_dump_stats, _v20, 2.0);
> -
> int
> -rte_timer_dump_stats_v1905(FILE *f)
> +rte_timer_dump_stats(FILE *f)
> {
> return rte_timer_alt_dump_stats(default_data_id, f);
> }
> -MAP_STATIC_SYMBOL(int rte_timer_dump_stats(FILE *f),
> - rte_timer_dump_stats_v1905);
> -BIND_DEFAULT_SYMBOL(rte_timer_dump_stats, _v1905, 19.05);
>
> int
> rte_timer_alt_dump_stats(uint32_t timer_data_id __rte_unused, FILE *f)
> diff --git a/lib/librte_timer/rte_timer.h b/lib/librte_timer/rte_timer.h
> index 05d287d8f2..9dc5fc3092 100644
> --- a/lib/librte_timer/rte_timer.h
> +++ b/lib/librte_timer/rte_timer.h
> @@ -181,8 +181,6 @@ int rte_timer_data_dealloc(uint32_t id);
> * subsystem
> */
> int rte_timer_subsystem_init(void);
> -int rte_timer_subsystem_init_v1905(void);
> -void rte_timer_subsystem_init_v20(void);
>
> /**
> * @warning
> @@ -250,13 +248,6 @@ void rte_timer_init(struct rte_timer *tim);
> int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
> enum rte_timer_type type, unsigned tim_lcore,
> rte_timer_cb_t fct, void *arg);
> -int rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
> - enum rte_timer_type type, unsigned int tim_lcore,
> - rte_timer_cb_t fct, void *arg);
> -int rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
> - enum rte_timer_type type, unsigned int tim_lcore,
> - rte_timer_cb_t fct, void *arg);
> -
>
> /**
> * Loop until rte_timer_reset() succeeds.
> @@ -313,8 +304,6 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
> * - (-1): The timer is in the RUNNING or CONFIG state.
> */
> int rte_timer_stop(struct rte_timer *tim);
> -int rte_timer_stop_v1905(struct rte_timer *tim);
> -int rte_timer_stop_v20(struct rte_timer *tim);
>
> /**
> * Loop until rte_timer_stop() succeeds.
> @@ -358,8 +347,6 @@ int rte_timer_pending(struct rte_timer *tim);
> * - -EINVAL: timer subsystem not yet initialized
> */
> int rte_timer_manage(void);
> -int rte_timer_manage_v1905(void);
> -void rte_timer_manage_v20(void);
>
> /**
> * Dump statistics about timers.
> @@ -371,8 +358,6 @@ void rte_timer_manage_v20(void);
> * - -EINVAL: timer subsystem not yet initialized
> */
> int rte_timer_dump_stats(FILE *f);
> -int rte_timer_dump_stats_v1905(FILE *f);
> -void rte_timer_dump_stats_v20(FILE *f);
>
> /**
> * @warning
>
* Re: [dpdk-dev] [PATCH v6 0/4] doc: changes to abi policy introducing major abi versions
2019-10-21 9:50 5% ` [dpdk-dev] [PATCH v6 0/4] " Thomas Monjalon
@ 2019-10-21 10:10 10% ` Ray Kinsella
2019-10-21 14:38 5% ` Thomas Monjalon
0 siblings, 1 reply; 200+ results
From: Ray Kinsella @ 2019-10-21 10:10 UTC (permalink / raw)
To: Thomas Monjalon
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
On 21/10/2019 10:50, Thomas Monjalon wrote:
> 27/09/2019 18:54, Ray Kinsella:
>> TL;DR abbreviation:
>> A major ABI version that all DPDK releases during a one year period
>> support. ABI versioning is managed at a project-level, in place of library-level
>> management. ABI changes to add new features are permitted, as long as ABI
>> compatibility with the major ABI version is maintained.
>>
>> Detail:
>> This patch introduces major ABI versions, supported for one year and released
>> aligned with the LTS release. This ABI version is then supported by all
>> subsequent releases within that one year period. The intention is that the one
>> year support period, will then be reviewed after the initial year with the
>> intention of lengthening the support period for the next ABI version.
>
> For the record, I would prefer a v7 saying it is a fixed period of time,
> being one year at first and should be longer next.
> Please don't state "supported for one year", which can be understood as a general truth.
Well, I was very careful to only state an _intention_ to lengthen the fixed period;
I thought it prudent to avoid words like "should", as nothing is known until the year is behind us.
Where I used the word "support", I was talking about "ABI support".
I suggest rewording as follows:-
This patch introduces major ABI versions, released aligned with the LTS release and
maintained for one year through all subsequent releases within that period.
The intention is that the one-year ABI support period will then be reviewed after
the initial year, with a view to lengthening the period for the next ABI version.
>
>> ABI changes that preserve ABI compatibility with the major ABI version are
>> permitted in subsequent releases. ABI changes, follow similar approval rules as
>> before with the additional gate of now requiring technical board approval. The
>> merging and release of ABI breaking changes would now be pushed to the
>> declaration of the next major ABI version.
>>
>> This change encourages developers to maintain ABI compatibility with the major
>> ABI version, by promoting a permissive culture around those changes that
>> preserve ABI compatibility. This approach begins to align DPDK with those
>> projects that declare major ABI versions (e.g. version 2.x, 3.x) and support
>> those versions for some period, typically two years or more.
>>
>> To provide an example of how this might work in practice:
>>
>> * DPDK v20 is declared as the supported ABI version for one year, aligned with
>> the DPDK v19.11 (LTS) release. All library sonames are updated to reflect the
>> new ABI version, e.g. librte_eal.so.20, librte_acl.so.20...
>> * DPDK v20.02 .. v20.08 releases are ABI compatible with the DPDK v20 ABI. ABI
>> changes are permitted from DPDK v20.02 onwards, with the condition that ABI
>> compatibility with DPDK v20 is preserved.
>> * DPDK v21 is declared as the new supported ABI version for two years, aligned
>> with the DPDK v20.11 (LTS) release. The DPDK v20 ABI is now deprecated,
>> library sonames are updated to v21 and ABI compatibility breaking changes may
>> be introduced.
>
> OK I agree with these explanations.
>
>
* Re: [dpdk-dev] [PATCH v6 1/4] doc: separate versioning.rst into version and policy
@ 2019-10-21 9:53 0% ` Thomas Monjalon
2019-10-25 11:36 0% ` Ray Kinsella
0 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2019-10-21 9:53 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
27/09/2019 18:54, Ray Kinsella:
> Separate versioning.rst into abi versioning and abi policy guidance, in
> preparation for adding more detail to the abi policy.
>
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> --- /dev/null
> +++ b/doc/guides/contributing/abi_policy.rst
> @@ -0,0 +1,169 @@
> +.. SPDX-License-Identifier: BSD-3-Clause
> + Copyright 2018 The DPDK contributors
> +
> +.. abi_api_policy:
No need to add an anchor at the beginning of a file.
RsT syntax :doc: allows to refer to a .rst file.
* Re: [dpdk-dev] [PATCH v6 0/4] doc: changes to abi policy introducing major abi versions
@ 2019-10-21 9:50 5% ` Thomas Monjalon
2019-10-21 10:10 10% ` Ray Kinsella
2 siblings, 1 reply; 200+ results
From: Thomas Monjalon @ 2019-10-21 9:50 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, stephen, bruce.richardson, ferruh.yigit, konstantin.ananyev,
jerinj, olivier.matz, nhorman, maxime.coquelin, john.mcnamara,
marko.kovacevic, hemant.agrawal, ktraynor, aconole
27/09/2019 18:54, Ray Kinsella:
> TL;DR abbreviation:
> A major ABI version that all DPDK releases during a one year period
> support. ABI versioning is managed at a project-level, in place of library-level
> management. ABI changes to add new features are permitted, as long as ABI
> compatibility with the major ABI version is maintained.
>
> Detail:
> This patch introduces major ABI versions, supported for one year and released
> aligned with the LTS release. This ABI version is then supported by all
> subsequent releases within that one year period. The intention is that the one
> year support period, will then be reviewed after the initial year with the
> intention of lengthening the support period for the next ABI version.
For the record, I would prefer a v7 saying it is a fixed period of time,
being one year at first and should be longer next.
Please don't state "supported for one year", which can be understood as a general truth.
> ABI changes that preserve ABI compatibility with the major ABI version are
> permitted in subsequent releases. ABI changes, follow similar approval rules as
> before with the additional gate of now requiring technical board approval. The
> merging and release of ABI breaking changes would now be pushed to the
> declaration of the next major ABI version.
>
> This change encourages developers to maintain ABI compatibility with the major
> ABI version, by promoting a permissive culture around those changes that
> preserve ABI compatibility. This approach begins to align DPDK with those
> projects that declare major ABI versions (e.g. version 2.x, 3.x) and support
> those versions for some period, typically two years or more.
>
> To provide an example of how this might work in practice:
>
> * DPDK v20 is declared as the supported ABI version for one year, aligned with
> the DPDK v19.11 (LTS) release. All library sonames are updated to reflect the
> new ABI version, e.g. librte_eal.so.20, librte_acl.so.20...
> * DPDK v20.02 .. v20.08 releases are ABI compatible with the DPDK v20 ABI. ABI
> changes are permitted from DPDK v20.02 onwards, with the condition that ABI
> compatibility with DPDK v20 is preserved.
> * DPDK v21 is declared as the new supported ABI version for two years, aligned
> with the DPDK v20.11 (LTS) release. The DPDK v20 ABI is now deprecated,
> library sonames are updated to v21 and ABI compatibility breaking changes may
> be introduced.
OK I agree with these explanations.
* Re: [dpdk-dev] [PATCH v6 RESEND] eal: add tsc_hz to rte_mem_config
@ 2019-10-21 8:23 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-21 8:23 UTC (permalink / raw)
To: Jim Harris; +Cc: dev, Bruce Richardson, Burakov, Anatoly
On Tue, Oct 8, 2019 at 10:39 AM Bruce Richardson
<bruce.richardson@intel.com> wrote:
>
> On Mon, Oct 07, 2019 at 08:28:21AM -0700, Jim Harris wrote:
> > This ensures secondary processes never have to
> > calculate the TSC rate themselves, which can be
> > noticeable in VMs that don't have access to
> > arch-specific detection mechanism (such as
> > CPUID leaf 0x15 or MSR 0xCE on x86).
> >
> > Since rte_mem_config is now internal to the rte_eal
> > library, we can add tsc_hz without ABI breakage
> > concerns.
> >
> > Reduces rte_eal_init() execution time in a secondary
> > process from 165ms to 66ms on my test system.
> >
> > Signed-off-by: Jim Harris <james.r.harris@intel.com>
> > ---
> This seems a good idea to me.
>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
I wonder if we can get rid of eal_tsc_resolution_hz and just rely on
the shared memory to get/store this info.
Feel free to look at this later and send a followup patch :-).
Applied, thanks.
--
David Marchand
* [dpdk-dev] [PATCH v7 00/13] vhost packed ring performance optimization
2019-10-15 16:07 3% ` [dpdk-dev] [PATCH v6 " Marvin Liu
2019-10-17 7:31 0% ` Maxime Coquelin
@ 2019-10-21 15:40 3% ` Marvin Liu
2019-10-21 22:08 3% ` [dpdk-dev] [PATCH v8 " Marvin Liu
1 sibling, 1 reply; 200+ results
From: Marvin Liu @ 2019-10-21 15:40 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie, zhihong.wang, stephen, gavin.hu
Cc: dev, Marvin Liu
The packed ring has a more compact ring format and thus can significantly
reduce the number of cache misses, which leads to better performance.
This has been proven in the virtio user driver: on a standard E5 Xeon CPU,
single-core performance can rise by 12%.
http://mails.dpdk.org/archives/dev/2018-April/095470.html
However, vhost performance with the packed ring decreased.
Through analysis, most of the extra cost came from calculating each
descriptor flag, which depends on the ring wrap counter. Moreover, both
frontend and backend need to write the same descriptors, which causes
cache contention. In particular, during the vhost enqueue function, the
virtio packed ring refill function may write the same cache line that
vhost is writing. This kind of extra cache cost reduces the benefit
of the saved cache misses.
To optimize vhost packed ring performance, the vhost enqueue and dequeue
functions are split into fast and normal paths.
Several methods are applied in the fast path:
Handle descriptors in one cache line by batch.
Split loop functions into smaller pieces and unroll them.
Prerequisite check whether I/O data can be copied directly into mbuf
space and vice versa.
Prerequisite check whether descriptor mapping is successful.
Distinguish the vhost used ring update function for enqueue and
dequeue.
Buffer dequeued used descriptors as much as possible.
Update enqueue used descriptors by cache line.
After all these methods are applied, single-core vhost PvP performance with
64B packets on Xeon 8180 improves by 35%.
v7:
- Rebase code
- Rename unroll macro and definitions
- Calculate flags when doing single dequeue
v6:
- Fix dequeue zcopy result check
v5:
- Remove disable sw prefetch as performance impact is small
- Change unroll pragma macro format
- Rename shadow counter elements names
- Clean dequeue update check condition
- Add inline functions replace of duplicated code
- Unify code style
v4:
- Support meson build
- Remove memory region cache for no clear performance gain and ABI break
- Not assume ring size is power of two
v3:
- Check available index overflow
- Remove dequeue remained descs number check
- Remove changes in split ring datapath
- Call memory write barriers once when updating used flags
- Rename some functions and macros
- Code style optimization
v2:
- Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
- Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
- Optimize dequeue used ring update when in_order negotiated
Marvin Liu (13):
vhost: add packed ring indexes increasing function
vhost: add packed ring single enqueue
vhost: try to unroll for each loop
vhost: add packed ring batch enqueue
vhost: add packed ring single dequeue
vhost: add packed ring batch dequeue
vhost: flush enqueue updates by cacheline
vhost: flush batched enqueue descs directly
vhost: buffer packed ring dequeue updates
vhost: optimize packed ring enqueue
vhost: add packed ring zcopy batch and single dequeue
vhost: optimize packed ring dequeue
vhost: optimize packed ring dequeue when in-order
lib/librte_vhost/Makefile | 18 +
lib/librte_vhost/meson.build | 7 +
lib/librte_vhost/vhost.h | 57 ++
lib/librte_vhost/virtio_net.c | 945 +++++++++++++++++++++++++++-------
4 files changed, 834 insertions(+), 193 deletions(-)
--
2.17.1
* Re: [dpdk-dev] [RFC] ethdev: add new fields for max LRO session size
2019-10-18 16:35 0% ` Ferruh Yigit
@ 2019-10-18 18:05 0% ` Ananyev, Konstantin
2019-10-22 12:56 0% ` Andrew Rybchenko
1 sibling, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2019-10-18 18:05 UTC (permalink / raw)
To: Yigit, Ferruh, Thomas Monjalon, Matan Azrad
Cc: dev, Andrew Rybchenko, Olivier Matz
> -----Original Message-----
> From: Yigit, Ferruh
> Sent: Friday, October 18, 2019 5:36 PM
> To: Thomas Monjalon <thomas@monjalon.net>; Matan Azrad <matan@mellanox.com>
> Cc: dev@dpdk.org; Andrew Rybchenko <arybchenko@solarflare.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Olivier Matz
> <olivier.matz@6wind.com>
> Subject: Re: [RFC] ethdev: add new fields for max LRO session size
>
> On 10/2/2019 2:58 PM, Thomas Monjalon wrote:
> > 24/09/2019 14:03, Matan Azrad:
> >> From: Ferruh Yigit
> >>> On 9/15/2019 8:48 AM, Matan Azrad wrote:
> >>>> Hi Ferruh
> >>>>
> >>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
> >>>>> On 8/29/2019 8:47 AM, Matan Azrad wrote:
> >>>>>> It may be needed by the user to limit the LRO session packet size.
> >>>>>> In order to allow the above limitation, add new Rx configuration for
> >>>>>> the maximum LRO session size.
> >>>>>>
> >>>>>> In addition, Add a new capability to expose the maximum LRO session
> >>>>>> size supported by the port.
> >>>>>>
> >>>>>> Signed-off-by: Matan Azrad <matan@mellanox.com>
> >>>>>
> >>>>> Hi Matan,
> >>>>>
> >>>>> Is there any existing user of this new field?
> >>>>
> >>>> All the LRO users need it due to the next reasons:
> >>>>
> >>>> 1. If scatter is enabled - The dpdk user can limit the LRO session size created
> >>> by the HW by this field, if no field like that - there is no way to limit it.
> >>>> 2. No scatter - the dpdk user may want to limit the LRO packet size in order
> >>> to save enough tail-room in the mbuf for its own usage.
> >>>> 3. The limitation of max_rx_pkt_len is not enough - doesn't make sense to
> >>> limit LRO traffic as single packet.
> >>>>
> >>>
> >>> So should there be more complement patches to this RFC? To update the
> >>> users of the field with the new field.
> >>
> >>
> >> We already exposed it as ABI breakage in the last deprecation notice.
> >> We probably cannot complete it for 19.11 version, hopefully for 20.02 it will be completed.
> >
> > We won't break the ABI in 20.02.
> > What should be done in 19.11?
> >
>
> The ask was to add code that uses new added fields, this patch only adds new
> field to two public ethdev struct.
>
> @Thomas, @Andrew, if this patch doesn't goes it on this release it will have to
> wait a year. I would like to see the implementation but it is not there, what is
> your comment?
Just a side note, if I am not mistaken, there is a 6B gap in eth_rxmode:
struct rte_eth_rxmode {
/** The multi-queue packet distribution mode to be used, e.g. RSS. */
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if JUMBO_FRAME enabled. */
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/ <---- offset 8
/**
* Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
* Only offloads set on rx_offload_capa field on rte_eth_dev_info
* structure are allowed to be set.
*/
uint64_t offloads; <--- offset 16
};
So we can reserve these 6B and then reuse them for LRO, or whatever.
Maybe it would help somehow.
* Re: [dpdk-dev] [PATCH v2 2/3] vhost: call vDPA callback at the end of vring enable handler
@ 2019-10-18 16:54 0% ` Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2019-10-18 16:54 UTC (permalink / raw)
To: Tiwei Bie, Andy Pei
Cc: dev, rosen.xu, xiaolong.ye, xiao.w.wang, maxime.coquelin, zhihong.wang
On 9/23/2019 9:12 AM, Tiwei Bie wrote:
> On Tue, Sep 17, 2019 at 05:09:47PM +0800, Andy Pei wrote:
>> vDPA's set_vring_state callback would need to know the virtqueues'
>> enable status to configure the hardware.
>>
>> Signed-off-by: Xiaolong Ye <xiaolong.ye@intel.com>
>> Signed-off-by: Andy Pei <andy.pei@intel.com>
>> ---
>> v2:
>> add nr_active_vring as a parameter to ops function set_vring_state in
>> case of callback in set_vring_state() and avoid exposing new API.
>>
>> lib/librte_vhost/rte_vdpa.h | 4 ++--
>> lib/librte_vhost/vhost_user.c | 27 +++++++++++++++++++++++++--
>> 2 files changed, 27 insertions(+), 4 deletions(-)
>>
>> diff --git a/lib/librte_vhost/rte_vdpa.h b/lib/librte_vhost/rte_vdpa.h
>> index 9a3deb3..6e55d4d 100644
>> --- a/lib/librte_vhost/rte_vdpa.h
>> +++ b/lib/librte_vhost/rte_vdpa.h
>> @@ -54,8 +54,8 @@ struct rte_vdpa_dev_ops {
>> int (*dev_conf)(int vid);
>> int (*dev_close)(int vid);
>>
>> - /** Enable/disable this vring */
>> - int (*set_vring_state)(int vid, int vring, int state);
>> + /** Enable/disable vring queue pairs */
>> + int (*set_vring_state)(int vid, int nr_active_vring);
>
> We should avoid changing the API/ABI unless we have a very good
> justification.
>
> With the existing API, it should be easy to get the number of
> active rings by maintaining a bitmap or something similar in
> ifc driver.
>
> Besides, please keep other maintainers got from get-maintainer.sh
> in the Cc list as well.
>
updating patchset [1] as "Change Requested" based on above comment.
[1]
https://patches.dpdk.org/user/todo/dpdk/?series=6424&delegate=319&state=*
* Re: [dpdk-dev] [RFC] ethdev: add new fields for max LRO session size
@ 2019-10-18 16:35 0% ` Ferruh Yigit
2019-10-18 18:05 0% ` Ananyev, Konstantin
2019-10-22 12:56 0% ` Andrew Rybchenko
0 siblings, 2 replies; 200+ results
From: Ferruh Yigit @ 2019-10-18 16:35 UTC (permalink / raw)
To: Thomas Monjalon, Matan Azrad
Cc: dev, Andrew Rybchenko, Konstantin Ananyev, Olivier Matz
On 10/2/2019 2:58 PM, Thomas Monjalon wrote:
> 24/09/2019 14:03, Matan Azrad:
>> From: Ferruh Yigit
>>> On 9/15/2019 8:48 AM, Matan Azrad wrote:
>>>> Hi Ferruh
>>>>
>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>>> On 8/29/2019 8:47 AM, Matan Azrad wrote:
>>>>>> It may be needed by the user to limit the LRO session packet size.
>>>>>> In order to allow the above limitation, add new Rx configuration for
>>>>>> the maximum LRO session size.
>>>>>>
>>>>>> In addition, Add a new capability to expose the maximum LRO session
>>>>>> size supported by the port.
>>>>>>
>>>>>> Signed-off-by: Matan Azrad <matan@mellanox.com>
>>>>>
>>>>> Hi Matan,
>>>>>
>>>>> Is there any existing user of this new field?
>>>>
>>>> All the LRO users need it due to the next reasons:
>>>>
>>>> 1. If scatter is enabled - The dpdk user can limit the LRO session size created
>>> by the HW by this field, if no field like that - there is no way to limit it.
>>>> 2. No scatter - the dpdk user may want to limit the LRO packet size in order
>>> to save enough tail-room in the mbuf for its own usage.
>>>> 3. The limitation of max_rx_pkt_len is not enough - doesn't make sense to
>>> limit LRO traffic as single packet.
>>>>
>>>
>>> So should there be more complement patches to this RFC? To update the
>>> users of the field with the new field.
>>
>>
>> We already exposed it as ABI breakage in the last deprecation notice.
>> We probably cannot complete it for 19.11 version, hopefully for 20.02 it will be completed.
>
> We won't break the ABI in 20.02.
> What should be done in 19.11?
>
The ask was to add code that uses new added fields, this patch only adds new
field to two public ethdev struct.
@Thomas, @Andrew, if this patch doesn't goes it on this release it will have to
wait a year. I would like to see the implementation but it is not there, what is
your comment?
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-16 22:07 3% ` Ananyev, Konstantin
2019-10-17 12:49 0% ` Ananyev, Konstantin
@ 2019-10-18 13:17 4% ` Akhil Goyal
2019-10-21 13:47 4% ` Ananyev, Konstantin
1 sibling, 1 reply; 200+ results
From: Akhil Goyal @ 2019-10-18 13:17 UTC (permalink / raw)
To: Ananyev, Konstantin, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Zhang, Roy Fan, Doherty, Declan
Cc: 'Anoob Joseph', Hemant Agrawal
Hi Konstantin,
Added my comments inline with your draft.
>
>
> Hi Akhil,
>
> > > BTW, to be honest, I don't consider current rte_cryptodev_sym_session
> > > construct for multiple device_ids:
> > > __extension__ struct {
> > > void *data;
> > > uint16_t refcnt;
> > > } sess_data[0];
> > > /**< Driver specific session material, variable size */
> > >
> > Yes I also feel the same. I was also not in favor of this when it was introduced.
> > Please go ahead and remove this. I have no issues with that.
>
> If you are not happy with that structure, and admit there are issues with it,
> why do you push for reusing it for the cpu-crypto API?
> Why not take a step back, take into account the current drawbacks
> and define something that (hopefully) would suit us better?
> Again, the new API will be experimental for some time, so we'll
> have some opportunity to see whether it works and, if not, fix it.
[Akhil] This structure is serving a use case which was agreed upon in the
community; we cannot just remove a feature altogether. Rather, it is Intel's
use case only.
>
> About removing data[] from existing rte_cryptodev_sym_session -
> Personally would like to do that, but the change seems to be too massive.
> Definitely not ready for such effort right now.
>
[snip]..
>
> Ok, then my suggestion:
> Let's at least write down all points about crypto-dev approach where we
> disagree and then probably try to resolve them one by one....
> If we fail to make an agreement/progress in the next week or so
> (and there are no more reviews from the community),
> we will have to bring that subject to the TB meeting to decide.
> Sounds fair to you?
Agreed
>
> List is below.
> Please add/correct me, if I missed something.
>
> Konstantin
Before going into the comparison, we should define the requirement as well.
What I understood from the patchset:
"You need a synchronous API to perform crypto operations on raw data using SW PMDs"
So,
- no crypto-ops,
- no separate enq-deq, only a single process API for the data path
- no value addition needed to the session parameters
(You would need some parameters from the crypto-op which
are constant per session, and since you won't use a crypto-op,
you need some place to store them.)
Now as per your mail, the comparison
1. extra input parameters to create/init rte_(cpu)_sym_session.
Will leverage existing 6B gap inside rte_crypto_*_xform between 'algo' and 'key' fields.
New fields will be optional and would be used by PMD only when cpu-crypto session is requested.
For lksd-crypto session PMD is free to ignore these fields.
No ABI breakage is required.
[Akhil] Agreed, no issues.
2. cpu-crypto create/init.
a) Our suggestion - introduce new API for that:
- rte_crypto_cpu_sym_init() that would init completely opaque rte_crypto_cpu_sym_session.
- struct rte_crypto_cpu_sym_session_ops {(*process)(...); (*clear); /* whatever else we'll need */};
- rte_crypto_cpu_sym_get_ops(const struct rte_crypto_sym_xform *xforms)
that would return a const struct rte_crypto_cpu_sym_session_ops * based on the input xforms.
Advantages:
1) totally opaque data structure (no ABI breakages in future), the PMD writer is totally free
with its format and contents.
[Akhil] It will have breakage at some point, once we exceed the union size.
That said, I don't suspect there will be more parameters added.
Or do we really care about the ABI breakage when the argument is about
the correct place to add a piece of code, or do we really agree to add code
anywhere just to avoid that breakage?
2) each session entity is self-contained, user doesn't need to bring along dev_id etc.
dev_id is needed only at init stage, after that user will use session ops to perform
all operations on that session (process(), clear(), etc.).
[Akhil] There is no such concept as session ops in current DPDK. What you are proposing
is a new concept which doesn't have any extra benefit; rather, it is adding complexity
by having two different code paths for session create.
3) User can decide does he wants to store ops[] pointer on a per session basis,
or on a per group of same sessions, or...
[Akhil] Will the user really care which process API should be called from the PMD?
Rather, it should be the driver's responsibility to store that in the session private data,
which would be opaque to the user. As per my suggestion, the same process function can
be added to multiple sessions, or a single session can be managed inside the PMD.
4) No mandatory mempools for private sessions. User can allocate memory for cpu-crypto
session whenever he likes.
[Akhil] you mean session private data? You would need that memory anyways, user will be
allocating that already. You do not need to manage that.
Disadvantages:
5) Extra changes in control path
6) User has to store session_ops pointer explicitly.
[Akhil] More disadvantages:
- All supporting PMDs will need to maintain TWO types of session for the
same crypto processing. Suppose a fix or a new feature (or algo) is added; the PMD owner
will need to add code in both of the session create APIs. Hence more maintenance and
more error-prone code.
- Stacks which will be using these new APIs also need to maintain two
code paths for the same processing while doing session initialization
for sync and async.
b) Your suggestion - reuse existing rte_cryptodev_sym_session_init() and existing rte_cryptodev_sym_session
structure.
Advantages:
1) allows to reuse same struct and init/create/clear() functions.
Probably less changes in control path.
Disadvantages:
2) rte_cryptodev_sym_session. sess_data[] is indexed by driver_id, which means that
we can't use the same rte_cryptodev_sym_session to hold private sessions pointers
for both sync and async mode for the same device.
So the only option we have - make PMD devops->sym_session_configure()
always create a session that can work in both cpu and lksd modes.
For some implementations that would probably mean that under the hood PMD would create
2 different session structs (sync/async) and then use one or another depending on from what API been called.
Seems doable, but ...:
- will contradict with statement from 1:
" New fields will be optional and would be used by PMD only when cpu-crypto session is requested."
Now it becomes mandatory for all apps to specify cpu-crypto related parameters too,
even if they don't plan to use that mode - i.e. behavior change, existing app change.
- might cause extra space overhead.
[Akhil] It will not contradict #1; you will only have a few checks in the session init of PMDs
which support this mode, to find the appropriate values and set the appropriate process() in it.
The user should be able to call legacy enq-deq as well as the new process() without any issue,
and would be able to change the datapath at runtime.
So this is not a disadvantage, it would be additional flexibility for the user.
3) not possible to store device (not driver) specific data within the session, but I think it is not really needed right now.
So probably minor compared to 2.b.2.
[Akhil] So let's omit this for the current discussion. And I hope we can find some way to deal with it.
Actually #3 follows from #2, but decided to have them separated.
3. process() parameters/behavior
a) Our suggestion: user stores ptr to session ops (or to (*process) itself) and just does:
session_ops->process(sess, ...);
Advantages:
1) fastest possible execution path
2) no need to carry on dev_id for data-path
[Akhil] I don't see any overhead in carrying the dev id; at least it would be in line with the
current DPDK methodology.
What you are suggesting is a new way to get things done without much benefit.
Also I don't see any performance difference, as the crypto workload is heavier than
these code cycles, so that won't matter.
So IMO, there is no advantage in your suggestion either.
Disadvantages:
3) user has to carry on session_ops pointer explicitly
b) Your suggestion: add (*cpu_process) inside rte_cryptodev_ops and then:
rte_crypto_cpu_sym_process(uint8_t dev_id, rte_cryptodev_sym_session *sess, /*data parameters*/) {...
rte_cryptodevs[dev_id].dev_ops->cpu_process(ses, ...);
/*and then inside PMD specifc process: */
pmd_private_session = sess->sess_data[this_pmd_driver_id].data;
/* and then most likely either */
pmd_private_session->process(pmd_private_session, ...);
/* or jump based on session/input data */
Advantages:
1) don't see any...
Disadvantages:
2) User has to carry on dev_id inside data-path
3) Extra level of indirection (plus data dependency) - both for data and instructions.
Possible slowdown compared to a) (not measured).
Having said all this, if the disagreements cannot be resolved, you can go for a PMD-specific API
for your PMDs, because as per my understanding the solution doesn't look scalable to other PMDs.
Your approach is aligned only to Intel and will not benefit others like openssl, which is used by all
vendors.
Regards,
Akhil
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-18 9:47 0% ` Olivier Matz
@ 2019-10-18 11:24 0% ` Wang, Haiyue
0 siblings, 0 replies; 200+ results
From: Wang, Haiyue @ 2019-10-18 11:24 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Andrew Rybchenko, Richardson, Bruce,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Friday, October 18, 2019 17:48
> To: Wang, Haiyue <haiyue.wang@intel.com>
> Cc: dev@dpdk.org; Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> <thomas@monjalon.net>
> Subject: Re: [PATCH v2] mbuf: support dynamic fields and flags
>
> On Fri, Oct 18, 2019 at 08:28:02AM +0000, Wang, Haiyue wrote:
> > Hi Olivier,
> >
> > > -----Original Message-----
> > > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > > Sent: Friday, October 18, 2019 15:54
> > > To: Wang, Haiyue <haiyue.wang@intel.com>
> > > Cc: dev@dpdk.org; Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce
> > > <bruce.richardson@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > > <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> > > <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> > > <thomas@monjalon.net>
> > > Subject: Re: [PATCH v2] mbuf: support dynamic fields and flags
> > >
> > > Hi Haiyue,
> > >
> > > On Fri, Oct 18, 2019 at 02:47:50AM +0000, Wang, Haiyue wrote:
> > > > Hi Olivier
> > > >
> > > > > -----Original Message-----
> > > > > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > > > > Sent: Thursday, October 17, 2019 22:42
> > > > > To: dev@dpdk.org
> > > > > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce
> <bruce.richardson@intel.com>;
> > > Wang,
> > > > > Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > > > > <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> > > > > <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> > > > > <thomas@monjalon.net>
> > > > > Subject: [PATCH v2] mbuf: support dynamic fields and flags
> > > > >
> > > > > Many features require to store data inside the mbuf. As the room in mbuf
> > > > > structure is limited, it is not possible to have a field for each
> > > > > feature. Also, changing fields in the mbuf structure can break the API
> > > > > or ABI.
> > > > >
> > > > > This commit addresses these issues, by enabling the dynamic registration
> > > > > of fields or flags:
> > > > >
> > > > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > > > given size (>= 1 byte) and alignment constraint.
> > > > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > > > >
> > > > > The typical use case is a PMD that registers space for an offload
> > > > > feature, when the application requests to enable this feature. As
> > > > > the space in mbuf is limited, the space should only be reserved if it
> > > > > is going to be used (i.e when the application explicitly asks for it).
> > > > >
> > > > > The registration can be done at any moment, but it is not possible
> > > > > to unregister fields or flags for now.
> > > > >
> > > > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > > > ---
> > > > >
> > > > > v2
> > > > >
> > > > > * Rebase on top of master: solve conflict with Stephen's patchset
> > > > > (packet copy)
> > > > > * Add new apis to register a dynamic field/flag at a specific place
> > > > > * Add a dump function (sugg by David)
> > > > > * Enhance field registration function to select the best offset, keeping
> > > > > large aligned zones as much as possible (sugg by Konstantin)
> > > > > * Use a size_t and unsigned int instead of int when relevant
> > > > > (sugg by Konstantin)
> > > > > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > > > > (sugg by Konstantin)
> > > > > * Remove unused argument in private function (sugg by Konstantin)
> > > > > * Fix and simplify locking (sugg by Konstantin)
> > > > > * Fix minor typo
> > > > >
> > > > > rfc -> v1
> > > > >
> > > > > * Rebase on top of master
> > > > > * Change registration API to use a structure instead of
> > > > > variables, getting rid of #defines (Stephen's comment)
> > > > > * Update flag registration to use a similar API as fields.
> > > > > * Change max name length from 32 to 64 (sugg. by Thomas)
> > > > > * Enhance API documentation (Haiyue's and Andrew's comments)
> > > > > * Add a debug log at registration
> > > > > * Add some words in release note
> > > > > * Did some performance tests (sugg. by Andrew):
> > > > > On my platform, reading a dynamic field takes ~3 cycles more
> > > > > than a static field, and ~2 cycles more for writing.
> > > > >
> > > > > app/test/test_mbuf.c | 145 ++++++-
> > > > > doc/guides/rel_notes/release_19_11.rst | 7 +
> > > > > lib/librte_mbuf/Makefile | 2 +
> > > > > lib/librte_mbuf/meson.build | 6 +-
> > > > > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > > > > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > > > > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > > > > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > > > > 8 files changed, 959 insertions(+), 5 deletions(-)
> > > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> > > > >
> > > > > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > > > > index b9c2b2500..01cafad59 100644
> > > > > --- a/app/test/test_mbuf.c
> > > > > +++ b/app/test/test_mbuf.c
> > > > > @@ -28,6 +28,7 @@
> > > > > #include <rte_random.h>
> > > >
> > > > [snip]
> > > >
> > > > > +/**
> > > > > + * Helper macro to access to a dynamic field.
> > > > > + */
> > > > > +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> > > > > +
> > > >
> > > > The suggested macro is missed ? ;-)
> > > > /**
> > > > * Helper macro to access to a dynamic flag.
> > > > */
> > > > #define RTE_MBUF_DYNFLAG(offset) (1ULL << (offset))
> > >
> > > Yes, sorry.
> > >
> > > Thinking a bit more about it, I wonder if the macros below aren't
> > > more consistent with the dynamic field (because they take the mbuf
> > > as parameter)?
> > >
> > > #define RTE_MBUF_SET_DYNFLAG(m, bitnum, val) ...
> > > #define RTE_MBUF_GET_DYNFLAG(m, bitnum) ...
> > >
> > > They could even be static inline functions.
> > >
> > > On the other hand, these helpers would be generic to ol_flags, not only
> > > for dynamic flags. Today, we use (1ULL << bit) for ol_flags, which makes
> > > me wonder... is the macro really needed after all? :)
> > >
> >
> > I used as this:
> > 1). in PMD:
> > mb->ol_flags |= RTE_MBUF_DYNFLAG(ol_offset);
> >
> >
> > 2). In testpmd
> > if (mb->ol_flags & RTE_MBUF_DYNFLAG(ol_offset))
> > ...
> >
> > The above two macros look better in real use.
>
> I just looked at http://patchwork.dpdk.org/patch/60908/
> In the patch, a mask is used instead of a bit number, which is indeed
> better in terms of performance. This makes the macro not that useful,
> given there is a specific helper.
>
'a mask is used instead of a bit number' is good practice, yes, so there is no need for
this macro, thanks for sharing. ;-)
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v11 1/3] eal/arm64: add 128-bit atomic compare exchange
2019-10-15 11:38 2% ` [dpdk-dev] [PATCH v10 " Phil Yang
@ 2019-10-18 11:21 4% ` Phil Yang
0 siblings, 0 replies; 200+ results
From: Phil Yang @ 2019-10-18 11:21 UTC (permalink / raw)
To: david.marchand, jerinj, gage.eads, dev
Cc: thomas, hemant.agrawal, Honnappa.Nagarahalli, gavin.hu, nd
This patch adds the implementation of the 128-bit atomic compare
exchange API on AArch64, using paired 64-bit 'ldxp/stxp' instructions
to perform the operation. Moreover, on platforms accelerated with the
LSE atomic extension, it is implemented with 'casp' instructions for
better performance.
Since the '__ARM_FEATURE_ATOMICS' flag is only supported from GCC 9, this
patch adds a new config flag 'RTE_ARM_FEATURE_ATOMICS' to enable the
'cas' version on older compilers.
Since the x0 register is used directly in the code, and cas_op_name() and
rte_atomic128_cmp_exchange() are inline functions, depending on the parent
function's register usage they may corrupt the x0 register, i.e. break the arm64 ABI.
Define the CAS operations as rte_noinline functions to avoid the ABI
break[1].
[1]5b40ec6b9662 ("mempool/octeontx2: fix possible arm64 ABI break").
Suggested-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Tested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
v11:
1. Renamed cas_op_name by adding the data width tag 128.
2. Replaced __ldx/__stx inline functions with macros.
3. Illustrate the reason of define cas operations as non-inline
functions in commitlog.
v10:
1.Removed all the rte tag for internal functions.
2.Removed __MO_LOAD and _MO_STORE macros and keep define __HAS_ACQ
and __HAS_REL under non LSE conditional branch.
3.Undef the macro once it is unused.
4.Reword the 1/3 and 2/3 patches' commitlog more specific.
v9:
Updated 19.11 release note.
v8:
Fixed "WARNING:LONG_LINE: line over 80 characters" warnings with latest kernel
checkpatch.pl
v7:
1. Adjust code comment.
v6:
1. Put the RTE_ARM_FEATURE_ATOMICS flag into EAL group. (Jerin Jocob)
2. Keep rte_stack_lf_stubs.h doing nothing. (Gage Eads)
3. Fixed 32 bit build issue.
v5:
1. Enable RTE_ARM_FEATURE_ATOMICS on octeontx2 in default. (Jerin Jocob)
2. Record the reason of introducing "rte_stack_lf_stubs.h" in git
commit.
(Jerin, Jocob)
3. Fixed a conditional MACRO error in rte_atomic128_cmp_exchange. (Jerin
Jocob)
v4:
1. Add RTE_ARM_FEATURE_ATOMICS flag to support LSE CASP instructions.
(Jerin Jocob)
2. Fix possible arm64 ABI break by making casp_op_name noinline. (Jerin
Jocob)
3. Add rte_stack_lf_stubs.h to reduce the ifdef clutter. (Gage
Eads/Jerin Jocob)
v3:
1. Avoid duplication code with macro. (Jerin Jocob)
2. Make invalid memory order to strongest barrier. (Jerin Jocob)
3. Update doc/guides/prog_guide/env_abstraction_layer.rst. (Gage Eads)
4. Fix 32-bit x86 builds issue. (Gage Eads)
5. Correct documentation issues in UT. (Gage Eads)
v2:
Initial version.
config/arm/meson.build | 2 +
config/common_base | 3 +
config/defconfig_arm64-octeontx2-linuxapp-gcc | 1 +
config/defconfig_arm64-thunderx2-linuxapp-gcc | 1 +
.../common/include/arch/arm/rte_atomic_64.h | 151 +++++++++++++++++++++
.../common/include/arch/x86/rte_atomic_64.h | 12 --
lib/librte_eal/common/include/generic/rte_atomic.h | 17 ++-
7 files changed, 174 insertions(+), 13 deletions(-)
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 979018e..9f28271 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -71,11 +71,13 @@ flags_thunderx2_extra = [
['RTE_CACHE_LINE_SIZE', 64],
['RTE_MAX_NUMA_NODES', 2],
['RTE_MAX_LCORE', 256],
+ ['RTE_ARM_FEATURE_ATOMICS', true],
['RTE_USE_C11_MEM_MODEL', true]]
flags_octeontx2_extra = [
['RTE_MACHINE', '"octeontx2"'],
['RTE_MAX_NUMA_NODES', 1],
['RTE_MAX_LCORE', 24],
+ ['RTE_ARM_FEATURE_ATOMICS', true],
['RTE_EAL_IGB_UIO', false],
['RTE_USE_C11_MEM_MODEL', true]]
diff --git a/config/common_base b/config/common_base
index e843a21..a96beb9 100644
--- a/config/common_base
+++ b/config/common_base
@@ -82,6 +82,9 @@ CONFIG_RTE_MAX_LCORE=128
CONFIG_RTE_MAX_NUMA_NODES=8
CONFIG_RTE_MAX_HEAPS=32
CONFIG_RTE_MAX_MEMSEG_LISTS=64
+
+# Use ARM LSE ATOMIC instructions
+CONFIG_RTE_ARM_FEATURE_ATOMICS=n
# each memseg list will be limited to either RTE_MAX_MEMSEG_PER_LIST pages
# or RTE_MAX_MEM_MB_PER_LIST megabytes worth of memory, whichever is smaller
CONFIG_RTE_MAX_MEMSEG_PER_LIST=8192
diff --git a/config/defconfig_arm64-octeontx2-linuxapp-gcc b/config/defconfig_arm64-octeontx2-linuxapp-gcc
index f20da24..7687dbe 100644
--- a/config/defconfig_arm64-octeontx2-linuxapp-gcc
+++ b/config/defconfig_arm64-octeontx2-linuxapp-gcc
@@ -9,6 +9,7 @@ CONFIG_RTE_MACHINE="octeontx2"
CONFIG_RTE_CACHE_LINE_SIZE=128
CONFIG_RTE_MAX_NUMA_NODES=1
CONFIG_RTE_MAX_LCORE=24
+CONFIG_RTE_ARM_FEATURE_ATOMICS=y
# Doesn't support NUMA
CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
diff --git a/config/defconfig_arm64-thunderx2-linuxapp-gcc b/config/defconfig_arm64-thunderx2-linuxapp-gcc
index cc5c64b..af4a89c 100644
--- a/config/defconfig_arm64-thunderx2-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx2-linuxapp-gcc
@@ -9,3 +9,4 @@ CONFIG_RTE_MACHINE="thunderx2"
CONFIG_RTE_CACHE_LINE_SIZE=64
CONFIG_RTE_MAX_NUMA_NODES=2
CONFIG_RTE_MAX_LCORE=256
+CONFIG_RTE_ARM_FEATURE_ATOMICS=y
diff --git a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
index 97060e4..d9ebccc 100644
--- a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
+++ b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2015 Cavium, Inc
+ * Copyright(c) 2019 Arm Limited
*/
#ifndef _RTE_ATOMIC_ARM64_H_
@@ -14,6 +15,9 @@ extern "C" {
#endif
#include "generic/rte_atomic.h"
+#include <rte_branch_prediction.h>
+#include <rte_compat.h>
+#include <rte_debug.h>
#define dsb(opt) asm volatile("dsb " #opt : : : "memory")
#define dmb(opt) asm volatile("dmb " #opt : : : "memory")
@@ -40,6 +44,153 @@ extern "C" {
#define rte_cio_rmb() dmb(oshld)
+/*------------------------ 128 bit atomic operations -------------------------*/
+
+#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
+#define __ATOMIC128_CAS_OP(cas_op_name, op_string) \
+static __rte_noinline rte_int128_t \
+cas_op_name(rte_int128_t *dst, rte_int128_t old, \
+ rte_int128_t updated) \
+{ \
+ /* caspX instructions register pair must start from even-numbered
+ * register at operand 1.
+ * So, specify registers for local variables here.
+ */ \
+ register uint64_t x0 __asm("x0") = (uint64_t)old.val[0]; \
+ register uint64_t x1 __asm("x1") = (uint64_t)old.val[1]; \
+ register uint64_t x2 __asm("x2") = (uint64_t)updated.val[0]; \
+ register uint64_t x3 __asm("x3") = (uint64_t)updated.val[1]; \
+ asm volatile( \
+ op_string " %[old0], %[old1], %[upd0], %[upd1], [%[dst]]" \
+ : [old0] "+r" (x0), \
+ [old1] "+r" (x1) \
+ : [upd0] "r" (x2), \
+ [upd1] "r" (x3), \
+ [dst] "r" (dst) \
+ : "memory"); \
+ old.val[0] = x0; \
+ old.val[1] = x1; \
+ return old; \
+}
+
+__ATOMIC128_CAS_OP(__cas_128_relaxed, "casp")
+__ATOMIC128_CAS_OP(__cas_128_acquire, "caspa")
+__ATOMIC128_CAS_OP(__cas_128_release, "caspl")
+__ATOMIC128_CAS_OP(__cas_128_acq_rel, "caspal")
+
+#undef __ATOMIC128_CAS_OP
+
+#endif
+
+__rte_experimental
+static inline int
+rte_atomic128_cmp_exchange(rte_int128_t *dst,
+ rte_int128_t *exp,
+ const rte_int128_t *src,
+ unsigned int weak,
+ int success,
+ int failure)
+{
+ /* Always do strong CAS */
+ RTE_SET_USED(weak);
+ /* Ignore memory ordering for failure, memory order for
+ * success must be stronger or equal
+ */
+ RTE_SET_USED(failure);
+ /* Find invalid memory order */
+ RTE_ASSERT(success == __ATOMIC_RELAXED
+ || success == __ATOMIC_ACQUIRE
+ || success == __ATOMIC_RELEASE
+ || success == __ATOMIC_ACQ_REL
+ || success == __ATOMIC_SEQ_CST);
+
+ rte_int128_t expected = *exp;
+ rte_int128_t desired = *src;
+ rte_int128_t old;
+
+#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
+ if (success == __ATOMIC_RELAXED)
+ old = __cas_128_relaxed(dst, expected, desired);
+ else if (success == __ATOMIC_ACQUIRE)
+ old = __cas_128_acquire(dst, expected, desired);
+ else if (success == __ATOMIC_RELEASE)
+ old = __cas_128_release(dst, expected, desired);
+ else
+ old = __cas_128_acq_rel(dst, expected, desired);
+#else
+#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
+#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
+ (mo) == __ATOMIC_SEQ_CST)
+
+ int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
+ int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+
+#undef __HAS_ACQ
+#undef __HAS_RLS
+
+ uint32_t ret = 1;
+
+ /* ldx128 can not guarantee atomic,
+ * Must write back src or old to verify atomicity of ldx128;
+ */
+ do {
+
+#define __LOAD_128(op_string, src, dst) {\
+ asm volatile( \
+ op_string " %0, %1, %2" \
+ : "=&r" (dst.val[0]), \
+ "=&r" (dst.val[1]) \
+ : "Q" (src->val[0]) \
+ : "memory"); }
+
+ if (ldx_mo == __ATOMIC_RELAXED)
+ __LOAD_128("ldxp", dst, old)
+ else
+ __LOAD_128("ldaxp", dst, old)
+
+#undef __LOAD_128
+
+#define __STORE_128(op_string, dst, src, ret) {\
+ asm volatile( \
+ op_string " %w0, %1, %2, %3" \
+ : "=&r" (ret) \
+ : "r" (src.val[0]), \
+ "r" (src.val[1]), \
+ "Q" (dst->val[0]) \
+ : "memory"); }
+
+ if (likely(old.int128 == expected.int128)) {
+ if (stx_mo == __ATOMIC_RELAXED)
+ __STORE_128("stxp", dst, desired, ret)
+ else
+ __STORE_128("stlxp", dst, desired, ret)
+ } else {
+ /* In the failure case (since 'weak' is ignored and only
+ * weak == 0 is implemented), expected should contain
+ * the atomically read value of dst. This means, 'old'
+ * needs to be stored back to ensure it was read
+ * atomically.
+ */
+ if (stx_mo == __ATOMIC_RELAXED)
+ __STORE_128("stxp", dst, old, ret)
+ else
+ __STORE_128("stlxp", dst, old, ret)
+ }
+#undef __STORE_128
+
+ } while (unlikely(ret));
+#endif
+
+ /* Unconditionally updating expected removes
+ * an 'if' statement.
+ * expected should already be in register if
+ * not in the cache.
+ */
+ *exp = old;
+
+ return (old.int128 == expected.int128);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h b/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
index 1335d92..cfe7067 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
@@ -183,18 +183,6 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
/*------------------------ 128 bit atomic operations -------------------------*/
-/**
- * 128-bit integer structure.
- */
-RTE_STD_C11
-typedef struct {
- RTE_STD_C11
- union {
- uint64_t val[2];
- __extension__ __int128 int128;
- };
-} __rte_aligned(16) rte_int128_t;
-
__rte_experimental
static inline int
rte_atomic128_cmp_exchange(rte_int128_t *dst,
diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
index 24ff7dc..e6ab15a 100644
--- a/lib/librte_eal/common/include/generic/rte_atomic.h
+++ b/lib/librte_eal/common/include/generic/rte_atomic.h
@@ -1081,6 +1081,20 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
/*------------------------ 128 bit atomic operations -------------------------*/
+/**
+ * 128-bit integer structure.
+ */
+RTE_STD_C11
+typedef struct {
+ RTE_STD_C11
+ union {
+ uint64_t val[2];
+#ifdef RTE_ARCH_64
+ __extension__ __int128 int128;
+#endif
+ };
+} __rte_aligned(16) rte_int128_t;
+
#ifdef __DOXYGEN__
/**
@@ -1093,7 +1107,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
* *exp = *dst
* @endcode
*
- * @note This function is currently only available for the x86-64 platform.
+ * @note This function is currently available for the x86-64 and aarch64
+ * platforms.
*
* @note The success and failure arguments must be one of the __ATOMIC_* values
* defined in the C++11 standard. For details on their behavior, refer to the
--
2.7.4
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global
2019-10-17 14:09 8% ` Luca Boccassi
2019-10-17 14:12 4% ` Bruce Richardson
@ 2019-10-18 10:07 7% ` Kevin Traynor
1 sibling, 0 replies; 200+ results
From: Kevin Traynor @ 2019-10-18 10:07 UTC (permalink / raw)
To: Luca Boccassi, Bruce Richardson, Anatoly Burakov,
Christian Ehrhardt, Timothy Redaelli
Cc: dev, Marcin Baran, Thomas Monjalon, john.mcnamara,
david.marchand, Pawel Modrak
On 17/10/2019 15:09, Luca Boccassi wrote:
> On Thu, 2019-10-17 at 09:44 +0100, Bruce Richardson wrote:
>> On Wed, Oct 16, 2019 at 06:03:36PM +0100, Anatoly Burakov wrote:
>>> From: Marcin Baran <
>>> marcinx.baran@intel.com
>>>>
>>>
>>> As per new ABI policy, all of the libraries are now versioned using
>>> one global ABI version. Changes in this patch implement the
>>> necessary steps to enable that.
>>>
>>> Signed-off-by: Marcin Baran <
>>> marcinx.baran@intel.com
>>>>
>>> Signed-off-by: Pawel Modrak <
>>> pawelx.modrak@intel.com
>>>>
>>> Signed-off-by: Anatoly Burakov <
>>> anatoly.burakov@intel.com
>>>>
>>> ---
>>>
>>> Notes:
>>> v3:
>>> - Removed Windows support from Makefile changes
>>> - Removed unneeded path conversions from meson files
>>>
>>> buildtools/meson.build | 2 ++
>>> config/ABI_VERSION | 1 +
>>> config/meson.build | 5 +++--
>>> drivers/meson.build | 20 ++++++++++++--------
>>> lib/meson.build | 18 +++++++++++-------
>>> meson_options.txt | 2 --
>>> mk/rte.lib.mk | 13 ++++---------
>>> 7 files changed, 33 insertions(+), 28 deletions(-)
>>> create mode 100644 config/ABI_VERSION
>>>
>>> diff --git a/buildtools/meson.build b/buildtools/meson.build
>>> index 32c79c1308..78ce69977d 100644
>>> --- a/buildtools/meson.build
>>> +++ b/buildtools/meson.build
>>> @@ -12,3 +12,5 @@ if python3.found()
>>> else
>>> map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
>>> endif
>>> +
>>> +is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
>>> diff --git a/config/ABI_VERSION b/config/ABI_VERSION
>>> new file mode 100644
>>> index 0000000000..9a7c1e503f
>>> --- /dev/null
>>> +++ b/config/ABI_VERSION
>>> @@ -0,0 +1 @@
>>> +20.0
>>> diff --git a/config/meson.build b/config/meson.build
>>> index a27f731f85..3cfc02406c 100644
>>> --- a/config/meson.build
>>> +++ b/config/meson.build
>>> @@ -17,7 +17,8 @@ endforeach
>>> # set the major version, which might be used by drivers and
>>> libraries
>>> # depending on the configuration options
>>> pver = meson.project_version().split('.')
>>> -major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
>>> +abi_version = run_command(find_program('cat', 'more'),
>>> + files('ABI_VERSION')).stdout().strip()
>>>
>>> # extract all version information into the build configuration
>>> dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
>>> @@ -37,7 +38,7 @@ endif
>>>
>>> pmd_subdir_opt = get_option('drivers_install_subdir')
>>> if pmd_subdir_opt.contains('<VERSION>')
>>> - pmd_subdir_opt =
>>> major_version.join(pmd_subdir_opt.split('<VERSION>'))
>>> + pmd_subdir_opt =
>>> abi_version.join(pmd_subdir_opt.split('<VERSION>'))
>>> endif
>>
>> This is an interesting change, and I'm not sure about it. I think for
>> user-visible changes, version should still refer to DPDK version
>> rather
>> than ABI version. Even with a stable ABI, it makes more sense to me
>> to find
>> the drivers in a 19.11 directory than a 20.0 one. Then again, the
>> drivers
>> should be re-usable across the one ABI version, so perhaps this is
>> the best
>> approach.
>>
>> Thoughts from others? Luca or Kevin, any thoughts from a packagers
>> perspective?
>>
>> /Bruce
>
> Hi,
>
> We are currently assembling this path using the ABI version in
> Debian/Ubuntu, as we want same-ABI libraries not to be co-installed,
> but instead to use the exact same name/path. So from our POV this
> change seems right.
>
Seems ok to me as it's consistent with having the libs from different
releases using one ABI version. Would like to check with Timothy too..
+ Timothy
^ permalink raw reply [relevance 7%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-18 8:28 0% ` Wang, Haiyue
@ 2019-10-18 9:47 0% ` Olivier Matz
2019-10-18 11:24 0% ` Wang, Haiyue
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2019-10-18 9:47 UTC (permalink / raw)
To: Wang, Haiyue
Cc: dev, Andrew Rybchenko, Richardson, Bruce,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
On Fri, Oct 18, 2019 at 08:28:02AM +0000, Wang, Haiyue wrote:
> Hi Olivier,
>
> > -----Original Message-----
> > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > Sent: Friday, October 18, 2019 15:54
> > To: Wang, Haiyue <haiyue.wang@intel.com>
> > Cc: dev@dpdk.org; Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> > <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> > <thomas@monjalon.net>
> > Subject: Re: [PATCH v2] mbuf: support dynamic fields and flags
> >
> > Hi Haiyue,
> >
> > On Fri, Oct 18, 2019 at 02:47:50AM +0000, Wang, Haiyue wrote:
> > > Hi Olivier
> > >
> > > > -----Original Message-----
> > > > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > > > Sent: Thursday, October 17, 2019 22:42
> > > > To: dev@dpdk.org
> > > > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> > Wang,
> > > > Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > > > <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> > > > <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> > > > <thomas@monjalon.net>
> > > > Subject: [PATCH v2] mbuf: support dynamic fields and flags
> > > >
> > > > Many features require to store data inside the mbuf. As the room in mbuf
> > > > structure is limited, it is not possible to have a field for each
> > > > feature. Also, changing fields in the mbuf structure can break the API
> > > > or ABI.
> > > >
> > > > This commit addresses these issues, by enabling the dynamic registration
> > > > of fields or flags:
> > > >
> > > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > > given size (>= 1 byte) and alignment constraint.
> > > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > > >
> > > > The typical use case is a PMD that registers space for an offload
> > > > feature, when the application requests to enable this feature. As
> > > > the space in mbuf is limited, the space should only be reserved if it
> > > > is going to be used (i.e when the application explicitly asks for it).
> > > >
> > > > The registration can be done at any moment, but it is not possible
> > > > to unregister fields or flags for now.
> > > >
> > > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > > ---
> > > >
> > > > v2
> > > >
> > > > * Rebase on top of master: solve conflict with Stephen's patchset
> > > > (packet copy)
> > > > * Add new apis to register a dynamic field/flag at a specific place
> > > > * Add a dump function (sugg by David)
> > > > * Enhance field registration function to select the best offset, keeping
> > > > large aligned zones as much as possible (sugg by Konstantin)
> > > > * Use a size_t and unsigned int instead of int when relevant
> > > > (sugg by Konstantin)
> > > > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > > > (sugg by Konstantin)
> > > > * Remove unused argument in private function (sugg by Konstantin)
> > > > * Fix and simplify locking (sugg by Konstantin)
> > > > * Fix minor typo
> > > >
> > > > rfc -> v1
> > > >
> > > > * Rebase on top of master
> > > > * Change registration API to use a structure instead of
> > > > variables, getting rid of #defines (Stephen's comment)
> > > > * Update flag registration to use a similar API as fields.
> > > > * Change max name length from 32 to 64 (sugg. by Thomas)
> > > > * Enhance API documentation (Haiyue's and Andrew's comments)
> > > > * Add a debug log at registration
> > > > * Add some words in release note
> > > > * Did some performance tests (sugg. by Andrew):
> > > > On my platform, reading a dynamic field takes ~3 cycles more
> > > > than a static field, and ~2 cycles more for writing.
> > > >
> > > > app/test/test_mbuf.c | 145 ++++++-
> > > > doc/guides/rel_notes/release_19_11.rst | 7 +
> > > > lib/librte_mbuf/Makefile | 2 +
> > > > lib/librte_mbuf/meson.build | 6 +-
> > > > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > > > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > > > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > > > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > > > 8 files changed, 959 insertions(+), 5 deletions(-)
> > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> > > >
> > > > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > > > index b9c2b2500..01cafad59 100644
> > > > --- a/app/test/test_mbuf.c
> > > > +++ b/app/test/test_mbuf.c
> > > > @@ -28,6 +28,7 @@
> > > > #include <rte_random.h>
> > >
> > > [snip]
> > >
> > > > +/**
> > > > + * Helper macro to access to a dynamic field.
> > > > + */
> > > > +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> > > > +
> > >
> > > The suggested macro is missed ? ;-)
> > > /**
> > > * Helper macro to access to a dynamic flag.
> > > */
> > > #define RTE_MBUF_DYNFLAG(offset) (1ULL << (offset))
> >
> > Yes, sorry.
> >
> > Thinking a bit more about it, I wonder if the macros below aren't
> > more consistent with the dynamic field (because they take the mbuf
> > as parameter)?
> >
> > #define RTE_MBUF_SET_DYNFLAG(m, bitnum, val) ...
> > #define RTE_MBUF_GET_DYNFLAG(m, bitnum) ...
> >
> > They could even be static inline functions.
> >
> > On the other hand, these helpers would be generic to ol_flags, not only
> > for dynamic flags. Today, we use (1ULL << bit) for ol_flags, which makes
> > me wonder... is the macro really needed after all? :)
> >
>
> I use it like this:
> 1). in PMD:
> mb->ol_flags |= RTE_MBUF_DYNFLAG(ol_offset);
>
>
> 2). In testpmd
> if (mb->ol_flags & RTE_MBUF_DYNFLAG(ol_offset))
> ...
>
> The above two macros look better in real use.
I just looked at http://patchwork.dpdk.org/patch/60908/
In the patch, a mask is used instead of a bit number, which is indeed
better in terms of performance. This makes the macro not that useful,
given there is a specific helper.
> > > BTW, should we have a place to put the registered dynamic fields and flags
> > > names together (a name overview -- detail Link to --> PMD's help page) ?
> >
> > The centralized place will be in rte_mbuf_dyn.h for fields/flags that
> > are shared between several dpdk areas. Some libraries/pmd could have private
> > dynamic fields/flags. In any case, I think the same namespace as functions
> > should be used. Probably something like this:
> > - "rte_mbuf_dynfield_<name>" in mbuf lib
> > - "rte_<libname>_dynfield_<name>" in other libs
> > - "rte_net_<pmd>_dynfield_<name>" in pmds
> > - "<name>" in apps
> >
> > > Since rte_mbuf_dynfield:name & rte_mbuf_dynflag:name work as an API style,
> > > users can check how many 'names' are registered, and developers can check
> > > whether the names they want to use are registered or not. They don't have
> > > to check rte_errno ... Just a suggestion for user experience.
> >
> > I did not get your point. Does my response above answer your question?
> >
>
> Yes, the naming convention you mentioned above is a good practice, so no doc
> is needed any more, thanks!
>
> > Regards,
> > Olivier
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-18 7:53 0% ` Olivier Matz
@ 2019-10-18 8:28 0% ` Wang, Haiyue
2019-10-18 9:47 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Wang, Haiyue @ 2019-10-18 8:28 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Andrew Rybchenko, Richardson, Bruce,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
Hi Olivier,
> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Friday, October 18, 2019 15:54
> To: Wang, Haiyue <haiyue.wang@intel.com>
> Cc: dev@dpdk.org; Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> <thomas@monjalon.net>
> Subject: Re: [PATCH v2] mbuf: support dynamic fields and flags
>
> Hi Haiyue,
>
> On Fri, Oct 18, 2019 at 02:47:50AM +0000, Wang, Haiyue wrote:
> > Hi Olivier
> >
> > > -----Original Message-----
> > > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > > Sent: Thursday, October 17, 2019 22:42
> > > To: dev@dpdk.org
> > > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce <bruce.richardson@intel.com>;
> Wang,
> > > Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > > <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> > > <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> > > <thomas@monjalon.net>
> > > Subject: [PATCH v2] mbuf: support dynamic fields and flags
> > >
> > > Many features require to store data inside the mbuf. As the room in mbuf
> > > structure is limited, it is not possible to have a field for each
> > > feature. Also, changing fields in the mbuf structure can break the API
> > > or ABI.
> > >
> > > This commit addresses these issues, by enabling the dynamic registration
> > > of fields or flags:
> > >
> > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > given size (>= 1 byte) and alignment constraint.
> > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > >
> > > The typical use case is a PMD that registers space for an offload
> > > feature, when the application requests to enable this feature. As
> > > the space in mbuf is limited, the space should only be reserved if it
> > > is going to be used (i.e when the application explicitly asks for it).
> > >
> > > The registration can be done at any moment, but it is not possible
> > > to unregister fields or flags for now.
> > >
> > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > ---
> > >
> > > v2
> > >
> > > * Rebase on top of master: solve conflict with Stephen's patchset
> > > (packet copy)
> > > * Add new apis to register a dynamic field/flag at a specific place
> > > * Add a dump function (sugg by David)
> > > * Enhance field registration function to select the best offset, keeping
> > > large aligned zones as much as possible (sugg by Konstantin)
> > > * Use a size_t and unsigned int instead of int when relevant
> > > (sugg by Konstantin)
> > > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > > (sugg by Konstantin)
> > > * Remove unused argument in private function (sugg by Konstantin)
> > > * Fix and simplify locking (sugg by Konstantin)
> > > * Fix minor typo
> > >
> > > rfc -> v1
> > >
> > > * Rebase on top of master
> > > * Change registration API to use a structure instead of
> > > variables, getting rid of #defines (Stephen's comment)
> > > * Update flag registration to use a similar API as fields.
> > > * Change max name length from 32 to 64 (sugg. by Thomas)
> > > * Enhance API documentation (Haiyue's and Andrew's comments)
> > > * Add a debug log at registration
> > > * Add some words in release note
> > > * Did some performance tests (sugg. by Andrew):
> > > On my platform, reading a dynamic field takes ~3 cycles more
> > > than a static field, and ~2 cycles more for writing.
> > >
> > > app/test/test_mbuf.c | 145 ++++++-
> > > doc/guides/rel_notes/release_19_11.rst | 7 +
> > > lib/librte_mbuf/Makefile | 2 +
> > > lib/librte_mbuf/meson.build | 6 +-
> > > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > > 8 files changed, 959 insertions(+), 5 deletions(-)
> > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> > >
> > > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > > index b9c2b2500..01cafad59 100644
> > > --- a/app/test/test_mbuf.c
> > > +++ b/app/test/test_mbuf.c
> > > @@ -28,6 +28,7 @@
> > > #include <rte_random.h>
> >
> > [snip]
> >
> > > +/**
> > > + * Helper macro to access to a dynamic field.
> > > + */
> > > +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> > > +
> >
> > The suggested macro is missed ? ;-)
> > /**
> > * Helper macro to access to a dynamic flag.
> > */
> > #define RTE_MBUF_DYNFLAG(offset) (1ULL << (offset))
>
> Yes, sorry.
>
> Thinking a bit more about it, I wonder if the macros below aren't
> more consistent with the dynamic field (because they take the mbuf
> as parameter)?
>
> #define RTE_MBUF_SET_DYNFLAG(m, bitnum, val) ...
> #define RTE_MBUF_GET_DYNFLAG(m, bitnum) ...
>
> They could even be static inline functions.
>
> On the other hand, these helpers would be generic to ol_flags, not only
> for dynamic flags. Today, we use (1ULL << bit) for ol_flags, which makes
> me wonder... is the macro really needed after all? :)
>
I use it like this:
1). in PMD:
mb->ol_flags |= RTE_MBUF_DYNFLAG(ol_offset);
2). In testpmd
if (mb->ol_flags & RTE_MBUF_DYNFLAG(ol_offset))
...
The above two macros look better in real use.
> > BTW, should we have a place to put the registered dynamic fields and flags
> > names together (a name overview -- detail Link to --> PMD's help page) ?
>
> The centralized place will be in rte_mbuf_dyn.h for fields/flags that
> are shared between several dpdk areas. Some libraries/pmd could have private
> dynamic fields/flags. In any case, I think the same namespace as functions
> should be used. Probably something like this:
> - "rte_mbuf_dynfield_<name>" in mbuf lib
> - "rte_<libname>_dynfield_<name>" in other libs
> - "rte_net_<pmd>_dynfield_<name>" in pmds
> - "<name>" in apps
>
> Since rte_mbuf_dynfield:name & rte_mbuf_dynflag:name work as an API style,
> users can check how many 'names' are registered, and developers can check
> whether the names they want to use are registered or not. They don't have
> to check rte_errno ... Just a suggestion for user experience.
>
> I did not get your point. Does my response above answer your question?
>
Yes, the naming convention you mentioned above is a good practice, so no doc
is needed any more, thanks!
> Regards,
> Olivier
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-18 2:47 0% ` Wang, Haiyue
@ 2019-10-18 7:53 0% ` Olivier Matz
2019-10-18 8:28 0% ` Wang, Haiyue
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2019-10-18 7:53 UTC (permalink / raw)
To: Wang, Haiyue
Cc: dev, Andrew Rybchenko, Richardson, Bruce,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
Hi Haiyue,
On Fri, Oct 18, 2019 at 02:47:50AM +0000, Wang, Haiyue wrote:
> Hi Olivier
>
> > -----Original Message-----
> > From: Olivier Matz [mailto:olivier.matz@6wind.com]
> > Sent: Thursday, October 17, 2019 22:42
> > To: dev@dpdk.org
> > Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce <bruce.richardson@intel.com>; Wang,
> > Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> > <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> > <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> > <thomas@monjalon.net>
> > Subject: [PATCH v2] mbuf: support dynamic fields and flags
> >
> > Many features require to store data inside the mbuf. As the room in mbuf
> > structure is limited, it is not possible to have a field for each
> > feature. Also, changing fields in the mbuf structure can break the API
> > or ABI.
> >
> > This commit addresses these issues, by enabling the dynamic registration
> > of fields or flags:
> >
> > - a dynamic field is a named area in the rte_mbuf structure, with a
> > given size (>= 1 byte) and alignment constraint.
> > - a dynamic flag is a named bit in the rte_mbuf structure.
> >
> > The typical use case is a PMD that registers space for an offload
> > feature, when the application requests to enable this feature. As
> > the space in mbuf is limited, the space should only be reserved if it
> > is going to be used (i.e when the application explicitly asks for it).
> >
> > The registration can be done at any moment, but it is not possible
> > to unregister fields or flags for now.
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
> >
> > v2
> >
> > * Rebase on top of master: solve conflict with Stephen's patchset
> > (packet copy)
> > * Add new apis to register a dynamic field/flag at a specific place
> > * Add a dump function (sugg by David)
> > * Enhance field registration function to select the best offset, keeping
> > large aligned zones as much as possible (sugg by Konstantin)
> > * Use a size_t and unsigned int instead of int when relevant
> > (sugg by Konstantin)
> > * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> > (sugg by Konstantin)
> > * Remove unused argument in private function (sugg by Konstantin)
> > * Fix and simplify locking (sugg by Konstantin)
> > * Fix minor typo
> >
> > rfc -> v1
> >
> > * Rebase on top of master
> > * Change registration API to use a structure instead of
> > variables, getting rid of #defines (Stephen's comment)
> > * Update flag registration to use a similar API as fields.
> > * Change max name length from 32 to 64 (sugg. by Thomas)
> > * Enhance API documentation (Haiyue's and Andrew's comments)
> > * Add a debug log at registration
> > * Add some words in release note
> > * Did some performance tests (sugg. by Andrew):
> > On my platform, reading a dynamic field takes ~3 cycles more
> > than a static field, and ~2 cycles more for writing.
> >
> > app/test/test_mbuf.c | 145 ++++++-
> > doc/guides/rel_notes/release_19_11.rst | 7 +
> > lib/librte_mbuf/Makefile | 2 +
> > lib/librte_mbuf/meson.build | 6 +-
> > lib/librte_mbuf/rte_mbuf.h | 23 +-
> > lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> > lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> > lib/librte_mbuf/rte_mbuf_version.map | 7 +
> > 8 files changed, 959 insertions(+), 5 deletions(-)
> > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> >
> > diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> > index b9c2b2500..01cafad59 100644
> > --- a/app/test/test_mbuf.c
> > +++ b/app/test/test_mbuf.c
> > @@ -28,6 +28,7 @@
> > #include <rte_random.h>
>
> [snip]
>
> > +/**
> > + * Helper macro to access to a dynamic field.
> > + */
> > +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> > +
>
> The suggested macro is missed ? ;-)
> /**
> * Helper macro to access to a dynamic flag.
> */
> #define RTE_MBUF_DYNFLAG(offset) (1ULL << (offset))
Yes, sorry.
Thinking a bit more about it, I wonder if the macros below aren't
more consistent with the dynamic field (because they take the mbuf
as parameter)?
#define RTE_MBUF_SET_DYNFLAG(m, bitnum, val) ...
#define RTE_MBUF_GET_DYNFLAG(m, bitnum) ...
They could even be static inline functions.
On the other hand, these helpers would be generic to ol_flags, not only
for dynamic flags. Today, we use (1ULL << bit) for ol_flags, which makes
me wonder... is the macro really needed after all? :)
> BTW, should we have a place to put the registered dynamic fields and flags
> names together (a name overview -- detail Link to --> PMD's help page) ?
The centralized place will be in rte_mbuf_dyn.h for fields/flags that
are shared between several dpdk areas. Some libraries/pmd could have private
dynamic fields/flags. In any case, I think the same namespace as functions
should be used. Probably something like this:
- "rte_mbuf_dynfield_<name>" in mbuf lib
- "rte_<libname>_dynfield_<name>" in other libs
- "rte_net_<pmd>_dynfield_<name>" in pmds
- "<name>" in apps
> Since rte_mbuf_dynfield:name & rte_mbuf_dynflag:name work as an API style,
> users can check how many 'names' are registered, and developers can check
> whether the names they want to use are registered or not. They don't have
> to check rte_errno ... Just a suggestion for user experience.
I did not get your point. Does my response above answer your question?
Regards,
Olivier
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/3] lib/lpm: integrate RCU QSBR
2019-10-15 11:15 0% ` Ananyev, Konstantin
@ 2019-10-18 3:32 0% ` Honnappa Nagarahalli
0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2019-10-18 3:32 UTC (permalink / raw)
To: Ananyev, Konstantin, Richardson, Bruce, Medvedkin, Vladimir,
olivier.matz
Cc: dev, stephen, paulmck, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Ruifeng Wang (Arm Technology China),
Honnappa Nagarahalli, nd, nd
<snip>
> >
> > > Hi guys,
> > I have tried to consolidate design related questions here. If I have missed
> anything, please add.
> >
> > >
> > > >
> > > > From: Ruifeng Wang <ruifeng.wang@arm.com>
> > > >
> > > > Currently, the tbl8 group is freed even though the readers might
> > > > be using the tbl8 group entries. The freed tbl8 group can be
> > > > reallocated quickly. This results in incorrect lookup results.
> > > >
> > > > RCU QSBR process is integrated for safe tbl8 group reclaim.
> > > > Refer to RCU documentation to understand various aspects of
> > > > integrating RCU library into other libraries.
> > > >
> > > > Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > > ---
> > > > lib/librte_lpm/Makefile | 3 +-
> > > > lib/librte_lpm/meson.build | 2 +
> > > > lib/librte_lpm/rte_lpm.c | 102 +++++++++++++++++++++++++----
> > > > lib/librte_lpm/rte_lpm.h | 21 ++++++
> > > > lib/librte_lpm/rte_lpm_version.map | 6 ++
> > > > 5 files changed, 122 insertions(+), 12 deletions(-)
> > > >
> > > > diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile
> > > > index
> > > > a7946a1c5..ca9e16312 100644
> > > > --- a/lib/librte_lpm/Makefile
> > > > +++ b/lib/librte_lpm/Makefile
> > > > @@ -6,9 +6,10 @@ include $(RTE_SDK)/mk/rte.vars.mk # library name
> > > > LIB = librte_lpm.a
> > > >
> > > > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > > > CFLAGS += -O3
> > > > CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -LDLIBS += -lrte_eal
> > > > -lrte_hash
> > > > +LDLIBS += -lrte_eal -lrte_hash -lrte_rcu
> > > >
> > > > EXPORT_MAP := rte_lpm_version.map
> > > >
> > > > diff --git a/lib/librte_lpm/meson.build
> > > > b/lib/librte_lpm/meson.build index a5176d8ae..19a35107f 100644
> > > > --- a/lib/librte_lpm/meson.build
> > > > +++ b/lib/librte_lpm/meson.build
> > > > @@ -2,9 +2,11 @@
> > > > # Copyright(c) 2017 Intel Corporation
> > > >
> > > > version = 2
> > > > +allow_experimental_apis = true
> > > > sources = files('rte_lpm.c', 'rte_lpm6.c') headers =
> > > > files('rte_lpm.h', 'rte_lpm6.h') # since header files have
> > > > different names, we can install all vector headers # without
> > > > worrying about which architecture we actually need headers +=
> > > > files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')
> > > > deps += ['hash']
> > > > +deps += ['rcu']
> > > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
> > > > index
> > > > 3a929a1b1..ca58d4b35 100644
> > > > --- a/lib/librte_lpm/rte_lpm.c
> > > > +++ b/lib/librte_lpm/rte_lpm.c
> > > > @@ -1,5 +1,6 @@
> > > > /* SPDX-License-Identifier: BSD-3-Clause
> > > > * Copyright(c) 2010-2014 Intel Corporation
> > > > + * Copyright(c) 2019 Arm Limited
> > > > */
> > > >
> > > > #include <string.h>
> > > > @@ -381,6 +382,8 @@ rte_lpm_free_v1604(struct rte_lpm *lpm)
> > > >
> > > > rte_mcfg_tailq_write_unlock();
> > > >
> > > > + if (lpm->dq)
> > > > + rte_rcu_qsbr_dq_delete(lpm->dq);
> > > > rte_free(lpm->tbl8);
> > > > rte_free(lpm->rules_tbl);
> > > > rte_free(lpm);
> > > > @@ -390,6 +393,59 @@ BIND_DEFAULT_SYMBOL(rte_lpm_free, _v1604,
> > > 16.04);
> > > > MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
> > > > rte_lpm_free_v1604);
> > > >
> > > > +struct __rte_lpm_rcu_dq_entry {
> > > > + uint32_t tbl8_group_index;
> > > > + uint32_t pad;
> > > > +};
> > > > +
> > > > +static void
> > > > +__lpm_rcu_qsbr_free_resource(void *p, void *data) {
> > > > + struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> > > > + struct __rte_lpm_rcu_dq_entry *e =
> > > > + (struct __rte_lpm_rcu_dq_entry *)data;
> > > > + struct rte_lpm_tbl_entry *tbl8 = (struct rte_lpm_tbl_entry *)p;
> > > > +
> > > > + /* Set tbl8 group invalid */
> > > > + __atomic_store(&tbl8[e->tbl8_group_index], &zero_tbl8_entry,
> > > > + __ATOMIC_RELAXED);
> > > > +}
> > > > +
> > > > +/* Associate QSBR variable with an LPM object.
> > > > + */
> > > > +int
> > > > +rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_rcu_qsbr *v) {
> > > > + char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
> > > > + struct rte_rcu_qsbr_dq_parameters params;
> > > > +
> > > > + if ((lpm == NULL) || (v == NULL)) {
> > > > + rte_errno = EINVAL;
> > > > + return 1;
> > > > + }
> > > > +
> > > > + if (lpm->dq) {
> > > > + rte_errno = EEXIST;
> > > > + return 1;
> > > > + }
> > > > +
> > > > + /* Init QSBR defer queue. */
> > > > + snprintf(rcu_dq_name, sizeof(rcu_dq_name), "LPM_RCU_%s", lpm-
> > > >name);
> > > > + params.name = rcu_dq_name;
> > > > + params.size = lpm->number_tbl8s;
> > > > + params.esize = sizeof(struct __rte_lpm_rcu_dq_entry);
> > > > + params.f = __lpm_rcu_qsbr_free_resource;
> > > > + params.p = lpm->tbl8;
> > > > + params.v = v;
> > > > + lpm->dq = rte_rcu_qsbr_dq_create(¶ms);
> > > > + if (lpm->dq == NULL) {
> > > > + RTE_LOG(ERR, LPM, "LPM QS defer queue creation failed\n");
> > > > + return 1;
> > > > + }
> > >
> > > Few thoughts about that function:
> > Few things to keep in mind, the goal of the design is to make it easy
> > for the applications to adopt lock-free algorithms. The reclamation
> > process in the writer is a major portion of code one has to write for using
> lock-free algorithms. The current design is such that the writer does not have
> to change any code or write additional code other than calling
> 'rte_lpm_rcu_qsbr_add'.
> >
> > > It is named rcu_qsbr_add() but in fact it allocates a defer queue for a given rcu var.
> > > So first thought - is it always necessary?
> > This is part of the design. If the application does not want to use
> > this integrated logic, then it does not have to call this API. It can
> > use the RCU defer APIs to implement its own logic. But, if I ask the question,
> does this integrated logic address most of the use cases of the LPM library, I
> think the answer is yes.
> >
> > > For some use-cases I suppose user might be ok to wait for quiescent
> > > state change inside tbl8_free()?
> > Yes, that is a possibility (for ex: no frequent route changes). But, I
> > think that is very trivial for the application to implement. Though, the LPM
> library has to separate the 'delete' and 'free' operations.
>
> Exactly.
> That's why it is not trivial with the current LPM library.
> In fact, to do that himself right now, the user would have to implement and
> support his own version of the LPM code.
😊, well we definitely don't want them to write their own library (if DPDK LPM is enough)
IMO, we need to be consistent with other libraries in terms of APIs. That's another topic.
I do not see any problem implementing this now, or providing a facility in the APIs to implement it in the future. We can add a 'flags' field which will allow for other methods of reclamation.
>
> Honestly, I don't understand why you consider it as a drawback.
> From my perspective only few things need to be changed:
>
> 1. Add 2 parameters to 'rte_lpm_rcu_qsbr_add():
> number of elems in defer_queue
> reclaim() threshold value.
> If the user doesn't want to provide any values, that's fine we can use default
> ones here (as you do it right now).
I think we have agreed on this, I see the value in doing this.
> 2. Make rte_lpm_rcu_qsbr_add() to return pointer to the defer_queue.
> Again if user doesn't want to call reclaim() himself, he can just ignore return
> value.
Given the goal of reducing the burden on the user, this is not a step in that direction. But if you see a use case for it, I don't have any issues; Vladimir asked for it as well in the other thread.
>
> These 2 changes will provide us with necessary flexibility that would help to
> cover more use-cases:
> - user can decide how big should be the defer queue
> - user can decide when/how he wants to do reclaim()
>
> Konstantin
>
> > Similar operations are provided in the rte_hash library. IMO, we should
> > follow a consistent approach.
> >
> > > Another thing you do allocate defer queue, but it is internal, so
> > > user can't call
> > > reclaim() manually, which looks strange.
> > > Why not to return defer_queue pointer to the user, so he can call
> > > reclaim() himself at appropriate time?
> > The intention of the design is to take the complexity away from the
> > user of the LPM library. IMO, the current design will address most use cases of
> the LPM library. If we expose the 2 parameters (when to trigger reclamation and
> how much to reclaim) in the 'rte_lpm_rcu_qsbr_add'
> > API, it should provide enough flexibility to the application.
> >
> > > Third thing - you always allocate defer queue with size equal to
> > > number of tbl8.
> > > Though I understand it could be up to 16M tbl8 groups inside the LPM.
> > > Do we really need defer queue that long?
> > No, we do not need it to be this long. It is this long today to avoid
> returning a no-space error from the defer queue.
> >
> > > Especially considering that current rcu_defer_queue will start
> > > reclamation when 1/8 of defer_quueue becomes full and wouldn't
> > > reclaim more then
> > > 1/16 of it.
> > > Probably better to let user to decide himself how long defer_queue
> > > he needs for that LPM?
> > It makes sense to expose it to the user if the writer-writer
> > concurrency is lock-free (no memory allocation allowed to expand the
> > defer queue size when the queue is full). However, LPM is not lock-free on
> the writer side. If we think the writer could be lock-free in the future, it has to
> be exposed to the user.
> >
> > >
> > > Konstantin
> > Pulling questions/comments from other threads:
> > Can we leave reclamation to some other housekeeping thread (a sort of
> garbage collector)? Or is such a mode not supported/planned?
> >
> > [Honnappa] If the reclamation cost is small, the current method
> > provides advantages over having a separate thread to do reclamation. I
> > did not plan to provide such an option. But maybe it makes sense to keep the
> options open (especially from an ABI perspective). Maybe we should add a flags
> field which will allow us to implement different methods in the future?
> >
> > >
> > >
> > > > +
> > > > + return 0;
> > > > +}
> > > > +
> > > > /*
> > > > * Adds a rule to the rule table.
> > > > *
> > > > @@ -679,14 +735,15 @@ tbl8_alloc_v20(struct rte_lpm_tbl_entry_v20
> > > > *tbl8) }
> > > >
> > > > static int32_t
> > > > -tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > > number_tbl8s)
> > > > +__tbl8_alloc_v1604(struct rte_lpm *lpm)
> > > > {
> > > > uint32_t group_idx; /* tbl8 group index. */
> > > > struct rte_lpm_tbl_entry *tbl8_entry;
> > > >
> > > > /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
> > > > - for (group_idx = 0; group_idx < number_tbl8s; group_idx++) {
> > > > - tbl8_entry = &tbl8[group_idx *
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > > + for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
> > > > + tbl8_entry = &lpm->tbl8[group_idx *
> > > > +
> > > RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > > /* If a free tbl8 group is found clean it and set as VALID. */
> > > > if (!tbl8_entry->valid_group) {
> > > > struct rte_lpm_tbl_entry new_tbl8_entry = { @@ -
> > > 712,6 +769,21 @@
> > > > tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
> > > > return -ENOSPC;
> > > > }
> > > >
> > > > +static int32_t
> > > > +tbl8_alloc_v1604(struct rte_lpm *lpm) {
> > > > + int32_t group_idx; /* tbl8 group index. */
> > > > +
> > > > + group_idx = __tbl8_alloc_v1604(lpm);
> > > > + if ((group_idx < 0) && (lpm->dq != NULL)) {
> > > > + /* If there are no tbl8 groups try to reclaim some. */
> > > > + if (rte_rcu_qsbr_dq_reclaim(lpm->dq) == 0)
> > > > + group_idx = __tbl8_alloc_v1604(lpm);
> > > > + }
> > > > +
> > > > + return group_idx;
> > > > +}
> > > > +
> > > > static void
> > > > tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t
> > > > tbl8_group_start) { @@ -728,13 +800,21 @@ tbl8_free_v20(struct
> > > > rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start) }
> > > >
> > > > static void
> > > > -tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > > tbl8_group_start)
> > > > +tbl8_free_v1604(struct rte_lpm *lpm, uint32_t tbl8_group_start)
> > > > {
> > > > - /* Set tbl8 group invalid*/
> > > > struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> > > > + struct __rte_lpm_rcu_dq_entry e;
> > > >
> > > > - __atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
> > > > - __ATOMIC_RELAXED);
> > > > + if (lpm->dq != NULL) {
> > > > + e.tbl8_group_index = tbl8_group_start;
> > > > + e.pad = 0;
> > > > + /* Push into QSBR defer queue. */
> > > > + rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&e);
> > > > + } else {
> > > > + /* Set tbl8 group invalid*/
> > > > + __atomic_store(&lpm->tbl8[tbl8_group_start],
> > > &zero_tbl8_entry,
> > > > + __ATOMIC_RELAXED);
> > > > + }
> > > > }
> > > >
> > > > static __rte_noinline int32_t
> > > > @@ -1037,7 +1117,7 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> > > > uint32_t ip_masked, uint8_t depth,
> > > >
> > > > if (!lpm->tbl24[tbl24_index].valid) {
> > > > /* Search for a free tbl8 group. */
> > > > - tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm-
> > > >number_tbl8s);
> > > > + tbl8_group_index = tbl8_alloc_v1604(lpm);
> > > >
> > > > /* Check tbl8 allocation was successful. */
> > > > if (tbl8_group_index < 0) {
> > > > @@ -1083,7 +1163,7 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> > > uint32_t ip_masked, uint8_t depth,
> > > > } /* If valid entry but not extended calculate the index into Table8. */
> > > > else if (lpm->tbl24[tbl24_index].valid_group == 0) {
> > > > /* Search for free tbl8 group. */
> > > > - tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm-
> > > >number_tbl8s);
> > > > + tbl8_group_index = tbl8_alloc_v1604(lpm);
> > > >
> > > > if (tbl8_group_index < 0) {
> > > > return tbl8_group_index;
> > > > @@ -1818,7 +1898,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm,
> > > uint32_t ip_masked,
> > > > */
> > > > lpm->tbl24[tbl24_index].valid = 0;
> > > > __atomic_thread_fence(__ATOMIC_RELEASE);
> > > > - tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
> > > > + tbl8_free_v1604(lpm, tbl8_group_start);
> > > > } else if (tbl8_recycle_index > -1) {
> > > > /* Update tbl24 entry. */
> > > > struct rte_lpm_tbl_entry new_tbl24_entry = { @@ -1834,7
> > > +1914,7 @@
> > > > delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
> > > > __atomic_store(&lpm->tbl24[tbl24_index],
> > > &new_tbl24_entry,
> > > > __ATOMIC_RELAXED);
> > > > __atomic_thread_fence(__ATOMIC_RELEASE);
> > > > - tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
> > > > + tbl8_free_v1604(lpm, tbl8_group_start);
> > > > }
> > > > #undef group_idx
> > > > return 0;
> > > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
> > > > index 906ec4483..49c12a68d 100644
> > > > --- a/lib/librte_lpm/rte_lpm.h
> > > > +++ b/lib/librte_lpm/rte_lpm.h
> > > > @@ -1,5 +1,6 @@
> > > > /* SPDX-License-Identifier: BSD-3-Clause
> > > > * Copyright(c) 2010-2014 Intel Corporation
> > > > + * Copyright(c) 2019 Arm Limited
> > > > */
> > > >
> > > > #ifndef _RTE_LPM_H_
> > > > @@ -21,6 +22,7 @@
> > > > #include <rte_common.h>
> > > > #include <rte_vect.h>
> > > > #include <rte_compat.h>
> > > > +#include <rte_rcu_qsbr.h>
> > > >
> > > > #ifdef __cplusplus
> > > > extern "C" {
> > > > @@ -186,6 +188,7 @@ struct rte_lpm {
> > > > __rte_cache_aligned; /**< LPM tbl24 table. */
> > > > struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
> > > > struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> > > > + struct rte_rcu_qsbr_dq *dq; /**< RCU QSBR defer queue.*/
> > > > };
> > > >
> > > > /**
> > > > @@ -248,6 +251,24 @@ rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
> > > void
> > > > rte_lpm_free_v1604(struct rte_lpm *lpm);
> > > >
> > > > +/**
> > > > + * Associate RCU QSBR variable with an LPM object.
> > > > + *
> > > > + * @param lpm
> > > > + * the lpm object to add RCU QSBR
> > > > + * @param v
> > > > + * RCU QSBR variable
> > > > + * @return
> > > > + * On success - 0
> > > > + * On error - 1 with error code set in rte_errno.
> > > > + * Possible rte_errno codes are:
> > > > + * - EINVAL - invalid pointer
> > > > + * - EEXIST - already added QSBR
> > > > + * - ENOMEM - memory allocation failure
> > > > + */
> > > > +__rte_experimental
> > > > +int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_rcu_qsbr
> > > > +*v);
> > > > +
> > > > /**
> > > > * Add a rule to the LPM table.
> > > > *
> > > > diff --git a/lib/librte_lpm/rte_lpm_version.map
> > > > b/lib/librte_lpm/rte_lpm_version.map
> > > > index 90beac853..b353aabd2 100644
> > > > --- a/lib/librte_lpm/rte_lpm_version.map
> > > > +++ b/lib/librte_lpm/rte_lpm_version.map
> > > > @@ -44,3 +44,9 @@ DPDK_17.05 {
> > > > rte_lpm6_lookup_bulk_func;
> > > >
> > > > } DPDK_16.04;
> > > > +
> > > > +EXPERIMENTAL {
> > > > + global:
> > > > +
> > > > + rte_lpm_rcu_qsbr_add;
> > > > +};
> > > > --
> > > > 2.17.1
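For readers following the thread, the allocation fallback in the patch above (tbl8_alloc_v1604() retrying after rte_rcu_qsbr_dq_reclaim()) can be illustrated with a self-contained toy model. This is not DPDK code: the pool size, the FIFO defer queue, and the instant "grace period elapsed" reclaim are all simplifications standing in for the real QSBR machinery.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the reclaim-on-allocation-failure scheme: a fixed pool
 * of tbl8 groups, a FIFO "defer queue" holding freed groups that are
 * not yet safe to reuse, and an allocator that falls back to reclaiming
 * from the defer queue only when the pool is exhausted. */

#define NUM_GROUPS 4

static int group_valid[NUM_GROUPS];  /* 1 = in use */
static uint32_t dq[NUM_GROUPS];      /* deferred (freed) group indexes */
static unsigned int dq_len;

/* Scan for a free group, mirroring __tbl8_alloc_v1604(). */
static int32_t group_alloc_scan(void)
{
	for (uint32_t i = 0; i < NUM_GROUPS; i++) {
		if (!group_valid[i]) {
			group_valid[i] = 1;
			return (int32_t)i;
		}
	}
	return -1; /* -ENOSPC in the real code */
}

/* Pretend the grace period has elapsed for every deferred entry and
 * return them all to the pool; stands in for rte_rcu_qsbr_dq_reclaim(). */
static int dq_reclaim(void)
{
	if (dq_len == 0)
		return -1;
	while (dq_len > 0)
		group_valid[dq[--dq_len]] = 0;
	return 0;
}

/* Allocation with fallback, mirroring tbl8_alloc_v1604() in the patch. */
static int32_t group_alloc(void)
{
	int32_t idx = group_alloc_scan();
	if (idx < 0 && dq_reclaim() == 0)
		idx = group_alloc_scan();
	return idx;
}

/* Free defers the group instead of invalidating it immediately,
 * mirroring the rte_rcu_qsbr_dq_enqueue() branch of tbl8_free_v1604(). */
static void group_free(uint32_t idx)
{
	dq[dq_len++] = idx;
}
```

Exhausting the pool, freeing one group (which only defers it), and allocating again exercises the fallback path: the scan fails, the reclaim returns the deferred group, and the retry succeeds.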
* Re: [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
2019-10-17 14:42 3% ` [dpdk-dev] [PATCH v2] " Olivier Matz
@ 2019-10-18 2:47 0% ` Wang, Haiyue
2019-10-18 7:53 0% ` Olivier Matz
2019-10-22 22:51 0% ` Ananyev, Konstantin
` (2 subsequent siblings)
3 siblings, 1 reply; 200+ results
From: Wang, Haiyue @ 2019-10-18 2:47 UTC (permalink / raw)
To: Olivier Matz, dev
Cc: Andrew Rybchenko, Richardson, Bruce, Jerin Jacob Kollanukkaran,
Wiles, Keith, Ananyev, Konstantin, Morten Brørup,
Stephen Hemminger, Thomas Monjalon
Hi Olivier
> -----Original Message-----
> From: Olivier Matz [mailto:olivier.matz@6wind.com]
> Sent: Thursday, October 17, 2019 22:42
> To: dev@dpdk.org
> Cc: Andrew Rybchenko <arybchenko@solarflare.com>; Richardson, Bruce <bruce.richardson@intel.com>; Wang,
> Haiyue <haiyue.wang@intel.com>; Jerin Jacob Kollanukkaran <jerinj@marvell.com>; Wiles, Keith
> <keith.wiles@intel.com>; Ananyev, Konstantin <konstantin.ananyev@intel.com>; Morten Brørup
> <mb@smartsharesystems.com>; Stephen Hemminger <stephen@networkplumber.org>; Thomas Monjalon
> <thomas@monjalon.net>
> Subject: [PATCH v2] mbuf: support dynamic fields and flags
>
> Many features require to store data inside the mbuf. As the room in mbuf
> structure is limited, it is not possible to have a field for each
> feature. Also, changing fields in the mbuf structure can break the API
> or ABI.
>
> This commit addresses these issues, by enabling the dynamic registration
> of fields or flags:
>
> - a dynamic field is a named area in the rte_mbuf structure, with a
> given size (>= 1 byte) and alignment constraint.
> - a dynamic flag is a named bit in the rte_mbuf structure.
>
> The typical use case is a PMD that registers space for an offload
> feature, when the application requests to enable this feature. As
> the space in mbuf is limited, the space should only be reserved if it
> is going to be used (i.e when the application explicitly asks for it).
>
> The registration can be done at any moment, but it is not possible
> to unregister fields or flags for now.
>
> Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> Acked-by: Thomas Monjalon <thomas@monjalon.net>
> ---
>
> v2
>
> * Rebase on top of master: solve conflict with Stephen's patchset
> (packet copy)
> * Add new apis to register a dynamic field/flag at a specific place
> * Add a dump function (sugg by David)
> * Enhance field registration function to select the best offset, keeping
> large aligned zones as much as possible (sugg by Konstantin)
> * Use a size_t and unsigned int instead of int when relevant
> (sugg by Konstantin)
> * Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
> (sugg by Konstantin)
> * Remove unused argument in private function (sugg by Konstantin)
> * Fix and simplify locking (sugg by Konstantin)
> * Fix minor typo
>
> rfc -> v1
>
> * Rebase on top of master
> * Change registration API to use a structure instead of
> variables, getting rid of #defines (Stephen's comment)
> * Update flag registration to use a similar API as fields.
> * Change max name length from 32 to 64 (sugg. by Thomas)
> * Enhance API documentation (Haiyue's and Andrew's comments)
> * Add a debug log at registration
> * Add some words in release note
> * Did some performance tests (sugg. by Andrew):
> On my platform, reading a dynamic field takes ~3 cycles more
> than a static field, and ~2 cycles more for writing.
>
> app/test/test_mbuf.c | 145 ++++++-
> doc/guides/rel_notes/release_19_11.rst | 7 +
> lib/librte_mbuf/Makefile | 2 +
> lib/librte_mbuf/meson.build | 6 +-
> lib/librte_mbuf/rte_mbuf.h | 23 +-
> lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
> lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
> lib/librte_mbuf/rte_mbuf_version.map | 7 +
> 8 files changed, 959 insertions(+), 5 deletions(-)
> create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
>
> diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
> index b9c2b2500..01cafad59 100644
> --- a/app/test/test_mbuf.c
> +++ b/app/test/test_mbuf.c
> @@ -28,6 +28,7 @@
> #include <rte_random.h>
[snip]
> +/**
> + * Helper macro to access to a dynamic field.
> + */
> +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> +
The suggested macro seems to be missing? ;-)
/**
* Helper macro to access to a dynamic flag.
*/
#define RTE_MBUF_DYNFLAG(offset) (1ULL << (offset))
BTW, should we have a central place that lists the registered dynamic field
and flag names together (a name overview, with details linking to each PMD's
help page)? Since rte_mbuf_dynfield:name & rte_mbuf_dynflag:name work as an
API, users could check how many names are registered, and developers could
check whether the names they want to use are already taken, without having
to check rte_errno ... Just a suggestion for user experience.
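The offset-registry idea behind the patch can be sketched with a self-contained toy model. This is only an illustration of the concept, not the DPDK API: the registry here is a bump allocator over a fixed scratch area, whereas the real rte_mbuf_dynfield_register() keeps named entries in shared memory and picks the best-fitting offset; the TOY_* names are invented for this sketch.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of dynamic mbuf fields: a registry hands out offsets inside
 * a fixed per-mbuf scratch area, honouring each field's alignment, and
 * an accessor macro turns (mbuf, offset) into a typed pointer. */

#define DYNAREA_SIZE 16 /* stands in for mbuf->dynfield1[2] */

struct toy_mbuf {
	_Alignas(uint64_t) uint8_t dynarea[DYNAREA_SIZE];
};

static size_t next_free; /* next unassigned byte in the scratch area */

/* Reserve `size` bytes aligned to `align`; returns the offset or -1. */
static int dynfield_register(size_t size, size_t align)
{
	size_t off = (next_free + align - 1) & ~(align - 1);

	if (off + size > DYNAREA_SIZE)
		return -1; /* the real code would set rte_errno = ENOMEM */
	next_free = off + size;
	return (int)off;
}

/* Accessor in the spirit of RTE_MBUF_DYNFIELD(m, offset, type). */
#define TOY_DYNFIELD(m, offset, type) \
	((type)((uintptr_t)(m)->dynarea + (offset)))

/* A dynamic flag is just a registered bit position, as in the
 * RTE_MBUF_DYNFLAG(offset) macro suggested above. */
#define TOY_DYNFLAG(offset) (1ULL << (offset))
```

Registering a 1-byte field and then an 8-byte field shows the alignment handling: the second offset skips to the next 8-byte boundary, and a request that no longer fits fails cleanly.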
>
> } DPDK_18.08;
> --
> 2.20.1
* Re: [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code Anatoly Burakov
@ 2019-10-17 21:04 0% ` Carrillo, Erik G
2019-10-21 13:24 3% ` Kevin Traynor
1 sibling, 0 replies; 200+ results
From: Carrillo, Erik G @ 2019-10-17 21:04 UTC (permalink / raw)
To: Burakov, Anatoly, dev
Cc: Baran, MarcinX, Robert Sanford, Mcnamara, John, Richardson,
Bruce, thomas, david.marchand
> -----Original Message-----
> From: Burakov, Anatoly <anatoly.burakov@intel.com>
> Sent: Thursday, October 17, 2019 9:32 AM
> To: dev@dpdk.org
> Cc: Baran, MarcinX <marcinx.baran@intel.com>; Robert Sanford
> <rsanford@akamai.com>; Carrillo, Erik G <erik.g.carrillo@intel.com>;
> Mcnamara, John <john.mcnamara@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; thomas@monjalon.net;
> david.marchand@redhat.com
> Subject: [PATCH v4 04/10] timer: remove deprecated code
>
> From: Marcin Baran <marcinx.baran@intel.com>
>
> Remove code for old ABI versions ahead of ABI version bump.
>
> Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Looks good to me too:
Acked-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>
* [dpdk-dev] [PATCH v4 09/10] build: change ABI version to 20.0
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (9 preceding siblings ...)
2019-10-17 14:31 3% ` [dpdk-dev] [PATCH v4 08/10] drivers/octeontx: add missing public symbol Anatoly Burakov
@ 2019-10-17 14:31 2% ` Anatoly Burakov
2019-10-17 14:32 23% ` [dpdk-dev] [PATCH v4 10/10] buildtools: add ABI versioning check script Anatoly Burakov
11 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev
Cc: Pawel Modrak, Nicolas Chautru, Hemant Agrawal, Sachin Saxena,
Rosen Xu, Stephen Hemminger, Anoob Joseph, Tomasz Duszynski,
Liron Himi, Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru,
Lee Daly, Fiona Trahe, Ashish Gupta, Sunila Sahu, Declan Doherty,
Pablo de Lara, Gagandeep Singh, Ravi Kumar, Akhil Goyal,
Michael Shamis, Nagadheeraj Rottela, Srikanth Jampala, Fan Zhang,
Jay Zhou, Nipun Gupta, Mattias Rönnblom, Pavan Nikhilesh,
Liang Ma, Peter Mccarthy, Harry van Haaren, Artem V. Andreev,
Andrew Rybchenko, Olivier Matz, Gage Eads, John W. Linville,
Xiaolong Ye, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Igor Russkikh, Pavel Belous, Allain Legacy, Matt Peters,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Chas Williams, Rahul Lakkireddy, Wenzhuo Lu, Marcin Wojtas,
Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Xiao Wang, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Wei Hu (Xavier), Min Hu (Connor),
Yisen Zhuang, Beilei Xing, Jingjing Wu, Qiming Yang,
Konstantin Ananyev, Ferruh Yigit, Shijith Thotton,
Srisivasubramanian Srinivasan, Jakub Grajciar, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Zyta Szpak,
K. Y. Srinivasan, Haiyang Zhang, Rastislav Cernay, Jan Remes,
Alejandro Lucero, Tetsuya Mukawa, Kiran Kumar K,
Bruce Richardson, Jasvinder Singh, Cristian Dumitrescu,
Keith Wiles, Maciej Czekaj, Maxime Coquelin, Tiwei Bie,
Zhihong Wang, Yong Wang, Tianfei zhang, Xiaoyun Li, Satha Rao,
Shreyansh Jain, David Hunt, Byron Marohn, Yipeng Wang,
Thomas Monjalon, Bernard Iremonger, Jiayu Hu, Sameh Gobriel,
Reshma Pattan, Vladimir Medvedkin, Honnappa Nagarahalli,
Kevin Laatz, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, david.marchand
From: Pawel Modrak <pawelx.modrak@intel.com>
Merge all versions in linker version script files to DPDK_20.0.
This commit was generated by running the following command:
:~/DPDK$ buildtools/update-abi.sh 20.0
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
.../rte_pmd_bbdev_fpga_lte_fec_version.map | 8 +-
.../null/rte_pmd_bbdev_null_version.map | 2 +-
.../rte_pmd_bbdev_turbo_sw_version.map | 2 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 115 +++----
drivers/bus/fslmc/rte_bus_fslmc_version.map | 154 ++++-----
drivers/bus/ifpga/rte_bus_ifpga_version.map | 14 +-
drivers/bus/pci/rte_bus_pci_version.map | 2 +-
drivers/bus/vdev/rte_bus_vdev_version.map | 12 +-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 12 +-
drivers/common/cpt/rte_common_cpt_version.map | 4 +-
.../common/dpaax/rte_common_dpaax_version.map | 4 +-
.../common/mvep/rte_common_mvep_version.map | 6 +-
.../octeontx/rte_common_octeontx_version.map | 6 +-
.../rte_common_octeontx2_version.map | 16 +-
.../compress/isal/rte_pmd_isal_version.map | 2 +-
.../rte_pmd_octeontx_compress_version.map | 2 +-
drivers/compress/qat/rte_pmd_qat_version.map | 2 +-
.../compress/zlib/rte_pmd_zlib_version.map | 2 +-
.../aesni_gcm/rte_pmd_aesni_gcm_version.map | 2 +-
.../aesni_mb/rte_pmd_aesni_mb_version.map | 2 +-
.../crypto/armv8/rte_pmd_armv8_version.map | 2 +-
.../caam_jr/rte_pmd_caam_jr_version.map | 3 +-
drivers/crypto/ccp/rte_pmd_ccp_version.map | 3 +-
.../dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 10 +-
.../dpaa_sec/rte_pmd_dpaa_sec_version.map | 10 +-
.../crypto/kasumi/rte_pmd_kasumi_version.map | 2 +-
.../crypto/mvsam/rte_pmd_mvsam_version.map | 2 +-
.../crypto/nitrox/rte_pmd_nitrox_version.map | 2 +-
.../null/rte_pmd_null_crypto_version.map | 2 +-
.../rte_pmd_octeontx_crypto_version.map | 3 +-
.../openssl/rte_pmd_openssl_version.map | 2 +-
.../rte_pmd_crypto_scheduler_version.map | 19 +-
.../crypto/snow3g/rte_pmd_snow3g_version.map | 2 +-
.../virtio/rte_pmd_virtio_crypto_version.map | 2 +-
drivers/crypto/zuc/rte_pmd_zuc_version.map | 2 +-
.../event/dpaa/rte_pmd_dpaa_event_version.map | 3 +-
.../dpaa2/rte_pmd_dpaa2_event_version.map | 2 +-
.../event/dsw/rte_pmd_dsw_event_version.map | 2 +-
.../rte_pmd_octeontx_event_version.map | 2 +-
.../rte_pmd_octeontx2_event_version.map | 3 +-
.../event/opdl/rte_pmd_opdl_event_version.map | 2 +-
.../rte_pmd_skeleton_event_version.map | 3 +-
drivers/event/sw/rte_pmd_sw_event_version.map | 2 +-
.../bucket/rte_mempool_bucket_version.map | 3 +-
.../mempool/dpaa/rte_mempool_dpaa_version.map | 2 +-
.../dpaa2/rte_mempool_dpaa2_version.map | 12 +-
.../octeontx/rte_mempool_octeontx_version.map | 2 +-
.../rte_mempool_octeontx2_version.map | 4 +-
.../mempool/ring/rte_mempool_ring_version.map | 3 +-
.../stack/rte_mempool_stack_version.map | 3 +-
.../af_packet/rte_pmd_af_packet_version.map | 3 +-
drivers/net/af_xdp/rte_pmd_af_xdp_version.map | 2 +-
drivers/net/ark/rte_pmd_ark_version.map | 5 +-
.../net/atlantic/rte_pmd_atlantic_version.map | 4 +-
drivers/net/avp/rte_pmd_avp_version.map | 2 +-
drivers/net/axgbe/rte_pmd_axgbe_version.map | 2 +-
drivers/net/bnx2x/rte_pmd_bnx2x_version.map | 3 +-
drivers/net/bnxt/rte_pmd_bnxt_version.map | 4 +-
drivers/net/bonding/rte_pmd_bond_version.map | 47 +--
drivers/net/cxgbe/rte_pmd_cxgbe_version.map | 3 +-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 11 +-
drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +-
drivers/net/e1000/rte_pmd_e1000_version.map | 3 +-
drivers/net/ena/rte_pmd_ena_version.map | 3 +-
drivers/net/enetc/rte_pmd_enetc_version.map | 3 +-
drivers/net/enic/rte_pmd_enic_version.map | 3 +-
.../net/failsafe/rte_pmd_failsafe_version.map | 3 +-
drivers/net/fm10k/rte_pmd_fm10k_version.map | 3 +-
drivers/net/hinic/rte_pmd_hinic_version.map | 3 +-
drivers/net/hns3/rte_pmd_hns3_version.map | 4 +-
drivers/net/i40e/rte_pmd_i40e_version.map | 65 ++--
drivers/net/iavf/rte_pmd_iavf_version.map | 3 +-
drivers/net/ice/rte_pmd_ice_version.map | 3 +-
drivers/net/ifc/rte_pmd_ifc_version.map | 3 +-
drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 3 +-
drivers/net/ixgbe/rte_pmd_ixgbe_version.map | 62 ++--
drivers/net/kni/rte_pmd_kni_version.map | 3 +-
.../net/liquidio/rte_pmd_liquidio_version.map | 3 +-
drivers/net/memif/rte_pmd_memif_version.map | 5 +-
drivers/net/mlx4/rte_pmd_mlx4_version.map | 3 +-
drivers/net/mlx5/rte_pmd_mlx5_version.map | 2 +-
drivers/net/mvneta/rte_pmd_mvneta_version.map | 2 +-
drivers/net/mvpp2/rte_pmd_mvpp2_version.map | 2 +-
drivers/net/netvsc/rte_pmd_netvsc_version.map | 4 +-
drivers/net/nfb/rte_pmd_nfb_version.map | 3 +-
drivers/net/nfp/rte_pmd_nfp_version.map | 2 +-
drivers/net/null/rte_pmd_null_version.map | 3 +-
.../net/octeontx/rte_pmd_octeontx_version.map | 10 +-
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +-
drivers/net/pcap/rte_pmd_pcap_version.map | 3 +-
drivers/net/qede/rte_pmd_qede_version.map | 3 +-
drivers/net/ring/rte_pmd_ring_version.map | 10 +-
drivers/net/sfc/rte_pmd_sfc_version.map | 3 +-
.../net/softnic/rte_pmd_softnic_version.map | 2 +-
.../net/szedata2/rte_pmd_szedata2_version.map | 2 +-
drivers/net/tap/rte_pmd_tap_version.map | 3 +-
.../net/thunderx/rte_pmd_thunderx_version.map | 3 +-
.../rte_pmd_vdev_netvsc_version.map | 3 +-
drivers/net/vhost/rte_pmd_vhost_version.map | 11 +-
drivers/net/virtio/rte_pmd_virtio_version.map | 3 +-
.../net/vmxnet3/rte_pmd_vmxnet3_version.map | 3 +-
.../rte_rawdev_dpaa2_cmdif_version.map | 3 +-
.../rte_rawdev_dpaa2_qdma_version.map | 4 +-
.../raw/ifpga/rte_rawdev_ifpga_version.map | 3 +-
drivers/raw/ioat/rte_rawdev_ioat_version.map | 3 +-
drivers/raw/ntb/rte_rawdev_ntb_version.map | 5 +-
.../rte_rawdev_octeontx2_dma_version.map | 3 +-
.../skeleton/rte_rawdev_skeleton_version.map | 3 +-
lib/librte_acl/rte_acl_version.map | 2 +-
lib/librte_bbdev/rte_bbdev_version.map | 4 +
.../rte_bitratestats_version.map | 2 +-
lib/librte_bpf/rte_bpf_version.map | 4 +
lib/librte_cfgfile/rte_cfgfile_version.map | 34 +-
lib/librte_cmdline/rte_cmdline_version.map | 10 +-
.../rte_compressdev_version.map | 4 +
.../rte_cryptodev_version.map | 102 ++----
.../rte_distributor_version.map | 4 +-
lib/librte_eal/rte_eal_version.map | 310 +++++++-----------
lib/librte_efd/rte_efd_version.map | 2 +-
lib/librte_ethdev/rte_ethdev_version.map | 160 +++------
lib/librte_eventdev/rte_eventdev_version.map | 130 +++-----
.../rte_flow_classify_version.map | 4 +
lib/librte_gro/rte_gro_version.map | 2 +-
lib/librte_gso/rte_gso_version.map | 2 +-
lib/librte_hash/rte_hash_version.map | 43 +--
lib/librte_ip_frag/rte_ip_frag_version.map | 10 +-
lib/librte_ipsec/rte_ipsec_version.map | 4 +
lib/librte_jobstats/rte_jobstats_version.map | 10 +-
lib/librte_kni/rte_kni_version.map | 2 +-
lib/librte_kvargs/rte_kvargs_version.map | 4 +-
.../rte_latencystats_version.map | 2 +-
lib/librte_lpm/rte_lpm_version.map | 39 +--
lib/librte_mbuf/rte_mbuf_version.map | 49 +--
lib/librte_member/rte_member_version.map | 2 +-
lib/librte_mempool/rte_mempool_version.map | 44 +--
lib/librte_meter/rte_meter_version.map | 13 +-
lib/librte_metrics/rte_metrics_version.map | 2 +-
lib/librte_net/rte_net_version.map | 23 +-
lib/librte_pci/rte_pci_version.map | 2 +-
lib/librte_pdump/rte_pdump_version.map | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 36 +-
lib/librte_port/rte_port_version.map | 64 +---
lib/librte_power/rte_power_version.map | 24 +-
lib/librte_rawdev/rte_rawdev_version.map | 4 +-
lib/librte_rcu/rte_rcu_version.map | 4 +
lib/librte_reorder/rte_reorder_version.map | 8 +-
lib/librte_ring/rte_ring_version.map | 10 +-
lib/librte_sched/rte_sched_version.map | 14 +-
lib/librte_security/rte_security_version.map | 2 +-
lib/librte_stack/rte_stack_version.map | 4 +
lib/librte_table/rte_table_version.map | 2 +-
.../rte_telemetry_version.map | 4 +
lib/librte_timer/rte_timer_version.map | 12 +-
lib/librte_vhost/rte_vhost_version.map | 52 +--
154 files changed, 724 insertions(+), 1406 deletions(-)
diff --git a/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map b/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
index f64b0f9c27..6bcea2cc7f 100644
--- a/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
+++ b/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
@@ -1,10 +1,10 @@
-DPDK_19.08 {
- local: *;
+DPDK_20.0 {
+ local: *;
};
EXPERIMENTAL {
- global:
+ global:
- fpga_lte_fec_configure;
+ fpga_lte_fec_configure;
};
diff --git a/drivers/baseband/null/rte_pmd_bbdev_null_version.map b/drivers/baseband/null/rte_pmd_bbdev_null_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/baseband/null/rte_pmd_bbdev_null_version.map
+++ b/drivers/baseband/null/rte_pmd_bbdev_null_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map b/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
+++ b/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index a221522c23..9ab8c76eef 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
bman_acquire;
@@ -8,127 +8,94 @@ DPDK_17.11 {
bman_new_pool;
bman_query_free_buffers;
bman_release;
+ bman_thread_irq;
+ dpaa_logtype_eventdev;
dpaa_logtype_mempool;
dpaa_logtype_pmd;
dpaa_netcfg;
+ dpaa_svr_family;
fman_ccsr_map_fd;
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
fman_if_clear_mac_addr;
fman_if_disable_rx;
- fman_if_enable_rx;
fman_if_discard_rx_errors;
- fman_if_get_fc_threshold;
+ fman_if_enable_rx;
fman_if_get_fc_quanta;
+ fman_if_get_fc_threshold;
fman_if_get_fdoff;
+ fman_if_get_sg_enable;
fman_if_loopback_disable;
fman_if_loopback_enable;
fman_if_promiscuous_disable;
fman_if_promiscuous_enable;
fman_if_reset_mcast_filter_table;
fman_if_set_bp;
- fman_if_set_fc_threshold;
fman_if_set_fc_quanta;
+ fman_if_set_fc_threshold;
fman_if_set_fdoff;
fman_if_set_ic_params;
fman_if_set_maxfrm;
fman_if_set_mcast_filter_table;
+ fman_if_set_sg;
fman_if_stats_get;
fman_if_stats_get_all;
fman_if_stats_reset;
fman_ip_rev;
+ fsl_qman_fq_portal_create;
netcfg_acquire;
netcfg_release;
of_find_compatible_node;
+ of_get_mac_address;
of_get_property;
+ per_lcore_dpaa_io;
+ per_lcore_held_bufs;
qm_channel_caam;
+ qm_channel_pool1;
+ qman_alloc_cgrid_range;
+ qman_alloc_pool_range;
+ qman_clear_irq;
+ qman_create_cgr;
qman_create_fq;
+ qman_dca_index;
+ qman_delete_cgr;
qman_dequeue;
qman_dqrr_consume;
qman_enqueue;
qman_enqueue_multi;
+ qman_enqueue_multi_fq;
qman_fq_fqid;
+ qman_fq_portal_irqsource_add;
+ qman_fq_portal_irqsource_remove;
+ qman_fq_portal_thread_irq;
qman_fq_state;
qman_global_init;
qman_init_fq;
- qman_poll_dqrr;
- qman_query_fq_np;
- qman_set_vdq;
- qman_reserve_fqid_range;
- qman_volatile_dequeue;
- rte_dpaa_driver_register;
- rte_dpaa_driver_unregister;
- rte_dpaa_mem_ptov;
- rte_dpaa_portal_init;
-
- local: *;
-};
-
-DPDK_18.02 {
- global:
-
- dpaa_logtype_eventdev;
- dpaa_svr_family;
- per_lcore_dpaa_io;
- per_lcore_held_bufs;
- qm_channel_pool1;
- qman_alloc_cgrid_range;
- qman_alloc_pool_range;
- qman_create_cgr;
- qman_dca_index;
- qman_delete_cgr;
- qman_enqueue_multi_fq;
+ qman_irqsource_add;
+ qman_irqsource_remove;
qman_modify_cgr;
qman_oos_fq;
+ qman_poll_dqrr;
qman_portal_dequeue;
qman_portal_poll_rx;
qman_query_fq_frm_cnt;
+ qman_query_fq_np;
qman_release_cgrid_range;
+ qman_reserve_fqid_range;
qman_retire_fq;
+ qman_set_fq_lookup_table;
+ qman_set_vdq;
qman_static_dequeue_add;
- rte_dpaa_portal_fq_close;
- rte_dpaa_portal_fq_init;
-
-} DPDK_17.11;
-
-DPDK_18.08 {
- global:
-
- fman_if_get_sg_enable;
- fman_if_set_sg;
- of_get_mac_address;
-
-} DPDK_18.02;
-
-DPDK_18.11 {
- global:
-
- bman_thread_irq;
- fman_if_get_sg_enable;
- fman_if_set_sg;
- qman_clear_irq;
-
- qman_irqsource_add;
- qman_irqsource_remove;
qman_thread_fd;
qman_thread_irq;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- qman_set_fq_lookup_table;
-
-} DPDK_18.11;
-
-DPDK_19.11 {
- global:
-
- fsl_qman_fq_portal_create;
- qman_fq_portal_irqsource_add;
- qman_fq_portal_irqsource_remove;
- qman_fq_portal_thread_irq;
-
-} DPDK_19.05;
+ qman_volatile_dequeue;
+ rte_dpaa_driver_register;
+ rte_dpaa_driver_unregister;
+ rte_dpaa_mem_ptov;
+ rte_dpaa_portal_fq_close;
+ rte_dpaa_portal_fq_init;
+ rte_dpaa_portal_init;
+
+ local: *;
+};
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 4da787236b..fe45575046 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,32 +1,67 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
+ dpaa2_affine_qbman_ethrx_swp;
dpaa2_affine_qbman_swp;
dpaa2_alloc_dpbp_dev;
dpaa2_alloc_dq_storage;
+ dpaa2_dpbp_supported;
+ dpaa2_dqrr_size;
+ dpaa2_eqcr_size;
dpaa2_free_dpbp_dev;
dpaa2_free_dq_storage;
+ dpaa2_free_eq_descriptors;
+ dpaa2_get_qbman_swp;
+ dpaa2_io_portal;
+ dpaa2_svr_family;
+ dpaa2_virt_mode;
dpbp_disable;
dpbp_enable;
dpbp_get_attributes;
dpbp_get_num_free_bufs;
dpbp_open;
dpbp_reset;
+ dpci_get_opr;
+ dpci_set_opr;
+ dpci_set_rx_queue;
+ dpcon_get_attributes;
+ dpcon_open;
+ dpdmai_close;
+ dpdmai_disable;
+ dpdmai_enable;
+ dpdmai_get_attributes;
+ dpdmai_get_rx_queue;
+ dpdmai_get_tx_queue;
+ dpdmai_open;
+ dpdmai_set_rx_queue;
+ dpio_add_static_dequeue_channel;
dpio_close;
dpio_disable;
dpio_enable;
dpio_get_attributes;
dpio_open;
+ dpio_remove_static_dequeue_channel;
dpio_reset;
dpio_set_stashing_destination;
+ mc_get_soc_version;
+ mc_get_version;
mc_send_command;
per_lcore__dpaa2_io;
+ per_lcore_dpaa2_held_bufs;
qbman_check_command_complete;
+ qbman_check_new_result;
qbman_eq_desc_clear;
+ qbman_eq_desc_set_dca;
qbman_eq_desc_set_fq;
qbman_eq_desc_set_no_orp;
+ qbman_eq_desc_set_orp;
qbman_eq_desc_set_qd;
qbman_eq_desc_set_response;
+ qbman_eq_desc_set_token;
+ qbman_fq_query_state;
+ qbman_fq_state_frame_count;
+ qbman_get_dqrr_from_idx;
+ qbman_get_dqrr_idx;
qbman_pull_desc_clear;
qbman_pull_desc_set_fq;
qbman_pull_desc_set_numframes;
@@ -35,112 +70,43 @@ DPDK_17.05 {
qbman_release_desc_set_bpid;
qbman_result_DQ_fd;
qbman_result_DQ_flags;
- qbman_result_has_new_result;
- qbman_swp_acquire;
- qbman_swp_pull;
- qbman_swp_release;
- rte_fslmc_driver_register;
- rte_fslmc_driver_unregister;
- rte_fslmc_vfio_dmamap;
- rte_mcp_ptr_list;
-
- local: *;
-};
-
-DPDK_17.08 {
- global:
-
- dpaa2_io_portal;
- dpaa2_get_qbman_swp;
- dpci_set_rx_queue;
- dpcon_open;
- dpcon_get_attributes;
- dpio_add_static_dequeue_channel;
- dpio_remove_static_dequeue_channel;
- mc_get_soc_version;
- mc_get_version;
- qbman_check_new_result;
- qbman_eq_desc_set_dca;
- qbman_get_dqrr_from_idx;
- qbman_get_dqrr_idx;
qbman_result_DQ_fqd_ctx;
+ qbman_result_DQ_odpid;
+ qbman_result_DQ_seqnum;
qbman_result_SCN_state;
+ qbman_result_eqresp_fd;
+ qbman_result_eqresp_rc;
+ qbman_result_eqresp_rspid;
+ qbman_result_eqresp_set_rspid;
+ qbman_result_has_new_result;
+ qbman_swp_acquire;
qbman_swp_dqrr_consume;
+ qbman_swp_dqrr_idx_consume;
qbman_swp_dqrr_next;
qbman_swp_enqueue_multiple;
qbman_swp_enqueue_multiple_desc;
+ qbman_swp_enqueue_multiple_fd;
qbman_swp_interrupt_clear_status;
+ qbman_swp_prefetch_dqrr_next;
+ qbman_swp_pull;
qbman_swp_push_set;
+ qbman_swp_release;
rte_dpaa2_alloc_dpci_dev;
- rte_fslmc_object_register;
- rte_global_active_dqs_list;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- dpaa2_dpbp_supported;
rte_dpaa2_dev_type;
+ rte_dpaa2_free_dpci_dev;
rte_dpaa2_intr_disable;
rte_dpaa2_intr_enable;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- dpaa2_svr_family;
- dpaa2_virt_mode;
- per_lcore_dpaa2_held_bufs;
- qbman_fq_query_state;
- qbman_fq_state_frame_count;
- qbman_swp_dqrr_idx_consume;
- qbman_swp_prefetch_dqrr_next;
- rte_fslmc_get_device_count;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- dpaa2_affine_qbman_ethrx_swp;
- dpdmai_close;
- dpdmai_disable;
- dpdmai_enable;
- dpdmai_get_attributes;
- dpdmai_get_rx_queue;
- dpdmai_get_tx_queue;
- dpdmai_open;
- dpdmai_set_rx_queue;
- rte_dpaa2_free_dpci_dev;
rte_dpaa2_memsegs;
-
-} DPDK_18.02;
-
-DPDK_18.11 {
- global:
- dpaa2_dqrr_size;
- dpaa2_eqcr_size;
- dpci_get_opr;
- dpci_set_opr;
-
-} DPDK_18.05;
-
-DPDK_19.05 {
- global:
- dpaa2_free_eq_descriptors;
-
- qbman_eq_desc_set_orp;
- qbman_eq_desc_set_token;
- qbman_result_DQ_odpid;
- qbman_result_DQ_seqnum;
- qbman_result_eqresp_fd;
- qbman_result_eqresp_rc;
- qbman_result_eqresp_rspid;
- qbman_result_eqresp_set_rspid;
- qbman_swp_enqueue_multiple_fd;
-} DPDK_18.11;
+ rte_fslmc_driver_register;
+ rte_fslmc_driver_unregister;
+ rte_fslmc_get_device_count;
+ rte_fslmc_object_register;
+ rte_fslmc_vfio_dmamap;
+ rte_global_active_dqs_list;
+ rte_mcp_ptr_list;
+
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/drivers/bus/ifpga/rte_bus_ifpga_version.map b/drivers/bus/ifpga/rte_bus_ifpga_version.map
index 964c9a9c45..05b4a28c1b 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga_version.map
+++ b/drivers/bus/ifpga/rte_bus_ifpga_version.map
@@ -1,17 +1,11 @@
-DPDK_18.05 {
+DPDK_20.0 {
global:
- rte_ifpga_get_integer32_arg;
- rte_ifpga_get_string_arg;
rte_ifpga_driver_register;
rte_ifpga_driver_unregister;
+ rte_ifpga_find_afu_by_name;
+ rte_ifpga_get_integer32_arg;
+ rte_ifpga_get_string_arg;
local: *;
};
-
-DPDK_19.05 {
- global:
-
- rte_ifpga_find_afu_by_name;
-
-} DPDK_18.05;
diff --git a/drivers/bus/pci/rte_bus_pci_version.map b/drivers/bus/pci/rte_bus_pci_version.map
index 27e9c4f101..012d817e14 100644
--- a/drivers/bus/pci/rte_bus_pci_version.map
+++ b/drivers/bus/pci/rte_bus_pci_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_pci_dump;
diff --git a/drivers/bus/vdev/rte_bus_vdev_version.map b/drivers/bus/vdev/rte_bus_vdev_version.map
index 590cf9b437..5abb10ecb0 100644
--- a/drivers/bus/vdev/rte_bus_vdev_version.map
+++ b/drivers/bus/vdev/rte_bus_vdev_version.map
@@ -1,18 +1,12 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
+ rte_vdev_add_custom_scan;
rte_vdev_init;
rte_vdev_register;
+ rte_vdev_remove_custom_scan;
rte_vdev_uninit;
rte_vdev_unregister;
local: *;
};
-
-DPDK_18.02 {
- global:
-
- rte_vdev_add_custom_scan;
- rte_vdev_remove_custom_scan;
-
-} DPDK_17.11;
diff --git a/drivers/bus/vmbus/rte_bus_vmbus_version.map b/drivers/bus/vmbus/rte_bus_vmbus_version.map
index ae231ad329..cbaaebc06c 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus_version.map
+++ b/drivers/bus/vmbus/rte_bus_vmbus_version.map
@@ -1,6 +1,4 @@
-/* SPDX-License-Identifier: BSD-3-Clause */
-
-DPDK_18.08 {
+DPDK_20.0 {
global:
rte_vmbus_chan_close;
@@ -20,6 +18,7 @@ DPDK_18.08 {
rte_vmbus_probe;
rte_vmbus_register;
rte_vmbus_scan;
+ rte_vmbus_set_latency;
rte_vmbus_sub_channel_index;
rte_vmbus_subchan_open;
rte_vmbus_unmap_device;
@@ -27,10 +26,3 @@ DPDK_18.08 {
local: *;
};
-
-DPDK_18.11 {
- global:
-
- rte_vmbus_set_latency;
-
-} DPDK_18.08;
diff --git a/drivers/common/cpt/rte_common_cpt_version.map b/drivers/common/cpt/rte_common_cpt_version.map
index dec614f0de..79fa5751bc 100644
--- a/drivers/common/cpt/rte_common_cpt_version.map
+++ b/drivers/common/cpt/rte_common_cpt_version.map
@@ -1,6 +1,8 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
cpt_pmd_ops_helper_get_mlen_direct_mode;
cpt_pmd_ops_helper_get_mlen_sg_mode;
+
+ local: *;
};
diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
index 8131c9e305..45d62aea9d 100644
--- a/drivers/common/dpaax/rte_common_dpaax_version.map
+++ b/drivers/common/dpaax/rte_common_dpaax_version.map
@@ -1,11 +1,11 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
- dpaax_iova_table_update;
dpaax_iova_table_depopulate;
dpaax_iova_table_dump;
dpaax_iova_table_p;
dpaax_iova_table_populate;
+ dpaax_iova_table_update;
local: *;
};
diff --git a/drivers/common/mvep/rte_common_mvep_version.map b/drivers/common/mvep/rte_common_mvep_version.map
index c71722d79f..030928439d 100644
--- a/drivers/common/mvep/rte_common_mvep_version.map
+++ b/drivers/common/mvep/rte_common_mvep_version.map
@@ -1,6 +1,8 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
- rte_mvep_init;
rte_mvep_deinit;
+ rte_mvep_init;
+
+ local: *;
};
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index a9b3cff9bc..c15fb89112 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,8 +1,10 @@
-DPDK_18.05 {
+DPDK_20.0 {
global:
octeontx_logtype_mbox;
+ octeontx_mbox_send;
octeontx_mbox_set_ram_mbox_base;
octeontx_mbox_set_reg;
- octeontx_mbox_send;
+
+ local: *;
};
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 4400120da0..adad21a2d6 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -1,39 +1,35 @@
-DPDK_19.08 {
+DPDK_20.0 {
global:
otx2_dev_active_vfs;
otx2_dev_fini;
otx2_dev_priv_init;
-
+ otx2_disable_irqs;
+ otx2_intra_dev_get_cfg;
otx2_logtype_base;
otx2_logtype_dpi;
otx2_logtype_mbox;
+ otx2_logtype_nix;
otx2_logtype_npa;
otx2_logtype_npc;
- otx2_logtype_nix;
otx2_logtype_sso;
- otx2_logtype_tm;
otx2_logtype_tim;
-
+ otx2_logtype_tm;
otx2_mbox_alloc_msg_rsp;
otx2_mbox_get_rsp;
otx2_mbox_get_rsp_tmo;
otx2_mbox_id2name;
otx2_mbox_msg_send;
otx2_mbox_wait_for_rsp;
-
- otx2_intra_dev_get_cfg;
otx2_npa_lf_active;
otx2_npa_lf_obj_get;
otx2_npa_lf_obj_ref;
otx2_npa_pf_func_get;
otx2_npa_set_defaults;
+ otx2_register_irq;
otx2_sso_pf_func_get;
otx2_sso_pf_func_set;
-
- otx2_disable_irqs;
otx2_unregister_irq;
- otx2_register_irq;
local: *;
};
diff --git a/drivers/compress/isal/rte_pmd_isal_version.map b/drivers/compress/isal/rte_pmd_isal_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/compress/isal/rte_pmd_isal_version.map
+++ b/drivers/compress/isal/rte_pmd_isal_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map b/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
+++ b/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/qat/rte_pmd_qat_version.map b/drivers/compress/qat/rte_pmd_qat_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/qat/rte_pmd_qat_version.map
+++ b/drivers/compress/qat/rte_pmd_qat_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/zlib/rte_pmd_zlib_version.map b/drivers/compress/zlib/rte_pmd_zlib_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/zlib/rte_pmd_zlib_version.map
+++ b/drivers/compress/zlib/rte_pmd_zlib_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map b/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
+++ b/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/armv8/rte_pmd_armv8_version.map b/drivers/crypto/armv8/rte_pmd_armv8_version.map
index 1f84b68a83..f9f17e4f6e 100644
--- a/drivers/crypto/armv8/rte_pmd_armv8_version.map
+++ b/drivers/crypto/armv8/rte_pmd_armv8_version.map
@@ -1,3 +1,3 @@
-DPDK_17.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map b/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
+++ b/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/ccp/rte_pmd_ccp_version.map b/drivers/crypto/ccp/rte_pmd_ccp_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/crypto/ccp/rte_pmd_ccp_version.map
+++ b/drivers/crypto/ccp/rte_pmd_ccp_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 0bfb986d0b..5952d645fd 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,12 +1,8 @@
-DPDK_17.05 {
-
- local: *;
-};
-
-DPDK_18.11 {
+DPDK_20.0 {
global:
dpaa2_sec_eventq_attach;
dpaa2_sec_eventq_detach;
-} DPDK_17.05;
+ local: *;
+};
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index cc7f2162e0..8580fa13db 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -1,12 +1,8 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_19.11 {
+DPDK_20.0 {
global:
dpaa_sec_eventq_attach;
dpaa_sec_eventq_detach;
-} DPDK_17.11;
+ local: *;
+};
diff --git a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
index 8ffeca934e..f9f17e4f6e 100644
--- a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
+++ b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
@@ -1,3 +1,3 @@
-DPDK_16.07 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/mvsam/rte_pmd_mvsam_version.map b/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
+++ b/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/nitrox/rte_pmd_nitrox_version.map b/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
index 406964d1fc..f9f17e4f6e 100644
--- a/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
+++ b/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
@@ -1,3 +1,3 @@
-DPDK_19.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/null/rte_pmd_null_crypto_version.map b/drivers/crypto/null/rte_pmd_null_crypto_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/null/rte_pmd_null_crypto_version.map
+++ b/drivers/crypto/null/rte_pmd_null_crypto_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map b/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
+++ b/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/openssl/rte_pmd_openssl_version.map b/drivers/crypto/openssl/rte_pmd_openssl_version.map
index cc5829e30b..f9f17e4f6e 100644
--- a/drivers/crypto/openssl/rte_pmd_openssl_version.map
+++ b/drivers/crypto/openssl/rte_pmd_openssl_version.map
@@ -1,3 +1,3 @@
-DPDK_16.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
index 5c43127cf2..077afedce7 100644
--- a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -1,21 +1,16 @@
-DPDK_17.02 {
+DPDK_20.0 {
global:
rte_cryptodev_scheduler_load_user_scheduler;
- rte_cryptodev_scheduler_slave_attach;
- rte_cryptodev_scheduler_slave_detach;
- rte_cryptodev_scheduler_ordering_set;
- rte_cryptodev_scheduler_ordering_get;
-
-};
-
-DPDK_17.05 {
- global:
-
rte_cryptodev_scheduler_mode_get;
rte_cryptodev_scheduler_mode_set;
rte_cryptodev_scheduler_option_get;
rte_cryptodev_scheduler_option_set;
+ rte_cryptodev_scheduler_ordering_get;
+ rte_cryptodev_scheduler_ordering_set;
+ rte_cryptodev_scheduler_slave_attach;
+ rte_cryptodev_scheduler_slave_detach;
rte_cryptodev_scheduler_slaves_get;
-} DPDK_17.02;
+ local: *;
+};
diff --git a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
+++ b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map b/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
+++ b/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/zuc/rte_pmd_zuc_version.map b/drivers/crypto/zuc/rte_pmd_zuc_version.map
index cc5829e30b..f9f17e4f6e 100644
--- a/drivers/crypto/zuc/rte_pmd_zuc_version.map
+++ b/drivers/crypto/zuc/rte_pmd_zuc_version.map
@@ -1,3 +1,3 @@
-DPDK_16.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
+++ b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map b/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
index 1c0b7559dc..f9f17e4f6e 100644
--- a/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
+++ b/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dsw/rte_pmd_dsw_event_version.map b/drivers/event/dsw/rte_pmd_dsw_event_version.map
index 24bd5cdb35..f9f17e4f6e 100644
--- a/drivers/event/dsw/rte_pmd_dsw_event_version.map
+++ b/drivers/event/dsw/rte_pmd_dsw_event_version.map
@@ -1,3 +1,3 @@
-DPDK_18.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/octeontx/rte_pmd_octeontx_event_version.map b/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
+++ b/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
index 41c65c8c9c..f9f17e4f6e 100644
--- a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
+++ b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
+DPDK_20.0 {
local: *;
};
-
diff --git a/drivers/event/opdl/rte_pmd_opdl_event_version.map b/drivers/event/opdl/rte_pmd_opdl_event_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/event/opdl/rte_pmd_opdl_event_version.map
+++ b/drivers/event/opdl/rte_pmd_opdl_event_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
+++ b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/sw/rte_pmd_sw_event_version.map b/drivers/event/sw/rte_pmd_sw_event_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/event/sw/rte_pmd_sw_event_version.map
+++ b/drivers/event/sw/rte_pmd_sw_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket_version.map
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
index 60bf50b2d1..9eebaf7ffd 100644
--- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_dpaa_bpid_info;
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
index b45e7a9ac1..cd4bc88273 100644
--- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -1,16 +1,10 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_dpaa2_bpid_info;
rte_dpaa2_mbuf_alloc_bulk;
-
- local: *;
-};
-
-DPDK_18.05 {
- global:
-
rte_dpaa2_mbuf_from_buf_addr;
rte_dpaa2_mbuf_pool_bpid;
-} DPDK_17.05;
+ local: *;
+};
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
index d703368c31..d4f81aed8e 100644
--- a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
+++ b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
@@ -1,8 +1,8 @@
-DPDK_19.08 {
+DPDK_20.0 {
global:
- otx2_npa_lf_init;
otx2_npa_lf_fini;
+ otx2_npa_lf_init;
local: *;
};
diff --git a/drivers/mempool/ring/rte_mempool_ring_version.map b/drivers/mempool/ring/rte_mempool_ring_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/mempool/ring/rte_mempool_ring_version.map
+++ b/drivers/mempool/ring/rte_mempool_ring_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/stack/rte_mempool_stack_version.map b/drivers/mempool/stack/rte_mempool_stack_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/mempool/stack/rte_mempool_stack_version.map
+++ b/drivers/mempool/stack/rte_mempool_stack_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/af_packet/rte_pmd_af_packet_version.map b/drivers/net/af_packet/rte_pmd_af_packet_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/af_packet/rte_pmd_af_packet_version.map
+++ b/drivers/net/af_packet/rte_pmd_af_packet_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/af_xdp/rte_pmd_af_xdp_version.map b/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
index c6db030fe6..f9f17e4f6e 100644
--- a/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
+++ b/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
@@ -1,3 +1,3 @@
-DPDK_19.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ark/rte_pmd_ark_version.map b/drivers/net/ark/rte_pmd_ark_version.map
index 1062e0429f..f9f17e4f6e 100644
--- a/drivers/net/ark/rte_pmd_ark_version.map
+++ b/drivers/net/ark/rte_pmd_ark_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
- local: *;
-
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/atlantic/rte_pmd_atlantic_version.map b/drivers/net/atlantic/rte_pmd_atlantic_version.map
index b16faa999f..9b04838d84 100644
--- a/drivers/net/atlantic/rte_pmd_atlantic_version.map
+++ b/drivers/net/atlantic/rte_pmd_atlantic_version.map
@@ -1,5 +1,4 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
@@ -13,4 +12,3 @@ EXPERIMENTAL {
rte_pmd_atl_macsec_select_txsa;
rte_pmd_atl_macsec_select_rxsa;
};
-
diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/net/avp/rte_pmd_avp_version.map
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/axgbe/rte_pmd_axgbe_version.map b/drivers/net/axgbe/rte_pmd_axgbe_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/net/axgbe/rte_pmd_axgbe_version.map
+++ b/drivers/net/axgbe/rte_pmd_axgbe_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/bnx2x/rte_pmd_bnx2x_version.map b/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
index bd8138a034..f9f17e4f6e 100644
--- a/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
+++ b/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
@@ -1,4 +1,3 @@
-DPDK_2.1 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/bnxt/rte_pmd_bnxt_version.map b/drivers/net/bnxt/rte_pmd_bnxt_version.map
index 4750d40ad6..bb52562347 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt_version.map
+++ b/drivers/net/bnxt/rte_pmd_bnxt_version.map
@@ -1,4 +1,4 @@
-DPDK_17.08 {
+DPDK_20.0 {
global:
rte_pmd_bnxt_get_vf_rx_status;
@@ -10,13 +10,13 @@ DPDK_17.08 {
rte_pmd_bnxt_set_tx_loopback;
rte_pmd_bnxt_set_vf_mac_addr;
rte_pmd_bnxt_set_vf_mac_anti_spoof;
+ rte_pmd_bnxt_set_vf_persist_stats;
rte_pmd_bnxt_set_vf_rate_limit;
rte_pmd_bnxt_set_vf_rxmode;
rte_pmd_bnxt_set_vf_vlan_anti_spoof;
rte_pmd_bnxt_set_vf_vlan_filter;
rte_pmd_bnxt_set_vf_vlan_insert;
rte_pmd_bnxt_set_vf_vlan_stripq;
- rte_pmd_bnxt_set_vf_persist_stats;
local: *;
};
diff --git a/drivers/net/bonding/rte_pmd_bond_version.map b/drivers/net/bonding/rte_pmd_bond_version.map
index 00d955c481..270c7d5d55 100644
--- a/drivers/net/bonding/rte_pmd_bond_version.map
+++ b/drivers/net/bonding/rte_pmd_bond_version.map
@@ -1,9 +1,21 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_eth_bond_8023ad_agg_selection_get;
+ rte_eth_bond_8023ad_agg_selection_set;
+ rte_eth_bond_8023ad_conf_get;
+ rte_eth_bond_8023ad_dedicated_queues_disable;
+ rte_eth_bond_8023ad_dedicated_queues_enable;
+ rte_eth_bond_8023ad_ext_collect;
+ rte_eth_bond_8023ad_ext_collect_get;
+ rte_eth_bond_8023ad_ext_distrib;
+ rte_eth_bond_8023ad_ext_distrib_get;
+ rte_eth_bond_8023ad_ext_slowtx;
+ rte_eth_bond_8023ad_setup;
rte_eth_bond_8023ad_slave_info;
rte_eth_bond_active_slaves_get;
rte_eth_bond_create;
+ rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
rte_eth_bond_mac_address_reset;
rte_eth_bond_mac_address_set;
@@ -19,36 +31,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_2.1 {
- global:
-
- rte_eth_bond_free;
-
-} DPDK_2.0;
-
-DPDK_16.04 {
-};
-
-DPDK_16.07 {
- global:
-
- rte_eth_bond_8023ad_ext_collect;
- rte_eth_bond_8023ad_ext_collect_get;
- rte_eth_bond_8023ad_ext_distrib;
- rte_eth_bond_8023ad_ext_distrib_get;
- rte_eth_bond_8023ad_ext_slowtx;
-
-} DPDK_16.04;
-
-DPDK_17.08 {
- global:
-
- rte_eth_bond_8023ad_dedicated_queues_enable;
- rte_eth_bond_8023ad_dedicated_queues_disable;
- rte_eth_bond_8023ad_agg_selection_get;
- rte_eth_bond_8023ad_agg_selection_set;
- rte_eth_bond_8023ad_conf_get;
- rte_eth_bond_8023ad_setup;
-
-} DPDK_16.07;
diff --git a/drivers/net/cxgbe/rte_pmd_cxgbe_version.map b/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
index bd8138a034..f9f17e4f6e 100644
--- a/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
+++ b/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
@@ -1,4 +1,3 @@
-DPDK_2.1 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index 8cb4500b51..f403a1526d 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -1,12 +1,9 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_18.08 {
+DPDK_20.0 {
global:
dpaa_eth_eventq_attach;
dpaa_eth_eventq_detach;
rte_pmd_dpaa_set_tx_loopback;
-} DPDK_17.11;
+
+ local: *;
+};
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
index d1b4cdb232..f2bb793319 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
@@ -1,15 +1,11 @@
-DPDK_17.05 {
-
- local: *;
-};
-
-DPDK_17.11 {
+DPDK_20.0 {
global:
dpaa2_eth_eventq_attach;
dpaa2_eth_eventq_detach;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
@@ -17,4 +13,4 @@ EXPERIMENTAL {
rte_pmd_dpaa2_mux_flow_create;
rte_pmd_dpaa2_set_custom_hash;
rte_pmd_dpaa2_set_timestamp;
-} DPDK_17.11;
+};
diff --git a/drivers/net/e1000/rte_pmd_e1000_version.map b/drivers/net/e1000/rte_pmd_e1000_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/e1000/rte_pmd_e1000_version.map
+++ b/drivers/net/e1000/rte_pmd_e1000_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ena/rte_pmd_ena_version.map b/drivers/net/ena/rte_pmd_ena_version.map
index 349c6e1c22..f9f17e4f6e 100644
--- a/drivers/net/ena/rte_pmd_ena_version.map
+++ b/drivers/net/ena/rte_pmd_ena_version.map
@@ -1,4 +1,3 @@
-DPDK_16.04 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/enetc/rte_pmd_enetc_version.map b/drivers/net/enetc/rte_pmd_enetc_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/net/enetc/rte_pmd_enetc_version.map
+++ b/drivers/net/enetc/rte_pmd_enetc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/enic/rte_pmd_enic_version.map b/drivers/net/enic/rte_pmd_enic_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/enic/rte_pmd_enic_version.map
+++ b/drivers/net/enic/rte_pmd_enic_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/failsafe/rte_pmd_failsafe_version.map b/drivers/net/failsafe/rte_pmd_failsafe_version.map
index b6d2840be4..f9f17e4f6e 100644
--- a/drivers/net/failsafe/rte_pmd_failsafe_version.map
+++ b/drivers/net/failsafe/rte_pmd_failsafe_version.map
@@ -1,4 +1,3 @@
-DPDK_17.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/fm10k/rte_pmd_fm10k_version.map b/drivers/net/fm10k/rte_pmd_fm10k_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/fm10k/rte_pmd_fm10k_version.map
+++ b/drivers/net/fm10k/rte_pmd_fm10k_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/hinic/rte_pmd_hinic_version.map b/drivers/net/hinic/rte_pmd_hinic_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/net/hinic/rte_pmd_hinic_version.map
+++ b/drivers/net/hinic/rte_pmd_hinic_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map b/drivers/net/hns3/rte_pmd_hns3_version.map
index 35e5f2debb..f9f17e4f6e 100644
--- a/drivers/net/hns3/rte_pmd_hns3_version.map
+++ b/drivers/net/hns3/rte_pmd_hns3_version.map
@@ -1,3 +1,3 @@
-DPDK_19.11 {
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/i40e/rte_pmd_i40e_version.map b/drivers/net/i40e/rte_pmd_i40e_version.map
index cccd5768c2..a80e69b93e 100644
--- a/drivers/net/i40e/rte_pmd_i40e_version.map
+++ b/drivers/net/i40e/rte_pmd_i40e_version.map
@@ -1,23 +1,34 @@
-DPDK_2.0 {
-
- local: *;
-};
-
-DPDK_17.02 {
+DPDK_20.0 {
global:
+ rte_pmd_i40e_add_vf_mac_addr;
+ rte_pmd_i40e_flow_add_del_packet_template;
+ rte_pmd_i40e_flow_type_mapping_get;
+ rte_pmd_i40e_flow_type_mapping_reset;
+ rte_pmd_i40e_flow_type_mapping_update;
+ rte_pmd_i40e_get_ddp_info;
+ rte_pmd_i40e_get_ddp_list;
rte_pmd_i40e_get_vf_stats;
+ rte_pmd_i40e_inset_get;
+ rte_pmd_i40e_inset_set;
rte_pmd_i40e_ping_vfs;
+ rte_pmd_i40e_process_ddp_package;
rte_pmd_i40e_ptype_mapping_get;
rte_pmd_i40e_ptype_mapping_replace;
rte_pmd_i40e_ptype_mapping_reset;
rte_pmd_i40e_ptype_mapping_update;
+ rte_pmd_i40e_query_vfid_by_mac;
rte_pmd_i40e_reset_vf_stats;
+ rte_pmd_i40e_rss_queue_region_conf;
+ rte_pmd_i40e_set_tc_strict_prio;
rte_pmd_i40e_set_tx_loopback;
rte_pmd_i40e_set_vf_broadcast;
rte_pmd_i40e_set_vf_mac_addr;
rte_pmd_i40e_set_vf_mac_anti_spoof;
+ rte_pmd_i40e_set_vf_max_bw;
rte_pmd_i40e_set_vf_multicast_promisc;
+ rte_pmd_i40e_set_vf_tc_bw_alloc;
+ rte_pmd_i40e_set_vf_tc_max_bw;
rte_pmd_i40e_set_vf_unicast_promisc;
rte_pmd_i40e_set_vf_vlan_anti_spoof;
rte_pmd_i40e_set_vf_vlan_filter;
@@ -25,43 +36,5 @@ DPDK_17.02 {
rte_pmd_i40e_set_vf_vlan_stripq;
rte_pmd_i40e_set_vf_vlan_tag;
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_pmd_i40e_set_tc_strict_prio;
- rte_pmd_i40e_set_vf_max_bw;
- rte_pmd_i40e_set_vf_tc_bw_alloc;
- rte_pmd_i40e_set_vf_tc_max_bw;
- rte_pmd_i40e_process_ddp_package;
- rte_pmd_i40e_get_ddp_list;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_pmd_i40e_get_ddp_info;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_pmd_i40e_add_vf_mac_addr;
- rte_pmd_i40e_flow_add_del_packet_template;
- rte_pmd_i40e_flow_type_mapping_update;
- rte_pmd_i40e_flow_type_mapping_get;
- rte_pmd_i40e_flow_type_mapping_reset;
- rte_pmd_i40e_query_vfid_by_mac;
- rte_pmd_i40e_rss_queue_region_conf;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_pmd_i40e_inset_get;
- rte_pmd_i40e_inset_set;
-} DPDK_17.11;
\ No newline at end of file
+ local: *;
+};
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
index 7b23b609da..f9f17e4f6e 100644
--- a/drivers/net/ice/rte_pmd_ice_version.map
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -1,4 +1,3 @@
-DPDK_19.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ifc/rte_pmd_ifc_version.map b/drivers/net/ifc/rte_pmd_ifc_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/net/ifc/rte_pmd_ifc_version.map
+++ b/drivers/net/ifc/rte_pmd_ifc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
index fc8c95e919..f9f17e4f6e 100644
--- a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
+++ b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
@@ -1,4 +1,3 @@
-DPDK_19.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe_version.map b/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
index c814f96d72..21534dbc3d 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
@@ -1,57 +1,39 @@
-DPDK_2.0 {
-
- local: *;
-};
-
-DPDK_16.11 {
- global:
-
- rte_pmd_ixgbe_set_all_queues_drop_en;
- rte_pmd_ixgbe_set_tx_loopback;
- rte_pmd_ixgbe_set_vf_mac_addr;
- rte_pmd_ixgbe_set_vf_mac_anti_spoof;
- rte_pmd_ixgbe_set_vf_split_drop_en;
- rte_pmd_ixgbe_set_vf_vlan_anti_spoof;
- rte_pmd_ixgbe_set_vf_vlan_insert;
- rte_pmd_ixgbe_set_vf_vlan_stripq;
-} DPDK_2.0;
-
-DPDK_17.02 {
+DPDK_20.0 {
global:
+ rte_pmd_ixgbe_bypass_event_show;
+ rte_pmd_ixgbe_bypass_event_store;
+ rte_pmd_ixgbe_bypass_init;
+ rte_pmd_ixgbe_bypass_state_set;
+ rte_pmd_ixgbe_bypass_state_show;
+ rte_pmd_ixgbe_bypass_ver_show;
+ rte_pmd_ixgbe_bypass_wd_reset;
+ rte_pmd_ixgbe_bypass_wd_timeout_show;
+ rte_pmd_ixgbe_bypass_wd_timeout_store;
rte_pmd_ixgbe_macsec_config_rxsc;
rte_pmd_ixgbe_macsec_config_txsc;
rte_pmd_ixgbe_macsec_disable;
rte_pmd_ixgbe_macsec_enable;
rte_pmd_ixgbe_macsec_select_rxsa;
rte_pmd_ixgbe_macsec_select_txsa;
+ rte_pmd_ixgbe_ping_vf;
+ rte_pmd_ixgbe_set_all_queues_drop_en;
+ rte_pmd_ixgbe_set_tc_bw_alloc;
+ rte_pmd_ixgbe_set_tx_loopback;
+ rte_pmd_ixgbe_set_vf_mac_addr;
+ rte_pmd_ixgbe_set_vf_mac_anti_spoof;
rte_pmd_ixgbe_set_vf_rate_limit;
rte_pmd_ixgbe_set_vf_rx;
rte_pmd_ixgbe_set_vf_rxmode;
+ rte_pmd_ixgbe_set_vf_split_drop_en;
rte_pmd_ixgbe_set_vf_tx;
+ rte_pmd_ixgbe_set_vf_vlan_anti_spoof;
rte_pmd_ixgbe_set_vf_vlan_filter;
-} DPDK_16.11;
+ rte_pmd_ixgbe_set_vf_vlan_insert;
+ rte_pmd_ixgbe_set_vf_vlan_stripq;
-DPDK_17.05 {
- global:
-
- rte_pmd_ixgbe_ping_vf;
- rte_pmd_ixgbe_set_tc_bw_alloc;
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_pmd_ixgbe_bypass_event_show;
- rte_pmd_ixgbe_bypass_event_store;
- rte_pmd_ixgbe_bypass_init;
- rte_pmd_ixgbe_bypass_state_set;
- rte_pmd_ixgbe_bypass_state_show;
- rte_pmd_ixgbe_bypass_ver_show;
- rte_pmd_ixgbe_bypass_wd_reset;
- rte_pmd_ixgbe_bypass_wd_timeout_show;
- rte_pmd_ixgbe_bypass_wd_timeout_store;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/drivers/net/kni/rte_pmd_kni_version.map b/drivers/net/kni/rte_pmd_kni_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/net/kni/rte_pmd_kni_version.map
+++ b/drivers/net/kni/rte_pmd_kni_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/liquidio/rte_pmd_liquidio_version.map b/drivers/net/liquidio/rte_pmd_liquidio_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/net/liquidio/rte_pmd_liquidio_version.map
+++ b/drivers/net/liquidio/rte_pmd_liquidio_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/memif/rte_pmd_memif_version.map b/drivers/net/memif/rte_pmd_memif_version.map
index 8861484fb3..f9f17e4f6e 100644
--- a/drivers/net/memif/rte_pmd_memif_version.map
+++ b/drivers/net/memif/rte_pmd_memif_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/mlx4/rte_pmd_mlx4_version.map b/drivers/net/mlx4/rte_pmd_mlx4_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/mlx4/rte_pmd_mlx4_version.map
+++ b/drivers/net/mlx4/rte_pmd_mlx4_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mlx5/rte_pmd_mlx5_version.map b/drivers/net/mlx5/rte_pmd_mlx5_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5_version.map
+++ b/drivers/net/mlx5/rte_pmd_mlx5_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mvneta/rte_pmd_mvneta_version.map b/drivers/net/mvneta/rte_pmd_mvneta_version.map
index 24bd5cdb35..f9f17e4f6e 100644
--- a/drivers/net/mvneta/rte_pmd_mvneta_version.map
+++ b/drivers/net/mvneta/rte_pmd_mvneta_version.map
@@ -1,3 +1,3 @@
-DPDK_18.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mvpp2/rte_pmd_mvpp2_version.map b/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
+++ b/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/netvsc/rte_pmd_netvsc_version.map b/drivers/net/netvsc/rte_pmd_netvsc_version.map
index d534019a6b..f9f17e4f6e 100644
--- a/drivers/net/netvsc/rte_pmd_netvsc_version.map
+++ b/drivers/net/netvsc/rte_pmd_netvsc_version.map
@@ -1,5 +1,3 @@
-/* SPDX-License-Identifier: BSD-3-Clause */
-
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/nfb/rte_pmd_nfb_version.map b/drivers/net/nfb/rte_pmd_nfb_version.map
index fc8c95e919..f9f17e4f6e 100644
--- a/drivers/net/nfb/rte_pmd_nfb_version.map
+++ b/drivers/net/nfb/rte_pmd_nfb_version.map
@@ -1,4 +1,3 @@
-DPDK_19.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/nfp/rte_pmd_nfp_version.map b/drivers/net/nfp/rte_pmd_nfp_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/nfp/rte_pmd_nfp_version.map
+++ b/drivers/net/nfp/rte_pmd_nfp_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/null/rte_pmd_null_version.map b/drivers/net/null/rte_pmd_null_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/null/rte_pmd_null_version.map
+++ b/drivers/net/null/rte_pmd_null_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/octeontx/rte_pmd_octeontx_version.map b/drivers/net/octeontx/rte_pmd_octeontx_version.map
index a3161b14d0..f7cae02fac 100644
--- a/drivers/net/octeontx/rte_pmd_octeontx_version.map
+++ b/drivers/net/octeontx/rte_pmd_octeontx_version.map
@@ -1,11 +1,7 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_18.02 {
+DPDK_20.0 {
global:
rte_octeontx_pchan_map;
-} DPDK_17.11;
+ local: *;
+};
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/pcap/rte_pmd_pcap_version.map b/drivers/net/pcap/rte_pmd_pcap_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/pcap/rte_pmd_pcap_version.map
+++ b/drivers/net/pcap/rte_pmd_pcap_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/qede/rte_pmd_qede_version.map b/drivers/net/qede/rte_pmd_qede_version.map
index 349c6e1c22..f9f17e4f6e 100644
--- a/drivers/net/qede/rte_pmd_qede_version.map
+++ b/drivers/net/qede/rte_pmd_qede_version.map
@@ -1,4 +1,3 @@
-DPDK_16.04 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ring/rte_pmd_ring_version.map b/drivers/net/ring/rte_pmd_ring_version.map
index 1f785d9409..ebb6be2733 100644
--- a/drivers/net/ring/rte_pmd_ring_version.map
+++ b/drivers/net/ring/rte_pmd_ring_version.map
@@ -1,14 +1,8 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_eth_from_ring;
rte_eth_from_rings;
local: *;
};
-
-DPDK_2.2 {
- global:
-
- rte_eth_from_ring;
-
-} DPDK_2.0;
diff --git a/drivers/net/sfc/rte_pmd_sfc_version.map b/drivers/net/sfc/rte_pmd_sfc_version.map
index 31eca32ebe..f9f17e4f6e 100644
--- a/drivers/net/sfc/rte_pmd_sfc_version.map
+++ b/drivers/net/sfc/rte_pmd_sfc_version.map
@@ -1,4 +1,3 @@
-DPDK_17.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/softnic/rte_pmd_softnic_version.map b/drivers/net/softnic/rte_pmd_softnic_version.map
index bc44b06f98..50f113d5a2 100644
--- a/drivers/net/softnic/rte_pmd_softnic_version.map
+++ b/drivers/net/softnic/rte_pmd_softnic_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_pmd_softnic_run;
diff --git a/drivers/net/szedata2/rte_pmd_szedata2_version.map b/drivers/net/szedata2/rte_pmd_szedata2_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/szedata2/rte_pmd_szedata2_version.map
+++ b/drivers/net/szedata2/rte_pmd_szedata2_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/tap/rte_pmd_tap_version.map b/drivers/net/tap/rte_pmd_tap_version.map
index 31eca32ebe..f9f17e4f6e 100644
--- a/drivers/net/tap/rte_pmd_tap_version.map
+++ b/drivers/net/tap/rte_pmd_tap_version.map
@@ -1,4 +1,3 @@
-DPDK_17.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_version.map b/drivers/net/thunderx/rte_pmd_thunderx_version.map
index 1901bcb3b3..f9f17e4f6e 100644
--- a/drivers/net/thunderx/rte_pmd_thunderx_version.map
+++ b/drivers/net/thunderx/rte_pmd_thunderx_version.map
@@ -1,4 +1,3 @@
-DPDK_16.07 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map b/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
+++ b/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vhost/rte_pmd_vhost_version.map b/drivers/net/vhost/rte_pmd_vhost_version.map
index 695db85749..16b591ccc4 100644
--- a/drivers/net/vhost/rte_pmd_vhost_version.map
+++ b/drivers/net/vhost/rte_pmd_vhost_version.map
@@ -1,13 +1,8 @@
-DPDK_16.04 {
+DPDK_20.0 {
global:
rte_eth_vhost_get_queue_event;
-
- local: *;
-};
-
-DPDK_16.11 {
- global:
-
rte_eth_vhost_get_vid_from_port_id;
+
+ local: *;
};
diff --git a/drivers/net/virtio/rte_pmd_virtio_version.map b/drivers/net/virtio/rte_pmd_virtio_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/virtio/rte_pmd_virtio_version.map
+++ b/drivers/net/virtio/rte_pmd_virtio_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map b/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
+++ b/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map b/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
+++ b/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map b/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
index d16a136fc8..ca6a0d7626 100644
--- a/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
+++ b/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
@@ -1,4 +1,4 @@
-DPDK_19.05 {
+DPDK_20.0 {
global:
rte_qdma_attr_get;
@@ -9,9 +9,9 @@ DPDK_19.05 {
rte_qdma_start;
rte_qdma_stop;
rte_qdma_vq_create;
- rte_qdma_vq_destroy;
rte_qdma_vq_dequeue;
rte_qdma_vq_dequeue_multi;
+ rte_qdma_vq_destroy;
rte_qdma_vq_enqueue;
rte_qdma_vq_enqueue_multi;
rte_qdma_vq_stats;
diff --git a/drivers/raw/ifpga/rte_rawdev_ifpga_version.map b/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
+++ b/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/ioat/rte_rawdev_ioat_version.map b/drivers/raw/ioat/rte_rawdev_ioat_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/raw/ioat/rte_rawdev_ioat_version.map
+++ b/drivers/raw/ioat/rte_rawdev_ioat_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/ntb/rte_rawdev_ntb_version.map b/drivers/raw/ntb/rte_rawdev_ntb_version.map
index 8861484fb3..f9f17e4f6e 100644
--- a/drivers/raw/ntb/rte_rawdev_ntb_version.map
+++ b/drivers/raw/ntb/rte_rawdev_ntb_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map b/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
+++ b/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/skeleton/rte_rawdev_skeleton_version.map b/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
+++ b/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
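[Editor's aside, not part of the patch: the files rewritten throughout this diff are GNU linker version scripts. The change collapses the historical per-release nodes (`DPDK_2.0`, `DPDK_17.05`, ...) into a single `DPDK_20.0` node while keeping `local: *;` so unlisted symbols stay hidden. A minimal sketch of the resulting shape, with a made-up symbol name for illustration:]

```shell
# Sketch of a consolidated version script as produced by this patch.
# The symbol name is illustrative; real maps list the component's public API.
cat > /tmp/rte_pmd_example_version.map <<'EOF'
DPDK_20.0 {
	global:

	rte_pmd_example_do_thing;

	local: *;
};
EOF

# After consolidation a map carries exactly one stable version node;
# grep counts the DPDK_* node headers to confirm none of the old
# per-release nodes remain.
grep -c '^DPDK_' /tmp/rte_pmd_example_version.map
```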
diff --git a/lib/librte_acl/rte_acl_version.map b/lib/librte_acl/rte_acl_version.map
index b09370a104..c3daca8115 100644
--- a/lib/librte_acl/rte_acl_version.map
+++ b/lib/librte_acl/rte_acl_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_acl_add_rules;
diff --git a/lib/librte_bbdev/rte_bbdev_version.map b/lib/librte_bbdev/rte_bbdev_version.map
index 3624eb1cb4..45b560dbe7 100644
--- a/lib/librte_bbdev/rte_bbdev_version.map
+++ b/lib/librte_bbdev/rte_bbdev_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_bitratestats/rte_bitratestats_version.map b/lib/librte_bitratestats/rte_bitratestats_version.map
index fe7454452d..88fc2912db 100644
--- a/lib/librte_bitratestats/rte_bitratestats_version.map
+++ b/lib/librte_bitratestats/rte_bitratestats_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_stats_bitrate_calc;
diff --git a/lib/librte_bpf/rte_bpf_version.map b/lib/librte_bpf/rte_bpf_version.map
index a203e088ea..e1ec43faa0 100644
--- a/lib/librte_bpf/rte_bpf_version.map
+++ b/lib/librte_bpf/rte_bpf_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_cfgfile/rte_cfgfile_version.map b/lib/librte_cfgfile/rte_cfgfile_version.map
index a0a11cea8d..906eee96bf 100644
--- a/lib/librte_cfgfile/rte_cfgfile_version.map
+++ b/lib/librte_cfgfile/rte_cfgfile_version.map
@@ -1,40 +1,22 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_cfgfile_add_entry;
+ rte_cfgfile_add_section;
rte_cfgfile_close;
+ rte_cfgfile_create;
rte_cfgfile_get_entry;
rte_cfgfile_has_entry;
rte_cfgfile_has_section;
rte_cfgfile_load;
+ rte_cfgfile_load_with_params;
rte_cfgfile_num_sections;
+ rte_cfgfile_save;
rte_cfgfile_section_entries;
+ rte_cfgfile_section_entries_by_index;
rte_cfgfile_section_num_entries;
rte_cfgfile_sections;
+ rte_cfgfile_set_entry;
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_cfgfile_section_entries_by_index;
-
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_cfgfile_load_with_params;
-
-} DPDK_16.04;
-
-DPDK_17.11 {
- global:
-
- rte_cfgfile_add_entry;
- rte_cfgfile_add_section;
- rte_cfgfile_create;
- rte_cfgfile_save;
- rte_cfgfile_set_entry;
-
-} DPDK_17.05;
diff --git a/lib/librte_cmdline/rte_cmdline_version.map b/lib/librte_cmdline/rte_cmdline_version.map
index 04bcb387f2..95fce812ff 100644
--- a/lib/librte_cmdline/rte_cmdline_version.map
+++ b/lib/librte_cmdline/rte_cmdline_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
cirbuf_add_buf_head;
@@ -40,6 +40,7 @@ DPDK_2.0 {
cmdline_parse_num;
cmdline_parse_portlist;
cmdline_parse_string;
+ cmdline_poll;
cmdline_printf;
cmdline_quit;
cmdline_set_prompt;
@@ -68,10 +69,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_2.1 {
- global:
-
- cmdline_poll;
-
-} DPDK_2.0;
diff --git a/lib/librte_compressdev/rte_compressdev_version.map b/lib/librte_compressdev/rte_compressdev_version.map
index e2a108b650..cfcd50ac1c 100644
--- a/lib/librte_compressdev/rte_compressdev_version.map
+++ b/lib/librte_compressdev/rte_compressdev_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 3deb265ac2..1dd1e259a0 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -1,92 +1,62 @@
-DPDK_16.04 {
+DPDK_20.0 {
global:
- rte_cryptodevs;
+ rte_crypto_aead_algorithm_strings;
+ rte_crypto_aead_operation_strings;
+ rte_crypto_auth_algorithm_strings;
+ rte_crypto_auth_operation_strings;
+ rte_crypto_cipher_algorithm_strings;
+ rte_crypto_cipher_operation_strings;
+ rte_crypto_op_pool_create;
+ rte_cryptodev_allocate_driver;
rte_cryptodev_callback_register;
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
- rte_cryptodev_count;
rte_cryptodev_configure;
+ rte_cryptodev_count;
+ rte_cryptodev_device_count_by_driver;
+ rte_cryptodev_devices_get;
+ rte_cryptodev_driver_id_get;
+ rte_cryptodev_driver_name_get;
+ rte_cryptodev_get_aead_algo_enum;
+ rte_cryptodev_get_auth_algo_enum;
+ rte_cryptodev_get_cipher_algo_enum;
rte_cryptodev_get_dev_id;
rte_cryptodev_get_feature_name;
+ rte_cryptodev_get_sec_ctx;
rte_cryptodev_info_get;
+ rte_cryptodev_name_get;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
+ rte_cryptodev_pmd_create;
+ rte_cryptodev_pmd_create_dev_name;
+ rte_cryptodev_pmd_destroy;
+ rte_cryptodev_pmd_get_dev;
+ rte_cryptodev_pmd_get_named_dev;
+ rte_cryptodev_pmd_is_valid_dev;
+ rte_cryptodev_pmd_parse_input_args;
rte_cryptodev_pmd_release_device;
- rte_cryptodev_sym_session_create;
- rte_cryptodev_sym_session_free;
+ rte_cryptodev_queue_pair_count;
+ rte_cryptodev_queue_pair_setup;
rte_cryptodev_socket_id;
rte_cryptodev_start;
rte_cryptodev_stats_get;
rte_cryptodev_stats_reset;
rte_cryptodev_stop;
- rte_cryptodev_queue_pair_count;
- rte_cryptodev_queue_pair_setup;
- rte_crypto_op_pool_create;
-
- local: *;
-};
-
-DPDK_17.02 {
- global:
-
- rte_cryptodev_devices_get;
- rte_cryptodev_pmd_create_dev_name;
- rte_cryptodev_pmd_get_dev;
- rte_cryptodev_pmd_get_named_dev;
- rte_cryptodev_pmd_is_valid_dev;
+ rte_cryptodev_sym_capability_check_aead;
rte_cryptodev_sym_capability_check_auth;
rte_cryptodev_sym_capability_check_cipher;
rte_cryptodev_sym_capability_get;
- rte_crypto_auth_algorithm_strings;
- rte_crypto_auth_operation_strings;
- rte_crypto_cipher_algorithm_strings;
- rte_crypto_cipher_operation_strings;
-
-} DPDK_16.04;
-
-DPDK_17.05 {
- global:
-
- rte_cryptodev_get_auth_algo_enum;
- rte_cryptodev_get_cipher_algo_enum;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_cryptodev_allocate_driver;
- rte_cryptodev_device_count_by_driver;
- rte_cryptodev_driver_id_get;
- rte_cryptodev_driver_name_get;
- rte_cryptodev_get_aead_algo_enum;
- rte_cryptodev_sym_capability_check_aead;
- rte_cryptodev_sym_session_init;
- rte_cryptodev_sym_session_clear;
- rte_crypto_aead_algorithm_strings;
- rte_crypto_aead_operation_strings;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_cryptodev_get_sec_ctx;
- rte_cryptodev_name_get;
- rte_cryptodev_pmd_create;
- rte_cryptodev_pmd_destroy;
- rte_cryptodev_pmd_parse_input_args;
-
-} DPDK_17.08;
-
-DPDK_18.05 {
- global:
-
rte_cryptodev_sym_get_header_session_size;
rte_cryptodev_sym_get_private_session_size;
+ rte_cryptodev_sym_session_clear;
+ rte_cryptodev_sym_session_create;
+ rte_cryptodev_sym_session_free;
+ rte_cryptodev_sym_session_init;
+ rte_cryptodevs;
-} DPDK_17.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_distributor/rte_distributor_version.map b/lib/librte_distributor/rte_distributor_version.map
index 00e26b4804..1b7c643005 100644
--- a/lib/librte_distributor/rte_distributor_version.map
+++ b/lib/librte_distributor/rte_distributor_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_distributor_clear_returns;
@@ -10,4 +10,6 @@ DPDK_17.05 {
rte_distributor_request_pkt;
rte_distributor_return_pkt;
rte_distributor_returned_pkts;
+
+ local: *;
};
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 7cbf82d37b..8c41999317 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
__rte_panic;
@@ -7,46 +7,111 @@ DPDK_2.0 {
lcore_config;
per_lcore__lcore_id;
per_lcore__rte_errno;
+ rte_bus_dump;
+ rte_bus_find;
+ rte_bus_find_by_device;
+ rte_bus_find_by_name;
+ rte_bus_get_iommu_class;
+ rte_bus_probe;
+ rte_bus_register;
+ rte_bus_scan;
+ rte_bus_unregister;
rte_calloc;
rte_calloc_socket;
rte_cpu_check_supported;
rte_cpu_get_flag_enabled;
+ rte_cpu_get_flag_name;
+ rte_cpu_is_supported;
+ rte_ctrl_thread_create;
rte_cycles_vmware_tsc_map;
rte_delay_us;
+ rte_delay_us_block;
+ rte_delay_us_callback_register;
+ rte_dev_is_probed;
+ rte_dev_probe;
+ rte_dev_remove;
+ rte_devargs_add;
+ rte_devargs_dump;
+ rte_devargs_insert;
+ rte_devargs_next;
+ rte_devargs_parse;
+ rte_devargs_parsef;
+ rte_devargs_remove;
+ rte_devargs_type_count;
rte_dump_physmem_layout;
rte_dump_registers;
rte_dump_stack;
rte_dump_tailq;
rte_eal_alarm_cancel;
rte_eal_alarm_set;
+ rte_eal_cleanup;
+ rte_eal_create_uio_dev;
rte_eal_get_configuration;
rte_eal_get_lcore_state;
rte_eal_get_physmem_size;
+ rte_eal_get_runtime_dir;
rte_eal_has_hugepages;
+ rte_eal_has_pci;
+ rte_eal_hotplug_add;
+ rte_eal_hotplug_remove;
rte_eal_hpet_init;
rte_eal_init;
rte_eal_iopl_init;
+ rte_eal_iova_mode;
rte_eal_lcore_role;
+ rte_eal_mbuf_user_pool_ops;
rte_eal_mp_remote_launch;
rte_eal_mp_wait_lcore;
+ rte_eal_primary_proc_alive;
rte_eal_process_type;
rte_eal_remote_launch;
rte_eal_tailq_lookup;
rte_eal_tailq_register;
+ rte_eal_using_phys_addrs;
+ rte_eal_vfio_intr_mode;
rte_eal_wait_lcore;
+ rte_epoll_ctl;
+ rte_epoll_wait;
rte_exit;
rte_free;
rte_get_hpet_cycles;
rte_get_hpet_hz;
rte_get_tsc_hz;
rte_hexdump;
+ rte_hypervisor_get;
+ rte_hypervisor_get_name;
+ rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
+ rte_intr_cap_multiple;
rte_intr_disable;
+ rte_intr_dp_is_en;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
rte_intr_enable;
+ rte_intr_free_epoll_fd;
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_keepalive_create;
+ rte_keepalive_dispatch_pings;
+ rte_keepalive_mark_alive;
+ rte_keepalive_mark_sleep;
+ rte_keepalive_register_core;
+ rte_keepalive_register_relay_callback;
+ rte_lcore_has_role;
+ rte_lcore_index;
+ rte_lcore_to_socket_id;
rte_log;
rte_log_cur_msg_loglevel;
rte_log_cur_msg_logtype;
+ rte_log_dump;
+ rte_log_get_global_level;
+ rte_log_get_level;
+ rte_log_register;
+ rte_log_set_global_level;
+ rte_log_set_level;
+ rte_log_set_level_pattern;
+ rte_log_set_level_regexp;
rte_logs;
rte_malloc;
rte_malloc_dump_stats;
@@ -54,155 +119,38 @@ DPDK_2.0 {
rte_malloc_set_limit;
rte_malloc_socket;
rte_malloc_validate;
+ rte_malloc_virt2iova;
+ rte_mcfg_mem_read_lock;
+ rte_mcfg_mem_read_unlock;
+ rte_mcfg_mem_write_lock;
+ rte_mcfg_mem_write_unlock;
+ rte_mcfg_mempool_read_lock;
+ rte_mcfg_mempool_read_unlock;
+ rte_mcfg_mempool_write_lock;
+ rte_mcfg_mempool_write_unlock;
+ rte_mcfg_tailq_read_lock;
+ rte_mcfg_tailq_read_unlock;
+ rte_mcfg_tailq_write_lock;
+ rte_mcfg_tailq_write_unlock;
rte_mem_lock_page;
+ rte_mem_virt2iova;
rte_mem_virt2phy;
rte_memdump;
rte_memory_get_nchannel;
rte_memory_get_nrank;
rte_memzone_dump;
+ rte_memzone_free;
rte_memzone_lookup;
rte_memzone_reserve;
rte_memzone_reserve_aligned;
rte_memzone_reserve_bounded;
rte_memzone_walk;
rte_openlog_stream;
+ rte_rand;
rte_realloc;
- rte_set_application_usage_hook;
- rte_socket_id;
- rte_strerror;
- rte_strsplit;
- rte_sys_gettid;
- rte_thread_get_affinity;
- rte_thread_set_affinity;
- rte_vlog;
- rte_zmalloc;
- rte_zmalloc_socket;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_epoll_ctl;
- rte_epoll_wait;
- rte_intr_allow_others;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
- rte_memzone_free;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
- rte_intr_cap_multiple;
- rte_keepalive_create;
- rte_keepalive_dispatch_pings;
- rte_keepalive_mark_alive;
- rte_keepalive_register_core;
-
-} DPDK_2.1;
-
-DPDK_16.04 {
- global:
-
- rte_cpu_get_flag_name;
- rte_eal_primary_proc_alive;
-
-} DPDK_2.2;
-
-DPDK_16.07 {
- global:
-
- rte_keepalive_mark_sleep;
- rte_keepalive_register_relay_callback;
- rte_rtm_supported;
- rte_thread_setname;
-
-} DPDK_16.04;
-
-DPDK_16.11 {
- global:
-
- rte_delay_us_block;
- rte_delay_us_callback_register;
-
-} DPDK_16.07;
-
-DPDK_17.02 {
- global:
-
- rte_bus_dump;
- rte_bus_probe;
- rte_bus_register;
- rte_bus_scan;
- rte_bus_unregister;
-
-} DPDK_16.11;
-
-DPDK_17.05 {
- global:
-
- rte_cpu_is_supported;
- rte_intr_free_epoll_fd;
- rte_log_dump;
- rte_log_get_global_level;
- rte_log_register;
- rte_log_set_global_level;
- rte_log_set_level;
- rte_log_set_level_regexp;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_bus_find;
- rte_bus_find_by_device;
- rte_bus_find_by_name;
- rte_log_get_level;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_eal_create_uio_dev;
- rte_bus_get_iommu_class;
- rte_eal_has_pci;
- rte_eal_iova_mode;
- rte_eal_using_phys_addrs;
- rte_eal_vfio_intr_mode;
- rte_lcore_has_role;
- rte_malloc_virt2iova;
- rte_mem_virt2iova;
- rte_vfio_enable;
- rte_vfio_is_enabled;
- rte_vfio_noiommu_is_enabled;
- rte_vfio_release_device;
- rte_vfio_setup_device;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_hypervisor_get;
- rte_hypervisor_get_name;
- rte_vfio_clear_group;
rte_reciprocal_value;
rte_reciprocal_value_u64;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_log_set_level_pattern;
+ rte_rtm_supported;
rte_service_attr_get;
rte_service_attr_reset_all;
rte_service_component_register;
@@ -215,6 +163,8 @@ DPDK_18.05 {
rte_service_get_count;
rte_service_get_name;
rte_service_lcore_add;
+ rte_service_lcore_attr_get;
+ rte_service_lcore_attr_reset_all;
rte_service_lcore_count;
rte_service_lcore_count_services;
rte_service_lcore_del;
@@ -224,6 +174,7 @@ DPDK_18.05 {
rte_service_lcore_stop;
rte_service_map_lcore_get;
rte_service_map_lcore_set;
+ rte_service_may_be_active;
rte_service_probe_capability;
rte_service_run_iter_on_app_lcore;
rte_service_runstate_get;
@@ -231,17 +182,23 @@ DPDK_18.05 {
rte_service_set_runstate_mapped_check;
rte_service_set_stats_enable;
rte_service_start_with_defaults;
-
-} DPDK_18.02;
-
-DPDK_18.08 {
- global:
-
- rte_eal_mbuf_user_pool_ops;
+ rte_set_application_usage_hook;
+ rte_socket_count;
+ rte_socket_id;
+ rte_socket_id_by_idx;
+ rte_srand;
+ rte_strerror;
+ rte_strscpy;
+ rte_strsplit;
+ rte_sys_gettid;
+ rte_thread_get_affinity;
+ rte_thread_set_affinity;
+ rte_thread_setname;
rte_uuid_compare;
rte_uuid_is_null;
rte_uuid_parse;
rte_uuid_unparse;
+ rte_vfio_clear_group;
rte_vfio_container_create;
rte_vfio_container_destroy;
rte_vfio_container_dma_map;
@@ -250,67 +207,20 @@ DPDK_18.08 {
rte_vfio_container_group_unbind;
rte_vfio_dma_map;
rte_vfio_dma_unmap;
+ rte_vfio_enable;
rte_vfio_get_container_fd;
rte_vfio_get_group_fd;
rte_vfio_get_group_num;
-
-} DPDK_18.05;
-
-DPDK_18.11 {
- global:
-
- rte_dev_probe;
- rte_dev_remove;
- rte_eal_get_runtime_dir;
- rte_eal_hotplug_add;
- rte_eal_hotplug_remove;
- rte_strscpy;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- rte_ctrl_thread_create;
- rte_dev_is_probed;
- rte_devargs_add;
- rte_devargs_dump;
- rte_devargs_insert;
- rte_devargs_next;
- rte_devargs_parse;
- rte_devargs_parsef;
- rte_devargs_remove;
- rte_devargs_type_count;
- rte_eal_cleanup;
- rte_socket_count;
- rte_socket_id_by_idx;
-
-} DPDK_18.11;
-
-DPDK_19.08 {
- global:
-
- rte_lcore_index;
- rte_lcore_to_socket_id;
- rte_mcfg_mem_read_lock;
- rte_mcfg_mem_read_unlock;
- rte_mcfg_mem_write_lock;
- rte_mcfg_mem_write_unlock;
- rte_mcfg_mempool_read_lock;
- rte_mcfg_mempool_read_unlock;
- rte_mcfg_mempool_write_lock;
- rte_mcfg_mempool_write_unlock;
- rte_mcfg_tailq_read_lock;
- rte_mcfg_tailq_read_unlock;
- rte_mcfg_tailq_write_lock;
- rte_mcfg_tailq_write_unlock;
- rte_rand;
- rte_service_lcore_attr_get;
- rte_service_lcore_attr_reset_all;
- rte_service_may_be_active;
- rte_srand;
-
-} DPDK_19.05;
+ rte_vfio_is_enabled;
+ rte_vfio_noiommu_is_enabled;
+ rte_vfio_release_device;
+ rte_vfio_setup_device;
+ rte_vlog;
+ rte_zmalloc;
+ rte_zmalloc_socket;
+
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_efd/rte_efd_version.map b/lib/librte_efd/rte_efd_version.map
index ae60a64178..e010eecfe4 100644
--- a/lib/librte_efd/rte_efd_version.map
+++ b/lib/librte_efd/rte_efd_version.map
@@ -1,4 +1,4 @@
-DPDK_17.02 {
+DPDK_20.0 {
global:
rte_efd_create;
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index 6df42a47b8..9e1dbdebb4 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -1,35 +1,53 @@
-DPDK_2.2 {
+DPDK_20.0 {
global:
+ _rte_eth_dev_callback_process;
+ _rte_eth_dev_reset;
+ rte_eth_add_first_rx_callback;
rte_eth_add_rx_callback;
rte_eth_add_tx_callback;
rte_eth_allmulticast_disable;
rte_eth_allmulticast_enable;
rte_eth_allmulticast_get;
+ rte_eth_dev_adjust_nb_rx_tx_desc;
rte_eth_dev_allocate;
rte_eth_dev_allocated;
+ rte_eth_dev_attach_secondary;
rte_eth_dev_callback_register;
rte_eth_dev_callback_unregister;
rte_eth_dev_close;
rte_eth_dev_configure;
rte_eth_dev_count;
+ rte_eth_dev_count_avail;
+ rte_eth_dev_count_total;
rte_eth_dev_default_mac_addr_set;
+ rte_eth_dev_filter_ctrl;
rte_eth_dev_filter_supported;
rte_eth_dev_flow_ctrl_get;
rte_eth_dev_flow_ctrl_set;
+ rte_eth_dev_fw_version_get;
rte_eth_dev_get_dcb_info;
rte_eth_dev_get_eeprom;
rte_eth_dev_get_eeprom_length;
rte_eth_dev_get_mtu;
+ rte_eth_dev_get_name_by_port;
+ rte_eth_dev_get_port_by_name;
rte_eth_dev_get_reg_info;
+ rte_eth_dev_get_sec_ctx;
+ rte_eth_dev_get_supported_ptypes;
rte_eth_dev_get_vlan_offload;
- rte_eth_devices;
rte_eth_dev_info_get;
rte_eth_dev_is_valid_port;
+ rte_eth_dev_l2_tunnel_eth_type_conf;
+ rte_eth_dev_l2_tunnel_offload_set;
+ rte_eth_dev_logtype;
rte_eth_dev_mac_addr_add;
rte_eth_dev_mac_addr_remove;
+ rte_eth_dev_pool_ops_supported;
rte_eth_dev_priority_flow_ctrl_set;
+ rte_eth_dev_probing_finish;
rte_eth_dev_release_port;
+ rte_eth_dev_reset;
rte_eth_dev_rss_hash_conf_get;
rte_eth_dev_rss_hash_update;
rte_eth_dev_rss_reta_query;
@@ -38,6 +56,7 @@ DPDK_2.2 {
rte_eth_dev_rx_intr_ctl_q;
rte_eth_dev_rx_intr_disable;
rte_eth_dev_rx_intr_enable;
+ rte_eth_dev_rx_offload_name;
rte_eth_dev_rx_queue_start;
rte_eth_dev_rx_queue_stop;
rte_eth_dev_set_eeprom;
@@ -47,18 +66,28 @@ DPDK_2.2 {
rte_eth_dev_set_mtu;
rte_eth_dev_set_rx_queue_stats_mapping;
rte_eth_dev_set_tx_queue_stats_mapping;
+ rte_eth_dev_set_vlan_ether_type;
rte_eth_dev_set_vlan_offload;
rte_eth_dev_set_vlan_pvid;
rte_eth_dev_set_vlan_strip_on_queue;
rte_eth_dev_socket_id;
rte_eth_dev_start;
rte_eth_dev_stop;
+ rte_eth_dev_tx_offload_name;
rte_eth_dev_tx_queue_start;
rte_eth_dev_tx_queue_stop;
rte_eth_dev_uc_all_hash_table_set;
rte_eth_dev_uc_hash_table_set;
+ rte_eth_dev_udp_tunnel_port_add;
+ rte_eth_dev_udp_tunnel_port_delete;
rte_eth_dev_vlan_filter;
+ rte_eth_devices;
rte_eth_dma_zone_reserve;
+ rte_eth_find_next;
+ rte_eth_find_next_owned_by;
+ rte_eth_iterator_cleanup;
+ rte_eth_iterator_init;
+ rte_eth_iterator_next;
rte_eth_led_off;
rte_eth_led_on;
rte_eth_link;
@@ -75,6 +104,7 @@ DPDK_2.2 {
rte_eth_rx_queue_info_get;
rte_eth_rx_queue_setup;
rte_eth_set_queue_rate_limit;
+ rte_eth_speed_bitflag;
rte_eth_stats;
rte_eth_stats_get;
rte_eth_stats_reset;
@@ -85,66 +115,27 @@ DPDK_2.2 {
rte_eth_timesync_read_time;
rte_eth_timesync_read_tx_timestamp;
rte_eth_timesync_write_time;
- rte_eth_tx_queue_info_get;
- rte_eth_tx_queue_setup;
- rte_eth_xstats_get;
- rte_eth_xstats_reset;
-
- local: *;
-};
-
-DPDK_16.04 {
- global:
-
- rte_eth_dev_get_supported_ptypes;
- rte_eth_dev_l2_tunnel_eth_type_conf;
- rte_eth_dev_l2_tunnel_offload_set;
- rte_eth_dev_set_vlan_ether_type;
- rte_eth_dev_udp_tunnel_port_add;
- rte_eth_dev_udp_tunnel_port_delete;
- rte_eth_speed_bitflag;
rte_eth_tx_buffer_count_callback;
rte_eth_tx_buffer_drop_callback;
rte_eth_tx_buffer_init;
rte_eth_tx_buffer_set_err_callback;
-
-} DPDK_2.2;
-
-DPDK_16.07 {
- global:
-
- rte_eth_add_first_rx_callback;
- rte_eth_dev_get_name_by_port;
- rte_eth_dev_get_port_by_name;
- rte_eth_xstats_get_names;
-
-} DPDK_16.04;
-
-DPDK_17.02 {
- global:
-
- _rte_eth_dev_reset;
- rte_eth_dev_fw_version_get;
-
-} DPDK_16.07;
-
-DPDK_17.05 {
- global:
-
- rte_eth_dev_attach_secondary;
- rte_eth_find_next;
rte_eth_tx_done_cleanup;
+ rte_eth_tx_queue_info_get;
+ rte_eth_tx_queue_setup;
+ rte_eth_xstats_get;
rte_eth_xstats_get_by_id;
rte_eth_xstats_get_id_by_name;
+ rte_eth_xstats_get_names;
rte_eth_xstats_get_names_by_id;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- _rte_eth_dev_callback_process;
- rte_eth_dev_adjust_nb_rx_tx_desc;
+ rte_eth_xstats_reset;
+ rte_flow_copy;
+ rte_flow_create;
+ rte_flow_destroy;
+ rte_flow_error_set;
+ rte_flow_flush;
+ rte_flow_isolate;
+ rte_flow_query;
+ rte_flow_validate;
rte_tm_capabilities_get;
rte_tm_get_number_of_leaf_nodes;
rte_tm_hierarchy_commit;
@@ -176,65 +167,8 @@ DPDK_17.08 {
rte_tm_wred_profile_add;
rte_tm_wred_profile_delete;
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_eth_dev_get_sec_ctx;
- rte_eth_dev_pool_ops_supported;
- rte_eth_dev_reset;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_eth_dev_filter_ctrl;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_eth_dev_count_avail;
- rte_eth_dev_probing_finish;
- rte_eth_find_next_owned_by;
- rte_flow_copy;
- rte_flow_create;
- rte_flow_destroy;
- rte_flow_error_set;
- rte_flow_flush;
- rte_flow_isolate;
- rte_flow_query;
- rte_flow_validate;
-
-} DPDK_18.02;
-
-DPDK_18.08 {
- global:
-
- rte_eth_dev_logtype;
-
-} DPDK_18.05;
-
-DPDK_18.11 {
- global:
-
- rte_eth_dev_rx_offload_name;
- rte_eth_dev_tx_offload_name;
- rte_eth_iterator_cleanup;
- rte_eth_iterator_init;
- rte_eth_iterator_next;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- rte_eth_dev_count_total;
-
-} DPDK_18.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 76b3021d3a..edfc15282d 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -1,61 +1,38 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
- rte_eventdevs;
-
+ rte_event_crypto_adapter_caps_get;
+ rte_event_crypto_adapter_create;
+ rte_event_crypto_adapter_create_ext;
+ rte_event_crypto_adapter_event_port_get;
+ rte_event_crypto_adapter_free;
+ rte_event_crypto_adapter_queue_pair_add;
+ rte_event_crypto_adapter_queue_pair_del;
+ rte_event_crypto_adapter_service_id_get;
+ rte_event_crypto_adapter_start;
+ rte_event_crypto_adapter_stats_get;
+ rte_event_crypto_adapter_stats_reset;
+ rte_event_crypto_adapter_stop;
+ rte_event_dequeue_timeout_ticks;
+ rte_event_dev_attr_get;
+ rte_event_dev_close;
+ rte_event_dev_configure;
rte_event_dev_count;
+ rte_event_dev_dump;
rte_event_dev_get_dev_id;
- rte_event_dev_socket_id;
rte_event_dev_info_get;
- rte_event_dev_configure;
+ rte_event_dev_selftest;
+ rte_event_dev_service_id_get;
+ rte_event_dev_socket_id;
rte_event_dev_start;
rte_event_dev_stop;
- rte_event_dev_close;
- rte_event_dev_dump;
+ rte_event_dev_stop_flush_callback_register;
rte_event_dev_xstats_by_name_get;
rte_event_dev_xstats_get;
rte_event_dev_xstats_names_get;
rte_event_dev_xstats_reset;
-
- rte_event_port_default_conf_get;
- rte_event_port_setup;
- rte_event_port_link;
- rte_event_port_unlink;
- rte_event_port_links_get;
-
- rte_event_queue_default_conf_get;
- rte_event_queue_setup;
-
- rte_event_dequeue_timeout_ticks;
-
- rte_event_pmd_allocate;
- rte_event_pmd_release;
- rte_event_pmd_vdev_init;
- rte_event_pmd_vdev_uninit;
- rte_event_pmd_pci_probe;
- rte_event_pmd_pci_remove;
-
- local: *;
-};
-
-DPDK_17.08 {
- global:
-
- rte_event_ring_create;
- rte_event_ring_free;
- rte_event_ring_init;
- rte_event_ring_lookup;
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_event_dev_attr_get;
- rte_event_dev_service_id_get;
- rte_event_port_attr_get;
- rte_event_queue_attr_get;
-
rte_event_eth_rx_adapter_caps_get;
+ rte_event_eth_rx_adapter_cb_register;
rte_event_eth_rx_adapter_create;
rte_event_eth_rx_adapter_create_ext;
rte_event_eth_rx_adapter_free;
@@ -63,38 +40,9 @@ DPDK_17.11 {
rte_event_eth_rx_adapter_queue_del;
rte_event_eth_rx_adapter_service_id_get;
rte_event_eth_rx_adapter_start;
+ rte_event_eth_rx_adapter_stats_get;
rte_event_eth_rx_adapter_stats_reset;
rte_event_eth_rx_adapter_stop;
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_event_dev_selftest;
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_event_dev_stop_flush_callback_register;
-} DPDK_18.02;
-
-DPDK_19.05 {
- global:
-
- rte_event_crypto_adapter_caps_get;
- rte_event_crypto_adapter_create;
- rte_event_crypto_adapter_create_ext;
- rte_event_crypto_adapter_event_port_get;
- rte_event_crypto_adapter_free;
- rte_event_crypto_adapter_queue_pair_add;
- rte_event_crypto_adapter_queue_pair_del;
- rte_event_crypto_adapter_service_id_get;
- rte_event_crypto_adapter_start;
- rte_event_crypto_adapter_stats_get;
- rte_event_crypto_adapter_stats_reset;
- rte_event_crypto_adapter_stop;
- rte_event_port_unlinks_in_progress;
rte_event_eth_tx_adapter_caps_get;
rte_event_eth_tx_adapter_create;
rte_event_eth_tx_adapter_create_ext;
@@ -107,6 +55,26 @@ DPDK_19.05 {
rte_event_eth_tx_adapter_stats_get;
rte_event_eth_tx_adapter_stats_reset;
rte_event_eth_tx_adapter_stop;
+ rte_event_pmd_allocate;
+ rte_event_pmd_pci_probe;
+ rte_event_pmd_pci_remove;
+ rte_event_pmd_release;
+ rte_event_pmd_vdev_init;
+ rte_event_pmd_vdev_uninit;
+ rte_event_port_attr_get;
+ rte_event_port_default_conf_get;
+ rte_event_port_link;
+ rte_event_port_links_get;
+ rte_event_port_setup;
+ rte_event_port_unlink;
+ rte_event_port_unlinks_in_progress;
+ rte_event_queue_attr_get;
+ rte_event_queue_default_conf_get;
+ rte_event_queue_setup;
+ rte_event_ring_create;
+ rte_event_ring_free;
+ rte_event_ring_init;
+ rte_event_ring_lookup;
rte_event_timer_adapter_caps_get;
rte_event_timer_adapter_create;
rte_event_timer_adapter_create_ext;
@@ -121,11 +89,7 @@ DPDK_19.05 {
rte_event_timer_arm_burst;
rte_event_timer_arm_tmo_tick_burst;
rte_event_timer_cancel_burst;
-} DPDK_18.05;
+ rte_eventdevs;
-DPDK_19.08 {
- global:
-
- rte_event_eth_rx_adapter_cb_register;
- rte_event_eth_rx_adapter_stats_get;
-} DPDK_19.05;
+ local: *;
+};
diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map
index 49bc25c6a0..001ff660e3 100644
--- a/lib/librte_flow_classify/rte_flow_classify_version.map
+++ b/lib/librte_flow_classify/rte_flow_classify_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_gro/rte_gro_version.map b/lib/librte_gro/rte_gro_version.map
index 1606b6dc72..9f6fe79e57 100644
--- a/lib/librte_gro/rte_gro_version.map
+++ b/lib/librte_gro/rte_gro_version.map
@@ -1,4 +1,4 @@
-DPDK_17.08 {
+DPDK_20.0 {
global:
rte_gro_ctx_create;
diff --git a/lib/librte_gso/rte_gso_version.map b/lib/librte_gso/rte_gso_version.map
index e1fd453edb..8505a59c27 100644
--- a/lib/librte_gso/rte_gso_version.map
+++ b/lib/librte_gso/rte_gso_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_gso_segment;
diff --git a/lib/librte_hash/rte_hash_version.map b/lib/librte_hash/rte_hash_version.map
index 734ae28b04..138c130c1b 100644
--- a/lib/librte_hash/rte_hash_version.map
+++ b/lib/librte_hash/rte_hash_version.map
@@ -1,58 +1,33 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_fbk_hash_create;
rte_fbk_hash_find_existing;
rte_fbk_hash_free;
rte_hash_add_key;
+ rte_hash_add_key_data;
rte_hash_add_key_with_hash;
+ rte_hash_add_key_with_hash_data;
+ rte_hash_count;
rte_hash_create;
rte_hash_del_key;
rte_hash_del_key_with_hash;
rte_hash_find_existing;
rte_hash_free;
+ rte_hash_get_key_with_position;
rte_hash_hash;
+ rte_hash_iterate;
rte_hash_lookup;
rte_hash_lookup_bulk;
- rte_hash_lookup_with_hash;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_hash_add_key_data;
- rte_hash_add_key_with_hash_data;
- rte_hash_iterate;
rte_hash_lookup_bulk_data;
rte_hash_lookup_data;
+ rte_hash_lookup_with_hash;
rte_hash_lookup_with_hash_data;
rte_hash_reset;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
rte_hash_set_cmp_func;
-} DPDK_2.1;
-
-DPDK_16.07 {
- global:
-
- rte_hash_get_key_with_position;
-
-} DPDK_2.2;
-
-
-DPDK_18.08 {
- global:
-
- rte_hash_count;
-
-} DPDK_16.07;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_ip_frag/rte_ip_frag_version.map b/lib/librte_ip_frag/rte_ip_frag_version.map
index a193007c61..5dd34f828c 100644
--- a/lib/librte_ip_frag/rte_ip_frag_version.map
+++ b/lib/librte_ip_frag/rte_ip_frag_version.map
@@ -1,8 +1,9 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_ip_frag_free_death_row;
rte_ip_frag_table_create;
+ rte_ip_frag_table_destroy;
rte_ip_frag_table_statistics_dump;
rte_ipv4_frag_reassemble_packet;
rte_ipv4_fragment_packet;
@@ -12,13 +13,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_17.08 {
- global:
-
- rte_ip_frag_table_destroy;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index ee9f1961b0..3723b812fc 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_jobstats/rte_jobstats_version.map b/lib/librte_jobstats/rte_jobstats_version.map
index f89441438e..dbd2664ae2 100644
--- a/lib/librte_jobstats/rte_jobstats_version.map
+++ b/lib/librte_jobstats/rte_jobstats_version.map
@@ -1,6 +1,7 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_jobstats_abort;
rte_jobstats_context_finish;
rte_jobstats_context_init;
rte_jobstats_context_reset;
@@ -17,10 +18,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_jobstats_abort;
-
-} DPDK_2.0;
diff --git a/lib/librte_kni/rte_kni_version.map b/lib/librte_kni/rte_kni_version.map
index c877dc6aaa..9cd3cedc54 100644
--- a/lib/librte_kni/rte_kni_version.map
+++ b/lib/librte_kni/rte_kni_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_kni_alloc;
diff --git a/lib/librte_kvargs/rte_kvargs_version.map b/lib/librte_kvargs/rte_kvargs_version.map
index 8f4b4e3f8f..3ba0f4b59c 100644
--- a/lib/librte_kvargs/rte_kvargs_version.map
+++ b/lib/librte_kvargs/rte_kvargs_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_kvargs_count;
@@ -15,4 +15,4 @@ EXPERIMENTAL {
rte_kvargs_parse_delim;
rte_kvargs_strcmp;
-} DPDK_2.0;
+};
diff --git a/lib/librte_latencystats/rte_latencystats_version.map b/lib/librte_latencystats/rte_latencystats_version.map
index ac8403e821..e04e63463f 100644
--- a/lib/librte_latencystats/rte_latencystats_version.map
+++ b/lib/librte_latencystats/rte_latencystats_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_latencystats_get;
diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
index 90beac853d..500f58b806 100644
--- a/lib/librte_lpm/rte_lpm_version.map
+++ b/lib/librte_lpm/rte_lpm_version.map
@@ -1,13 +1,6 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
- rte_lpm_add;
- rte_lpm_create;
- rte_lpm_delete;
- rte_lpm_delete_all;
- rte_lpm_find_existing;
- rte_lpm_free;
- rte_lpm_is_rule_present;
rte_lpm6_add;
rte_lpm6_create;
rte_lpm6_delete;
@@ -18,29 +11,13 @@ DPDK_2.0 {
rte_lpm6_is_rule_present;
rte_lpm6_lookup;
rte_lpm6_lookup_bulk_func;
+ rte_lpm_add;
+ rte_lpm_create;
+ rte_lpm_delete;
+ rte_lpm_delete_all;
+ rte_lpm_find_existing;
+ rte_lpm_free;
+ rte_lpm_is_rule_present;
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_lpm_add;
- rte_lpm_find_existing;
- rte_lpm_create;
- rte_lpm_free;
- rte_lpm_is_rule_present;
- rte_lpm_delete;
- rte_lpm_delete_all;
-
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_lpm6_add;
- rte_lpm6_is_rule_present;
- rte_lpm6_lookup;
- rte_lpm6_lookup_bulk_func;
-
-} DPDK_16.04;
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index 519fead35a..b2dc5e50f4 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -1,26 +1,7 @@
-DPDK_2.0 {
- global:
-
- rte_get_rx_ol_flag_name;
- rte_get_tx_ol_flag_name;
- rte_mbuf_sanity_check;
- rte_pktmbuf_dump;
- rte_pktmbuf_init;
- rte_pktmbuf_pool_init;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_pktmbuf_pool_create;
-
-} DPDK_2.0;
-
-DPDK_16.11 {
+DPDK_20.0 {
global:
+ __rte_pktmbuf_linearize;
__rte_pktmbuf_read;
rte_get_ptype_inner_l2_name;
rte_get_ptype_inner_l3_name;
@@ -31,28 +12,24 @@ DPDK_16.11 {
rte_get_ptype_name;
rte_get_ptype_tunnel_name;
rte_get_rx_ol_flag_list;
+ rte_get_rx_ol_flag_name;
rte_get_tx_ol_flag_list;
-
-} DPDK_2.1;
-
-DPDK_18.08 {
- global:
-
+ rte_get_tx_ol_flag_name;
rte_mbuf_best_mempool_ops;
rte_mbuf_platform_mempool_ops;
+ rte_mbuf_sanity_check;
rte_mbuf_set_platform_mempool_ops;
rte_mbuf_set_user_mempool_ops;
rte_mbuf_user_mempool_ops;
- rte_pktmbuf_pool_create_by_ops;
-} DPDK_16.11;
-
-DPDK_19.11 {
- global:
-
- __rte_pktmbuf_linearize;
rte_pktmbuf_clone;
+ rte_pktmbuf_dump;
+ rte_pktmbuf_init;
+ rte_pktmbuf_pool_create;
+ rte_pktmbuf_pool_create_by_ops;
+ rte_pktmbuf_pool_init;
-} DPDK_18.08;
+ local: *;
+};
EXPERIMENTAL {
global:
@@ -60,4 +37,4 @@ EXPERIMENTAL {
rte_mbuf_check;
rte_pktmbuf_copy;
-} DPDK_18.08;
+};
diff --git a/lib/librte_member/rte_member_version.map b/lib/librte_member/rte_member_version.map
index 019e4cd962..87780ae611 100644
--- a/lib/librte_member/rte_member_version.map
+++ b/lib/librte_member/rte_member_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_member_add;
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4607..6a425d203a 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -1,57 +1,39 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_mempool_audit;
- rte_mempool_calc_obj_size;
- rte_mempool_create;
- rte_mempool_dump;
- rte_mempool_list_dump;
- rte_mempool_lookup;
- rte_mempool_walk;
-
- local: *;
-};
-
-DPDK_16.07 {
- global:
-
rte_mempool_avail_count;
rte_mempool_cache_create;
rte_mempool_cache_flush;
rte_mempool_cache_free;
+ rte_mempool_calc_obj_size;
rte_mempool_check_cookies;
+ rte_mempool_contig_blocks_check_cookies;
+ rte_mempool_create;
rte_mempool_create_empty;
rte_mempool_default_cache;
+ rte_mempool_dump;
rte_mempool_free;
rte_mempool_generic_get;
rte_mempool_generic_put;
rte_mempool_in_use_count;
+ rte_mempool_list_dump;
+ rte_mempool_lookup;
rte_mempool_mem_iter;
rte_mempool_obj_iter;
+ rte_mempool_op_calc_mem_size_default;
+ rte_mempool_op_populate_default;
rte_mempool_ops_table;
rte_mempool_populate_anon;
rte_mempool_populate_default;
+ rte_mempool_populate_iova;
rte_mempool_populate_virt;
rte_mempool_register_ops;
rte_mempool_set_ops_byname;
+ rte_mempool_walk;
-} DPDK_2.0;
-
-DPDK_17.11 {
- global:
-
- rte_mempool_populate_iova;
-
-} DPDK_16.07;
-
-DPDK_18.05 {
- global:
-
- rte_mempool_contig_blocks_check_cookies;
- rte_mempool_op_calc_mem_size_default;
- rte_mempool_op_populate_default;
-
-} DPDK_17.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_meter/rte_meter_version.map b/lib/librte_meter/rte_meter_version.map
index 4b460d5803..46410b0369 100644
--- a/lib/librte_meter/rte_meter_version.map
+++ b/lib/librte_meter/rte_meter_version.map
@@ -1,21 +1,16 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_meter_srtcm_color_aware_check;
rte_meter_srtcm_color_blind_check;
rte_meter_srtcm_config;
+ rte_meter_srtcm_profile_config;
rte_meter_trtcm_color_aware_check;
rte_meter_trtcm_color_blind_check;
rte_meter_trtcm_config;
-
- local: *;
-};
-
-DPDK_18.08 {
- global:
-
- rte_meter_srtcm_profile_config;
rte_meter_trtcm_profile_config;
+
+ local: *;
};
EXPERIMENTAL {
diff --git a/lib/librte_metrics/rte_metrics_version.map b/lib/librte_metrics/rte_metrics_version.map
index 6ac99a44a1..85663f356e 100644
--- a/lib/librte_metrics/rte_metrics_version.map
+++ b/lib/librte_metrics/rte_metrics_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_metrics_get_names;
diff --git a/lib/librte_net/rte_net_version.map b/lib/librte_net/rte_net_version.map
index fffc4a3723..8a4e75a3a0 100644
--- a/lib/librte_net/rte_net_version.map
+++ b/lib/librte_net/rte_net_version.map
@@ -1,25 +1,14 @@
-DPDK_16.11 {
- global:
- rte_net_get_ptype;
-
- local: *;
-};
-
-DPDK_17.05 {
- global:
-
- rte_net_crc_calc;
- rte_net_crc_set_alg;
-
-} DPDK_16.11;
-
-DPDK_19.08 {
+DPDK_20.0 {
global:
rte_eth_random_addr;
rte_ether_format_addr;
+ rte_net_crc_calc;
+ rte_net_crc_set_alg;
+ rte_net_get_ptype;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_pci/rte_pci_version.map b/lib/librte_pci/rte_pci_version.map
index c0280277bb..539785f5f4 100644
--- a/lib/librte_pci/rte_pci_version.map
+++ b/lib/librte_pci/rte_pci_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
eal_parse_pci_BDF;
diff --git a/lib/librte_pdump/rte_pdump_version.map b/lib/librte_pdump/rte_pdump_version.map
index 3e744f3012..6d02ccce6d 100644
--- a/lib/librte_pdump/rte_pdump_version.map
+++ b/lib/librte_pdump/rte_pdump_version.map
@@ -1,4 +1,4 @@
-DPDK_16.07 {
+DPDK_20.0 {
global:
rte_pdump_disable;
diff --git a/lib/librte_pipeline/rte_pipeline_version.map b/lib/librte_pipeline/rte_pipeline_version.map
index 420f065d6e..64d38afecd 100644
--- a/lib/librte_pipeline/rte_pipeline_version.map
+++ b/lib/librte_pipeline/rte_pipeline_version.map
@@ -1,6 +1,8 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_pipeline_ah_packet_drop;
+ rte_pipeline_ah_packet_hijack;
rte_pipeline_check;
rte_pipeline_create;
rte_pipeline_flush;
@@ -9,42 +11,22 @@ DPDK_2.0 {
rte_pipeline_port_in_create;
rte_pipeline_port_in_disable;
rte_pipeline_port_in_enable;
+ rte_pipeline_port_in_stats_read;
rte_pipeline_port_out_create;
rte_pipeline_port_out_packet_insert;
+ rte_pipeline_port_out_stats_read;
rte_pipeline_run;
rte_pipeline_table_create;
rte_pipeline_table_default_entry_add;
rte_pipeline_table_default_entry_delete;
rte_pipeline_table_entry_add;
- rte_pipeline_table_entry_delete;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_pipeline_port_in_stats_read;
- rte_pipeline_port_out_stats_read;
- rte_pipeline_table_stats_read;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
rte_pipeline_table_entry_add_bulk;
+ rte_pipeline_table_entry_delete;
rte_pipeline_table_entry_delete_bulk;
+ rte_pipeline_table_stats_read;
-} DPDK_2.1;
-
-DPDK_16.04 {
- global:
-
- rte_pipeline_ah_packet_hijack;
- rte_pipeline_ah_packet_drop;
-
-} DPDK_2.2;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_port/rte_port_version.map b/lib/librte_port/rte_port_version.map
index 609bcec3ff..db1b8681d9 100644
--- a/lib/librte_port/rte_port_version.map
+++ b/lib/librte_port/rte_port_version.map
@@ -1,62 +1,32 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_port_ethdev_reader_ops;
+ rte_port_ethdev_writer_nodrop_ops;
rte_port_ethdev_writer_ops;
+ rte_port_fd_reader_ops;
+ rte_port_fd_writer_nodrop_ops;
+ rte_port_fd_writer_ops;
+ rte_port_kni_reader_ops;
+ rte_port_kni_writer_nodrop_ops;
+ rte_port_kni_writer_ops;
+ rte_port_ring_multi_reader_ops;
+ rte_port_ring_multi_writer_nodrop_ops;
+ rte_port_ring_multi_writer_ops;
rte_port_ring_reader_ipv4_frag_ops;
+ rte_port_ring_reader_ipv6_frag_ops;
rte_port_ring_reader_ops;
rte_port_ring_writer_ipv4_ras_ops;
+ rte_port_ring_writer_ipv6_ras_ops;
+ rte_port_ring_writer_nodrop_ops;
rte_port_ring_writer_ops;
rte_port_sched_reader_ops;
rte_port_sched_writer_ops;
rte_port_sink_ops;
rte_port_source_ops;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_port_ethdev_writer_nodrop_ops;
- rte_port_ring_reader_ipv6_frag_ops;
- rte_port_ring_writer_ipv6_ras_ops;
- rte_port_ring_writer_nodrop_ops;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
- rte_port_ring_multi_reader_ops;
- rte_port_ring_multi_writer_ops;
- rte_port_ring_multi_writer_nodrop_ops;
-
-} DPDK_2.1;
-
-DPDK_16.07 {
- global:
-
- rte_port_kni_reader_ops;
- rte_port_kni_writer_ops;
- rte_port_kni_writer_nodrop_ops;
-
-} DPDK_2.2;
-
-DPDK_16.11 {
- global:
-
- rte_port_fd_reader_ops;
- rte_port_fd_writer_ops;
- rte_port_fd_writer_nodrop_ops;
-
-} DPDK_16.07;
-
-DPDK_18.11 {
- global:
-
rte_port_sym_crypto_reader_ops;
- rte_port_sym_crypto_writer_ops;
rte_port_sym_crypto_writer_nodrop_ops;
+ rte_port_sym_crypto_writer_ops;
-} DPDK_16.11;
+ local: *;
+};
diff --git a/lib/librte_power/rte_power_version.map b/lib/librte_power/rte_power_version.map
index 042917360e..a94ab30c3d 100644
--- a/lib/librte_power/rte_power_version.map
+++ b/lib/librte_power/rte_power_version.map
@@ -1,39 +1,27 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_power_exit;
+ rte_power_freq_disable_turbo;
rte_power_freq_down;
+ rte_power_freq_enable_turbo;
rte_power_freq_max;
rte_power_freq_min;
rte_power_freq_up;
rte_power_freqs;
+ rte_power_get_capabilities;
rte_power_get_env;
rte_power_get_freq;
+ rte_power_guest_channel_send_msg;
rte_power_init;
rte_power_set_env;
rte_power_set_freq;
+ rte_power_turbo_status;
rte_power_unset_env;
local: *;
};
-DPDK_17.11 {
- global:
-
- rte_power_guest_channel_send_msg;
- rte_power_freq_disable_turbo;
- rte_power_freq_enable_turbo;
- rte_power_turbo_status;
-
-} DPDK_2.0;
-
-DPDK_18.08 {
- global:
-
- rte_power_get_capabilities;
-
-} DPDK_17.11;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_rawdev/rte_rawdev_version.map b/lib/librte_rawdev/rte_rawdev_version.map
index b61dbff11c..d847c9e0d3 100644
--- a/lib/librte_rawdev/rte_rawdev_version.map
+++ b/lib/librte_rawdev/rte_rawdev_version.map
@@ -1,4 +1,4 @@
-DPDK_18.08 {
+DPDK_20.0 {
global:
rte_rawdev_close;
@@ -17,8 +17,8 @@ DPDK_18.08 {
rte_rawdev_pmd_release;
rte_rawdev_queue_conf_get;
rte_rawdev_queue_count;
- rte_rawdev_queue_setup;
rte_rawdev_queue_release;
+ rte_rawdev_queue_setup;
rte_rawdev_reset;
rte_rawdev_selftest;
rte_rawdev_set_attr;
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
index f8b9ef2abb..787e51ef27 100644
--- a/lib/librte_rcu/rte_rcu_version.map
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_reorder/rte_reorder_version.map b/lib/librte_reorder/rte_reorder_version.map
index 0a8a54de83..cf444062df 100644
--- a/lib/librte_reorder/rte_reorder_version.map
+++ b/lib/librte_reorder/rte_reorder_version.map
@@ -1,13 +1,13 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_reorder_create;
- rte_reorder_init;
+ rte_reorder_drain;
rte_reorder_find_existing;
- rte_reorder_reset;
rte_reorder_free;
+ rte_reorder_init;
rte_reorder_insert;
- rte_reorder_drain;
+ rte_reorder_reset;
local: *;
};
diff --git a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map
index 510c1386e0..89d84bcf48 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -1,8 +1,9 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_ring_create;
rte_ring_dump;
+ rte_ring_free;
rte_ring_get_memsize;
rte_ring_init;
rte_ring_list_dump;
@@ -11,13 +12,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_2.2 {
- global:
-
- rte_ring_free;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_sched/rte_sched_version.map b/lib/librte_sched/rte_sched_version.map
index 729588794e..1b48bfbf36 100644
--- a/lib/librte_sched/rte_sched_version.map
+++ b/lib/librte_sched/rte_sched_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_approx;
@@ -14,6 +14,9 @@ DPDK_2.0 {
rte_sched_port_enqueue;
rte_sched_port_free;
rte_sched_port_get_memory_footprint;
+ rte_sched_port_pkt_read_color;
+ rte_sched_port_pkt_read_tree_path;
+ rte_sched_port_pkt_write;
rte_sched_queue_read_stats;
rte_sched_subport_config;
rte_sched_subport_read_stats;
@@ -21,15 +24,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_2.1 {
- global:
-
- rte_sched_port_pkt_write;
- rte_sched_port_pkt_read_tree_path;
- rte_sched_port_pkt_read_color;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
index 53267bf3cc..b07314bbf4 100644
--- a/lib/librte_security/rte_security_version.map
+++ b/lib/librte_security/rte_security_version.map
@@ -1,4 +1,4 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
rte_security_attach_session;
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
index 6662679c36..adbb7be9d9 100644
--- a/lib/librte_stack/rte_stack_version.map
+++ b/lib/librte_stack/rte_stack_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_table/rte_table_version.map b/lib/librte_table/rte_table_version.map
index 6237252bec..40f72b1fe8 100644
--- a/lib/librte_table/rte_table_version.map
+++ b/lib/librte_table/rte_table_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_table_acl_ops;
diff --git a/lib/librte_telemetry/rte_telemetry_version.map b/lib/librte_telemetry/rte_telemetry_version.map
index fa62d7718c..c1f4613af5 100644
--- a/lib/librte_telemetry/rte_telemetry_version.map
+++ b/lib/librte_telemetry/rte_telemetry_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_timer/rte_timer_version.map b/lib/librte_timer/rte_timer_version.map
index 72f75c8181..2a59d3f081 100644
--- a/lib/librte_timer/rte_timer_version.map
+++ b/lib/librte_timer/rte_timer_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_timer_dump_stats;
@@ -14,16 +14,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_19.05 {
- global:
-
- rte_timer_dump_stats;
- rte_timer_manage;
- rte_timer_reset;
- rte_timer_stop;
- rte_timer_subsystem_init;
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 5f1d4a75c2..8e9ffac2c2 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -1,64 +1,34 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_vhost_avail_entries;
rte_vhost_dequeue_burst;
rte_vhost_driver_callback_register;
- rte_vhost_driver_register;
- rte_vhost_enable_guest_notification;
- rte_vhost_enqueue_burst;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_vhost_driver_unregister;
-
-} DPDK_2.0;
-
-DPDK_16.07 {
- global:
-
- rte_vhost_avail_entries;
- rte_vhost_get_ifname;
- rte_vhost_get_numa_node;
- rte_vhost_get_queue_num;
-
-} DPDK_2.1;
-
-DPDK_17.05 {
- global:
-
rte_vhost_driver_disable_features;
rte_vhost_driver_enable_features;
rte_vhost_driver_get_features;
+ rte_vhost_driver_register;
rte_vhost_driver_set_features;
rte_vhost_driver_start;
+ rte_vhost_driver_unregister;
+ rte_vhost_enable_guest_notification;
+ rte_vhost_enqueue_burst;
+ rte_vhost_get_ifname;
rte_vhost_get_mem_table;
rte_vhost_get_mtu;
rte_vhost_get_negotiated_features;
+ rte_vhost_get_numa_node;
+ rte_vhost_get_queue_num;
rte_vhost_get_vhost_vring;
rte_vhost_get_vring_num;
rte_vhost_gpa_to_vva;
rte_vhost_log_used_vring;
rte_vhost_log_write;
-
-} DPDK_16.07;
-
-DPDK_17.08 {
- global:
-
rte_vhost_rx_queue_count;
-
-} DPDK_17.05;
-
-DPDK_18.02 {
- global:
-
rte_vhost_vring_call;
-} DPDK_17.08;
+ local: *;
+};
EXPERIMENTAL {
global:
--
2.17.1
* Re: [dpdk-dev] [PATCH v4 07/10] distributor: rename v2.0 ABI to _single suffix
2019-10-17 14:31 6% ` [dpdk-dev] [PATCH v4 07/10] distributor: rename v2.0 ABI to _single suffix Anatoly Burakov
@ 2019-10-17 16:00 4% ` Hunt, David
From: Hunt, David @ 2019-10-17 16:00 UTC (permalink / raw)
To: Anatoly Burakov, dev
Cc: Marcin Baran, john.mcnamara, bruce.richardson, thomas, david.marchand
On 17/10/2019 15:31, Anatoly Burakov wrote:
> From: Marcin Baran <marcinx.baran@intel.com>
>
> The original ABI versioning was slightly misleading in that the
> DPDK 2.0 ABI was really a single mode for the distributor, and is
> used as such throughout the distributor code.
>
> Fix this by renaming all _v20 APIs to _single APIs, and removing
> symbol versioning.
>
> Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>
> Notes:
> v4:
> - Changed it back to how it was with v2
> - Removed remaining v2.0 symbols
>
> v3:
> - Removed single mode from distributor as per Dave's comments
>
> v2:
> - Moved this to before ABI version bump to avoid compile breakage
>
Hi Anatoly,
tested with the Distributor sample app and the unit tests; looks good.
Acked-by: David Hunt <david.hunt@intel.com>
* Re: [dpdk-dev] [PATCH v4 06/10] distributor: remove deprecated code
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 06/10] distributor: " Anatoly Burakov
@ 2019-10-17 15:59 0% ` Hunt, David
From: Hunt, David @ 2019-10-17 15:59 UTC (permalink / raw)
To: Anatoly Burakov, dev
Cc: Marcin Baran, john.mcnamara, bruce.richardson, thomas, david.marchand
On 17/10/2019 15:31, Anatoly Burakov wrote:
> From: Marcin Baran <marcinx.baran@intel.com>
>
> Remove code for old ABI versions ahead of ABI version bump.
>
> Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>
> Notes:
> v2:
> - Moved this to before ABI version bump to avoid compile breakage
>
Hi Anatoly,
tested with the Distributor sample app and the unit tests; looks good.
Acked-by: David Hunt <david.hunt@intel.com>
* [dpdk-dev] [PATCH v2] mbuf: support dynamic fields and flags
@ 2019-10-17 14:42 3% ` Olivier Matz
2019-10-18 2:47 0% ` Wang, Haiyue
` (3 more replies)
2019-10-24 8:13 3% ` [dpdk-dev] [PATCH v3] " Olivier Matz
2019-10-26 12:39 3% ` [dpdk-dev] [PATCH v4] " Olivier Matz
From: Olivier Matz @ 2019-10-17 14:42 UTC (permalink / raw)
To: dev
Cc: Andrew Rybchenko, Bruce Richardson, Wang, Haiyue,
Jerin Jacob Kollanukkaran, Wiles, Keith, Ananyev, Konstantin,
Morten Brørup, Stephen Hemminger, Thomas Monjalon
Many features require storing data inside the mbuf. As the room in the
mbuf structure is limited, it is not possible to have a field for each
feature. Also, changing fields in the mbuf structure can break the API
or ABI.
This commit addresses these issues by enabling the dynamic registration
of fields or flags:
- a dynamic field is a named area in the rte_mbuf structure, with a
given size (>= 1 byte) and alignment constraint.
- a dynamic flag is a named bit in the rte_mbuf structure.
The typical use case is a PMD that registers space for an offload
feature, when the application requests to enable this feature. As
the space in mbuf is limited, the space should only be reserved if it
is going to be used (i.e when the application explicitly asks for it).
The registration can be done at any moment, but it is not possible
to unregister fields or flags for now.
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
v2
* Rebase on top of master: solve conflict with Stephen's patchset
(packet copy)
* Add new APIs to register a dynamic field/flag at a specific place
* Add a dump function (sugg by David)
* Enhance field registration function to select the best offset, keeping
large aligned zones as much as possible (sugg by Konstantin)
* Use a size_t and unsigned int instead of int when relevant
(sugg by Konstantin)
* Use "uint64_t dynfield1[2]" in mbuf instead of 2 uint64_t fields
(sugg by Konstantin)
* Remove unused argument in private function (sugg by Konstantin)
* Fix and simplify locking (sugg by Konstantin)
* Fix minor typo
rfc -> v1
* Rebase on top of master
* Change registration API to use a structure instead of
variables, getting rid of #defines (Stephen's comment)
* Update flag registration to use a similar API as fields.
* Change max name length from 32 to 64 (sugg. by Thomas)
* Enhance API documentation (Haiyue's and Andrew's comments)
* Add a debug log at registration
* Add some words in release note
* Did some performance tests (sugg. by Andrew):
On my platform, reading a dynamic field takes ~3 cycles more
than a static field, and ~2 cycles more for writing.
app/test/test_mbuf.c | 145 ++++++-
doc/guides/rel_notes/release_19_11.rst | 7 +
lib/librte_mbuf/Makefile | 2 +
lib/librte_mbuf/meson.build | 6 +-
lib/librte_mbuf/rte_mbuf.h | 23 +-
lib/librte_mbuf/rte_mbuf_dyn.c | 548 +++++++++++++++++++++++++
lib/librte_mbuf/rte_mbuf_dyn.h | 226 ++++++++++
lib/librte_mbuf/rte_mbuf_version.map | 7 +
8 files changed, 959 insertions(+), 5 deletions(-)
create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
diff --git a/app/test/test_mbuf.c b/app/test/test_mbuf.c
index b9c2b2500..01cafad59 100644
--- a/app/test/test_mbuf.c
+++ b/app/test/test_mbuf.c
@@ -28,6 +28,7 @@
#include <rte_random.h>
#include <rte_cycles.h>
#include <rte_malloc.h>
+#include <rte_mbuf_dyn.h>
#include "test.h"
@@ -657,7 +658,6 @@ test_attach_from_different_pool(struct rte_mempool *pktmbuf_pool,
rte_pktmbuf_free(clone2);
return -1;
}
-#undef GOTO_FAIL
/*
* test allocation and free of mbufs
@@ -1276,6 +1276,143 @@ test_tx_offload(void)
return (v1 == v2) ? 0 : -EINVAL;
}
+static int
+test_mbuf_dyn(struct rte_mempool *pktmbuf_pool)
+{
+ const struct rte_mbuf_dynfield dynfield = {
+ .name = "test-dynfield",
+ .size = sizeof(uint8_t),
+ .align = __alignof__(uint8_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield2 = {
+ .name = "test-dynfield2",
+ .size = sizeof(uint16_t),
+ .align = __alignof__(uint16_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield3 = {
+ .name = "test-dynfield3",
+ .size = sizeof(uint8_t),
+ .align = __alignof__(uint8_t),
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield_fail_big = {
+ .name = "test-dynfield-fail-big",
+ .size = 256,
+ .align = 1,
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynfield dynfield_fail_align = {
+ .name = "test-dynfield-fail-align",
+ .size = 1,
+ .align = 3,
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag = {
+ .name = "test-dynflag",
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag2 = {
+ .name = "test-dynflag2",
+ .flags = 0,
+ };
+ const struct rte_mbuf_dynflag dynflag3 = {
+ .name = "test-dynflag3",
+ .flags = 0,
+ };
+ struct rte_mbuf *m = NULL;
+ int offset, offset2, offset3;
+ int flag, flag2, flag3;
+ int ret;
+
+ printf("Test mbuf dynamic fields and flags\n");
+ rte_mbuf_dyn_dump(stdout);
+
+ offset = rte_mbuf_dynfield_register(&dynfield);
+ if (offset == -1)
+ GOTO_FAIL("failed to register dynamic field, offset=%d: %s",
+ offset, strerror(errno));
+
+ ret = rte_mbuf_dynfield_register(&dynfield);
+ if (ret != offset)
+ GOTO_FAIL("failed to lookup dynamic field, ret=%d: %s",
+ ret, strerror(errno));
+
+ offset2 = rte_mbuf_dynfield_register(&dynfield2);
+ if (offset2 == -1 || offset2 == offset || (offset2 & 1))
+ GOTO_FAIL("failed to register dynamic field 2, offset2=%d: %s",
+ offset2, strerror(errno));
+
+ offset3 = rte_mbuf_dynfield_register_offset(&dynfield3,
+ offsetof(struct rte_mbuf, dynfield1[1]));
+ if (offset3 != offsetof(struct rte_mbuf, dynfield1[1]))
+ GOTO_FAIL("failed to register dynamic field 3, offset=%d: %s",
+ offset3, strerror(errno));
+
+ printf("dynfield: offset=%d, offset2=%d, offset3=%d\n",
+ offset, offset2, offset3);
+
+ ret = rte_mbuf_dynfield_register(&dynfield_fail_big);
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (too big)");
+
+ ret = rte_mbuf_dynfield_register(&dynfield_fail_align);
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (bad alignment)");
+
+ ret = rte_mbuf_dynfield_register_offset(&dynfield_fail_align,
+ offsetof(struct rte_mbuf, ol_flags));
+ if (ret != -1)
+ GOTO_FAIL("dynamic field creation should fail (not avail)");
+
+ flag = rte_mbuf_dynflag_register(&dynflag);
+ if (flag == -1)
+ GOTO_FAIL("failed to register dynamic flag, flag=%d: %s",
+ flag, strerror(errno));
+
+ ret = rte_mbuf_dynflag_register(&dynflag);
+ if (ret != flag)
+ GOTO_FAIL("failed to lookup dynamic flag, ret=%d: %s",
+ ret, strerror(errno));
+
+ flag2 = rte_mbuf_dynflag_register(&dynflag2);
+ if (flag2 == -1 || flag2 == flag)
+ GOTO_FAIL("failed to register dynamic flag 2, flag2=%d: %s",
+ flag2, strerror(errno));
+
+ flag3 = rte_mbuf_dynflag_register_bitnum(&dynflag3,
+ rte_bsf64(PKT_LAST_FREE));
+ if (flag3 != rte_bsf64(PKT_LAST_FREE))
+ GOTO_FAIL("failed to register dynamic flag 3, flag2=%d: %s",
+ flag3, strerror(errno));
+
+ printf("dynflag: flag=%d, flag2=%d, flag3=%d\n", flag, flag2, flag3);
+
+ /* set, get dynamic field */
+ m = rte_pktmbuf_alloc(pktmbuf_pool);
+ if (m == NULL)
+ GOTO_FAIL("Cannot allocate mbuf");
+
+ *RTE_MBUF_DYNFIELD(m, offset, uint8_t *) = 1;
+ if (*RTE_MBUF_DYNFIELD(m, offset, uint8_t *) != 1)
+ GOTO_FAIL("failed to read dynamic field");
+ *RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) = 1000;
+ if (*RTE_MBUF_DYNFIELD(m, offset2, uint16_t *) != 1000)
+ GOTO_FAIL("failed to read dynamic field");
+
+ /* set a dynamic flag */
+ m->ol_flags |= (1ULL << flag);
+
+ rte_mbuf_dyn_dump(stdout);
+ rte_pktmbuf_free(m);
+ return 0;
+fail:
+ rte_pktmbuf_free(m);
+ return -1;
+}
+#undef GOTO_FAIL
+
static int
test_mbuf(void)
{
@@ -1295,6 +1432,12 @@ test_mbuf(void)
goto err;
}
+ /* test registration of dynamic fields and flags */
+ if (test_mbuf_dyn(pktmbuf_pool) < 0) {
+ printf("mbuf dynflag test failed\n");
+ goto err;
+ }
+
/* create a specific pktmbuf pool with a priv_size != 0 and no data
* room size */
pktmbuf_pool2 = rte_pktmbuf_pool_create("test_pktmbuf_pool2",
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 85953b962..9e9c94554 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -21,6 +21,13 @@ DPDK Release 19.11
xdg-open build/doc/html/guides/rel_notes/release_19_11.html
+* **Add support of dynamic fields and flags in mbuf.**
+
+ This new feature adds the ability to dynamically register some room
+ for a field or a flag in the mbuf structure. This is typically used
+ for specific offload features, where adding a static field or flag
+ in the mbuf is not justified.
+
New Features
------------
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
index c8f6d2689..5a9bcee73 100644
--- a/lib/librte_mbuf/Makefile
+++ b/lib/librte_mbuf/Makefile
@@ -17,8 +17,10 @@ LIBABIVER := 5
# all source are stored in SRCS-y
SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c rte_mbuf_pool_ops.c
+SRCS-$(CONFIG_RTE_LIBRTE_MBUF) += rte_mbuf_dyn.c
# install includes
SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include := rte_mbuf.h rte_mbuf_ptype.h rte_mbuf_pool_ops.h
+SYMLINK-$(CONFIG_RTE_LIBRTE_MBUF)-include += rte_mbuf_dyn.h
include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build
index 6cc11ebb4..9137e8f26 100644
--- a/lib/librte_mbuf/meson.build
+++ b/lib/librte_mbuf/meson.build
@@ -2,8 +2,10 @@
# Copyright(c) 2017 Intel Corporation
version = 5
-sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c')
-headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
+sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c',
+ 'rte_mbuf_dyn.c')
+headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h',
+ 'rte_mbuf_dyn.h')
deps += ['mempool']
allow_experimental_apis = true
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index fb0849ac1..5740b1e93 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -198,9 +198,12 @@ extern "C" {
#define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
#define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
-/* add new RX flags here */
+/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
-/* add new TX flags here */
+#define PKT_FIRST_FREE (1ULL << 23)
+#define PKT_LAST_FREE (1ULL << 39)
+
+/* add new TX flags here, don't forget to update PKT_LAST_FREE */
/**
* Indicate that the metadata field in the mbuf is in use.
@@ -738,6 +741,7 @@ struct rte_mbuf {
*/
struct rte_mbuf_ext_shared_info *shinfo;
+ uint64_t dynfield1[2]; /**< Reserved for dynamic fields. */
} __rte_cache_aligned;
/**
@@ -1684,6 +1688,20 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
*/
#define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
+/**
+ * Copy dynamic fields from m_src to m_dst.
+ *
+ * @param m_dst
+ * The destination mbuf.
+ * @param m_src
+ * The source mbuf.
+ */
+static inline void
+rte_mbuf_dynfield_copy(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
+{
+ memcpy(&mdst->dynfield1, msrc->dynfield1, sizeof(mdst->dynfield1));
+}
+
/* internal */
static inline void
__rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
@@ -1695,6 +1713,7 @@ __rte_pktmbuf_copy_hdr(struct rte_mbuf *mdst, const struct rte_mbuf *msrc)
mdst->hash = msrc->hash;
mdst->packet_type = msrc->packet_type;
mdst->timestamp = msrc->timestamp;
+ rte_mbuf_dynfield_copy(mdst, msrc);
}
/**
diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
new file mode 100644
index 000000000..9ef235483
--- /dev/null
+++ b/lib/librte_mbuf/rte_mbuf_dyn.c
@@ -0,0 +1,548 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019 6WIND S.A.
+ */
+
+#include <sys/queue.h>
+#include <stdint.h>
+#include <limits.h>
+
+#include <rte_common.h>
+#include <rte_eal.h>
+#include <rte_eal_memconfig.h>
+#include <rte_tailq.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+#include <rte_mbuf.h>
+#include <rte_mbuf_dyn.h>
+
+#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
+
+struct mbuf_dynfield_elt {
+ TAILQ_ENTRY(mbuf_dynfield_elt) next;
+ struct rte_mbuf_dynfield params;
+ size_t offset;
+};
+TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
+
+static struct rte_tailq_elem mbuf_dynfield_tailq = {
+ .name = "RTE_MBUF_DYNFIELD",
+};
+EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
+
+struct mbuf_dynflag_elt {
+ TAILQ_ENTRY(mbuf_dynflag_elt) next;
+ struct rte_mbuf_dynflag params;
+ unsigned int bitnum;
+};
+TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
+
+static struct rte_tailq_elem mbuf_dynflag_tailq = {
+ .name = "RTE_MBUF_DYNFLAG",
+};
+EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
+
+struct mbuf_dyn_shm {
+ /**
+ * For each mbuf byte, free_space[i] != 0 if space is free.
+ * The value is the size of the biggest aligned element that
+ * can fit in the zone.
+ */
+ uint8_t free_space[sizeof(struct rte_mbuf)];
+ /** Bitfield of available flags. */
+ uint64_t free_flags;
+};
+static struct mbuf_dyn_shm *shm;
+
+/* Set the value of free_space[] according to the size and alignment of
+ * the free areas. This helps to select the best place when reserving a
+ * dynamic field. Assume tailq is locked.
+ */
+static void
+process_score(void)
+{
+ size_t off, align, size, i;
+
+ /* first, erase previous info */
+ for (i = 0; i < sizeof(struct rte_mbuf); i++) {
+ if (shm->free_space[i])
+ shm->free_space[i] = 1;
+ }
+
+ for (off = 0; off < sizeof(struct rte_mbuf); off++) {
+ /* get the size of the free zone */
+ for (size = 0; shm->free_space[off + size]; size++)
+ ;
+ if (size == 0)
+ continue;
+
+ /* get the alignment of biggest object that can fit in
+ * the zone at this offset.
+ */
+ for (align = 1;
+ (off % (align << 1)) == 0 && (align << 1) <= size;
+ align <<= 1)
+ ;
+
+ /* save it in free_space[] */
+ for (i = off; i < off + size; i++)
+ shm->free_space[i] = RTE_MAX(align, shm->free_space[i]);
+ }
+}
+
+/* Allocate and initialize the shared memory. Assume tailq is locked */
+static int
+init_shared_mem(void)
+{
+ const struct rte_memzone *mz;
+ uint64_t mask;
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
+ sizeof(struct mbuf_dyn_shm),
+ SOCKET_ID_ANY, 0,
+ RTE_CACHE_LINE_SIZE);
+ } else {
+ mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
+ }
+ if (mz == NULL)
+ return -1;
+
+ shm = mz->addr;
+
+#define mark_free(field) \
+ memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
+ 1, sizeof(((struct rte_mbuf *)0)->field))
+
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
+ /* init free_space, keep it sync'd with
+ * rte_mbuf_dynfield_copy().
+ */
+ memset(shm, 0, sizeof(*shm));
+ mark_free(dynfield1);
+
+ /* init free_flags */
+ for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
+ shm->free_flags |= mask;
+
+ process_score();
+ }
+#undef mark_free
+
+ return 0;
+}
+
+/* check if this offset can be used */
+static int
+check_offset(size_t offset, size_t size, size_t align)
+{
+ size_t i;
+
+ if ((offset & (align - 1)) != 0)
+ return -1;
+ if (offset + size > sizeof(struct rte_mbuf))
+ return -1;
+
+ for (i = 0; i < size; i++) {
+ if (!shm->free_space[i + offset])
+ return -1;
+ }
+
+ return 0;
+}
+
+/* assume tailq is locked */
+static struct mbuf_dynfield_elt *
+__mbuf_dynfield_lookup(const char *name)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *mbuf_dynfield;
+ struct rte_tailq_entry *te;
+
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+
+ TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
+ mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
+ if (strcmp(name, mbuf_dynfield->params.name) == 0)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return mbuf_dynfield;
+}
+
+int
+rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
+{
+ struct mbuf_dynfield_elt *mbuf_dynfield;
+
+ if (shm == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ rte_mcfg_tailq_read_lock();
+ mbuf_dynfield = __mbuf_dynfield_lookup(name);
+ rte_mcfg_tailq_read_unlock();
+
+ if (mbuf_dynfield == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ if (params != NULL)
+ memcpy(params, &mbuf_dynfield->params, sizeof(*params));
+
+ return mbuf_dynfield->offset;
+}
+
+static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
+ const struct rte_mbuf_dynfield *params2)
+{
+ if (strcmp(params1->name, params2->name))
+ return -1;
+ if (params1->size != params2->size)
+ return -1;
+ if (params1->align != params2->align)
+ return -1;
+ if (params1->flags != params2->flags)
+ return -1;
+ return 0;
+}
+
+/* assume tailq is locked */
+static int
+__rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t req)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
+ struct rte_tailq_entry *te = NULL;
+ unsigned int best_zone = UINT_MAX;
+ size_t i, offset;
+ int ret;
+
+ if (shm == NULL && init_shared_mem() < 0)
+ return -1;
+
+ mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
+ if (mbuf_dynfield != NULL) {
+ if (req != SIZE_MAX && req != mbuf_dynfield->offset) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) < 0) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ return mbuf_dynfield->offset;
+ }
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ rte_errno = EPERM;
+ return -1;
+ }
+
+ if (req == SIZE_MAX) {
+ for (offset = 0;
+ offset < sizeof(struct rte_mbuf);
+ offset++) {
+ if (check_offset(offset, params->size,
+ params->align) == 0 &&
+ shm->free_space[offset] < best_zone) {
+ best_zone = shm->free_space[offset];
+ req = offset;
+ }
+ }
+ if (req == SIZE_MAX) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+ } else {
+ if (check_offset(req, params->size, params->align) < 0) {
+ rte_errno = EBUSY;
+ return -1;
+ }
+ }
+
+ offset = req;
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+
+ te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL)
+ return -1;
+
+ mbuf_dynfield = rte_zmalloc("mbuf_dynfield", sizeof(*mbuf_dynfield), 0);
+ if (mbuf_dynfield == NULL) {
+ rte_free(te);
+ return -1;
+ }
+
+ ret = strlcpy(mbuf_dynfield->params.name, params->name,
+ sizeof(mbuf_dynfield->params.name));
+ if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
+ rte_errno = ENAMETOOLONG;
+ rte_free(mbuf_dynfield);
+ rte_free(te);
+ return -1;
+ }
+ memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
+ mbuf_dynfield->offset = offset;
+ te->data = mbuf_dynfield;
+
+ TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
+
+ for (i = offset; i < offset + params->size; i++)
+ shm->free_space[i] = 0;
+ process_score();
+
+ RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %zd\n",
+ params->name, params->size, params->align, params->flags,
+ offset);
+
+ return offset;
+}
+
+int
+rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t req)
+{
+ int ret;
+
+ if (params->size >= sizeof(struct rte_mbuf)) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+ if (!rte_is_power_of_2(params->align)) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+ if (params->flags != 0) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ rte_mcfg_tailq_write_lock();
+ ret = __rte_mbuf_dynfield_register_offset(params, req);
+ rte_mcfg_tailq_write_unlock();
+
+ return ret;
+}
+
+int
+rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
+{
+ return rte_mbuf_dynfield_register_offset(params, SIZE_MAX);
+}
+
+/* assume tailq is locked */
+static struct mbuf_dynflag_elt *
+__mbuf_dynflag_lookup(const char *name)
+{
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *mbuf_dynflag;
+ struct rte_tailq_entry *te;
+
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+
+ TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
+ mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
+ if (strncmp(name, mbuf_dynflag->params.name,
+ RTE_MBUF_DYN_NAMESIZE) == 0)
+ break;
+ }
+
+ if (te == NULL) {
+ rte_errno = ENOENT;
+ return NULL;
+ }
+
+ return mbuf_dynflag;
+}
+
+int
+rte_mbuf_dynflag_lookup(const char *name,
+ struct rte_mbuf_dynflag *params)
+{
+ struct mbuf_dynflag_elt *mbuf_dynflag;
+
+ if (shm == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ rte_mcfg_tailq_read_lock();
+ mbuf_dynflag = __mbuf_dynflag_lookup(name);
+ rte_mcfg_tailq_read_unlock();
+
+ if (mbuf_dynflag == NULL) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+
+ if (params != NULL)
+ memcpy(params, &mbuf_dynflag->params, sizeof(*params));
+
+ return mbuf_dynflag->bitnum;
+}
+
+static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
+ const struct rte_mbuf_dynflag *params2)
+{
+ if (strcmp(params1->name, params2->name))
+ return -1;
+ if (params1->flags != params2->flags)
+ return -1;
+ return 0;
+}
+
+/* assume tailq is locked */
+static int
+__rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int req)
+{
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
+ struct rte_tailq_entry *te = NULL;
+ unsigned int bitnum;
+ int ret;
+
+ if (shm == NULL && init_shared_mem() < 0)
+ return -1;
+
+ mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
+ if (mbuf_dynflag != NULL) {
+ if (req != UINT_MAX && req != mbuf_dynflag->bitnum) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0) {
+ rte_errno = EEXIST;
+ return -1;
+ }
+ return mbuf_dynflag->bitnum;
+ }
+
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
+ rte_errno = EPERM;
+ return -1;
+ }
+
+ if (req == UINT_MAX) {
+ if (shm->free_flags == 0) {
+ rte_errno = ENOENT;
+ return -1;
+ }
+ bitnum = rte_bsf64(shm->free_flags);
+ } else {
+ if ((shm->free_flags & (1ULL << req)) == 0) {
+ rte_errno = EBUSY;
+ return -1;
+ }
+ bitnum = req;
+ }
+
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+
+ te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
+ if (te == NULL)
+ return -1;
+
+ mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag), 0);
+ if (mbuf_dynflag == NULL) {
+ rte_free(te);
+ return -1;
+ }
+
+ ret = strlcpy(mbuf_dynflag->params.name, params->name,
+ sizeof(mbuf_dynflag->params.name));
+ if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
+ rte_free(mbuf_dynflag);
+ rte_free(te);
+ rte_errno = ENAMETOOLONG;
+ return -1;
+ }
+ mbuf_dynflag->bitnum = bitnum;
+ te->data = mbuf_dynflag;
+
+ TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
+
+ shm->free_flags &= ~(1ULL << bitnum);
+
+ RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
+ params->name, params->flags, bitnum);
+
+ return bitnum;
+}
+
+int
+rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int req)
+{
+ int ret;
+
+ if (req != UINT_MAX && req >= 64) {
+ rte_errno = EINVAL;
+ return -1;
+ }
+
+ rte_mcfg_tailq_write_lock();
+ ret = __rte_mbuf_dynflag_register_bitnum(params, req);
+ rte_mcfg_tailq_write_unlock();
+
+ return ret;
+}
+
+int
+rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
+{
+ return rte_mbuf_dynflag_register_bitnum(params, UINT_MAX);
+}
+
+void rte_mbuf_dyn_dump(FILE *out)
+{
+ struct mbuf_dynfield_list *mbuf_dynfield_list;
+ struct mbuf_dynfield_elt *dynfield;
+ struct mbuf_dynflag_list *mbuf_dynflag_list;
+ struct mbuf_dynflag_elt *dynflag;
+ struct rte_tailq_entry *te;
+ size_t i;
+
+ rte_mcfg_tailq_write_lock();
+ init_shared_mem();
+ fprintf(out, "Reserved fields:\n");
+ mbuf_dynfield_list = RTE_TAILQ_CAST(
+ mbuf_dynfield_tailq.head, mbuf_dynfield_list);
+ TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
+ dynfield = (struct mbuf_dynfield_elt *)te->data;
+ fprintf(out, " name=%s offset=%zd size=%zd align=%zd flags=%x\n",
+ dynfield->params.name, dynfield->offset,
+ dynfield->params.size, dynfield->params.align,
+ dynfield->params.flags);
+ }
+ fprintf(out, "Reserved flags:\n");
+ mbuf_dynflag_list = RTE_TAILQ_CAST(
+ mbuf_dynflag_tailq.head, mbuf_dynflag_list);
+ TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
+ dynflag = (struct mbuf_dynflag_elt *)te->data;
+ fprintf(out, " name=%s bitnum=%u flags=%x\n",
+ dynflag->params.name, dynflag->bitnum,
+ dynflag->params.flags);
+ }
+ fprintf(out, "Free space in mbuf (0 = occupied, value = free zone alignment):\n");
+ for (i = 0; i < sizeof(struct rte_mbuf); i++) {
+ if ((i % 8) == 0)
+ fprintf(out, " %4.4zx: ", i);
+ fprintf(out, "%2.2x%s", shm->free_space[i],
+ (i % 8 != 7) ? " " : "\n");
+ }
+ rte_mcfg_tailq_write_unlock();
+}
diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
new file mode 100644
index 000000000..307613c96
--- /dev/null
+++ b/lib/librte_mbuf/rte_mbuf_dyn.h
@@ -0,0 +1,226 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019 6WIND S.A.
+ */
+
+#ifndef _RTE_MBUF_DYN_H_
+#define _RTE_MBUF_DYN_H_
+
+/**
+ * @file
+ * RTE Mbuf dynamic fields and flags
+ *
+ * Many features require storing data inside the mbuf. As the room in
+ * mbuf structure is limited, it is not possible to have a field for
+ * each feature. Also, changing fields in the mbuf structure can break
+ * the API or ABI.
+ *
+ * This module addresses this issue, by enabling the dynamic
+ * registration of fields or flags:
+ *
+ * - a dynamic field is a named area in the rte_mbuf structure, with a
+ * given size (>= 1 byte) and alignment constraint.
+ * - a dynamic flag is a named bit in the rte_mbuf structure, stored
+ * in mbuf->ol_flags.
+ *
+ * The typical use case is when a specific offload feature needs to
+ * register a dedicated offload field in the mbuf structure, and adding
+ * a static field or flag is not justified.
+ *
+ * Example of use:
+ *
+ * - A rte_mbuf_dynfield structure is defined, containing the parameters
+ * of the dynamic field to be registered:
+ * const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
+ * - The application initializes the PMD, and asks for this feature
+ * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
+ * rxconf. This will make the PMD register the field by calling
+ * rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
+ * stores the returned offset.
+ * - The application that uses the offload feature also registers
+ * the field to retrieve the same offset.
+ * - When the PMD receives a packet, it can set the field:
+ * *RTE_MBUF_DYNFIELD(m, offset, <type *>) = value;
+ * - In the main loop, the application can retrieve the value with
+ * the same macro.
+ *
+ * To avoid wasting space, the dynamic fields or flags must only be
+ * reserved on demand, when an application asks for the related feature.
+ *
+ * The registration can be done at any moment, but it is not possible
+ * to unregister fields or flags for now.
+ *
+ * A dynamic field can also be reserved by the application for its own
+ * use, for instance to store a packet mark.
+ */
+
+#include <sys/types.h>
+/**
+ * Maximum length of the dynamic field or flag string.
+ */
+#define RTE_MBUF_DYN_NAMESIZE 64
+
+/**
+ * Structure describing the parameters of a mbuf dynamic field.
+ */
+struct rte_mbuf_dynfield {
+ char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the field. */
+ size_t size; /**< The number of bytes to reserve. */
+ size_t align; /**< The alignment constraint (power of 2). */
+ unsigned int flags; /**< Reserved for future use, must be 0. */
+};
+
+/**
+ * Structure describing the parameters of a mbuf dynamic flag.
+ */
+struct rte_mbuf_dynflag {
+ char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the dynamic flag. */
+ unsigned int flags; /**< Reserved for future use, must be 0. */
+};
+
+/**
+ * Register space for a dynamic field in the mbuf structure.
+ *
+ * If the field is already registered (same name and parameters), its
+ * offset is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters (name, size,
+ * alignment constraint and flags).
+ * @return
+ * The offset in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, or flags).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: not enough room in mbuf.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name does not end with \0.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params);
+
+/**
+ * Register space for a dynamic field in the mbuf structure at offset.
+ *
+ * If the field is already registered (same name, parameters and offset),
+ * the offset is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters (name, size,
+ * alignment constraint and flags).
+ * @param offset
+ * The requested offset. Ignored if SIZE_MAX is passed.
+ * @return
+ * The offset in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, flags, or offset).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EBUSY: the requested offset cannot be used.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: not enough room in mbuf.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name does not end with \0.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params,
+ size_t offset);
+
+/**
+ * Lookup for a registered dynamic mbuf field.
+ *
+ * @param name
+ * A string identifying the dynamic field.
+ * @param params
+ * If not NULL, and if the lookup is successful, the structure is
+ * filled with the parameters of the dynamic field.
+ * @return
+ * The offset of this field in the mbuf structure, or -1 on error.
+ * Possible values for rte_errno:
+ * - ENOENT: no dynamic field matches this name.
+ */
+__rte_experimental
+int rte_mbuf_dynfield_lookup(const char *name,
+ struct rte_mbuf_dynfield *params);
+
+/**
+ * Register a dynamic flag in the mbuf structure.
+ *
+ * If the flag is already registered (same name and parameters), its
+ * bitnum is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters of the dynamic
+ * flag (name and options).
+ * @return
+ * The number of the reserved bit, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, or flags).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: no more flag available.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params);
+
+/**
+ * Register a dynamic flag in the mbuf structure specifying bitnum.
+ *
+ * If the flag is already registered (same name, parameters and bitnum),
+ * the bitnum is returned.
+ *
+ * @param params
+ * A structure containing the requested parameters of the dynamic
+ * flag (name and options).
+ * @param bitnum
+ * The requested bitnum. Ignored if UINT_MAX is passed.
+ * @return
+ * The number of the reserved bit, or -1 on error.
+ * Possible values for rte_errno:
+ * - EINVAL: invalid parameters (size, align, or flags).
+ * - EEXIST: this name is already registered with different parameters.
+ * - EBUSY: the requested bitnum cannot be used.
+ * - EPERM: called from a secondary process.
+ * - ENOENT: no more flag available.
+ * - ENOMEM: allocation failure.
+ * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_register_bitnum(const struct rte_mbuf_dynflag *params,
+ unsigned int bitnum);
+
+/**
+ * Lookup for a registered dynamic mbuf flag.
+ *
+ * @param name
+ * A string identifying the dynamic flag.
+ * @param params
+ * If not NULL, and if the lookup is successful, the structure is
+ * filled with the parameters of the dynamic flag.
+ * @return
+ * The bit number of this flag in mbuf->ol_flags, or -1 on error.
+ * Possible values for rte_errno:
+ * - ENOENT: no dynamic flag matches this name.
+ */
+__rte_experimental
+int rte_mbuf_dynflag_lookup(const char *name,
+ struct rte_mbuf_dynflag *params);
+
+/**
+ * Helper macro to access to a dynamic field.
+ */
+#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
+
+/**
+ * Dump the status of dynamic fields and flags.
+ *
+ * @param out
+ * The stream where the status is displayed.
+ */
+__rte_experimental
+void rte_mbuf_dyn_dump(FILE *out);
+
+/* Placeholder for dynamic fields and flags declarations. */
+
+#endif
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index 519fead35..9bf5ca37a 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -58,6 +58,13 @@ EXPERIMENTAL {
global:
rte_mbuf_check;
+ rte_mbuf_dynfield_lookup;
+ rte_mbuf_dynfield_register;
+ rte_mbuf_dynfield_register_offset;
+ rte_mbuf_dynflag_lookup;
+ rte_mbuf_dynflag_register;
+ rte_mbuf_dynflag_register_bitnum;
+ rte_mbuf_dyn_dump;
rte_pktmbuf_copy;
} DPDK_18.08;
--
2.20.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v4 10/10] buildtools: add ABI versioning check script
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (10 preceding siblings ...)
2019-10-17 14:31 2% ` [dpdk-dev] [PATCH v4 09/10] build: change ABI version to 20.0 Anatoly Burakov
@ 2019-10-17 14:32 23% ` Anatoly Burakov
11 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:32 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, john.mcnamara, bruce.richardson, thomas,
david.marchand, Pawel Modrak
From: Marcin Baran <marcinx.baran@intel.com>
Add a shell script that checks whether built libraries are
versioned with expected ABI (current ABI, current ABI + 1,
or EXPERIMENTAL).
The following command was used to verify current source tree
(assuming build directory is in ./build):
find ./build/lib ./build/drivers -name \*.so \
-exec ./buildtools/check-abi-version.sh {} \; -print
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v2:
- Moved this to the end of the patchset
- Fixed bug when ABI symbols were not found because the .so
did not declare any public symbols
buildtools/check-abi-version.sh | 54 +++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
create mode 100755 buildtools/check-abi-version.sh
diff --git a/buildtools/check-abi-version.sh b/buildtools/check-abi-version.sh
new file mode 100755
index 0000000000..29aea97735
--- /dev/null
+++ b/buildtools/check-abi-version.sh
@@ -0,0 +1,54 @@
+#!/bin/sh
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+# Check whether library symbols have correct
+# version (provided ABI number or provided ABI
+# number + 1 or EXPERIMENTAL).
+# Args:
+# $1: path of the library .so file
+# $2: ABI major version number to check
+# (defaults to ABI_VERSION file value)
+
+if [ -z "$1" ]; then
+ echo "Script checks whether library symbols have"
+ echo "correct version (ABI_VER/ABI_VER+1/EXPERIMENTAL)"
+ echo "Usage:"
+ echo " $0 SO_FILE_PATH [ABI_VER]"
+ exit 1
+fi
+
+LIB="$1"
+DEFAULT_ABI=$(cat "$(dirname \
+ $(readlink -f $0))/../config/ABI_VERSION" | \
+ cut -d'.' -f 1)
+ABIVER="DPDK_${2-$DEFAULT_ABI}"
+NEXT_ABIVER="DPDK_$((${2-$DEFAULT_ABI}+1))"
+
+ret=0
+
+# get output of objdump
+OBJ_DUMP_OUTPUT=`objdump -TC --section=.text ${LIB} 2>&1 | grep ".text"`
+
+# there may not be any .text sections in the .so file, in which case exit early
+echo "${OBJ_DUMP_OUTPUT}" | grep "not found in any input file" -q
+if [ "$?" -eq 0 ]; then
+ exit 0
+fi
+
+# we have symbols, so let's see if the versions are correct
+for SYM in `echo "${OBJ_DUMP_OUTPUT}" | awk '{print $(NF-1) "-" $NF}'`
+do
+ version=$(echo $SYM | cut -d'-' -f 1)
+ symbol=$(echo $SYM | cut -d'-' -f 2)
+ case $version in (*"$ABIVER"*|*"$NEXT_ABIVER"*|"EXPERIMENTAL")
+ ;;
+ (*)
+ echo "Warning: symbol $symbol ($version) should be annotated " \
+ "as ABI version $ABIVER / $NEXT_ABIVER, or EXPERIMENTAL."
+ ret=1
+ ;;
+ esac
+done
+
+exit $ret
--
2.17.1
^ permalink raw reply [relevance 23%]
* [dpdk-dev] [PATCH v4 08/10] drivers/octeontx: add missing public symbol
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (8 preceding siblings ...)
2019-10-17 14:31 6% ` [dpdk-dev] [PATCH v4 07/10] distributor: rename v2.0 ABI to _single suffix Anatoly Burakov
@ 2019-10-17 14:31 3% ` Anatoly Burakov
2019-10-17 14:31 2% ` [dpdk-dev] [PATCH v4 09/10] build: change ABI version to 20.0 Anatoly Burakov
2019-10-17 14:32 23% ` [dpdk-dev] [PATCH v4 10/10] buildtools: add ABI versioning check script Anatoly Burakov
11 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev
Cc: Jerin Jacob, john.mcnamara, bruce.richardson, thomas,
david.marchand, pbhagavatula, stable
The logtype symbol was missing from the .map file. Add it.
Fixes: d8dd31652cf4 ("common/octeontx: move mbox to common folder")
Cc: pbhagavatula@caviumnetworks.com
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v2:
- add this patch to avoid compile breakage when bumping ABI
drivers/common/octeontx/rte_common_octeontx_version.map | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index f04b3b7f8a..a9b3cff9bc 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,6 +1,7 @@
DPDK_18.05 {
global:
+ octeontx_logtype_mbox;
octeontx_mbox_set_ram_mbox_base;
octeontx_mbox_set_reg;
octeontx_mbox_send;
--
2.17.1
* [dpdk-dev] [PATCH v4 07/10] distributor: rename v2.0 ABI to _single suffix
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (7 preceding siblings ...)
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 06/10] distributor: " Anatoly Burakov
@ 2019-10-17 14:31 6% ` Anatoly Burakov
2019-10-17 16:00 4% ` Hunt, David
2019-10-17 14:31 3% ` [dpdk-dev] [PATCH v4 08/10] drivers/octeontx: add missing public symbol Anatoly Burakov
` (2 subsequent siblings)
11 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, David Hunt, john.mcnamara, bruce.richardson,
thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
The original ABI versioning was slightly misleading in that the
DPDK 2.0 ABI was really a single mode for the distributor, and is
used as such throughout the distributor code.
Fix this by renaming all _v20 APIs to _single APIs, and removing
symbol versioning.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v4:
- Changed it back to how it was with v2
- Removed remaining v2.0 symbols
v3:
- Removed single mode from distributor as per Dave's comments
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_distributor/Makefile | 2 +-
lib/librte_distributor/meson.build | 2 +-
lib/librte_distributor/rte_distributor.c | 24 ++++----
.../rte_distributor_private.h | 10 ++--
...ributor_v20.c => rte_distributor_single.c} | 57 ++++++++-----------
...ributor_v20.h => rte_distributor_single.h} | 26 ++++-----
.../rte_distributor_version.map | 18 +-----
7 files changed, 58 insertions(+), 81 deletions(-)
rename lib/librte_distributor/{rte_distributor_v20.c => rte_distributor_single.c} (84%)
rename lib/librte_distributor/{rte_distributor_v20.h => rte_distributor_single.h} (89%)
diff --git a/lib/librte_distributor/Makefile b/lib/librte_distributor/Makefile
index 0ef80dcff4..d9d0089166 100644
--- a/lib/librte_distributor/Makefile
+++ b/lib/librte_distributor/Makefile
@@ -15,7 +15,7 @@ EXPORT_MAP := rte_distributor_version.map
LIBABIVER := 1
# all source are stored in SRCS-y
-SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) := rte_distributor_v20.c
+SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) := rte_distributor_single.c
SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += rte_distributor.c
ifeq ($(CONFIG_RTE_ARCH_X86),y)
SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += rte_distributor_match_sse.c
diff --git a/lib/librte_distributor/meson.build b/lib/librte_distributor/meson.build
index dba7e3b2aa..bd12ddb2f1 100644
--- a/lib/librte_distributor/meson.build
+++ b/lib/librte_distributor/meson.build
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017 Intel Corporation
-sources = files('rte_distributor.c', 'rte_distributor_v20.c')
+sources = files('rte_distributor.c', 'rte_distributor_single.c')
if arch_subdir == 'x86'
sources += files('rte_distributor_match_sse.c')
else
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index ca3f21b833..b4fc0bfead 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -18,7 +18,7 @@
#include "rte_distributor_private.h"
#include "rte_distributor.h"
-#include "rte_distributor_v20.h"
+#include "rte_distributor_single.h"
TAILQ_HEAD(rte_dist_burst_list, rte_distributor);
@@ -42,7 +42,7 @@ rte_distributor_request_pkt(struct rte_distributor *d,
volatile int64_t *retptr64;
if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
- rte_distributor_request_pkt_v20(d->d_v20,
+ rte_distributor_request_pkt_single(d->d_single,
worker_id, oldpkt[0]);
return;
}
@@ -88,7 +88,8 @@ rte_distributor_poll_pkt(struct rte_distributor *d,
unsigned int i;
if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
- pkts[0] = rte_distributor_poll_pkt_v20(d->d_v20, worker_id);
+ pkts[0] = rte_distributor_poll_pkt_single(d->d_single,
+ worker_id);
return (pkts[0]) ? 1 : 0;
}
@@ -123,7 +124,7 @@ rte_distributor_get_pkt(struct rte_distributor *d,
if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
if (return_count <= 1) {
- pkts[0] = rte_distributor_get_pkt_v20(d->d_v20,
+ pkts[0] = rte_distributor_get_pkt_single(d->d_single,
worker_id, oldpkt[0]);
return (pkts[0]) ? 1 : 0;
} else
@@ -153,7 +154,7 @@ rte_distributor_return_pkt(struct rte_distributor *d,
if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
if (num == 1)
- return rte_distributor_return_pkt_v20(d->d_v20,
+ return rte_distributor_return_pkt_single(d->d_single,
worker_id, oldpkt[0]);
else
return -EINVAL;
@@ -330,7 +331,8 @@ rte_distributor_process(struct rte_distributor *d,
if (d->alg_type == RTE_DIST_ALG_SINGLE) {
/* Call the old API */
- return rte_distributor_process_v20(d->d_v20, mbufs, num_mbufs);
+ return rte_distributor_process_single(d->d_single,
+ mbufs, num_mbufs);
}
if (unlikely(num_mbufs == 0)) {
@@ -464,7 +466,7 @@ rte_distributor_returned_pkts(struct rte_distributor *d,
if (d->alg_type == RTE_DIST_ALG_SINGLE) {
/* Call the old API */
- return rte_distributor_returned_pkts_v20(d->d_v20,
+ return rte_distributor_returned_pkts_single(d->d_single,
mbufs, max_mbufs);
}
@@ -507,7 +509,7 @@ rte_distributor_flush(struct rte_distributor *d)
if (d->alg_type == RTE_DIST_ALG_SINGLE) {
/* Call the old API */
- return rte_distributor_flush_v20(d->d_v20);
+ return rte_distributor_flush_single(d->d_single);
}
flushed = total_outstanding(d);
@@ -538,7 +540,7 @@ rte_distributor_clear_returns(struct rte_distributor *d)
if (d->alg_type == RTE_DIST_ALG_SINGLE) {
/* Call the old API */
- rte_distributor_clear_returns_v20(d->d_v20);
+ rte_distributor_clear_returns_single(d->d_single);
return;
}
@@ -578,9 +580,9 @@ rte_distributor_create(const char *name,
rte_errno = ENOMEM;
return NULL;
}
- d->d_v20 = rte_distributor_create_v20(name,
+ d->d_single = rte_distributor_create_single(name,
socket_id, num_workers);
- if (d->d_v20 == NULL) {
+ if (d->d_single == NULL) {
free(d);
/* rte_errno will have been set */
return NULL;
diff --git a/lib/librte_distributor/rte_distributor_private.h b/lib/librte_distributor/rte_distributor_private.h
index 33cd89410c..bdb62b6e92 100644
--- a/lib/librte_distributor/rte_distributor_private.h
+++ b/lib/librte_distributor/rte_distributor_private.h
@@ -55,7 +55,7 @@ extern "C" {
* the next cache line to worker 0, we pad this out to three cache lines.
* Only 64-bits of the memory is actually used though.
*/
-union rte_distributor_buffer_v20 {
+union rte_distributor_buffer_single {
volatile int64_t bufptr64;
char pad[RTE_CACHE_LINE_SIZE*3];
} __rte_cache_aligned;
@@ -80,8 +80,8 @@ struct rte_distributor_returned_pkts {
struct rte_mbuf *mbufs[RTE_DISTRIB_MAX_RETURNS];
};
-struct rte_distributor_v20 {
- TAILQ_ENTRY(rte_distributor_v20) next; /**< Next in list. */
+struct rte_distributor_single {
+ TAILQ_ENTRY(rte_distributor_single) next; /**< Next in list. */
char name[RTE_DISTRIBUTOR_NAMESIZE]; /**< Name of the ring. */
unsigned int num_workers; /**< Number of workers polling */
@@ -96,7 +96,7 @@ struct rte_distributor_v20 {
struct rte_distributor_backlog backlog[RTE_DISTRIB_MAX_WORKERS];
- union rte_distributor_buffer_v20 bufs[RTE_DISTRIB_MAX_WORKERS];
+ union rte_distributor_buffer_single bufs[RTE_DISTRIB_MAX_WORKERS];
struct rte_distributor_returned_pkts returns;
};
@@ -154,7 +154,7 @@ struct rte_distributor {
enum rte_distributor_match_function dist_match_fn;
- struct rte_distributor_v20 *d_v20;
+ struct rte_distributor_single *d_single;
};
void
diff --git a/lib/librte_distributor/rte_distributor_v20.c b/lib/librte_distributor/rte_distributor_single.c
similarity index 84%
rename from lib/librte_distributor/rte_distributor_v20.c
rename to lib/librte_distributor/rte_distributor_single.c
index cdc0969a89..9a6ef826c9 100644
--- a/lib/librte_distributor/rte_distributor_v20.c
+++ b/lib/librte_distributor/rte_distributor_single.c
@@ -15,10 +15,10 @@
#include <rte_pause.h>
#include <rte_tailq.h>
-#include "rte_distributor_v20.h"
+#include "rte_distributor_single.h"
#include "rte_distributor_private.h"
-TAILQ_HEAD(rte_distributor_list, rte_distributor_v20);
+TAILQ_HEAD(rte_distributor_list, rte_distributor_single);
static struct rte_tailq_elem rte_distributor_tailq = {
.name = "RTE_DISTRIBUTOR",
@@ -28,23 +28,22 @@ EAL_REGISTER_TAILQ(rte_distributor_tailq)
/**** APIs called by workers ****/
void
-rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_request_pkt_single(struct rte_distributor_single *d,
unsigned worker_id, struct rte_mbuf *oldpkt)
{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
+ union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_GET_BUF;
while (unlikely(buf->bufptr64 & RTE_DISTRIB_FLAGS_MASK))
rte_pause();
buf->bufptr64 = req;
}
-VERSION_SYMBOL(rte_distributor_request_pkt, _v20, 2.0);
struct rte_mbuf *
-rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_poll_pkt_single(struct rte_distributor_single *d,
unsigned worker_id)
{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
+ union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
if (buf->bufptr64 & RTE_DISTRIB_GET_BUF)
return NULL;
@@ -52,31 +51,28 @@ rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
int64_t ret = buf->bufptr64 >> RTE_DISTRIB_FLAG_BITS;
return (struct rte_mbuf *)((uintptr_t)ret);
}
-VERSION_SYMBOL(rte_distributor_poll_pkt, _v20, 2.0);
struct rte_mbuf *
-rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_get_pkt_single(struct rte_distributor_single *d,
unsigned worker_id, struct rte_mbuf *oldpkt)
{
struct rte_mbuf *ret;
- rte_distributor_request_pkt_v20(d, worker_id, oldpkt);
- while ((ret = rte_distributor_poll_pkt_v20(d, worker_id)) == NULL)
+ rte_distributor_request_pkt_single(d, worker_id, oldpkt);
+ while ((ret = rte_distributor_poll_pkt_single(d, worker_id)) == NULL)
rte_pause();
return ret;
}
-VERSION_SYMBOL(rte_distributor_get_pkt, _v20, 2.0);
int
-rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_return_pkt_single(struct rte_distributor_single *d,
unsigned worker_id, struct rte_mbuf *oldpkt)
{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
+ union rte_distributor_buffer_single *buf = &d->bufs[worker_id];
uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
| RTE_DISTRIB_RETURN_BUF;
buf->bufptr64 = req;
return 0;
}
-VERSION_SYMBOL(rte_distributor_return_pkt, _v20, 2.0);
/**** APIs called on distributor core ***/
@@ -102,7 +98,7 @@ backlog_pop(struct rte_distributor_backlog *bl)
/* stores a packet returned from a worker inside the returns array */
static inline void
-store_return(uintptr_t oldbuf, struct rte_distributor_v20 *d,
+store_return(uintptr_t oldbuf, struct rte_distributor_single *d,
unsigned *ret_start, unsigned *ret_count)
{
/* store returns in a circular buffer - code is branch-free */
@@ -113,7 +109,7 @@ store_return(uintptr_t oldbuf, struct rte_distributor_v20 *d,
}
static inline void
-handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
+handle_worker_shutdown(struct rte_distributor_single *d, unsigned int wkr)
{
d->in_flight_tags[wkr] = 0;
d->in_flight_bitmask &= ~(1UL << wkr);
@@ -143,7 +139,7 @@ handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
* Note that the tags were set before first level call
* to rte_distributor_process.
*/
- rte_distributor_process_v20(d, pkts, i);
+ rte_distributor_process_single(d, pkts, i);
bl->count = bl->start = 0;
}
}
@@ -153,7 +149,7 @@ handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
* to do a partial flush.
*/
static int
-process_returns(struct rte_distributor_v20 *d)
+process_returns(struct rte_distributor_single *d)
{
unsigned wkr;
unsigned flushed = 0;
@@ -192,7 +188,7 @@ process_returns(struct rte_distributor_v20 *d)
/* process a set of packets to distribute them to workers */
int
-rte_distributor_process_v20(struct rte_distributor_v20 *d,
+rte_distributor_process_single(struct rte_distributor_single *d,
struct rte_mbuf **mbufs, unsigned num_mbufs)
{
unsigned next_idx = 0;
@@ -293,11 +289,10 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
d->returns.count = ret_count;
return num_mbufs;
}
-VERSION_SYMBOL(rte_distributor_process, _v20, 2.0);
/* return to the caller, packets returned from workers */
int
-rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
+rte_distributor_returned_pkts_single(struct rte_distributor_single *d,
struct rte_mbuf **mbufs, unsigned max_mbufs)
{
struct rte_distributor_returned_pkts *returns = &d->returns;
@@ -314,13 +309,12 @@ rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
return retval;
}
-VERSION_SYMBOL(rte_distributor_returned_pkts, _v20, 2.0);
/* return the number of packets in-flight in a distributor, i.e. packets
* being worked on or queued up in a backlog.
*/
static inline unsigned
-total_outstanding(const struct rte_distributor_v20 *d)
+total_outstanding(const struct rte_distributor_single *d)
{
unsigned wkr, total_outstanding;
@@ -335,35 +329,33 @@ total_outstanding(const struct rte_distributor_v20 *d)
/* flush the distributor, so that there are no outstanding packets in flight or
* queued up. */
int
-rte_distributor_flush_v20(struct rte_distributor_v20 *d)
+rte_distributor_flush_single(struct rte_distributor_single *d)
{
const unsigned flushed = total_outstanding(d);
while (total_outstanding(d) > 0)
- rte_distributor_process_v20(d, NULL, 0);
+ rte_distributor_process_single(d, NULL, 0);
return flushed;
}
-VERSION_SYMBOL(rte_distributor_flush, _v20, 2.0);
/* clears the internal returns array in the distributor */
void
-rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d)
+rte_distributor_clear_returns_single(struct rte_distributor_single *d)
{
d->returns.start = d->returns.count = 0;
#ifndef __OPTIMIZE__
memset(d->returns.mbufs, 0, sizeof(d->returns.mbufs));
#endif
}
-VERSION_SYMBOL(rte_distributor_clear_returns, _v20, 2.0);
/* creates a distributor instance */
-struct rte_distributor_v20 *
-rte_distributor_create_v20(const char *name,
+struct rte_distributor_single *
+rte_distributor_create_single(const char *name,
unsigned socket_id,
unsigned num_workers)
{
- struct rte_distributor_v20 *d;
+ struct rte_distributor_single *d;
struct rte_distributor_list *distributor_list;
char mz_name[RTE_MEMZONE_NAMESIZE];
const struct rte_memzone *mz;
@@ -399,4 +391,3 @@ rte_distributor_create_v20(const char *name,
return d;
}
-VERSION_SYMBOL(rte_distributor_create, _v20, 2.0);
diff --git a/lib/librte_distributor/rte_distributor_v20.h b/lib/librte_distributor/rte_distributor_single.h
similarity index 89%
rename from lib/librte_distributor/rte_distributor_v20.h
rename to lib/librte_distributor/rte_distributor_single.h
index 12865658ba..2f80aa43d1 100644
--- a/lib/librte_distributor/rte_distributor_v20.h
+++ b/lib/librte_distributor/rte_distributor_single.h
@@ -2,8 +2,8 @@
* Copyright(c) 2010-2014 Intel Corporation
*/
-#ifndef _RTE_DISTRIB_V20_H_
-#define _RTE_DISTRIB_V20_H_
+#ifndef _RTE_DISTRIB_SINGLE_H_
+#define _RTE_DISTRIB_SINGLE_H_
/**
* @file
@@ -19,7 +19,7 @@ extern "C" {
#define RTE_DISTRIBUTOR_NAMESIZE 32 /**< Length of name for instance */
-struct rte_distributor_v20;
+struct rte_distributor_single;
struct rte_mbuf;
/**
@@ -38,8 +38,8 @@ struct rte_mbuf;
* @return
* The newly created distributor instance
*/
-struct rte_distributor_v20 *
-rte_distributor_create_v20(const char *name, unsigned int socket_id,
+struct rte_distributor_single *
+rte_distributor_create_single(const char *name, unsigned int socket_id,
unsigned int num_workers);
/* *** APIS to be called on the distributor lcore *** */
@@ -74,7 +74,7 @@ rte_distributor_create_v20(const char *name, unsigned int socket_id,
* The number of mbufs processed.
*/
int
-rte_distributor_process_v20(struct rte_distributor_v20 *d,
+rte_distributor_process_single(struct rte_distributor_single *d,
struct rte_mbuf **mbufs, unsigned int num_mbufs);
/**
@@ -92,7 +92,7 @@ rte_distributor_process_v20(struct rte_distributor_v20 *d,
* The number of mbufs returned in the mbufs array.
*/
int
-rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
+rte_distributor_returned_pkts_single(struct rte_distributor_single *d,
struct rte_mbuf **mbufs, unsigned int max_mbufs);
/**
@@ -107,7 +107,7 @@ rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
* The number of queued/in-flight packets that were completed by this call.
*/
int
-rte_distributor_flush_v20(struct rte_distributor_v20 *d);
+rte_distributor_flush_single(struct rte_distributor_single *d);
/**
* Clears the array of returned packets used as the source for the
@@ -119,7 +119,7 @@ rte_distributor_flush_v20(struct rte_distributor_v20 *d);
* The distributor instance to be used
*/
void
-rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d);
+rte_distributor_clear_returns_single(struct rte_distributor_single *d);
/* *** APIS to be called on the worker lcores *** */
/*
@@ -148,7 +148,7 @@ rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d);
* A new packet to be processed by the worker thread.
*/
struct rte_mbuf *
-rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_get_pkt_single(struct rte_distributor_single *d,
unsigned int worker_id, struct rte_mbuf *oldpkt);
/**
@@ -164,7 +164,7 @@ rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
* The previous packet being processed by the worker
*/
int
-rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_return_pkt_single(struct rte_distributor_single *d,
unsigned int worker_id, struct rte_mbuf *mbuf);
/**
@@ -188,7 +188,7 @@ rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
* The previous packet, if any, being processed by the worker
*/
void
-rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_request_pkt_single(struct rte_distributor_single *d,
unsigned int worker_id, struct rte_mbuf *oldpkt);
/**
@@ -208,7 +208,7 @@ rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
* packet is yet available.
*/
struct rte_mbuf *
-rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
+rte_distributor_poll_pkt_single(struct rte_distributor_single *d,
unsigned int worker_id);
#ifdef __cplusplus
diff --git a/lib/librte_distributor/rte_distributor_version.map b/lib/librte_distributor/rte_distributor_version.map
index 3a285b394e..00e26b4804 100644
--- a/lib/librte_distributor/rte_distributor_version.map
+++ b/lib/librte_distributor/rte_distributor_version.map
@@ -1,19 +1,3 @@
-DPDK_2.0 {
- global:
-
- rte_distributor_clear_returns;
- rte_distributor_create;
- rte_distributor_flush;
- rte_distributor_get_pkt;
- rte_distributor_poll_pkt;
- rte_distributor_process;
- rte_distributor_request_pkt;
- rte_distributor_return_pkt;
- rte_distributor_returned_pkts;
-
- local: *;
-};
-
DPDK_17.05 {
global:
@@ -26,4 +10,4 @@ DPDK_17.05 {
rte_distributor_request_pkt;
rte_distributor_return_pkt;
rte_distributor_returned_pkts;
-} DPDK_2.0;
+};
--
2.17.1
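The mechanical part of a rename like the one above is a plain suffix substitution over the affected sources (the VERSION_SYMBOL removals still need doing by hand). A throwaway sketch, assuming GNU sed and using an invented file:

```shell
#!/bin/sh
# Demonstrate the _v20 -> _single suffix rename on a scratch file.
cat > demo.c <<'EOF'
struct rte_distributor_v20 *d_v20;
union rte_distributor_buffer_v20 bufs[4];
EOF
sed -i 's/_v20/_single/g' demo.c
cat demo.c
```

After the substitution the file reads `struct rte_distributor_single *d_single;` and so on, matching the renames in the diff.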
* [dpdk-dev] [PATCH v4 06/10] distributor: remove deprecated code
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (6 preceding siblings ...)
2019-10-17 14:31 2% ` [dpdk-dev] [PATCH v4 05/10] lpm: " Anatoly Burakov
@ 2019-10-17 14:31 4% ` Anatoly Burakov
2019-10-17 15:59 0% ` Hunt, David
2019-10-17 14:31 6% ` [dpdk-dev] [PATCH v4 07/10] distributor: rename v2.0 ABI to _single suffix Anatoly Burakov
` (3 subsequent siblings)
11 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, David Hunt, john.mcnamara, bruce.richardson,
thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_distributor/rte_distributor.c | 56 +++--------------
.../rte_distributor_v1705.h | 61 -------------------
2 files changed, 9 insertions(+), 108 deletions(-)
delete mode 100644 lib/librte_distributor/rte_distributor_v1705.h
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index 21eb1fb0a1..ca3f21b833 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -19,7 +19,6 @@
#include "rte_distributor_private.h"
#include "rte_distributor.h"
#include "rte_distributor_v20.h"
-#include "rte_distributor_v1705.h"
TAILQ_HEAD(rte_dist_burst_list, rte_distributor);
@@ -33,7 +32,7 @@ EAL_REGISTER_TAILQ(rte_dist_burst_tailq)
/**** Burst Packet APIs called by workers ****/
void
-rte_distributor_request_pkt_v1705(struct rte_distributor *d,
+rte_distributor_request_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **oldpkt,
unsigned int count)
{
@@ -78,14 +77,9 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
*/
*retptr64 |= RTE_DISTRIB_GET_BUF;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_request_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(void rte_distributor_request_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt,
- unsigned int count),
- rte_distributor_request_pkt_v1705);
int
-rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
+rte_distributor_poll_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **pkts)
{
struct rte_distributor_buffer *buf = &d->bufs[worker_id];
@@ -119,13 +113,9 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
return count;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_poll_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_poll_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts),
- rte_distributor_poll_pkt_v1705);
int
-rte_distributor_get_pkt_v1705(struct rte_distributor *d,
+rte_distributor_get_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **pkts,
struct rte_mbuf **oldpkt, unsigned int return_count)
{
@@ -153,14 +143,9 @@ rte_distributor_get_pkt_v1705(struct rte_distributor *d,
}
return count;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_get_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_get_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts,
- struct rte_mbuf **oldpkt, unsigned int return_count),
- rte_distributor_get_pkt_v1705);
int
-rte_distributor_return_pkt_v1705(struct rte_distributor *d,
+rte_distributor_return_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **oldpkt, int num)
{
struct rte_distributor_buffer *buf = &d->bufs[worker_id];
@@ -187,10 +172,6 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_return_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_return_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt, int num),
- rte_distributor_return_pkt_v1705);
/**** APIs called on distributor core ***/
@@ -336,7 +317,7 @@ release(struct rte_distributor *d, unsigned int wkr)
/* process a set of packets to distribute them to workers */
int
-rte_distributor_process_v1705(struct rte_distributor *d,
+rte_distributor_process(struct rte_distributor *d,
struct rte_mbuf **mbufs, unsigned int num_mbufs)
{
unsigned int next_idx = 0;
@@ -470,14 +451,10 @@ rte_distributor_process_v1705(struct rte_distributor *d,
return num_mbufs;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_process, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_process(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs),
- rte_distributor_process_v1705);
/* return to the caller, packets returned from workers */
int
-rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
+rte_distributor_returned_pkts(struct rte_distributor *d,
struct rte_mbuf **mbufs, unsigned int max_mbufs)
{
struct rte_distributor_returned_pkts *returns = &d->returns;
@@ -502,10 +479,6 @@ rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
return retval;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_returned_pkts, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_returned_pkts(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs),
- rte_distributor_returned_pkts_v1705);
/*
* Return the number of packets in-flight in a distributor, i.e. packets
@@ -527,7 +500,7 @@ total_outstanding(const struct rte_distributor *d)
* queued up.
*/
int
-rte_distributor_flush_v1705(struct rte_distributor *d)
+rte_distributor_flush(struct rte_distributor *d)
{
unsigned int flushed;
unsigned int wkr;
@@ -556,13 +529,10 @@ rte_distributor_flush_v1705(struct rte_distributor *d)
return flushed;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_flush, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_flush(struct rte_distributor *d),
- rte_distributor_flush_v1705);
/* clears the internal returns array in the distributor */
void
-rte_distributor_clear_returns_v1705(struct rte_distributor *d)
+rte_distributor_clear_returns(struct rte_distributor *d)
{
unsigned int wkr;
@@ -576,13 +546,10 @@ rte_distributor_clear_returns_v1705(struct rte_distributor *d)
for (wkr = 0; wkr < d->num_workers; wkr++)
d->bufs[wkr].retptr64[0] = 0;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_clear_returns, _v1705, 17.05);
-MAP_STATIC_SYMBOL(void rte_distributor_clear_returns(struct rte_distributor *d),
- rte_distributor_clear_returns_v1705);
/* creates a distributor instance */
struct rte_distributor *
-rte_distributor_create_v1705(const char *name,
+rte_distributor_create(const char *name,
unsigned int socket_id,
unsigned int num_workers,
unsigned int alg_type)
@@ -656,8 +623,3 @@ rte_distributor_create_v1705(const char *name,
return d;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_create, _v1705, 17.05);
-MAP_STATIC_SYMBOL(struct rte_distributor *rte_distributor_create(
- const char *name, unsigned int socket_id,
- unsigned int num_workers, unsigned int alg_type),
- rte_distributor_create_v1705);
diff --git a/lib/librte_distributor/rte_distributor_v1705.h b/lib/librte_distributor/rte_distributor_v1705.h
deleted file mode 100644
index df4d9e8150..0000000000
--- a/lib/librte_distributor/rte_distributor_v1705.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Intel Corporation
- */
-
-#ifndef _RTE_DISTRIB_V1705_H_
-#define _RTE_DISTRIB_V1705_H_
-
-/**
- * @file
- * RTE distributor
- *
- * The distributor is a component which is designed to pass packets
- * one-at-a-time to workers, with dynamic load balancing.
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-struct rte_distributor *
-rte_distributor_create_v1705(const char *name, unsigned int socket_id,
- unsigned int num_workers,
- unsigned int alg_type);
-
-int
-rte_distributor_process_v1705(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs);
-
-int
-rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs);
-
-int
-rte_distributor_flush_v1705(struct rte_distributor *d);
-
-void
-rte_distributor_clear_returns_v1705(struct rte_distributor *d);
-
-int
-rte_distributor_get_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts,
- struct rte_mbuf **oldpkt, unsigned int retcount);
-
-int
-rte_distributor_return_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt, int num);
-
-void
-rte_distributor_request_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt,
- unsigned int count);
-
-int
-rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **mbufs);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
--
2.17.1
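After stripping the versioned implementations, a quick sanity grep can confirm no stale references survive in a map file. A sketch on a hand-written fragment (contents abridged and illustrative only):

```shell
#!/bin/sh
# Write an abridged post-cleanup map file, then scan it for leftovers.
cat > rte_distributor_version.map <<'EOF'
DPDK_17.05 {
	global:
	rte_distributor_create;
	rte_distributor_process;
};
EOF
if grep -Eq 'DPDK_2\.0|_v20|_v1705' rte_distributor_version.map; then
	echo "stale ABI references found"
else
	echo "map file clean"
fi
```

The same grep run over the real tree would flag any `_v20`/`_v1705` helper or `DPDK_2.0` version node the cleanup missed.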
* [dpdk-dev] [PATCH v4 05/10] lpm: remove deprecated code
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (5 preceding siblings ...)
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code Anatoly Burakov
@ 2019-10-17 14:31 2% ` Anatoly Burakov
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 06/10] distributor: " Anatoly Burakov
` (4 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Bruce Richardson, Vladimir Medvedkin,
john.mcnamara, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_lpm/rte_lpm.c | 996 ++------------------------------------
lib/librte_lpm/rte_lpm.h | 88 ----
lib/librte_lpm/rte_lpm6.c | 132 +----
lib/librte_lpm/rte_lpm6.h | 25 -
4 files changed, 48 insertions(+), 1193 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 3a929a1b16..2687564194 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -89,34 +89,8 @@ depth_to_range(uint8_t depth)
/*
* Find an existing lpm table and return a pointer to it.
*/
-struct rte_lpm_v20 *
-rte_lpm_find_existing_v20(const char *name)
-{
- struct rte_lpm_v20 *l = NULL;
- struct rte_tailq_entry *te;
- struct rte_lpm_list *lpm_list;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- rte_mcfg_tailq_read_lock();
- TAILQ_FOREACH(te, lpm_list, next) {
- l = te->data;
- if (strncmp(name, l->name, RTE_LPM_NAMESIZE) == 0)
- break;
- }
- rte_mcfg_tailq_read_unlock();
-
- if (te == NULL) {
- rte_errno = ENOENT;
- return NULL;
- }
-
- return l;
-}
-VERSION_SYMBOL(rte_lpm_find_existing, _v20, 2.0);
-
struct rte_lpm *
-rte_lpm_find_existing_v1604(const char *name)
+rte_lpm_find_existing(const char *name)
{
struct rte_lpm *l = NULL;
struct rte_tailq_entry *te;
@@ -139,88 +113,12 @@ rte_lpm_find_existing_v1604(const char *name)
return l;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_find_existing, _v1604, 16.04);
-MAP_STATIC_SYMBOL(struct rte_lpm *rte_lpm_find_existing(const char *name),
- rte_lpm_find_existing_v1604);
/*
* Allocates memory for LPM object
*/
-struct rte_lpm_v20 *
-rte_lpm_create_v20(const char *name, int socket_id, int max_rules,
- __rte_unused int flags)
-{
- char mem_name[RTE_LPM_NAMESIZE];
- struct rte_lpm_v20 *lpm = NULL;
- struct rte_tailq_entry *te;
- uint32_t mem_size;
- struct rte_lpm_list *lpm_list;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry_v20) != 2);
-
- /* Check user arguments. */
- if ((name == NULL) || (socket_id < -1) || (max_rules == 0)) {
- rte_errno = EINVAL;
- return NULL;
- }
-
- snprintf(mem_name, sizeof(mem_name), "LPM_%s", name);
-
- /* Determine the amount of memory to allocate. */
- mem_size = sizeof(*lpm) + (sizeof(lpm->rules_tbl[0]) * max_rules);
-
- rte_mcfg_tailq_write_lock();
-
- /* guarantee there's no existing */
- TAILQ_FOREACH(te, lpm_list, next) {
- lpm = te->data;
- if (strncmp(name, lpm->name, RTE_LPM_NAMESIZE) == 0)
- break;
- }
-
- if (te != NULL) {
- lpm = NULL;
- rte_errno = EEXIST;
- goto exit;
- }
-
- /* allocate tailq entry */
- te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0);
- if (te == NULL) {
- RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n");
- rte_errno = ENOMEM;
- goto exit;
- }
-
- /* Allocate memory to store the LPM data structures. */
- lpm = rte_zmalloc_socket(mem_name, mem_size,
- RTE_CACHE_LINE_SIZE, socket_id);
- if (lpm == NULL) {
- RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
- rte_free(te);
- rte_errno = ENOMEM;
- goto exit;
- }
-
- /* Save user arguments. */
- lpm->max_rules = max_rules;
- strlcpy(lpm->name, name, sizeof(lpm->name));
-
- te->data = lpm;
-
- TAILQ_INSERT_TAIL(lpm_list, te, next);
-
-exit:
- rte_mcfg_tailq_write_unlock();
-
- return lpm;
-}
-VERSION_SYMBOL(rte_lpm_create, _v20, 2.0);
-
struct rte_lpm *
-rte_lpm_create_v1604(const char *name, int socket_id,
+rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config)
{
char mem_name[RTE_LPM_NAMESIZE];
@@ -320,45 +218,12 @@ rte_lpm_create_v1604(const char *name, int socket_id,
return lpm;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_create, _v1604, 16.04);
-MAP_STATIC_SYMBOL(
- struct rte_lpm *rte_lpm_create(const char *name, int socket_id,
- const struct rte_lpm_config *config), rte_lpm_create_v1604);
/*
* Deallocates memory for given LPM table.
*/
void
-rte_lpm_free_v20(struct rte_lpm_v20 *lpm)
-{
- struct rte_lpm_list *lpm_list;
- struct rte_tailq_entry *te;
-
- /* Check user arguments. */
- if (lpm == NULL)
- return;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- rte_mcfg_tailq_write_lock();
-
- /* find our tailq entry */
- TAILQ_FOREACH(te, lpm_list, next) {
- if (te->data == (void *) lpm)
- break;
- }
- if (te != NULL)
- TAILQ_REMOVE(lpm_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- rte_free(lpm);
- rte_free(te);
-}
-VERSION_SYMBOL(rte_lpm_free, _v20, 2.0);
-
-void
-rte_lpm_free_v1604(struct rte_lpm *lpm)
+rte_lpm_free(struct rte_lpm *lpm)
{
struct rte_lpm_list *lpm_list;
struct rte_tailq_entry *te;
@@ -386,9 +251,6 @@ rte_lpm_free_v1604(struct rte_lpm *lpm)
rte_free(lpm);
rte_free(te);
}
-BIND_DEFAULT_SYMBOL(rte_lpm_free, _v1604, 16.04);
-MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
- rte_lpm_free_v1604);
/*
* Adds a rule to the rule table.
@@ -401,79 +263,7 @@ MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t rule_gindex, rule_index, last_rule;
- int i;
-
- VERIFY_DEPTH(depth);
-
- /* Scan through rule group to see if rule already exists. */
- if (lpm->rule_info[depth - 1].used_rules > 0) {
-
- /* rule_gindex stands for rule group index. */
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- /* Initialise rule_index to point to start of rule group. */
- rule_index = rule_gindex;
- /* Last rule = Last used rule in this rule group. */
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- for (; rule_index < last_rule; rule_index++) {
-
- /* If rule already exists update its next_hop and return. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked) {
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- return rule_index;
- }
- }
-
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
- } else {
- /* Calculate the position in which the rule will be stored. */
- rule_index = 0;
-
- for (i = depth - 1; i > 0; i--) {
- if (lpm->rule_info[i - 1].used_rules > 0) {
- rule_index = lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules;
- break;
- }
- }
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
-
- lpm->rule_info[depth - 1].first_rule = rule_index;
- }
-
- /* Make room for the new rule in the array. */
- for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
- if (lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
- return -ENOSPC;
-
- if (lpm->rule_info[i - 1].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules]
- = lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
- lpm->rule_info[i - 1].first_rule++;
- }
- }
-
- /* Add the new rule. */
- lpm->rules_tbl[rule_index].ip = ip_masked;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- /* Increment the used rules counter for this rule group. */
- lpm->rule_info[depth - 1].used_rules++;
-
- return rule_index;
-}
-
-static int32_t
-rule_add_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
uint32_t rule_gindex, rule_index, last_rule;
@@ -549,30 +339,7 @@ rule_add_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static void
-rule_delete_v20(struct rte_lpm_v20 *lpm, int32_t rule_index, uint8_t depth)
-{
- int i;
-
- VERIFY_DEPTH(depth);
-
- lpm->rules_tbl[rule_index] =
- lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
- + lpm->rule_info[depth - 1].used_rules - 1];
-
- for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
- if (lpm->rule_info[i].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
- lpm->rules_tbl[lpm->rule_info[i].first_rule
- + lpm->rule_info[i].used_rules - 1];
- lpm->rule_info[i].first_rule--;
- }
- }
-
- lpm->rule_info[depth - 1].used_rules--;
-}
-
-static void
-rule_delete_v1604(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
+rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
{
int i;
@@ -599,28 +366,7 @@ rule_delete_v1604(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_find_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth)
-{
- uint32_t rule_gindex, last_rule, rule_index;
-
- VERIFY_DEPTH(depth);
-
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- /* Scan used rules at given depth to find rule. */
- for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
- /* If rule is found return the rule index. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked)
- return rule_index;
- }
-
- /* If rule is not found return -EINVAL. */
- return -EINVAL;
-}
-
-static int32_t
-rule_find_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
{
uint32_t rule_gindex, last_rule, rule_index;
@@ -644,42 +390,7 @@ rule_find_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
* Find, clean and allocate a tbl8.
*/
static int32_t
-tbl8_alloc_v20(struct rte_lpm_tbl_entry_v20 *tbl8)
-{
- uint32_t group_idx; /* tbl8 group index. */
- struct rte_lpm_tbl_entry_v20 *tbl8_entry;
-
- /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
- for (group_idx = 0; group_idx < RTE_LPM_TBL8_NUM_GROUPS;
- group_idx++) {
- tbl8_entry = &tbl8[group_idx * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
- /* If a free tbl8 group is found clean it and set as VALID. */
- if (!tbl8_entry->valid_group) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = VALID,
- };
- new_tbl8_entry.next_hop = 0;
-
- memset(&tbl8_entry[0], 0,
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
- sizeof(tbl8_entry[0]));
-
- __atomic_store(tbl8_entry, &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- /* Return group index for allocated tbl8 group. */
- return group_idx;
- }
- }
-
- /* If there are no tbl8 groups free then return error. */
- return -ENOSPC;
-}
-
-static int32_t
-tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
+tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
{
uint32_t group_idx; /* tbl8 group index. */
struct rte_lpm_tbl_entry *tbl8_entry;
@@ -713,22 +424,7 @@ tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
}
static void
-tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start)
-{
- /* Set tbl8 group invalid*/
- struct rte_lpm_tbl_entry_v20 zero_tbl8_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = INVALID,
- };
- zero_tbl8_entry.next_hop = 0;
-
- __atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
- __ATOMIC_RELAXED);
-}
-
-static void
-tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
{
/* Set tbl8 group invalid*/
struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
@@ -738,78 +434,7 @@ tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
}
static __rte_noinline int32_t
-add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
-
- /* Calculate the index into Table24. */
- tbl24_index = ip >> 8;
- tbl24_range = depth_to_range(depth);
-
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
- /*
- * For invalid OR valid and non-extended tbl 24 entries set
- * entry.
- */
- if (!lpm->tbl24[i].valid || (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth)) {
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .valid = VALID,
- .valid_group = 0,
- .depth = depth,
- };
- new_tbl24_entry.next_hop = next_hop;
-
- /* Setting tbl24 entry in one go to avoid race
- * conditions
- */
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- continue;
- }
-
- if (lpm->tbl24[i].valid_group == 1) {
- /* If tbl24 entry is valid and extended calculate the
- * index into tbl8.
- */
- tbl8_index = lpm->tbl24[i].group_idx *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < tbl8_group_end; j++) {
- if (!lpm->tbl8[j].valid ||
- lpm->tbl8[j].depth <= depth) {
- struct rte_lpm_tbl_entry_v20
- new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = depth,
- };
- new_tbl8_entry.next_hop = next_hop;
-
- /*
- * Setting tbl8 entry in one go to avoid
- * race conditions
- */
- __atomic_store(&lpm->tbl8[j],
- &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- continue;
- }
- }
- }
- }
-
- return 0;
-}
-
-static __rte_noinline int32_t
-add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -881,150 +506,7 @@ add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
static __rte_noinline int32_t
-add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t tbl24_index;
- int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
- tbl8_range, i;
-
- tbl24_index = (ip_masked >> 8);
- tbl8_range = depth_to_range(depth);
-
- if (!lpm->tbl24[tbl24_index].valid) {
- /* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v20(lpm->tbl8);
-
- /* Check tbl8 allocation was successful. */
- if (tbl8_group_index < 0) {
- return tbl8_group_index;
- }
-
- /* Find index into tbl8 and range. */
- tbl8_index = (tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES) +
- (ip_masked & 0xFF);
-
- /* Set tbl8 entry. */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- /*
- * Update tbl24 entry to point to new tbl8 entry. Note: The
- * ext_flag and tbl8_index need to be updated simultaneously,
- * so assign whole structure in one go
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .group_idx = (uint8_t)tbl8_group_index,
- .valid = VALID,
- .valid_group = 1,
- .depth = 0,
- };
-
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- } /* If valid entry but not extended calculate the index into Table8. */
- else if (lpm->tbl24[tbl24_index].valid_group == 0) {
- /* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v20(lpm->tbl8);
-
- if (tbl8_group_index < 0) {
- return tbl8_group_index;
- }
-
- tbl8_group_start = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_group_start +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- /* Populate new tbl8 with tbl24 value. */
- for (i = tbl8_group_start; i < tbl8_group_end; i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = lpm->tbl24[tbl24_index].depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop =
- lpm->tbl24[tbl24_index].next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
-
- /* Insert new rule into the tbl8 entry. */
- for (i = tbl8_index; i < tbl8_index + tbl8_range; i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- /*
- * Update tbl24 entry to point to new tbl8 entry. Note: The
- * ext_flag and tbl8_index need to be updated simultaneously,
- * so assign whole structure in one go.
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .group_idx = (uint8_t)tbl8_group_index,
- .valid = VALID,
- .valid_group = 1,
- .depth = 0,
- };
-
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- } else { /*
- * If it is valid, extended entry calculate the index into tbl8.
- */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
- tbl8_group_start = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
-
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-
- if (!lpm->tbl8[i].valid ||
- lpm->tbl8[i].depth <= depth) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- /*
- * Setting tbl8 entry in one go to avoid race
- * condition
- */
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- continue;
- }
- }
- }
-
- return 0;
-}
-
-static __rte_noinline int32_t
-add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -1037,7 +519,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
if (!lpm->tbl24[tbl24_index].valid) {
/* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+ tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
/* Check tbl8 allocation was successful. */
if (tbl8_group_index < 0) {
@@ -1083,7 +565,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
} /* If valid entry but not extended calculate the index into Table8. */
else if (lpm->tbl24[tbl24_index].valid_group == 0) {
/* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+ tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
if (tbl8_group_index < 0) {
return tbl8_group_index;
@@ -1177,48 +659,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* Add a route
*/
int
-rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
-{
- int32_t rule_index, status = 0;
- uint32_t ip_masked;
-
- /* Check user arguments. */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
- return -EINVAL;
-
- ip_masked = ip & depth_to_mask(depth);
-
- /* Add the rule to the rule table. */
- rule_index = rule_add_v20(lpm, ip_masked, depth, next_hop);
-
- /* If the is no space available for new rule return error. */
- if (rule_index < 0) {
- return rule_index;
- }
-
- if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small_v20(lpm, ip_masked, depth, next_hop);
- } else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big_v20(lpm, ip_masked, depth, next_hop);
-
- /*
- * If add fails due to exhaustion of tbl8 extensions delete
- * rule that was added to rule table.
- */
- if (status < 0) {
- rule_delete_v20(lpm, rule_index, depth);
-
- return status;
- }
- }
-
- return 0;
-}
-VERSION_SYMBOL(rte_lpm_add, _v20, 2.0);
-
-int
-rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
int32_t rule_index, status = 0;
@@ -1231,7 +672,7 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
ip_masked = ip & depth_to_mask(depth);
/* Add the rule to the rule table. */
- rule_index = rule_add_v1604(lpm, ip_masked, depth, next_hop);
+ rule_index = rule_add(lpm, ip_masked, depth, next_hop);
/* If the is no space available for new rule return error. */
if (rule_index < 0) {
@@ -1239,16 +680,16 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small_v1604(lpm, ip_masked, depth, next_hop);
+ status = add_depth_small(lpm, ip_masked, depth, next_hop);
} else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big_v1604(lpm, ip_masked, depth, next_hop);
+ status = add_depth_big(lpm, ip_masked, depth, next_hop);
/*
* If add fails due to exhaustion of tbl8 extensions delete
* rule that was added to rule table.
*/
if (status < 0) {
- rule_delete_v1604(lpm, rule_index, depth);
+ rule_delete(lpm, rule_index, depth);
return status;
}
@@ -1256,42 +697,12 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_add, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth, uint32_t next_hop), rte_lpm_add_v1604);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
-{
- uint32_t ip_masked;
- int32_t rule_index;
-
- /* Check user arguments. */
- if ((lpm == NULL) ||
- (next_hop == NULL) ||
- (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
- return -EINVAL;
-
- /* Look for the rule using rule_find. */
- ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find_v20(lpm, ip_masked, depth);
-
- if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
- return 1;
- }
-
- /* If rule is not found return 0. */
- return 0;
-}
-VERSION_SYMBOL(rte_lpm_is_rule_present, _v20, 2.0);
-
-int
-rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop)
{
uint32_t ip_masked;
@@ -1305,7 +716,7 @@ uint32_t *next_hop)
/* Look for the rule using rule_find. */
ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find_v1604(lpm, ip_masked, depth);
+ rule_index = rule_find(lpm, ip_masked, depth);
if (rule_index >= 0) {
*next_hop = lpm->rules_tbl[rule_index].next_hop;
@@ -1315,12 +726,9 @@ uint32_t *next_hop)
/* If rule is not found return 0. */
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_is_rule_present, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth, uint32_t *next_hop), rte_lpm_is_rule_present_v1604);
static int32_t
-find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
+find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint8_t *sub_rule_depth)
{
int32_t rule_index;
@@ -1330,7 +738,7 @@ find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
ip_masked = ip & depth_to_mask(prev_depth);
- rule_index = rule_find_v20(lpm, ip_masked, prev_depth);
+ rule_index = rule_find(lpm, ip_masked, prev_depth);
if (rule_index >= 0) {
*sub_rule_depth = prev_depth;
@@ -1342,133 +750,7 @@ find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
}
static int32_t
-find_previous_rule_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t *sub_rule_depth)
-{
- int32_t rule_index;
- uint32_t ip_masked;
- uint8_t prev_depth;
-
- for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
- ip_masked = ip & depth_to_mask(prev_depth);
-
- rule_index = rule_find_v1604(lpm, ip_masked, prev_depth);
-
- if (rule_index >= 0) {
- *sub_rule_depth = prev_depth;
- return rule_index;
- }
- }
-
- return -1;
-}
-
-static int32_t
-delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
-{
- uint32_t tbl24_range, tbl24_index, tbl8_group_index, tbl8_index, i, j;
-
- /* Calculate the range and index into Table24. */
- tbl24_range = depth_to_range(depth);
- tbl24_index = (ip_masked >> 8);
-
- /*
- * Firstly check the sub_rule_index. A -1 indicates no replacement rule
- * and a positive number indicates a sub_rule_index.
- */
- if (sub_rule_index < 0) {
- /*
- * If no replacement rule exists then invalidate entries
- * associated with this rule.
- */
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- struct rte_lpm_tbl_entry_v20
- zero_tbl24_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = 0,
- };
- zero_tbl24_entry.next_hop = 0;
- __atomic_store(&lpm->tbl24[i],
- &zero_tbl24_entry, __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
- /*
- * If TBL24 entry is extended, then there has
- * to be a rule with depth >= 25 in the
- * associated TBL8 group.
- */
-
- tbl8_group_index = lpm->tbl24[i].group_idx;
- tbl8_index = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < (tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
- lpm->tbl8[j].valid = INVALID;
- }
- }
- }
- } else {
- /*
- * If a replacement rule exists then modify entries
- * associated with this rule.
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
- .valid = VALID,
- .valid_group = 0,
- .depth = sub_rule_depth,
- };
-
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = sub_rule_depth,
- };
- new_tbl8_entry.next_hop =
- lpm->rules_tbl[sub_rule_index].next_hop;
-
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
- __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
- /*
- * If TBL24 entry is extended, then there has
- * to be a rule with depth >= 25 in the
- * associated TBL8 group.
- */
-
- tbl8_group_index = lpm->tbl24[i].group_idx;
- tbl8_index = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < (tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
- __atomic_store(&lpm->tbl8[j],
- &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
- }
- }
- }
-
- return 0;
-}
-
-static int32_t
-delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1575,7 +857,7 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* thus can be recycled
*/
static int32_t
-tbl8_recycle_check_v20(struct rte_lpm_tbl_entry_v20 *tbl8,
+tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8,
uint32_t tbl8_group_start)
{
uint32_t tbl8_group_end, i;
@@ -1622,140 +904,7 @@ tbl8_recycle_check_v20(struct rte_lpm_tbl_entry_v20 *tbl8,
}
static int32_t
-tbl8_recycle_check_v1604(struct rte_lpm_tbl_entry *tbl8,
- uint32_t tbl8_group_start)
-{
- uint32_t tbl8_group_end, i;
- tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- /*
- * Check the first entry of the given tbl8. If it is invalid we know
- * this tbl8 does not contain any rule with a depth < RTE_LPM_MAX_DEPTH
- * (As they would affect all entries in a tbl8) and thus this table
- * can not be recycled.
- */
- if (tbl8[tbl8_group_start].valid) {
- /*
- * If first entry is valid check if the depth is less than 24
- * and if so check the rest of the entries to verify that they
- * are all of this depth.
- */
- if (tbl8[tbl8_group_start].depth <= MAX_DEPTH_TBL24) {
- for (i = (tbl8_group_start + 1); i < tbl8_group_end;
- i++) {
-
- if (tbl8[i].depth !=
- tbl8[tbl8_group_start].depth) {
-
- return -EEXIST;
- }
- }
- /* If all entries are the same return the tb8 index */
- return tbl8_group_start;
- }
-
- return -EEXIST;
- }
- /*
- * If the first entry is invalid check if the rest of the entries in
- * the tbl8 are invalid.
- */
- for (i = (tbl8_group_start + 1); i < tbl8_group_end; i++) {
- if (tbl8[i].valid)
- return -EEXIST;
- }
- /* If no valid entries are found then return -EINVAL. */
- return -EINVAL;
-}
-
-static int32_t
-delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
-{
- uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
- tbl8_range, i;
- int32_t tbl8_recycle_index;
-
- /*
- * Calculate the index into tbl24 and range. Note: All depths larger
- * than MAX_DEPTH_TBL24 are associated with only one tbl24 entry.
- */
- tbl24_index = ip_masked >> 8;
-
- /* Calculate the index into tbl8 and range. */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
- tbl8_group_start = tbl8_group_index * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
- tbl8_range = depth_to_range(depth);
-
- if (sub_rule_index < 0) {
- /*
- * Loop through the range of entries on tbl8 for which the
- * rule_to_delete must be removed or modified.
- */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- lpm->tbl8[i].valid = INVALID;
- }
- } else {
- /* Set new tbl8 entry. */
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = sub_rule_depth,
- .valid_group = lpm->tbl8[tbl8_group_start].valid_group,
- };
-
- new_tbl8_entry.next_hop =
- lpm->rules_tbl[sub_rule_index].next_hop;
- /*
- * Loop through the range of entries on tbl8 for which the
- * rule_to_delete must be modified.
- */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
- }
-
- /*
- * Check if there are any valid entries in this tbl8 group. If all
- * tbl8 entries are invalid we can free the tbl8 and invalidate the
- * associated tbl24 entry.
- */
-
- tbl8_recycle_index = tbl8_recycle_check_v20(lpm->tbl8, tbl8_group_start);
-
- if (tbl8_recycle_index == -EINVAL) {
- /* Set tbl24 before freeing tbl8 to avoid race condition.
- * Prevent the free of the tbl8 group from hoisting.
- */
- lpm->tbl24[tbl24_index].valid = 0;
- __atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v20(lpm->tbl8, tbl8_group_start);
- } else if (tbl8_recycle_index > -1) {
- /* Update tbl24 entry. */
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
- .valid = VALID,
- .valid_group = 0,
- .depth = lpm->tbl8[tbl8_recycle_index].depth,
- };
-
- /* Set tbl24 before freeing tbl8 to avoid race condition.
- * Prevent the free of the tbl8 group from hoisting.
- */
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELAXED);
- __atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v20(lpm->tbl8, tbl8_group_start);
- }
-
- return 0;
-}
-
-static int32_t
-delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1810,7 +959,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* associated tbl24 entry.
*/
- tbl8_recycle_index = tbl8_recycle_check_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_recycle_index = tbl8_recycle_check(lpm->tbl8, tbl8_group_start);
if (tbl8_recycle_index == -EINVAL) {
/* Set tbl24 before freeing tbl8 to avoid race condition.
@@ -1818,7 +967,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
*/
lpm->tbl24[tbl24_index].valid = 0;
__atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_free(lpm->tbl8, tbl8_group_start);
} else if (tbl8_recycle_index > -1) {
/* Update tbl24 entry. */
struct rte_lpm_tbl_entry new_tbl24_entry = {
@@ -1834,7 +983,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
__ATOMIC_RELAXED);
__atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_free(lpm->tbl8, tbl8_group_start);
}
#undef group_idx
return 0;
@@ -1844,7 +993,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* Deletes a rule
*/
int
-rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
{
int32_t rule_to_delete_index, sub_rule_index;
uint32_t ip_masked;
@@ -1863,7 +1012,7 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
* Find the index of the input rule, that needs to be deleted, in the
* rule table.
*/
- rule_to_delete_index = rule_find_v20(lpm, ip_masked, depth);
+ rule_to_delete_index = rule_find(lpm, ip_masked, depth);
/*
* Check if rule_to_delete_index was found. If no rule was found the
@@ -1873,7 +1022,7 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
return -EINVAL;
/* Delete the rule from the rule table. */
- rule_delete_v20(lpm, rule_to_delete_index, depth);
+ rule_delete(lpm, rule_to_delete_index, depth);
/*
* Find rule to replace the rule_to_delete. If there is no rule to
@@ -1881,100 +1030,26 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
* entries associated with this rule.
*/
sub_rule_depth = 0;
- sub_rule_index = find_previous_rule_v20(lpm, ip, depth, &sub_rule_depth);
+ sub_rule_index = find_previous_rule(lpm, ip, depth, &sub_rule_depth);
/*
* If the input depth value is less than 25 use function
* delete_depth_small otherwise use delete_depth_big.
*/
if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small_v20(lpm, ip_masked, depth,
+ return delete_depth_small(lpm, ip_masked, depth,
sub_rule_index, sub_rule_depth);
} else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big_v20(lpm, ip_masked, depth, sub_rule_index,
+ return delete_depth_big(lpm, ip_masked, depth, sub_rule_index,
sub_rule_depth);
}
}
-VERSION_SYMBOL(rte_lpm_delete, _v20, 2.0);
-
-int
-rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
-{
- int32_t rule_to_delete_index, sub_rule_index;
- uint32_t ip_masked;
- uint8_t sub_rule_depth;
- /*
- * Check input arguments. Note: IP must be a positive integer of 32
- * bits in length therefore it need not be checked.
- */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH)) {
- return -EINVAL;
- }
-
- ip_masked = ip & depth_to_mask(depth);
-
- /*
- * Find the index of the input rule, that needs to be deleted, in the
- * rule table.
- */
- rule_to_delete_index = rule_find_v1604(lpm, ip_masked, depth);
-
- /*
- * Check if rule_to_delete_index was found. If no rule was found the
- * function rule_find returns -EINVAL.
- */
- if (rule_to_delete_index < 0)
- return -EINVAL;
-
- /* Delete the rule from the rule table. */
- rule_delete_v1604(lpm, rule_to_delete_index, depth);
-
- /*
- * Find rule to replace the rule_to_delete. If there is no rule to
- * replace the rule_to_delete we return -1 and invalidate the table
- * entries associated with this rule.
- */
- sub_rule_depth = 0;
- sub_rule_index = find_previous_rule_v1604(lpm, ip, depth, &sub_rule_depth);
-
- /*
- * If the input depth value is less than 25 use function
- * delete_depth_small otherwise use delete_depth_big.
- */
- if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small_v1604(lpm, ip_masked, depth,
- sub_rule_index, sub_rule_depth);
- } else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big_v1604(lpm, ip_masked, depth, sub_rule_index,
- sub_rule_depth);
- }
-}
-BIND_DEFAULT_SYMBOL(rte_lpm_delete, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth), rte_lpm_delete_v1604);
/*
* Delete all rules from the LPM table.
*/
void
-rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm)
-{
- /* Zero rule information. */
- memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
-
- /* Zero tbl24. */
- memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
-
- /* Zero tbl8. */
- memset(lpm->tbl8, 0, sizeof(lpm->tbl8));
-
- /* Delete all rules form the rules table. */
- memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
-}
-VERSION_SYMBOL(rte_lpm_delete_all, _v20, 2.0);
-
-void
-rte_lpm_delete_all_v1604(struct rte_lpm *lpm)
+rte_lpm_delete_all(struct rte_lpm *lpm)
{
/* Zero rule information. */
memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
@@ -1989,6 +1064,3 @@ rte_lpm_delete_all_v1604(struct rte_lpm *lpm)
/* Delete all rules form the rules table. */
memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
}
-BIND_DEFAULT_SYMBOL(rte_lpm_delete_all, _v1604, 16.04);
-MAP_STATIC_SYMBOL(void rte_lpm_delete_all(struct rte_lpm *lpm),
- rte_lpm_delete_all_v1604);
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 906ec44830..ca9627a141 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -65,31 +65,6 @@ extern "C" {
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
-__extension__
-struct rte_lpm_tbl_entry_v20 {
- /**
- * Stores Next hop (tbl8 or tbl24 when valid_group is not set) or
- * a group index pointing to a tbl8 structure (tbl24 only, when
- * valid_group is set)
- */
- RTE_STD_C11
- union {
- uint8_t next_hop;
- uint8_t group_idx;
- };
- /* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- /**
- * For tbl24:
- * - valid_group == 0: entry stores a next hop
- * - valid_group == 1: entry stores a group_index pointing to a tbl8
- * For tbl8:
- * - valid_group indicates whether the current tbl8 is in use or not
- */
- uint8_t valid_group :1;
- uint8_t depth :6; /**< Rule depth. */
-} __rte_aligned(sizeof(uint16_t));
-
__extension__
struct rte_lpm_tbl_entry {
/**
@@ -112,16 +87,6 @@ struct rte_lpm_tbl_entry {
};
#else
-__extension__
-struct rte_lpm_tbl_entry_v20 {
- uint8_t depth :6;
- uint8_t valid_group :1;
- uint8_t valid :1;
- union {
- uint8_t group_idx;
- uint8_t next_hop;
- };
-} __rte_aligned(sizeof(uint16_t));
__extension__
struct rte_lpm_tbl_entry {
@@ -142,11 +107,6 @@ struct rte_lpm_config {
};
/** @internal Rule structure. */
-struct rte_lpm_rule_v20 {
- uint32_t ip; /**< Rule IP address. */
- uint8_t next_hop; /**< Rule next hop. */
-};
-
struct rte_lpm_rule {
uint32_t ip; /**< Rule IP address. */
uint32_t next_hop; /**< Rule next hop. */
@@ -159,21 +119,6 @@ struct rte_lpm_rule_info {
};
/** @internal LPM structure. */
-struct rte_lpm_v20 {
- /* LPM metadata. */
- char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
- uint32_t max_rules; /**< Max. balanced rules per lpm. */
- struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
-
- /* LPM Tables. */
- struct rte_lpm_tbl_entry_v20 tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
- __rte_cache_aligned; /**< LPM tbl24 table. */
- struct rte_lpm_tbl_entry_v20 tbl8[RTE_LPM_TBL8_NUM_ENTRIES]
- __rte_cache_aligned; /**< LPM tbl8 table. */
- struct rte_lpm_rule_v20 rules_tbl[]
- __rte_cache_aligned; /**< LPM rules. */
-};
-
struct rte_lpm {
/* LPM metadata. */
char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
@@ -210,11 +155,6 @@ struct rte_lpm {
struct rte_lpm *
rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config);
-struct rte_lpm_v20 *
-rte_lpm_create_v20(const char *name, int socket_id, int max_rules, int flags);
-struct rte_lpm *
-rte_lpm_create_v1604(const char *name, int socket_id,
- const struct rte_lpm_config *config);
/**
* Find an existing LPM object and return a pointer to it.
@@ -228,10 +168,6 @@ rte_lpm_create_v1604(const char *name, int socket_id,
*/
struct rte_lpm *
rte_lpm_find_existing(const char *name);
-struct rte_lpm_v20 *
-rte_lpm_find_existing_v20(const char *name);
-struct rte_lpm *
-rte_lpm_find_existing_v1604(const char *name);
/**
* Free an LPM object.
@@ -243,10 +179,6 @@ rte_lpm_find_existing_v1604(const char *name);
*/
void
rte_lpm_free(struct rte_lpm *lpm);
-void
-rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
-void
-rte_lpm_free_v1604(struct rte_lpm *lpm);
/**
* Add a rule to the LPM table.
@@ -264,12 +196,6 @@ rte_lpm_free_v1604(struct rte_lpm *lpm);
*/
int
rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
-int
-rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop);
-int
-rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -289,12 +215,6 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop);
-int
-rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
-int
-rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -310,10 +230,6 @@ uint32_t *next_hop);
*/
int
rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
-int
-rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth);
-int
-rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
/**
* Delete all rules from the LPM table.
@@ -323,10 +239,6 @@ rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
*/
void
rte_lpm_delete_all(struct rte_lpm *lpm);
-void
-rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm);
-void
-rte_lpm_delete_all_v1604(struct rte_lpm *lpm);
/**
* Lookup an IP into the LPM table.
diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 9b8aeb9721..b981e40714 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -808,18 +808,6 @@ add_step(struct rte_lpm6 *lpm, struct rte_lpm6_tbl_entry *tbl,
return 1;
}
-/*
- * Add a route
- */
-int
-rte_lpm6_add_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop)
-{
- return rte_lpm6_add_v1705(lpm, ip, depth, next_hop);
-}
-VERSION_SYMBOL(rte_lpm6_add, _v20, 2.0);
-
-
/*
* Simulate adding a route to LPM
*
@@ -841,7 +829,7 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
/* Inspect the first three bytes through tbl24 on the first step. */
ret = simulate_add_step(lpm, lpm->tbl24, &tbl_next, masked_ip,
- ADD_FIRST_BYTE, 1, depth, &need_tbl_nb);
+ ADD_FIRST_BYTE, 1, depth, &need_tbl_nb);
total_need_tbl_nb = need_tbl_nb;
/*
* Inspect one by one the rest of the bytes until
@@ -850,7 +838,7 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
for (i = ADD_FIRST_BYTE; i < RTE_LPM6_IPV6_ADDR_SIZE && ret == 1; i++) {
tbl = tbl_next;
ret = simulate_add_step(lpm, tbl, &tbl_next, masked_ip, 1,
- (uint8_t)(i+1), depth, &need_tbl_nb);
+ (uint8_t)(i + 1), depth, &need_tbl_nb);
total_need_tbl_nb += need_tbl_nb;
}
@@ -861,9 +849,12 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
return 0;
}
+/*
+ * Add a route
+ */
int
-rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t next_hop)
+rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+ uint32_t next_hop)
{
struct rte_lpm6_tbl_entry *tbl;
struct rte_lpm6_tbl_entry *tbl_next = NULL;
@@ -895,8 +886,8 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
/* Inspect the first three bytes through tbl24 on the first step. */
tbl = lpm->tbl24;
status = add_step(lpm, tbl, TBL24_IND, &tbl_next, &tbl_next_num,
- masked_ip, ADD_FIRST_BYTE, 1, depth, next_hop,
- is_new_rule);
+ masked_ip, ADD_FIRST_BYTE, 1, depth, next_hop,
+ is_new_rule);
assert(status >= 0);
/*
@@ -906,17 +897,13 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
for (i = ADD_FIRST_BYTE; i < RTE_LPM6_IPV6_ADDR_SIZE && status == 1; i++) {
tbl = tbl_next;
status = add_step(lpm, tbl, tbl_next_num, &tbl_next,
- &tbl_next_num, masked_ip, 1, (uint8_t)(i+1),
- depth, next_hop, is_new_rule);
+ &tbl_next_num, masked_ip, 1, (uint8_t)(i + 1),
+ depth, next_hop, is_new_rule);
assert(status >= 0);
}
return status;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_add, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip,
- uint8_t depth, uint32_t next_hop),
- rte_lpm6_add_v1705);
/*
* Takes a pointer to a table entry and inspect one level.
@@ -955,25 +942,7 @@ lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
* Looks up an IP
*/
int
-rte_lpm6_lookup_v20(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop)
-{
- uint32_t next_hop32 = 0;
- int32_t status;
-
- /* DEBUG: Check user input arguments. */
- if (next_hop == NULL)
- return -EINVAL;
-
- status = rte_lpm6_lookup_v1705(lpm, ip, &next_hop32);
- if (status == 0)
- *next_hop = (uint8_t)next_hop32;
-
- return status;
-}
-VERSION_SYMBOL(rte_lpm6_lookup, _v20, 2.0);
-
-int
-rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
+rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip,
uint32_t *next_hop)
{
const struct rte_lpm6_tbl_entry *tbl;
@@ -1000,56 +969,12 @@ rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
return status;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_lookup, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip,
- uint32_t *next_hop), rte_lpm6_lookup_v1705);
/*
* Looks up a group of IP addresses
*/
int
-rte_lpm6_lookup_bulk_func_v20(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t * next_hops, unsigned n)
-{
- unsigned i;
- const struct rte_lpm6_tbl_entry *tbl;
- const struct rte_lpm6_tbl_entry *tbl_next = NULL;
- uint32_t tbl24_index, next_hop;
- uint8_t first_byte;
- int status;
-
- /* DEBUG: Check user input arguments. */
- if ((lpm == NULL) || (ips == NULL) || (next_hops == NULL))
- return -EINVAL;
-
- for (i = 0; i < n; i++) {
- first_byte = LOOKUP_FIRST_BYTE;
- tbl24_index = (ips[i][0] << BYTES2_SIZE) |
- (ips[i][1] << BYTE_SIZE) | ips[i][2];
-
- /* Calculate pointer to the first entry to be inspected */
- tbl = &lpm->tbl24[tbl24_index];
-
- do {
- /* Continue inspecting following levels until success or failure */
- status = lookup_step(lpm, tbl, &tbl_next, ips[i], first_byte++,
- &next_hop);
- tbl = tbl_next;
- } while (status == 1);
-
- if (status < 0)
- next_hops[i] = -1;
- else
- next_hops[i] = (int16_t)next_hop;
- }
-
- return 0;
-}
-VERSION_SYMBOL(rte_lpm6_lookup_bulk_func, _v20, 2.0);
-
-int
-rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
+rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
int32_t *next_hops, unsigned int n)
{
@@ -1089,37 +1014,12 @@ rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_lookup_bulk_func, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int32_t *next_hops, unsigned int n),
- rte_lpm6_lookup_bulk_func_v1705);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm6_is_rule_present_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t *next_hop)
-{
- uint32_t next_hop32 = 0;
- int32_t status;
-
- /* DEBUG: Check user input arguments. */
- if (next_hop == NULL)
- return -EINVAL;
-
- status = rte_lpm6_is_rule_present_v1705(lpm, ip, depth, &next_hop32);
- if (status > 0)
- *next_hop = (uint8_t)next_hop32;
-
- return status;
-
-}
-VERSION_SYMBOL(rte_lpm6_is_rule_present, _v20, 2.0);
-
-int
-rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t *next_hop)
{
uint8_t masked_ip[RTE_LPM6_IPV6_ADDR_SIZE];
@@ -1135,10 +1035,6 @@ rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
return rule_find(lpm, masked_ip, depth, next_hop);
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_is_rule_present, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_is_rule_present(struct rte_lpm6 *lpm,
- uint8_t *ip, uint8_t depth, uint32_t *next_hop),
- rte_lpm6_is_rule_present_v1705);
/*
* Delete a rule from the rule table.
diff --git a/lib/librte_lpm/rte_lpm6.h b/lib/librte_lpm/rte_lpm6.h
index 5d59ccb1fe..37dfb20249 100644
--- a/lib/librte_lpm/rte_lpm6.h
+++ b/lib/librte_lpm/rte_lpm6.h
@@ -96,12 +96,6 @@ rte_lpm6_free(struct rte_lpm6 *lpm);
int
rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t next_hop);
-int
-rte_lpm6_add_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop);
-int
-rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -121,12 +115,6 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
int
rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t *next_hop);
-int
-rte_lpm6_is_rule_present_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t *next_hop);
-int
-rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -184,11 +172,6 @@ rte_lpm6_delete_all(struct rte_lpm6 *lpm);
*/
int
rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint32_t *next_hop);
-int
-rte_lpm6_lookup_v20(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop);
-int
-rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
- uint32_t *next_hop);
/**
* Lookup multiple IP addresses in an LPM table.
@@ -210,14 +193,6 @@ int
rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
int32_t *next_hops, unsigned int n);
-int
-rte_lpm6_lookup_bulk_func_v20(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t *next_hops, unsigned int n);
-int
-rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int32_t *next_hops, unsigned int n);
#ifdef __cplusplus
}
--
2.17.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (4 preceding siblings ...)
2019-10-17 14:31 23% ` [dpdk-dev] [PATCH v4 03/10] buildtools: add ABI update shell script Anatoly Burakov
@ 2019-10-17 14:31 4% ` Anatoly Burakov
2019-10-17 21:04 0% ` Carrillo, Erik G
2019-10-21 13:24 3% ` Kevin Traynor
2019-10-17 14:31 2% ` [dpdk-dev] [PATCH v4 05/10] lpm: " Anatoly Burakov
` (5 subsequent siblings)
11 siblings, 2 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, bruce.richardson, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of the ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_timer/rte_timer.c | 90 ++----------------------------------
lib/librte_timer/rte_timer.h | 15 ------
2 files changed, 5 insertions(+), 100 deletions(-)
diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c
index bdcf05d06b..de6959b809 100644
--- a/lib/librte_timer/rte_timer.c
+++ b/lib/librte_timer/rte_timer.c
@@ -68,9 +68,6 @@ static struct rte_timer_data *rte_timer_data_arr;
static const uint32_t default_data_id;
static uint32_t rte_timer_subsystem_initialized;
-/* For maintaining older interfaces for a period */
-static struct rte_timer_data default_timer_data;
-
/* when debug is enabled, store some statistics */
#ifdef RTE_LIBRTE_TIMER_DEBUG
#define __TIMER_STAT_ADD(priv_timer, name, n) do { \
@@ -131,22 +128,6 @@ rte_timer_data_dealloc(uint32_t id)
return 0;
}
-void
-rte_timer_subsystem_init_v20(void)
-{
- unsigned lcore_id;
- struct priv_timer *priv_timer = default_timer_data.priv_timer;
-
- /* since priv_timer is static, it's zeroed by default, so only init some
- * fields.
- */
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id ++) {
- rte_spinlock_init(&priv_timer[lcore_id].list_lock);
- priv_timer[lcore_id].prev_lcore = lcore_id;
- }
-}
-VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
-
/* Init the timer library. Allocate an array of timer data structs in shared
* memory, and allocate the zeroth entry for use with original timer
* APIs. Since the intersection of the sets of lcore ids in primary and
@@ -154,7 +135,7 @@ VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
* multiple processes.
*/
int
-rte_timer_subsystem_init_v1905(void)
+rte_timer_subsystem_init(void)
{
const struct rte_memzone *mz;
struct rte_timer_data *data;
@@ -209,9 +190,6 @@ rte_timer_subsystem_init_v1905(void)
return 0;
}
-MAP_STATIC_SYMBOL(int rte_timer_subsystem_init(void),
- rte_timer_subsystem_init_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_subsystem_init, _v1905, 19.05);
void
rte_timer_subsystem_finalize(void)
@@ -552,42 +530,13 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,
/* Reset and start the timer associated with the timer handle tim */
int
-rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg)
-{
- uint64_t cur_time = rte_get_timer_cycles();
- uint64_t period;
-
- if (unlikely((tim_lcore != (unsigned)LCORE_ID_ANY) &&
- !(rte_lcore_is_enabled(tim_lcore) ||
- rte_lcore_has_role(tim_lcore, ROLE_SERVICE))))
- return -1;
-
- if (type == PERIODICAL)
- period = ticks;
- else
- period = 0;
-
- return __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,
- fct, arg, 0, &default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_reset, _v20, 2.0);
-
-int
-rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
+rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned int tim_lcore,
rte_timer_cb_t fct, void *arg)
{
return rte_timer_alt_reset(default_data_id, tim, ticks, type,
tim_lcore, fct, arg);
}
-MAP_STATIC_SYMBOL(int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type,
- unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg),
- rte_timer_reset_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_reset, _v1905, 19.05);
int
rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
@@ -658,20 +607,10 @@ __rte_timer_stop(struct rte_timer *tim, int local_is_locked,
/* Stop the timer associated with the timer handle tim */
int
-rte_timer_stop_v20(struct rte_timer *tim)
-{
- return __rte_timer_stop(tim, 0, &default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_stop, _v20, 2.0);
-
-int
-rte_timer_stop_v1905(struct rte_timer *tim)
+rte_timer_stop(struct rte_timer *tim)
{
return rte_timer_alt_stop(default_data_id, tim);
}
-MAP_STATIC_SYMBOL(int rte_timer_stop(struct rte_timer *tim),
- rte_timer_stop_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_stop, _v1905, 19.05);
int
rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
@@ -817,15 +756,8 @@ __rte_timer_manage(struct rte_timer_data *timer_data)
priv_timer[lcore_id].running_tim = NULL;
}
-void
-rte_timer_manage_v20(void)
-{
- __rte_timer_manage(&default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_manage, _v20, 2.0);
-
int
-rte_timer_manage_v1905(void)
+rte_timer_manage(void)
{
struct rte_timer_data *timer_data;
@@ -835,8 +767,6 @@ rte_timer_manage_v1905(void)
return 0;
}
-MAP_STATIC_SYMBOL(int rte_timer_manage(void), rte_timer_manage_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_manage, _v1905, 19.05);
int
rte_timer_alt_manage(uint32_t timer_data_id,
@@ -1074,21 +1004,11 @@ __rte_timer_dump_stats(struct rte_timer_data *timer_data __rte_unused, FILE *f)
#endif
}
-void
-rte_timer_dump_stats_v20(FILE *f)
-{
- __rte_timer_dump_stats(&default_timer_data, f);
-}
-VERSION_SYMBOL(rte_timer_dump_stats, _v20, 2.0);
-
int
-rte_timer_dump_stats_v1905(FILE *f)
+rte_timer_dump_stats(FILE *f)
{
return rte_timer_alt_dump_stats(default_data_id, f);
}
-MAP_STATIC_SYMBOL(int rte_timer_dump_stats(FILE *f),
- rte_timer_dump_stats_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_dump_stats, _v1905, 19.05);
int
rte_timer_alt_dump_stats(uint32_t timer_data_id __rte_unused, FILE *f)
diff --git a/lib/librte_timer/rte_timer.h b/lib/librte_timer/rte_timer.h
index 05d287d8f2..9dc5fc3092 100644
--- a/lib/librte_timer/rte_timer.h
+++ b/lib/librte_timer/rte_timer.h
@@ -181,8 +181,6 @@ int rte_timer_data_dealloc(uint32_t id);
* subsystem
*/
int rte_timer_subsystem_init(void);
-int rte_timer_subsystem_init_v1905(void);
-void rte_timer_subsystem_init_v20(void);
/**
* @warning
@@ -250,13 +248,6 @@ void rte_timer_init(struct rte_timer *tim);
int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned tim_lcore,
rte_timer_cb_t fct, void *arg);
-int rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg);
-int rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg);
-
/**
* Loop until rte_timer_reset() succeeds.
@@ -313,8 +304,6 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
* - (-1): The timer is in the RUNNING or CONFIG state.
*/
int rte_timer_stop(struct rte_timer *tim);
-int rte_timer_stop_v1905(struct rte_timer *tim);
-int rte_timer_stop_v20(struct rte_timer *tim);
/**
* Loop until rte_timer_stop() succeeds.
@@ -358,8 +347,6 @@ int rte_timer_pending(struct rte_timer *tim);
* - -EINVAL: timer subsystem not yet initialized
*/
int rte_timer_manage(void);
-int rte_timer_manage_v1905(void);
-void rte_timer_manage_v20(void);
/**
* Dump statistics about timers.
@@ -371,8 +358,6 @@ void rte_timer_manage_v20(void);
* - -EINVAL: timer subsystem not yet initialized
*/
int rte_timer_dump_stats(FILE *f);
-int rte_timer_dump_stats_v1905(FILE *f);
-void rte_timer_dump_stats_v20(FILE *f);
/**
* @warning
--
2.17.1
* [dpdk-dev] [PATCH v4 03/10] buildtools: add ABI update shell script
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (3 preceding siblings ...)
2019-10-17 14:31 14% ` [dpdk-dev] [PATCH v4 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
@ 2019-10-17 14:31 23% ` Anatoly Burakov
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code Anatoly Burakov
` (6 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, bruce.richardson, thomas, david.marchand
To facilitate mass updating of version files, add a shell
script that recurses into the lib/ and drivers/ directories and calls
the ABI version update script.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v3:
- Switch to sh rather than bash, and remove bash-isms
- Address review comments
v2:
- Add this patch to split the shell script from previous commit
- Fixup miscellaneous bugs
buildtools/update-abi.sh | 42 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
create mode 100755 buildtools/update-abi.sh
diff --git a/buildtools/update-abi.sh b/buildtools/update-abi.sh
new file mode 100755
index 0000000000..89ba5804a6
--- /dev/null
+++ b/buildtools/update-abi.sh
@@ -0,0 +1,42 @@
+#!/bin/sh
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+abi_version=$1
+abi_version_file="./config/ABI_VERSION"
+update_path="lib drivers"
+
+if [ -z "$1" ]; then
+ # output to stderr
+ >&2 echo "Please provide ABI version"
+ exit 1
+fi
+
+# check version string format
+echo $abi_version | grep -q -e "^[[:digit:]]\{1,2\}\.[[:digit:]]\{1,2\}$"
+if [ "$?" -ne 0 ]; then
+ # output to stderr
+ >&2 echo "ABI version must be formatted as MAJOR.MINOR version"
+ exit 1
+fi
+
+if [ -n "$2" ]; then
+ abi_version_file=$2
+fi
+
+if [ -n "$3" ]; then
+ # drop $1 and $2
+ shift 2
+ # assign all other arguments as update paths
+ update_path=$@
+fi
+
+echo "New ABI version:" $abi_version
+echo "ABI_VERSION path:" $abi_version_file
+echo "Path to update:" $update_path
+
+echo $abi_version > $abi_version_file
+
+find $update_path -name \*version.map -exec \
+ ./buildtools/update_version_map_abi.py {} \
+ $abi_version \; -print
--
2.17.1
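The two moving parts of update-abi.sh are the MAJOR.MINOR version-string check and the recursive search for *version.map files. As a rough standalone sketch (written in Python to match the companion update_version_map_abi.py; the directory layout used in the test is hypothetical), the same checks look like this:

```python
import re
from pathlib import Path

def is_valid_abi_version(version):
    # Same constraint the shell script enforces with grep:
    # one or two digits, a dot, one or two digits, nothing else.
    return re.match(r"^\d{1,2}\.\d{1,2}$", version) is not None

def collect_map_files(roots):
    # Rough equivalent of `find $update_path -name \*version.map`:
    # recurse into each root and gather every linker version script.
    found = []
    for root in roots:
        found.extend(sorted(Path(root).glob("**/*version.map")))
    return found

print(is_valid_abi_version("20.0"))    # True
print(is_valid_abi_version("20"))      # False
print(is_valid_abi_version("20.0.1"))  # False
```

As the patch itself shows, the real script is invoked as `./buildtools/update-abi.sh <version>` and then rewrites config/ABI_VERSION plus every matching map file it finds.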
* [dpdk-dev] [PATCH v4 02/10] buildtools: add script for updating symbols abi version
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (2 preceding siblings ...)
2019-10-17 14:31 7% ` [dpdk-dev] [PATCH v4 01/10] config: change ABI versioning to global Anatoly Burakov
@ 2019-10-17 14:31 14% ` Anatoly Burakov
2019-10-17 14:31 23% ` [dpdk-dev] [PATCH v4 03/10] buildtools: add ABI update shell script Anatoly Burakov
` (7 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev; +Cc: Pawel Modrak, john.mcnamara, bruce.richardson, thomas, david.marchand
From: Pawel Modrak <pawelx.modrak@intel.com>
Add a script that automatically merges all stable ABI sections into
one section with the new ABI version, while leaving the experimental
section exactly as it is.
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v3:
- Add comments to regex patterns
v2:
- Reworked script to be pep8-compliant and more reliable
buildtools/update_version_map_abi.py | 170 +++++++++++++++++++++++++++
1 file changed, 170 insertions(+)
create mode 100755 buildtools/update_version_map_abi.py
diff --git a/buildtools/update_version_map_abi.py b/buildtools/update_version_map_abi.py
new file mode 100755
index 0000000000..50283e6a3d
--- /dev/null
+++ b/buildtools/update_version_map_abi.py
@@ -0,0 +1,170 @@
+#!/usr/bin/env python
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+"""
+A Python program to update the ABI version and function names in a DPDK
+lib_*_version.map file. Called from the buildtools/update_abi.sh utility.
+"""
+
+from __future__ import print_function
+import argparse
+import sys
+import re
+
+
+def __parse_map_file(f_in):
+ # match function name, followed by semicolon, followed by EOL, optionally
+ # with whitespace in between each item
+ func_line_regex = re.compile(r"\s*"
+ r"(?P<func>[a-zA-Z_0-9]+)"
+ r"\s*"
+ r";"
+ r"\s*"
+ r"$")
+ # match section name, followed by opening bracket, followed by EOL,
+ # optionally with whitespace in between each item
+ section_begin_regex = re.compile(r"\s*"
+ r"(?P<version>[a-zA-Z0-9_\.]+)"
+ r"\s*"
+ r"{"
+ r"\s*"
+ r"$")
+ # match closing bracket, optionally followed by section name (for when we
+ # inherit from another ABI version), followed by semicolon, followed by
+ # EOL, optionally with whitespace in between each item
+ section_end_regex = re.compile(r"\s*"
+ r"}"
+ r"\s*"
+ r"(?P<parent>[a-zA-Z0-9_\.]+)?"
+ r"\s*"
+ r";"
+ r"\s*"
+ r"$")
+
+ # for stable ABI, we don't care about which version introduced which
+ # function, we just flatten the list. there are dupes in certain files, so
+ # use a set instead of a list
+ stable_lines = set()
+ # copy experimental section as is
+ experimental_lines = []
+ is_experimental = False
+
+ # gather all functions
+ for line in f_in:
+ # clean up the line
+ line = line.strip('\n').strip()
+
+ # is this an end of section?
+ match = section_end_regex.match(line)
+ if match:
+ # whatever section this was, it's not active any more
+ is_experimental = False
+ continue
+
+ # if we're in the middle of experimental section, we need to copy
+ # the section verbatim, so just add the line
+ if is_experimental:
+ experimental_lines += [line]
+ continue
+
+ # skip empty lines
+ if not line:
+ continue
+
+ # is this a beginning of a new section?
+ match = section_begin_regex.match(line)
+ if match:
+ cur_section = match.group("version")
+ # is it experimental?
+ is_experimental = cur_section == "EXPERIMENTAL"
+ continue
+
+ # is this a function?
+ match = func_line_regex.match(line)
+ if match:
+ stable_lines.add(match.group("func"))
+
+ return stable_lines, experimental_lines
+
+
+def __regenerate_map_file(f_out, abi_version, stable_lines,
+ experimental_lines):
+ # print ABI version header
+ print("DPDK_{} {{".format(abi_version), file=f_out)
+
+ if stable_lines:
+ # print global section
+ print("\tglobal:", file=f_out)
+ # blank line
+ print(file=f_out)
+
+ # print all stable lines, alphabetically sorted
+ for line in sorted(stable_lines):
+ print("\t{};".format(line), file=f_out)
+
+ # another blank line
+ print(file=f_out)
+
+ # print local section
+ print("\tlocal: *;", file=f_out)
+
+ # end stable version
+ print("};", file=f_out)
+
+ # do we have experimental lines?
+ if not experimental_lines:
+ return
+
+ # another blank line
+ print(file=f_out)
+
+ # start experimental section
+ print("EXPERIMENTAL {", file=f_out)
+
+ # print all experimental lines as they were
+ for line in experimental_lines:
+ # don't print empty whitespace
+ if not line:
+ print("", file=f_out)
+ else:
+ print("\t{}".format(line), file=f_out)
+
+ # end section
+ print("};", file=f_out)
+
+
+def __main():
+ arg_parser = argparse.ArgumentParser(
+ description='Merge versions in linker version script.')
+
+ arg_parser.add_argument("map_file", type=str,
+ help='path to linker version script file '
+ '(pattern: *version.map)')
+ arg_parser.add_argument("abi_version", type=str,
+ help='target ABI version (pattern: MAJOR.MINOR)')
+
+ parsed = arg_parser.parse_args()
+
+ if not parsed.map_file.endswith('version.map'):
+ print("Invalid input file: {}".format(parsed.map_file),
+ file=sys.stderr)
+ arg_parser.print_help()
+ sys.exit(1)
+
+ if not re.match(r"\d{1,2}\.\d{1,2}", parsed.abi_version):
+ print("Invalid ABI version: {}".format(parsed.abi_version),
+ file=sys.stderr)
+ arg_parser.print_help()
+ sys.exit(1)
+
+ with open(parsed.map_file) as f_in:
+ stable_lines, experimental_lines = __parse_map_file(f_in)
+
+ with open(parsed.map_file, 'w') as f_out:
+ __regenerate_map_file(f_out, parsed.abi_version, stable_lines,
+ experimental_lines)
+
+
+if __name__ == "__main__":
+ __main()
--
2.17.1
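The parsing half of the script above can be exercised on its own. The sketch below reuses the three regular expressions from __parse_map_file() (written on one line each) against a small, made-up version.map body — the symbol names are illustrative only — flattening the stable sections into one set while copying the experimental section verbatim:

```python
import re

# The three patterns from __parse_map_file(), condensed to one line each.
func_line_regex = re.compile(r"\s*(?P<func>[a-zA-Z_0-9]+)\s*;\s*$")
section_begin_regex = re.compile(r"\s*(?P<version>[a-zA-Z0-9_\.]+)\s*{\s*$")
section_end_regex = re.compile(r"\s*}\s*(?P<parent>[a-zA-Z0-9_\.]+)?\s*;\s*$")

# Hypothetical map body: two stable sections (one inheriting from the
# other) plus an experimental section.
sample = """
DPDK_2.0 {
    global:
    rte_foo_create;
    local: *;
};
DPDK_17.05 {
    global:
    rte_foo_create;
    rte_foo_lookup;
    local: *;
} DPDK_2.0;
EXPERIMENTAL {
    global:
    rte_foo_new_api;
};
"""

stable, experimental = set(), []
in_experimental = False
for line in sample.splitlines():
    line = line.strip()
    if section_end_regex.match(line):
        in_experimental = False       # whatever section this was, it ended
        continue
    if in_experimental:
        experimental.append(line)     # copy experimental lines verbatim
        continue
    if not line:
        continue
    match = section_begin_regex.match(line)
    if match:
        in_experimental = match.group("version") == "EXPERIMENTAL"
        continue
    match = func_line_regex.match(line)
    if match:
        stable.add(match.group("func"))  # dupes collapse via the set

print(sorted(stable))  # ['rte_foo_create', 'rte_foo_lookup']
print(experimental)
```

This is exactly the shape __regenerate_map_file() then consumes: one flattened, deduplicated stable set ready for a single DPDK_20.0 section, plus the untouched experimental lines.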
* [dpdk-dev] [PATCH v4 01/10] config: change ABI versioning to global
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
2019-10-17 8:50 4% ` Bruce Richardson
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
@ 2019-10-17 14:31 7% ` Anatoly Burakov
2019-10-17 14:31 14% ` [dpdk-dev] [PATCH v4 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
` (8 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Thomas Monjalon, Bruce Richardson, john.mcnamara,
david.marchand, Pawel Modrak
From: Marcin Baran <marcinx.baran@intel.com>
As per the new ABI policy, all of the libraries are now versioned using
one global ABI version. The changes in this patch implement the
necessary steps to enable that.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
Notes:
v3:
- Removed Windows support from Makefile changes
- Removed unneeded path conversions from meson files
buildtools/meson.build | 2 ++
config/ABI_VERSION | 1 +
config/meson.build | 4 +++-
drivers/meson.build | 20 ++++++++++++--------
lib/meson.build | 18 +++++++++++-------
meson_options.txt | 2 --
mk/rte.lib.mk | 13 ++++---------
7 files changed, 33 insertions(+), 27 deletions(-)
create mode 100644 config/ABI_VERSION
diff --git a/buildtools/meson.build b/buildtools/meson.build
index 32c79c1308..78ce69977d 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -12,3 +12,5 @@ if python3.found()
else
map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
endif
+
+is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
diff --git a/config/ABI_VERSION b/config/ABI_VERSION
new file mode 100644
index 0000000000..9a7c1e503f
--- /dev/null
+++ b/config/ABI_VERSION
@@ -0,0 +1 @@
+20.0
diff --git a/config/meson.build b/config/meson.build
index a27f731f85..374735590c 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -18,6 +18,8 @@ endforeach
# depending on the configuration options
pver = meson.project_version().split('.')
major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
+abi_version = run_command(find_program('cat', 'more'),
+ files('ABI_VERSION')).stdout().strip()
# extract all version information into the build configuration
dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
@@ -37,7 +39,7 @@ endif
pmd_subdir_opt = get_option('drivers_install_subdir')
if pmd_subdir_opt.contains('<VERSION>')
- pmd_subdir_opt = major_version.join(pmd_subdir_opt.split('<VERSION>'))
+ pmd_subdir_opt = abi_version.join(pmd_subdir_opt.split('<VERSION>'))
endif
driver_install_path = join_paths(get_option('libdir'), pmd_subdir_opt)
eal_pmd_path = join_paths(get_option('prefix'), driver_install_path)
diff --git a/drivers/meson.build b/drivers/meson.build
index 2ed2e95411..fd628d9587 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -110,12 +110,19 @@ foreach class:dpdk_driver_classes
output: out_filename,
depends: [pmdinfogen, tmp_lib])
- if get_option('per_library_versions')
- lib_version = '@0@.1'.format(version)
- so_version = '@0@'.format(version)
+ version_map = '@0@/@1@/@2@_version.map'.format(
+ meson.current_source_dir(),
+ drv_path, lib_name)
+
+ is_experimental = run_command(is_experimental_cmd,
+ files(version_map)).returncode()
+
+ if is_experimental != 0
+ lib_version = '0.1'
+ so_version = '0'
else
- lib_version = major_version
- so_version = major_version
+ lib_version = abi_version
+ so_version = abi_version
endif
# now build the static driver
@@ -128,9 +135,6 @@ foreach class:dpdk_driver_classes
install: true)
# now build the shared driver
- version_map = '@0@/@1@/@2@_version.map'.format(
- meson.current_source_dir(),
- drv_path, lib_name)
shared_lib = shared_library(lib_name,
sources,
objects: objs,
diff --git a/lib/meson.build b/lib/meson.build
index e5ff838934..e626da778c 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -97,12 +97,18 @@ foreach l:libraries
cflags += '-DALLOW_EXPERIMENTAL_API'
endif
- if get_option('per_library_versions')
- lib_version = '@0@.1'.format(version)
- so_version = '@0@'.format(version)
+ version_map = '@0@/@1@/rte_@2@_version.map'.format(
+ meson.current_source_dir(), dir_name, name)
+
+ is_experimental = run_command(is_experimental_cmd,
+ files(version_map)).returncode()
+
+ if is_experimental != 0
+ lib_version = '0.1'
+ so_version = '0'
else
- lib_version = major_version
- so_version = major_version
+ lib_version = abi_version
+ so_version = abi_version
endif
# first build static lib
@@ -120,8 +126,6 @@ foreach l:libraries
# then use pre-build objects to build shared lib
sources = []
objs += static_lib.extract_all_objects(recursive: false)
- version_map = '@0@/@1@/rte_@2@_version.map'.format(
- meson.current_source_dir(), dir_name, name)
implib = dir_name + '.dll.a'
def_file = custom_target(name + '_def',
diff --git a/meson_options.txt b/meson_options.txt
index 448f3e63dc..000e38fd98 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -28,8 +28,6 @@ option('max_lcores', type: 'integer', value: 128,
description: 'maximum number of cores/threads supported by EAL')
option('max_numa_nodes', type: 'integer', value: 4,
description: 'maximum number of NUMA nodes supported by EAL')
-option('per_library_versions', type: 'boolean', value: true,
- description: 'true: each lib gets its own version number, false: DPDK version used for each lib')
option('tests', type: 'boolean', value: true,
description: 'build unit tests')
option('use_hpet', type: 'boolean', value: false,
diff --git a/mk/rte.lib.mk b/mk/rte.lib.mk
index 4df8849a08..e1ea292b6e 100644
--- a/mk/rte.lib.mk
+++ b/mk/rte.lib.mk
@@ -11,20 +11,15 @@ EXTLIB_BUILD ?= n
# VPATH contains at least SRCDIR
VPATH += $(SRCDIR)
-ifneq ($(CONFIG_RTE_MAJOR_ABI),)
-ifneq ($(LIBABIVER),)
-LIBABIVER := $(CONFIG_RTE_MAJOR_ABI)
-endif
+ifneq ($(shell grep "^DPDK_" $(SRCDIR)/$(EXPORT_MAP)),)
+LIBABIVER := $(shell cat $(RTE_SRCDIR)/config/ABI_VERSION)
+else
+LIBABIVER := 0
endif
ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
LIB := $(patsubst %.a,%.so.$(LIBABIVER),$(LIB))
ifeq ($(EXTLIB_BUILD),n)
-ifeq ($(CONFIG_RTE_MAJOR_ABI),)
-ifeq ($(CONFIG_RTE_NEXT_ABI),y)
-LIB := $(LIB).1
-endif
-endif
CPU_LDFLAGS += --version-script=$(SRCDIR)/$(EXPORT_MAP)
endif
endif
--
2.17.1
* [dpdk-dev] [PATCH v4 00/10] Implement the new ABI policy and add helper scripts
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
2019-10-17 8:50 4% ` Bruce Richardson
@ 2019-10-17 14:31 8% ` Anatoly Burakov
2019-10-24 9:46 8% ` [dpdk-dev] [PATCH v5 " Anatoly Burakov
` (10 more replies)
2019-10-17 14:31 7% ` [dpdk-dev] [PATCH v4 01/10] config: change ABI versioning to global Anatoly Burakov
` (9 subsequent siblings)
11 siblings, 11 replies; 200+ results
From: Anatoly Burakov @ 2019-10-17 14:31 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, bruce.richardson, thomas, david.marchand
This patchset prepares the codebase for the new ABI policy and
adds a few helper scripts.
Two new scripts for managing ABI versions are added. The
first one is a Python script that reads in a .map file,
flattens it, and updates the ABI version to the one
specified on the command line.
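The flattening step described above can be sketched briefly — an illustrative simplification, not the actual buildtools/update_version_map_abi.py (the function name, the parsing rules, and the exact output layout here are assumptions):

```python
import re


def flatten_map(text, abi_version):
    """Collect symbols from all stable (DPDK_*) sections and the
    EXPERIMENTAL section of a linker version script, then emit a
    single DPDK_<abi_version> block followed by EXPERIMENTAL."""
    stable, experimental = set(), set()
    section = None
    for line in text.splitlines():
        line = line.strip()
        header = re.match(r"(DPDK_\S+|EXPERIMENTAL)\s*{", line)
        if header:
            section = header.group(1)
        elif line.startswith("}"):
            section = None
        elif line.endswith(";") and not line.startswith(("local", "global")):
            sym = line.rstrip(";")
            (experimental if section == "EXPERIMENTAL" else stable).add(sym)
    out = ["DPDK_{} {{".format(abi_version), "\tglobal:", ""]
    out += ["\t{};".format(s) for s in sorted(stable)]
    out += ["", "\tlocal: *;", "};"]
    if experimental:
        out += ["", "EXPERIMENTAL {", "\tglobal:", ""]
        out += ["\t{};".format(s) for s in sorted(experimental)]
        out += ["};"]
    return "\n".join(out)
```

Run against a map with several DPDK_x.y sections, every stable symbol ends up under one new version node, which is what makes the single global ABI version scheme possible.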
The second one is a shell script that will run the above-mentioned
Python script recursively over the source tree and set the ABI
version to either that which is defined in config/ABI_VERSION, or
a user-specified one.
Example of its usage: buildtools/update-abi.sh 20.0
This will recurse into the lib/ and drivers/ directories and update
whatever .map files it can find.
The other shell script that's added is one that can take in a .so
file and ensure that its declared public ABI matches either
current ABI, next ABI, or EXPERIMENTAL. This was moved to the
last commit because it made no sense to have it beforehand.
The source tree was verified to follow the new ABI policy using
the following command (assuming built binaries are in build/):
find ./build/lib ./build/drivers -name \*.so \
-exec ./buildtools/check-abi-version.sh {} \; -print
This returns 0.
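The core of such a check can be sketched in a few lines of shell — an illustration only, not the actual buildtools/check-abi-version.sh; the helper name and the reliance on `objdump -T` output formatting are assumptions:

```shell
#!/bin/sh
# Print the set of version tags (e.g. DPDK_20.0, EXPERIMENTAL) attached
# to the dynamic symbols of a shared object; reads `objdump -T lib.so`
# style output on stdin.
so_abi_tags() {
    awk '{ for (i = 1; i <= NF; i++)
               if ($i ~ /^(DPDK_[0-9.]+|EXPERIMENTAL)$/)
                   print $i }' | sort -u
}

# typical use (sketch):
#   objdump -T build/lib/librte_eal.so | so_abi_tags
```

A wrapper would then compare the printed tags against the expected ABI version (or EXPERIMENTAL) and fail on anything else.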
Changes since v3:
- Put distributor code back and cleaned it up
- Rebased on latest master and regenerated commit 9
Changes since v2:
- Addressed Bruce's review comments
- Removed single distributor mode as per Dave's suggestion
Changes since v1:
- Reordered patchset to have removal of old ABIs before introducing
the new one to avoid compile breakages between patches
- Added a new patch fixing missing symbol in octeontx common
- Split script commits into multiple commits and reordered them
- Re-generated the ABI bump commit
- Verified all scripts to work
Anatoly Burakov (2):
buildtools: add ABI update shell script
drivers/octeontx: add missing public symbol
Marcin Baran (6):
config: change ABI versioning to global
timer: remove deprecated code
lpm: remove deprecated code
distributor: remove deprecated code
distributor: rename v2.0 ABI to _single suffix
buildtools: add ABI versioning check script
Pawel Modrak (2):
buildtools: add script for updating symbols abi version
build: change ABI version to 20.0
buildtools/check-abi-version.sh | 54 +
buildtools/meson.build | 2 +
buildtools/update-abi.sh | 42 +
buildtools/update_version_map_abi.py | 170 +++
config/ABI_VERSION | 1 +
config/meson.build | 4 +-
.../rte_pmd_bbdev_fpga_lte_fec_version.map | 8 +-
.../null/rte_pmd_bbdev_null_version.map | 2 +-
.../rte_pmd_bbdev_turbo_sw_version.map | 2 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 115 +-
drivers/bus/fslmc/rte_bus_fslmc_version.map | 154 ++-
drivers/bus/ifpga/rte_bus_ifpga_version.map | 14 +-
drivers/bus/pci/rte_bus_pci_version.map | 2 +-
drivers/bus/vdev/rte_bus_vdev_version.map | 12 +-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 12 +-
drivers/common/cpt/rte_common_cpt_version.map | 4 +-
.../common/dpaax/rte_common_dpaax_version.map | 4 +-
.../common/mvep/rte_common_mvep_version.map | 6 +-
.../octeontx/rte_common_octeontx_version.map | 7 +-
.../rte_common_octeontx2_version.map | 16 +-
.../compress/isal/rte_pmd_isal_version.map | 2 +-
.../rte_pmd_octeontx_compress_version.map | 2 +-
drivers/compress/qat/rte_pmd_qat_version.map | 2 +-
.../compress/zlib/rte_pmd_zlib_version.map | 2 +-
.../aesni_gcm/rte_pmd_aesni_gcm_version.map | 2 +-
.../aesni_mb/rte_pmd_aesni_mb_version.map | 2 +-
.../crypto/armv8/rte_pmd_armv8_version.map | 2 +-
.../caam_jr/rte_pmd_caam_jr_version.map | 3 +-
drivers/crypto/ccp/rte_pmd_ccp_version.map | 3 +-
.../dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 10 +-
.../dpaa_sec/rte_pmd_dpaa_sec_version.map | 10 +-
.../crypto/kasumi/rte_pmd_kasumi_version.map | 2 +-
.../crypto/mvsam/rte_pmd_mvsam_version.map | 2 +-
.../crypto/nitrox/rte_pmd_nitrox_version.map | 2 +-
.../null/rte_pmd_null_crypto_version.map | 2 +-
.../rte_pmd_octeontx_crypto_version.map | 3 +-
.../openssl/rte_pmd_openssl_version.map | 2 +-
.../rte_pmd_crypto_scheduler_version.map | 19 +-
.../crypto/snow3g/rte_pmd_snow3g_version.map | 2 +-
.../virtio/rte_pmd_virtio_crypto_version.map | 2 +-
drivers/crypto/zuc/rte_pmd_zuc_version.map | 2 +-
.../event/dpaa/rte_pmd_dpaa_event_version.map | 3 +-
.../dpaa2/rte_pmd_dpaa2_event_version.map | 2 +-
.../event/dsw/rte_pmd_dsw_event_version.map | 2 +-
.../rte_pmd_octeontx_event_version.map | 2 +-
.../rte_pmd_octeontx2_event_version.map | 3 +-
.../event/opdl/rte_pmd_opdl_event_version.map | 2 +-
.../rte_pmd_skeleton_event_version.map | 3 +-
drivers/event/sw/rte_pmd_sw_event_version.map | 2 +-
.../bucket/rte_mempool_bucket_version.map | 3 +-
.../mempool/dpaa/rte_mempool_dpaa_version.map | 2 +-
.../dpaa2/rte_mempool_dpaa2_version.map | 12 +-
.../octeontx/rte_mempool_octeontx_version.map | 2 +-
.../rte_mempool_octeontx2_version.map | 4 +-
.../mempool/ring/rte_mempool_ring_version.map | 3 +-
.../stack/rte_mempool_stack_version.map | 3 +-
drivers/meson.build | 20 +-
.../af_packet/rte_pmd_af_packet_version.map | 3 +-
drivers/net/af_xdp/rte_pmd_af_xdp_version.map | 2 +-
drivers/net/ark/rte_pmd_ark_version.map | 5 +-
.../net/atlantic/rte_pmd_atlantic_version.map | 4 +-
drivers/net/avp/rte_pmd_avp_version.map | 2 +-
drivers/net/axgbe/rte_pmd_axgbe_version.map | 2 +-
drivers/net/bnx2x/rte_pmd_bnx2x_version.map | 3 +-
drivers/net/bnxt/rte_pmd_bnxt_version.map | 4 +-
drivers/net/bonding/rte_pmd_bond_version.map | 47 +-
drivers/net/cxgbe/rte_pmd_cxgbe_version.map | 3 +-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 11 +-
drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +-
drivers/net/e1000/rte_pmd_e1000_version.map | 3 +-
drivers/net/ena/rte_pmd_ena_version.map | 3 +-
drivers/net/enetc/rte_pmd_enetc_version.map | 3 +-
drivers/net/enic/rte_pmd_enic_version.map | 3 +-
.../net/failsafe/rte_pmd_failsafe_version.map | 3 +-
drivers/net/fm10k/rte_pmd_fm10k_version.map | 3 +-
drivers/net/hinic/rte_pmd_hinic_version.map | 3 +-
drivers/net/hns3/rte_pmd_hns3_version.map | 4 +-
drivers/net/i40e/rte_pmd_i40e_version.map | 65 +-
drivers/net/iavf/rte_pmd_iavf_version.map | 3 +-
drivers/net/ice/rte_pmd_ice_version.map | 3 +-
drivers/net/ifc/rte_pmd_ifc_version.map | 3 +-
drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 3 +-
drivers/net/ixgbe/rte_pmd_ixgbe_version.map | 62 +-
drivers/net/kni/rte_pmd_kni_version.map | 3 +-
.../net/liquidio/rte_pmd_liquidio_version.map | 3 +-
drivers/net/memif/rte_pmd_memif_version.map | 5 +-
drivers/net/mlx4/rte_pmd_mlx4_version.map | 3 +-
drivers/net/mlx5/rte_pmd_mlx5_version.map | 2 +-
drivers/net/mvneta/rte_pmd_mvneta_version.map | 2 +-
drivers/net/mvpp2/rte_pmd_mvpp2_version.map | 2 +-
drivers/net/netvsc/rte_pmd_netvsc_version.map | 4 +-
drivers/net/nfb/rte_pmd_nfb_version.map | 3 +-
drivers/net/nfp/rte_pmd_nfp_version.map | 2 +-
drivers/net/null/rte_pmd_null_version.map | 3 +-
.../net/octeontx/rte_pmd_octeontx_version.map | 10 +-
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +-
drivers/net/pcap/rte_pmd_pcap_version.map | 3 +-
drivers/net/qede/rte_pmd_qede_version.map | 3 +-
drivers/net/ring/rte_pmd_ring_version.map | 10 +-
drivers/net/sfc/rte_pmd_sfc_version.map | 3 +-
.../net/softnic/rte_pmd_softnic_version.map | 2 +-
.../net/szedata2/rte_pmd_szedata2_version.map | 2 +-
drivers/net/tap/rte_pmd_tap_version.map | 3 +-
.../net/thunderx/rte_pmd_thunderx_version.map | 3 +-
.../rte_pmd_vdev_netvsc_version.map | 3 +-
drivers/net/vhost/rte_pmd_vhost_version.map | 11 +-
drivers/net/virtio/rte_pmd_virtio_version.map | 3 +-
.../net/vmxnet3/rte_pmd_vmxnet3_version.map | 3 +-
.../rte_rawdev_dpaa2_cmdif_version.map | 3 +-
.../rte_rawdev_dpaa2_qdma_version.map | 4 +-
.../raw/ifpga/rte_rawdev_ifpga_version.map | 3 +-
drivers/raw/ioat/rte_rawdev_ioat_version.map | 3 +-
drivers/raw/ntb/rte_rawdev_ntb_version.map | 5 +-
.../rte_rawdev_octeontx2_dma_version.map | 3 +-
.../skeleton/rte_rawdev_skeleton_version.map | 3 +-
lib/librte_acl/rte_acl_version.map | 2 +-
lib/librte_bbdev/rte_bbdev_version.map | 4 +
.../rte_bitratestats_version.map | 2 +-
lib/librte_bpf/rte_bpf_version.map | 4 +
lib/librte_cfgfile/rte_cfgfile_version.map | 34 +-
lib/librte_cmdline/rte_cmdline_version.map | 10 +-
.../rte_compressdev_version.map | 4 +
.../rte_cryptodev_version.map | 102 +-
lib/librte_distributor/Makefile | 2 +-
lib/librte_distributor/meson.build | 2 +-
lib/librte_distributor/rte_distributor.c | 80 +-
.../rte_distributor_private.h | 10 +-
...ributor_v20.c => rte_distributor_single.c} | 57 +-
...ributor_v20.h => rte_distributor_single.h} | 26 +-
.../rte_distributor_v1705.h | 61 --
.../rte_distributor_version.map | 16 +-
lib/librte_eal/rte_eal_version.map | 310 ++----
lib/librte_efd/rte_efd_version.map | 2 +-
lib/librte_ethdev/rte_ethdev_version.map | 160 +--
lib/librte_eventdev/rte_eventdev_version.map | 130 +--
.../rte_flow_classify_version.map | 4 +
lib/librte_gro/rte_gro_version.map | 2 +-
lib/librte_gso/rte_gso_version.map | 2 +-
lib/librte_hash/rte_hash_version.map | 43 +-
lib/librte_ip_frag/rte_ip_frag_version.map | 10 +-
lib/librte_ipsec/rte_ipsec_version.map | 4 +
lib/librte_jobstats/rte_jobstats_version.map | 10 +-
lib/librte_kni/rte_kni_version.map | 2 +-
lib/librte_kvargs/rte_kvargs_version.map | 4 +-
.../rte_latencystats_version.map | 2 +-
lib/librte_lpm/rte_lpm.c | 996 +-----------------
lib/librte_lpm/rte_lpm.h | 88 --
lib/librte_lpm/rte_lpm6.c | 132 +--
lib/librte_lpm/rte_lpm6.h | 25 -
lib/librte_lpm/rte_lpm_version.map | 39 +-
lib/librte_mbuf/rte_mbuf_version.map | 49 +-
lib/librte_member/rte_member_version.map | 2 +-
lib/librte_mempool/rte_mempool_version.map | 44 +-
lib/librte_meter/rte_meter_version.map | 13 +-
lib/librte_metrics/rte_metrics_version.map | 2 +-
lib/librte_net/rte_net_version.map | 23 +-
lib/librte_pci/rte_pci_version.map | 2 +-
lib/librte_pdump/rte_pdump_version.map | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 36 +-
lib/librte_port/rte_port_version.map | 64 +-
lib/librte_power/rte_power_version.map | 24 +-
lib/librte_rawdev/rte_rawdev_version.map | 4 +-
lib/librte_rcu/rte_rcu_version.map | 4 +
lib/librte_reorder/rte_reorder_version.map | 8 +-
lib/librte_ring/rte_ring_version.map | 10 +-
lib/librte_sched/rte_sched_version.map | 14 +-
lib/librte_security/rte_security_version.map | 2 +-
lib/librte_stack/rte_stack_version.map | 4 +
lib/librte_table/rte_table_version.map | 2 +-
.../rte_telemetry_version.map | 4 +
lib/librte_timer/rte_timer.c | 90 +-
lib/librte_timer/rte_timer.h | 15 -
lib/librte_timer/rte_timer_version.map | 12 +-
lib/librte_vhost/rte_vhost_version.map | 52 +-
lib/meson.build | 18 +-
meson_options.txt | 2 -
mk/rte.lib.mk | 13 +-
177 files changed, 1141 insertions(+), 2912 deletions(-)
create mode 100755 buildtools/check-abi-version.sh
create mode 100755 buildtools/update-abi.sh
create mode 100755 buildtools/update_version_map_abi.py
create mode 100644 config/ABI_VERSION
rename lib/librte_distributor/{rte_distributor_v20.c => rte_distributor_single.c} (84%)
rename lib/librte_distributor/{rte_distributor_v20.h => rte_distributor_single.h} (89%)
delete mode 100644 lib/librte_distributor/rte_distributor_v1705.h
--
2.17.1
* Re: [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global
2019-10-17 14:09 8% ` Luca Boccassi
@ 2019-10-17 14:12 4% ` Bruce Richardson
2019-10-18 10:07 7% ` Kevin Traynor
1 sibling, 0 replies; 200+ results
From: Bruce Richardson @ 2019-10-17 14:12 UTC (permalink / raw)
To: Luca Boccassi
Cc: Anatoly Burakov, Christian Ehrhardt, dev, Marcin Baran,
Thomas Monjalon, john.mcnamara, david.marchand, Pawel Modrak,
ktraynor
On Thu, Oct 17, 2019 at 03:09:00PM +0100, Luca Boccassi wrote:
> On Thu, 2019-10-17 at 09:44 +0100, Bruce Richardson wrote:
> > On Wed, Oct 16, 2019 at 06:03:36PM +0100, Anatoly Burakov wrote:
> > > From: Marcin Baran <marcinx.baran@intel.com>
> > >
> > > As per new ABI policy, all of the libraries are now versioned using
> > > one global ABI version. Changes in this patch implement the
> > > necessary steps to enable that.
> > >
> > > Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> > > Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
> > > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > > ---
> > >
> > > Notes:
> > > v3:
> > > - Removed Windows support from Makefile changes
> > > - Removed unneeded path conversions from meson files
> > >
> > > buildtools/meson.build | 2 ++
> > > config/ABI_VERSION | 1 +
> > > config/meson.build | 5 +++--
> > > drivers/meson.build | 20 ++++++++++++--------
> > > lib/meson.build | 18 +++++++++++-------
> > > meson_options.txt | 2 --
> > > mk/rte.lib.mk | 13 ++++---------
> > > 7 files changed, 33 insertions(+), 28 deletions(-)
> > > create mode 100644 config/ABI_VERSION
> > >
> > > diff --git a/buildtools/meson.build b/buildtools/meson.build
> > > index 32c79c1308..78ce69977d 100644
> > > --- a/buildtools/meson.build
> > > +++ b/buildtools/meson.build
> > > @@ -12,3 +12,5 @@ if python3.found()
> > > else
> > > map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
> > > endif
> > > +
> > > +is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
> > > diff --git a/config/ABI_VERSION b/config/ABI_VERSION
> > > new file mode 100644
> > > index 0000000000..9a7c1e503f
> > > --- /dev/null
> > > +++ b/config/ABI_VERSION
> > > @@ -0,0 +1 @@
> > > +20.0
> > > diff --git a/config/meson.build b/config/meson.build
> > > index a27f731f85..3cfc02406c 100644
> > > --- a/config/meson.build
> > > +++ b/config/meson.build
> > > @@ -17,7 +17,8 @@ endforeach
> > > # set the major version, which might be used by drivers and
> > > libraries
> > > # depending on the configuration options
> > > pver = meson.project_version().split('.')
> > > -major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
> > > +abi_version = run_command(find_program('cat', 'more'),
> > > + files('ABI_VERSION')).stdout().strip()
> > >
> > > # extract all version information into the build configuration
> > > dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
> > > @@ -37,7 +38,7 @@ endif
> > >
> > > pmd_subdir_opt = get_option('drivers_install_subdir')
> > > if pmd_subdir_opt.contains('<VERSION>')
> > > - pmd_subdir_opt =
> > > major_version.join(pmd_subdir_opt.split('<VERSION>'))
> > > + pmd_subdir_opt =
> > > abi_version.join(pmd_subdir_opt.split('<VERSION>'))
> > > endif
> >
> > This is an interesting change, and I'm not sure about it. I think for
> > user-visible changes, version should still refer to DPDK version
> > rather
> > than ABI version. Even with a stable ABI, it makes more sense to me
> > to find
> > the drivers in a 19.11 directory than a 20.0 one. Then again, the
> > drivers
> > should be re-usable across the one ABI version, so perhaps this is
> > the best
> > approach.
> >
> > Thoughts from others? Luca or Kevin, any thoughts from a packagers
> > perspective?
> >
> > /Bruce
>
> Hi,
>
> We are currently assembling this path using the ABI version in
> Debian/Ubuntu, as we want same-ABI libraries not to be co-installed,
> but instead to use the exact same name/path. So from our POV this
> change seems right.
>
Thanks for confirming, Luca.
* Re: [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global
2019-10-17 8:44 9% ` Bruce Richardson
2019-10-17 10:25 4% ` Burakov, Anatoly
@ 2019-10-17 14:09 8% ` Luca Boccassi
2019-10-17 14:12 4% ` Bruce Richardson
2019-10-18 10:07 7% ` Kevin Traynor
1 sibling, 2 replies; 200+ results
From: Luca Boccassi @ 2019-10-17 14:09 UTC (permalink / raw)
To: Bruce Richardson, Anatoly Burakov, Christian Ehrhardt
Cc: dev, Marcin Baran, Thomas Monjalon, john.mcnamara,
david.marchand, Pawel Modrak, ktraynor
On Thu, 2019-10-17 at 09:44 +0100, Bruce Richardson wrote:
> On Wed, Oct 16, 2019 at 06:03:36PM +0100, Anatoly Burakov wrote:
> > From: Marcin Baran <marcinx.baran@intel.com>
> >
> > As per new ABI policy, all of the libraries are now versioned using
> > one global ABI version. Changes in this patch implement the
> > necessary steps to enable that.
> >
> > Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> > Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
> > Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > ---
> >
> > Notes:
> > v3:
> > - Removed Windows support from Makefile changes
> > - Removed unneeded path conversions from meson files
> >
> > buildtools/meson.build | 2 ++
> > config/ABI_VERSION | 1 +
> > config/meson.build | 5 +++--
> > drivers/meson.build | 20 ++++++++++++--------
> > lib/meson.build | 18 +++++++++++-------
> > meson_options.txt | 2 --
> > mk/rte.lib.mk | 13 ++++---------
> > 7 files changed, 33 insertions(+), 28 deletions(-)
> > create mode 100644 config/ABI_VERSION
> >
> > diff --git a/buildtools/meson.build b/buildtools/meson.build
> > index 32c79c1308..78ce69977d 100644
> > --- a/buildtools/meson.build
> > +++ b/buildtools/meson.build
> > @@ -12,3 +12,5 @@ if python3.found()
> > else
> > map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
> > endif
> > +
> > +is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
> > diff --git a/config/ABI_VERSION b/config/ABI_VERSION
> > new file mode 100644
> > index 0000000000..9a7c1e503f
> > --- /dev/null
> > +++ b/config/ABI_VERSION
> > @@ -0,0 +1 @@
> > +20.0
> > diff --git a/config/meson.build b/config/meson.build
> > index a27f731f85..3cfc02406c 100644
> > --- a/config/meson.build
> > +++ b/config/meson.build
> > @@ -17,7 +17,8 @@ endforeach
> > # set the major version, which might be used by drivers and
> > libraries
> > # depending on the configuration options
> > pver = meson.project_version().split('.')
> > -major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
> > +abi_version = run_command(find_program('cat', 'more'),
> > + files('ABI_VERSION')).stdout().strip()
> >
> > # extract all version information into the build configuration
> > dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
> > @@ -37,7 +38,7 @@ endif
> >
> > pmd_subdir_opt = get_option('drivers_install_subdir')
> > if pmd_subdir_opt.contains('<VERSION>')
> > - pmd_subdir_opt =
> > major_version.join(pmd_subdir_opt.split('<VERSION>'))
> > + pmd_subdir_opt =
> > abi_version.join(pmd_subdir_opt.split('<VERSION>'))
> > endif
>
> This is an interesting change, and I'm not sure about it. I think for
> user-visible changes, version should still refer to DPDK version
> rather
> than ABI version. Even with a stable ABI, it makes more sense to me
> to find
> the drivers in a 19.11 directory than a 20.0 one. Then again, the
> drivers
> should be re-usable across the one ABI version, so perhaps this is
> the best
> approach.
>
> Thoughts from others? Luca or Kevin, any thoughts from a packagers
> perspective?
>
> /Bruce
Hi,
We are currently assembling this path using the ABI version in
Debian/Ubuntu, as we want same-ABI libraries not to be co-installed,
but instead to use the exact same name/path. So from our POV this
change seems right.
--
Kind regards,
Luca Boccassi
* Re: [dpdk-dev] [PATCH] mbuf: support dynamic fields and flags
2019-10-17 11:58 0% ` Ananyev, Konstantin
@ 2019-10-17 12:58 0% ` Olivier Matz
0 siblings, 0 replies; 200+ results
From: Olivier Matz @ 2019-10-17 12:58 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: dev, Thomas Monjalon, Wang, Haiyue, Stephen Hemminger,
Andrew Rybchenko, Wiles, Keith, Jerin Jacob Kollanukkaran
Hi Konstantin,
On Thu, Oct 17, 2019 at 11:58:52AM +0000, Ananyev, Konstantin wrote:
>
> Hi Olivier,
>
> > > > Many features require to store data inside the mbuf. As the room in mbuf
> > > > structure is limited, it is not possible to have a field for each
> > > > feature. Also, changing fields in the mbuf structure can break the API
> > > > or ABI.
> > > >
> > > > This commit addresses these issues, by enabling the dynamic registration
> > > > of fields or flags:
> > > >
> > > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > > given size (>= 1 byte) and alignment constraint.
> > > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > > >
> > > > The typical use case is a PMD that registers space for an offload
> > > > feature, when the application requests to enable this feature. As
> > > > the space in mbuf is limited, the space should only be reserved if it
> > > > is going to be used (i.e when the application explicitly asks for it).
> > > >
> > > > The registration can be done at any moment, but it is not possible
> > > > to unregister fields or flags for now.
> > >
> > > Looks ok to me in general.
> > > Some comments/suggestions inline.
> > > Konstantin
> > >
> > > >
> > > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > > ---
> > > >
> > > > rfc -> v1
> > > >
> > > > * Rebase on top of master
> > > > * Change registration API to use a structure instead of
> > > > variables, getting rid of #defines (Stephen's comment)
> > > > * Update flag registration to use a similar API as fields.
> > > > * Change max name length from 32 to 64 (sugg. by Thomas)
> > > > * Enhance API documentation (Haiyue's and Andrew's comments)
> > > > * Add a debug log at registration
> > > > * Add some words in release note
> > > > * Did some performance tests (sugg. by Andrew):
> > > > On my platform, reading a dynamic field takes ~3 cycles more
> > > > than a static field, and ~2 cycles more for writing.
> > > >
> > > > app/test/test_mbuf.c | 114 ++++++-
> > > > doc/guides/rel_notes/release_19_11.rst | 7 +
> > > > lib/librte_mbuf/Makefile | 2 +
> > > > lib/librte_mbuf/meson.build | 6 +-
> > > > lib/librte_mbuf/rte_mbuf.h | 25 +-
> > > > lib/librte_mbuf/rte_mbuf_dyn.c | 408 +++++++++++++++++++++++++
> > > > lib/librte_mbuf/rte_mbuf_dyn.h | 163 ++++++++++
> > > > lib/librte_mbuf/rte_mbuf_version.map | 4 +
> > > > 8 files changed, 724 insertions(+), 5 deletions(-)
> > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> > > >
> > > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > > @@ -198,9 +198,12 @@ extern "C" {
> > > > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > > > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> > > >
> > > > -/* add new RX flags here */
> > > > +/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> > > >
> > > > -/* add new TX flags here */
> > > > +#define PKT_FIRST_FREE (1ULL << 23)
> > > > +#define PKT_LAST_FREE (1ULL << 39)
> > > > +
> > > > +/* add new TX flags here, don't forget to update PKT_LAST_FREE */
> > > >
> > > > /**
> > > > * Indicate that the metadata field in the mbuf is in use.
> > > > @@ -738,6 +741,8 @@ struct rte_mbuf {
> > > > */
> > > > struct rte_mbuf_ext_shared_info *shinfo;
> > > >
> > > > + uint64_t dynfield1; /**< Reserved for dynamic fields. */
> > > > + uint64_t dynfield2; /**< Reserved for dynamic fields. */
> > >
> > > Wonder why just not one field:
> > > union {
> > > uint8_t u8[16];
> > > ...
> > > uint64_t u64[2];
> > > } dyn_field1;
> > > ?
> > > Probably would be a bit handy, to refer, register, etc. no?
> >
> > I didn't find any place where we need an access through u8, so I
> > just changed it into uint64_t dynfield1[2].
>
> My thought was - if you'll have all dynamic stuff as one field (uint64_t dyn_field[2]),
> then you wouldn't need any cycles at register() at all.
> But up to you.
I changed it.
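For illustration, with the single array field the copy helper discussed earlier can treat the reserved area as one opaque blob — a sketch against a mock struct, not the actual rte_mbuf:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* mock stand-in for the real rte_mbuf; only the reserved area matters here */
struct mock_mbuf {
	uint64_t dynfield1[2]; /* reserved for dynamic fields */
};

/* with one array field, the copy is a single opaque memcpy */
static inline void
mock_dynfield_copy(struct mock_mbuf *m_dst, const struct mock_mbuf *m_src)
{
	memcpy(m_dst->dynfield1, m_src->dynfield1, sizeof(m_dst->dynfield1));
}
```

Adding another reserved word later only grows the array size; the helper itself needs no change, which is one argument for the array layout.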
> >
> > >
> > > > } __rte_cache_aligned;
> > > >
> > > > /**
> > > > @@ -1684,6 +1689,21 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
> > > > */
> > > > #define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
> > > >
> > > > +/**
> > > > + * Copy dynamic fields from m_src to m_dst.
> > > > + *
> > > > + * @param m_dst
> > > > + * The destination mbuf.
> > > > + * @param m_src
> > > > + * The source mbuf.
> > > > + */
> > > > +static inline void
> > > > +rte_mbuf_dynfield_copy(struct rte_mbuf *m_dst, const struct rte_mbuf *m_src)
> > > > +{
> > > > + m_dst->dynfield1 = m_src->dynfield1;
> > > > + m_dst->dynfield2 = m_src->dynfield2;
> > > > +}
> > > > +
> > > > /**
> > > > * Attach packet mbuf to another packet mbuf.
> > > > *
> > > > @@ -1732,6 +1752,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
> > > > mi->vlan_tci_outer = m->vlan_tci_outer;
> > > > mi->tx_offload = m->tx_offload;
> > > > mi->hash = m->hash;
> > > > + rte_mbuf_dynfield_copy(mi, m);
> > > >
> > > > mi->next = NULL;
> > > > mi->pkt_len = mi->data_len;
> > > > diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
> > > > new file mode 100644
> > > > index 000000000..13b8742d0
> > > > --- /dev/null
> > > > +++ b/lib/librte_mbuf/rte_mbuf_dyn.c
> > > > @@ -0,0 +1,408 @@
> > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > + * Copyright 2019 6WIND S.A.
> > > > + */
> > > > +
> > > > +#include <sys/queue.h>
> > > > +
> > > > +#include <rte_common.h>
> > > > +#include <rte_eal.h>
> > > > +#include <rte_eal_memconfig.h>
> > > > +#include <rte_tailq.h>
> > > > +#include <rte_errno.h>
> > > > +#include <rte_malloc.h>
> > > > +#include <rte_string_fns.h>
> > > > +#include <rte_mbuf.h>
> > > > +#include <rte_mbuf_dyn.h>
> > > > +
> > > > +#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
> > > > +
> > > > +struct mbuf_dynfield_elt {
> > > > + TAILQ_ENTRY(mbuf_dynfield_elt) next;
> > > > + struct rte_mbuf_dynfield params;
> > > > + int offset;
> > >
> > > Why not 'size_t offset', to avoid any explicit conversions, etc?
> >
> > Fixed
> >
> >
> > > > +};
> > > > +TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
> > > > +
> > > > +static struct rte_tailq_elem mbuf_dynfield_tailq = {
> > > > + .name = "RTE_MBUF_DYNFIELD",
> > > > +};
> > > > +EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
> > > > +
> > > > +struct mbuf_dynflag_elt {
> > > > + TAILQ_ENTRY(mbuf_dynflag_elt) next;
> > > > + struct rte_mbuf_dynflag params;
> > > > + int bitnum;
> > > > +};
> > > > +TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
> > > > +
> > > > +static struct rte_tailq_elem mbuf_dynflag_tailq = {
> > > > + .name = "RTE_MBUF_DYNFLAG",
> > > > +};
> > > > +EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
> > > > +
> > > > +struct mbuf_dyn_shm {
> > > > + /** For each mbuf byte, free_space[i] == 1 if space is free. */
> > > > + uint8_t free_space[sizeof(struct rte_mbuf)];
> > > > + /** Bitfield of available flags. */
> > > > + uint64_t free_flags;
> > > > +};
> > > > +static struct mbuf_dyn_shm *shm;
> > > > +
> > > > +/* allocate and initialize the shared memory */
> > > > +static int
> > > > +init_shared_mem(void)
> > > > +{
> > > > + const struct rte_memzone *mz;
> > > > + uint64_t mask;
> > > > +
> > > > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > > > + mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
> > > > + sizeof(struct mbuf_dyn_shm),
> > > > + SOCKET_ID_ANY, 0,
> > > > + RTE_CACHE_LINE_SIZE);
> > > > + } else {
> > > > + mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
> > > > + }
> > > > + if (mz == NULL)
> > > > + return -1;
> > > > +
> > > > + shm = mz->addr;
> > > > +
> > > > +#define mark_free(field) \
> > > > + memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
> > > > + 0xff, sizeof(((struct rte_mbuf *)0)->field))
> > >
> > > I think you can avoid defining/undefining macros here by something like that:
> > >
> > > static const struct {
> > > size_t offset;
> > > size_t size;
> > > } dyn_syms[] = {
> > > [0] = {.offset = offsetof(struct rte_mbuf, dynfield1), .size = sizeof(((struct rte_mbuf *)0)->dynfield1)},
> > > [1] = {.offset = offsetof(struct rte_mbuf, dynfield2), .size = sizeof(((struct rte_mbuf *)0)->dynfield2)},
> > > };
> > > ...
> > >
> > > for (i = 0; i != RTE_DIM(dyn_syms); i++)
> > > memset(shm->free_space + dyn_syms[i].offset, UINT8_MAX, dyn_syms[i].size);
> > >
> >
> > I tried it, but the following lines are too long
> > [0] = {offsetof(struct rte_mbuf, dynfield1), sizeof(((struct rte_mbuf *)0)->dynfield1)},
> > [1] = {offsetof(struct rte_mbuf, dynfield2), sizeof(((struct rte_mbuf *)0)->dynfield2)},
> > To make them shorter, we can use a macro... but... wait :)
>
> Guess what, you can put offset and size on different lines :)
> [0] = {
> .offset = offsetof(struct rte_mbuf, dynfield1),
> .size = sizeof(((struct rte_mbuf *)0)->dynfield1),
> },
Yes, but honestly, I'm not sure that it will be more readable than
the macro, knowing that we could add fields in the future.
> ....
>
> >
> > > > +
> > > > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > > > + /* init free_space, keep it sync'd with
> > > > + * rte_mbuf_dynfield_copy().
> > > > + */
> > > > + memset(shm, 0, sizeof(*shm));
> > > > + mark_free(dynfield1);
> > > > + mark_free(dynfield2);
> > > > +
> > > > + /* init free_flags */
> > > > + for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
> > > > + shm->free_flags |= mask;
> > > > + }
> > > > +#undef mark_free
> > > > +
> > > > + return 0;
> > > > +}
> > > > +
> > > > +/* check if this offset can be used */
> > > > +static int
> > > > +check_offset(size_t offset, size_t size, size_t align, unsigned int flags)
> > > > +{
> > > > + size_t i;
> > > > +
> > > > + (void)flags;
> > >
> > >
> > > We have RTE_SET_USED() for such cases...
> > > Though as it is an internal function probably better not to introduce
> > > unused parameters at all.
> >
> > I removed the flag parameter as you suggested.
> >
> >
> > > > +
> > > > + if ((offset & (align - 1)) != 0)
> > > > + return -1;
> > > > + if (offset + size > sizeof(struct rte_mbuf))
> > > > + return -1;
> > > > +
> > > > + for (i = 0; i < size; i++) {
> > > > + if (!shm->free_space[i + offset])
> > > > + return -1;
> > > > + }
> > > > +
> > > > + return 0;
> > > > +}
> > > > +
> > > > +/* assume tailq is locked */
> > > > +static struct mbuf_dynfield_elt *
> > > > +__mbuf_dynfield_lookup(const char *name)
> > > > +{
> > > > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > > > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > > > + struct rte_tailq_entry *te;
> > > > +
> > > > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > > > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > > > +
> > > > + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> > > > + mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
> > > > + if (strcmp(name, mbuf_dynfield->params.name) == 0)
> > > > + break;
> > > > + }
> > > > +
> > > > + if (te == NULL) {
> > > > + rte_errno = ENOENT;
> > > > + return NULL;
> > > > + }
> > > > +
> > > > + return mbuf_dynfield;
> > > > +}
> > > > +
> > > > +int
> > > > +rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
> > > > +{
> > > > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > > > +
> > > > + if (shm == NULL) {
> > > > + rte_errno = ENOENT;
> > > > + return -1;
> > > > + }
> > > > +
> > > > + rte_mcfg_tailq_read_lock();
> > > > + mbuf_dynfield = __mbuf_dynfield_lookup(name);
> > > > + rte_mcfg_tailq_read_unlock();
> > > > +
> > > > + if (mbuf_dynfield == NULL) {
> > > > + rte_errno = ENOENT;
> > > > + return -1;
> > > > + }
> > > > +
> > > > + if (params != NULL)
> > > > + memcpy(params, &mbuf_dynfield->params, sizeof(*params));
> > > > +
> > > > + return mbuf_dynfield->offset;
> > > > +}
> > > > +
> > > > +static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
> > > > + const struct rte_mbuf_dynfield *params2)
> > > > +{
> > > > + if (strcmp(params1->name, params2->name))
> > > > + return -1;
> > > > + if (params1->size != params2->size)
> > > > + return -1;
> > > > + if (params1->align != params2->align)
> > > > + return -1;
> > > > + if (params1->flags != params2->flags)
> > > > + return -1;
> > > > + return 0;
> > > > +}
> > > > +
> > > > +int
> > > > +rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
> > >
> > > What I meant at user-space - if we can also have another function that would allow
> > > user to specify required offset for dynfield explicitly, then user can define it as constant
> > > value and let compiler do optimization work and hopefully generate faster code to access
> > > this field.
> > > Something like that:
> > >
> > > int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params, size_t offset);
> > >
> > > #define RTE_MBUF_DYNFIELD_OFFSET(fld, off) (offsetof(struct rte_mbuf, fld) + (off))
> > >
> > > And then somewhere in user code:
> > >
> > > /* to let say reserve first 4B in dynfield1*/
> > > #define MBUF_DYNFIELD_A RTE_MBUF_DYNFIELD_OFFSET(dynfiled1, 0)
> > > ...
> > > params.name = RTE_STR(MBUF_DYNFIELD_A);
> > > params.size = sizeof(uint32_t);
> > > params.align = sizeof(uint32_t);
> > > ret = rte_mbuf_dynfield_register_offset(&params, MBUF_DYNFIELD_A);
> > > if (ret != MBUF_DYNFIELD_A) {
> > > /* handle it somehow, probably just terminate gracefully... */
> > > }
> > > ...
> > >
> > > /* to let say reserve last 2B in dynfield2*/
> > > #define MBUF_DYNFIELD_B RTE_MBUF_DYNFIELD_OFFSET(dynfiled2, 6)
> > > ...
> > > params.name = RTE_STR(MBUF_DYNFIELD_B);
> > > params.size = sizeof(uint16_t);
> > > params.align = sizeof(uint16_t);
> > > ret = rte_mbuf_dynfield_register_offset(&params, MBUF_DYNFIELD_B);
> > >
> > > After that user can use constant offsets MBUF_DYNFIELD_A/ MBUF_DYNFIELD_B
> > > to access these fields.
> > > Same thoughts for DYNFLAG.
> >
> > I added the feature in v2.
> >
> >
> > > > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > > > + struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
> > > > + struct rte_tailq_entry *te = NULL;
> > > > + int offset, ret;
> > >
> > > size_t offset
> > > to avoid explicit conversions, etc.?
> > >
> >
> > Fixed.
> >
> >
> > > > + size_t i;
> > > > +
> > > > + if (shm == NULL && init_shared_mem() < 0)
> > > > + goto fail;
> > >
> > > As I understand, here you allocate/initialize your shm without any lock protection,
> > > though later you protect it via rte_mcfg_tailq_write_lock().
> > > That seems a bit flakey to me.
> > > Why not to store information about free dynfield bytes inside mbuf_dynfield_tailq?
> > > Let say at init() create and add an entry into that list with some reserved name.
> > > Then at register - grab mcfg_tailq_write_lock and do lookup
> > > for such entry and then read/update it as needed.
> > > It would help to avoid racing problem, plus you wouldn't need to
> > > allocate/lookup for memzone.
> >
> > I don't quite like the idea of having a special entry with a different type
> > in an element list. Although it is simpler from a locking perspective, it is
> > less obvious for the developer.
> >
> > Also, I changed the way a zone is reserved to return the one that has the
> > least impact on the next reservation, and I feel it is easier to implement
> > with the shared memory.
> >
> > So, I just moved the init_shared_mem() inside the rte_mcfg_tailq_write_lock(),
> > it should do the job.
>
> Yep, that should work too, I think.
>
> >
> >
> > > > + if (params->size >= sizeof(struct rte_mbuf)) {
> > > > + rte_errno = EINVAL;
> > > > + goto fail;
> > > > + }
> > > > + if (!rte_is_power_of_2(params->align)) {
> > > > + rte_errno = EINVAL;
> > > > + goto fail;
> > > > + }
> > > > + if (params->flags != 0) {
> > > > + rte_errno = EINVAL;
> > > > + goto fail;
> > > > + }
> > > > +
> > > > + rte_mcfg_tailq_write_lock();
> > > > +
> > >
> > > I think it probably would be cleaner and easier to read/maintain, if you'll put actual
> > > code under lock protection into a separate function - as you did for __mbuf_dynfield_lookup().
> >
> > Yes, I did that, it should be clearer now.
> >
> >
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-16 22:07 3% ` Ananyev, Konstantin
@ 2019-10-17 12:49 0% ` Ananyev, Konstantin
2019-10-18 13:17 4% ` Akhil Goyal
1 sibling, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2019-10-17 12:49 UTC (permalink / raw)
To: Ananyev, Konstantin, Akhil Goyal, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Zhang, Roy Fan, Doherty, Declan
Cc: 'Anoob Joseph'
>
> > > > User can use the same session, that is what I am also insisting, but it may have
> > > separate
> > > > Session private data. Cryptodev session create API provide that functionality
> > > and we can
> > > > Leverage that.
> > >
> > > rte_cryptodev_sym_session. sess_data[] is indexed by driver_id, which means
> > > we can't use
> > > the same rte_cryptodev_sym_session to hold sessions for both sync and async
> > > mode
> > > for the same device. Off course we can add a hard requirement that any driver
> > > that wants to
> > > support process() has to create sessions that can handle both process and
> > > enqueue/dequeue,
> > > but then again what for to create such overhead?
> > >
> > > BTW, to be honest, I don't consider current rte_cryptodev_sym_session
> > > construct for multiple device_ids:
> > > __extension__ struct {
> > > void *data;
> > > uint16_t refcnt;
> > > } sess_data[0];
> > > /**< Driver specific session material, variable size */
> > >
> > Yes I also feel the same. I was also not in favor of this when it was introduced.
> > Please go ahead and remove this. I have no issues with that.
>
> If you are not happy with that structure, and admit there are issues with it,
> why do you push for reusing it for cpu-crypto API?
> Why not to take step back, take into account current drawbacks
> and define something that (hopefully) would suite us better?
> Again new API will be experimental for some time, so we'll
> have some opportunity to see does it works and if not fix it.
>
> About removing data[] from existing rte_cryptodev_sym_session -
> Personally would like to do that, but the change seems to be too massive.
> Definitely not ready for such effort right now.
>
> >
> > > as an advantage.
> > > It looks too error prone for me:
> > > 1. Simultaneous session initialization/de-initialization for devices with the same
> > > driver_id is not possible.
> > > 2. It assumes that all device driver will be loaded before we start to create
> > > session pools.
> > >
> > > Right now it seems ok, as no-one requires such functionality, but I don't know
> > > how it will be in future.
> > > For me the rte_security session model, where for each security context the user has to
> > > create a new session,
> > > looks much more robust.
> > Agreed
> >
> > >
> > > >
> > > > BTW, I can see a v2 to this RFC which is still based on security library.
> > >
> > > Yes, v2 was concentrated on fixing found issues, some code restructuring,
> > > i.e. - changes that would be needed anyway whatever API approach we'll choose.
> > >
> > > > When do you plan
> > > > To submit the patches for crypto based APIs. We have RC1 merge deadline for
> > > this
> > > > patchset on 21st Oct.
> > >
> > > We'd like to start working on it ASAP, but it seems we still have a major
> > > disagreement
> > > about how this crypto-dev API should look.
> > > Which makes me think - should we return to our original proposal via
> > > rte_security?
> > > It still looks to me like clean and straightforward way to enable this new API,
> > > and probably wouldn't cause that much controversy.
> > > What do you think?
> >
> > I cannot spend more time discussing on this until RC1 date. I have some other stuff pending.
> > You can send the patches early next week with the approach that I mentioned or else we
> > can discuss this post RC1(which would mean deferring to 20.02).
> >
> > But moving back to security is not acceptable to me. The code should be put where it is
> > intended and not where it is easy to put. You are not doing any rte_security stuff.
> >
>
> Ok, then my suggestion:
> Let's at least write down all points about crypto-dev approach where we
> disagree and then probably try to resolve them one by one....
> If we fail to make an agreement/progress in next week or so,
> (and no more reviews from the community)
> we will have to bring that subject to the TB meeting to decide.
> Sounds fair to you?
>
> List is below.
> Please add/correct me, if I missed something.
>
> Konstantin
>
> 1. extra input parameters to create/init rte_(cpu)_sym_session.
>
> Will leverage existing 6B gap inside rte_crypto_*_xform between 'algo' and 'key' fields.
> New fields will be optional and would be used by PMD only when cpu-crypto session is requested.
> For lksd-crypto session PMD is free to ignore these fields.
> No ABI breakage is required.
>
> Hopefully no controversy here with #1.
>
> 2. cpu-crypto create/init.
> a) Our suggestion - introduce new API for that:
> - rte_crypto_cpu_sym_init() that would init completely opaque rte_crypto_cpu_sym_session.
> > - struct rte_crypto_cpu_sym_session_ops {(*process)(...); (*clear); /*whatever else we'll need */};
> - rte_crypto_cpu_sym_get_ops(const struct rte_crypto_sym_xform *xforms)
> > that would return a const struct rte_crypto_cpu_sym_session_ops * based on input xforms.
> Advantages:
> > 1) totally opaque data structure (no ABI breakages in future), PMD writer is totally free
> > with its format and contents.
> 2) each session entity is self-contained, user doesn't need to bring along dev_id etc.
> dev_id is needed only at init stage, after that user will use session ops to perform
> all operations on that session (process(), clear(), etc.).
> 3) User can decide does he wants to store ops[] pointer on a per session basis,
> or on a per group of same sessions, or...
> 4) No mandatory mempools for private sessions. User can allocate memory for cpu-crypto
> session whenever he likes.
> Disadvantages:
> 5) Extra changes in control path
> 6) User has to store session_ops pointer explicitly.
After another thought, if 2.a.6 is really that big a deal, we can have a small shim layer on top:
rte_crypto_cpu_sym_session { void *ses; struct rte_crypto_cpu_sym_session_ops * const ops; }
OR even
rte_crypto_cpu_sym_session { void *ses; struct rte_crypto_cpu_sym_session_ops ops; }
And merge rte_crypto_cpu_sym_init() and rte_crypto_cpu_sym_get_ops() into one (init).
Then process() can become a wrapper:
rte_crypto_cpu_sym_process(ses, ...) {return ses->ops->process(ses->ses, ...);}
OR
rte_crypto_cpu_sym_process(ses, ...) {return ses->ops.process(ses->ses, ...);}
if that would help to reach consensus - works for me.
> b) Your suggestion - reuse existing rte_cryptodev_sym_session_init() and existing rte_cryptodev_sym_session
> structure.
> Advantages:
> 1) allows to reuse same struct and init/create/clear() functions.
> Probably less changes in control path.
> Disadvantages:
> 2) rte_cryptodev_sym_session. sess_data[] is indexed by driver_id, which means that
> we can't use the same rte_cryptodev_sym_session to hold private sessions pointers
> for both sync and async mode for the same device.
> > So the only option we have - make PMD devops->sym_session_configure()
> always create a session that can work in both cpu and lksd modes.
> For some implementations that would probably mean that under the hood PMD would create
> 2 different session structs (sync/async) and then use one or another depending on from what API been called.
> Seems doable, but ...:
> - will contradict with statement from 1:
> " New fields will be optional and would be used by PMD only when cpu-crypto session is requested."
> Now it becomes mandatory for all apps to specify cpu-crypto related parameters too,
> even if they don't plan to use that mode - i.e. behavior change, existing app change.
> - might cause extra space overhead.
> 3) not possible to store device (not driver) specific data within the session, but I think it is not really needed right now.
> So probably minor compared to 2.b.2.
>
> Actually #3 follows from #2, but decided to have them separated.
>
> 3. process() parameters/behavior
> a) Our suggestion: user stores ptr to session ops (or to (*process) itself) and just does:
> session_ops->process(sess, ...);
> Advantages:
> 1) fastest possible execution path
> 2) no need to carry on dev_id for data-path
> Disadvantages:
> 3) user has to carry on session_ops pointer explicitly
> b) Your suggestion: add (*cpu_process) inside rte_cryptodev_ops and then:
> rte_crypto_cpu_sym_process(uint8_t dev_id, rte_cryptodev_sym_session *sess, /*data parameters*/) {...
> rte_cryptodevs[dev_id].dev_ops->cpu_process(ses, ...);
> /* and then inside PMD-specific process: */
> pmd_private_session = sess->sess_data[this_pmd_driver_id].data;
> /* and then most likely either */
> pmd_private_session->process(pmd_private_session, ...);
> /* or jump based on session/input data */
> Advantages:
> 1) don't see any...
> Disadvantages:
> 2) User has to carry on dev_id inside data-path
> 3) Extra level of indirection (plus data dependency) - both for data and instructions.
> Possible slowdown compared to a) (not measured).
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v9 1/3] eal/arm64: add 128-bit atomic compare exchange
2019-10-16 9:04 4% ` Phil Yang (Arm Technology China)
@ 2019-10-17 12:45 0% ` David Marchand
0 siblings, 0 replies; 200+ results
From: David Marchand @ 2019-10-17 12:45 UTC (permalink / raw)
To: Phil Yang (Arm Technology China)
Cc: thomas, jerinj, Gage Eads, dev, hemant.agrawal,
Honnappa Nagarahalli, Gavin Hu (Arm Technology China),
nd
On Wed, Oct 16, 2019 at 11:04 AM Phil Yang (Arm Technology China)
<Phil.Yang@arm.com> wrote:
>
> > -----Original Message-----
> > From: David Marchand <david.marchand@redhat.com>
> > Sent: Tuesday, October 15, 2019 8:16 PM
> > To: Phil Yang (Arm Technology China) <Phil.Yang@arm.com>
> > Cc: thomas@monjalon.net; jerinj@marvell.com; Gage Eads
> > <gage.eads@intel.com>; dev <dev@dpdk.org>; hemant.agrawal@nxp.com;
> > Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Gavin Hu (Arm
> > Technology China) <Gavin.Hu@arm.com>; nd <nd@arm.com>
> > Subject: Re: [dpdk-dev] [PATCH v9 1/3] eal/arm64: add 128-bit atomic
> > compare exchange
> >
> > On Tue, Oct 15, 2019 at 1:32 PM Phil Yang (Arm Technology China)
> > <Phil.Yang@arm.com> wrote:
> > > > -----Original Message-----
> > > > From: David Marchand <david.marchand@redhat.com>
> > > > If LSE is available, we expose __rte_cas_XX (explicitly) *non*
> > > > inlined functions, while without LSE, we expose inlined __rte_ldr_XX
> > > > and __rte_stx_XX functions.
> > > > So we have a first disparity with non-inlined vs inlined functions
> > > > depending on a #ifdef.
> >
> > You did not comment on the inline / no inline part and I still see
> > this in the v10.
> > Is this __rte_noinline on the CAS function intentional?
>
> Apologies for missing this item. Yes, it is to avoid an ABI break.
> Please check
> 5b40ec6b966260e0ff66a8a2c689664f75d6a0e6 ("mempool/octeontx2: fix possible arm64 ABI break")
Looked at the kernel parts on LSE CAS (thanks for the pointer) but I
see inlines are used:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/arch/arm64/include/asm/atomic_lse.h#n365?h=v5.4-rc3
What is special in the kernel or in dpdk that makes this different?
>
> >
> >
> > > > Then, we have a second disparity with two sets of "apis" depending on
> > > > this #ifdef.
> > > >
> > > > And we expose those sets with a rte_ prefix, meaning people will try
> > > > to use them, but those are not part of a public api.
> > > >
> > > > Can't we do without them ? (see below [2] for a proposal with ldr/stx,
> > > > cas should be the same)
> > >
> > > No, it doesn't work.
> > > Because we need to verify the return value at the end of the loop for these
> > macros.
> >
> > Do you mean the return value for the stores?
>
> It is my bad. I missed the ret option in the macro. This approach works.
Ok, thanks for confirming.
>
> However, I suggest keeping them as static inline functions rather than a piece of macro in the rte_atomic128_cmp_exchange API.
> One reason is that the API names can indicate the memory ordering of these operations.
API?
Those inlines are not part of a public API and we agree this patch is
not about adding 128-bit load/store APIs.
My proposal gives us small code that looks like:
if (ldx_mo == __ATOMIC_RELAXED)
__READ_128("ldxp", dst, old);
else
__READ_128("ldaxp", dst, old);
I am not a memory order guru, but with this, I can figure the asm
instruction depends on it.
And, since we are looking at internals of an implementation, this is
mainly for people looking at/maintaining these low level details.
> Moreover, it uses the register type to pass the value in the inline function, so it should not have too much cost compared with the macro.
This is not a problem of cost, this is about hiding architecture
details from the final user.
If you expose something, you can expect someone will start using it
and will complain later if you break it.
> I also think these 128-bit load and store functions can be used in other places, once they have proved valuable in the rte_atomic128_cmp_exchange API. But let's keep them private for the current stage.
Yes I agree this could be introduced in the future.
> BTW, Linux kernel implemented in the same way. https://github.com/torvalds/linux/blob/master/arch/arm64/include/asm/atomic_lse.h#L19
Ok, the kernel exposes its internals, but I think kernel developers are
more vigilant than dpdk developers about what is part of the public API
and what is internal.
--
David Marchand
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] mbuf: support dynamic fields and flags
2019-10-17 7:54 0% ` Olivier Matz
@ 2019-10-17 11:58 0% ` Ananyev, Konstantin
2019-10-17 12:58 0% ` Olivier Matz
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2019-10-17 11:58 UTC (permalink / raw)
To: Olivier Matz
Cc: dev, Thomas Monjalon, Wang, Haiyue, Stephen Hemminger,
Andrew Rybchenko, Wiles, Keith, Jerin Jacob Kollanukkaran
Hi Olivier,
> > > Many features require to store data inside the mbuf. As the room in mbuf
> > > structure is limited, it is not possible to have a field for each
> > > feature. Also, changing fields in the mbuf structure can break the API
> > > or ABI.
> > >
> > > This commit addresses these issues, by enabling the dynamic registration
> > > of fields or flags:
> > >
> > > - a dynamic field is a named area in the rte_mbuf structure, with a
> > > given size (>= 1 byte) and alignment constraint.
> > > - a dynamic flag is a named bit in the rte_mbuf structure.
> > >
> > > The typical use case is a PMD that registers space for an offload
> > > feature, when the application requests to enable this feature. As
> > > the space in mbuf is limited, the space should only be reserved if it
> > > is going to be used (i.e when the application explicitly asks for it).
> > >
> > > The registration can be done at any moment, but it is not possible
> > > to unregister fields or flags for now.
> >
> > Looks ok to me in general.
> > Some comments/suggestions inline.
> > Konstantin
> >
> > >
> > > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > > ---
> > >
> > > rfc -> v1
> > >
> > > * Rebase on top of master
> > > * Change registration API to use a structure instead of
> > > variables, getting rid of #defines (Stephen's comment)
> > > * Update flag registration to use a similar API as fields.
> > > * Change max name length from 32 to 64 (sugg. by Thomas)
> > > * Enhance API documentation (Haiyue's and Andrew's comments)
> > > * Add a debug log at registration
> > > * Add some words in release note
> > > * Did some performance tests (sugg. by Andrew):
> > > On my platform, reading a dynamic field takes ~3 cycles more
> > > than a static field, and ~2 cycles more for writing.
> > >
> > > app/test/test_mbuf.c | 114 ++++++-
> > > doc/guides/rel_notes/release_19_11.rst | 7 +
> > > lib/librte_mbuf/Makefile | 2 +
> > > lib/librte_mbuf/meson.build | 6 +-
> > > lib/librte_mbuf/rte_mbuf.h | 25 +-
> > > lib/librte_mbuf/rte_mbuf_dyn.c | 408 +++++++++++++++++++++++++
> > > lib/librte_mbuf/rte_mbuf_dyn.h | 163 ++++++++++
> > > lib/librte_mbuf/rte_mbuf_version.map | 4 +
> > > 8 files changed, 724 insertions(+), 5 deletions(-)
> > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> > >
> > > --- a/lib/librte_mbuf/rte_mbuf.h
> > > +++ b/lib/librte_mbuf/rte_mbuf.h
> > > @@ -198,9 +198,12 @@ extern "C" {
> > > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> > >
> > > -/* add new RX flags here */
> > > +/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> > >
> > > -/* add new TX flags here */
> > > +#define PKT_FIRST_FREE (1ULL << 23)
> > > +#define PKT_LAST_FREE (1ULL << 39)
> > > +
> > > +/* add new TX flags here, don't forget to update PKT_LAST_FREE */
> > >
> > > /**
> > > * Indicate that the metadata field in the mbuf is in use.
> > > @@ -738,6 +741,8 @@ struct rte_mbuf {
> > > */
> > > struct rte_mbuf_ext_shared_info *shinfo;
> > >
> > > + uint64_t dynfield1; /**< Reserved for dynamic fields. */
> > > + uint64_t dynfield2; /**< Reserved for dynamic fields. */
> >
> > Wonder why just not one field:
> > union {
> > uint8_t u8[16];
> > ...
> > uint64_t u64[2];
> > } dyn_field1;
> > ?
> > Probably would be a bit handy, to refer, register, etc. no?
>
> I didn't find any place where we need an access through u8, so I
> just changed it into uint64_t dynfield1[2].
My thought was - if you'll have all dynamic stuff as one field (uint64_t dyn_field[2]),
then you wouldn't need any cycles at register() at all.
But up to you.
>
>
> >
> > > } __rte_cache_aligned;
> > >
> > > /**
> > > @@ -1684,6 +1689,21 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
> > > */
> > > #define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
> > >
> > > +/**
> > > + * Copy dynamic fields from m_src to m_dst.
> > > + *
> > > + * @param m_dst
> > > + * The destination mbuf.
> > > + * @param m_src
> > > + * The source mbuf.
> > > + */
> > > +static inline void
> > > +rte_mbuf_dynfield_copy(struct rte_mbuf *m_dst, const struct rte_mbuf *m_src)
> > > +{
> > > + m_dst->dynfield1 = m_src->dynfield1;
> > > + m_dst->dynfield2 = m_src->dynfield2;
> > > +}
> > > +
> > > /**
> > > * Attach packet mbuf to another packet mbuf.
> > > *
> > > @@ -1732,6 +1752,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
> > > mi->vlan_tci_outer = m->vlan_tci_outer;
> > > mi->tx_offload = m->tx_offload;
> > > mi->hash = m->hash;
> > > + rte_mbuf_dynfield_copy(mi, m);
> > >
> > > mi->next = NULL;
> > > mi->pkt_len = mi->data_len;
> > > diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
> > > new file mode 100644
> > > index 000000000..13b8742d0
> > > --- /dev/null
> > > +++ b/lib/librte_mbuf/rte_mbuf_dyn.c
> > > @@ -0,0 +1,408 @@
> > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > + * Copyright 2019 6WIND S.A.
> > > + */
> > > +
> > > +#include <sys/queue.h>
> > > +
> > > +#include <rte_common.h>
> > > +#include <rte_eal.h>
> > > +#include <rte_eal_memconfig.h>
> > > +#include <rte_tailq.h>
> > > +#include <rte_errno.h>
> > > +#include <rte_malloc.h>
> > > +#include <rte_string_fns.h>
> > > +#include <rte_mbuf.h>
> > > +#include <rte_mbuf_dyn.h>
> > > +
> > > +#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
> > > +
> > > +struct mbuf_dynfield_elt {
> > > + TAILQ_ENTRY(mbuf_dynfield_elt) next;
> > > + struct rte_mbuf_dynfield params;
> > > + int offset;
> >
> > Why not 'size_t offset', to avoid any explicit conversions, etc?
>
> Fixed
>
>
> > > +};
> > > +TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
> > > +
> > > +static struct rte_tailq_elem mbuf_dynfield_tailq = {
> > > + .name = "RTE_MBUF_DYNFIELD",
> > > +};
> > > +EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
> > > +
> > > +struct mbuf_dynflag_elt {
> > > + TAILQ_ENTRY(mbuf_dynflag_elt) next;
> > > + struct rte_mbuf_dynflag params;
> > > + int bitnum;
> > > +};
> > > +TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
> > > +
> > > +static struct rte_tailq_elem mbuf_dynflag_tailq = {
> > > + .name = "RTE_MBUF_DYNFLAG",
> > > +};
> > > +EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
> > > +
> > > +struct mbuf_dyn_shm {
> > > + /** For each mbuf byte, free_space[i] == 1 if space is free. */
> > > + uint8_t free_space[sizeof(struct rte_mbuf)];
> > > + /** Bitfield of available flags. */
> > > + uint64_t free_flags;
> > > +};
> > > +static struct mbuf_dyn_shm *shm;
> > > +
> > > +/* allocate and initialize the shared memory */
> > > +static int
> > > +init_shared_mem(void)
> > > +{
> > > + const struct rte_memzone *mz;
> > > + uint64_t mask;
> > > +
> > > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > > + mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
> > > + sizeof(struct mbuf_dyn_shm),
> > > + SOCKET_ID_ANY, 0,
> > > + RTE_CACHE_LINE_SIZE);
> > > + } else {
> > > + mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
> > > + }
> > > + if (mz == NULL)
> > > + return -1;
> > > +
> > > + shm = mz->addr;
> > > +
> > > +#define mark_free(field) \
> > > + memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
> > > + 0xff, sizeof(((struct rte_mbuf *)0)->field))
> >
> > I think you can avoid defining/undefining macros here by something like that:
> >
> > static const struct {
> > size_t offset;
> > size_t size;
> > } dyn_syms[] = {
> > [0] = {.offset = offsetof(struct rte_mbuf, dynfield1), .size = sizeof(((struct rte_mbuf *)0)->dynfield1)},
> > [1] = {.offset = offsetof(struct rte_mbuf, dynfield2), .size = sizeof(((struct rte_mbuf *)0)->dynfield2)},
> > };
> > ...
> >
> > for (i = 0; i != RTE_DIM(dyn_syms); i++)
> > memset(shm->free_space + dyn_syms[i].offset, UINT8_MAX, dyn_syms[i].size);
> >
>
> I tried it, but the following lines are too long
> [0] = {offsetof(struct rte_mbuf, dynfield1), sizeof(((struct rte_mbuf *)0)->dynfield1)},
> [1] = {offsetof(struct rte_mbuf, dynfield2), sizeof(((struct rte_mbuf *)0)->dynfield2)},
> To make them shorter, we can use a macro... but... wait :)
Guess what, you can put offset and size on different lines :)
[0] = {
.offset = offsetof(struct rte_mbuf, dynfield1),
.size = sizeof(((struct rte_mbuf *)0)->dynfield1),
},
....
>
> > > +
> > > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > > + /* init free_space, keep it sync'd with
> > > + * rte_mbuf_dynfield_copy().
> > > + */
> > > + memset(shm, 0, sizeof(*shm));
> > > + mark_free(dynfield1);
> > > + mark_free(dynfield2);
> > > +
> > > + /* init free_flags */
> > > + for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
> > > + shm->free_flags |= mask;
> > > + }
> > > +#undef mark_free
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +/* check if this offset can be used */
> > > +static int
> > > +check_offset(size_t offset, size_t size, size_t align, unsigned int flags)
> > > +{
> > > + size_t i;
> > > +
> > > + (void)flags;
> >
> >
> > We have RTE_SET_USED() for such cases...
> > Though as it is an internal function probably better not to introduce
> > unused parameters at all.
>
> I removed the flag parameter as you suggested.
>
>
> > > +
> > > + if ((offset & (align - 1)) != 0)
> > > + return -1;
> > > + if (offset + size > sizeof(struct rte_mbuf))
> > > + return -1;
> > > +
> > > + for (i = 0; i < size; i++) {
> > > + if (!shm->free_space[i + offset])
> > > + return -1;
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +/* assume tailq is locked */
> > > +static struct mbuf_dynfield_elt *
> > > +__mbuf_dynfield_lookup(const char *name)
> > > +{
> > > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > > + struct rte_tailq_entry *te;
> > > +
> > > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > > +
> > > + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> > > + mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
> > > + if (strcmp(name, mbuf_dynfield->params.name) == 0)
> > > + break;
> > > + }
> > > +
> > > + if (te == NULL) {
> > > + rte_errno = ENOENT;
> > > + return NULL;
> > > + }
> > > +
> > > + return mbuf_dynfield;
> > > +}
> > > +
> > > +int
> > > +rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
> > > +{
> > > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > > +
> > > + if (shm == NULL) {
> > > + rte_errno = ENOENT;
> > > + return -1;
> > > + }
> > > +
> > > + rte_mcfg_tailq_read_lock();
> > > + mbuf_dynfield = __mbuf_dynfield_lookup(name);
> > > + rte_mcfg_tailq_read_unlock();
> > > +
> > > + if (mbuf_dynfield == NULL) {
> > > + rte_errno = ENOENT;
> > > + return -1;
> > > + }
> > > +
> > > + if (params != NULL)
> > > + memcpy(params, &mbuf_dynfield->params, sizeof(*params));
> > > +
> > > + return mbuf_dynfield->offset;
> > > +}
> > > +
> > > +static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
> > > + const struct rte_mbuf_dynfield *params2)
> > > +{
> > > + if (strcmp(params1->name, params2->name))
> > > + return -1;
> > > + if (params1->size != params2->size)
> > > + return -1;
> > > + if (params1->align != params2->align)
> > > + return -1;
> > > + if (params1->flags != params2->flags)
> > > + return -1;
> > > + return 0;
> > > +}
> > > +
> > > +int
> > > +rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
> >
> > What I meant at user-space - if we can also have another function that would allow
> > user to specify required offset for dynfield explicitly, then user can define it as constant
> > value and let compiler do optimization work and hopefully generate faster code to access
> > this field.
> > Something like that:
> >
> > int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params, size_t offset);
> >
> > #define RTE_MBUF_DYNFIELD_OFFSET(fld, off) (offsetof(struct rte_mbuf, fld) + (off))
> >
> > And then somewhere in user code:
> >
> > /* to let say reserve first 4B in dynfield1*/
> > #define MBUF_DYNFIELD_A RTE_MBUF_DYNFIELD_OFFSET(dynfield1, 0)
> > ...
> > params.name = RTE_STR(MBUF_DYNFIELD_A);
> > params.size = sizeof(uint32_t);
> > params.align = sizeof(uint32_t);
> > ret = rte_mbuf_dynfield_register_offset(&params, MBUF_DYNFIELD_A);
> > if (ret != MBUF_DYNFIELD_A) {
> > /* handle it somehow, probably just terminate gracefully... */
> > }
> > ...
> >
> > /* to let say reserve last 2B in dynfield2*/
> > #define MBUF_DYNFIELD_B RTE_MBUF_DYNFIELD_OFFSET(dynfield2, 6)
> > ...
> > params.name = RTE_STR(MBUF_DYNFIELD_B);
> > params.size = sizeof(uint16_t);
> > params.align = sizeof(uint16_t);
> > ret = rte_mbuf_dynfield_register_offset(&params, MBUF_DYNFIELD_B);
> >
> > After that user can use constant offsets MBUF_DYNFIELD_A/ MBUF_DYNFIELD_B
> > to access these fields.
> > Same thoughts for DYNFLAG.
>
> I added the feature in v2.
>
>
> > > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > > + struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
> > > + struct rte_tailq_entry *te = NULL;
> > > + int offset, ret;
> >
> > size_t offset
> > to avoid explicit conversions, etc.?
> >
>
> Fixed.
>
>
> > > + size_t i;
> > > +
> > > + if (shm == NULL && init_shared_mem() < 0)
> > > + goto fail;
> >
> > As I understand, here you allocate/initialize your shm without any lock protection,
> > though later you protect it via rte_mcfg_tailq_write_lock().
> > That seems a bit flakey to me.
> > Why not to store information about free dynfield bytes inside mbuf_dynfield_tailq?
> > Let say at init() create and add an entry into that list with some reserved name.
> > Then at register - grab mcfg_tailq_write_lock and do lookup
> > for such entry and then read/update it as needed.
> > It would help to avoid racing problem, plus you wouldn't need to
> > allocate/lookup for memzone.
>
> I don't quite like the idea of having a special entry with a different type
> in an element list. Although it is simpler from a locking perspective, it is
> less obvious for the developer.
>
> Also, I changed the way a zone is reserved to return the one that has the
> least impact on the next reservation, and I feel it is easier to implement with
> the shared memory.
>
> So, I just moved the init_shared_mem() inside the rte_mcfg_tailq_write_lock(),
> it should do the job.
Yep, that should work too, I think.
>
>
> > > + if (params->size >= sizeof(struct rte_mbuf)) {
> > > + rte_errno = EINVAL;
> > > + goto fail;
> > > + }
> > > + if (!rte_is_power_of_2(params->align)) {
> > > + rte_errno = EINVAL;
> > > + goto fail;
> > > + }
> > > + if (params->flags != 0) {
> > > + rte_errno = EINVAL;
> > > + goto fail;
> > > + }
> > > +
> > > + rte_mcfg_tailq_write_lock();
> > > +
> >
> > I think it probably would be cleaner and easier to read/maintain, if you'll put actual
> > code under lock protection into a separate function - as you did for __mbuf_dynfield_lookup().
>
> Yes, I did that, it should be clearer now.
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 6/9] distributor: remove deprecated code
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 6/9] distributor: " Anatoly Burakov
@ 2019-10-17 10:53 0% ` Hunt, David
0 siblings, 0 replies; 200+ results
From: Hunt, David @ 2019-10-17 10:53 UTC (permalink / raw)
To: Anatoly Burakov, dev
Cc: Marcin Baran, john.mcnamara, bruce.richardson, thomas, david.marchand
On 16/10/2019 18:03, Anatoly Burakov wrote:
> From: Marcin Baran <marcinx.baran@intel.com>
>
> Remove code for old ABI versions ahead of ABI version bump.
>
> Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>
> Notes:
> v3:
> - Removed single mode from distributor as per Dave's comments
Hi Anatoly,
Having looked at this code more closely, I see that this now breaks the API
when a distributor instance is created with RTE_DIST_ALG_SINGLE.
I think now that the better solution would be to just re-name the _v20
to _single for structs, functions, etc, as you did in the previous patch
version. That means that the unit and perf tests should still work
unchanged, and maintain the API.
Rgds,
Dave.
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global
2019-10-17 8:44 9% ` Bruce Richardson
@ 2019-10-17 10:25 4% ` Burakov, Anatoly
2019-10-17 14:09 8% ` Luca Boccassi
1 sibling, 0 replies; 200+ results
From: Burakov, Anatoly @ 2019-10-17 10:25 UTC (permalink / raw)
To: Bruce Richardson
Cc: dev, Marcin Baran, Thomas Monjalon, john.mcnamara,
david.marchand, Pawel Modrak, bluca, ktraynor
On 17-Oct-19 9:44 AM, Bruce Richardson wrote:
> On Wed, Oct 16, 2019 at 06:03:36PM +0100, Anatoly Burakov wrote:
>> From: Marcin Baran <marcinx.baran@intel.com>
>>
>> As per new ABI policy, all of the libraries are now versioned using
>> one global ABI version. Changes in this patch implement the
>> necessary steps to enable that.
>>
>> Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
>> Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
>> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
>> ---
>>
>> Notes:
>> v3:
>> - Removed Windows support from Makefile changes
>> - Removed unneeded path conversions from meson files
>>
>> buildtools/meson.build | 2 ++
>> config/ABI_VERSION | 1 +
>> config/meson.build | 5 +++--
>> drivers/meson.build | 20 ++++++++++++--------
>> lib/meson.build | 18 +++++++++++-------
>> meson_options.txt | 2 --
>> mk/rte.lib.mk | 13 ++++---------
>> 7 files changed, 33 insertions(+), 28 deletions(-)
>> create mode 100644 config/ABI_VERSION
>>
>> diff --git a/buildtools/meson.build b/buildtools/meson.build
>> index 32c79c1308..78ce69977d 100644
>> --- a/buildtools/meson.build
>> +++ b/buildtools/meson.build
>> @@ -12,3 +12,5 @@ if python3.found()
>> else
>> map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
>> endif
>> +
>> +is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
>> diff --git a/config/ABI_VERSION b/config/ABI_VERSION
>> new file mode 100644
>> index 0000000000..9a7c1e503f
>> --- /dev/null
>> +++ b/config/ABI_VERSION
>> @@ -0,0 +1 @@
>> +20.0
>> diff --git a/config/meson.build b/config/meson.build
>> index a27f731f85..3cfc02406c 100644
>> --- a/config/meson.build
>> +++ b/config/meson.build
>> @@ -17,7 +17,8 @@ endforeach
>> # set the major version, which might be used by drivers and libraries
>> # depending on the configuration options
>> pver = meson.project_version().split('.')
>> -major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
>> +abi_version = run_command(find_program('cat', 'more'),
>> + files('ABI_VERSION')).stdout().strip()
>>
>> # extract all version information into the build configuration
>> dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
>> @@ -37,7 +38,7 @@ endif
>>
>> pmd_subdir_opt = get_option('drivers_install_subdir')
>> if pmd_subdir_opt.contains('<VERSION>')
>> - pmd_subdir_opt = major_version.join(pmd_subdir_opt.split('<VERSION>'))
>> + pmd_subdir_opt = abi_version.join(pmd_subdir_opt.split('<VERSION>'))
>> endif
>
> This is an interesting change, and I'm not sure about it. I think for
> user-visible changes, version should still refer to DPDK version rather
> than ABI version. Even with a stable ABI, it makes more sense to me to find
> the drivers in a 19.11 directory than a 20.0 one. Then again, the drivers
> should be re-usable across the one ABI version, so perhaps this is the best
> approach.
>
> Thoughts from others? Luca or Kevin, any thoughts from a packagers
> perspective?
>
> /Bruce
I can certainly change it back. This wasn't intentional - I just did a
search-and-replace without thinking too much about it :)
--
Thanks,
Anatoly
^ permalink raw reply [relevance 4%]
* [dpdk-dev] DPDK Release Status Meeting 17/10/2019
@ 2019-10-17 9:59 3% Ferruh Yigit
0 siblings, 0 replies; 200+ results
From: Ferruh Yigit @ 2019-10-17 9:59 UTC (permalink / raw)
To: dpdk-dev; +Cc: Thomas Monjalon
Minutes 17 October 2019
-----------------------
Agenda:
* Release Dates
* Subtrees
* OvS
* Conferences
* Opens
Participants:
* Arm
* Debian/Microsoft
* Intel
* Marvell
* NXP
* Red Hat
Release Dates
-------------
* v19.11 dates:
* Integration/Merge/RC1 Wednesday 23 October
* For sub-trees Monday 21 October
* Release Friday 22 November
* Proposed dates for 20.02 release on the mail list, please comment
https://mails.dpdk.org/archives/dev/2019-September/143311.html
These dates may be affected by the 19.11 delays, please review again.
Subtrees
--------
* main
* David merged some patches and working on KNI/eal patches
* Planning review ABI process patches for rc1
* next-net
* ~80 patches in backlog, trying to make rc1 dates
* Planning to get ethdev API and rte_flow patches (7-8 patchset) for rc1,
testpmd and driver patches may be pushed to rc2
* Two new PMDs already merged
* next-net-crypto
* Planning to get 'octeontx2' PMD
* More review required on three patchset
* ipsec-secgw, add fallback session
https://patches.dpdk.org/project/dpdk/list/?series=6833
* ipsec-secgw, set default to IPsec library mode
https://patches.dpdk.org/patch/60349/, 60350, 60351
* security library, CPU Crypto (asked for tech board review)
https://patches.dpdk.org/project/dpdk/list/?series=6727&state=*
* ipsec, inbound SAD series waiting for an update
https://patches.dpdk.org/project/dpdk/list/?series=6790&state=*
* next-net-eventdev
* Some more patches are in the tree waiting for pull
* l2fwd-event app can be pushed to rc2
(updating the existing l3fwd for eventdev is pushed to the next release)
* next-net-virtio (update from David)
* Reviewing Marvin's vhost packed ring performance optimization patch
* Maxim's (own) Virtio vDPA set is at risk, may be pushed to next release
* next-net-intel
* Some patches already in the tree waiting for pull
* ice PMD patches for RSS and FDIR patches under review
* ipn3ke PMD patches has some issues, it has risk for rc1,
can be considered for rc2
* LTS
* v18.11.3-rc2 under test
* More test from more companies is welcome
* Target release date is next week
OvS
---
* For TSO support, DPDK patch has been merged
Conferences
-----------
* DPDK Summit North America, Mountain View CA, November 12-13
* CFP results announced
https://www.dpdk.org/event/dpdk-summit-na-mountain-view/
Opens
-----
* Coverity run last week, there are outstanding issues:
https://scan.coverity.com/projects/dpdk-data-plane-development-kit?tab=overview
* Reminder of other static analysis tool for dpdk, lgtm:
https://lgtm.com/projects/g/DPDK/dpdk/?mode=list
DPDK Release Status Meetings
============================
The DPDK Release Status Meeting is intended for DPDK Committers to discuss
the status of the master tree and sub-trees, and for project managers to
track progress or milestone dates.
The meeting occurs on Thursdays at 8:30 UTC. If you wish to attend just
send an email to "John McNamara <john.mcnamara@intel.com>" for the invite.
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v3 0/9] Implement the new ABI policy and add helper scripts
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
@ 2019-10-17 8:50 4% ` Bruce Richardson
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
` (10 subsequent siblings)
11 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2019-10-17 8:50 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, john.mcnamara, thomas, david.marchand
On Wed, Oct 16, 2019 at 06:03:35PM +0100, Anatoly Burakov wrote:
> This patchset prepares the codebase for the new ABI policy and
> adds a few helper scripts.
>
> There are two new scripts for managing ABI versions added. The
> first one is a Python script that will read in a .map file,
> flatten it and update the ABI version to the ABI version
> specified on the command-line.
>
> The second one is a shell script that will run the above mentioned
> Python script recursively over the source tree and set the ABI
> version to either that which is defined in config/ABI_VERSION, or
> a user-specified one.
>
> Example of its usage: buildtools/update-abi.sh 20.0
>
> This will recurse into lib/ and drivers/ directory and update
> whatever .map files it can find.
>
> The other shell script that's added is one that can take in a .so
> file and ensure that its declared public ABI matches either
> current ABI, next ABI, or EXPERIMENTAL. This was moved to the
> last commit because it made no sense to have it beforehand.
>
> The source tree was verified to follow the new ABI policy using
> the following command (assuming built binaries are in build/):
>
> find ./build/lib ./build/drivers -name \*.so \
> -exec ./buildtools/check-abi-version.sh {} \; -print
>
> This returns 0.
>
> Changes since v2:
> - Addressed Bruce's review comments
> - Removed single distributor mode as per Dave's suggestion
>
> Changes since v1:
> - Reordered patchset to have removal of old ABI's before introducing
> the new one to avoid compile breakages between patches
> - Added a new patch fixing missing symbol in octeontx common
> - Split script commits into multiple commits and reordered them
> - Re-generated the ABI bump commit
> - Verified all scripts to work
>
> Anatoly Burakov (2):
> buildtools: add ABI update shell script
> drivers/octeontx: add missing public symbol
>
> Marcin Baran (5):
> config: change ABI versioning to global
> timer: remove deprecated code
> lpm: remove deprecated code
> distributor: remove deprecated code
> buildtools: add ABI versioning check script
>
> Pawel Modrak (2):
> buildtools: add script for updating symbols abi version
> build: change ABI version to 20.0
>
For me, bar the one small open question on driver paths, this looks pretty
good.
Series-acked-by: Bruce Richardson <bruce.richardson@intel.com>
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global
2019-10-16 17:03 7% ` [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global Anatoly Burakov
@ 2019-10-17 8:44 9% ` Bruce Richardson
2019-10-17 10:25 4% ` Burakov, Anatoly
2019-10-17 14:09 8% ` Luca Boccassi
0 siblings, 2 replies; 200+ results
From: Bruce Richardson @ 2019-10-17 8:44 UTC (permalink / raw)
To: Anatoly Burakov
Cc: dev, Marcin Baran, Thomas Monjalon, john.mcnamara,
david.marchand, Pawel Modrak, bluca, ktraynor
On Wed, Oct 16, 2019 at 06:03:36PM +0100, Anatoly Burakov wrote:
> From: Marcin Baran <marcinx.baran@intel.com>
>
> As per new ABI policy, all of the libraries are now versioned using
> one global ABI version. Changes in this patch implement the
> necessary steps to enable that.
>
> Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>
> Notes:
> v3:
> - Removed Windows support from Makefile changes
> - Removed unneeded path conversions from meson files
>
> buildtools/meson.build | 2 ++
> config/ABI_VERSION | 1 +
> config/meson.build | 5 +++--
> drivers/meson.build | 20 ++++++++++++--------
> lib/meson.build | 18 +++++++++++-------
> meson_options.txt | 2 --
> mk/rte.lib.mk | 13 ++++---------
> 7 files changed, 33 insertions(+), 28 deletions(-)
> create mode 100644 config/ABI_VERSION
>
> diff --git a/buildtools/meson.build b/buildtools/meson.build
> index 32c79c1308..78ce69977d 100644
> --- a/buildtools/meson.build
> +++ b/buildtools/meson.build
> @@ -12,3 +12,5 @@ if python3.found()
> else
> map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
> endif
> +
> +is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
> diff --git a/config/ABI_VERSION b/config/ABI_VERSION
> new file mode 100644
> index 0000000000..9a7c1e503f
> --- /dev/null
> +++ b/config/ABI_VERSION
> @@ -0,0 +1 @@
> +20.0
> diff --git a/config/meson.build b/config/meson.build
> index a27f731f85..3cfc02406c 100644
> --- a/config/meson.build
> +++ b/config/meson.build
> @@ -17,7 +17,8 @@ endforeach
> # set the major version, which might be used by drivers and libraries
> # depending on the configuration options
> pver = meson.project_version().split('.')
> -major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
> +abi_version = run_command(find_program('cat', 'more'),
> + files('ABI_VERSION')).stdout().strip()
>
> # extract all version information into the build configuration
> dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
> @@ -37,7 +38,7 @@ endif
>
> pmd_subdir_opt = get_option('drivers_install_subdir')
> if pmd_subdir_opt.contains('<VERSION>')
> - pmd_subdir_opt = major_version.join(pmd_subdir_opt.split('<VERSION>'))
> + pmd_subdir_opt = abi_version.join(pmd_subdir_opt.split('<VERSION>'))
> endif
This is an interesting change, and I'm not sure about it. I think for
user-visible changes, version should still refer to DPDK version rather
than ABI version. Even with a stable ABI, it makes more sense to me to find
the drivers in a 19.11 directory than a 20.0 one. Then again, the drivers
should be re-usable across the one ABI version, so perhaps this is the best
approach.
Thoughts from others? Luca or Kevin, any thoughts from a packagers
perspective?
/Bruce
^ permalink raw reply [relevance 9%]
* Re: [dpdk-dev] [PATCH] mbuf: support dynamic fields and flags
@ 2019-10-17 7:54 0% ` Olivier Matz
2019-10-17 11:58 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Olivier Matz @ 2019-10-17 7:54 UTC (permalink / raw)
To: Ananyev, Konstantin
Cc: dev, Thomas Monjalon, Wang, Haiyue, Stephen Hemminger,
Andrew Rybchenko, Wiles, Keith, Jerin Jacob Kollanukkaran
Hi Konstantin,
Thanks for the feedback. Please see my answers below.
On Tue, Oct 01, 2019 at 10:49:39AM +0000, Ananyev, Konstantin wrote:
> Hi Olivier,
>
> > Many features require to store data inside the mbuf. As the room in mbuf
> > structure is limited, it is not possible to have a field for each
> > feature. Also, changing fields in the mbuf structure can break the API
> > or ABI.
> >
> > This commit addresses these issues, by enabling the dynamic registration
> > of fields or flags:
> >
> > - a dynamic field is a named area in the rte_mbuf structure, with a
> > given size (>= 1 byte) and alignment constraint.
> > - a dynamic flag is a named bit in the rte_mbuf structure.
> >
> > The typical use case is a PMD that registers space for an offload
> > feature, when the application requests to enable this feature. As
> > the space in mbuf is limited, the space should only be reserved if it
> > is going to be used (i.e when the application explicitly asks for it).
> >
> > The registration can be done at any moment, but it is not possible
> > to unregister fields or flags for now.
>
> Looks ok to me in general.
> Some comments/suggestions inline.
> Konstantin
>
> >
> > Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
> > Acked-by: Thomas Monjalon <thomas@monjalon.net>
> > ---
> >
> > rfc -> v1
> >
> > * Rebase on top of master
> > * Change registration API to use a structure instead of
> > variables, getting rid of #defines (Stephen's comment)
> > * Update flag registration to use a similar API as fields.
> > * Change max name length from 32 to 64 (sugg. by Thomas)
> > * Enhance API documentation (Haiyue's and Andrew's comments)
> > * Add a debug log at registration
> > * Add some words in release note
> > * Did some performance tests (sugg. by Andrew):
> > On my platform, reading a dynamic field takes ~3 cycles more
> > than a static field, and ~2 cycles more for writing.
> >
> > app/test/test_mbuf.c | 114 ++++++-
> > doc/guides/rel_notes/release_19_11.rst | 7 +
> > lib/librte_mbuf/Makefile | 2 +
> > lib/librte_mbuf/meson.build | 6 +-
> > lib/librte_mbuf/rte_mbuf.h | 25 +-
> > lib/librte_mbuf/rte_mbuf_dyn.c | 408 +++++++++++++++++++++++++
> > lib/librte_mbuf/rte_mbuf_dyn.h | 163 ++++++++++
> > lib/librte_mbuf/rte_mbuf_version.map | 4 +
> > 8 files changed, 724 insertions(+), 5 deletions(-)
> > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.c
> > create mode 100644 lib/librte_mbuf/rte_mbuf_dyn.h
> >
> > --- a/lib/librte_mbuf/rte_mbuf.h
> > +++ b/lib/librte_mbuf/rte_mbuf.h
> > @@ -198,9 +198,12 @@ extern "C" {
> > #define PKT_RX_OUTER_L4_CKSUM_GOOD (1ULL << 22)
> > #define PKT_RX_OUTER_L4_CKSUM_INVALID ((1ULL << 21) | (1ULL << 22))
> >
> > -/* add new RX flags here */
> > +/* add new RX flags here, don't forget to update PKT_FIRST_FREE */
> >
> > -/* add new TX flags here */
> > +#define PKT_FIRST_FREE (1ULL << 23)
> > +#define PKT_LAST_FREE (1ULL << 39)
> > +
> > +/* add new TX flags here, don't forget to update PKT_LAST_FREE */
> >
> > /**
> > * Indicate that the metadata field in the mbuf is in use.
> > @@ -738,6 +741,8 @@ struct rte_mbuf {
> > */
> > struct rte_mbuf_ext_shared_info *shinfo;
> >
> > + uint64_t dynfield1; /**< Reserved for dynamic fields. */
> > + uint64_t dynfield2; /**< Reserved for dynamic fields. */
>
> Wonder why just not one field:
> union {
> uint8_t u8[16];
> ...
> uint64_t u64[2];
> } dyn_field1;
> ?
> Probably would be a bit handy, to refer, register, etc. no?
I didn't find any place where we need an access through u8, so I
just changed it into uint64_t dynfield1[2].
>
> > } __rte_cache_aligned;
> >
> > /**
> > @@ -1684,6 +1689,21 @@ rte_pktmbuf_attach_extbuf(struct rte_mbuf *m, void *buf_addr,
> > */
> > #define rte_pktmbuf_detach_extbuf(m) rte_pktmbuf_detach(m)
> >
> > +/**
> > + * Copy dynamic fields from m_src to m_dst.
> > + *
> > + * @param m_dst
> > + * The destination mbuf.
> > + * @param m_src
> > + * The source mbuf.
> > + */
> > +static inline void
> > +rte_mbuf_dynfield_copy(struct rte_mbuf *m_dst, const struct rte_mbuf *m_src)
> > +{
> > + m_dst->dynfield1 = m_src->dynfield1;
> > + m_dst->dynfield2 = m_src->dynfield2;
> > +}
> > +
> > /**
> > * Attach packet mbuf to another packet mbuf.
> > *
> > @@ -1732,6 +1752,7 @@ static inline void rte_pktmbuf_attach(struct rte_mbuf *mi, struct rte_mbuf *m)
> > mi->vlan_tci_outer = m->vlan_tci_outer;
> > mi->tx_offload = m->tx_offload;
> > mi->hash = m->hash;
> > + rte_mbuf_dynfield_copy(mi, m);
> >
> > mi->next = NULL;
> > mi->pkt_len = mi->data_len;
> > diff --git a/lib/librte_mbuf/rte_mbuf_dyn.c b/lib/librte_mbuf/rte_mbuf_dyn.c
> > new file mode 100644
> > index 000000000..13b8742d0
> > --- /dev/null
> > +++ b/lib/librte_mbuf/rte_mbuf_dyn.c
> > @@ -0,0 +1,408 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2019 6WIND S.A.
> > + */
> > +
> > +#include <sys/queue.h>
> > +
> > +#include <rte_common.h>
> > +#include <rte_eal.h>
> > +#include <rte_eal_memconfig.h>
> > +#include <rte_tailq.h>
> > +#include <rte_errno.h>
> > +#include <rte_malloc.h>
> > +#include <rte_string_fns.h>
> > +#include <rte_mbuf.h>
> > +#include <rte_mbuf_dyn.h>
> > +
> > +#define RTE_MBUF_DYN_MZNAME "rte_mbuf_dyn"
> > +
> > +struct mbuf_dynfield_elt {
> > + TAILQ_ENTRY(mbuf_dynfield_elt) next;
> > + struct rte_mbuf_dynfield params;
> > + int offset;
>
> Why not 'size_t offset', to avoid any explicit conversions, etc?
Fixed
> > +};
> > +TAILQ_HEAD(mbuf_dynfield_list, rte_tailq_entry);
> > +
> > +static struct rte_tailq_elem mbuf_dynfield_tailq = {
> > + .name = "RTE_MBUF_DYNFIELD",
> > +};
> > +EAL_REGISTER_TAILQ(mbuf_dynfield_tailq);
> > +
> > +struct mbuf_dynflag_elt {
> > + TAILQ_ENTRY(mbuf_dynflag_elt) next;
> > + struct rte_mbuf_dynflag params;
> > + int bitnum;
> > +};
> > +TAILQ_HEAD(mbuf_dynflag_list, rte_tailq_entry);
> > +
> > +static struct rte_tailq_elem mbuf_dynflag_tailq = {
> > + .name = "RTE_MBUF_DYNFLAG",
> > +};
> > +EAL_REGISTER_TAILQ(mbuf_dynflag_tailq);
> > +
> > +struct mbuf_dyn_shm {
> > + /** For each mbuf byte, free_space[i] == 1 if space is free. */
> > + uint8_t free_space[sizeof(struct rte_mbuf)];
> > + /** Bitfield of available flags. */
> > + uint64_t free_flags;
> > +};
> > +static struct mbuf_dyn_shm *shm;
> > +
> > +/* allocate and initialize the shared memory */
> > +static int
> > +init_shared_mem(void)
> > +{
> > + const struct rte_memzone *mz;
> > + uint64_t mask;
> > +
> > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > + mz = rte_memzone_reserve_aligned(RTE_MBUF_DYN_MZNAME,
> > + sizeof(struct mbuf_dyn_shm),
> > + SOCKET_ID_ANY, 0,
> > + RTE_CACHE_LINE_SIZE);
> > + } else {
> > + mz = rte_memzone_lookup(RTE_MBUF_DYN_MZNAME);
> > + }
> > + if (mz == NULL)
> > + return -1;
> > +
> > + shm = mz->addr;
> > +
> > +#define mark_free(field) \
> > + memset(&shm->free_space[offsetof(struct rte_mbuf, field)], \
> > + 0xff, sizeof(((struct rte_mbuf *)0)->field))
>
> I think you can avoid defining/undefining macros here by something like this:
>
> static const struct {
> size_t offset;
> size_t size;
> } dyn_syms[] = {
> [0] = {.offset = offsetof(struct rte_mbuf, dynfield1), .size = sizeof(((struct rte_mbuf *)0)->dynfield1)},
> [1] = {.offset = offsetof(struct rte_mbuf, dynfield2), .size = sizeof(((struct rte_mbuf *)0)->dynfield2)},
> };
> ...
>
> for (i = 0; i != RTE_DIM(dyn_syms); i++)
> memset(shm->free_space + dyn_syms[i].offset, UINT8_MAX, dyn_syms[i].size);
>
I tried it, but the following lines are too long
[0] = {offsetof(struct rte_mbuf, dynfield1), sizeof(((struct rte_mbuf *)0)->dynfield1)},
[1] = {offsetof(struct rte_mbuf, dynfield2), sizeof(((struct rte_mbuf *)0)->dynfield2)},
To make them shorter, we can use a macro... but... wait :)
> > +
> > + if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
> > + /* init free_space, keep it sync'd with
> > + * rte_mbuf_dynfield_copy().
> > + */
> > + memset(shm, 0, sizeof(*shm));
> > + mark_free(dynfield1);
> > + mark_free(dynfield2);
> > +
> > + /* init free_flags */
> > + for (mask = PKT_FIRST_FREE; mask <= PKT_LAST_FREE; mask <<= 1)
> > + shm->free_flags |= mask;
> > + }
> > +#undef mark_free
> > +
> > + return 0;
> > +}
> > +
> > +/* check if this offset can be used */
> > +static int
> > +check_offset(size_t offset, size_t size, size_t align, unsigned int flags)
> > +{
> > + size_t i;
> > +
> > + (void)flags;
>
>
> We have RTE_SET_USED() for such cases...
> Though as it is an internal function probably better not to introduce
> unused parameters at all.
I removed the flag parameter as you suggested.
> > +
> > + if ((offset & (align - 1)) != 0)
> > + return -1;
> > + if (offset + size > sizeof(struct rte_mbuf))
> > + return -1;
> > +
> > + for (i = 0; i < size; i++) {
> > + if (!shm->free_space[i + offset])
> > + return -1;
> > + }
> > +
> > + return 0;
> > +}
> > +
> > +/* assume tailq is locked */
> > +static struct mbuf_dynfield_elt *
> > +__mbuf_dynfield_lookup(const char *name)
> > +{
> > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > + struct rte_tailq_entry *te;
> > +
> > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > +
> > + TAILQ_FOREACH(te, mbuf_dynfield_list, next) {
> > + mbuf_dynfield = (struct mbuf_dynfield_elt *)te->data;
> > + if (strcmp(name, mbuf_dynfield->params.name) == 0)
> > + break;
> > + }
> > +
> > + if (te == NULL) {
> > + rte_errno = ENOENT;
> > + return NULL;
> > + }
> > +
> > + return mbuf_dynfield;
> > +}
> > +
> > +int
> > +rte_mbuf_dynfield_lookup(const char *name, struct rte_mbuf_dynfield *params)
> > +{
> > + struct mbuf_dynfield_elt *mbuf_dynfield;
> > +
> > + if (shm == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_read_lock();
> > + mbuf_dynfield = __mbuf_dynfield_lookup(name);
> > + rte_mcfg_tailq_read_unlock();
> > +
> > + if (mbuf_dynfield == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + if (params != NULL)
> > + memcpy(params, &mbuf_dynfield->params, sizeof(*params));
> > +
> > + return mbuf_dynfield->offset;
> > +}
> > +
> > +static int mbuf_dynfield_cmp(const struct rte_mbuf_dynfield *params1,
> > + const struct rte_mbuf_dynfield *params2)
> > +{
> > + if (strcmp(params1->name, params2->name))
> > + return -1;
> > + if (params1->size != params2->size)
> > + return -1;
> > + if (params1->align != params2->align)
> > + return -1;
> > + if (params1->flags != params2->flags)
> > + return -1;
> > + return 0;
> > +}
> > +
> > +int
> > +rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params)
>
> What I meant at user-space - if we can also have another function that would allow
> user to specify required offset for dynfield explicitly, then user can define it as constant
> value and let compiler do optimization work and hopefully generate faster code to access
> this field.
> Something like that:
>
> int rte_mbuf_dynfield_register_offset(const struct rte_mbuf_dynfield *params, size_t offset);
>
> #define RTE_MBUF_DYNFIELD_OFFSET(fld, off) (offsetof(struct rte_mbuf, fld) + (off))
>
> And then somewhere in user code:
>
> /* say, to reserve the first 4B in dynfield1 */
> #define MBUF_DYNFIELD_A RTE_MBUF_DYNFIELD_OFFSET(dynfield1, 0)
> ...
> params.name = RTE_STR(MBUF_DYNFIELD_A);
> params.size = sizeof(uint32_t);
> params.align = sizeof(uint32_t);
> ret = rte_mbuf_dynfield_register_offset(&params, MBUF_DYNFIELD_A);
> if (ret != MBUF_DYNFIELD_A) {
> /* handle it somehow, probably just terminate gracefully... */
> }
> ...
>
> /* say, to reserve the last 2B in dynfield2 */
> #define MBUF_DYNFIELD_B RTE_MBUF_DYNFIELD_OFFSET(dynfield2, 6)
> ...
> params.name = RTE_STR(MBUF_DYNFIELD_B);
> params.size = sizeof(uint16_t);
> params.align = sizeof(uint16_t);
> ret = rte_mbuf_dynfield_register_offset(&params, MBUF_DYNFIELD_B);
>
> After that user can use constant offsets MBUF_DYNFIELD_A/ MBUF_DYNFIELD_B
> to access these fields.
> Same thoughts for DYNFLAG.
I added the feature in v2.
> > + struct mbuf_dynfield_list *mbuf_dynfield_list;
> > + struct mbuf_dynfield_elt *mbuf_dynfield = NULL;
> > + struct rte_tailq_entry *te = NULL;
> > + int offset, ret;
>
> size_t offset
> to avoid explicit conversions, etc.?
>
Fixed.
> > + size_t i;
> > +
> > + if (shm == NULL && init_shared_mem() < 0)
> > + goto fail;
>
> As I understand, here you allocate/initialize your shm without any lock protection,
> though later you protect it via rte_mcfg_tailq_write_lock().
> That seems a bit flakey to me.
> Why not to store information about free dynfield bytes inside mbuf_dynfield_tailq?
> Let say at init() create and add an entry into that list with some reserved name.
> Then at register - grab mcfg_tailq_write_lock and do lookup
> for such entry and then read/update it as needed.
> It would help to avoid racing problem, plus you wouldn't need to
> allocate/lookup for memzone.
I don't quite like the idea of having a special entry with a different type
in an element list. Although it is simpler from a locking perspective, it is
less obvious for the developer.
Also, I changed the way a zone is reserved: it now returns the one that has
the least impact on the next reservation, and I feel this is easier to
implement with the shared memory.
So, I just moved init_shared_mem() inside the rte_mcfg_tailq_write_lock(),
it should do the job.
> > + if (params->size >= sizeof(struct rte_mbuf)) {
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > + if (!rte_is_power_of_2(params->align)) {
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > + if (params->flags != 0) {
> > + rte_errno = EINVAL;
> > + goto fail;
> > + }
> > +
> > + rte_mcfg_tailq_write_lock();
> > +
>
> I think it probably would be cleaner and easier to read/maintain, if you'll put actual
> code under lock protection into a separate function - as you did for __mbuf_dynfield_lookup().
Yes, I did that, it should be clearer now.
> > + mbuf_dynfield = __mbuf_dynfield_lookup(params->name);
> > + if (mbuf_dynfield != NULL) {
> > + if (mbuf_dynfield_cmp(params, &mbuf_dynfield->params) < 0) {
> > + rte_errno = EEXIST;
> > + goto fail_unlock;
> > + }
> > + offset = mbuf_dynfield->offset;
> > + goto out_unlock;
> > + }
> > +
> > + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> > + rte_errno = EPERM;
> > + goto fail_unlock;
> > + }
> > +
> > + for (offset = 0;
> > + offset < (int)sizeof(struct rte_mbuf);
> > + offset++) {
> > + if (check_offset(offset, params->size, params->align,
> > + params->flags) == 0)
> > + break;
> > + }
> > +
> > + if (offset == sizeof(struct rte_mbuf)) {
> > + rte_errno = ENOENT;
> > + goto fail_unlock;
> > + }
> > +
> > + mbuf_dynfield_list = RTE_TAILQ_CAST(
> > + mbuf_dynfield_tailq.head, mbuf_dynfield_list);
> > +
> > + te = rte_zmalloc("MBUF_DYNFIELD_TAILQ_ENTRY", sizeof(*te), 0);
> > + if (te == NULL)
> > + goto fail_unlock;
> > +
> > + mbuf_dynfield = rte_zmalloc("mbuf_dynfield", sizeof(*mbuf_dynfield), 0);
> > + if (mbuf_dynfield == NULL)
> > + goto fail_unlock;
> > +
> > + ret = strlcpy(mbuf_dynfield->params.name, params->name,
> > + sizeof(mbuf_dynfield->params.name));
> > + if (ret < 0 || ret >= (int)sizeof(mbuf_dynfield->params.name)) {
> > + rte_errno = ENAMETOOLONG;
> > + goto fail_unlock;
> > + }
> > + memcpy(&mbuf_dynfield->params, params, sizeof(mbuf_dynfield->params));
> > + mbuf_dynfield->offset = offset;
> > + te->data = mbuf_dynfield;
> > +
> > + TAILQ_INSERT_TAIL(mbuf_dynfield_list, te, next);
> > +
> > + for (i = offset; i < offset + params->size; i++)
> > + shm->free_space[i] = 0;
> > +
> > + RTE_LOG(DEBUG, MBUF, "Registered dynamic field %s (sz=%zu, al=%zu, fl=0x%x) -> %d\n",
> > + params->name, params->size, params->align, params->flags,
> > + offset);
> > +
> > +out_unlock:
> > + rte_mcfg_tailq_write_unlock();
> > +
> > + return offset;
> > +
> > +fail_unlock:
> > + rte_mcfg_tailq_write_unlock();
> > +fail:
> > + rte_free(mbuf_dynfield);
> > + rte_free(te);
> > + return -1;
> > +}
> > +
> > +/* assume tailq is locked */
> > +static struct mbuf_dynflag_elt *
> > +__mbuf_dynflag_lookup(const char *name)
> > +{
> > + struct mbuf_dynflag_list *mbuf_dynflag_list;
> > + struct mbuf_dynflag_elt *mbuf_dynflag;
> > + struct rte_tailq_entry *te;
> > +
> > + mbuf_dynflag_list = RTE_TAILQ_CAST(
> > + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> > +
> > + TAILQ_FOREACH(te, mbuf_dynflag_list, next) {
> > + mbuf_dynflag = (struct mbuf_dynflag_elt *)te->data;
> > + if (strncmp(name, mbuf_dynflag->params.name,
> > + RTE_MBUF_DYN_NAMESIZE) == 0)
> > + break;
> > + }
> > +
> > + if (te == NULL) {
> > + rte_errno = ENOENT;
> > + return NULL;
> > + }
> > +
> > + return mbuf_dynflag;
> > +}
> > +
> > +int
> > +rte_mbuf_dynflag_lookup(const char *name,
> > + struct rte_mbuf_dynflag *params)
> > +{
> > + struct mbuf_dynflag_elt *mbuf_dynflag;
> > +
> > + if (shm == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + rte_mcfg_tailq_read_lock();
> > + mbuf_dynflag = __mbuf_dynflag_lookup(name);
> > + rte_mcfg_tailq_read_unlock();
> > +
> > + if (mbuf_dynflag == NULL) {
> > + rte_errno = ENOENT;
> > + return -1;
> > + }
> > +
> > + if (params != NULL)
> > + memcpy(params, &mbuf_dynflag->params, sizeof(*params));
> > +
> > + return mbuf_dynflag->bitnum;
> > +}
> > +
> > +static int mbuf_dynflag_cmp(const struct rte_mbuf_dynflag *params1,
> > + const struct rte_mbuf_dynflag *params2)
> > +{
> > + if (strcmp(params1->name, params2->name))
> > + return -1;
> > + if (params1->flags != params2->flags)
> > + return -1;
> > + return 0;
> > +}
> > +
> > +int
> > +rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params)
> > +{
> > + struct mbuf_dynflag_list *mbuf_dynflag_list;
> > + struct mbuf_dynflag_elt *mbuf_dynflag = NULL;
> > + struct rte_tailq_entry *te = NULL;
> > + int bitnum, ret;
> > +
> > + if (shm == NULL && init_shared_mem() < 0)
> > + goto fail;
> > +
> > + rte_mcfg_tailq_write_lock();
> > +
> > + mbuf_dynflag = __mbuf_dynflag_lookup(params->name);
> > + if (mbuf_dynflag != NULL) {
> > + if (mbuf_dynflag_cmp(params, &mbuf_dynflag->params) < 0) {
> > + rte_errno = EEXIST;
> > + goto fail_unlock;
> > + }
> > + bitnum = mbuf_dynflag->bitnum;
> > + goto out_unlock;
> > + }
> > +
> > + if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
> > + rte_errno = EPERM;
> > + goto fail_unlock;
> > + }
> > +
> > + if (shm->free_flags == 0) {
> > + rte_errno = ENOENT;
> > + goto fail_unlock;
> > + }
> > + bitnum = rte_bsf64(shm->free_flags);
> > +
> > + mbuf_dynflag_list = RTE_TAILQ_CAST(
> > + mbuf_dynflag_tailq.head, mbuf_dynflag_list);
> > +
> > + te = rte_zmalloc("MBUF_DYNFLAG_TAILQ_ENTRY", sizeof(*te), 0);
> > + if (te == NULL)
> > + goto fail_unlock;
> > +
> > + mbuf_dynflag = rte_zmalloc("mbuf_dynflag", sizeof(*mbuf_dynflag), 0);
> > + if (mbuf_dynflag == NULL)
> > + goto fail_unlock;
> > +
> > + ret = strlcpy(mbuf_dynflag->params.name, params->name,
> > + sizeof(mbuf_dynflag->params.name));
> > + if (ret < 0 || ret >= (int)sizeof(mbuf_dynflag->params.name)) {
> > + rte_errno = ENAMETOOLONG;
> > + goto fail_unlock;
> > + }
> > + mbuf_dynflag->bitnum = bitnum;
> > + te->data = mbuf_dynflag;
> > +
> > + TAILQ_INSERT_TAIL(mbuf_dynflag_list, te, next);
> > +
> > + shm->free_flags &= ~(1ULL << bitnum);
> > +
> > + RTE_LOG(DEBUG, MBUF, "Registered dynamic flag %s (fl=0x%x) -> %u\n",
> > + params->name, params->flags, bitnum);
> > +
> > +out_unlock:
> > + rte_mcfg_tailq_write_unlock();
> > +
> > + return bitnum;
> > +
> > +fail_unlock:
> > + rte_mcfg_tailq_write_unlock();
> > +fail:
> > + rte_free(mbuf_dynflag);
> > + rte_free(te);
> > + return -1;
> > +}
> > diff --git a/lib/librte_mbuf/rte_mbuf_dyn.h b/lib/librte_mbuf/rte_mbuf_dyn.h
> > new file mode 100644
> > index 000000000..6e2c81654
> > --- /dev/null
> > +++ b/lib/librte_mbuf/rte_mbuf_dyn.h
> > @@ -0,0 +1,163 @@
> > +/* SPDX-License-Identifier: BSD-3-Clause
> > + * Copyright 2019 6WIND S.A.
> > + */
> > +
> > +#ifndef _RTE_MBUF_DYN_H_
> > +#define _RTE_MBUF_DYN_H_
> > +
> > +/**
> > + * @file
> > + * RTE Mbuf dynamic fields and flags
> > + *
> > + * Many features require storing data inside the mbuf. As the room in
> > + * mbuf structure is limited, it is not possible to have a field for
> > + * each feature. Also, changing fields in the mbuf structure can break
> > + * the API or ABI.
> > + *
> > + * This module addresses this issue, by enabling the dynamic
> > + * registration of fields or flags:
> > + *
> > + * - a dynamic field is a named area in the rte_mbuf structure, with a
> > + * given size (>= 1 byte) and alignment constraint.
> > + * - a dynamic flag is a named bit in the rte_mbuf structure, stored
> > + * in mbuf->ol_flags.
> > + *
> > + * The typical use case is when a specific offload feature requires
> > + * registering a dedicated offload field in the mbuf structure, and adding
> > + * a static field or flag is not justified.
> > + *
> > + * Example of use:
> > + *
> > + * - A rte_mbuf_dynfield structure is defined, containing the parameters
> > + * of the dynamic field to be registered:
> > + * const struct rte_mbuf_dynfield rte_dynfield_my_feature = { ... };
> > + * - The application initializes the PMD, and asks for this feature
> > + * at port initialization by passing DEV_RX_OFFLOAD_MY_FEATURE in
> > + * rxconf. This will make the PMD register the field by calling
> > + * rte_mbuf_dynfield_register(&rte_dynfield_my_feature). The PMD
> > + * stores the returned offset.
> > + * - The application that uses the offload feature also registers
> > + * the field to retrieve the same offset.
> > + * - When the PMD receives a packet, it can set the field:
> > + * *RTE_MBUF_DYNFIELD(m, offset, <type *>) = value;
> > + * - In the main loop, the application can retrieve the value with
> > + * the same macro.
> > + *
> > + * To avoid wasting space, the dynamic fields or flags must only be
> > + * reserved on demand, when an application asks for the related feature.
> > + *
> > + * The registration can be done at any moment, but it is not possible
> > + * to unregister fields or flags for now.
> > + *
> > + * A dynamic field can also be reserved and used by the application
> > + * alone: it can for instance be a packet mark.
> > + */
> > +
> > +#include <sys/types.h>
> > +/**
> > + * Maximum length of the dynamic field or flag string.
> > + */
> > +#define RTE_MBUF_DYN_NAMESIZE 64
> > +
> > +/**
> > + * Structure describing the parameters of a mbuf dynamic field.
> > + */
> > +struct rte_mbuf_dynfield {
> > + char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the field. */
> > + size_t size; /**< The number of bytes to reserve. */
> > + size_t align; /**< The alignment constraint (power of 2). */
> > + unsigned int flags; /**< Reserved for future use, must be 0. */
> > +};
> > +
> > +/**
> > + * Structure describing the parameters of a mbuf dynamic flag.
> > + */
> > +struct rte_mbuf_dynflag {
> > + char name[RTE_MBUF_DYN_NAMESIZE]; /**< Name of the dynamic flag. */
> > + unsigned int flags; /**< Reserved for future use, must be 0. */
> > +};
> > +
> > +/**
> > + * Register space for a dynamic field in the mbuf structure.
> > + *
> > + * If the field is already registered (same name and parameters), its
> > + * offset is returned.
> > + *
> > + * @param params
> > + * A structure containing the requested parameters (name, size,
> > + * alignment constraint and flags).
> > + * @return
> > + * The offset in the mbuf structure, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - EINVAL: invalid parameters (size, align, or flags).
> > + * - EEXIST: this name is already registered with different parameters.
> > + * - EPERM: called from a secondary process.
> > + * - ENOENT: not enough room in mbuf.
> > + * - ENOMEM: allocation failure.
> > + * - ENAMETOOLONG: name does not end with \0.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynfield_register(const struct rte_mbuf_dynfield *params);
> > +
> > +/**
> > + * Lookup for a registered dynamic mbuf field.
> > + *
> > + * @param name
> > + * A string identifying the dynamic field.
> > + * @param params
> > + * If not NULL, and if the lookup is successful, the structure is
> > + * filled with the parameters of the dynamic field.
> > + * @return
> > + * The offset of this field in the mbuf structure, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - ENOENT: no dynamic field matches this name.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynfield_lookup(const char *name,
> > + struct rte_mbuf_dynfield *params);
> > +
> > +/**
> > + * Register a dynamic flag in the mbuf structure.
> > + *
> > + * If the flag is already registered (same name and parameters), its
> > + * offset is returned.
> > + *
> > + * @param params
> > + * A structure containing the requested parameters of the dynamic
> > + * flag (name and options).
> > + * @return
> > + * The number of the reserved bit, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - EINVAL: invalid parameters (size, align, or flags).
> > + * - EEXIST: this name is already registered with different parameters.
> > + * - EPERM: called from a secondary process.
> > + * - ENOENT: no more flag available.
> > + * - ENOMEM: allocation failure.
> > + * - ENAMETOOLONG: name is longer than RTE_MBUF_DYN_NAMESIZE - 1.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynflag_register(const struct rte_mbuf_dynflag *params);
> > +
> > +/**
> > + * Lookup for a registered dynamic mbuf flag.
> > + *
> > + * @param name
> > + * A string identifying the dynamic flag.
> > + * @param params
> > + * If not NULL, and if the lookup is successful, the structure is
> > + * filled with the parameters of the dynamic flag.
> > + * @return
> > + * The bit number of this flag in mbuf->ol_flags, or -1 on error.
> > + * Possible values for rte_errno:
> > + * - ENOENT: no dynamic flag matches this name.
> > + */
> > +__rte_experimental
> > +int rte_mbuf_dynflag_lookup(const char *name,
> > + struct rte_mbuf_dynflag *params);
> > +
> > +/**
> > + * Helper macro to access to a dynamic field.
> > + */
> > +#define RTE_MBUF_DYNFIELD(m, offset, type) ((type)((uintptr_t)(m) + (offset)))
> > +
> > +#endif
> > diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
> > index 2662a37bf..a98310570 100644
> > --- a/lib/librte_mbuf/rte_mbuf_version.map
> > +++ b/lib/librte_mbuf/rte_mbuf_version.map
> > @@ -50,4 +50,8 @@ EXPERIMENTAL {
> > global:
> >
> > rte_mbuf_check;
> > + rte_mbuf_dynfield_lookup;
> > + rte_mbuf_dynfield_register;
> > + rte_mbuf_dynflag_lookup;
> > + rte_mbuf_dynflag_register;
> > } DPDK_18.08;
> > --
> > 2.20.1
>
I will send a v2 shortly, thanks
Olivier
* Re: [dpdk-dev] [PATCH v6 00/13] vhost packed ring performance optimization
2019-10-17 7:31 0% ` Maxime Coquelin
@ 2019-10-17 7:32 0% ` Liu, Yong
0 siblings, 0 replies; 200+ results
From: Liu, Yong @ 2019-10-17 7:32 UTC (permalink / raw)
To: Maxime Coquelin, Bie, Tiwei, Wang, Zhihong, stephen, gavin.hu; +Cc: dev
> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Thursday, October 17, 2019 3:31 PM
> To: Liu, Yong <yong.liu@intel.com>; Bie, Tiwei <tiwei.bie@intel.com>; Wang,
> Zhihong <zhihong.wang@intel.com>; stephen@networkplumber.org;
> gavin.hu@arm.com
> Cc: dev@dpdk.org
> Subject: Re: [PATCH v6 00/13] vhost packed ring performance optimization
>
> Hi Marvin,
>
> This is almost good, just fix the small comments I made.
>
> Also, please rebase on top of next-virtio branch, because I applied
> below patch from Flavio that you need to take into account:
>
> http://patches.dpdk.org/patch/61284/
Thanks, Maxime. I will start rebasing work.
>
> Regards,
> Maxime
>
> On 10/15/19 6:07 PM, Marvin Liu wrote:
> > Packed ring has a more compact ring format and thus can significantly
> > reduce the number of cache misses. It can lead to better performance.
> > This has been proved in the virtio user driver: on a normal E5 Xeon CPU,
> > single-core performance can rise by 12%.
> >
> > http://mails.dpdk.org/archives/dev/2018-April/095470.html
> >
> > However, vhost performance with packed ring was decreased.
> > Through analysis, most of the extra cost came from calculating each
> > descriptor flag, which depends on the ring wrap counter. Moreover, both
> > frontend and backend need to write same descriptors which will cause
> > cache contention. Especially when doing vhost enqueue function, virtio
> > refill packed ring function may write same cache line when vhost doing
> > enqueue function. This kind of extra cache cost will reduce the benefit
> > of reducing cache misses.
> >
> > For optimizing vhost packed ring performance, vhost enqueue and dequeue
> > functions will be split into fast and normal paths.
> >
> > Several methods will be taken in fast path:
> > Handle descriptors in one cache line by batch.
> > Split loop function into more pieces and unroll them.
> > Prerequisite check that whether I/O space can copy directly into mbuf
> > space and vice versa.
> > Prerequisite check that whether descriptor mapping is successful.
> > Distinguish vhost used ring update function by enqueue and dequeue
> > function.
> > Buffer dequeue used descriptors as many as possible.
> > Update enqueue used descriptors by cache line.
> >
> > After all these methods done, single core vhost PvP performance with 64B
> > packet on Xeon 8180 can boost 35%.
> >
> > v6:
> > - Fix dequeue zcopy result check
> >
> > v5:
> > - Remove disable sw prefetch as performance impact is small
> > - Change unroll pragma macro format
> > - Rename shadow counter elements names
> > - Clean dequeue update check condition
> > - Add inline functions replace of duplicated code
> > - Unify code style
> >
> > v4:
> > - Support meson build
> > - Remove memory region cache for no clear performance gain and ABI break
> > - Not assume ring size is power of two
> >
> > v3:
> > - Check available index overflow
> > - Remove dequeue remained descs number check
> > - Remove changes in split ring datapath
> > - Call memory write barriers once when updating used flags
> > - Rename some functions and macros
> > - Code style optimization
> >
> > v2:
> > - Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
> > - Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
> > - Optimize dequeue used ring update when in_order negotiated
> >
> >
> > Marvin Liu (13):
> > vhost: add packed ring indexes increasing function
> > vhost: add packed ring single enqueue
> > vhost: try to unroll for each loop
> > vhost: add packed ring batch enqueue
> > vhost: add packed ring single dequeue
> > vhost: add packed ring batch dequeue
> > vhost: flush enqueue updates by batch
> > vhost: flush batched enqueue descs directly
> > vhost: buffer packed ring dequeue updates
> > vhost: optimize packed ring enqueue
> > vhost: add packed ring zcopy batch and single dequeue
> > vhost: optimize packed ring dequeue
> > vhost: optimize packed ring dequeue when in-order
> >
> > lib/librte_vhost/Makefile | 18 +
> > lib/librte_vhost/meson.build | 7 +
> > lib/librte_vhost/vhost.h | 57 +++
> > lib/librte_vhost/virtio_net.c | 924 +++++++++++++++++++++++++++-------
> > 4 files changed, 812 insertions(+), 194 deletions(-)
> >
* Re: [dpdk-dev] [PATCH v6 00/13] vhost packed ring performance optimization
2019-10-15 16:07 3% ` [dpdk-dev] [PATCH v6 " Marvin Liu
@ 2019-10-17 7:31 0% ` Maxime Coquelin
2019-10-17 7:32 0% ` Liu, Yong
2019-10-21 15:40 3% ` [dpdk-dev] [PATCH v7 " Marvin Liu
1 sibling, 1 reply; 200+ results
From: Maxime Coquelin @ 2019-10-17 7:31 UTC (permalink / raw)
To: Marvin Liu, tiwei.bie, zhihong.wang, stephen, gavin.hu; +Cc: dev
Hi Marvin,
This is almost good, just fix the small comments I made.
Also, please rebase on top of next-virtio branch, because I applied
below patch from Flavio that you need to take into account:
http://patches.dpdk.org/patch/61284/
Regards,
Maxime
On 10/15/19 6:07 PM, Marvin Liu wrote:
> Packed ring has a more compact ring format and thus can significantly
> reduce the number of cache misses. It can lead to better performance.
> This has been proved in the virtio user driver: on a normal E5 Xeon CPU,
> single-core performance can rise by 12%.
>
> http://mails.dpdk.org/archives/dev/2018-April/095470.html
>
> However, vhost performance with packed ring was decreased.
> Through analysis, most of the extra cost came from calculating each
> descriptor flag, which depends on the ring wrap counter. Moreover, both
> frontend and backend need to write same descriptors which will cause
> cache contention. Especially when doing vhost enqueue function, virtio
> refill packed ring function may write same cache line when vhost doing
> enqueue function. This kind of extra cache cost will reduce the benefit
> of reducing cache misses.
>
> For optimizing vhost packed ring performance, vhost enqueue and dequeue
> functions will be split into fast and normal paths.
>
> Several methods will be taken in fast path:
> Handle descriptors in one cache line by batch.
> Split loop function into more pieces and unroll them.
> Prerequisite check that whether I/O space can copy directly into mbuf
> space and vice versa.
> Prerequisite check that whether descriptor mapping is successful.
> Distinguish vhost used ring update function by enqueue and dequeue
> function.
> Buffer dequeue used descriptors as many as possible.
> Update enqueue used descriptors by cache line.
>
> After all these methods done, single core vhost PvP performance with 64B
> packet on Xeon 8180 can boost 35%.
>
> v6:
> - Fix dequeue zcopy result check
>
> v5:
> - Remove disable sw prefetch as performance impact is small
> - Change unroll pragma macro format
> - Rename shadow counter elements names
> - Clean dequeue update check condition
> - Add inline functions replace of duplicated code
> - Unify code style
>
> v4:
> - Support meson build
> - Remove memory region cache for no clear performance gain and ABI break
> - Not assume ring size is power of two
>
> v3:
> - Check available index overflow
> - Remove dequeue remained descs number check
> - Remove changes in split ring datapath
> - Call memory write barriers once when updating used flags
> - Rename some functions and macros
> - Code style optimization
>
> v2:
> - Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
> - Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
> - Optimize dequeue used ring update when in_order negotiated
>
>
> Marvin Liu (13):
> vhost: add packed ring indexes increasing function
> vhost: add packed ring single enqueue
> vhost: try to unroll for each loop
> vhost: add packed ring batch enqueue
> vhost: add packed ring single dequeue
> vhost: add packed ring batch dequeue
> vhost: flush enqueue updates by batch
> vhost: flush batched enqueue descs directly
> vhost: buffer packed ring dequeue updates
> vhost: optimize packed ring enqueue
> vhost: add packed ring zcopy batch and single dequeue
> vhost: optimize packed ring dequeue
> vhost: optimize packed ring dequeue when in-order
>
> lib/librte_vhost/Makefile | 18 +
> lib/librte_vhost/meson.build | 7 +
> lib/librte_vhost/vhost.h | 57 +++
> lib/librte_vhost/virtio_net.c | 924 +++++++++++++++++++++++++++-------
> 4 files changed, 812 insertions(+), 194 deletions(-)
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-13 23:07 0% ` Zhang, Roy Fan
@ 2019-10-16 22:07 3% ` Ananyev, Konstantin
2019-10-17 12:49 0% ` Ananyev, Konstantin
2019-10-18 13:17 4% ` Akhil Goyal
1 sibling, 2 replies; 200+ results
From: Ananyev, Konstantin @ 2019-10-16 22:07 UTC (permalink / raw)
To: Akhil Goyal, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Zhang, Roy Fan, Doherty, Declan
Cc: 'Anoob Joseph'
Hi Akhil,
> > > User can use the same session, that is what I am also insisting, but it may have
> > separate
> > > Session private data. Cryptodev session create API provide that functionality
> > and we can
> > > Leverage that.
> >
> > rte_cryptodev_sym_session. sess_data[] is indexed by driver_id, which means
> > we can't use
> > the same rte_cryptodev_sym_session to hold sessions for both sync and async
> > mode
> > for the same device. Off course we can add a hard requirement that any driver
> > that wants to
> > support process() has to create sessions that can handle both process and
> > enqueue/dequeue,
> > but then again what for to create such overhead?
> >
> > BTW, to be honest, I don't consider current rte_cryptodev_sym_session
> > construct for multiple device_ids:
> > __extension__ struct {
> > void *data;
> > uint16_t refcnt;
> > } sess_data[0];
> > /**< Driver specific session material, variable size */
> >
> Yes I also feel the same. I was also not in favor of this when it was introduced.
> Please go ahead and remove this. I have no issues with that.
If you are not happy with that structure, and admit there are issues with it,
why do you push for reusing it for cpu-crypto API?
Why not to take step back, take into account current drawbacks
and define something that (hopefully) would suite us better?
Again, the new API will be experimental for some time, so we'll
have some opportunity to see whether it works and, if not, fix it.
About removing data[] from the existing rte_cryptodev_sym_session -
personally, I would like to do that, but the change seems too massive.
Definitely not ready for such an effort right now.
>
> > as an advantage.
> > It looks too error prone for me:
> > 1. Simultaneous session initialization/de-initialization for devices with the same
> > driver_id is not possible.
> > 2. It assumes that all device driver will be loaded before we start to create
> > session pools.
> >
> > Right now it seems ok, as no-one requires such functionality, but I don't know
> > how it will be in future.
> > For me rte_security session model, where for each security context user have to
> > create new session
> > looks much more robust.
> Agreed
>
> >
> > >
> > > BTW, I can see a v2 to this RFC which is still based on security library.
> >
> > Yes, v2 was concentrated on fixing found issues, some code restructuring,
> > i.e. - changes that would be needed anyway whatever API aproach we'll choose.
> >
> > > When do you plan
> > > To submit the patches for crypto based APIs. We have RC1 merge deadline for
> > this
> > > patchset on 21st Oct.
> >
> > We'd like to start working on it ASAP, but it seems we still have a major
> > disagreement
> > about how this crypto-dev API should look like.
> > Which makes me think - should we return to our original proposal via
> > rte_security?
> > It still looks to me like clean and straightforward way to enable this new API,
> > and probably wouldn't cause that much controversy.
> > What do you think?
>
> I cannot spend more time discussing on this until RC1 date. I have some other stuff pending.
> You can send the patches early next week with the approach that I mentioned or else we
> can discuss this post RC1(which would mean deferring to 20.02).
>
> But moving back to security is not acceptable to me. The code should be put where it is
> intended and not where it is easy to put. You are not doing any rte_security stuff.
>
Ok, then my suggestion:
Let's at least write down all the points about the crypto-dev approach where we
disagree, and then try to resolve them one by one...
If we fail to reach an agreement or make progress in the next week or so
(and there are no more reviews from the community),
we will have to bring that subject to the TB meeting to decide.
Sounds fair to you?
List is below.
Please add/correct me, if I missed something.
Konstantin
1. extra input parameters to create/init rte_(cpu)_sym_session.
Will leverage existing 6B gap inside rte_crypto_*_xform between 'algo' and 'key' fields.
New fields will be optional and would be used by PMD only when cpu-crypto session is requested.
For lksd-crypto session PMD is free to ignore these fields.
No ABI breakage is required.
Hopefully no controversy here with #1.
2. cpu-crypto create/init.
a) Our suggestion - introduce new API for that:
- rte_crypto_cpu_sym_init() that would init completely opaque rte_crypto_cpu_sym_session.
- struct rte_crypto_cpu_sym_session_ops {(*process)(...); (*clear); /*whatever else we'll need *'};
- rte_crypto_cpu_sym_get_ops(const struct rte_crypto_sym_xform *xforms)
that would return const struct rte_crypto_cpu_sym_session_ops *based on input xforms.
Advantages:
1) totally opaque data structure (no ABI breakages in future), PMD writer is totally free
with its format and contents.
2) each session entity is self-contained, user doesn't need to bring along dev_id etc.
dev_id is needed only at init stage, after that user will use session ops to perform
all operations on that session (process(), clear(), etc.).
3) User can decide does he wants to store ops[] pointer on a per session basis,
or on a per group of same sessions, or...
4) No mandatory mempools for private sessions. User can allocate memory for cpu-crypto
session whenever he likes.
Disadvantages:
5) Extra changes in control path
6) User has to store session_ops pointer explicitly.
b) Your suggestion - reuse existing rte_cryptodev_sym_session_init() and existing rte_cryptodev_sym_session
structure.
Advantages:
1) allows to reuse same struct and init/create/clear() functions.
Probably less changes in control path.
Disadvantages:
2) rte_cryptodev_sym_session. sess_data[] is indexed by driver_id, which means that
we can't use the same rte_cryptodev_sym_session to hold private sessions pointers
for both sync and async mode for the same device.
So the only option we have is to make PMD devops->sym_session_configure()
always create a session that can work in both cpu and lksd modes.
For some implementations that would probably mean that under the hood PMD would create
2 different session structs (sync/async) and then use one or another depending on from what API been called.
Seems doable, but ...:
- it will contradict the statement from #1:
" New fields will be optional and would be used by PMD only when cpu-crypto session is requested."
Now it becomes mandatory for all apps to specify cpu-crypto related parameters too,
even if they don't plan to use that mode - i.e. behavior change, existing app change.
- might cause extra space overhead.
3) not possible to store device (not driver) specific data within the session, but I think it is not really needed right now.
So probably minor compared to 2.b.2.
Actually #3 follows from #2, but decided to have them separated.
3. process() parameters/behavior
a) Our suggestion: user stores ptr to session ops (or to (*process) itself) and just does:
session_ops->process(sess, ...);
Advantages:
1) fastest possible execution path
2) no need to carry on dev_id for data-path
Disadvantages:
3) user has to carry on session_ops pointer explicitly
b) Your suggestion: add (*cpu_process) inside rte_cryptodev_ops and then:
rte_crypto_cpu_sym_process(uint8_t dev_id, rte_cryptodev_sym_session *sess, /*data parameters*/) {...
rte_cryptodevs[dev_id].dev_ops->cpu_process(ses, ...);
/*and then inside PMD specifc process: */
pmd_private_session = sess->sess_data[this_pmd_driver_id].data;
/* and then most likely either */
pmd_private_session->process(pmd_private_session, ...);
/* or jump based on session/input data */
Advantages:
1) don't see any...
Disadvantages:
2) User has to carry on dev_id inside data-path
3) Extra level of indirection (plus data dependency) - both for data and instructions.
Possible slowdown compared to a) (not measured).
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v3 8/9] build: change ABI version to 20.0
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
` (7 preceding siblings ...)
2019-10-16 17:03 3% ` [dpdk-dev] [PATCH v3 7/9] drivers/octeontx: add missing public symbol Anatoly Burakov
@ 2019-10-16 17:03 2% ` Anatoly Burakov
2019-10-16 17:03 23% ` [dpdk-dev] [PATCH v3 9/9] buildtools: add ABI versioning check script Anatoly Burakov
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev
Cc: Pawel Modrak, Nicolas Chautru, Hemant Agrawal, Sachin Saxena,
Rosen Xu, Stephen Hemminger, Anoob Joseph, Tomasz Duszynski,
Liron Himi, Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru,
Lee Daly, Fiona Trahe, Ashish Gupta, Sunila Sahu, Declan Doherty,
Pablo de Lara, Gagandeep Singh, Ravi Kumar, Akhil Goyal,
Michael Shamis, Nagadheeraj Rottela, Srikanth Jampala, Fan Zhang,
Jay Zhou, Nipun Gupta, Mattias Rönnblom, Pavan Nikhilesh,
Liang Ma, Peter Mccarthy, Harry van Haaren, Artem V. Andreev,
Andrew Rybchenko, Olivier Matz, Gage Eads, John W. Linville,
Xiaolong Ye, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Igor Russkikh, Pavel Belous, Allain Legacy, Matt Peters,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Chas Williams, Rahul Lakkireddy, Wenzhuo Lu, Marcin Wojtas,
Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Xiao Wang, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Wei Hu (Xavier), Min Hu (Connor),
Yisen Zhuang, Beilei Xing, Jingjing Wu, Qiming Yang,
Konstantin Ananyev, Ferruh Yigit, Shijith Thotton,
Srisivasubramanian Srinivasan, Jakub Grajciar, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Zyta Szpak,
K. Y. Srinivasan, Haiyang Zhang, Rastislav Cernay, Jan Remes,
Alejandro Lucero, Tetsuya Mukawa, Kiran Kumar K,
Bruce Richardson, Jasvinder Singh, Cristian Dumitrescu,
Keith Wiles, Maciej Czekaj, Maxime Coquelin, Tiwei Bie,
Zhihong Wang, Yong Wang, Tianfei zhang, Xiaoyun Li, Satha Rao,
Shreyansh Jain, David Hunt, Byron Marohn, Yipeng Wang,
Thomas Monjalon, Bernard Iremonger, Jiayu Hu, Sameh Gobriel,
Reshma Pattan, Vladimir Medvedkin, Honnappa Nagarahalli,
Kevin Laatz, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, david.marchand
From: Pawel Modrak <pawelx.modrak@intel.com>
Merge all versions in linker version script files to DPDK_20.0.
This commit was generated by running the following command:
:~/DPDK$ buildtools/update-abi.sh 20.0
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Regenerate the commit using the new script
.../rte_pmd_bbdev_fpga_lte_fec_version.map | 8 +-
.../null/rte_pmd_bbdev_null_version.map | 2 +-
.../rte_pmd_bbdev_turbo_sw_version.map | 2 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 115 +++----
drivers/bus/fslmc/rte_bus_fslmc_version.map | 154 ++++-----
drivers/bus/ifpga/rte_bus_ifpga_version.map | 14 +-
drivers/bus/pci/rte_bus_pci_version.map | 2 +-
drivers/bus/vdev/rte_bus_vdev_version.map | 12 +-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 12 +-
drivers/common/cpt/rte_common_cpt_version.map | 4 +-
.../common/dpaax/rte_common_dpaax_version.map | 4 +-
.../common/mvep/rte_common_mvep_version.map | 6 +-
.../octeontx/rte_common_octeontx_version.map | 6 +-
.../rte_common_octeontx2_version.map | 16 +-
.../compress/isal/rte_pmd_isal_version.map | 2 +-
.../rte_pmd_octeontx_compress_version.map | 2 +-
drivers/compress/qat/rte_pmd_qat_version.map | 2 +-
.../compress/zlib/rte_pmd_zlib_version.map | 2 +-
.../aesni_gcm/rte_pmd_aesni_gcm_version.map | 2 +-
.../aesni_mb/rte_pmd_aesni_mb_version.map | 2 +-
.../crypto/armv8/rte_pmd_armv8_version.map | 2 +-
.../caam_jr/rte_pmd_caam_jr_version.map | 3 +-
drivers/crypto/ccp/rte_pmd_ccp_version.map | 3 +-
.../dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 10 +-
.../dpaa_sec/rte_pmd_dpaa_sec_version.map | 10 +-
.../crypto/kasumi/rte_pmd_kasumi_version.map | 2 +-
.../crypto/mvsam/rte_pmd_mvsam_version.map | 2 +-
.../crypto/nitrox/rte_pmd_nitrox_version.map | 2 +-
.../null/rte_pmd_null_crypto_version.map | 2 +-
.../rte_pmd_octeontx_crypto_version.map | 3 +-
.../openssl/rte_pmd_openssl_version.map | 2 +-
.../rte_pmd_crypto_scheduler_version.map | 19 +-
.../crypto/snow3g/rte_pmd_snow3g_version.map | 2 +-
.../virtio/rte_pmd_virtio_crypto_version.map | 2 +-
drivers/crypto/zuc/rte_pmd_zuc_version.map | 2 +-
.../event/dpaa/rte_pmd_dpaa_event_version.map | 3 +-
.../dpaa2/rte_pmd_dpaa2_event_version.map | 2 +-
.../event/dsw/rte_pmd_dsw_event_version.map | 2 +-
.../rte_pmd_octeontx_event_version.map | 2 +-
.../rte_pmd_octeontx2_event_version.map | 3 +-
.../event/opdl/rte_pmd_opdl_event_version.map | 2 +-
.../rte_pmd_skeleton_event_version.map | 3 +-
drivers/event/sw/rte_pmd_sw_event_version.map | 2 +-
.../bucket/rte_mempool_bucket_version.map | 3 +-
.../mempool/dpaa/rte_mempool_dpaa_version.map | 2 +-
.../dpaa2/rte_mempool_dpaa2_version.map | 12 +-
.../octeontx/rte_mempool_octeontx_version.map | 2 +-
.../rte_mempool_octeontx2_version.map | 4 +-
.../mempool/ring/rte_mempool_ring_version.map | 3 +-
.../stack/rte_mempool_stack_version.map | 3 +-
.../af_packet/rte_pmd_af_packet_version.map | 3 +-
drivers/net/af_xdp/rte_pmd_af_xdp_version.map | 2 +-
drivers/net/ark/rte_pmd_ark_version.map | 5 +-
.../net/atlantic/rte_pmd_atlantic_version.map | 4 +-
drivers/net/avp/rte_pmd_avp_version.map | 2 +-
drivers/net/axgbe/rte_pmd_axgbe_version.map | 2 +-
drivers/net/bnx2x/rte_pmd_bnx2x_version.map | 3 +-
drivers/net/bnxt/rte_pmd_bnxt_version.map | 4 +-
drivers/net/bonding/rte_pmd_bond_version.map | 47 +--
drivers/net/cxgbe/rte_pmd_cxgbe_version.map | 3 +-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 11 +-
drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +-
drivers/net/e1000/rte_pmd_e1000_version.map | 3 +-
drivers/net/ena/rte_pmd_ena_version.map | 3 +-
drivers/net/enetc/rte_pmd_enetc_version.map | 3 +-
drivers/net/enic/rte_pmd_enic_version.map | 3 +-
.../net/failsafe/rte_pmd_failsafe_version.map | 3 +-
drivers/net/fm10k/rte_pmd_fm10k_version.map | 3 +-
drivers/net/hinic/rte_pmd_hinic_version.map | 3 +-
drivers/net/hns3/rte_pmd_hns3_version.map | 4 +-
drivers/net/i40e/rte_pmd_i40e_version.map | 65 ++--
drivers/net/iavf/rte_pmd_iavf_version.map | 3 +-
drivers/net/ice/rte_pmd_ice_version.map | 3 +-
drivers/net/ifc/rte_pmd_ifc_version.map | 3 +-
drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 3 +-
drivers/net/ixgbe/rte_pmd_ixgbe_version.map | 62 ++--
drivers/net/kni/rte_pmd_kni_version.map | 3 +-
.../net/liquidio/rte_pmd_liquidio_version.map | 3 +-
drivers/net/memif/rte_pmd_memif_version.map | 5 +-
drivers/net/mlx4/rte_pmd_mlx4_version.map | 3 +-
drivers/net/mlx5/rte_pmd_mlx5_version.map | 2 +-
drivers/net/mvneta/rte_pmd_mvneta_version.map | 2 +-
drivers/net/mvpp2/rte_pmd_mvpp2_version.map | 2 +-
drivers/net/netvsc/rte_pmd_netvsc_version.map | 4 +-
drivers/net/nfb/rte_pmd_nfb_version.map | 3 +-
drivers/net/nfp/rte_pmd_nfp_version.map | 2 +-
drivers/net/null/rte_pmd_null_version.map | 3 +-
.../net/octeontx/rte_pmd_octeontx_version.map | 10 +-
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +-
drivers/net/pcap/rte_pmd_pcap_version.map | 3 +-
drivers/net/qede/rte_pmd_qede_version.map | 3 +-
drivers/net/ring/rte_pmd_ring_version.map | 10 +-
drivers/net/sfc/rte_pmd_sfc_version.map | 3 +-
.../net/softnic/rte_pmd_softnic_version.map | 2 +-
.../net/szedata2/rte_pmd_szedata2_version.map | 2 +-
drivers/net/tap/rte_pmd_tap_version.map | 3 +-
.../net/thunderx/rte_pmd_thunderx_version.map | 3 +-
.../rte_pmd_vdev_netvsc_version.map | 3 +-
drivers/net/vhost/rte_pmd_vhost_version.map | 11 +-
drivers/net/virtio/rte_pmd_virtio_version.map | 3 +-
.../net/vmxnet3/rte_pmd_vmxnet3_version.map | 3 +-
.../rte_rawdev_dpaa2_cmdif_version.map | 3 +-
.../rte_rawdev_dpaa2_qdma_version.map | 4 +-
.../raw/ifpga/rte_rawdev_ifpga_version.map | 3 +-
drivers/raw/ioat/rte_rawdev_ioat_version.map | 3 +-
drivers/raw/ntb/rte_rawdev_ntb_version.map | 5 +-
.../rte_rawdev_octeontx2_dma_version.map | 3 +-
.../skeleton/rte_rawdev_skeleton_version.map | 3 +-
lib/librte_acl/rte_acl_version.map | 2 +-
lib/librte_bbdev/rte_bbdev_version.map | 4 +
.../rte_bitratestats_version.map | 2 +-
lib/librte_bpf/rte_bpf_version.map | 4 +
lib/librte_cfgfile/rte_cfgfile_version.map | 34 +-
lib/librte_cmdline/rte_cmdline_version.map | 10 +-
.../rte_compressdev_version.map | 4 +
.../rte_cryptodev_version.map | 102 ++----
.../rte_distributor_version.map | 2 +-
lib/librte_eal/rte_eal_version.map | 310 +++++++-----------
lib/librte_efd/rte_efd_version.map | 2 +-
lib/librte_ethdev/rte_ethdev_version.map | 160 +++------
lib/librte_eventdev/rte_eventdev_version.map | 130 +++-----
.../rte_flow_classify_version.map | 4 +
lib/librte_gro/rte_gro_version.map | 2 +-
lib/librte_gso/rte_gso_version.map | 2 +-
lib/librte_hash/rte_hash_version.map | 43 +--
lib/librte_ip_frag/rte_ip_frag_version.map | 10 +-
lib/librte_ipsec/rte_ipsec_version.map | 4 +
lib/librte_jobstats/rte_jobstats_version.map | 10 +-
lib/librte_kni/rte_kni_version.map | 2 +-
lib/librte_kvargs/rte_kvargs_version.map | 4 +-
.../rte_latencystats_version.map | 2 +-
lib/librte_lpm/rte_lpm_version.map | 39 +--
lib/librte_mbuf/rte_mbuf_version.map | 41 +--
lib/librte_member/rte_member_version.map | 2 +-
lib/librte_mempool/rte_mempool_version.map | 44 +--
lib/librte_meter/rte_meter_version.map | 13 +-
lib/librte_metrics/rte_metrics_version.map | 2 +-
lib/librte_net/rte_net_version.map | 23 +-
lib/librte_pci/rte_pci_version.map | 2 +-
lib/librte_pdump/rte_pdump_version.map | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 36 +-
lib/librte_port/rte_port_version.map | 64 +---
lib/librte_power/rte_power_version.map | 24 +-
lib/librte_rawdev/rte_rawdev_version.map | 4 +-
lib/librte_rcu/rte_rcu_version.map | 4 +
lib/librte_reorder/rte_reorder_version.map | 8 +-
lib/librte_ring/rte_ring_version.map | 10 +-
lib/librte_sched/rte_sched_version.map | 14 +-
lib/librte_security/rte_security_version.map | 2 +-
lib/librte_stack/rte_stack_version.map | 4 +
lib/librte_table/rte_table_version.map | 2 +-
.../rte_telemetry_version.map | 4 +
lib/librte_timer/rte_timer_version.map | 12 +-
lib/librte_vhost/rte_vhost_version.map | 52 +--
154 files changed, 721 insertions(+), 1399 deletions(-)
diff --git a/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map b/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
index f64b0f9c27..6bcea2cc7f 100644
--- a/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
+++ b/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
@@ -1,10 +1,10 @@
-DPDK_19.08 {
- local: *;
+DPDK_20.0 {
+ local: *;
};
EXPERIMENTAL {
- global:
+ global:
- fpga_lte_fec_configure;
+ fpga_lte_fec_configure;
};
diff --git a/drivers/baseband/null/rte_pmd_bbdev_null_version.map b/drivers/baseband/null/rte_pmd_bbdev_null_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/baseband/null/rte_pmd_bbdev_null_version.map
+++ b/drivers/baseband/null/rte_pmd_bbdev_null_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map b/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
+++ b/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index a221522c23..9ab8c76eef 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
bman_acquire;
@@ -8,127 +8,94 @@ DPDK_17.11 {
bman_new_pool;
bman_query_free_buffers;
bman_release;
+ bman_thread_irq;
+ dpaa_logtype_eventdev;
dpaa_logtype_mempool;
dpaa_logtype_pmd;
dpaa_netcfg;
+ dpaa_svr_family;
fman_ccsr_map_fd;
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
fman_if_clear_mac_addr;
fman_if_disable_rx;
- fman_if_enable_rx;
fman_if_discard_rx_errors;
- fman_if_get_fc_threshold;
+ fman_if_enable_rx;
fman_if_get_fc_quanta;
+ fman_if_get_fc_threshold;
fman_if_get_fdoff;
+ fman_if_get_sg_enable;
fman_if_loopback_disable;
fman_if_loopback_enable;
fman_if_promiscuous_disable;
fman_if_promiscuous_enable;
fman_if_reset_mcast_filter_table;
fman_if_set_bp;
- fman_if_set_fc_threshold;
fman_if_set_fc_quanta;
+ fman_if_set_fc_threshold;
fman_if_set_fdoff;
fman_if_set_ic_params;
fman_if_set_maxfrm;
fman_if_set_mcast_filter_table;
+ fman_if_set_sg;
fman_if_stats_get;
fman_if_stats_get_all;
fman_if_stats_reset;
fman_ip_rev;
+ fsl_qman_fq_portal_create;
netcfg_acquire;
netcfg_release;
of_find_compatible_node;
+ of_get_mac_address;
of_get_property;
+ per_lcore_dpaa_io;
+ per_lcore_held_bufs;
qm_channel_caam;
+ qm_channel_pool1;
+ qman_alloc_cgrid_range;
+ qman_alloc_pool_range;
+ qman_clear_irq;
+ qman_create_cgr;
qman_create_fq;
+ qman_dca_index;
+ qman_delete_cgr;
qman_dequeue;
qman_dqrr_consume;
qman_enqueue;
qman_enqueue_multi;
+ qman_enqueue_multi_fq;
qman_fq_fqid;
+ qman_fq_portal_irqsource_add;
+ qman_fq_portal_irqsource_remove;
+ qman_fq_portal_thread_irq;
qman_fq_state;
qman_global_init;
qman_init_fq;
- qman_poll_dqrr;
- qman_query_fq_np;
- qman_set_vdq;
- qman_reserve_fqid_range;
- qman_volatile_dequeue;
- rte_dpaa_driver_register;
- rte_dpaa_driver_unregister;
- rte_dpaa_mem_ptov;
- rte_dpaa_portal_init;
-
- local: *;
-};
-
-DPDK_18.02 {
- global:
-
- dpaa_logtype_eventdev;
- dpaa_svr_family;
- per_lcore_dpaa_io;
- per_lcore_held_bufs;
- qm_channel_pool1;
- qman_alloc_cgrid_range;
- qman_alloc_pool_range;
- qman_create_cgr;
- qman_dca_index;
- qman_delete_cgr;
- qman_enqueue_multi_fq;
+ qman_irqsource_add;
+ qman_irqsource_remove;
qman_modify_cgr;
qman_oos_fq;
+ qman_poll_dqrr;
qman_portal_dequeue;
qman_portal_poll_rx;
qman_query_fq_frm_cnt;
+ qman_query_fq_np;
qman_release_cgrid_range;
+ qman_reserve_fqid_range;
qman_retire_fq;
+ qman_set_fq_lookup_table;
+ qman_set_vdq;
qman_static_dequeue_add;
- rte_dpaa_portal_fq_close;
- rte_dpaa_portal_fq_init;
-
-} DPDK_17.11;
-
-DPDK_18.08 {
- global:
-
- fman_if_get_sg_enable;
- fman_if_set_sg;
- of_get_mac_address;
-
-} DPDK_18.02;
-
-DPDK_18.11 {
- global:
-
- bman_thread_irq;
- fman_if_get_sg_enable;
- fman_if_set_sg;
- qman_clear_irq;
-
- qman_irqsource_add;
- qman_irqsource_remove;
qman_thread_fd;
qman_thread_irq;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- qman_set_fq_lookup_table;
-
-} DPDK_18.11;
-
-DPDK_19.11 {
- global:
-
- fsl_qman_fq_portal_create;
- qman_fq_portal_irqsource_add;
- qman_fq_portal_irqsource_remove;
- qman_fq_portal_thread_irq;
-
-} DPDK_19.05;
+ qman_volatile_dequeue;
+ rte_dpaa_driver_register;
+ rte_dpaa_driver_unregister;
+ rte_dpaa_mem_ptov;
+ rte_dpaa_portal_fq_close;
+ rte_dpaa_portal_fq_init;
+ rte_dpaa_portal_init;
+
+ local: *;
+};
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 4da787236b..fe45575046 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,32 +1,67 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
+ dpaa2_affine_qbman_ethrx_swp;
dpaa2_affine_qbman_swp;
dpaa2_alloc_dpbp_dev;
dpaa2_alloc_dq_storage;
+ dpaa2_dpbp_supported;
+ dpaa2_dqrr_size;
+ dpaa2_eqcr_size;
dpaa2_free_dpbp_dev;
dpaa2_free_dq_storage;
+ dpaa2_free_eq_descriptors;
+ dpaa2_get_qbman_swp;
+ dpaa2_io_portal;
+ dpaa2_svr_family;
+ dpaa2_virt_mode;
dpbp_disable;
dpbp_enable;
dpbp_get_attributes;
dpbp_get_num_free_bufs;
dpbp_open;
dpbp_reset;
+ dpci_get_opr;
+ dpci_set_opr;
+ dpci_set_rx_queue;
+ dpcon_get_attributes;
+ dpcon_open;
+ dpdmai_close;
+ dpdmai_disable;
+ dpdmai_enable;
+ dpdmai_get_attributes;
+ dpdmai_get_rx_queue;
+ dpdmai_get_tx_queue;
+ dpdmai_open;
+ dpdmai_set_rx_queue;
+ dpio_add_static_dequeue_channel;
dpio_close;
dpio_disable;
dpio_enable;
dpio_get_attributes;
dpio_open;
+ dpio_remove_static_dequeue_channel;
dpio_reset;
dpio_set_stashing_destination;
+ mc_get_soc_version;
+ mc_get_version;
mc_send_command;
per_lcore__dpaa2_io;
+ per_lcore_dpaa2_held_bufs;
qbman_check_command_complete;
+ qbman_check_new_result;
qbman_eq_desc_clear;
+ qbman_eq_desc_set_dca;
qbman_eq_desc_set_fq;
qbman_eq_desc_set_no_orp;
+ qbman_eq_desc_set_orp;
qbman_eq_desc_set_qd;
qbman_eq_desc_set_response;
+ qbman_eq_desc_set_token;
+ qbman_fq_query_state;
+ qbman_fq_state_frame_count;
+ qbman_get_dqrr_from_idx;
+ qbman_get_dqrr_idx;
qbman_pull_desc_clear;
qbman_pull_desc_set_fq;
qbman_pull_desc_set_numframes;
@@ -35,112 +70,43 @@ DPDK_17.05 {
qbman_release_desc_set_bpid;
qbman_result_DQ_fd;
qbman_result_DQ_flags;
- qbman_result_has_new_result;
- qbman_swp_acquire;
- qbman_swp_pull;
- qbman_swp_release;
- rte_fslmc_driver_register;
- rte_fslmc_driver_unregister;
- rte_fslmc_vfio_dmamap;
- rte_mcp_ptr_list;
-
- local: *;
-};
-
-DPDK_17.08 {
- global:
-
- dpaa2_io_portal;
- dpaa2_get_qbman_swp;
- dpci_set_rx_queue;
- dpcon_open;
- dpcon_get_attributes;
- dpio_add_static_dequeue_channel;
- dpio_remove_static_dequeue_channel;
- mc_get_soc_version;
- mc_get_version;
- qbman_check_new_result;
- qbman_eq_desc_set_dca;
- qbman_get_dqrr_from_idx;
- qbman_get_dqrr_idx;
qbman_result_DQ_fqd_ctx;
+ qbman_result_DQ_odpid;
+ qbman_result_DQ_seqnum;
qbman_result_SCN_state;
+ qbman_result_eqresp_fd;
+ qbman_result_eqresp_rc;
+ qbman_result_eqresp_rspid;
+ qbman_result_eqresp_set_rspid;
+ qbman_result_has_new_result;
+ qbman_swp_acquire;
qbman_swp_dqrr_consume;
+ qbman_swp_dqrr_idx_consume;
qbman_swp_dqrr_next;
qbman_swp_enqueue_multiple;
qbman_swp_enqueue_multiple_desc;
+ qbman_swp_enqueue_multiple_fd;
qbman_swp_interrupt_clear_status;
+ qbman_swp_prefetch_dqrr_next;
+ qbman_swp_pull;
qbman_swp_push_set;
+ qbman_swp_release;
rte_dpaa2_alloc_dpci_dev;
- rte_fslmc_object_register;
- rte_global_active_dqs_list;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- dpaa2_dpbp_supported;
rte_dpaa2_dev_type;
+ rte_dpaa2_free_dpci_dev;
rte_dpaa2_intr_disable;
rte_dpaa2_intr_enable;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- dpaa2_svr_family;
- dpaa2_virt_mode;
- per_lcore_dpaa2_held_bufs;
- qbman_fq_query_state;
- qbman_fq_state_frame_count;
- qbman_swp_dqrr_idx_consume;
- qbman_swp_prefetch_dqrr_next;
- rte_fslmc_get_device_count;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- dpaa2_affine_qbman_ethrx_swp;
- dpdmai_close;
- dpdmai_disable;
- dpdmai_enable;
- dpdmai_get_attributes;
- dpdmai_get_rx_queue;
- dpdmai_get_tx_queue;
- dpdmai_open;
- dpdmai_set_rx_queue;
- rte_dpaa2_free_dpci_dev;
rte_dpaa2_memsegs;
-
-} DPDK_18.02;
-
-DPDK_18.11 {
- global:
- dpaa2_dqrr_size;
- dpaa2_eqcr_size;
- dpci_get_opr;
- dpci_set_opr;
-
-} DPDK_18.05;
-
-DPDK_19.05 {
- global:
- dpaa2_free_eq_descriptors;
-
- qbman_eq_desc_set_orp;
- qbman_eq_desc_set_token;
- qbman_result_DQ_odpid;
- qbman_result_DQ_seqnum;
- qbman_result_eqresp_fd;
- qbman_result_eqresp_rc;
- qbman_result_eqresp_rspid;
- qbman_result_eqresp_set_rspid;
- qbman_swp_enqueue_multiple_fd;
-} DPDK_18.11;
+ rte_fslmc_driver_register;
+ rte_fslmc_driver_unregister;
+ rte_fslmc_get_device_count;
+ rte_fslmc_object_register;
+ rte_fslmc_vfio_dmamap;
+ rte_global_active_dqs_list;
+ rte_mcp_ptr_list;
+
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/drivers/bus/ifpga/rte_bus_ifpga_version.map b/drivers/bus/ifpga/rte_bus_ifpga_version.map
index 964c9a9c45..05b4a28c1b 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga_version.map
+++ b/drivers/bus/ifpga/rte_bus_ifpga_version.map
@@ -1,17 +1,11 @@
-DPDK_18.05 {
+DPDK_20.0 {
global:
- rte_ifpga_get_integer32_arg;
- rte_ifpga_get_string_arg;
rte_ifpga_driver_register;
rte_ifpga_driver_unregister;
+ rte_ifpga_find_afu_by_name;
+ rte_ifpga_get_integer32_arg;
+ rte_ifpga_get_string_arg;
local: *;
};
-
-DPDK_19.05 {
- global:
-
- rte_ifpga_find_afu_by_name;
-
-} DPDK_18.05;
diff --git a/drivers/bus/pci/rte_bus_pci_version.map b/drivers/bus/pci/rte_bus_pci_version.map
index 27e9c4f101..012d817e14 100644
--- a/drivers/bus/pci/rte_bus_pci_version.map
+++ b/drivers/bus/pci/rte_bus_pci_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_pci_dump;
diff --git a/drivers/bus/vdev/rte_bus_vdev_version.map b/drivers/bus/vdev/rte_bus_vdev_version.map
index 590cf9b437..5abb10ecb0 100644
--- a/drivers/bus/vdev/rte_bus_vdev_version.map
+++ b/drivers/bus/vdev/rte_bus_vdev_version.map
@@ -1,18 +1,12 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
+ rte_vdev_add_custom_scan;
rte_vdev_init;
rte_vdev_register;
+ rte_vdev_remove_custom_scan;
rte_vdev_uninit;
rte_vdev_unregister;
local: *;
};
-
-DPDK_18.02 {
- global:
-
- rte_vdev_add_custom_scan;
- rte_vdev_remove_custom_scan;
-
-} DPDK_17.11;
diff --git a/drivers/bus/vmbus/rte_bus_vmbus_version.map b/drivers/bus/vmbus/rte_bus_vmbus_version.map
index ae231ad329..cbaaebc06c 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus_version.map
+++ b/drivers/bus/vmbus/rte_bus_vmbus_version.map
@@ -1,6 +1,4 @@
-/* SPDX-License-Identifier: BSD-3-Clause */
-
-DPDK_18.08 {
+DPDK_20.0 {
global:
rte_vmbus_chan_close;
@@ -20,6 +18,7 @@ DPDK_18.08 {
rte_vmbus_probe;
rte_vmbus_register;
rte_vmbus_scan;
+ rte_vmbus_set_latency;
rte_vmbus_sub_channel_index;
rte_vmbus_subchan_open;
rte_vmbus_unmap_device;
@@ -27,10 +26,3 @@ DPDK_18.08 {
local: *;
};
-
-DPDK_18.11 {
- global:
-
- rte_vmbus_set_latency;
-
-} DPDK_18.08;
diff --git a/drivers/common/cpt/rte_common_cpt_version.map b/drivers/common/cpt/rte_common_cpt_version.map
index dec614f0de..79fa5751bc 100644
--- a/drivers/common/cpt/rte_common_cpt_version.map
+++ b/drivers/common/cpt/rte_common_cpt_version.map
@@ -1,6 +1,8 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
cpt_pmd_ops_helper_get_mlen_direct_mode;
cpt_pmd_ops_helper_get_mlen_sg_mode;
+
+ local: *;
};
diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
index 8131c9e305..45d62aea9d 100644
--- a/drivers/common/dpaax/rte_common_dpaax_version.map
+++ b/drivers/common/dpaax/rte_common_dpaax_version.map
@@ -1,11 +1,11 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
- dpaax_iova_table_update;
dpaax_iova_table_depopulate;
dpaax_iova_table_dump;
dpaax_iova_table_p;
dpaax_iova_table_populate;
+ dpaax_iova_table_update;
local: *;
};
diff --git a/drivers/common/mvep/rte_common_mvep_version.map b/drivers/common/mvep/rte_common_mvep_version.map
index c71722d79f..030928439d 100644
--- a/drivers/common/mvep/rte_common_mvep_version.map
+++ b/drivers/common/mvep/rte_common_mvep_version.map
@@ -1,6 +1,8 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
- rte_mvep_init;
rte_mvep_deinit;
+ rte_mvep_init;
+
+ local: *;
};
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index a9b3cff9bc..c15fb89112 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,8 +1,10 @@
-DPDK_18.05 {
+DPDK_20.0 {
global:
octeontx_logtype_mbox;
+ octeontx_mbox_send;
octeontx_mbox_set_ram_mbox_base;
octeontx_mbox_set_reg;
- octeontx_mbox_send;
+
+ local: *;
};
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 4400120da0..adad21a2d6 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -1,39 +1,35 @@
-DPDK_19.08 {
+DPDK_20.0 {
global:
otx2_dev_active_vfs;
otx2_dev_fini;
otx2_dev_priv_init;
-
+ otx2_disable_irqs;
+ otx2_intra_dev_get_cfg;
otx2_logtype_base;
otx2_logtype_dpi;
otx2_logtype_mbox;
+ otx2_logtype_nix;
otx2_logtype_npa;
otx2_logtype_npc;
- otx2_logtype_nix;
otx2_logtype_sso;
- otx2_logtype_tm;
otx2_logtype_tim;
-
+ otx2_logtype_tm;
otx2_mbox_alloc_msg_rsp;
otx2_mbox_get_rsp;
otx2_mbox_get_rsp_tmo;
otx2_mbox_id2name;
otx2_mbox_msg_send;
otx2_mbox_wait_for_rsp;
-
- otx2_intra_dev_get_cfg;
otx2_npa_lf_active;
otx2_npa_lf_obj_get;
otx2_npa_lf_obj_ref;
otx2_npa_pf_func_get;
otx2_npa_set_defaults;
+ otx2_register_irq;
otx2_sso_pf_func_get;
otx2_sso_pf_func_set;
-
- otx2_disable_irqs;
otx2_unregister_irq;
- otx2_register_irq;
local: *;
};
diff --git a/drivers/compress/isal/rte_pmd_isal_version.map b/drivers/compress/isal/rte_pmd_isal_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/compress/isal/rte_pmd_isal_version.map
+++ b/drivers/compress/isal/rte_pmd_isal_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map b/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
+++ b/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/qat/rte_pmd_qat_version.map b/drivers/compress/qat/rte_pmd_qat_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/qat/rte_pmd_qat_version.map
+++ b/drivers/compress/qat/rte_pmd_qat_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/zlib/rte_pmd_zlib_version.map b/drivers/compress/zlib/rte_pmd_zlib_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/zlib/rte_pmd_zlib_version.map
+++ b/drivers/compress/zlib/rte_pmd_zlib_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map b/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
+++ b/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/armv8/rte_pmd_armv8_version.map b/drivers/crypto/armv8/rte_pmd_armv8_version.map
index 1f84b68a83..f9f17e4f6e 100644
--- a/drivers/crypto/armv8/rte_pmd_armv8_version.map
+++ b/drivers/crypto/armv8/rte_pmd_armv8_version.map
@@ -1,3 +1,3 @@
-DPDK_17.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map b/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
+++ b/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/ccp/rte_pmd_ccp_version.map b/drivers/crypto/ccp/rte_pmd_ccp_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/crypto/ccp/rte_pmd_ccp_version.map
+++ b/drivers/crypto/ccp/rte_pmd_ccp_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 0bfb986d0b..5952d645fd 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,12 +1,8 @@
-DPDK_17.05 {
-
- local: *;
-};
-
-DPDK_18.11 {
+DPDK_20.0 {
global:
dpaa2_sec_eventq_attach;
dpaa2_sec_eventq_detach;
-} DPDK_17.05;
+ local: *;
+};
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index cc7f2162e0..8580fa13db 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -1,12 +1,8 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_19.11 {
+DPDK_20.0 {
global:
dpaa_sec_eventq_attach;
dpaa_sec_eventq_detach;
-} DPDK_17.11;
+ local: *;
+};
diff --git a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
index 8ffeca934e..f9f17e4f6e 100644
--- a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
+++ b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
@@ -1,3 +1,3 @@
-DPDK_16.07 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/mvsam/rte_pmd_mvsam_version.map b/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
+++ b/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/nitrox/rte_pmd_nitrox_version.map b/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
index 406964d1fc..f9f17e4f6e 100644
--- a/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
+++ b/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
@@ -1,3 +1,3 @@
-DPDK_19.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/null/rte_pmd_null_crypto_version.map b/drivers/crypto/null/rte_pmd_null_crypto_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/null/rte_pmd_null_crypto_version.map
+++ b/drivers/crypto/null/rte_pmd_null_crypto_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map b/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
+++ b/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/openssl/rte_pmd_openssl_version.map b/drivers/crypto/openssl/rte_pmd_openssl_version.map
index cc5829e30b..f9f17e4f6e 100644
--- a/drivers/crypto/openssl/rte_pmd_openssl_version.map
+++ b/drivers/crypto/openssl/rte_pmd_openssl_version.map
@@ -1,3 +1,3 @@
-DPDK_16.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
index 5c43127cf2..077afedce7 100644
--- a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -1,21 +1,16 @@
-DPDK_17.02 {
+DPDK_20.0 {
global:
rte_cryptodev_scheduler_load_user_scheduler;
- rte_cryptodev_scheduler_slave_attach;
- rte_cryptodev_scheduler_slave_detach;
- rte_cryptodev_scheduler_ordering_set;
- rte_cryptodev_scheduler_ordering_get;
-
-};
-
-DPDK_17.05 {
- global:
-
rte_cryptodev_scheduler_mode_get;
rte_cryptodev_scheduler_mode_set;
rte_cryptodev_scheduler_option_get;
rte_cryptodev_scheduler_option_set;
+ rte_cryptodev_scheduler_ordering_get;
+ rte_cryptodev_scheduler_ordering_set;
+ rte_cryptodev_scheduler_slave_attach;
+ rte_cryptodev_scheduler_slave_detach;
rte_cryptodev_scheduler_slaves_get;
-} DPDK_17.02;
+ local: *;
+};
diff --git a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
+++ b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map b/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
+++ b/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/zuc/rte_pmd_zuc_version.map b/drivers/crypto/zuc/rte_pmd_zuc_version.map
index cc5829e30b..f9f17e4f6e 100644
--- a/drivers/crypto/zuc/rte_pmd_zuc_version.map
+++ b/drivers/crypto/zuc/rte_pmd_zuc_version.map
@@ -1,3 +1,3 @@
-DPDK_16.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
+++ b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map b/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
index 1c0b7559dc..f9f17e4f6e 100644
--- a/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
+++ b/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dsw/rte_pmd_dsw_event_version.map b/drivers/event/dsw/rte_pmd_dsw_event_version.map
index 24bd5cdb35..f9f17e4f6e 100644
--- a/drivers/event/dsw/rte_pmd_dsw_event_version.map
+++ b/drivers/event/dsw/rte_pmd_dsw_event_version.map
@@ -1,3 +1,3 @@
-DPDK_18.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/octeontx/rte_pmd_octeontx_event_version.map b/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
+++ b/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
index 41c65c8c9c..f9f17e4f6e 100644
--- a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
+++ b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
+DPDK_20.0 {
local: *;
};
-
diff --git a/drivers/event/opdl/rte_pmd_opdl_event_version.map b/drivers/event/opdl/rte_pmd_opdl_event_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/event/opdl/rte_pmd_opdl_event_version.map
+++ b/drivers/event/opdl/rte_pmd_opdl_event_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
+++ b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/sw/rte_pmd_sw_event_version.map b/drivers/event/sw/rte_pmd_sw_event_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/event/sw/rte_pmd_sw_event_version.map
+++ b/drivers/event/sw/rte_pmd_sw_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket_version.map
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
index 60bf50b2d1..9eebaf7ffd 100644
--- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_dpaa_bpid_info;
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
index b45e7a9ac1..cd4bc88273 100644
--- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -1,16 +1,10 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_dpaa2_bpid_info;
rte_dpaa2_mbuf_alloc_bulk;
-
- local: *;
-};
-
-DPDK_18.05 {
- global:
-
rte_dpaa2_mbuf_from_buf_addr;
rte_dpaa2_mbuf_pool_bpid;
-} DPDK_17.05;
+ local: *;
+};
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
index d703368c31..d4f81aed8e 100644
--- a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
+++ b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
@@ -1,8 +1,8 @@
-DPDK_19.08 {
+DPDK_20.0 {
global:
- otx2_npa_lf_init;
otx2_npa_lf_fini;
+ otx2_npa_lf_init;
local: *;
};
diff --git a/drivers/mempool/ring/rte_mempool_ring_version.map b/drivers/mempool/ring/rte_mempool_ring_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/mempool/ring/rte_mempool_ring_version.map
+++ b/drivers/mempool/ring/rte_mempool_ring_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/stack/rte_mempool_stack_version.map b/drivers/mempool/stack/rte_mempool_stack_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/mempool/stack/rte_mempool_stack_version.map
+++ b/drivers/mempool/stack/rte_mempool_stack_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/af_packet/rte_pmd_af_packet_version.map b/drivers/net/af_packet/rte_pmd_af_packet_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/af_packet/rte_pmd_af_packet_version.map
+++ b/drivers/net/af_packet/rte_pmd_af_packet_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/af_xdp/rte_pmd_af_xdp_version.map b/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
index c6db030fe6..f9f17e4f6e 100644
--- a/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
+++ b/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
@@ -1,3 +1,3 @@
-DPDK_19.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ark/rte_pmd_ark_version.map b/drivers/net/ark/rte_pmd_ark_version.map
index 1062e0429f..f9f17e4f6e 100644
--- a/drivers/net/ark/rte_pmd_ark_version.map
+++ b/drivers/net/ark/rte_pmd_ark_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
- local: *;
-
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/atlantic/rte_pmd_atlantic_version.map b/drivers/net/atlantic/rte_pmd_atlantic_version.map
index b16faa999f..9b04838d84 100644
--- a/drivers/net/atlantic/rte_pmd_atlantic_version.map
+++ b/drivers/net/atlantic/rte_pmd_atlantic_version.map
@@ -1,5 +1,4 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
@@ -13,4 +12,3 @@ EXPERIMENTAL {
rte_pmd_atl_macsec_select_txsa;
rte_pmd_atl_macsec_select_rxsa;
};
-
diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/net/avp/rte_pmd_avp_version.map
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/axgbe/rte_pmd_axgbe_version.map b/drivers/net/axgbe/rte_pmd_axgbe_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/net/axgbe/rte_pmd_axgbe_version.map
+++ b/drivers/net/axgbe/rte_pmd_axgbe_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/bnx2x/rte_pmd_bnx2x_version.map b/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
index bd8138a034..f9f17e4f6e 100644
--- a/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
+++ b/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
@@ -1,4 +1,3 @@
-DPDK_2.1 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/bnxt/rte_pmd_bnxt_version.map b/drivers/net/bnxt/rte_pmd_bnxt_version.map
index 4750d40ad6..bb52562347 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt_version.map
+++ b/drivers/net/bnxt/rte_pmd_bnxt_version.map
@@ -1,4 +1,4 @@
-DPDK_17.08 {
+DPDK_20.0 {
global:
rte_pmd_bnxt_get_vf_rx_status;
@@ -10,13 +10,13 @@ DPDK_17.08 {
rte_pmd_bnxt_set_tx_loopback;
rte_pmd_bnxt_set_vf_mac_addr;
rte_pmd_bnxt_set_vf_mac_anti_spoof;
+ rte_pmd_bnxt_set_vf_persist_stats;
rte_pmd_bnxt_set_vf_rate_limit;
rte_pmd_bnxt_set_vf_rxmode;
rte_pmd_bnxt_set_vf_vlan_anti_spoof;
rte_pmd_bnxt_set_vf_vlan_filter;
rte_pmd_bnxt_set_vf_vlan_insert;
rte_pmd_bnxt_set_vf_vlan_stripq;
- rte_pmd_bnxt_set_vf_persist_stats;
local: *;
};
diff --git a/drivers/net/bonding/rte_pmd_bond_version.map b/drivers/net/bonding/rte_pmd_bond_version.map
index 00d955c481..270c7d5d55 100644
--- a/drivers/net/bonding/rte_pmd_bond_version.map
+++ b/drivers/net/bonding/rte_pmd_bond_version.map
@@ -1,9 +1,21 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_eth_bond_8023ad_agg_selection_get;
+ rte_eth_bond_8023ad_agg_selection_set;
+ rte_eth_bond_8023ad_conf_get;
+ rte_eth_bond_8023ad_dedicated_queues_disable;
+ rte_eth_bond_8023ad_dedicated_queues_enable;
+ rte_eth_bond_8023ad_ext_collect;
+ rte_eth_bond_8023ad_ext_collect_get;
+ rte_eth_bond_8023ad_ext_distrib;
+ rte_eth_bond_8023ad_ext_distrib_get;
+ rte_eth_bond_8023ad_ext_slowtx;
+ rte_eth_bond_8023ad_setup;
rte_eth_bond_8023ad_slave_info;
rte_eth_bond_active_slaves_get;
rte_eth_bond_create;
+ rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
rte_eth_bond_mac_address_reset;
rte_eth_bond_mac_address_set;
@@ -19,36 +31,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_2.1 {
- global:
-
- rte_eth_bond_free;
-
-} DPDK_2.0;
-
-DPDK_16.04 {
-};
-
-DPDK_16.07 {
- global:
-
- rte_eth_bond_8023ad_ext_collect;
- rte_eth_bond_8023ad_ext_collect_get;
- rte_eth_bond_8023ad_ext_distrib;
- rte_eth_bond_8023ad_ext_distrib_get;
- rte_eth_bond_8023ad_ext_slowtx;
-
-} DPDK_16.04;
-
-DPDK_17.08 {
- global:
-
- rte_eth_bond_8023ad_dedicated_queues_enable;
- rte_eth_bond_8023ad_dedicated_queues_disable;
- rte_eth_bond_8023ad_agg_selection_get;
- rte_eth_bond_8023ad_agg_selection_set;
- rte_eth_bond_8023ad_conf_get;
- rte_eth_bond_8023ad_setup;
-
-} DPDK_16.07;
diff --git a/drivers/net/cxgbe/rte_pmd_cxgbe_version.map b/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
index bd8138a034..f9f17e4f6e 100644
--- a/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
+++ b/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
@@ -1,4 +1,3 @@
-DPDK_2.1 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index 8cb4500b51..f403a1526d 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -1,12 +1,9 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_18.08 {
+DPDK_20.0 {
global:
dpaa_eth_eventq_attach;
dpaa_eth_eventq_detach;
rte_pmd_dpaa_set_tx_loopback;
-} DPDK_17.11;
+
+ local: *;
+};
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
index d1b4cdb232..f2bb793319 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
@@ -1,15 +1,11 @@
-DPDK_17.05 {
-
- local: *;
-};
-
-DPDK_17.11 {
+DPDK_20.0 {
global:
dpaa2_eth_eventq_attach;
dpaa2_eth_eventq_detach;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
@@ -17,4 +13,4 @@ EXPERIMENTAL {
rte_pmd_dpaa2_mux_flow_create;
rte_pmd_dpaa2_set_custom_hash;
rte_pmd_dpaa2_set_timestamp;
-} DPDK_17.11;
+};
diff --git a/drivers/net/e1000/rte_pmd_e1000_version.map b/drivers/net/e1000/rte_pmd_e1000_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/e1000/rte_pmd_e1000_version.map
+++ b/drivers/net/e1000/rte_pmd_e1000_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ena/rte_pmd_ena_version.map b/drivers/net/ena/rte_pmd_ena_version.map
index 349c6e1c22..f9f17e4f6e 100644
--- a/drivers/net/ena/rte_pmd_ena_version.map
+++ b/drivers/net/ena/rte_pmd_ena_version.map
@@ -1,4 +1,3 @@
-DPDK_16.04 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/enetc/rte_pmd_enetc_version.map b/drivers/net/enetc/rte_pmd_enetc_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/net/enetc/rte_pmd_enetc_version.map
+++ b/drivers/net/enetc/rte_pmd_enetc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/enic/rte_pmd_enic_version.map b/drivers/net/enic/rte_pmd_enic_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/enic/rte_pmd_enic_version.map
+++ b/drivers/net/enic/rte_pmd_enic_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/failsafe/rte_pmd_failsafe_version.map b/drivers/net/failsafe/rte_pmd_failsafe_version.map
index b6d2840be4..f9f17e4f6e 100644
--- a/drivers/net/failsafe/rte_pmd_failsafe_version.map
+++ b/drivers/net/failsafe/rte_pmd_failsafe_version.map
@@ -1,4 +1,3 @@
-DPDK_17.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/fm10k/rte_pmd_fm10k_version.map b/drivers/net/fm10k/rte_pmd_fm10k_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/fm10k/rte_pmd_fm10k_version.map
+++ b/drivers/net/fm10k/rte_pmd_fm10k_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/hinic/rte_pmd_hinic_version.map b/drivers/net/hinic/rte_pmd_hinic_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/net/hinic/rte_pmd_hinic_version.map
+++ b/drivers/net/hinic/rte_pmd_hinic_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map b/drivers/net/hns3/rte_pmd_hns3_version.map
index 35e5f2debb..f9f17e4f6e 100644
--- a/drivers/net/hns3/rte_pmd_hns3_version.map
+++ b/drivers/net/hns3/rte_pmd_hns3_version.map
@@ -1,3 +1,3 @@
-DPDK_19.11 {
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/i40e/rte_pmd_i40e_version.map b/drivers/net/i40e/rte_pmd_i40e_version.map
index cccd5768c2..a80e69b93e 100644
--- a/drivers/net/i40e/rte_pmd_i40e_version.map
+++ b/drivers/net/i40e/rte_pmd_i40e_version.map
@@ -1,23 +1,34 @@
-DPDK_2.0 {
-
- local: *;
-};
-
-DPDK_17.02 {
+DPDK_20.0 {
global:
+ rte_pmd_i40e_add_vf_mac_addr;
+ rte_pmd_i40e_flow_add_del_packet_template;
+ rte_pmd_i40e_flow_type_mapping_get;
+ rte_pmd_i40e_flow_type_mapping_reset;
+ rte_pmd_i40e_flow_type_mapping_update;
+ rte_pmd_i40e_get_ddp_info;
+ rte_pmd_i40e_get_ddp_list;
rte_pmd_i40e_get_vf_stats;
+ rte_pmd_i40e_inset_get;
+ rte_pmd_i40e_inset_set;
rte_pmd_i40e_ping_vfs;
+ rte_pmd_i40e_process_ddp_package;
rte_pmd_i40e_ptype_mapping_get;
rte_pmd_i40e_ptype_mapping_replace;
rte_pmd_i40e_ptype_mapping_reset;
rte_pmd_i40e_ptype_mapping_update;
+ rte_pmd_i40e_query_vfid_by_mac;
rte_pmd_i40e_reset_vf_stats;
+ rte_pmd_i40e_rss_queue_region_conf;
+ rte_pmd_i40e_set_tc_strict_prio;
rte_pmd_i40e_set_tx_loopback;
rte_pmd_i40e_set_vf_broadcast;
rte_pmd_i40e_set_vf_mac_addr;
rte_pmd_i40e_set_vf_mac_anti_spoof;
+ rte_pmd_i40e_set_vf_max_bw;
rte_pmd_i40e_set_vf_multicast_promisc;
+ rte_pmd_i40e_set_vf_tc_bw_alloc;
+ rte_pmd_i40e_set_vf_tc_max_bw;
rte_pmd_i40e_set_vf_unicast_promisc;
rte_pmd_i40e_set_vf_vlan_anti_spoof;
rte_pmd_i40e_set_vf_vlan_filter;
@@ -25,43 +36,5 @@ DPDK_17.02 {
rte_pmd_i40e_set_vf_vlan_stripq;
rte_pmd_i40e_set_vf_vlan_tag;
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_pmd_i40e_set_tc_strict_prio;
- rte_pmd_i40e_set_vf_max_bw;
- rte_pmd_i40e_set_vf_tc_bw_alloc;
- rte_pmd_i40e_set_vf_tc_max_bw;
- rte_pmd_i40e_process_ddp_package;
- rte_pmd_i40e_get_ddp_list;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_pmd_i40e_get_ddp_info;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_pmd_i40e_add_vf_mac_addr;
- rte_pmd_i40e_flow_add_del_packet_template;
- rte_pmd_i40e_flow_type_mapping_update;
- rte_pmd_i40e_flow_type_mapping_get;
- rte_pmd_i40e_flow_type_mapping_reset;
- rte_pmd_i40e_query_vfid_by_mac;
- rte_pmd_i40e_rss_queue_region_conf;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_pmd_i40e_inset_get;
- rte_pmd_i40e_inset_set;
-} DPDK_17.11;
\ No newline at end of file
+ local: *;
+};
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
index 7b23b609da..f9f17e4f6e 100644
--- a/drivers/net/ice/rte_pmd_ice_version.map
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -1,4 +1,3 @@
-DPDK_19.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ifc/rte_pmd_ifc_version.map b/drivers/net/ifc/rte_pmd_ifc_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/net/ifc/rte_pmd_ifc_version.map
+++ b/drivers/net/ifc/rte_pmd_ifc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
index fc8c95e919..f9f17e4f6e 100644
--- a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
+++ b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
@@ -1,4 +1,3 @@
-DPDK_19.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe_version.map b/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
index c814f96d72..21534dbc3d 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
@@ -1,57 +1,39 @@
-DPDK_2.0 {
-
- local: *;
-};
-
-DPDK_16.11 {
- global:
-
- rte_pmd_ixgbe_set_all_queues_drop_en;
- rte_pmd_ixgbe_set_tx_loopback;
- rte_pmd_ixgbe_set_vf_mac_addr;
- rte_pmd_ixgbe_set_vf_mac_anti_spoof;
- rte_pmd_ixgbe_set_vf_split_drop_en;
- rte_pmd_ixgbe_set_vf_vlan_anti_spoof;
- rte_pmd_ixgbe_set_vf_vlan_insert;
- rte_pmd_ixgbe_set_vf_vlan_stripq;
-} DPDK_2.0;
-
-DPDK_17.02 {
+DPDK_20.0 {
global:
+ rte_pmd_ixgbe_bypass_event_show;
+ rte_pmd_ixgbe_bypass_event_store;
+ rte_pmd_ixgbe_bypass_init;
+ rte_pmd_ixgbe_bypass_state_set;
+ rte_pmd_ixgbe_bypass_state_show;
+ rte_pmd_ixgbe_bypass_ver_show;
+ rte_pmd_ixgbe_bypass_wd_reset;
+ rte_pmd_ixgbe_bypass_wd_timeout_show;
+ rte_pmd_ixgbe_bypass_wd_timeout_store;
rte_pmd_ixgbe_macsec_config_rxsc;
rte_pmd_ixgbe_macsec_config_txsc;
rte_pmd_ixgbe_macsec_disable;
rte_pmd_ixgbe_macsec_enable;
rte_pmd_ixgbe_macsec_select_rxsa;
rte_pmd_ixgbe_macsec_select_txsa;
+ rte_pmd_ixgbe_ping_vf;
+ rte_pmd_ixgbe_set_all_queues_drop_en;
+ rte_pmd_ixgbe_set_tc_bw_alloc;
+ rte_pmd_ixgbe_set_tx_loopback;
+ rte_pmd_ixgbe_set_vf_mac_addr;
+ rte_pmd_ixgbe_set_vf_mac_anti_spoof;
rte_pmd_ixgbe_set_vf_rate_limit;
rte_pmd_ixgbe_set_vf_rx;
rte_pmd_ixgbe_set_vf_rxmode;
+ rte_pmd_ixgbe_set_vf_split_drop_en;
rte_pmd_ixgbe_set_vf_tx;
+ rte_pmd_ixgbe_set_vf_vlan_anti_spoof;
rte_pmd_ixgbe_set_vf_vlan_filter;
-} DPDK_16.11;
+ rte_pmd_ixgbe_set_vf_vlan_insert;
+ rte_pmd_ixgbe_set_vf_vlan_stripq;
-DPDK_17.05 {
- global:
-
- rte_pmd_ixgbe_ping_vf;
- rte_pmd_ixgbe_set_tc_bw_alloc;
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_pmd_ixgbe_bypass_event_show;
- rte_pmd_ixgbe_bypass_event_store;
- rte_pmd_ixgbe_bypass_init;
- rte_pmd_ixgbe_bypass_state_set;
- rte_pmd_ixgbe_bypass_state_show;
- rte_pmd_ixgbe_bypass_ver_show;
- rte_pmd_ixgbe_bypass_wd_reset;
- rte_pmd_ixgbe_bypass_wd_timeout_show;
- rte_pmd_ixgbe_bypass_wd_timeout_store;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/drivers/net/kni/rte_pmd_kni_version.map b/drivers/net/kni/rte_pmd_kni_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/net/kni/rte_pmd_kni_version.map
+++ b/drivers/net/kni/rte_pmd_kni_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/liquidio/rte_pmd_liquidio_version.map b/drivers/net/liquidio/rte_pmd_liquidio_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/net/liquidio/rte_pmd_liquidio_version.map
+++ b/drivers/net/liquidio/rte_pmd_liquidio_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/memif/rte_pmd_memif_version.map b/drivers/net/memif/rte_pmd_memif_version.map
index 8861484fb3..f9f17e4f6e 100644
--- a/drivers/net/memif/rte_pmd_memif_version.map
+++ b/drivers/net/memif/rte_pmd_memif_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/mlx4/rte_pmd_mlx4_version.map b/drivers/net/mlx4/rte_pmd_mlx4_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/mlx4/rte_pmd_mlx4_version.map
+++ b/drivers/net/mlx4/rte_pmd_mlx4_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mlx5/rte_pmd_mlx5_version.map b/drivers/net/mlx5/rte_pmd_mlx5_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5_version.map
+++ b/drivers/net/mlx5/rte_pmd_mlx5_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mvneta/rte_pmd_mvneta_version.map b/drivers/net/mvneta/rte_pmd_mvneta_version.map
index 24bd5cdb35..f9f17e4f6e 100644
--- a/drivers/net/mvneta/rte_pmd_mvneta_version.map
+++ b/drivers/net/mvneta/rte_pmd_mvneta_version.map
@@ -1,3 +1,3 @@
-DPDK_18.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mvpp2/rte_pmd_mvpp2_version.map b/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
+++ b/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/netvsc/rte_pmd_netvsc_version.map b/drivers/net/netvsc/rte_pmd_netvsc_version.map
index d534019a6b..f9f17e4f6e 100644
--- a/drivers/net/netvsc/rte_pmd_netvsc_version.map
+++ b/drivers/net/netvsc/rte_pmd_netvsc_version.map
@@ -1,5 +1,3 @@
-/* SPDX-License-Identifier: BSD-3-Clause */
-
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/nfb/rte_pmd_nfb_version.map b/drivers/net/nfb/rte_pmd_nfb_version.map
index fc8c95e919..f9f17e4f6e 100644
--- a/drivers/net/nfb/rte_pmd_nfb_version.map
+++ b/drivers/net/nfb/rte_pmd_nfb_version.map
@@ -1,4 +1,3 @@
-DPDK_19.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/nfp/rte_pmd_nfp_version.map b/drivers/net/nfp/rte_pmd_nfp_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/nfp/rte_pmd_nfp_version.map
+++ b/drivers/net/nfp/rte_pmd_nfp_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/null/rte_pmd_null_version.map b/drivers/net/null/rte_pmd_null_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/null/rte_pmd_null_version.map
+++ b/drivers/net/null/rte_pmd_null_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/octeontx/rte_pmd_octeontx_version.map b/drivers/net/octeontx/rte_pmd_octeontx_version.map
index a3161b14d0..f7cae02fac 100644
--- a/drivers/net/octeontx/rte_pmd_octeontx_version.map
+++ b/drivers/net/octeontx/rte_pmd_octeontx_version.map
@@ -1,11 +1,7 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_18.02 {
+DPDK_20.0 {
global:
rte_octeontx_pchan_map;
-} DPDK_17.11;
+ local: *;
+};
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/pcap/rte_pmd_pcap_version.map b/drivers/net/pcap/rte_pmd_pcap_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/pcap/rte_pmd_pcap_version.map
+++ b/drivers/net/pcap/rte_pmd_pcap_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/qede/rte_pmd_qede_version.map b/drivers/net/qede/rte_pmd_qede_version.map
index 349c6e1c22..f9f17e4f6e 100644
--- a/drivers/net/qede/rte_pmd_qede_version.map
+++ b/drivers/net/qede/rte_pmd_qede_version.map
@@ -1,4 +1,3 @@
-DPDK_16.04 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ring/rte_pmd_ring_version.map b/drivers/net/ring/rte_pmd_ring_version.map
index 1f785d9409..ebb6be2733 100644
--- a/drivers/net/ring/rte_pmd_ring_version.map
+++ b/drivers/net/ring/rte_pmd_ring_version.map
@@ -1,14 +1,8 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_eth_from_ring;
rte_eth_from_rings;
local: *;
};
-
-DPDK_2.2 {
- global:
-
- rte_eth_from_ring;
-
-} DPDK_2.0;
diff --git a/drivers/net/sfc/rte_pmd_sfc_version.map b/drivers/net/sfc/rte_pmd_sfc_version.map
index 31eca32ebe..f9f17e4f6e 100644
--- a/drivers/net/sfc/rte_pmd_sfc_version.map
+++ b/drivers/net/sfc/rte_pmd_sfc_version.map
@@ -1,4 +1,3 @@
-DPDK_17.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/softnic/rte_pmd_softnic_version.map b/drivers/net/softnic/rte_pmd_softnic_version.map
index bc44b06f98..50f113d5a2 100644
--- a/drivers/net/softnic/rte_pmd_softnic_version.map
+++ b/drivers/net/softnic/rte_pmd_softnic_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_pmd_softnic_run;
diff --git a/drivers/net/szedata2/rte_pmd_szedata2_version.map b/drivers/net/szedata2/rte_pmd_szedata2_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/szedata2/rte_pmd_szedata2_version.map
+++ b/drivers/net/szedata2/rte_pmd_szedata2_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/tap/rte_pmd_tap_version.map b/drivers/net/tap/rte_pmd_tap_version.map
index 31eca32ebe..f9f17e4f6e 100644
--- a/drivers/net/tap/rte_pmd_tap_version.map
+++ b/drivers/net/tap/rte_pmd_tap_version.map
@@ -1,4 +1,3 @@
-DPDK_17.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_version.map b/drivers/net/thunderx/rte_pmd_thunderx_version.map
index 1901bcb3b3..f9f17e4f6e 100644
--- a/drivers/net/thunderx/rte_pmd_thunderx_version.map
+++ b/drivers/net/thunderx/rte_pmd_thunderx_version.map
@@ -1,4 +1,3 @@
-DPDK_16.07 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map b/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
+++ b/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vhost/rte_pmd_vhost_version.map b/drivers/net/vhost/rte_pmd_vhost_version.map
index 695db85749..16b591ccc4 100644
--- a/drivers/net/vhost/rte_pmd_vhost_version.map
+++ b/drivers/net/vhost/rte_pmd_vhost_version.map
@@ -1,13 +1,8 @@
-DPDK_16.04 {
+DPDK_20.0 {
global:
rte_eth_vhost_get_queue_event;
-
- local: *;
-};
-
-DPDK_16.11 {
- global:
-
rte_eth_vhost_get_vid_from_port_id;
+
+ local: *;
};
diff --git a/drivers/net/virtio/rte_pmd_virtio_version.map b/drivers/net/virtio/rte_pmd_virtio_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/virtio/rte_pmd_virtio_version.map
+++ b/drivers/net/virtio/rte_pmd_virtio_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map b/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
+++ b/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map b/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
+++ b/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map b/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
index d16a136fc8..ca6a0d7626 100644
--- a/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
+++ b/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
@@ -1,4 +1,4 @@
-DPDK_19.05 {
+DPDK_20.0 {
global:
rte_qdma_attr_get;
@@ -9,9 +9,9 @@ DPDK_19.05 {
rte_qdma_start;
rte_qdma_stop;
rte_qdma_vq_create;
- rte_qdma_vq_destroy;
rte_qdma_vq_dequeue;
rte_qdma_vq_dequeue_multi;
+ rte_qdma_vq_destroy;
rte_qdma_vq_enqueue;
rte_qdma_vq_enqueue_multi;
rte_qdma_vq_stats;
diff --git a/drivers/raw/ifpga/rte_rawdev_ifpga_version.map b/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
+++ b/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/ioat/rte_rawdev_ioat_version.map b/drivers/raw/ioat/rte_rawdev_ioat_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/raw/ioat/rte_rawdev_ioat_version.map
+++ b/drivers/raw/ioat/rte_rawdev_ioat_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/ntb/rte_rawdev_ntb_version.map b/drivers/raw/ntb/rte_rawdev_ntb_version.map
index 8861484fb3..f9f17e4f6e 100644
--- a/drivers/raw/ntb/rte_rawdev_ntb_version.map
+++ b/drivers/raw/ntb/rte_rawdev_ntb_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map b/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
+++ b/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/skeleton/rte_rawdev_skeleton_version.map b/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
+++ b/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/lib/librte_acl/rte_acl_version.map b/lib/librte_acl/rte_acl_version.map
index b09370a104..c3daca8115 100644
--- a/lib/librte_acl/rte_acl_version.map
+++ b/lib/librte_acl/rte_acl_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_acl_add_rules;
diff --git a/lib/librte_bbdev/rte_bbdev_version.map b/lib/librte_bbdev/rte_bbdev_version.map
index 3624eb1cb4..45b560dbe7 100644
--- a/lib/librte_bbdev/rte_bbdev_version.map
+++ b/lib/librte_bbdev/rte_bbdev_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_bitratestats/rte_bitratestats_version.map b/lib/librte_bitratestats/rte_bitratestats_version.map
index fe7454452d..88fc2912db 100644
--- a/lib/librte_bitratestats/rte_bitratestats_version.map
+++ b/lib/librte_bitratestats/rte_bitratestats_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_stats_bitrate_calc;
diff --git a/lib/librte_bpf/rte_bpf_version.map b/lib/librte_bpf/rte_bpf_version.map
index a203e088ea..e1ec43faa0 100644
--- a/lib/librte_bpf/rte_bpf_version.map
+++ b/lib/librte_bpf/rte_bpf_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_cfgfile/rte_cfgfile_version.map b/lib/librte_cfgfile/rte_cfgfile_version.map
index a0a11cea8d..906eee96bf 100644
--- a/lib/librte_cfgfile/rte_cfgfile_version.map
+++ b/lib/librte_cfgfile/rte_cfgfile_version.map
@@ -1,40 +1,22 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_cfgfile_add_entry;
+ rte_cfgfile_add_section;
rte_cfgfile_close;
+ rte_cfgfile_create;
rte_cfgfile_get_entry;
rte_cfgfile_has_entry;
rte_cfgfile_has_section;
rte_cfgfile_load;
+ rte_cfgfile_load_with_params;
rte_cfgfile_num_sections;
+ rte_cfgfile_save;
rte_cfgfile_section_entries;
+ rte_cfgfile_section_entries_by_index;
rte_cfgfile_section_num_entries;
rte_cfgfile_sections;
+ rte_cfgfile_set_entry;
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_cfgfile_section_entries_by_index;
-
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_cfgfile_load_with_params;
-
-} DPDK_16.04;
-
-DPDK_17.11 {
- global:
-
- rte_cfgfile_add_entry;
- rte_cfgfile_add_section;
- rte_cfgfile_create;
- rte_cfgfile_save;
- rte_cfgfile_set_entry;
-
-} DPDK_17.05;
diff --git a/lib/librte_cmdline/rte_cmdline_version.map b/lib/librte_cmdline/rte_cmdline_version.map
index 04bcb387f2..95fce812ff 100644
--- a/lib/librte_cmdline/rte_cmdline_version.map
+++ b/lib/librte_cmdline/rte_cmdline_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
cirbuf_add_buf_head;
@@ -40,6 +40,7 @@ DPDK_2.0 {
cmdline_parse_num;
cmdline_parse_portlist;
cmdline_parse_string;
+ cmdline_poll;
cmdline_printf;
cmdline_quit;
cmdline_set_prompt;
@@ -68,10 +69,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_2.1 {
- global:
-
- cmdline_poll;
-
-} DPDK_2.0;
diff --git a/lib/librte_compressdev/rte_compressdev_version.map b/lib/librte_compressdev/rte_compressdev_version.map
index e2a108b650..cfcd50ac1c 100644
--- a/lib/librte_compressdev/rte_compressdev_version.map
+++ b/lib/librte_compressdev/rte_compressdev_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 3deb265ac2..1dd1e259a0 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -1,92 +1,62 @@
-DPDK_16.04 {
+DPDK_20.0 {
global:
- rte_cryptodevs;
+ rte_crypto_aead_algorithm_strings;
+ rte_crypto_aead_operation_strings;
+ rte_crypto_auth_algorithm_strings;
+ rte_crypto_auth_operation_strings;
+ rte_crypto_cipher_algorithm_strings;
+ rte_crypto_cipher_operation_strings;
+ rte_crypto_op_pool_create;
+ rte_cryptodev_allocate_driver;
rte_cryptodev_callback_register;
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
- rte_cryptodev_count;
rte_cryptodev_configure;
+ rte_cryptodev_count;
+ rte_cryptodev_device_count_by_driver;
+ rte_cryptodev_devices_get;
+ rte_cryptodev_driver_id_get;
+ rte_cryptodev_driver_name_get;
+ rte_cryptodev_get_aead_algo_enum;
+ rte_cryptodev_get_auth_algo_enum;
+ rte_cryptodev_get_cipher_algo_enum;
rte_cryptodev_get_dev_id;
rte_cryptodev_get_feature_name;
+ rte_cryptodev_get_sec_ctx;
rte_cryptodev_info_get;
+ rte_cryptodev_name_get;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
+ rte_cryptodev_pmd_create;
+ rte_cryptodev_pmd_create_dev_name;
+ rte_cryptodev_pmd_destroy;
+ rte_cryptodev_pmd_get_dev;
+ rte_cryptodev_pmd_get_named_dev;
+ rte_cryptodev_pmd_is_valid_dev;
+ rte_cryptodev_pmd_parse_input_args;
rte_cryptodev_pmd_release_device;
- rte_cryptodev_sym_session_create;
- rte_cryptodev_sym_session_free;
+ rte_cryptodev_queue_pair_count;
+ rte_cryptodev_queue_pair_setup;
rte_cryptodev_socket_id;
rte_cryptodev_start;
rte_cryptodev_stats_get;
rte_cryptodev_stats_reset;
rte_cryptodev_stop;
- rte_cryptodev_queue_pair_count;
- rte_cryptodev_queue_pair_setup;
- rte_crypto_op_pool_create;
-
- local: *;
-};
-
-DPDK_17.02 {
- global:
-
- rte_cryptodev_devices_get;
- rte_cryptodev_pmd_create_dev_name;
- rte_cryptodev_pmd_get_dev;
- rte_cryptodev_pmd_get_named_dev;
- rte_cryptodev_pmd_is_valid_dev;
+ rte_cryptodev_sym_capability_check_aead;
rte_cryptodev_sym_capability_check_auth;
rte_cryptodev_sym_capability_check_cipher;
rte_cryptodev_sym_capability_get;
- rte_crypto_auth_algorithm_strings;
- rte_crypto_auth_operation_strings;
- rte_crypto_cipher_algorithm_strings;
- rte_crypto_cipher_operation_strings;
-
-} DPDK_16.04;
-
-DPDK_17.05 {
- global:
-
- rte_cryptodev_get_auth_algo_enum;
- rte_cryptodev_get_cipher_algo_enum;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_cryptodev_allocate_driver;
- rte_cryptodev_device_count_by_driver;
- rte_cryptodev_driver_id_get;
- rte_cryptodev_driver_name_get;
- rte_cryptodev_get_aead_algo_enum;
- rte_cryptodev_sym_capability_check_aead;
- rte_cryptodev_sym_session_init;
- rte_cryptodev_sym_session_clear;
- rte_crypto_aead_algorithm_strings;
- rte_crypto_aead_operation_strings;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_cryptodev_get_sec_ctx;
- rte_cryptodev_name_get;
- rte_cryptodev_pmd_create;
- rte_cryptodev_pmd_destroy;
- rte_cryptodev_pmd_parse_input_args;
-
-} DPDK_17.08;
-
-DPDK_18.05 {
- global:
-
rte_cryptodev_sym_get_header_session_size;
rte_cryptodev_sym_get_private_session_size;
+ rte_cryptodev_sym_session_clear;
+ rte_cryptodev_sym_session_create;
+ rte_cryptodev_sym_session_free;
+ rte_cryptodev_sym_session_init;
+ rte_cryptodevs;
-} DPDK_17.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_distributor/rte_distributor_version.map b/lib/librte_distributor/rte_distributor_version.map
index 5643ab85fb..1b7c643005 100644
--- a/lib/librte_distributor/rte_distributor_version.map
+++ b/lib/librte_distributor/rte_distributor_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_distributor_clear_returns;
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 7cbf82d37b..8c41999317 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
__rte_panic;
@@ -7,46 +7,111 @@ DPDK_2.0 {
lcore_config;
per_lcore__lcore_id;
per_lcore__rte_errno;
+ rte_bus_dump;
+ rte_bus_find;
+ rte_bus_find_by_device;
+ rte_bus_find_by_name;
+ rte_bus_get_iommu_class;
+ rte_bus_probe;
+ rte_bus_register;
+ rte_bus_scan;
+ rte_bus_unregister;
rte_calloc;
rte_calloc_socket;
rte_cpu_check_supported;
rte_cpu_get_flag_enabled;
+ rte_cpu_get_flag_name;
+ rte_cpu_is_supported;
+ rte_ctrl_thread_create;
rte_cycles_vmware_tsc_map;
rte_delay_us;
+ rte_delay_us_block;
+ rte_delay_us_callback_register;
+ rte_dev_is_probed;
+ rte_dev_probe;
+ rte_dev_remove;
+ rte_devargs_add;
+ rte_devargs_dump;
+ rte_devargs_insert;
+ rte_devargs_next;
+ rte_devargs_parse;
+ rte_devargs_parsef;
+ rte_devargs_remove;
+ rte_devargs_type_count;
rte_dump_physmem_layout;
rte_dump_registers;
rte_dump_stack;
rte_dump_tailq;
rte_eal_alarm_cancel;
rte_eal_alarm_set;
+ rte_eal_cleanup;
+ rte_eal_create_uio_dev;
rte_eal_get_configuration;
rte_eal_get_lcore_state;
rte_eal_get_physmem_size;
+ rte_eal_get_runtime_dir;
rte_eal_has_hugepages;
+ rte_eal_has_pci;
+ rte_eal_hotplug_add;
+ rte_eal_hotplug_remove;
rte_eal_hpet_init;
rte_eal_init;
rte_eal_iopl_init;
+ rte_eal_iova_mode;
rte_eal_lcore_role;
+ rte_eal_mbuf_user_pool_ops;
rte_eal_mp_remote_launch;
rte_eal_mp_wait_lcore;
+ rte_eal_primary_proc_alive;
rte_eal_process_type;
rte_eal_remote_launch;
rte_eal_tailq_lookup;
rte_eal_tailq_register;
+ rte_eal_using_phys_addrs;
+ rte_eal_vfio_intr_mode;
rte_eal_wait_lcore;
+ rte_epoll_ctl;
+ rte_epoll_wait;
rte_exit;
rte_free;
rte_get_hpet_cycles;
rte_get_hpet_hz;
rte_get_tsc_hz;
rte_hexdump;
+ rte_hypervisor_get;
+ rte_hypervisor_get_name;
+ rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
+ rte_intr_cap_multiple;
rte_intr_disable;
+ rte_intr_dp_is_en;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
rte_intr_enable;
+ rte_intr_free_epoll_fd;
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_keepalive_create;
+ rte_keepalive_dispatch_pings;
+ rte_keepalive_mark_alive;
+ rte_keepalive_mark_sleep;
+ rte_keepalive_register_core;
+ rte_keepalive_register_relay_callback;
+ rte_lcore_has_role;
+ rte_lcore_index;
+ rte_lcore_to_socket_id;
rte_log;
rte_log_cur_msg_loglevel;
rte_log_cur_msg_logtype;
+ rte_log_dump;
+ rte_log_get_global_level;
+ rte_log_get_level;
+ rte_log_register;
+ rte_log_set_global_level;
+ rte_log_set_level;
+ rte_log_set_level_pattern;
+ rte_log_set_level_regexp;
rte_logs;
rte_malloc;
rte_malloc_dump_stats;
@@ -54,155 +119,38 @@ DPDK_2.0 {
rte_malloc_set_limit;
rte_malloc_socket;
rte_malloc_validate;
+ rte_malloc_virt2iova;
+ rte_mcfg_mem_read_lock;
+ rte_mcfg_mem_read_unlock;
+ rte_mcfg_mem_write_lock;
+ rte_mcfg_mem_write_unlock;
+ rte_mcfg_mempool_read_lock;
+ rte_mcfg_mempool_read_unlock;
+ rte_mcfg_mempool_write_lock;
+ rte_mcfg_mempool_write_unlock;
+ rte_mcfg_tailq_read_lock;
+ rte_mcfg_tailq_read_unlock;
+ rte_mcfg_tailq_write_lock;
+ rte_mcfg_tailq_write_unlock;
rte_mem_lock_page;
+ rte_mem_virt2iova;
rte_mem_virt2phy;
rte_memdump;
rte_memory_get_nchannel;
rte_memory_get_nrank;
rte_memzone_dump;
+ rte_memzone_free;
rte_memzone_lookup;
rte_memzone_reserve;
rte_memzone_reserve_aligned;
rte_memzone_reserve_bounded;
rte_memzone_walk;
rte_openlog_stream;
+ rte_rand;
rte_realloc;
- rte_set_application_usage_hook;
- rte_socket_id;
- rte_strerror;
- rte_strsplit;
- rte_sys_gettid;
- rte_thread_get_affinity;
- rte_thread_set_affinity;
- rte_vlog;
- rte_zmalloc;
- rte_zmalloc_socket;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_epoll_ctl;
- rte_epoll_wait;
- rte_intr_allow_others;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
- rte_memzone_free;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
- rte_intr_cap_multiple;
- rte_keepalive_create;
- rte_keepalive_dispatch_pings;
- rte_keepalive_mark_alive;
- rte_keepalive_register_core;
-
-} DPDK_2.1;
-
-DPDK_16.04 {
- global:
-
- rte_cpu_get_flag_name;
- rte_eal_primary_proc_alive;
-
-} DPDK_2.2;
-
-DPDK_16.07 {
- global:
-
- rte_keepalive_mark_sleep;
- rte_keepalive_register_relay_callback;
- rte_rtm_supported;
- rte_thread_setname;
-
-} DPDK_16.04;
-
-DPDK_16.11 {
- global:
-
- rte_delay_us_block;
- rte_delay_us_callback_register;
-
-} DPDK_16.07;
-
-DPDK_17.02 {
- global:
-
- rte_bus_dump;
- rte_bus_probe;
- rte_bus_register;
- rte_bus_scan;
- rte_bus_unregister;
-
-} DPDK_16.11;
-
-DPDK_17.05 {
- global:
-
- rte_cpu_is_supported;
- rte_intr_free_epoll_fd;
- rte_log_dump;
- rte_log_get_global_level;
- rte_log_register;
- rte_log_set_global_level;
- rte_log_set_level;
- rte_log_set_level_regexp;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_bus_find;
- rte_bus_find_by_device;
- rte_bus_find_by_name;
- rte_log_get_level;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_eal_create_uio_dev;
- rte_bus_get_iommu_class;
- rte_eal_has_pci;
- rte_eal_iova_mode;
- rte_eal_using_phys_addrs;
- rte_eal_vfio_intr_mode;
- rte_lcore_has_role;
- rte_malloc_virt2iova;
- rte_mem_virt2iova;
- rte_vfio_enable;
- rte_vfio_is_enabled;
- rte_vfio_noiommu_is_enabled;
- rte_vfio_release_device;
- rte_vfio_setup_device;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_hypervisor_get;
- rte_hypervisor_get_name;
- rte_vfio_clear_group;
rte_reciprocal_value;
rte_reciprocal_value_u64;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_log_set_level_pattern;
+ rte_rtm_supported;
rte_service_attr_get;
rte_service_attr_reset_all;
rte_service_component_register;
@@ -215,6 +163,8 @@ DPDK_18.05 {
rte_service_get_count;
rte_service_get_name;
rte_service_lcore_add;
+ rte_service_lcore_attr_get;
+ rte_service_lcore_attr_reset_all;
rte_service_lcore_count;
rte_service_lcore_count_services;
rte_service_lcore_del;
@@ -224,6 +174,7 @@ DPDK_18.05 {
rte_service_lcore_stop;
rte_service_map_lcore_get;
rte_service_map_lcore_set;
+ rte_service_may_be_active;
rte_service_probe_capability;
rte_service_run_iter_on_app_lcore;
rte_service_runstate_get;
@@ -231,17 +182,23 @@ DPDK_18.05 {
rte_service_set_runstate_mapped_check;
rte_service_set_stats_enable;
rte_service_start_with_defaults;
-
-} DPDK_18.02;
-
-DPDK_18.08 {
- global:
-
- rte_eal_mbuf_user_pool_ops;
+ rte_set_application_usage_hook;
+ rte_socket_count;
+ rte_socket_id;
+ rte_socket_id_by_idx;
+ rte_srand;
+ rte_strerror;
+ rte_strscpy;
+ rte_strsplit;
+ rte_sys_gettid;
+ rte_thread_get_affinity;
+ rte_thread_set_affinity;
+ rte_thread_setname;
rte_uuid_compare;
rte_uuid_is_null;
rte_uuid_parse;
rte_uuid_unparse;
+ rte_vfio_clear_group;
rte_vfio_container_create;
rte_vfio_container_destroy;
rte_vfio_container_dma_map;
@@ -250,67 +207,20 @@ DPDK_18.08 {
rte_vfio_container_group_unbind;
rte_vfio_dma_map;
rte_vfio_dma_unmap;
+ rte_vfio_enable;
rte_vfio_get_container_fd;
rte_vfio_get_group_fd;
rte_vfio_get_group_num;
-
-} DPDK_18.05;
-
-DPDK_18.11 {
- global:
-
- rte_dev_probe;
- rte_dev_remove;
- rte_eal_get_runtime_dir;
- rte_eal_hotplug_add;
- rte_eal_hotplug_remove;
- rte_strscpy;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- rte_ctrl_thread_create;
- rte_dev_is_probed;
- rte_devargs_add;
- rte_devargs_dump;
- rte_devargs_insert;
- rte_devargs_next;
- rte_devargs_parse;
- rte_devargs_parsef;
- rte_devargs_remove;
- rte_devargs_type_count;
- rte_eal_cleanup;
- rte_socket_count;
- rte_socket_id_by_idx;
-
-} DPDK_18.11;
-
-DPDK_19.08 {
- global:
-
- rte_lcore_index;
- rte_lcore_to_socket_id;
- rte_mcfg_mem_read_lock;
- rte_mcfg_mem_read_unlock;
- rte_mcfg_mem_write_lock;
- rte_mcfg_mem_write_unlock;
- rte_mcfg_mempool_read_lock;
- rte_mcfg_mempool_read_unlock;
- rte_mcfg_mempool_write_lock;
- rte_mcfg_mempool_write_unlock;
- rte_mcfg_tailq_read_lock;
- rte_mcfg_tailq_read_unlock;
- rte_mcfg_tailq_write_lock;
- rte_mcfg_tailq_write_unlock;
- rte_rand;
- rte_service_lcore_attr_get;
- rte_service_lcore_attr_reset_all;
- rte_service_may_be_active;
- rte_srand;
-
-} DPDK_19.05;
+ rte_vfio_is_enabled;
+ rte_vfio_noiommu_is_enabled;
+ rte_vfio_release_device;
+ rte_vfio_setup_device;
+ rte_vlog;
+ rte_zmalloc;
+ rte_zmalloc_socket;
+
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_efd/rte_efd_version.map b/lib/librte_efd/rte_efd_version.map
index ae60a64178..e010eecfe4 100644
--- a/lib/librte_efd/rte_efd_version.map
+++ b/lib/librte_efd/rte_efd_version.map
@@ -1,4 +1,4 @@
-DPDK_17.02 {
+DPDK_20.0 {
global:
rte_efd_create;
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index 6df42a47b8..9e1dbdebb4 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -1,35 +1,53 @@
-DPDK_2.2 {
+DPDK_20.0 {
global:
+ _rte_eth_dev_callback_process;
+ _rte_eth_dev_reset;
+ rte_eth_add_first_rx_callback;
rte_eth_add_rx_callback;
rte_eth_add_tx_callback;
rte_eth_allmulticast_disable;
rte_eth_allmulticast_enable;
rte_eth_allmulticast_get;
+ rte_eth_dev_adjust_nb_rx_tx_desc;
rte_eth_dev_allocate;
rte_eth_dev_allocated;
+ rte_eth_dev_attach_secondary;
rte_eth_dev_callback_register;
rte_eth_dev_callback_unregister;
rte_eth_dev_close;
rte_eth_dev_configure;
rte_eth_dev_count;
+ rte_eth_dev_count_avail;
+ rte_eth_dev_count_total;
rte_eth_dev_default_mac_addr_set;
+ rte_eth_dev_filter_ctrl;
rte_eth_dev_filter_supported;
rte_eth_dev_flow_ctrl_get;
rte_eth_dev_flow_ctrl_set;
+ rte_eth_dev_fw_version_get;
rte_eth_dev_get_dcb_info;
rte_eth_dev_get_eeprom;
rte_eth_dev_get_eeprom_length;
rte_eth_dev_get_mtu;
+ rte_eth_dev_get_name_by_port;
+ rte_eth_dev_get_port_by_name;
rte_eth_dev_get_reg_info;
+ rte_eth_dev_get_sec_ctx;
+ rte_eth_dev_get_supported_ptypes;
rte_eth_dev_get_vlan_offload;
- rte_eth_devices;
rte_eth_dev_info_get;
rte_eth_dev_is_valid_port;
+ rte_eth_dev_l2_tunnel_eth_type_conf;
+ rte_eth_dev_l2_tunnel_offload_set;
+ rte_eth_dev_logtype;
rte_eth_dev_mac_addr_add;
rte_eth_dev_mac_addr_remove;
+ rte_eth_dev_pool_ops_supported;
rte_eth_dev_priority_flow_ctrl_set;
+ rte_eth_dev_probing_finish;
rte_eth_dev_release_port;
+ rte_eth_dev_reset;
rte_eth_dev_rss_hash_conf_get;
rte_eth_dev_rss_hash_update;
rte_eth_dev_rss_reta_query;
@@ -38,6 +56,7 @@ DPDK_2.2 {
rte_eth_dev_rx_intr_ctl_q;
rte_eth_dev_rx_intr_disable;
rte_eth_dev_rx_intr_enable;
+ rte_eth_dev_rx_offload_name;
rte_eth_dev_rx_queue_start;
rte_eth_dev_rx_queue_stop;
rte_eth_dev_set_eeprom;
@@ -47,18 +66,28 @@ DPDK_2.2 {
rte_eth_dev_set_mtu;
rte_eth_dev_set_rx_queue_stats_mapping;
rte_eth_dev_set_tx_queue_stats_mapping;
+ rte_eth_dev_set_vlan_ether_type;
rte_eth_dev_set_vlan_offload;
rte_eth_dev_set_vlan_pvid;
rte_eth_dev_set_vlan_strip_on_queue;
rte_eth_dev_socket_id;
rte_eth_dev_start;
rte_eth_dev_stop;
+ rte_eth_dev_tx_offload_name;
rte_eth_dev_tx_queue_start;
rte_eth_dev_tx_queue_stop;
rte_eth_dev_uc_all_hash_table_set;
rte_eth_dev_uc_hash_table_set;
+ rte_eth_dev_udp_tunnel_port_add;
+ rte_eth_dev_udp_tunnel_port_delete;
rte_eth_dev_vlan_filter;
+ rte_eth_devices;
rte_eth_dma_zone_reserve;
+ rte_eth_find_next;
+ rte_eth_find_next_owned_by;
+ rte_eth_iterator_cleanup;
+ rte_eth_iterator_init;
+ rte_eth_iterator_next;
rte_eth_led_off;
rte_eth_led_on;
rte_eth_link;
@@ -75,6 +104,7 @@ DPDK_2.2 {
rte_eth_rx_queue_info_get;
rte_eth_rx_queue_setup;
rte_eth_set_queue_rate_limit;
+ rte_eth_speed_bitflag;
rte_eth_stats;
rte_eth_stats_get;
rte_eth_stats_reset;
@@ -85,66 +115,27 @@ DPDK_2.2 {
rte_eth_timesync_read_time;
rte_eth_timesync_read_tx_timestamp;
rte_eth_timesync_write_time;
- rte_eth_tx_queue_info_get;
- rte_eth_tx_queue_setup;
- rte_eth_xstats_get;
- rte_eth_xstats_reset;
-
- local: *;
-};
-
-DPDK_16.04 {
- global:
-
- rte_eth_dev_get_supported_ptypes;
- rte_eth_dev_l2_tunnel_eth_type_conf;
- rte_eth_dev_l2_tunnel_offload_set;
- rte_eth_dev_set_vlan_ether_type;
- rte_eth_dev_udp_tunnel_port_add;
- rte_eth_dev_udp_tunnel_port_delete;
- rte_eth_speed_bitflag;
rte_eth_tx_buffer_count_callback;
rte_eth_tx_buffer_drop_callback;
rte_eth_tx_buffer_init;
rte_eth_tx_buffer_set_err_callback;
-
-} DPDK_2.2;
-
-DPDK_16.07 {
- global:
-
- rte_eth_add_first_rx_callback;
- rte_eth_dev_get_name_by_port;
- rte_eth_dev_get_port_by_name;
- rte_eth_xstats_get_names;
-
-} DPDK_16.04;
-
-DPDK_17.02 {
- global:
-
- _rte_eth_dev_reset;
- rte_eth_dev_fw_version_get;
-
-} DPDK_16.07;
-
-DPDK_17.05 {
- global:
-
- rte_eth_dev_attach_secondary;
- rte_eth_find_next;
rte_eth_tx_done_cleanup;
+ rte_eth_tx_queue_info_get;
+ rte_eth_tx_queue_setup;
+ rte_eth_xstats_get;
rte_eth_xstats_get_by_id;
rte_eth_xstats_get_id_by_name;
+ rte_eth_xstats_get_names;
rte_eth_xstats_get_names_by_id;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- _rte_eth_dev_callback_process;
- rte_eth_dev_adjust_nb_rx_tx_desc;
+ rte_eth_xstats_reset;
+ rte_flow_copy;
+ rte_flow_create;
+ rte_flow_destroy;
+ rte_flow_error_set;
+ rte_flow_flush;
+ rte_flow_isolate;
+ rte_flow_query;
+ rte_flow_validate;
rte_tm_capabilities_get;
rte_tm_get_number_of_leaf_nodes;
rte_tm_hierarchy_commit;
@@ -176,65 +167,8 @@ DPDK_17.08 {
rte_tm_wred_profile_add;
rte_tm_wred_profile_delete;
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_eth_dev_get_sec_ctx;
- rte_eth_dev_pool_ops_supported;
- rte_eth_dev_reset;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_eth_dev_filter_ctrl;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_eth_dev_count_avail;
- rte_eth_dev_probing_finish;
- rte_eth_find_next_owned_by;
- rte_flow_copy;
- rte_flow_create;
- rte_flow_destroy;
- rte_flow_error_set;
- rte_flow_flush;
- rte_flow_isolate;
- rte_flow_query;
- rte_flow_validate;
-
-} DPDK_18.02;
-
-DPDK_18.08 {
- global:
-
- rte_eth_dev_logtype;
-
-} DPDK_18.05;
-
-DPDK_18.11 {
- global:
-
- rte_eth_dev_rx_offload_name;
- rte_eth_dev_tx_offload_name;
- rte_eth_iterator_cleanup;
- rte_eth_iterator_init;
- rte_eth_iterator_next;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- rte_eth_dev_count_total;
-
-} DPDK_18.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 76b3021d3a..edfc15282d 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -1,61 +1,38 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
- rte_eventdevs;
-
+ rte_event_crypto_adapter_caps_get;
+ rte_event_crypto_adapter_create;
+ rte_event_crypto_adapter_create_ext;
+ rte_event_crypto_adapter_event_port_get;
+ rte_event_crypto_adapter_free;
+ rte_event_crypto_adapter_queue_pair_add;
+ rte_event_crypto_adapter_queue_pair_del;
+ rte_event_crypto_adapter_service_id_get;
+ rte_event_crypto_adapter_start;
+ rte_event_crypto_adapter_stats_get;
+ rte_event_crypto_adapter_stats_reset;
+ rte_event_crypto_adapter_stop;
+ rte_event_dequeue_timeout_ticks;
+ rte_event_dev_attr_get;
+ rte_event_dev_close;
+ rte_event_dev_configure;
rte_event_dev_count;
+ rte_event_dev_dump;
rte_event_dev_get_dev_id;
- rte_event_dev_socket_id;
rte_event_dev_info_get;
- rte_event_dev_configure;
+ rte_event_dev_selftest;
+ rte_event_dev_service_id_get;
+ rte_event_dev_socket_id;
rte_event_dev_start;
rte_event_dev_stop;
- rte_event_dev_close;
- rte_event_dev_dump;
+ rte_event_dev_stop_flush_callback_register;
rte_event_dev_xstats_by_name_get;
rte_event_dev_xstats_get;
rte_event_dev_xstats_names_get;
rte_event_dev_xstats_reset;
-
- rte_event_port_default_conf_get;
- rte_event_port_setup;
- rte_event_port_link;
- rte_event_port_unlink;
- rte_event_port_links_get;
-
- rte_event_queue_default_conf_get;
- rte_event_queue_setup;
-
- rte_event_dequeue_timeout_ticks;
-
- rte_event_pmd_allocate;
- rte_event_pmd_release;
- rte_event_pmd_vdev_init;
- rte_event_pmd_vdev_uninit;
- rte_event_pmd_pci_probe;
- rte_event_pmd_pci_remove;
-
- local: *;
-};
-
-DPDK_17.08 {
- global:
-
- rte_event_ring_create;
- rte_event_ring_free;
- rte_event_ring_init;
- rte_event_ring_lookup;
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_event_dev_attr_get;
- rte_event_dev_service_id_get;
- rte_event_port_attr_get;
- rte_event_queue_attr_get;
-
rte_event_eth_rx_adapter_caps_get;
+ rte_event_eth_rx_adapter_cb_register;
rte_event_eth_rx_adapter_create;
rte_event_eth_rx_adapter_create_ext;
rte_event_eth_rx_adapter_free;
@@ -63,38 +40,9 @@ DPDK_17.11 {
rte_event_eth_rx_adapter_queue_del;
rte_event_eth_rx_adapter_service_id_get;
rte_event_eth_rx_adapter_start;
+ rte_event_eth_rx_adapter_stats_get;
rte_event_eth_rx_adapter_stats_reset;
rte_event_eth_rx_adapter_stop;
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_event_dev_selftest;
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_event_dev_stop_flush_callback_register;
-} DPDK_18.02;
-
-DPDK_19.05 {
- global:
-
- rte_event_crypto_adapter_caps_get;
- rte_event_crypto_adapter_create;
- rte_event_crypto_adapter_create_ext;
- rte_event_crypto_adapter_event_port_get;
- rte_event_crypto_adapter_free;
- rte_event_crypto_adapter_queue_pair_add;
- rte_event_crypto_adapter_queue_pair_del;
- rte_event_crypto_adapter_service_id_get;
- rte_event_crypto_adapter_start;
- rte_event_crypto_adapter_stats_get;
- rte_event_crypto_adapter_stats_reset;
- rte_event_crypto_adapter_stop;
- rte_event_port_unlinks_in_progress;
rte_event_eth_tx_adapter_caps_get;
rte_event_eth_tx_adapter_create;
rte_event_eth_tx_adapter_create_ext;
@@ -107,6 +55,26 @@ DPDK_19.05 {
rte_event_eth_tx_adapter_stats_get;
rte_event_eth_tx_adapter_stats_reset;
rte_event_eth_tx_adapter_stop;
+ rte_event_pmd_allocate;
+ rte_event_pmd_pci_probe;
+ rte_event_pmd_pci_remove;
+ rte_event_pmd_release;
+ rte_event_pmd_vdev_init;
+ rte_event_pmd_vdev_uninit;
+ rte_event_port_attr_get;
+ rte_event_port_default_conf_get;
+ rte_event_port_link;
+ rte_event_port_links_get;
+ rte_event_port_setup;
+ rte_event_port_unlink;
+ rte_event_port_unlinks_in_progress;
+ rte_event_queue_attr_get;
+ rte_event_queue_default_conf_get;
+ rte_event_queue_setup;
+ rte_event_ring_create;
+ rte_event_ring_free;
+ rte_event_ring_init;
+ rte_event_ring_lookup;
rte_event_timer_adapter_caps_get;
rte_event_timer_adapter_create;
rte_event_timer_adapter_create_ext;
@@ -121,11 +89,7 @@ DPDK_19.05 {
rte_event_timer_arm_burst;
rte_event_timer_arm_tmo_tick_burst;
rte_event_timer_cancel_burst;
-} DPDK_18.05;
+ rte_eventdevs;
-DPDK_19.08 {
- global:
-
- rte_event_eth_rx_adapter_cb_register;
- rte_event_eth_rx_adapter_stats_get;
-} DPDK_19.05;
+ local: *;
+};
diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map
index 49bc25c6a0..001ff660e3 100644
--- a/lib/librte_flow_classify/rte_flow_classify_version.map
+++ b/lib/librte_flow_classify/rte_flow_classify_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_gro/rte_gro_version.map b/lib/librte_gro/rte_gro_version.map
index 1606b6dc72..9f6fe79e57 100644
--- a/lib/librte_gro/rte_gro_version.map
+++ b/lib/librte_gro/rte_gro_version.map
@@ -1,4 +1,4 @@
-DPDK_17.08 {
+DPDK_20.0 {
global:
rte_gro_ctx_create;
diff --git a/lib/librte_gso/rte_gso_version.map b/lib/librte_gso/rte_gso_version.map
index e1fd453edb..8505a59c27 100644
--- a/lib/librte_gso/rte_gso_version.map
+++ b/lib/librte_gso/rte_gso_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_gso_segment;
diff --git a/lib/librte_hash/rte_hash_version.map b/lib/librte_hash/rte_hash_version.map
index 734ae28b04..138c130c1b 100644
--- a/lib/librte_hash/rte_hash_version.map
+++ b/lib/librte_hash/rte_hash_version.map
@@ -1,58 +1,33 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_fbk_hash_create;
rte_fbk_hash_find_existing;
rte_fbk_hash_free;
rte_hash_add_key;
+ rte_hash_add_key_data;
rte_hash_add_key_with_hash;
+ rte_hash_add_key_with_hash_data;
+ rte_hash_count;
rte_hash_create;
rte_hash_del_key;
rte_hash_del_key_with_hash;
rte_hash_find_existing;
rte_hash_free;
+ rte_hash_get_key_with_position;
rte_hash_hash;
+ rte_hash_iterate;
rte_hash_lookup;
rte_hash_lookup_bulk;
- rte_hash_lookup_with_hash;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_hash_add_key_data;
- rte_hash_add_key_with_hash_data;
- rte_hash_iterate;
rte_hash_lookup_bulk_data;
rte_hash_lookup_data;
+ rte_hash_lookup_with_hash;
rte_hash_lookup_with_hash_data;
rte_hash_reset;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
rte_hash_set_cmp_func;
-} DPDK_2.1;
-
-DPDK_16.07 {
- global:
-
- rte_hash_get_key_with_position;
-
-} DPDK_2.2;
-
-
-DPDK_18.08 {
- global:
-
- rte_hash_count;
-
-} DPDK_16.07;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_ip_frag/rte_ip_frag_version.map b/lib/librte_ip_frag/rte_ip_frag_version.map
index a193007c61..5dd34f828c 100644
--- a/lib/librte_ip_frag/rte_ip_frag_version.map
+++ b/lib/librte_ip_frag/rte_ip_frag_version.map
@@ -1,8 +1,9 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_ip_frag_free_death_row;
rte_ip_frag_table_create;
+ rte_ip_frag_table_destroy;
rte_ip_frag_table_statistics_dump;
rte_ipv4_frag_reassemble_packet;
rte_ipv4_fragment_packet;
@@ -12,13 +13,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_17.08 {
- global:
-
- rte_ip_frag_table_destroy;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index ee9f1961b0..3723b812fc 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_jobstats/rte_jobstats_version.map b/lib/librte_jobstats/rte_jobstats_version.map
index f89441438e..dbd2664ae2 100644
--- a/lib/librte_jobstats/rte_jobstats_version.map
+++ b/lib/librte_jobstats/rte_jobstats_version.map
@@ -1,6 +1,7 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_jobstats_abort;
rte_jobstats_context_finish;
rte_jobstats_context_init;
rte_jobstats_context_reset;
@@ -17,10 +18,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_jobstats_abort;
-
-} DPDK_2.0;
diff --git a/lib/librte_kni/rte_kni_version.map b/lib/librte_kni/rte_kni_version.map
index c877dc6aaa..9cd3cedc54 100644
--- a/lib/librte_kni/rte_kni_version.map
+++ b/lib/librte_kni/rte_kni_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_kni_alloc;
diff --git a/lib/librte_kvargs/rte_kvargs_version.map b/lib/librte_kvargs/rte_kvargs_version.map
index 8f4b4e3f8f..3ba0f4b59c 100644
--- a/lib/librte_kvargs/rte_kvargs_version.map
+++ b/lib/librte_kvargs/rte_kvargs_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_kvargs_count;
@@ -15,4 +15,4 @@ EXPERIMENTAL {
rte_kvargs_parse_delim;
rte_kvargs_strcmp;
-} DPDK_2.0;
+};
diff --git a/lib/librte_latencystats/rte_latencystats_version.map b/lib/librte_latencystats/rte_latencystats_version.map
index ac8403e821..e04e63463f 100644
--- a/lib/librte_latencystats/rte_latencystats_version.map
+++ b/lib/librte_latencystats/rte_latencystats_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_latencystats_get;
diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
index 90beac853d..500f58b806 100644
--- a/lib/librte_lpm/rte_lpm_version.map
+++ b/lib/librte_lpm/rte_lpm_version.map
@@ -1,13 +1,6 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
- rte_lpm_add;
- rte_lpm_create;
- rte_lpm_delete;
- rte_lpm_delete_all;
- rte_lpm_find_existing;
- rte_lpm_free;
- rte_lpm_is_rule_present;
rte_lpm6_add;
rte_lpm6_create;
rte_lpm6_delete;
@@ -18,29 +11,13 @@ DPDK_2.0 {
rte_lpm6_is_rule_present;
rte_lpm6_lookup;
rte_lpm6_lookup_bulk_func;
+ rte_lpm_add;
+ rte_lpm_create;
+ rte_lpm_delete;
+ rte_lpm_delete_all;
+ rte_lpm_find_existing;
+ rte_lpm_free;
+ rte_lpm_is_rule_present;
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_lpm_add;
- rte_lpm_find_existing;
- rte_lpm_create;
- rte_lpm_free;
- rte_lpm_is_rule_present;
- rte_lpm_delete;
- rte_lpm_delete_all;
-
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_lpm6_add;
- rte_lpm6_is_rule_present;
- rte_lpm6_lookup;
- rte_lpm6_lookup_bulk_func;
-
-} DPDK_16.04;
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index 2662a37bf6..d20aa31857 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -1,24 +1,4 @@
-DPDK_2.0 {
- global:
-
- rte_get_rx_ol_flag_name;
- rte_get_tx_ol_flag_name;
- rte_mbuf_sanity_check;
- rte_pktmbuf_dump;
- rte_pktmbuf_init;
- rte_pktmbuf_pool_init;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_pktmbuf_pool_create;
-
-} DPDK_2.0;
-
-DPDK_16.11 {
+DPDK_20.0 {
global:
__rte_pktmbuf_read;
@@ -31,23 +11,26 @@ DPDK_16.11 {
rte_get_ptype_name;
rte_get_ptype_tunnel_name;
rte_get_rx_ol_flag_list;
+ rte_get_rx_ol_flag_name;
rte_get_tx_ol_flag_list;
-
-} DPDK_2.1;
-
-DPDK_18.08 {
- global:
-
+ rte_get_tx_ol_flag_name;
rte_mbuf_best_mempool_ops;
rte_mbuf_platform_mempool_ops;
+ rte_mbuf_sanity_check;
rte_mbuf_set_platform_mempool_ops;
rte_mbuf_set_user_mempool_ops;
rte_mbuf_user_mempool_ops;
+ rte_pktmbuf_dump;
+ rte_pktmbuf_init;
+ rte_pktmbuf_pool_create;
rte_pktmbuf_pool_create_by_ops;
-} DPDK_16.11;
+ rte_pktmbuf_pool_init;
+
+ local: *;
+};
EXPERIMENTAL {
global:
rte_mbuf_check;
-} DPDK_18.08;
+};
diff --git a/lib/librte_member/rte_member_version.map b/lib/librte_member/rte_member_version.map
index 019e4cd962..87780ae611 100644
--- a/lib/librte_member/rte_member_version.map
+++ b/lib/librte_member/rte_member_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_member_add;
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4607..6a425d203a 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -1,57 +1,39 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_mempool_audit;
- rte_mempool_calc_obj_size;
- rte_mempool_create;
- rte_mempool_dump;
- rte_mempool_list_dump;
- rte_mempool_lookup;
- rte_mempool_walk;
-
- local: *;
-};
-
-DPDK_16.07 {
- global:
-
rte_mempool_avail_count;
rte_mempool_cache_create;
rte_mempool_cache_flush;
rte_mempool_cache_free;
+ rte_mempool_calc_obj_size;
rte_mempool_check_cookies;
+ rte_mempool_contig_blocks_check_cookies;
+ rte_mempool_create;
rte_mempool_create_empty;
rte_mempool_default_cache;
+ rte_mempool_dump;
rte_mempool_free;
rte_mempool_generic_get;
rte_mempool_generic_put;
rte_mempool_in_use_count;
+ rte_mempool_list_dump;
+ rte_mempool_lookup;
rte_mempool_mem_iter;
rte_mempool_obj_iter;
+ rte_mempool_op_calc_mem_size_default;
+ rte_mempool_op_populate_default;
rte_mempool_ops_table;
rte_mempool_populate_anon;
rte_mempool_populate_default;
+ rte_mempool_populate_iova;
rte_mempool_populate_virt;
rte_mempool_register_ops;
rte_mempool_set_ops_byname;
+ rte_mempool_walk;
-} DPDK_2.0;
-
-DPDK_17.11 {
- global:
-
- rte_mempool_populate_iova;
-
-} DPDK_16.07;
-
-DPDK_18.05 {
- global:
-
- rte_mempool_contig_blocks_check_cookies;
- rte_mempool_op_calc_mem_size_default;
- rte_mempool_op_populate_default;
-
-} DPDK_17.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_meter/rte_meter_version.map b/lib/librte_meter/rte_meter_version.map
index 4b460d5803..46410b0369 100644
--- a/lib/librte_meter/rte_meter_version.map
+++ b/lib/librte_meter/rte_meter_version.map
@@ -1,21 +1,16 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_meter_srtcm_color_aware_check;
rte_meter_srtcm_color_blind_check;
rte_meter_srtcm_config;
+ rte_meter_srtcm_profile_config;
rte_meter_trtcm_color_aware_check;
rte_meter_trtcm_color_blind_check;
rte_meter_trtcm_config;
-
- local: *;
-};
-
-DPDK_18.08 {
- global:
-
- rte_meter_srtcm_profile_config;
rte_meter_trtcm_profile_config;
+
+ local: *;
};
EXPERIMENTAL {
diff --git a/lib/librte_metrics/rte_metrics_version.map b/lib/librte_metrics/rte_metrics_version.map
index 6ac99a44a1..85663f356e 100644
--- a/lib/librte_metrics/rte_metrics_version.map
+++ b/lib/librte_metrics/rte_metrics_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_metrics_get_names;
diff --git a/lib/librte_net/rte_net_version.map b/lib/librte_net/rte_net_version.map
index fffc4a3723..8a4e75a3a0 100644
--- a/lib/librte_net/rte_net_version.map
+++ b/lib/librte_net/rte_net_version.map
@@ -1,25 +1,14 @@
-DPDK_16.11 {
- global:
- rte_net_get_ptype;
-
- local: *;
-};
-
-DPDK_17.05 {
- global:
-
- rte_net_crc_calc;
- rte_net_crc_set_alg;
-
-} DPDK_16.11;
-
-DPDK_19.08 {
+DPDK_20.0 {
global:
rte_eth_random_addr;
rte_ether_format_addr;
+ rte_net_crc_calc;
+ rte_net_crc_set_alg;
+ rte_net_get_ptype;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_pci/rte_pci_version.map b/lib/librte_pci/rte_pci_version.map
index c0280277bb..539785f5f4 100644
--- a/lib/librte_pci/rte_pci_version.map
+++ b/lib/librte_pci/rte_pci_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
eal_parse_pci_BDF;
diff --git a/lib/librte_pdump/rte_pdump_version.map b/lib/librte_pdump/rte_pdump_version.map
index 3e744f3012..6d02ccce6d 100644
--- a/lib/librte_pdump/rte_pdump_version.map
+++ b/lib/librte_pdump/rte_pdump_version.map
@@ -1,4 +1,4 @@
-DPDK_16.07 {
+DPDK_20.0 {
global:
rte_pdump_disable;
diff --git a/lib/librte_pipeline/rte_pipeline_version.map b/lib/librte_pipeline/rte_pipeline_version.map
index 420f065d6e..64d38afecd 100644
--- a/lib/librte_pipeline/rte_pipeline_version.map
+++ b/lib/librte_pipeline/rte_pipeline_version.map
@@ -1,6 +1,8 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_pipeline_ah_packet_drop;
+ rte_pipeline_ah_packet_hijack;
rte_pipeline_check;
rte_pipeline_create;
rte_pipeline_flush;
@@ -9,42 +11,22 @@ DPDK_2.0 {
rte_pipeline_port_in_create;
rte_pipeline_port_in_disable;
rte_pipeline_port_in_enable;
+ rte_pipeline_port_in_stats_read;
rte_pipeline_port_out_create;
rte_pipeline_port_out_packet_insert;
+ rte_pipeline_port_out_stats_read;
rte_pipeline_run;
rte_pipeline_table_create;
rte_pipeline_table_default_entry_add;
rte_pipeline_table_default_entry_delete;
rte_pipeline_table_entry_add;
- rte_pipeline_table_entry_delete;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_pipeline_port_in_stats_read;
- rte_pipeline_port_out_stats_read;
- rte_pipeline_table_stats_read;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
rte_pipeline_table_entry_add_bulk;
+ rte_pipeline_table_entry_delete;
rte_pipeline_table_entry_delete_bulk;
+ rte_pipeline_table_stats_read;
-} DPDK_2.1;
-
-DPDK_16.04 {
- global:
-
- rte_pipeline_ah_packet_hijack;
- rte_pipeline_ah_packet_drop;
-
-} DPDK_2.2;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_port/rte_port_version.map b/lib/librte_port/rte_port_version.map
index 609bcec3ff..db1b8681d9 100644
--- a/lib/librte_port/rte_port_version.map
+++ b/lib/librte_port/rte_port_version.map
@@ -1,62 +1,32 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_port_ethdev_reader_ops;
+ rte_port_ethdev_writer_nodrop_ops;
rte_port_ethdev_writer_ops;
+ rte_port_fd_reader_ops;
+ rte_port_fd_writer_nodrop_ops;
+ rte_port_fd_writer_ops;
+ rte_port_kni_reader_ops;
+ rte_port_kni_writer_nodrop_ops;
+ rte_port_kni_writer_ops;
+ rte_port_ring_multi_reader_ops;
+ rte_port_ring_multi_writer_nodrop_ops;
+ rte_port_ring_multi_writer_ops;
rte_port_ring_reader_ipv4_frag_ops;
+ rte_port_ring_reader_ipv6_frag_ops;
rte_port_ring_reader_ops;
rte_port_ring_writer_ipv4_ras_ops;
+ rte_port_ring_writer_ipv6_ras_ops;
+ rte_port_ring_writer_nodrop_ops;
rte_port_ring_writer_ops;
rte_port_sched_reader_ops;
rte_port_sched_writer_ops;
rte_port_sink_ops;
rte_port_source_ops;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_port_ethdev_writer_nodrop_ops;
- rte_port_ring_reader_ipv6_frag_ops;
- rte_port_ring_writer_ipv6_ras_ops;
- rte_port_ring_writer_nodrop_ops;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
- rte_port_ring_multi_reader_ops;
- rte_port_ring_multi_writer_ops;
- rte_port_ring_multi_writer_nodrop_ops;
-
-} DPDK_2.1;
-
-DPDK_16.07 {
- global:
-
- rte_port_kni_reader_ops;
- rte_port_kni_writer_ops;
- rte_port_kni_writer_nodrop_ops;
-
-} DPDK_2.2;
-
-DPDK_16.11 {
- global:
-
- rte_port_fd_reader_ops;
- rte_port_fd_writer_ops;
- rte_port_fd_writer_nodrop_ops;
-
-} DPDK_16.07;
-
-DPDK_18.11 {
- global:
-
rte_port_sym_crypto_reader_ops;
- rte_port_sym_crypto_writer_ops;
rte_port_sym_crypto_writer_nodrop_ops;
+ rte_port_sym_crypto_writer_ops;
-} DPDK_16.11;
+ local: *;
+};
diff --git a/lib/librte_power/rte_power_version.map b/lib/librte_power/rte_power_version.map
index 042917360e..a94ab30c3d 100644
--- a/lib/librte_power/rte_power_version.map
+++ b/lib/librte_power/rte_power_version.map
@@ -1,39 +1,27 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_power_exit;
+ rte_power_freq_disable_turbo;
rte_power_freq_down;
+ rte_power_freq_enable_turbo;
rte_power_freq_max;
rte_power_freq_min;
rte_power_freq_up;
rte_power_freqs;
+ rte_power_get_capabilities;
rte_power_get_env;
rte_power_get_freq;
+ rte_power_guest_channel_send_msg;
rte_power_init;
rte_power_set_env;
rte_power_set_freq;
+ rte_power_turbo_status;
rte_power_unset_env;
local: *;
};
-DPDK_17.11 {
- global:
-
- rte_power_guest_channel_send_msg;
- rte_power_freq_disable_turbo;
- rte_power_freq_enable_turbo;
- rte_power_turbo_status;
-
-} DPDK_2.0;
-
-DPDK_18.08 {
- global:
-
- rte_power_get_capabilities;
-
-} DPDK_17.11;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_rawdev/rte_rawdev_version.map b/lib/librte_rawdev/rte_rawdev_version.map
index b61dbff11c..d847c9e0d3 100644
--- a/lib/librte_rawdev/rte_rawdev_version.map
+++ b/lib/librte_rawdev/rte_rawdev_version.map
@@ -1,4 +1,4 @@
-DPDK_18.08 {
+DPDK_20.0 {
global:
rte_rawdev_close;
@@ -17,8 +17,8 @@ DPDK_18.08 {
rte_rawdev_pmd_release;
rte_rawdev_queue_conf_get;
rte_rawdev_queue_count;
- rte_rawdev_queue_setup;
rte_rawdev_queue_release;
+ rte_rawdev_queue_setup;
rte_rawdev_reset;
rte_rawdev_selftest;
rte_rawdev_set_attr;
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
index f8b9ef2abb..787e51ef27 100644
--- a/lib/librte_rcu/rte_rcu_version.map
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_reorder/rte_reorder_version.map b/lib/librte_reorder/rte_reorder_version.map
index 0a8a54de83..cf444062df 100644
--- a/lib/librte_reorder/rte_reorder_version.map
+++ b/lib/librte_reorder/rte_reorder_version.map
@@ -1,13 +1,13 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_reorder_create;
- rte_reorder_init;
+ rte_reorder_drain;
rte_reorder_find_existing;
- rte_reorder_reset;
rte_reorder_free;
+ rte_reorder_init;
rte_reorder_insert;
- rte_reorder_drain;
+ rte_reorder_reset;
local: *;
};
diff --git a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map
index 510c1386e0..89d84bcf48 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -1,8 +1,9 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_ring_create;
rte_ring_dump;
+ rte_ring_free;
rte_ring_get_memsize;
rte_ring_init;
rte_ring_list_dump;
@@ -11,13 +12,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_2.2 {
- global:
-
- rte_ring_free;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_sched/rte_sched_version.map b/lib/librte_sched/rte_sched_version.map
index 729588794e..1b48bfbf36 100644
--- a/lib/librte_sched/rte_sched_version.map
+++ b/lib/librte_sched/rte_sched_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_approx;
@@ -14,6 +14,9 @@ DPDK_2.0 {
rte_sched_port_enqueue;
rte_sched_port_free;
rte_sched_port_get_memory_footprint;
+ rte_sched_port_pkt_read_color;
+ rte_sched_port_pkt_read_tree_path;
+ rte_sched_port_pkt_write;
rte_sched_queue_read_stats;
rte_sched_subport_config;
rte_sched_subport_read_stats;
@@ -21,15 +24,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_2.1 {
- global:
-
- rte_sched_port_pkt_write;
- rte_sched_port_pkt_read_tree_path;
- rte_sched_port_pkt_read_color;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
index 53267bf3cc..b07314bbf4 100644
--- a/lib/librte_security/rte_security_version.map
+++ b/lib/librte_security/rte_security_version.map
@@ -1,4 +1,4 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
rte_security_attach_session;
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
index 6662679c36..adbb7be9d9 100644
--- a/lib/librte_stack/rte_stack_version.map
+++ b/lib/librte_stack/rte_stack_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_table/rte_table_version.map b/lib/librte_table/rte_table_version.map
index 6237252bec..40f72b1fe8 100644
--- a/lib/librte_table/rte_table_version.map
+++ b/lib/librte_table/rte_table_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_table_acl_ops;
diff --git a/lib/librte_telemetry/rte_telemetry_version.map b/lib/librte_telemetry/rte_telemetry_version.map
index fa62d7718c..c1f4613af5 100644
--- a/lib/librte_telemetry/rte_telemetry_version.map
+++ b/lib/librte_telemetry/rte_telemetry_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_timer/rte_timer_version.map b/lib/librte_timer/rte_timer_version.map
index 72f75c8181..2a59d3f081 100644
--- a/lib/librte_timer/rte_timer_version.map
+++ b/lib/librte_timer/rte_timer_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_timer_dump_stats;
@@ -14,16 +14,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_19.05 {
- global:
-
- rte_timer_dump_stats;
- rte_timer_manage;
- rte_timer_reset;
- rte_timer_stop;
- rte_timer_subsystem_init;
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 5f1d4a75c2..8e9ffac2c2 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -1,64 +1,34 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_vhost_avail_entries;
rte_vhost_dequeue_burst;
rte_vhost_driver_callback_register;
- rte_vhost_driver_register;
- rte_vhost_enable_guest_notification;
- rte_vhost_enqueue_burst;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_vhost_driver_unregister;
-
-} DPDK_2.0;
-
-DPDK_16.07 {
- global:
-
- rte_vhost_avail_entries;
- rte_vhost_get_ifname;
- rte_vhost_get_numa_node;
- rte_vhost_get_queue_num;
-
-} DPDK_2.1;
-
-DPDK_17.05 {
- global:
-
rte_vhost_driver_disable_features;
rte_vhost_driver_enable_features;
rte_vhost_driver_get_features;
+ rte_vhost_driver_register;
rte_vhost_driver_set_features;
rte_vhost_driver_start;
+ rte_vhost_driver_unregister;
+ rte_vhost_enable_guest_notification;
+ rte_vhost_enqueue_burst;
+ rte_vhost_get_ifname;
rte_vhost_get_mem_table;
rte_vhost_get_mtu;
rte_vhost_get_negotiated_features;
+ rte_vhost_get_numa_node;
+ rte_vhost_get_queue_num;
rte_vhost_get_vhost_vring;
rte_vhost_get_vring_num;
rte_vhost_gpa_to_vva;
rte_vhost_log_used_vring;
rte_vhost_log_write;
-
-} DPDK_16.07;
-
-DPDK_17.08 {
- global:
-
rte_vhost_rx_queue_count;
-
-} DPDK_17.05;
-
-DPDK_18.02 {
- global:
-
rte_vhost_vring_call;
-} DPDK_17.08;
+ local: *;
+};
EXPERIMENTAL {
global:
--
2.17.1
* [dpdk-dev] [PATCH v2 09/10] build: change ABI version to 20.0
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev
Cc: Pawel Modrak, Nicolas Chautru, Hemant Agrawal, Sachin Saxena,
Rosen Xu, Stephen Hemminger, Anoob Joseph, Tomasz Duszynski,
Liron Himi, Jerin Jacob, Nithin Dabilpuram, Vamsi Attunuru,
Lee Daly, Fiona Trahe, Ashish Gupta, Sunila Sahu, Declan Doherty,
Pablo de Lara, Gagandeep Singh, Ravi Kumar, Akhil Goyal,
Michael Shamis, Nagadheeraj Rottela, Srikanth Jampala, Fan Zhang,
Jay Zhou, Nipun Gupta, Mattias Rönnblom, Pavan Nikhilesh,
Liang Ma, Peter Mccarthy, Harry van Haaren, Artem V. Andreev,
Andrew Rybchenko, Olivier Matz, Gage Eads, John W. Linville,
Xiaolong Ye, Qi Zhang, Shepard Siegel, Ed Czeck, John Miller,
Igor Russkikh, Pavel Belous, Allain Legacy, Matt Peters,
Rasesh Mody, Shahed Shaikh, Ajit Khaparde, Somnath Kotur,
Chas Williams, Rahul Lakkireddy, Wenzhuo Lu, Marcin Wojtas,
Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin, Igor Chauskin,
John Daley, Hyong Youb Kim, Gaetan Rivet, Xiao Wang, Ziyang Xuan,
Xiaoyun Wang, Guoyang Zhou, Wei Hu (Xavier), Min Hu (Connor),
Yisen Zhuang, Beilei Xing, Jingjing Wu, Qiming Yang,
Konstantin Ananyev, Ferruh Yigit, Shijith Thotton,
Srisivasubramanian Srinivasan, Jakub Grajciar, Matan Azrad,
Shahaf Shuler, Viacheslav Ovsiienko, Zyta Szpak,
K. Y. Srinivasan, Haiyang Zhang, Rastislav Cernay, Jan Remes,
Alejandro Lucero, Tetsuya Mukawa, Kiran Kumar K,
Bruce Richardson, Jasvinder Singh, Cristian Dumitrescu,
Keith Wiles, Maciej Czekaj, Maxime Coquelin, Tiwei Bie,
Zhihong Wang, Yong Wang, Tianfei zhang, Xiaoyun Li, Satha Rao,
Shreyansh Jain, David Hunt, Byron Marohn, Yipeng Wang,
Thomas Monjalon, Bernard Iremonger, Jiayu Hu, Sameh Gobriel,
Reshma Pattan, Vladimir Medvedkin, Honnappa Nagarahalli,
Kevin Laatz, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, david.marchand
From: Pawel Modrak <pawelx.modrak@intel.com>
Merge all versions in linker version script files to DPDK_20.0.
This commit was generated by running the following command:
:~/DPDK$ buildtools/update-abi.sh 20.0
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Regenerated this patch using the new script
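For readers unfamiliar with the file format being rewritten below: each map file uses the GNU ld version-script syntax, and this patch collapses the chain of per-release nodes (DPDK_2.0, DPDK_17.11, ...) into a single DPDK_20.0 node. A minimal sketch of the resulting shape (the symbol name is a hypothetical placeholder, not taken from this patch):

```
DPDK_20.0 {
	global:

	rte_example_symbol;

	local: *;
};
```

The `local: *;` catch-all keeps every symbol not listed under `global:` internal to the shared object, which is what allows unlisted symbols to be hidden from the ABI.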
.../rte_pmd_bbdev_fpga_lte_fec_version.map | 8 +-
.../null/rte_pmd_bbdev_null_version.map | 2 +-
.../rte_pmd_bbdev_turbo_sw_version.map | 2 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 115 +++----
drivers/bus/fslmc/rte_bus_fslmc_version.map | 154 ++++-----
drivers/bus/ifpga/rte_bus_ifpga_version.map | 14 +-
drivers/bus/pci/rte_bus_pci_version.map | 2 +-
drivers/bus/vdev/rte_bus_vdev_version.map | 12 +-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 12 +-
drivers/common/cpt/rte_common_cpt_version.map | 4 +-
.../common/dpaax/rte_common_dpaax_version.map | 4 +-
.../common/mvep/rte_common_mvep_version.map | 6 +-
.../octeontx/rte_common_octeontx_version.map | 6 +-
.../rte_common_octeontx2_version.map | 16 +-
.../compress/isal/rte_pmd_isal_version.map | 2 +-
.../rte_pmd_octeontx_compress_version.map | 2 +-
drivers/compress/qat/rte_pmd_qat_version.map | 2 +-
.../compress/zlib/rte_pmd_zlib_version.map | 2 +-
.../aesni_gcm/rte_pmd_aesni_gcm_version.map | 2 +-
.../aesni_mb/rte_pmd_aesni_mb_version.map | 2 +-
.../crypto/armv8/rte_pmd_armv8_version.map | 2 +-
.../caam_jr/rte_pmd_caam_jr_version.map | 3 +-
drivers/crypto/ccp/rte_pmd_ccp_version.map | 3 +-
.../dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 10 +-
.../dpaa_sec/rte_pmd_dpaa_sec_version.map | 10 +-
.../crypto/kasumi/rte_pmd_kasumi_version.map | 2 +-
.../crypto/mvsam/rte_pmd_mvsam_version.map | 2 +-
.../crypto/nitrox/rte_pmd_nitrox_version.map | 2 +-
.../null/rte_pmd_null_crypto_version.map | 2 +-
.../rte_pmd_octeontx_crypto_version.map | 3 +-
.../openssl/rte_pmd_openssl_version.map | 2 +-
.../rte_pmd_crypto_scheduler_version.map | 19 +-
.../crypto/snow3g/rte_pmd_snow3g_version.map | 2 +-
.../virtio/rte_pmd_virtio_crypto_version.map | 2 +-
drivers/crypto/zuc/rte_pmd_zuc_version.map | 2 +-
.../event/dpaa/rte_pmd_dpaa_event_version.map | 3 +-
.../dpaa2/rte_pmd_dpaa2_event_version.map | 2 +-
.../event/dsw/rte_pmd_dsw_event_version.map | 2 +-
.../rte_pmd_octeontx_event_version.map | 2 +-
.../rte_pmd_octeontx2_event_version.map | 3 +-
.../event/opdl/rte_pmd_opdl_event_version.map | 2 +-
.../rte_pmd_skeleton_event_version.map | 3 +-
drivers/event/sw/rte_pmd_sw_event_version.map | 2 +-
.../bucket/rte_mempool_bucket_version.map | 3 +-
.../mempool/dpaa/rte_mempool_dpaa_version.map | 2 +-
.../dpaa2/rte_mempool_dpaa2_version.map | 12 +-
.../octeontx/rte_mempool_octeontx_version.map | 2 +-
.../rte_mempool_octeontx2_version.map | 4 +-
.../mempool/ring/rte_mempool_ring_version.map | 3 +-
.../stack/rte_mempool_stack_version.map | 3 +-
.../af_packet/rte_pmd_af_packet_version.map | 3 +-
drivers/net/af_xdp/rte_pmd_af_xdp_version.map | 2 +-
drivers/net/ark/rte_pmd_ark_version.map | 5 +-
.../net/atlantic/rte_pmd_atlantic_version.map | 4 +-
drivers/net/avp/rte_pmd_avp_version.map | 2 +-
drivers/net/axgbe/rte_pmd_axgbe_version.map | 2 +-
drivers/net/bnx2x/rte_pmd_bnx2x_version.map | 3 +-
drivers/net/bnxt/rte_pmd_bnxt_version.map | 4 +-
drivers/net/bonding/rte_pmd_bond_version.map | 47 +--
drivers/net/cxgbe/rte_pmd_cxgbe_version.map | 3 +-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 11 +-
drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +-
drivers/net/e1000/rte_pmd_e1000_version.map | 3 +-
drivers/net/ena/rte_pmd_ena_version.map | 3 +-
drivers/net/enetc/rte_pmd_enetc_version.map | 3 +-
drivers/net/enic/rte_pmd_enic_version.map | 3 +-
.../net/failsafe/rte_pmd_failsafe_version.map | 3 +-
drivers/net/fm10k/rte_pmd_fm10k_version.map | 3 +-
drivers/net/hinic/rte_pmd_hinic_version.map | 3 +-
drivers/net/hns3/rte_pmd_hns3_version.map | 4 +-
drivers/net/i40e/rte_pmd_i40e_version.map | 65 ++--
drivers/net/iavf/rte_pmd_iavf_version.map | 3 +-
drivers/net/ice/rte_pmd_ice_version.map | 3 +-
drivers/net/ifc/rte_pmd_ifc_version.map | 3 +-
drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 3 +-
drivers/net/ixgbe/rte_pmd_ixgbe_version.map | 62 ++--
drivers/net/kni/rte_pmd_kni_version.map | 3 +-
.../net/liquidio/rte_pmd_liquidio_version.map | 3 +-
drivers/net/memif/rte_pmd_memif_version.map | 5 +-
drivers/net/mlx4/rte_pmd_mlx4_version.map | 3 +-
drivers/net/mlx5/rte_pmd_mlx5_version.map | 2 +-
drivers/net/mvneta/rte_pmd_mvneta_version.map | 2 +-
drivers/net/mvpp2/rte_pmd_mvpp2_version.map | 2 +-
drivers/net/netvsc/rte_pmd_netvsc_version.map | 4 +-
drivers/net/nfb/rte_pmd_nfb_version.map | 3 +-
drivers/net/nfp/rte_pmd_nfp_version.map | 2 +-
drivers/net/null/rte_pmd_null_version.map | 3 +-
.../net/octeontx/rte_pmd_octeontx_version.map | 10 +-
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +-
drivers/net/pcap/rte_pmd_pcap_version.map | 3 +-
drivers/net/qede/rte_pmd_qede_version.map | 3 +-
drivers/net/ring/rte_pmd_ring_version.map | 10 +-
drivers/net/sfc/rte_pmd_sfc_version.map | 3 +-
.../net/softnic/rte_pmd_softnic_version.map | 2 +-
.../net/szedata2/rte_pmd_szedata2_version.map | 2 +-
drivers/net/tap/rte_pmd_tap_version.map | 3 +-
.../net/thunderx/rte_pmd_thunderx_version.map | 3 +-
.../rte_pmd_vdev_netvsc_version.map | 3 +-
drivers/net/vhost/rte_pmd_vhost_version.map | 11 +-
drivers/net/virtio/rte_pmd_virtio_version.map | 3 +-
.../net/vmxnet3/rte_pmd_vmxnet3_version.map | 3 +-
.../rte_rawdev_dpaa2_cmdif_version.map | 3 +-
.../rte_rawdev_dpaa2_qdma_version.map | 4 +-
.../raw/ifpga/rte_rawdev_ifpga_version.map | 3 +-
drivers/raw/ioat/rte_rawdev_ioat_version.map | 3 +-
drivers/raw/ntb/rte_rawdev_ntb_version.map | 5 +-
.../rte_rawdev_octeontx2_dma_version.map | 3 +-
.../skeleton/rte_rawdev_skeleton_version.map | 3 +-
lib/librte_acl/rte_acl_version.map | 2 +-
lib/librte_bbdev/rte_bbdev_version.map | 4 +
.../rte_bitratestats_version.map | 2 +-
lib/librte_bpf/rte_bpf_version.map | 4 +
lib/librte_cfgfile/rte_cfgfile_version.map | 34 +-
lib/librte_cmdline/rte_cmdline_version.map | 10 +-
.../rte_compressdev_version.map | 4 +
.../rte_cryptodev_version.map | 102 ++----
.../rte_distributor_version.map | 16 +-
lib/librte_eal/rte_eal_version.map | 310 +++++++-----------
lib/librte_efd/rte_efd_version.map | 2 +-
lib/librte_ethdev/rte_ethdev_version.map | 160 +++------
lib/librte_eventdev/rte_eventdev_version.map | 130 +++-----
.../rte_flow_classify_version.map | 4 +
lib/librte_gro/rte_gro_version.map | 2 +-
lib/librte_gso/rte_gso_version.map | 2 +-
lib/librte_hash/rte_hash_version.map | 43 +--
lib/librte_ip_frag/rte_ip_frag_version.map | 10 +-
lib/librte_ipsec/rte_ipsec_version.map | 4 +
lib/librte_jobstats/rte_jobstats_version.map | 10 +-
lib/librte_kni/rte_kni_version.map | 2 +-
lib/librte_kvargs/rte_kvargs_version.map | 4 +-
.../rte_latencystats_version.map | 2 +-
lib/librte_lpm/rte_lpm_version.map | 39 +--
lib/librte_mbuf/rte_mbuf_version.map | 41 +--
lib/librte_member/rte_member_version.map | 2 +-
lib/librte_mempool/rte_mempool_version.map | 44 +--
lib/librte_meter/rte_meter_version.map | 13 +-
lib/librte_metrics/rte_metrics_version.map | 2 +-
lib/librte_net/rte_net_version.map | 23 +-
lib/librte_pci/rte_pci_version.map | 2 +-
lib/librte_pdump/rte_pdump_version.map | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 36 +-
lib/librte_port/rte_port_version.map | 64 +---
lib/librte_power/rte_power_version.map | 24 +-
lib/librte_rawdev/rte_rawdev_version.map | 4 +-
lib/librte_rcu/rte_rcu_version.map | 4 +
lib/librte_reorder/rte_reorder_version.map | 8 +-
lib/librte_ring/rte_ring_version.map | 10 +-
lib/librte_sched/rte_sched_version.map | 14 +-
lib/librte_security/rte_security_version.map | 2 +-
lib/librte_stack/rte_stack_version.map | 4 +
lib/librte_table/rte_table_version.map | 2 +-
.../rte_telemetry_version.map | 4 +
lib/librte_timer/rte_timer_version.map | 12 +-
lib/librte_vhost/rte_vhost_version.map | 52 +--
154 files changed, 721 insertions(+), 1413 deletions(-)
diff --git a/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map b/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
index f64b0f9c27..6bcea2cc7f 100644
--- a/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
+++ b/drivers/baseband/fpga_lte_fec/rte_pmd_bbdev_fpga_lte_fec_version.map
@@ -1,10 +1,10 @@
-DPDK_19.08 {
- local: *;
+DPDK_20.0 {
+ local: *;
};
EXPERIMENTAL {
- global:
+ global:
- fpga_lte_fec_configure;
+ fpga_lte_fec_configure;
};
diff --git a/drivers/baseband/null/rte_pmd_bbdev_null_version.map b/drivers/baseband/null/rte_pmd_bbdev_null_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/baseband/null/rte_pmd_bbdev_null_version.map
+++ b/drivers/baseband/null/rte_pmd_bbdev_null_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map b/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
+++ b/drivers/baseband/turbo_sw/rte_pmd_bbdev_turbo_sw_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index a221522c23..9ab8c76eef 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
bman_acquire;
@@ -8,127 +8,94 @@ DPDK_17.11 {
bman_new_pool;
bman_query_free_buffers;
bman_release;
+ bman_thread_irq;
+ dpaa_logtype_eventdev;
dpaa_logtype_mempool;
dpaa_logtype_pmd;
dpaa_netcfg;
+ dpaa_svr_family;
fman_ccsr_map_fd;
fman_dealloc_bufs_mask_hi;
fman_dealloc_bufs_mask_lo;
fman_if_add_mac_addr;
fman_if_clear_mac_addr;
fman_if_disable_rx;
- fman_if_enable_rx;
fman_if_discard_rx_errors;
- fman_if_get_fc_threshold;
+ fman_if_enable_rx;
fman_if_get_fc_quanta;
+ fman_if_get_fc_threshold;
fman_if_get_fdoff;
+ fman_if_get_sg_enable;
fman_if_loopback_disable;
fman_if_loopback_enable;
fman_if_promiscuous_disable;
fman_if_promiscuous_enable;
fman_if_reset_mcast_filter_table;
fman_if_set_bp;
- fman_if_set_fc_threshold;
fman_if_set_fc_quanta;
+ fman_if_set_fc_threshold;
fman_if_set_fdoff;
fman_if_set_ic_params;
fman_if_set_maxfrm;
fman_if_set_mcast_filter_table;
+ fman_if_set_sg;
fman_if_stats_get;
fman_if_stats_get_all;
fman_if_stats_reset;
fman_ip_rev;
+ fsl_qman_fq_portal_create;
netcfg_acquire;
netcfg_release;
of_find_compatible_node;
+ of_get_mac_address;
of_get_property;
+ per_lcore_dpaa_io;
+ per_lcore_held_bufs;
qm_channel_caam;
+ qm_channel_pool1;
+ qman_alloc_cgrid_range;
+ qman_alloc_pool_range;
+ qman_clear_irq;
+ qman_create_cgr;
qman_create_fq;
+ qman_dca_index;
+ qman_delete_cgr;
qman_dequeue;
qman_dqrr_consume;
qman_enqueue;
qman_enqueue_multi;
+ qman_enqueue_multi_fq;
qman_fq_fqid;
+ qman_fq_portal_irqsource_add;
+ qman_fq_portal_irqsource_remove;
+ qman_fq_portal_thread_irq;
qman_fq_state;
qman_global_init;
qman_init_fq;
- qman_poll_dqrr;
- qman_query_fq_np;
- qman_set_vdq;
- qman_reserve_fqid_range;
- qman_volatile_dequeue;
- rte_dpaa_driver_register;
- rte_dpaa_driver_unregister;
- rte_dpaa_mem_ptov;
- rte_dpaa_portal_init;
-
- local: *;
-};
-
-DPDK_18.02 {
- global:
-
- dpaa_logtype_eventdev;
- dpaa_svr_family;
- per_lcore_dpaa_io;
- per_lcore_held_bufs;
- qm_channel_pool1;
- qman_alloc_cgrid_range;
- qman_alloc_pool_range;
- qman_create_cgr;
- qman_dca_index;
- qman_delete_cgr;
- qman_enqueue_multi_fq;
+ qman_irqsource_add;
+ qman_irqsource_remove;
qman_modify_cgr;
qman_oos_fq;
+ qman_poll_dqrr;
qman_portal_dequeue;
qman_portal_poll_rx;
qman_query_fq_frm_cnt;
+ qman_query_fq_np;
qman_release_cgrid_range;
+ qman_reserve_fqid_range;
qman_retire_fq;
+ qman_set_fq_lookup_table;
+ qman_set_vdq;
qman_static_dequeue_add;
- rte_dpaa_portal_fq_close;
- rte_dpaa_portal_fq_init;
-
-} DPDK_17.11;
-
-DPDK_18.08 {
- global:
-
- fman_if_get_sg_enable;
- fman_if_set_sg;
- of_get_mac_address;
-
-} DPDK_18.02;
-
-DPDK_18.11 {
- global:
-
- bman_thread_irq;
- fman_if_get_sg_enable;
- fman_if_set_sg;
- qman_clear_irq;
-
- qman_irqsource_add;
- qman_irqsource_remove;
qman_thread_fd;
qman_thread_irq;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- qman_set_fq_lookup_table;
-
-} DPDK_18.11;
-
-DPDK_19.11 {
- global:
-
- fsl_qman_fq_portal_create;
- qman_fq_portal_irqsource_add;
- qman_fq_portal_irqsource_remove;
- qman_fq_portal_thread_irq;
-
-} DPDK_19.05;
+ qman_volatile_dequeue;
+ rte_dpaa_driver_register;
+ rte_dpaa_driver_unregister;
+ rte_dpaa_mem_ptov;
+ rte_dpaa_portal_fq_close;
+ rte_dpaa_portal_fq_init;
+ rte_dpaa_portal_init;
+
+ local: *;
+};
diff --git a/drivers/bus/fslmc/rte_bus_fslmc_version.map b/drivers/bus/fslmc/rte_bus_fslmc_version.map
index 4da787236b..fe45575046 100644
--- a/drivers/bus/fslmc/rte_bus_fslmc_version.map
+++ b/drivers/bus/fslmc/rte_bus_fslmc_version.map
@@ -1,32 +1,67 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
+ dpaa2_affine_qbman_ethrx_swp;
dpaa2_affine_qbman_swp;
dpaa2_alloc_dpbp_dev;
dpaa2_alloc_dq_storage;
+ dpaa2_dpbp_supported;
+ dpaa2_dqrr_size;
+ dpaa2_eqcr_size;
dpaa2_free_dpbp_dev;
dpaa2_free_dq_storage;
+ dpaa2_free_eq_descriptors;
+ dpaa2_get_qbman_swp;
+ dpaa2_io_portal;
+ dpaa2_svr_family;
+ dpaa2_virt_mode;
dpbp_disable;
dpbp_enable;
dpbp_get_attributes;
dpbp_get_num_free_bufs;
dpbp_open;
dpbp_reset;
+ dpci_get_opr;
+ dpci_set_opr;
+ dpci_set_rx_queue;
+ dpcon_get_attributes;
+ dpcon_open;
+ dpdmai_close;
+ dpdmai_disable;
+ dpdmai_enable;
+ dpdmai_get_attributes;
+ dpdmai_get_rx_queue;
+ dpdmai_get_tx_queue;
+ dpdmai_open;
+ dpdmai_set_rx_queue;
+ dpio_add_static_dequeue_channel;
dpio_close;
dpio_disable;
dpio_enable;
dpio_get_attributes;
dpio_open;
+ dpio_remove_static_dequeue_channel;
dpio_reset;
dpio_set_stashing_destination;
+ mc_get_soc_version;
+ mc_get_version;
mc_send_command;
per_lcore__dpaa2_io;
+ per_lcore_dpaa2_held_bufs;
qbman_check_command_complete;
+ qbman_check_new_result;
qbman_eq_desc_clear;
+ qbman_eq_desc_set_dca;
qbman_eq_desc_set_fq;
qbman_eq_desc_set_no_orp;
+ qbman_eq_desc_set_orp;
qbman_eq_desc_set_qd;
qbman_eq_desc_set_response;
+ qbman_eq_desc_set_token;
+ qbman_fq_query_state;
+ qbman_fq_state_frame_count;
+ qbman_get_dqrr_from_idx;
+ qbman_get_dqrr_idx;
qbman_pull_desc_clear;
qbman_pull_desc_set_fq;
qbman_pull_desc_set_numframes;
@@ -35,112 +70,43 @@ DPDK_17.05 {
qbman_release_desc_set_bpid;
qbman_result_DQ_fd;
qbman_result_DQ_flags;
- qbman_result_has_new_result;
- qbman_swp_acquire;
- qbman_swp_pull;
- qbman_swp_release;
- rte_fslmc_driver_register;
- rte_fslmc_driver_unregister;
- rte_fslmc_vfio_dmamap;
- rte_mcp_ptr_list;
-
- local: *;
-};
-
-DPDK_17.08 {
- global:
-
- dpaa2_io_portal;
- dpaa2_get_qbman_swp;
- dpci_set_rx_queue;
- dpcon_open;
- dpcon_get_attributes;
- dpio_add_static_dequeue_channel;
- dpio_remove_static_dequeue_channel;
- mc_get_soc_version;
- mc_get_version;
- qbman_check_new_result;
- qbman_eq_desc_set_dca;
- qbman_get_dqrr_from_idx;
- qbman_get_dqrr_idx;
qbman_result_DQ_fqd_ctx;
+ qbman_result_DQ_odpid;
+ qbman_result_DQ_seqnum;
qbman_result_SCN_state;
+ qbman_result_eqresp_fd;
+ qbman_result_eqresp_rc;
+ qbman_result_eqresp_rspid;
+ qbman_result_eqresp_set_rspid;
+ qbman_result_has_new_result;
+ qbman_swp_acquire;
qbman_swp_dqrr_consume;
+ qbman_swp_dqrr_idx_consume;
qbman_swp_dqrr_next;
qbman_swp_enqueue_multiple;
qbman_swp_enqueue_multiple_desc;
+ qbman_swp_enqueue_multiple_fd;
qbman_swp_interrupt_clear_status;
+ qbman_swp_prefetch_dqrr_next;
+ qbman_swp_pull;
qbman_swp_push_set;
+ qbman_swp_release;
rte_dpaa2_alloc_dpci_dev;
- rte_fslmc_object_register;
- rte_global_active_dqs_list;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- dpaa2_dpbp_supported;
rte_dpaa2_dev_type;
+ rte_dpaa2_free_dpci_dev;
rte_dpaa2_intr_disable;
rte_dpaa2_intr_enable;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- dpaa2_svr_family;
- dpaa2_virt_mode;
- per_lcore_dpaa2_held_bufs;
- qbman_fq_query_state;
- qbman_fq_state_frame_count;
- qbman_swp_dqrr_idx_consume;
- qbman_swp_prefetch_dqrr_next;
- rte_fslmc_get_device_count;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- dpaa2_affine_qbman_ethrx_swp;
- dpdmai_close;
- dpdmai_disable;
- dpdmai_enable;
- dpdmai_get_attributes;
- dpdmai_get_rx_queue;
- dpdmai_get_tx_queue;
- dpdmai_open;
- dpdmai_set_rx_queue;
- rte_dpaa2_free_dpci_dev;
rte_dpaa2_memsegs;
-
-} DPDK_18.02;
-
-DPDK_18.11 {
- global:
- dpaa2_dqrr_size;
- dpaa2_eqcr_size;
- dpci_get_opr;
- dpci_set_opr;
-
-} DPDK_18.05;
-
-DPDK_19.05 {
- global:
- dpaa2_free_eq_descriptors;
-
- qbman_eq_desc_set_orp;
- qbman_eq_desc_set_token;
- qbman_result_DQ_odpid;
- qbman_result_DQ_seqnum;
- qbman_result_eqresp_fd;
- qbman_result_eqresp_rc;
- qbman_result_eqresp_rspid;
- qbman_result_eqresp_set_rspid;
- qbman_swp_enqueue_multiple_fd;
-} DPDK_18.11;
+ rte_fslmc_driver_register;
+ rte_fslmc_driver_unregister;
+ rte_fslmc_get_device_count;
+ rte_fslmc_object_register;
+ rte_fslmc_vfio_dmamap;
+ rte_global_active_dqs_list;
+ rte_mcp_ptr_list;
+
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/drivers/bus/ifpga/rte_bus_ifpga_version.map b/drivers/bus/ifpga/rte_bus_ifpga_version.map
index 964c9a9c45..05b4a28c1b 100644
--- a/drivers/bus/ifpga/rte_bus_ifpga_version.map
+++ b/drivers/bus/ifpga/rte_bus_ifpga_version.map
@@ -1,17 +1,11 @@
-DPDK_18.05 {
+DPDK_20.0 {
global:
- rte_ifpga_get_integer32_arg;
- rte_ifpga_get_string_arg;
rte_ifpga_driver_register;
rte_ifpga_driver_unregister;
+ rte_ifpga_find_afu_by_name;
+ rte_ifpga_get_integer32_arg;
+ rte_ifpga_get_string_arg;
local: *;
};
-
-DPDK_19.05 {
- global:
-
- rte_ifpga_find_afu_by_name;
-
-} DPDK_18.05;
diff --git a/drivers/bus/pci/rte_bus_pci_version.map b/drivers/bus/pci/rte_bus_pci_version.map
index 27e9c4f101..012d817e14 100644
--- a/drivers/bus/pci/rte_bus_pci_version.map
+++ b/drivers/bus/pci/rte_bus_pci_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_pci_dump;
diff --git a/drivers/bus/vdev/rte_bus_vdev_version.map b/drivers/bus/vdev/rte_bus_vdev_version.map
index 590cf9b437..5abb10ecb0 100644
--- a/drivers/bus/vdev/rte_bus_vdev_version.map
+++ b/drivers/bus/vdev/rte_bus_vdev_version.map
@@ -1,18 +1,12 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
+ rte_vdev_add_custom_scan;
rte_vdev_init;
rte_vdev_register;
+ rte_vdev_remove_custom_scan;
rte_vdev_uninit;
rte_vdev_unregister;
local: *;
};
-
-DPDK_18.02 {
- global:
-
- rte_vdev_add_custom_scan;
- rte_vdev_remove_custom_scan;
-
-} DPDK_17.11;
diff --git a/drivers/bus/vmbus/rte_bus_vmbus_version.map b/drivers/bus/vmbus/rte_bus_vmbus_version.map
index ae231ad329..cbaaebc06c 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus_version.map
+++ b/drivers/bus/vmbus/rte_bus_vmbus_version.map
@@ -1,6 +1,4 @@
-/* SPDX-License-Identifier: BSD-3-Clause */
-
-DPDK_18.08 {
+DPDK_20.0 {
global:
rte_vmbus_chan_close;
@@ -20,6 +18,7 @@ DPDK_18.08 {
rte_vmbus_probe;
rte_vmbus_register;
rte_vmbus_scan;
+ rte_vmbus_set_latency;
rte_vmbus_sub_channel_index;
rte_vmbus_subchan_open;
rte_vmbus_unmap_device;
@@ -27,10 +26,3 @@ DPDK_18.08 {
local: *;
};
-
-DPDK_18.11 {
- global:
-
- rte_vmbus_set_latency;
-
-} DPDK_18.08;
diff --git a/drivers/common/cpt/rte_common_cpt_version.map b/drivers/common/cpt/rte_common_cpt_version.map
index dec614f0de..79fa5751bc 100644
--- a/drivers/common/cpt/rte_common_cpt_version.map
+++ b/drivers/common/cpt/rte_common_cpt_version.map
@@ -1,6 +1,8 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
cpt_pmd_ops_helper_get_mlen_direct_mode;
cpt_pmd_ops_helper_get_mlen_sg_mode;
+
+ local: *;
};
diff --git a/drivers/common/dpaax/rte_common_dpaax_version.map b/drivers/common/dpaax/rte_common_dpaax_version.map
index 8131c9e305..45d62aea9d 100644
--- a/drivers/common/dpaax/rte_common_dpaax_version.map
+++ b/drivers/common/dpaax/rte_common_dpaax_version.map
@@ -1,11 +1,11 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
- dpaax_iova_table_update;
dpaax_iova_table_depopulate;
dpaax_iova_table_dump;
dpaax_iova_table_p;
dpaax_iova_table_populate;
+ dpaax_iova_table_update;
local: *;
};
diff --git a/drivers/common/mvep/rte_common_mvep_version.map b/drivers/common/mvep/rte_common_mvep_version.map
index c71722d79f..030928439d 100644
--- a/drivers/common/mvep/rte_common_mvep_version.map
+++ b/drivers/common/mvep/rte_common_mvep_version.map
@@ -1,6 +1,8 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
- rte_mvep_init;
rte_mvep_deinit;
+ rte_mvep_init;
+
+ local: *;
};
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index a9b3cff9bc..c15fb89112 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,8 +1,10 @@
-DPDK_18.05 {
+DPDK_20.0 {
global:
octeontx_logtype_mbox;
+ octeontx_mbox_send;
octeontx_mbox_set_ram_mbox_base;
octeontx_mbox_set_reg;
- octeontx_mbox_send;
+
+ local: *;
};
diff --git a/drivers/common/octeontx2/rte_common_octeontx2_version.map b/drivers/common/octeontx2/rte_common_octeontx2_version.map
index 4400120da0..adad21a2d6 100644
--- a/drivers/common/octeontx2/rte_common_octeontx2_version.map
+++ b/drivers/common/octeontx2/rte_common_octeontx2_version.map
@@ -1,39 +1,35 @@
-DPDK_19.08 {
+DPDK_20.0 {
global:
otx2_dev_active_vfs;
otx2_dev_fini;
otx2_dev_priv_init;
-
+ otx2_disable_irqs;
+ otx2_intra_dev_get_cfg;
otx2_logtype_base;
otx2_logtype_dpi;
otx2_logtype_mbox;
+ otx2_logtype_nix;
otx2_logtype_npa;
otx2_logtype_npc;
- otx2_logtype_nix;
otx2_logtype_sso;
- otx2_logtype_tm;
otx2_logtype_tim;
-
+ otx2_logtype_tm;
otx2_mbox_alloc_msg_rsp;
otx2_mbox_get_rsp;
otx2_mbox_get_rsp_tmo;
otx2_mbox_id2name;
otx2_mbox_msg_send;
otx2_mbox_wait_for_rsp;
-
- otx2_intra_dev_get_cfg;
otx2_npa_lf_active;
otx2_npa_lf_obj_get;
otx2_npa_lf_obj_ref;
otx2_npa_pf_func_get;
otx2_npa_set_defaults;
+ otx2_register_irq;
otx2_sso_pf_func_get;
otx2_sso_pf_func_set;
-
- otx2_disable_irqs;
otx2_unregister_irq;
- otx2_register_irq;
local: *;
};
diff --git a/drivers/compress/isal/rte_pmd_isal_version.map b/drivers/compress/isal/rte_pmd_isal_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/compress/isal/rte_pmd_isal_version.map
+++ b/drivers/compress/isal/rte_pmd_isal_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map b/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
+++ b/drivers/compress/octeontx/rte_pmd_octeontx_compress_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/qat/rte_pmd_qat_version.map b/drivers/compress/qat/rte_pmd_qat_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/qat/rte_pmd_qat_version.map
+++ b/drivers/compress/qat/rte_pmd_qat_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/compress/zlib/rte_pmd_zlib_version.map b/drivers/compress/zlib/rte_pmd_zlib_version.map
index ad6e191e49..f9f17e4f6e 100644
--- a/drivers/compress/zlib/rte_pmd_zlib_version.map
+++ b/drivers/compress/zlib/rte_pmd_zlib_version.map
@@ -1,3 +1,3 @@
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map b/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
+++ b/drivers/crypto/aesni_gcm/rte_pmd_aesni_gcm_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map b/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
+++ b/drivers/crypto/aesni_mb/rte_pmd_aesni_mb_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/armv8/rte_pmd_armv8_version.map b/drivers/crypto/armv8/rte_pmd_armv8_version.map
index 1f84b68a83..f9f17e4f6e 100644
--- a/drivers/crypto/armv8/rte_pmd_armv8_version.map
+++ b/drivers/crypto/armv8/rte_pmd_armv8_version.map
@@ -1,3 +1,3 @@
-DPDK_17.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map b/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
+++ b/drivers/crypto/caam_jr/rte_pmd_caam_jr_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/ccp/rte_pmd_ccp_version.map b/drivers/crypto/ccp/rte_pmd_ccp_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/crypto/ccp/rte_pmd_ccp_version.map
+++ b/drivers/crypto/ccp/rte_pmd_ccp_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
index 0bfb986d0b..5952d645fd 100644
--- a/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
+++ b/drivers/crypto/dpaa2_sec/rte_pmd_dpaa2_sec_version.map
@@ -1,12 +1,8 @@
-DPDK_17.05 {
-
- local: *;
-};
-
-DPDK_18.11 {
+DPDK_20.0 {
global:
dpaa2_sec_eventq_attach;
dpaa2_sec_eventq_detach;
-} DPDK_17.05;
+ local: *;
+};
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index cc7f2162e0..8580fa13db 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -1,12 +1,8 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_19.11 {
+DPDK_20.0 {
global:
dpaa_sec_eventq_attach;
dpaa_sec_eventq_detach;
-} DPDK_17.11;
+ local: *;
+};
diff --git a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
index 8ffeca934e..f9f17e4f6e 100644
--- a/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
+++ b/drivers/crypto/kasumi/rte_pmd_kasumi_version.map
@@ -1,3 +1,3 @@
-DPDK_16.07 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/mvsam/rte_pmd_mvsam_version.map b/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
+++ b/drivers/crypto/mvsam/rte_pmd_mvsam_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/nitrox/rte_pmd_nitrox_version.map b/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
index 406964d1fc..f9f17e4f6e 100644
--- a/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
+++ b/drivers/crypto/nitrox/rte_pmd_nitrox_version.map
@@ -1,3 +1,3 @@
-DPDK_19.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/null/rte_pmd_null_crypto_version.map b/drivers/crypto/null/rte_pmd_null_crypto_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/null/rte_pmd_null_crypto_version.map
+++ b/drivers/crypto/null/rte_pmd_null_crypto_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map b/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
+++ b/drivers/crypto/octeontx/rte_pmd_octeontx_crypto_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/openssl/rte_pmd_openssl_version.map b/drivers/crypto/openssl/rte_pmd_openssl_version.map
index cc5829e30b..f9f17e4f6e 100644
--- a/drivers/crypto/openssl/rte_pmd_openssl_version.map
+++ b/drivers/crypto/openssl/rte_pmd_openssl_version.map
@@ -1,3 +1,3 @@
-DPDK_16.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
index 5c43127cf2..077afedce7 100644
--- a/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
+++ b/drivers/crypto/scheduler/rte_pmd_crypto_scheduler_version.map
@@ -1,21 +1,16 @@
-DPDK_17.02 {
+DPDK_20.0 {
global:
rte_cryptodev_scheduler_load_user_scheduler;
- rte_cryptodev_scheduler_slave_attach;
- rte_cryptodev_scheduler_slave_detach;
- rte_cryptodev_scheduler_ordering_set;
- rte_cryptodev_scheduler_ordering_get;
-
-};
-
-DPDK_17.05 {
- global:
-
rte_cryptodev_scheduler_mode_get;
rte_cryptodev_scheduler_mode_set;
rte_cryptodev_scheduler_option_get;
rte_cryptodev_scheduler_option_set;
+ rte_cryptodev_scheduler_ordering_get;
+ rte_cryptodev_scheduler_ordering_set;
+ rte_cryptodev_scheduler_slave_attach;
+ rte_cryptodev_scheduler_slave_detach;
rte_cryptodev_scheduler_slaves_get;
-} DPDK_17.02;
+ local: *;
+};
diff --git a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
index dc4d417b7b..f9f17e4f6e 100644
--- a/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
+++ b/drivers/crypto/snow3g/rte_pmd_snow3g_version.map
@@ -1,3 +1,3 @@
-DPDK_16.04 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map b/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
+++ b/drivers/crypto/virtio/rte_pmd_virtio_crypto_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/crypto/zuc/rte_pmd_zuc_version.map b/drivers/crypto/zuc/rte_pmd_zuc_version.map
index cc5829e30b..f9f17e4f6e 100644
--- a/drivers/crypto/zuc/rte_pmd_zuc_version.map
+++ b/drivers/crypto/zuc/rte_pmd_zuc_version.map
@@ -1,3 +1,3 @@
-DPDK_16.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
+++ b/drivers/event/dpaa/rte_pmd_dpaa_event_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map b/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
index 1c0b7559dc..f9f17e4f6e 100644
--- a/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
+++ b/drivers/event/dpaa2/rte_pmd_dpaa2_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/dsw/rte_pmd_dsw_event_version.map b/drivers/event/dsw/rte_pmd_dsw_event_version.map
index 24bd5cdb35..f9f17e4f6e 100644
--- a/drivers/event/dsw/rte_pmd_dsw_event_version.map
+++ b/drivers/event/dsw/rte_pmd_dsw_event_version.map
@@ -1,3 +1,3 @@
-DPDK_18.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/octeontx/rte_pmd_octeontx_event_version.map b/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
+++ b/drivers/event/octeontx/rte_pmd_octeontx_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
index 41c65c8c9c..f9f17e4f6e 100644
--- a/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
+++ b/drivers/event/octeontx2/rte_pmd_octeontx2_event_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
+DPDK_20.0 {
local: *;
};
-
diff --git a/drivers/event/opdl/rte_pmd_opdl_event_version.map b/drivers/event/opdl/rte_pmd_opdl_event_version.map
index 58b94270d4..f9f17e4f6e 100644
--- a/drivers/event/opdl/rte_pmd_opdl_event_version.map
+++ b/drivers/event/opdl/rte_pmd_opdl_event_version.map
@@ -1,3 +1,3 @@
-DPDK_18.02 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
+++ b/drivers/event/skeleton/rte_pmd_skeleton_event_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/event/sw/rte_pmd_sw_event_version.map b/drivers/event/sw/rte_pmd_sw_event_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/event/sw/rte_pmd_sw_event_version.map
+++ b/drivers/event/sw/rte_pmd_sw_event_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/bucket/rte_mempool_bucket_version.map b/drivers/mempool/bucket/rte_mempool_bucket_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/mempool/bucket/rte_mempool_bucket_version.map
+++ b/drivers/mempool/bucket/rte_mempool_bucket_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
index 60bf50b2d1..9eebaf7ffd 100644
--- a/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
+++ b/drivers/mempool/dpaa/rte_mempool_dpaa_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_dpaa_bpid_info;
diff --git a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
index b45e7a9ac1..cd4bc88273 100644
--- a/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
+++ b/drivers/mempool/dpaa2/rte_mempool_dpaa2_version.map
@@ -1,16 +1,10 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_dpaa2_bpid_info;
rte_dpaa2_mbuf_alloc_bulk;
-
- local: *;
-};
-
-DPDK_18.05 {
- global:
-
rte_dpaa2_mbuf_from_buf_addr;
rte_dpaa2_mbuf_pool_bpid;
-} DPDK_17.05;
+ local: *;
+};
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
index d703368c31..d4f81aed8e 100644
--- a/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
+++ b/drivers/mempool/octeontx2/rte_mempool_octeontx2_version.map
@@ -1,8 +1,8 @@
-DPDK_19.08 {
+DPDK_20.0 {
global:
- otx2_npa_lf_init;
otx2_npa_lf_fini;
+ otx2_npa_lf_init;
local: *;
};
diff --git a/drivers/mempool/ring/rte_mempool_ring_version.map b/drivers/mempool/ring/rte_mempool_ring_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/mempool/ring/rte_mempool_ring_version.map
+++ b/drivers/mempool/ring/rte_mempool_ring_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/mempool/stack/rte_mempool_stack_version.map b/drivers/mempool/stack/rte_mempool_stack_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/mempool/stack/rte_mempool_stack_version.map
+++ b/drivers/mempool/stack/rte_mempool_stack_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/af_packet/rte_pmd_af_packet_version.map b/drivers/net/af_packet/rte_pmd_af_packet_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/af_packet/rte_pmd_af_packet_version.map
+++ b/drivers/net/af_packet/rte_pmd_af_packet_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/af_xdp/rte_pmd_af_xdp_version.map b/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
index c6db030fe6..f9f17e4f6e 100644
--- a/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
+++ b/drivers/net/af_xdp/rte_pmd_af_xdp_version.map
@@ -1,3 +1,3 @@
-DPDK_19.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ark/rte_pmd_ark_version.map b/drivers/net/ark/rte_pmd_ark_version.map
index 1062e0429f..f9f17e4f6e 100644
--- a/drivers/net/ark/rte_pmd_ark_version.map
+++ b/drivers/net/ark/rte_pmd_ark_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
- local: *;
-
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/atlantic/rte_pmd_atlantic_version.map b/drivers/net/atlantic/rte_pmd_atlantic_version.map
index b16faa999f..9b04838d84 100644
--- a/drivers/net/atlantic/rte_pmd_atlantic_version.map
+++ b/drivers/net/atlantic/rte_pmd_atlantic_version.map
@@ -1,5 +1,4 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
@@ -13,4 +12,3 @@ EXPERIMENTAL {
rte_pmd_atl_macsec_select_txsa;
rte_pmd_atl_macsec_select_rxsa;
};
-
diff --git a/drivers/net/avp/rte_pmd_avp_version.map b/drivers/net/avp/rte_pmd_avp_version.map
index 5352e7e3bd..f9f17e4f6e 100644
--- a/drivers/net/avp/rte_pmd_avp_version.map
+++ b/drivers/net/avp/rte_pmd_avp_version.map
@@ -1,3 +1,3 @@
-DPDK_17.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/axgbe/rte_pmd_axgbe_version.map b/drivers/net/axgbe/rte_pmd_axgbe_version.map
index de8e412ff1..f9f17e4f6e 100644
--- a/drivers/net/axgbe/rte_pmd_axgbe_version.map
+++ b/drivers/net/axgbe/rte_pmd_axgbe_version.map
@@ -1,3 +1,3 @@
-DPDK_18.05 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/bnx2x/rte_pmd_bnx2x_version.map b/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
index bd8138a034..f9f17e4f6e 100644
--- a/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
+++ b/drivers/net/bnx2x/rte_pmd_bnx2x_version.map
@@ -1,4 +1,3 @@
-DPDK_2.1 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/bnxt/rte_pmd_bnxt_version.map b/drivers/net/bnxt/rte_pmd_bnxt_version.map
index 4750d40ad6..bb52562347 100644
--- a/drivers/net/bnxt/rte_pmd_bnxt_version.map
+++ b/drivers/net/bnxt/rte_pmd_bnxt_version.map
@@ -1,4 +1,4 @@
-DPDK_17.08 {
+DPDK_20.0 {
global:
rte_pmd_bnxt_get_vf_rx_status;
@@ -10,13 +10,13 @@ DPDK_17.08 {
rte_pmd_bnxt_set_tx_loopback;
rte_pmd_bnxt_set_vf_mac_addr;
rte_pmd_bnxt_set_vf_mac_anti_spoof;
+ rte_pmd_bnxt_set_vf_persist_stats;
rte_pmd_bnxt_set_vf_rate_limit;
rte_pmd_bnxt_set_vf_rxmode;
rte_pmd_bnxt_set_vf_vlan_anti_spoof;
rte_pmd_bnxt_set_vf_vlan_filter;
rte_pmd_bnxt_set_vf_vlan_insert;
rte_pmd_bnxt_set_vf_vlan_stripq;
- rte_pmd_bnxt_set_vf_persist_stats;
local: *;
};
diff --git a/drivers/net/bonding/rte_pmd_bond_version.map b/drivers/net/bonding/rte_pmd_bond_version.map
index 00d955c481..270c7d5d55 100644
--- a/drivers/net/bonding/rte_pmd_bond_version.map
+++ b/drivers/net/bonding/rte_pmd_bond_version.map
@@ -1,9 +1,21 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_eth_bond_8023ad_agg_selection_get;
+ rte_eth_bond_8023ad_agg_selection_set;
+ rte_eth_bond_8023ad_conf_get;
+ rte_eth_bond_8023ad_dedicated_queues_disable;
+ rte_eth_bond_8023ad_dedicated_queues_enable;
+ rte_eth_bond_8023ad_ext_collect;
+ rte_eth_bond_8023ad_ext_collect_get;
+ rte_eth_bond_8023ad_ext_distrib;
+ rte_eth_bond_8023ad_ext_distrib_get;
+ rte_eth_bond_8023ad_ext_slowtx;
+ rte_eth_bond_8023ad_setup;
rte_eth_bond_8023ad_slave_info;
rte_eth_bond_active_slaves_get;
rte_eth_bond_create;
+ rte_eth_bond_free;
rte_eth_bond_link_monitoring_set;
rte_eth_bond_mac_address_reset;
rte_eth_bond_mac_address_set;
@@ -19,36 +31,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_2.1 {
- global:
-
- rte_eth_bond_free;
-
-} DPDK_2.0;
-
-DPDK_16.04 {
-};
-
-DPDK_16.07 {
- global:
-
- rte_eth_bond_8023ad_ext_collect;
- rte_eth_bond_8023ad_ext_collect_get;
- rte_eth_bond_8023ad_ext_distrib;
- rte_eth_bond_8023ad_ext_distrib_get;
- rte_eth_bond_8023ad_ext_slowtx;
-
-} DPDK_16.04;
-
-DPDK_17.08 {
- global:
-
- rte_eth_bond_8023ad_dedicated_queues_enable;
- rte_eth_bond_8023ad_dedicated_queues_disable;
- rte_eth_bond_8023ad_agg_selection_get;
- rte_eth_bond_8023ad_agg_selection_set;
- rte_eth_bond_8023ad_conf_get;
- rte_eth_bond_8023ad_setup;
-
-} DPDK_16.07;
diff --git a/drivers/net/cxgbe/rte_pmd_cxgbe_version.map b/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
index bd8138a034..f9f17e4f6e 100644
--- a/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
+++ b/drivers/net/cxgbe/rte_pmd_cxgbe_version.map
@@ -1,4 +1,3 @@
-DPDK_2.1 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/dpaa/rte_pmd_dpaa_version.map b/drivers/net/dpaa/rte_pmd_dpaa_version.map
index 8cb4500b51..f403a1526d 100644
--- a/drivers/net/dpaa/rte_pmd_dpaa_version.map
+++ b/drivers/net/dpaa/rte_pmd_dpaa_version.map
@@ -1,12 +1,9 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_18.08 {
+DPDK_20.0 {
global:
dpaa_eth_eventq_attach;
dpaa_eth_eventq_detach;
rte_pmd_dpaa_set_tx_loopback;
-} DPDK_17.11;
+
+ local: *;
+};
diff --git a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
index d1b4cdb232..f2bb793319 100644
--- a/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
+++ b/drivers/net/dpaa2/rte_pmd_dpaa2_version.map
@@ -1,15 +1,11 @@
-DPDK_17.05 {
-
- local: *;
-};
-
-DPDK_17.11 {
+DPDK_20.0 {
global:
dpaa2_eth_eventq_attach;
dpaa2_eth_eventq_detach;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
@@ -17,4 +13,4 @@ EXPERIMENTAL {
rte_pmd_dpaa2_mux_flow_create;
rte_pmd_dpaa2_set_custom_hash;
rte_pmd_dpaa2_set_timestamp;
-} DPDK_17.11;
+};
diff --git a/drivers/net/e1000/rte_pmd_e1000_version.map b/drivers/net/e1000/rte_pmd_e1000_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/e1000/rte_pmd_e1000_version.map
+++ b/drivers/net/e1000/rte_pmd_e1000_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ena/rte_pmd_ena_version.map b/drivers/net/ena/rte_pmd_ena_version.map
index 349c6e1c22..f9f17e4f6e 100644
--- a/drivers/net/ena/rte_pmd_ena_version.map
+++ b/drivers/net/ena/rte_pmd_ena_version.map
@@ -1,4 +1,3 @@
-DPDK_16.04 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/enetc/rte_pmd_enetc_version.map b/drivers/net/enetc/rte_pmd_enetc_version.map
index 521e51f411..f9f17e4f6e 100644
--- a/drivers/net/enetc/rte_pmd_enetc_version.map
+++ b/drivers/net/enetc/rte_pmd_enetc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.11 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/enic/rte_pmd_enic_version.map b/drivers/net/enic/rte_pmd_enic_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/enic/rte_pmd_enic_version.map
+++ b/drivers/net/enic/rte_pmd_enic_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/failsafe/rte_pmd_failsafe_version.map b/drivers/net/failsafe/rte_pmd_failsafe_version.map
index b6d2840be4..f9f17e4f6e 100644
--- a/drivers/net/failsafe/rte_pmd_failsafe_version.map
+++ b/drivers/net/failsafe/rte_pmd_failsafe_version.map
@@ -1,4 +1,3 @@
-DPDK_17.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/fm10k/rte_pmd_fm10k_version.map b/drivers/net/fm10k/rte_pmd_fm10k_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/fm10k/rte_pmd_fm10k_version.map
+++ b/drivers/net/fm10k/rte_pmd_fm10k_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/hinic/rte_pmd_hinic_version.map b/drivers/net/hinic/rte_pmd_hinic_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/net/hinic/rte_pmd_hinic_version.map
+++ b/drivers/net/hinic/rte_pmd_hinic_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/hns3/rte_pmd_hns3_version.map b/drivers/net/hns3/rte_pmd_hns3_version.map
index 35e5f2debb..f9f17e4f6e 100644
--- a/drivers/net/hns3/rte_pmd_hns3_version.map
+++ b/drivers/net/hns3/rte_pmd_hns3_version.map
@@ -1,3 +1,3 @@
-DPDK_19.11 {
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/i40e/rte_pmd_i40e_version.map b/drivers/net/i40e/rte_pmd_i40e_version.map
index cccd5768c2..a80e69b93e 100644
--- a/drivers/net/i40e/rte_pmd_i40e_version.map
+++ b/drivers/net/i40e/rte_pmd_i40e_version.map
@@ -1,23 +1,34 @@
-DPDK_2.0 {
-
- local: *;
-};
-
-DPDK_17.02 {
+DPDK_20.0 {
global:
+ rte_pmd_i40e_add_vf_mac_addr;
+ rte_pmd_i40e_flow_add_del_packet_template;
+ rte_pmd_i40e_flow_type_mapping_get;
+ rte_pmd_i40e_flow_type_mapping_reset;
+ rte_pmd_i40e_flow_type_mapping_update;
+ rte_pmd_i40e_get_ddp_info;
+ rte_pmd_i40e_get_ddp_list;
rte_pmd_i40e_get_vf_stats;
+ rte_pmd_i40e_inset_get;
+ rte_pmd_i40e_inset_set;
rte_pmd_i40e_ping_vfs;
+ rte_pmd_i40e_process_ddp_package;
rte_pmd_i40e_ptype_mapping_get;
rte_pmd_i40e_ptype_mapping_replace;
rte_pmd_i40e_ptype_mapping_reset;
rte_pmd_i40e_ptype_mapping_update;
+ rte_pmd_i40e_query_vfid_by_mac;
rte_pmd_i40e_reset_vf_stats;
+ rte_pmd_i40e_rss_queue_region_conf;
+ rte_pmd_i40e_set_tc_strict_prio;
rte_pmd_i40e_set_tx_loopback;
rte_pmd_i40e_set_vf_broadcast;
rte_pmd_i40e_set_vf_mac_addr;
rte_pmd_i40e_set_vf_mac_anti_spoof;
+ rte_pmd_i40e_set_vf_max_bw;
rte_pmd_i40e_set_vf_multicast_promisc;
+ rte_pmd_i40e_set_vf_tc_bw_alloc;
+ rte_pmd_i40e_set_vf_tc_max_bw;
rte_pmd_i40e_set_vf_unicast_promisc;
rte_pmd_i40e_set_vf_vlan_anti_spoof;
rte_pmd_i40e_set_vf_vlan_filter;
@@ -25,43 +36,5 @@ DPDK_17.02 {
rte_pmd_i40e_set_vf_vlan_stripq;
rte_pmd_i40e_set_vf_vlan_tag;
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_pmd_i40e_set_tc_strict_prio;
- rte_pmd_i40e_set_vf_max_bw;
- rte_pmd_i40e_set_vf_tc_bw_alloc;
- rte_pmd_i40e_set_vf_tc_max_bw;
- rte_pmd_i40e_process_ddp_package;
- rte_pmd_i40e_get_ddp_list;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_pmd_i40e_get_ddp_info;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_pmd_i40e_add_vf_mac_addr;
- rte_pmd_i40e_flow_add_del_packet_template;
- rte_pmd_i40e_flow_type_mapping_update;
- rte_pmd_i40e_flow_type_mapping_get;
- rte_pmd_i40e_flow_type_mapping_reset;
- rte_pmd_i40e_query_vfid_by_mac;
- rte_pmd_i40e_rss_queue_region_conf;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_pmd_i40e_inset_get;
- rte_pmd_i40e_inset_set;
-} DPDK_17.11;
\ No newline at end of file
+ local: *;
+};
diff --git a/drivers/net/iavf/rte_pmd_iavf_version.map b/drivers/net/iavf/rte_pmd_iavf_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/net/iavf/rte_pmd_iavf_version.map
+++ b/drivers/net/iavf/rte_pmd_iavf_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ice/rte_pmd_ice_version.map b/drivers/net/ice/rte_pmd_ice_version.map
index 7b23b609da..f9f17e4f6e 100644
--- a/drivers/net/ice/rte_pmd_ice_version.map
+++ b/drivers/net/ice/rte_pmd_ice_version.map
@@ -1,4 +1,3 @@
-DPDK_19.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ifc/rte_pmd_ifc_version.map b/drivers/net/ifc/rte_pmd_ifc_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/net/ifc/rte_pmd_ifc_version.map
+++ b/drivers/net/ifc/rte_pmd_ifc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
index fc8c95e919..f9f17e4f6e 100644
--- a/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
+++ b/drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map
@@ -1,4 +1,3 @@
-DPDK_19.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe_version.map b/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
index c814f96d72..21534dbc3d 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe_version.map
@@ -1,57 +1,39 @@
-DPDK_2.0 {
-
- local: *;
-};
-
-DPDK_16.11 {
- global:
-
- rte_pmd_ixgbe_set_all_queues_drop_en;
- rte_pmd_ixgbe_set_tx_loopback;
- rte_pmd_ixgbe_set_vf_mac_addr;
- rte_pmd_ixgbe_set_vf_mac_anti_spoof;
- rte_pmd_ixgbe_set_vf_split_drop_en;
- rte_pmd_ixgbe_set_vf_vlan_anti_spoof;
- rte_pmd_ixgbe_set_vf_vlan_insert;
- rte_pmd_ixgbe_set_vf_vlan_stripq;
-} DPDK_2.0;
-
-DPDK_17.02 {
+DPDK_20.0 {
global:
+ rte_pmd_ixgbe_bypass_event_show;
+ rte_pmd_ixgbe_bypass_event_store;
+ rte_pmd_ixgbe_bypass_init;
+ rte_pmd_ixgbe_bypass_state_set;
+ rte_pmd_ixgbe_bypass_state_show;
+ rte_pmd_ixgbe_bypass_ver_show;
+ rte_pmd_ixgbe_bypass_wd_reset;
+ rte_pmd_ixgbe_bypass_wd_timeout_show;
+ rte_pmd_ixgbe_bypass_wd_timeout_store;
rte_pmd_ixgbe_macsec_config_rxsc;
rte_pmd_ixgbe_macsec_config_txsc;
rte_pmd_ixgbe_macsec_disable;
rte_pmd_ixgbe_macsec_enable;
rte_pmd_ixgbe_macsec_select_rxsa;
rte_pmd_ixgbe_macsec_select_txsa;
+ rte_pmd_ixgbe_ping_vf;
+ rte_pmd_ixgbe_set_all_queues_drop_en;
+ rte_pmd_ixgbe_set_tc_bw_alloc;
+ rte_pmd_ixgbe_set_tx_loopback;
+ rte_pmd_ixgbe_set_vf_mac_addr;
+ rte_pmd_ixgbe_set_vf_mac_anti_spoof;
rte_pmd_ixgbe_set_vf_rate_limit;
rte_pmd_ixgbe_set_vf_rx;
rte_pmd_ixgbe_set_vf_rxmode;
+ rte_pmd_ixgbe_set_vf_split_drop_en;
rte_pmd_ixgbe_set_vf_tx;
+ rte_pmd_ixgbe_set_vf_vlan_anti_spoof;
rte_pmd_ixgbe_set_vf_vlan_filter;
-} DPDK_16.11;
+ rte_pmd_ixgbe_set_vf_vlan_insert;
+ rte_pmd_ixgbe_set_vf_vlan_stripq;
-DPDK_17.05 {
- global:
-
- rte_pmd_ixgbe_ping_vf;
- rte_pmd_ixgbe_set_tc_bw_alloc;
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_pmd_ixgbe_bypass_event_show;
- rte_pmd_ixgbe_bypass_event_store;
- rte_pmd_ixgbe_bypass_init;
- rte_pmd_ixgbe_bypass_state_set;
- rte_pmd_ixgbe_bypass_state_show;
- rte_pmd_ixgbe_bypass_ver_show;
- rte_pmd_ixgbe_bypass_wd_reset;
- rte_pmd_ixgbe_bypass_wd_timeout_show;
- rte_pmd_ixgbe_bypass_wd_timeout_store;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/drivers/net/kni/rte_pmd_kni_version.map b/drivers/net/kni/rte_pmd_kni_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/net/kni/rte_pmd_kni_version.map
+++ b/drivers/net/kni/rte_pmd_kni_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/liquidio/rte_pmd_liquidio_version.map b/drivers/net/liquidio/rte_pmd_liquidio_version.map
index 8591cc0b18..f9f17e4f6e 100644
--- a/drivers/net/liquidio/rte_pmd_liquidio_version.map
+++ b/drivers/net/liquidio/rte_pmd_liquidio_version.map
@@ -1,4 +1,3 @@
-DPDK_17.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/memif/rte_pmd_memif_version.map b/drivers/net/memif/rte_pmd_memif_version.map
index 8861484fb3..f9f17e4f6e 100644
--- a/drivers/net/memif/rte_pmd_memif_version.map
+++ b/drivers/net/memif/rte_pmd_memif_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/net/mlx4/rte_pmd_mlx4_version.map b/drivers/net/mlx4/rte_pmd_mlx4_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/mlx4/rte_pmd_mlx4_version.map
+++ b/drivers/net/mlx4/rte_pmd_mlx4_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mlx5/rte_pmd_mlx5_version.map b/drivers/net/mlx5/rte_pmd_mlx5_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/mlx5/rte_pmd_mlx5_version.map
+++ b/drivers/net/mlx5/rte_pmd_mlx5_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mvneta/rte_pmd_mvneta_version.map b/drivers/net/mvneta/rte_pmd_mvneta_version.map
index 24bd5cdb35..f9f17e4f6e 100644
--- a/drivers/net/mvneta/rte_pmd_mvneta_version.map
+++ b/drivers/net/mvneta/rte_pmd_mvneta_version.map
@@ -1,3 +1,3 @@
-DPDK_18.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/mvpp2/rte_pmd_mvpp2_version.map b/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
index a753031720..f9f17e4f6e 100644
--- a/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
+++ b/drivers/net/mvpp2/rte_pmd_mvpp2_version.map
@@ -1,3 +1,3 @@
-DPDK_17.11 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/netvsc/rte_pmd_netvsc_version.map b/drivers/net/netvsc/rte_pmd_netvsc_version.map
index d534019a6b..f9f17e4f6e 100644
--- a/drivers/net/netvsc/rte_pmd_netvsc_version.map
+++ b/drivers/net/netvsc/rte_pmd_netvsc_version.map
@@ -1,5 +1,3 @@
-/* SPDX-License-Identifier: BSD-3-Clause */
-
-DPDK_18.08 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/nfb/rte_pmd_nfb_version.map b/drivers/net/nfb/rte_pmd_nfb_version.map
index fc8c95e919..f9f17e4f6e 100644
--- a/drivers/net/nfb/rte_pmd_nfb_version.map
+++ b/drivers/net/nfb/rte_pmd_nfb_version.map
@@ -1,4 +1,3 @@
-DPDK_19.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/nfp/rte_pmd_nfp_version.map b/drivers/net/nfp/rte_pmd_nfp_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/nfp/rte_pmd_nfp_version.map
+++ b/drivers/net/nfp/rte_pmd_nfp_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/null/rte_pmd_null_version.map b/drivers/net/null/rte_pmd_null_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/null/rte_pmd_null_version.map
+++ b/drivers/net/null/rte_pmd_null_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/octeontx/rte_pmd_octeontx_version.map b/drivers/net/octeontx/rte_pmd_octeontx_version.map
index a3161b14d0..f7cae02fac 100644
--- a/drivers/net/octeontx/rte_pmd_octeontx_version.map
+++ b/drivers/net/octeontx/rte_pmd_octeontx_version.map
@@ -1,11 +1,7 @@
-DPDK_17.11 {
-
- local: *;
-};
-
-DPDK_18.02 {
+DPDK_20.0 {
global:
rte_octeontx_pchan_map;
-} DPDK_17.11;
+ local: *;
+};
diff --git a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
+++ b/drivers/net/octeontx2/rte_pmd_octeontx2_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/pcap/rte_pmd_pcap_version.map b/drivers/net/pcap/rte_pmd_pcap_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/pcap/rte_pmd_pcap_version.map
+++ b/drivers/net/pcap/rte_pmd_pcap_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/qede/rte_pmd_qede_version.map b/drivers/net/qede/rte_pmd_qede_version.map
index 349c6e1c22..f9f17e4f6e 100644
--- a/drivers/net/qede/rte_pmd_qede_version.map
+++ b/drivers/net/qede/rte_pmd_qede_version.map
@@ -1,4 +1,3 @@
-DPDK_16.04 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/ring/rte_pmd_ring_version.map b/drivers/net/ring/rte_pmd_ring_version.map
index 1f785d9409..ebb6be2733 100644
--- a/drivers/net/ring/rte_pmd_ring_version.map
+++ b/drivers/net/ring/rte_pmd_ring_version.map
@@ -1,14 +1,8 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_eth_from_ring;
rte_eth_from_rings;
local: *;
};
-
-DPDK_2.2 {
- global:
-
- rte_eth_from_ring;
-
-} DPDK_2.0;
diff --git a/drivers/net/sfc/rte_pmd_sfc_version.map b/drivers/net/sfc/rte_pmd_sfc_version.map
index 31eca32ebe..f9f17e4f6e 100644
--- a/drivers/net/sfc/rte_pmd_sfc_version.map
+++ b/drivers/net/sfc/rte_pmd_sfc_version.map
@@ -1,4 +1,3 @@
-DPDK_17.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/softnic/rte_pmd_softnic_version.map b/drivers/net/softnic/rte_pmd_softnic_version.map
index bc44b06f98..50f113d5a2 100644
--- a/drivers/net/softnic/rte_pmd_softnic_version.map
+++ b/drivers/net/softnic/rte_pmd_softnic_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_pmd_softnic_run;
diff --git a/drivers/net/szedata2/rte_pmd_szedata2_version.map b/drivers/net/szedata2/rte_pmd_szedata2_version.map
index ad607bbedd..f9f17e4f6e 100644
--- a/drivers/net/szedata2/rte_pmd_szedata2_version.map
+++ b/drivers/net/szedata2/rte_pmd_szedata2_version.map
@@ -1,3 +1,3 @@
-DPDK_2.2 {
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/tap/rte_pmd_tap_version.map b/drivers/net/tap/rte_pmd_tap_version.map
index 31eca32ebe..f9f17e4f6e 100644
--- a/drivers/net/tap/rte_pmd_tap_version.map
+++ b/drivers/net/tap/rte_pmd_tap_version.map
@@ -1,4 +1,3 @@
-DPDK_17.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/thunderx/rte_pmd_thunderx_version.map b/drivers/net/thunderx/rte_pmd_thunderx_version.map
index 1901bcb3b3..f9f17e4f6e 100644
--- a/drivers/net/thunderx/rte_pmd_thunderx_version.map
+++ b/drivers/net/thunderx/rte_pmd_thunderx_version.map
@@ -1,4 +1,3 @@
-DPDK_16.07 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map b/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
+++ b/drivers/net/vdev_netvsc/rte_pmd_vdev_netvsc_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vhost/rte_pmd_vhost_version.map b/drivers/net/vhost/rte_pmd_vhost_version.map
index 695db85749..16b591ccc4 100644
--- a/drivers/net/vhost/rte_pmd_vhost_version.map
+++ b/drivers/net/vhost/rte_pmd_vhost_version.map
@@ -1,13 +1,8 @@
-DPDK_16.04 {
+DPDK_20.0 {
global:
rte_eth_vhost_get_queue_event;
-
- local: *;
-};
-
-DPDK_16.11 {
- global:
-
rte_eth_vhost_get_vid_from_port_id;
+
+ local: *;
};
diff --git a/drivers/net/virtio/rte_pmd_virtio_version.map b/drivers/net/virtio/rte_pmd_virtio_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/virtio/rte_pmd_virtio_version.map
+++ b/drivers/net/virtio/rte_pmd_virtio_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map b/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
index ef35398402..f9f17e4f6e 100644
--- a/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
+++ b/drivers/net/vmxnet3/rte_pmd_vmxnet3_version.map
@@ -1,4 +1,3 @@
-DPDK_2.0 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map b/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
+++ b/drivers/raw/dpaa2_cmdif/rte_rawdev_dpaa2_cmdif_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map b/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
index d16a136fc8..ca6a0d7626 100644
--- a/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
+++ b/drivers/raw/dpaa2_qdma/rte_rawdev_dpaa2_qdma_version.map
@@ -1,4 +1,4 @@
-DPDK_19.05 {
+DPDK_20.0 {
global:
rte_qdma_attr_get;
@@ -9,9 +9,9 @@ DPDK_19.05 {
rte_qdma_start;
rte_qdma_stop;
rte_qdma_vq_create;
- rte_qdma_vq_destroy;
rte_qdma_vq_dequeue;
rte_qdma_vq_dequeue_multi;
+ rte_qdma_vq_destroy;
rte_qdma_vq_enqueue;
rte_qdma_vq_enqueue_multi;
rte_qdma_vq_stats;
diff --git a/drivers/raw/ifpga/rte_rawdev_ifpga_version.map b/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
index 9b9ab1a4cf..f9f17e4f6e 100644
--- a/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
+++ b/drivers/raw/ifpga/rte_rawdev_ifpga_version.map
@@ -1,4 +1,3 @@
-DPDK_18.05 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/ioat/rte_rawdev_ioat_version.map b/drivers/raw/ioat/rte_rawdev_ioat_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/raw/ioat/rte_rawdev_ioat_version.map
+++ b/drivers/raw/ioat/rte_rawdev_ioat_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/ntb/rte_rawdev_ntb_version.map b/drivers/raw/ntb/rte_rawdev_ntb_version.map
index 8861484fb3..f9f17e4f6e 100644
--- a/drivers/raw/ntb/rte_rawdev_ntb_version.map
+++ b/drivers/raw/ntb/rte_rawdev_ntb_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
- local: *;
+DPDK_20.0 {
+ local: *;
};
diff --git a/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map b/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
index 9a61188cd5..f9f17e4f6e 100644
--- a/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
+++ b/drivers/raw/octeontx2_dma/rte_rawdev_octeontx2_dma_version.map
@@ -1,4 +1,3 @@
-DPDK_19.08 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/drivers/raw/skeleton/rte_rawdev_skeleton_version.map b/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
index 179140fb87..f9f17e4f6e 100644
--- a/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
+++ b/drivers/raw/skeleton/rte_rawdev_skeleton_version.map
@@ -1,4 +1,3 @@
-DPDK_18.02 {
-
+DPDK_20.0 {
local: *;
};
diff --git a/lib/librte_acl/rte_acl_version.map b/lib/librte_acl/rte_acl_version.map
index b09370a104..c3daca8115 100644
--- a/lib/librte_acl/rte_acl_version.map
+++ b/lib/librte_acl/rte_acl_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_acl_add_rules;
diff --git a/lib/librte_bbdev/rte_bbdev_version.map b/lib/librte_bbdev/rte_bbdev_version.map
index 3624eb1cb4..45b560dbe7 100644
--- a/lib/librte_bbdev/rte_bbdev_version.map
+++ b/lib/librte_bbdev/rte_bbdev_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_bitratestats/rte_bitratestats_version.map b/lib/librte_bitratestats/rte_bitratestats_version.map
index fe7454452d..88fc2912db 100644
--- a/lib/librte_bitratestats/rte_bitratestats_version.map
+++ b/lib/librte_bitratestats/rte_bitratestats_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_stats_bitrate_calc;
diff --git a/lib/librte_bpf/rte_bpf_version.map b/lib/librte_bpf/rte_bpf_version.map
index a203e088ea..e1ec43faa0 100644
--- a/lib/librte_bpf/rte_bpf_version.map
+++ b/lib/librte_bpf/rte_bpf_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_cfgfile/rte_cfgfile_version.map b/lib/librte_cfgfile/rte_cfgfile_version.map
index a0a11cea8d..906eee96bf 100644
--- a/lib/librte_cfgfile/rte_cfgfile_version.map
+++ b/lib/librte_cfgfile/rte_cfgfile_version.map
@@ -1,40 +1,22 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_cfgfile_add_entry;
+ rte_cfgfile_add_section;
rte_cfgfile_close;
+ rte_cfgfile_create;
rte_cfgfile_get_entry;
rte_cfgfile_has_entry;
rte_cfgfile_has_section;
rte_cfgfile_load;
+ rte_cfgfile_load_with_params;
rte_cfgfile_num_sections;
+ rte_cfgfile_save;
rte_cfgfile_section_entries;
+ rte_cfgfile_section_entries_by_index;
rte_cfgfile_section_num_entries;
rte_cfgfile_sections;
+ rte_cfgfile_set_entry;
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_cfgfile_section_entries_by_index;
-
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_cfgfile_load_with_params;
-
-} DPDK_16.04;
-
-DPDK_17.11 {
- global:
-
- rte_cfgfile_add_entry;
- rte_cfgfile_add_section;
- rte_cfgfile_create;
- rte_cfgfile_save;
- rte_cfgfile_set_entry;
-
-} DPDK_17.05;
diff --git a/lib/librte_cmdline/rte_cmdline_version.map b/lib/librte_cmdline/rte_cmdline_version.map
index 04bcb387f2..95fce812ff 100644
--- a/lib/librte_cmdline/rte_cmdline_version.map
+++ b/lib/librte_cmdline/rte_cmdline_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
cirbuf_add_buf_head;
@@ -40,6 +40,7 @@ DPDK_2.0 {
cmdline_parse_num;
cmdline_parse_portlist;
cmdline_parse_string;
+ cmdline_poll;
cmdline_printf;
cmdline_quit;
cmdline_set_prompt;
@@ -68,10 +69,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_2.1 {
- global:
-
- cmdline_poll;
-
-} DPDK_2.0;
diff --git a/lib/librte_compressdev/rte_compressdev_version.map b/lib/librte_compressdev/rte_compressdev_version.map
index e2a108b650..cfcd50ac1c 100644
--- a/lib/librte_compressdev/rte_compressdev_version.map
+++ b/lib/librte_compressdev/rte_compressdev_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_cryptodev/rte_cryptodev_version.map b/lib/librte_cryptodev/rte_cryptodev_version.map
index 3deb265ac2..1dd1e259a0 100644
--- a/lib/librte_cryptodev/rte_cryptodev_version.map
+++ b/lib/librte_cryptodev/rte_cryptodev_version.map
@@ -1,92 +1,62 @@
-DPDK_16.04 {
+DPDK_20.0 {
global:
- rte_cryptodevs;
+ rte_crypto_aead_algorithm_strings;
+ rte_crypto_aead_operation_strings;
+ rte_crypto_auth_algorithm_strings;
+ rte_crypto_auth_operation_strings;
+ rte_crypto_cipher_algorithm_strings;
+ rte_crypto_cipher_operation_strings;
+ rte_crypto_op_pool_create;
+ rte_cryptodev_allocate_driver;
rte_cryptodev_callback_register;
rte_cryptodev_callback_unregister;
rte_cryptodev_close;
- rte_cryptodev_count;
rte_cryptodev_configure;
+ rte_cryptodev_count;
+ rte_cryptodev_device_count_by_driver;
+ rte_cryptodev_devices_get;
+ rte_cryptodev_driver_id_get;
+ rte_cryptodev_driver_name_get;
+ rte_cryptodev_get_aead_algo_enum;
+ rte_cryptodev_get_auth_algo_enum;
+ rte_cryptodev_get_cipher_algo_enum;
rte_cryptodev_get_dev_id;
rte_cryptodev_get_feature_name;
+ rte_cryptodev_get_sec_ctx;
rte_cryptodev_info_get;
+ rte_cryptodev_name_get;
rte_cryptodev_pmd_allocate;
rte_cryptodev_pmd_callback_process;
+ rte_cryptodev_pmd_create;
+ rte_cryptodev_pmd_create_dev_name;
+ rte_cryptodev_pmd_destroy;
+ rte_cryptodev_pmd_get_dev;
+ rte_cryptodev_pmd_get_named_dev;
+ rte_cryptodev_pmd_is_valid_dev;
+ rte_cryptodev_pmd_parse_input_args;
rte_cryptodev_pmd_release_device;
- rte_cryptodev_sym_session_create;
- rte_cryptodev_sym_session_free;
+ rte_cryptodev_queue_pair_count;
+ rte_cryptodev_queue_pair_setup;
rte_cryptodev_socket_id;
rte_cryptodev_start;
rte_cryptodev_stats_get;
rte_cryptodev_stats_reset;
rte_cryptodev_stop;
- rte_cryptodev_queue_pair_count;
- rte_cryptodev_queue_pair_setup;
- rte_crypto_op_pool_create;
-
- local: *;
-};
-
-DPDK_17.02 {
- global:
-
- rte_cryptodev_devices_get;
- rte_cryptodev_pmd_create_dev_name;
- rte_cryptodev_pmd_get_dev;
- rte_cryptodev_pmd_get_named_dev;
- rte_cryptodev_pmd_is_valid_dev;
+ rte_cryptodev_sym_capability_check_aead;
rte_cryptodev_sym_capability_check_auth;
rte_cryptodev_sym_capability_check_cipher;
rte_cryptodev_sym_capability_get;
- rte_crypto_auth_algorithm_strings;
- rte_crypto_auth_operation_strings;
- rte_crypto_cipher_algorithm_strings;
- rte_crypto_cipher_operation_strings;
-
-} DPDK_16.04;
-
-DPDK_17.05 {
- global:
-
- rte_cryptodev_get_auth_algo_enum;
- rte_cryptodev_get_cipher_algo_enum;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_cryptodev_allocate_driver;
- rte_cryptodev_device_count_by_driver;
- rte_cryptodev_driver_id_get;
- rte_cryptodev_driver_name_get;
- rte_cryptodev_get_aead_algo_enum;
- rte_cryptodev_sym_capability_check_aead;
- rte_cryptodev_sym_session_init;
- rte_cryptodev_sym_session_clear;
- rte_crypto_aead_algorithm_strings;
- rte_crypto_aead_operation_strings;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_cryptodev_get_sec_ctx;
- rte_cryptodev_name_get;
- rte_cryptodev_pmd_create;
- rte_cryptodev_pmd_destroy;
- rte_cryptodev_pmd_parse_input_args;
-
-} DPDK_17.08;
-
-DPDK_18.05 {
- global:
-
rte_cryptodev_sym_get_header_session_size;
rte_cryptodev_sym_get_private_session_size;
+ rte_cryptodev_sym_session_clear;
+ rte_cryptodev_sym_session_create;
+ rte_cryptodev_sym_session_free;
+ rte_cryptodev_sym_session_init;
+ rte_cryptodevs;
-} DPDK_17.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_distributor/rte_distributor_version.map b/lib/librte_distributor/rte_distributor_version.map
index 3a285b394e..1b7c643005 100644
--- a/lib/librte_distributor/rte_distributor_version.map
+++ b/lib/librte_distributor/rte_distributor_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_distributor_clear_returns;
@@ -13,17 +13,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_17.05 {
- global:
-
- rte_distributor_clear_returns;
- rte_distributor_create;
- rte_distributor_flush;
- rte_distributor_get_pkt;
- rte_distributor_poll_pkt;
- rte_distributor_process;
- rte_distributor_request_pkt;
- rte_distributor_return_pkt;
- rte_distributor_returned_pkts;
-} DPDK_2.0;
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 7cbf82d37b..8c41999317 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
__rte_panic;
@@ -7,46 +7,111 @@ DPDK_2.0 {
lcore_config;
per_lcore__lcore_id;
per_lcore__rte_errno;
+ rte_bus_dump;
+ rte_bus_find;
+ rte_bus_find_by_device;
+ rte_bus_find_by_name;
+ rte_bus_get_iommu_class;
+ rte_bus_probe;
+ rte_bus_register;
+ rte_bus_scan;
+ rte_bus_unregister;
rte_calloc;
rte_calloc_socket;
rte_cpu_check_supported;
rte_cpu_get_flag_enabled;
+ rte_cpu_get_flag_name;
+ rte_cpu_is_supported;
+ rte_ctrl_thread_create;
rte_cycles_vmware_tsc_map;
rte_delay_us;
+ rte_delay_us_block;
+ rte_delay_us_callback_register;
+ rte_dev_is_probed;
+ rte_dev_probe;
+ rte_dev_remove;
+ rte_devargs_add;
+ rte_devargs_dump;
+ rte_devargs_insert;
+ rte_devargs_next;
+ rte_devargs_parse;
+ rte_devargs_parsef;
+ rte_devargs_remove;
+ rte_devargs_type_count;
rte_dump_physmem_layout;
rte_dump_registers;
rte_dump_stack;
rte_dump_tailq;
rte_eal_alarm_cancel;
rte_eal_alarm_set;
+ rte_eal_cleanup;
+ rte_eal_create_uio_dev;
rte_eal_get_configuration;
rte_eal_get_lcore_state;
rte_eal_get_physmem_size;
+ rte_eal_get_runtime_dir;
rte_eal_has_hugepages;
+ rte_eal_has_pci;
+ rte_eal_hotplug_add;
+ rte_eal_hotplug_remove;
rte_eal_hpet_init;
rte_eal_init;
rte_eal_iopl_init;
+ rte_eal_iova_mode;
rte_eal_lcore_role;
+ rte_eal_mbuf_user_pool_ops;
rte_eal_mp_remote_launch;
rte_eal_mp_wait_lcore;
+ rte_eal_primary_proc_alive;
rte_eal_process_type;
rte_eal_remote_launch;
rte_eal_tailq_lookup;
rte_eal_tailq_register;
+ rte_eal_using_phys_addrs;
+ rte_eal_vfio_intr_mode;
rte_eal_wait_lcore;
+ rte_epoll_ctl;
+ rte_epoll_wait;
rte_exit;
rte_free;
rte_get_hpet_cycles;
rte_get_hpet_hz;
rte_get_tsc_hz;
rte_hexdump;
+ rte_hypervisor_get;
+ rte_hypervisor_get_name;
+ rte_intr_allow_others;
rte_intr_callback_register;
rte_intr_callback_unregister;
+ rte_intr_cap_multiple;
rte_intr_disable;
+ rte_intr_dp_is_en;
+ rte_intr_efd_disable;
+ rte_intr_efd_enable;
rte_intr_enable;
+ rte_intr_free_epoll_fd;
+ rte_intr_rx_ctl;
+ rte_intr_tls_epfd;
+ rte_keepalive_create;
+ rte_keepalive_dispatch_pings;
+ rte_keepalive_mark_alive;
+ rte_keepalive_mark_sleep;
+ rte_keepalive_register_core;
+ rte_keepalive_register_relay_callback;
+ rte_lcore_has_role;
+ rte_lcore_index;
+ rte_lcore_to_socket_id;
rte_log;
rte_log_cur_msg_loglevel;
rte_log_cur_msg_logtype;
+ rte_log_dump;
+ rte_log_get_global_level;
+ rte_log_get_level;
+ rte_log_register;
+ rte_log_set_global_level;
+ rte_log_set_level;
+ rte_log_set_level_pattern;
+ rte_log_set_level_regexp;
rte_logs;
rte_malloc;
rte_malloc_dump_stats;
@@ -54,155 +119,38 @@ DPDK_2.0 {
rte_malloc_set_limit;
rte_malloc_socket;
rte_malloc_validate;
+ rte_malloc_virt2iova;
+ rte_mcfg_mem_read_lock;
+ rte_mcfg_mem_read_unlock;
+ rte_mcfg_mem_write_lock;
+ rte_mcfg_mem_write_unlock;
+ rte_mcfg_mempool_read_lock;
+ rte_mcfg_mempool_read_unlock;
+ rte_mcfg_mempool_write_lock;
+ rte_mcfg_mempool_write_unlock;
+ rte_mcfg_tailq_read_lock;
+ rte_mcfg_tailq_read_unlock;
+ rte_mcfg_tailq_write_lock;
+ rte_mcfg_tailq_write_unlock;
rte_mem_lock_page;
+ rte_mem_virt2iova;
rte_mem_virt2phy;
rte_memdump;
rte_memory_get_nchannel;
rte_memory_get_nrank;
rte_memzone_dump;
+ rte_memzone_free;
rte_memzone_lookup;
rte_memzone_reserve;
rte_memzone_reserve_aligned;
rte_memzone_reserve_bounded;
rte_memzone_walk;
rte_openlog_stream;
+ rte_rand;
rte_realloc;
- rte_set_application_usage_hook;
- rte_socket_id;
- rte_strerror;
- rte_strsplit;
- rte_sys_gettid;
- rte_thread_get_affinity;
- rte_thread_set_affinity;
- rte_vlog;
- rte_zmalloc;
- rte_zmalloc_socket;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_epoll_ctl;
- rte_epoll_wait;
- rte_intr_allow_others;
- rte_intr_dp_is_en;
- rte_intr_efd_disable;
- rte_intr_efd_enable;
- rte_intr_rx_ctl;
- rte_intr_tls_epfd;
- rte_memzone_free;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
- rte_intr_cap_multiple;
- rte_keepalive_create;
- rte_keepalive_dispatch_pings;
- rte_keepalive_mark_alive;
- rte_keepalive_register_core;
-
-} DPDK_2.1;
-
-DPDK_16.04 {
- global:
-
- rte_cpu_get_flag_name;
- rte_eal_primary_proc_alive;
-
-} DPDK_2.2;
-
-DPDK_16.07 {
- global:
-
- rte_keepalive_mark_sleep;
- rte_keepalive_register_relay_callback;
- rte_rtm_supported;
- rte_thread_setname;
-
-} DPDK_16.04;
-
-DPDK_16.11 {
- global:
-
- rte_delay_us_block;
- rte_delay_us_callback_register;
-
-} DPDK_16.07;
-
-DPDK_17.02 {
- global:
-
- rte_bus_dump;
- rte_bus_probe;
- rte_bus_register;
- rte_bus_scan;
- rte_bus_unregister;
-
-} DPDK_16.11;
-
-DPDK_17.05 {
- global:
-
- rte_cpu_is_supported;
- rte_intr_free_epoll_fd;
- rte_log_dump;
- rte_log_get_global_level;
- rte_log_register;
- rte_log_set_global_level;
- rte_log_set_level;
- rte_log_set_level_regexp;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- rte_bus_find;
- rte_bus_find_by_device;
- rte_bus_find_by_name;
- rte_log_get_level;
-
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_eal_create_uio_dev;
- rte_bus_get_iommu_class;
- rte_eal_has_pci;
- rte_eal_iova_mode;
- rte_eal_using_phys_addrs;
- rte_eal_vfio_intr_mode;
- rte_lcore_has_role;
- rte_malloc_virt2iova;
- rte_mem_virt2iova;
- rte_vfio_enable;
- rte_vfio_is_enabled;
- rte_vfio_noiommu_is_enabled;
- rte_vfio_release_device;
- rte_vfio_setup_device;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_hypervisor_get;
- rte_hypervisor_get_name;
- rte_vfio_clear_group;
rte_reciprocal_value;
rte_reciprocal_value_u64;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_log_set_level_pattern;
+ rte_rtm_supported;
rte_service_attr_get;
rte_service_attr_reset_all;
rte_service_component_register;
@@ -215,6 +163,8 @@ DPDK_18.05 {
rte_service_get_count;
rte_service_get_name;
rte_service_lcore_add;
+ rte_service_lcore_attr_get;
+ rte_service_lcore_attr_reset_all;
rte_service_lcore_count;
rte_service_lcore_count_services;
rte_service_lcore_del;
@@ -224,6 +174,7 @@ DPDK_18.05 {
rte_service_lcore_stop;
rte_service_map_lcore_get;
rte_service_map_lcore_set;
+ rte_service_may_be_active;
rte_service_probe_capability;
rte_service_run_iter_on_app_lcore;
rte_service_runstate_get;
@@ -231,17 +182,23 @@ DPDK_18.05 {
rte_service_set_runstate_mapped_check;
rte_service_set_stats_enable;
rte_service_start_with_defaults;
-
-} DPDK_18.02;
-
-DPDK_18.08 {
- global:
-
- rte_eal_mbuf_user_pool_ops;
+ rte_set_application_usage_hook;
+ rte_socket_count;
+ rte_socket_id;
+ rte_socket_id_by_idx;
+ rte_srand;
+ rte_strerror;
+ rte_strscpy;
+ rte_strsplit;
+ rte_sys_gettid;
+ rte_thread_get_affinity;
+ rte_thread_set_affinity;
+ rte_thread_setname;
rte_uuid_compare;
rte_uuid_is_null;
rte_uuid_parse;
rte_uuid_unparse;
+ rte_vfio_clear_group;
rte_vfio_container_create;
rte_vfio_container_destroy;
rte_vfio_container_dma_map;
@@ -250,67 +207,20 @@ DPDK_18.08 {
rte_vfio_container_group_unbind;
rte_vfio_dma_map;
rte_vfio_dma_unmap;
+ rte_vfio_enable;
rte_vfio_get_container_fd;
rte_vfio_get_group_fd;
rte_vfio_get_group_num;
-
-} DPDK_18.05;
-
-DPDK_18.11 {
- global:
-
- rte_dev_probe;
- rte_dev_remove;
- rte_eal_get_runtime_dir;
- rte_eal_hotplug_add;
- rte_eal_hotplug_remove;
- rte_strscpy;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- rte_ctrl_thread_create;
- rte_dev_is_probed;
- rte_devargs_add;
- rte_devargs_dump;
- rte_devargs_insert;
- rte_devargs_next;
- rte_devargs_parse;
- rte_devargs_parsef;
- rte_devargs_remove;
- rte_devargs_type_count;
- rte_eal_cleanup;
- rte_socket_count;
- rte_socket_id_by_idx;
-
-} DPDK_18.11;
-
-DPDK_19.08 {
- global:
-
- rte_lcore_index;
- rte_lcore_to_socket_id;
- rte_mcfg_mem_read_lock;
- rte_mcfg_mem_read_unlock;
- rte_mcfg_mem_write_lock;
- rte_mcfg_mem_write_unlock;
- rte_mcfg_mempool_read_lock;
- rte_mcfg_mempool_read_unlock;
- rte_mcfg_mempool_write_lock;
- rte_mcfg_mempool_write_unlock;
- rte_mcfg_tailq_read_lock;
- rte_mcfg_tailq_read_unlock;
- rte_mcfg_tailq_write_lock;
- rte_mcfg_tailq_write_unlock;
- rte_rand;
- rte_service_lcore_attr_get;
- rte_service_lcore_attr_reset_all;
- rte_service_may_be_active;
- rte_srand;
-
-} DPDK_19.05;
+ rte_vfio_is_enabled;
+ rte_vfio_noiommu_is_enabled;
+ rte_vfio_release_device;
+ rte_vfio_setup_device;
+ rte_vlog;
+ rte_zmalloc;
+ rte_zmalloc_socket;
+
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_efd/rte_efd_version.map b/lib/librte_efd/rte_efd_version.map
index ae60a64178..e010eecfe4 100644
--- a/lib/librte_efd/rte_efd_version.map
+++ b/lib/librte_efd/rte_efd_version.map
@@ -1,4 +1,4 @@
-DPDK_17.02 {
+DPDK_20.0 {
global:
rte_efd_create;
diff --git a/lib/librte_ethdev/rte_ethdev_version.map b/lib/librte_ethdev/rte_ethdev_version.map
index 6df42a47b8..9e1dbdebb4 100644
--- a/lib/librte_ethdev/rte_ethdev_version.map
+++ b/lib/librte_ethdev/rte_ethdev_version.map
@@ -1,35 +1,53 @@
-DPDK_2.2 {
+DPDK_20.0 {
global:
+ _rte_eth_dev_callback_process;
+ _rte_eth_dev_reset;
+ rte_eth_add_first_rx_callback;
rte_eth_add_rx_callback;
rte_eth_add_tx_callback;
rte_eth_allmulticast_disable;
rte_eth_allmulticast_enable;
rte_eth_allmulticast_get;
+ rte_eth_dev_adjust_nb_rx_tx_desc;
rte_eth_dev_allocate;
rte_eth_dev_allocated;
+ rte_eth_dev_attach_secondary;
rte_eth_dev_callback_register;
rte_eth_dev_callback_unregister;
rte_eth_dev_close;
rte_eth_dev_configure;
rte_eth_dev_count;
+ rte_eth_dev_count_avail;
+ rte_eth_dev_count_total;
rte_eth_dev_default_mac_addr_set;
+ rte_eth_dev_filter_ctrl;
rte_eth_dev_filter_supported;
rte_eth_dev_flow_ctrl_get;
rte_eth_dev_flow_ctrl_set;
+ rte_eth_dev_fw_version_get;
rte_eth_dev_get_dcb_info;
rte_eth_dev_get_eeprom;
rte_eth_dev_get_eeprom_length;
rte_eth_dev_get_mtu;
+ rte_eth_dev_get_name_by_port;
+ rte_eth_dev_get_port_by_name;
rte_eth_dev_get_reg_info;
+ rte_eth_dev_get_sec_ctx;
+ rte_eth_dev_get_supported_ptypes;
rte_eth_dev_get_vlan_offload;
- rte_eth_devices;
rte_eth_dev_info_get;
rte_eth_dev_is_valid_port;
+ rte_eth_dev_l2_tunnel_eth_type_conf;
+ rte_eth_dev_l2_tunnel_offload_set;
+ rte_eth_dev_logtype;
rte_eth_dev_mac_addr_add;
rte_eth_dev_mac_addr_remove;
+ rte_eth_dev_pool_ops_supported;
rte_eth_dev_priority_flow_ctrl_set;
+ rte_eth_dev_probing_finish;
rte_eth_dev_release_port;
+ rte_eth_dev_reset;
rte_eth_dev_rss_hash_conf_get;
rte_eth_dev_rss_hash_update;
rte_eth_dev_rss_reta_query;
@@ -38,6 +56,7 @@ DPDK_2.2 {
rte_eth_dev_rx_intr_ctl_q;
rte_eth_dev_rx_intr_disable;
rte_eth_dev_rx_intr_enable;
+ rte_eth_dev_rx_offload_name;
rte_eth_dev_rx_queue_start;
rte_eth_dev_rx_queue_stop;
rte_eth_dev_set_eeprom;
@@ -47,18 +66,28 @@ DPDK_2.2 {
rte_eth_dev_set_mtu;
rte_eth_dev_set_rx_queue_stats_mapping;
rte_eth_dev_set_tx_queue_stats_mapping;
+ rte_eth_dev_set_vlan_ether_type;
rte_eth_dev_set_vlan_offload;
rte_eth_dev_set_vlan_pvid;
rte_eth_dev_set_vlan_strip_on_queue;
rte_eth_dev_socket_id;
rte_eth_dev_start;
rte_eth_dev_stop;
+ rte_eth_dev_tx_offload_name;
rte_eth_dev_tx_queue_start;
rte_eth_dev_tx_queue_stop;
rte_eth_dev_uc_all_hash_table_set;
rte_eth_dev_uc_hash_table_set;
+ rte_eth_dev_udp_tunnel_port_add;
+ rte_eth_dev_udp_tunnel_port_delete;
rte_eth_dev_vlan_filter;
+ rte_eth_devices;
rte_eth_dma_zone_reserve;
+ rte_eth_find_next;
+ rte_eth_find_next_owned_by;
+ rte_eth_iterator_cleanup;
+ rte_eth_iterator_init;
+ rte_eth_iterator_next;
rte_eth_led_off;
rte_eth_led_on;
rte_eth_link;
@@ -75,6 +104,7 @@ DPDK_2.2 {
rte_eth_rx_queue_info_get;
rte_eth_rx_queue_setup;
rte_eth_set_queue_rate_limit;
+ rte_eth_speed_bitflag;
rte_eth_stats;
rte_eth_stats_get;
rte_eth_stats_reset;
@@ -85,66 +115,27 @@ DPDK_2.2 {
rte_eth_timesync_read_time;
rte_eth_timesync_read_tx_timestamp;
rte_eth_timesync_write_time;
- rte_eth_tx_queue_info_get;
- rte_eth_tx_queue_setup;
- rte_eth_xstats_get;
- rte_eth_xstats_reset;
-
- local: *;
-};
-
-DPDK_16.04 {
- global:
-
- rte_eth_dev_get_supported_ptypes;
- rte_eth_dev_l2_tunnel_eth_type_conf;
- rte_eth_dev_l2_tunnel_offload_set;
- rte_eth_dev_set_vlan_ether_type;
- rte_eth_dev_udp_tunnel_port_add;
- rte_eth_dev_udp_tunnel_port_delete;
- rte_eth_speed_bitflag;
rte_eth_tx_buffer_count_callback;
rte_eth_tx_buffer_drop_callback;
rte_eth_tx_buffer_init;
rte_eth_tx_buffer_set_err_callback;
-
-} DPDK_2.2;
-
-DPDK_16.07 {
- global:
-
- rte_eth_add_first_rx_callback;
- rte_eth_dev_get_name_by_port;
- rte_eth_dev_get_port_by_name;
- rte_eth_xstats_get_names;
-
-} DPDK_16.04;
-
-DPDK_17.02 {
- global:
-
- _rte_eth_dev_reset;
- rte_eth_dev_fw_version_get;
-
-} DPDK_16.07;
-
-DPDK_17.05 {
- global:
-
- rte_eth_dev_attach_secondary;
- rte_eth_find_next;
rte_eth_tx_done_cleanup;
+ rte_eth_tx_queue_info_get;
+ rte_eth_tx_queue_setup;
+ rte_eth_xstats_get;
rte_eth_xstats_get_by_id;
rte_eth_xstats_get_id_by_name;
+ rte_eth_xstats_get_names;
rte_eth_xstats_get_names_by_id;
-
-} DPDK_17.02;
-
-DPDK_17.08 {
- global:
-
- _rte_eth_dev_callback_process;
- rte_eth_dev_adjust_nb_rx_tx_desc;
+ rte_eth_xstats_reset;
+ rte_flow_copy;
+ rte_flow_create;
+ rte_flow_destroy;
+ rte_flow_error_set;
+ rte_flow_flush;
+ rte_flow_isolate;
+ rte_flow_query;
+ rte_flow_validate;
rte_tm_capabilities_get;
rte_tm_get_number_of_leaf_nodes;
rte_tm_hierarchy_commit;
@@ -176,65 +167,8 @@ DPDK_17.08 {
rte_tm_wred_profile_add;
rte_tm_wred_profile_delete;
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_eth_dev_get_sec_ctx;
- rte_eth_dev_pool_ops_supported;
- rte_eth_dev_reset;
-
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_eth_dev_filter_ctrl;
-
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_eth_dev_count_avail;
- rte_eth_dev_probing_finish;
- rte_eth_find_next_owned_by;
- rte_flow_copy;
- rte_flow_create;
- rte_flow_destroy;
- rte_flow_error_set;
- rte_flow_flush;
- rte_flow_isolate;
- rte_flow_query;
- rte_flow_validate;
-
-} DPDK_18.02;
-
-DPDK_18.08 {
- global:
-
- rte_eth_dev_logtype;
-
-} DPDK_18.05;
-
-DPDK_18.11 {
- global:
-
- rte_eth_dev_rx_offload_name;
- rte_eth_dev_tx_offload_name;
- rte_eth_iterator_cleanup;
- rte_eth_iterator_init;
- rte_eth_iterator_next;
-
-} DPDK_18.08;
-
-DPDK_19.05 {
- global:
-
- rte_eth_dev_count_total;
-
-} DPDK_18.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map
index 76b3021d3a..edfc15282d 100644
--- a/lib/librte_eventdev/rte_eventdev_version.map
+++ b/lib/librte_eventdev/rte_eventdev_version.map
@@ -1,61 +1,38 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
- rte_eventdevs;
-
+ rte_event_crypto_adapter_caps_get;
+ rte_event_crypto_adapter_create;
+ rte_event_crypto_adapter_create_ext;
+ rte_event_crypto_adapter_event_port_get;
+ rte_event_crypto_adapter_free;
+ rte_event_crypto_adapter_queue_pair_add;
+ rte_event_crypto_adapter_queue_pair_del;
+ rte_event_crypto_adapter_service_id_get;
+ rte_event_crypto_adapter_start;
+ rte_event_crypto_adapter_stats_get;
+ rte_event_crypto_adapter_stats_reset;
+ rte_event_crypto_adapter_stop;
+ rte_event_dequeue_timeout_ticks;
+ rte_event_dev_attr_get;
+ rte_event_dev_close;
+ rte_event_dev_configure;
rte_event_dev_count;
+ rte_event_dev_dump;
rte_event_dev_get_dev_id;
- rte_event_dev_socket_id;
rte_event_dev_info_get;
- rte_event_dev_configure;
+ rte_event_dev_selftest;
+ rte_event_dev_service_id_get;
+ rte_event_dev_socket_id;
rte_event_dev_start;
rte_event_dev_stop;
- rte_event_dev_close;
- rte_event_dev_dump;
+ rte_event_dev_stop_flush_callback_register;
rte_event_dev_xstats_by_name_get;
rte_event_dev_xstats_get;
rte_event_dev_xstats_names_get;
rte_event_dev_xstats_reset;
-
- rte_event_port_default_conf_get;
- rte_event_port_setup;
- rte_event_port_link;
- rte_event_port_unlink;
- rte_event_port_links_get;
-
- rte_event_queue_default_conf_get;
- rte_event_queue_setup;
-
- rte_event_dequeue_timeout_ticks;
-
- rte_event_pmd_allocate;
- rte_event_pmd_release;
- rte_event_pmd_vdev_init;
- rte_event_pmd_vdev_uninit;
- rte_event_pmd_pci_probe;
- rte_event_pmd_pci_remove;
-
- local: *;
-};
-
-DPDK_17.08 {
- global:
-
- rte_event_ring_create;
- rte_event_ring_free;
- rte_event_ring_init;
- rte_event_ring_lookup;
-} DPDK_17.05;
-
-DPDK_17.11 {
- global:
-
- rte_event_dev_attr_get;
- rte_event_dev_service_id_get;
- rte_event_port_attr_get;
- rte_event_queue_attr_get;
-
rte_event_eth_rx_adapter_caps_get;
+ rte_event_eth_rx_adapter_cb_register;
rte_event_eth_rx_adapter_create;
rte_event_eth_rx_adapter_create_ext;
rte_event_eth_rx_adapter_free;
@@ -63,38 +40,9 @@ DPDK_17.11 {
rte_event_eth_rx_adapter_queue_del;
rte_event_eth_rx_adapter_service_id_get;
rte_event_eth_rx_adapter_start;
+ rte_event_eth_rx_adapter_stats_get;
rte_event_eth_rx_adapter_stats_reset;
rte_event_eth_rx_adapter_stop;
-} DPDK_17.08;
-
-DPDK_18.02 {
- global:
-
- rte_event_dev_selftest;
-} DPDK_17.11;
-
-DPDK_18.05 {
- global:
-
- rte_event_dev_stop_flush_callback_register;
-} DPDK_18.02;
-
-DPDK_19.05 {
- global:
-
- rte_event_crypto_adapter_caps_get;
- rte_event_crypto_adapter_create;
- rte_event_crypto_adapter_create_ext;
- rte_event_crypto_adapter_event_port_get;
- rte_event_crypto_adapter_free;
- rte_event_crypto_adapter_queue_pair_add;
- rte_event_crypto_adapter_queue_pair_del;
- rte_event_crypto_adapter_service_id_get;
- rte_event_crypto_adapter_start;
- rte_event_crypto_adapter_stats_get;
- rte_event_crypto_adapter_stats_reset;
- rte_event_crypto_adapter_stop;
- rte_event_port_unlinks_in_progress;
rte_event_eth_tx_adapter_caps_get;
rte_event_eth_tx_adapter_create;
rte_event_eth_tx_adapter_create_ext;
@@ -107,6 +55,26 @@ DPDK_19.05 {
rte_event_eth_tx_adapter_stats_get;
rte_event_eth_tx_adapter_stats_reset;
rte_event_eth_tx_adapter_stop;
+ rte_event_pmd_allocate;
+ rte_event_pmd_pci_probe;
+ rte_event_pmd_pci_remove;
+ rte_event_pmd_release;
+ rte_event_pmd_vdev_init;
+ rte_event_pmd_vdev_uninit;
+ rte_event_port_attr_get;
+ rte_event_port_default_conf_get;
+ rte_event_port_link;
+ rte_event_port_links_get;
+ rte_event_port_setup;
+ rte_event_port_unlink;
+ rte_event_port_unlinks_in_progress;
+ rte_event_queue_attr_get;
+ rte_event_queue_default_conf_get;
+ rte_event_queue_setup;
+ rte_event_ring_create;
+ rte_event_ring_free;
+ rte_event_ring_init;
+ rte_event_ring_lookup;
rte_event_timer_adapter_caps_get;
rte_event_timer_adapter_create;
rte_event_timer_adapter_create_ext;
@@ -121,11 +89,7 @@ DPDK_19.05 {
rte_event_timer_arm_burst;
rte_event_timer_arm_tmo_tick_burst;
rte_event_timer_cancel_burst;
-} DPDK_18.05;
+ rte_eventdevs;
-DPDK_19.08 {
- global:
-
- rte_event_eth_rx_adapter_cb_register;
- rte_event_eth_rx_adapter_stats_get;
-} DPDK_19.05;
+ local: *;
+};
diff --git a/lib/librte_flow_classify/rte_flow_classify_version.map b/lib/librte_flow_classify/rte_flow_classify_version.map
index 49bc25c6a0..001ff660e3 100644
--- a/lib/librte_flow_classify/rte_flow_classify_version.map
+++ b/lib/librte_flow_classify/rte_flow_classify_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_gro/rte_gro_version.map b/lib/librte_gro/rte_gro_version.map
index 1606b6dc72..9f6fe79e57 100644
--- a/lib/librte_gro/rte_gro_version.map
+++ b/lib/librte_gro/rte_gro_version.map
@@ -1,4 +1,4 @@
-DPDK_17.08 {
+DPDK_20.0 {
global:
rte_gro_ctx_create;
diff --git a/lib/librte_gso/rte_gso_version.map b/lib/librte_gso/rte_gso_version.map
index e1fd453edb..8505a59c27 100644
--- a/lib/librte_gso/rte_gso_version.map
+++ b/lib/librte_gso/rte_gso_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_gso_segment;
diff --git a/lib/librte_hash/rte_hash_version.map b/lib/librte_hash/rte_hash_version.map
index 734ae28b04..138c130c1b 100644
--- a/lib/librte_hash/rte_hash_version.map
+++ b/lib/librte_hash/rte_hash_version.map
@@ -1,58 +1,33 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_fbk_hash_create;
rte_fbk_hash_find_existing;
rte_fbk_hash_free;
rte_hash_add_key;
+ rte_hash_add_key_data;
rte_hash_add_key_with_hash;
+ rte_hash_add_key_with_hash_data;
+ rte_hash_count;
rte_hash_create;
rte_hash_del_key;
rte_hash_del_key_with_hash;
rte_hash_find_existing;
rte_hash_free;
+ rte_hash_get_key_with_position;
rte_hash_hash;
+ rte_hash_iterate;
rte_hash_lookup;
rte_hash_lookup_bulk;
- rte_hash_lookup_with_hash;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_hash_add_key_data;
- rte_hash_add_key_with_hash_data;
- rte_hash_iterate;
rte_hash_lookup_bulk_data;
rte_hash_lookup_data;
+ rte_hash_lookup_with_hash;
rte_hash_lookup_with_hash_data;
rte_hash_reset;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
rte_hash_set_cmp_func;
-} DPDK_2.1;
-
-DPDK_16.07 {
- global:
-
- rte_hash_get_key_with_position;
-
-} DPDK_2.2;
-
-
-DPDK_18.08 {
- global:
-
- rte_hash_count;
-
-} DPDK_16.07;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_ip_frag/rte_ip_frag_version.map b/lib/librte_ip_frag/rte_ip_frag_version.map
index a193007c61..5dd34f828c 100644
--- a/lib/librte_ip_frag/rte_ip_frag_version.map
+++ b/lib/librte_ip_frag/rte_ip_frag_version.map
@@ -1,8 +1,9 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_ip_frag_free_death_row;
rte_ip_frag_table_create;
+ rte_ip_frag_table_destroy;
rte_ip_frag_table_statistics_dump;
rte_ipv4_frag_reassemble_packet;
rte_ipv4_fragment_packet;
@@ -12,13 +13,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_17.08 {
- global:
-
- rte_ip_frag_table_destroy;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_ipsec/rte_ipsec_version.map b/lib/librte_ipsec/rte_ipsec_version.map
index ee9f1961b0..3723b812fc 100644
--- a/lib/librte_ipsec/rte_ipsec_version.map
+++ b/lib/librte_ipsec/rte_ipsec_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_jobstats/rte_jobstats_version.map b/lib/librte_jobstats/rte_jobstats_version.map
index f89441438e..dbd2664ae2 100644
--- a/lib/librte_jobstats/rte_jobstats_version.map
+++ b/lib/librte_jobstats/rte_jobstats_version.map
@@ -1,6 +1,7 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_jobstats_abort;
rte_jobstats_context_finish;
rte_jobstats_context_init;
rte_jobstats_context_reset;
@@ -17,10 +18,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_jobstats_abort;
-
-} DPDK_2.0;
diff --git a/lib/librte_kni/rte_kni_version.map b/lib/librte_kni/rte_kni_version.map
index c877dc6aaa..9cd3cedc54 100644
--- a/lib/librte_kni/rte_kni_version.map
+++ b/lib/librte_kni/rte_kni_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_kni_alloc;
diff --git a/lib/librte_kvargs/rte_kvargs_version.map b/lib/librte_kvargs/rte_kvargs_version.map
index 8f4b4e3f8f..3ba0f4b59c 100644
--- a/lib/librte_kvargs/rte_kvargs_version.map
+++ b/lib/librte_kvargs/rte_kvargs_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_kvargs_count;
@@ -15,4 +15,4 @@ EXPERIMENTAL {
rte_kvargs_parse_delim;
rte_kvargs_strcmp;
-} DPDK_2.0;
+};
diff --git a/lib/librte_latencystats/rte_latencystats_version.map b/lib/librte_latencystats/rte_latencystats_version.map
index ac8403e821..e04e63463f 100644
--- a/lib/librte_latencystats/rte_latencystats_version.map
+++ b/lib/librte_latencystats/rte_latencystats_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_latencystats_get;
diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map
index 90beac853d..500f58b806 100644
--- a/lib/librte_lpm/rte_lpm_version.map
+++ b/lib/librte_lpm/rte_lpm_version.map
@@ -1,13 +1,6 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
- rte_lpm_add;
- rte_lpm_create;
- rte_lpm_delete;
- rte_lpm_delete_all;
- rte_lpm_find_existing;
- rte_lpm_free;
- rte_lpm_is_rule_present;
rte_lpm6_add;
rte_lpm6_create;
rte_lpm6_delete;
@@ -18,29 +11,13 @@ DPDK_2.0 {
rte_lpm6_is_rule_present;
rte_lpm6_lookup;
rte_lpm6_lookup_bulk_func;
+ rte_lpm_add;
+ rte_lpm_create;
+ rte_lpm_delete;
+ rte_lpm_delete_all;
+ rte_lpm_find_existing;
+ rte_lpm_free;
+ rte_lpm_is_rule_present;
local: *;
};
-
-DPDK_16.04 {
- global:
-
- rte_lpm_add;
- rte_lpm_find_existing;
- rte_lpm_create;
- rte_lpm_free;
- rte_lpm_is_rule_present;
- rte_lpm_delete;
- rte_lpm_delete_all;
-
-} DPDK_2.0;
-
-DPDK_17.05 {
- global:
-
- rte_lpm6_add;
- rte_lpm6_is_rule_present;
- rte_lpm6_lookup;
- rte_lpm6_lookup_bulk_func;
-
-} DPDK_16.04;
diff --git a/lib/librte_mbuf/rte_mbuf_version.map b/lib/librte_mbuf/rte_mbuf_version.map
index 2662a37bf6..d20aa31857 100644
--- a/lib/librte_mbuf/rte_mbuf_version.map
+++ b/lib/librte_mbuf/rte_mbuf_version.map
@@ -1,24 +1,4 @@
-DPDK_2.0 {
- global:
-
- rte_get_rx_ol_flag_name;
- rte_get_tx_ol_flag_name;
- rte_mbuf_sanity_check;
- rte_pktmbuf_dump;
- rte_pktmbuf_init;
- rte_pktmbuf_pool_init;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_pktmbuf_pool_create;
-
-} DPDK_2.0;
-
-DPDK_16.11 {
+DPDK_20.0 {
global:
__rte_pktmbuf_read;
@@ -31,23 +11,26 @@ DPDK_16.11 {
rte_get_ptype_name;
rte_get_ptype_tunnel_name;
rte_get_rx_ol_flag_list;
+ rte_get_rx_ol_flag_name;
rte_get_tx_ol_flag_list;
-
-} DPDK_2.1;
-
-DPDK_18.08 {
- global:
-
+ rte_get_tx_ol_flag_name;
rte_mbuf_best_mempool_ops;
rte_mbuf_platform_mempool_ops;
+ rte_mbuf_sanity_check;
rte_mbuf_set_platform_mempool_ops;
rte_mbuf_set_user_mempool_ops;
rte_mbuf_user_mempool_ops;
+ rte_pktmbuf_dump;
+ rte_pktmbuf_init;
+ rte_pktmbuf_pool_create;
rte_pktmbuf_pool_create_by_ops;
-} DPDK_16.11;
+ rte_pktmbuf_pool_init;
+
+ local: *;
+};
EXPERIMENTAL {
global:
rte_mbuf_check;
-} DPDK_18.08;
+};
diff --git a/lib/librte_member/rte_member_version.map b/lib/librte_member/rte_member_version.map
index 019e4cd962..87780ae611 100644
--- a/lib/librte_member/rte_member_version.map
+++ b/lib/librte_member/rte_member_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_member_add;
diff --git a/lib/librte_mempool/rte_mempool_version.map b/lib/librte_mempool/rte_mempool_version.map
index 17cbca4607..6a425d203a 100644
--- a/lib/librte_mempool/rte_mempool_version.map
+++ b/lib/librte_mempool/rte_mempool_version.map
@@ -1,57 +1,39 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_mempool_audit;
- rte_mempool_calc_obj_size;
- rte_mempool_create;
- rte_mempool_dump;
- rte_mempool_list_dump;
- rte_mempool_lookup;
- rte_mempool_walk;
-
- local: *;
-};
-
-DPDK_16.07 {
- global:
-
rte_mempool_avail_count;
rte_mempool_cache_create;
rte_mempool_cache_flush;
rte_mempool_cache_free;
+ rte_mempool_calc_obj_size;
rte_mempool_check_cookies;
+ rte_mempool_contig_blocks_check_cookies;
+ rte_mempool_create;
rte_mempool_create_empty;
rte_mempool_default_cache;
+ rte_mempool_dump;
rte_mempool_free;
rte_mempool_generic_get;
rte_mempool_generic_put;
rte_mempool_in_use_count;
+ rte_mempool_list_dump;
+ rte_mempool_lookup;
rte_mempool_mem_iter;
rte_mempool_obj_iter;
+ rte_mempool_op_calc_mem_size_default;
+ rte_mempool_op_populate_default;
rte_mempool_ops_table;
rte_mempool_populate_anon;
rte_mempool_populate_default;
+ rte_mempool_populate_iova;
rte_mempool_populate_virt;
rte_mempool_register_ops;
rte_mempool_set_ops_byname;
+ rte_mempool_walk;
-} DPDK_2.0;
-
-DPDK_17.11 {
- global:
-
- rte_mempool_populate_iova;
-
-} DPDK_16.07;
-
-DPDK_18.05 {
- global:
-
- rte_mempool_contig_blocks_check_cookies;
- rte_mempool_op_calc_mem_size_default;
- rte_mempool_op_populate_default;
-
-} DPDK_17.11;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_meter/rte_meter_version.map b/lib/librte_meter/rte_meter_version.map
index 4b460d5803..46410b0369 100644
--- a/lib/librte_meter/rte_meter_version.map
+++ b/lib/librte_meter/rte_meter_version.map
@@ -1,21 +1,16 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_meter_srtcm_color_aware_check;
rte_meter_srtcm_color_blind_check;
rte_meter_srtcm_config;
+ rte_meter_srtcm_profile_config;
rte_meter_trtcm_color_aware_check;
rte_meter_trtcm_color_blind_check;
rte_meter_trtcm_config;
-
- local: *;
-};
-
-DPDK_18.08 {
- global:
-
- rte_meter_srtcm_profile_config;
rte_meter_trtcm_profile_config;
+
+ local: *;
};
EXPERIMENTAL {
diff --git a/lib/librte_metrics/rte_metrics_version.map b/lib/librte_metrics/rte_metrics_version.map
index 6ac99a44a1..85663f356e 100644
--- a/lib/librte_metrics/rte_metrics_version.map
+++ b/lib/librte_metrics/rte_metrics_version.map
@@ -1,4 +1,4 @@
-DPDK_17.05 {
+DPDK_20.0 {
global:
rte_metrics_get_names;
diff --git a/lib/librte_net/rte_net_version.map b/lib/librte_net/rte_net_version.map
index fffc4a3723..8a4e75a3a0 100644
--- a/lib/librte_net/rte_net_version.map
+++ b/lib/librte_net/rte_net_version.map
@@ -1,25 +1,14 @@
-DPDK_16.11 {
- global:
- rte_net_get_ptype;
-
- local: *;
-};
-
-DPDK_17.05 {
- global:
-
- rte_net_crc_calc;
- rte_net_crc_set_alg;
-
-} DPDK_16.11;
-
-DPDK_19.08 {
+DPDK_20.0 {
global:
rte_eth_random_addr;
rte_ether_format_addr;
+ rte_net_crc_calc;
+ rte_net_crc_set_alg;
+ rte_net_get_ptype;
-} DPDK_17.05;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_pci/rte_pci_version.map b/lib/librte_pci/rte_pci_version.map
index c0280277bb..539785f5f4 100644
--- a/lib/librte_pci/rte_pci_version.map
+++ b/lib/librte_pci/rte_pci_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
eal_parse_pci_BDF;
diff --git a/lib/librte_pdump/rte_pdump_version.map b/lib/librte_pdump/rte_pdump_version.map
index 3e744f3012..6d02ccce6d 100644
--- a/lib/librte_pdump/rte_pdump_version.map
+++ b/lib/librte_pdump/rte_pdump_version.map
@@ -1,4 +1,4 @@
-DPDK_16.07 {
+DPDK_20.0 {
global:
rte_pdump_disable;
diff --git a/lib/librte_pipeline/rte_pipeline_version.map b/lib/librte_pipeline/rte_pipeline_version.map
index 420f065d6e..64d38afecd 100644
--- a/lib/librte_pipeline/rte_pipeline_version.map
+++ b/lib/librte_pipeline/rte_pipeline_version.map
@@ -1,6 +1,8 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_pipeline_ah_packet_drop;
+ rte_pipeline_ah_packet_hijack;
rte_pipeline_check;
rte_pipeline_create;
rte_pipeline_flush;
@@ -9,42 +11,22 @@ DPDK_2.0 {
rte_pipeline_port_in_create;
rte_pipeline_port_in_disable;
rte_pipeline_port_in_enable;
+ rte_pipeline_port_in_stats_read;
rte_pipeline_port_out_create;
rte_pipeline_port_out_packet_insert;
+ rte_pipeline_port_out_stats_read;
rte_pipeline_run;
rte_pipeline_table_create;
rte_pipeline_table_default_entry_add;
rte_pipeline_table_default_entry_delete;
rte_pipeline_table_entry_add;
- rte_pipeline_table_entry_delete;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_pipeline_port_in_stats_read;
- rte_pipeline_port_out_stats_read;
- rte_pipeline_table_stats_read;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
rte_pipeline_table_entry_add_bulk;
+ rte_pipeline_table_entry_delete;
rte_pipeline_table_entry_delete_bulk;
+ rte_pipeline_table_stats_read;
-} DPDK_2.1;
-
-DPDK_16.04 {
- global:
-
- rte_pipeline_ah_packet_hijack;
- rte_pipeline_ah_packet_drop;
-
-} DPDK_2.2;
+ local: *;
+};
EXPERIMENTAL {
global:
diff --git a/lib/librte_port/rte_port_version.map b/lib/librte_port/rte_port_version.map
index 609bcec3ff..db1b8681d9 100644
--- a/lib/librte_port/rte_port_version.map
+++ b/lib/librte_port/rte_port_version.map
@@ -1,62 +1,32 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_port_ethdev_reader_ops;
+ rte_port_ethdev_writer_nodrop_ops;
rte_port_ethdev_writer_ops;
+ rte_port_fd_reader_ops;
+ rte_port_fd_writer_nodrop_ops;
+ rte_port_fd_writer_ops;
+ rte_port_kni_reader_ops;
+ rte_port_kni_writer_nodrop_ops;
+ rte_port_kni_writer_ops;
+ rte_port_ring_multi_reader_ops;
+ rte_port_ring_multi_writer_nodrop_ops;
+ rte_port_ring_multi_writer_ops;
rte_port_ring_reader_ipv4_frag_ops;
+ rte_port_ring_reader_ipv6_frag_ops;
rte_port_ring_reader_ops;
rte_port_ring_writer_ipv4_ras_ops;
+ rte_port_ring_writer_ipv6_ras_ops;
+ rte_port_ring_writer_nodrop_ops;
rte_port_ring_writer_ops;
rte_port_sched_reader_ops;
rte_port_sched_writer_ops;
rte_port_sink_ops;
rte_port_source_ops;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_port_ethdev_writer_nodrop_ops;
- rte_port_ring_reader_ipv6_frag_ops;
- rte_port_ring_writer_ipv6_ras_ops;
- rte_port_ring_writer_nodrop_ops;
-
-} DPDK_2.0;
-
-DPDK_2.2 {
- global:
-
- rte_port_ring_multi_reader_ops;
- rte_port_ring_multi_writer_ops;
- rte_port_ring_multi_writer_nodrop_ops;
-
-} DPDK_2.1;
-
-DPDK_16.07 {
- global:
-
- rte_port_kni_reader_ops;
- rte_port_kni_writer_ops;
- rte_port_kni_writer_nodrop_ops;
-
-} DPDK_2.2;
-
-DPDK_16.11 {
- global:
-
- rte_port_fd_reader_ops;
- rte_port_fd_writer_ops;
- rte_port_fd_writer_nodrop_ops;
-
-} DPDK_16.07;
-
-DPDK_18.11 {
- global:
-
rte_port_sym_crypto_reader_ops;
- rte_port_sym_crypto_writer_ops;
rte_port_sym_crypto_writer_nodrop_ops;
+ rte_port_sym_crypto_writer_ops;
-} DPDK_16.11;
+ local: *;
+};
diff --git a/lib/librte_power/rte_power_version.map b/lib/librte_power/rte_power_version.map
index 042917360e..a94ab30c3d 100644
--- a/lib/librte_power/rte_power_version.map
+++ b/lib/librte_power/rte_power_version.map
@@ -1,39 +1,27 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_power_exit;
+ rte_power_freq_disable_turbo;
rte_power_freq_down;
+ rte_power_freq_enable_turbo;
rte_power_freq_max;
rte_power_freq_min;
rte_power_freq_up;
rte_power_freqs;
+ rte_power_get_capabilities;
rte_power_get_env;
rte_power_get_freq;
+ rte_power_guest_channel_send_msg;
rte_power_init;
rte_power_set_env;
rte_power_set_freq;
+ rte_power_turbo_status;
rte_power_unset_env;
local: *;
};
-DPDK_17.11 {
- global:
-
- rte_power_guest_channel_send_msg;
- rte_power_freq_disable_turbo;
- rte_power_freq_enable_turbo;
- rte_power_turbo_status;
-
-} DPDK_2.0;
-
-DPDK_18.08 {
- global:
-
- rte_power_get_capabilities;
-
-} DPDK_17.11;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_rawdev/rte_rawdev_version.map b/lib/librte_rawdev/rte_rawdev_version.map
index b61dbff11c..d847c9e0d3 100644
--- a/lib/librte_rawdev/rte_rawdev_version.map
+++ b/lib/librte_rawdev/rte_rawdev_version.map
@@ -1,4 +1,4 @@
-DPDK_18.08 {
+DPDK_20.0 {
global:
rte_rawdev_close;
@@ -17,8 +17,8 @@ DPDK_18.08 {
rte_rawdev_pmd_release;
rte_rawdev_queue_conf_get;
rte_rawdev_queue_count;
- rte_rawdev_queue_setup;
rte_rawdev_queue_release;
+ rte_rawdev_queue_setup;
rte_rawdev_reset;
rte_rawdev_selftest;
rte_rawdev_set_attr;
diff --git a/lib/librte_rcu/rte_rcu_version.map b/lib/librte_rcu/rte_rcu_version.map
index f8b9ef2abb..787e51ef27 100644
--- a/lib/librte_rcu/rte_rcu_version.map
+++ b/lib/librte_rcu/rte_rcu_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_reorder/rte_reorder_version.map b/lib/librte_reorder/rte_reorder_version.map
index 0a8a54de83..cf444062df 100644
--- a/lib/librte_reorder/rte_reorder_version.map
+++ b/lib/librte_reorder/rte_reorder_version.map
@@ -1,13 +1,13 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_reorder_create;
- rte_reorder_init;
+ rte_reorder_drain;
rte_reorder_find_existing;
- rte_reorder_reset;
rte_reorder_free;
+ rte_reorder_init;
rte_reorder_insert;
- rte_reorder_drain;
+ rte_reorder_reset;
local: *;
};
diff --git a/lib/librte_ring/rte_ring_version.map b/lib/librte_ring/rte_ring_version.map
index 510c1386e0..89d84bcf48 100644
--- a/lib/librte_ring/rte_ring_version.map
+++ b/lib/librte_ring/rte_ring_version.map
@@ -1,8 +1,9 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_ring_create;
rte_ring_dump;
+ rte_ring_free;
rte_ring_get_memsize;
rte_ring_init;
rte_ring_list_dump;
@@ -11,13 +12,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_2.2 {
- global:
-
- rte_ring_free;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_sched/rte_sched_version.map b/lib/librte_sched/rte_sched_version.map
index 729588794e..1b48bfbf36 100644
--- a/lib/librte_sched/rte_sched_version.map
+++ b/lib/librte_sched/rte_sched_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_approx;
@@ -14,6 +14,9 @@ DPDK_2.0 {
rte_sched_port_enqueue;
rte_sched_port_free;
rte_sched_port_get_memory_footprint;
+ rte_sched_port_pkt_read_color;
+ rte_sched_port_pkt_read_tree_path;
+ rte_sched_port_pkt_write;
rte_sched_queue_read_stats;
rte_sched_subport_config;
rte_sched_subport_read_stats;
@@ -21,15 +24,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_2.1 {
- global:
-
- rte_sched_port_pkt_write;
- rte_sched_port_pkt_read_tree_path;
- rte_sched_port_pkt_read_color;
-
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_security/rte_security_version.map b/lib/librte_security/rte_security_version.map
index 53267bf3cc..b07314bbf4 100644
--- a/lib/librte_security/rte_security_version.map
+++ b/lib/librte_security/rte_security_version.map
@@ -1,4 +1,4 @@
-DPDK_18.11 {
+DPDK_20.0 {
global:
rte_security_attach_session;
diff --git a/lib/librte_stack/rte_stack_version.map b/lib/librte_stack/rte_stack_version.map
index 6662679c36..adbb7be9d9 100644
--- a/lib/librte_stack/rte_stack_version.map
+++ b/lib/librte_stack/rte_stack_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_table/rte_table_version.map b/lib/librte_table/rte_table_version.map
index 6237252bec..40f72b1fe8 100644
--- a/lib/librte_table/rte_table_version.map
+++ b/lib/librte_table/rte_table_version.map
@@ -1,4 +1,4 @@
-DPDK_17.11 {
+DPDK_20.0 {
global:
rte_table_acl_ops;
diff --git a/lib/librte_telemetry/rte_telemetry_version.map b/lib/librte_telemetry/rte_telemetry_version.map
index fa62d7718c..c1f4613af5 100644
--- a/lib/librte_telemetry/rte_telemetry_version.map
+++ b/lib/librte_telemetry/rte_telemetry_version.map
@@ -1,3 +1,7 @@
+DPDK_20.0 {
+ local: *;
+};
+
EXPERIMENTAL {
global:
diff --git a/lib/librte_timer/rte_timer_version.map b/lib/librte_timer/rte_timer_version.map
index 72f75c8181..2a59d3f081 100644
--- a/lib/librte_timer/rte_timer_version.map
+++ b/lib/librte_timer/rte_timer_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
rte_timer_dump_stats;
@@ -14,16 +14,6 @@ DPDK_2.0 {
local: *;
};
-DPDK_19.05 {
- global:
-
- rte_timer_dump_stats;
- rte_timer_manage;
- rte_timer_reset;
- rte_timer_stop;
- rte_timer_subsystem_init;
-} DPDK_2.0;
-
EXPERIMENTAL {
global:
diff --git a/lib/librte_vhost/rte_vhost_version.map b/lib/librte_vhost/rte_vhost_version.map
index 5f1d4a75c2..8e9ffac2c2 100644
--- a/lib/librte_vhost/rte_vhost_version.map
+++ b/lib/librte_vhost/rte_vhost_version.map
@@ -1,64 +1,34 @@
-DPDK_2.0 {
+DPDK_20.0 {
global:
+ rte_vhost_avail_entries;
rte_vhost_dequeue_burst;
rte_vhost_driver_callback_register;
- rte_vhost_driver_register;
- rte_vhost_enable_guest_notification;
- rte_vhost_enqueue_burst;
-
- local: *;
-};
-
-DPDK_2.1 {
- global:
-
- rte_vhost_driver_unregister;
-
-} DPDK_2.0;
-
-DPDK_16.07 {
- global:
-
- rte_vhost_avail_entries;
- rte_vhost_get_ifname;
- rte_vhost_get_numa_node;
- rte_vhost_get_queue_num;
-
-} DPDK_2.1;
-
-DPDK_17.05 {
- global:
-
rte_vhost_driver_disable_features;
rte_vhost_driver_enable_features;
rte_vhost_driver_get_features;
+ rte_vhost_driver_register;
rte_vhost_driver_set_features;
rte_vhost_driver_start;
+ rte_vhost_driver_unregister;
+ rte_vhost_enable_guest_notification;
+ rte_vhost_enqueue_burst;
+ rte_vhost_get_ifname;
rte_vhost_get_mem_table;
rte_vhost_get_mtu;
rte_vhost_get_negotiated_features;
+ rte_vhost_get_numa_node;
+ rte_vhost_get_queue_num;
rte_vhost_get_vhost_vring;
rte_vhost_get_vring_num;
rte_vhost_gpa_to_vva;
rte_vhost_log_used_vring;
rte_vhost_log_write;
-
-} DPDK_16.07;
-
-DPDK_17.08 {
- global:
-
rte_vhost_rx_queue_count;
-
-} DPDK_17.05;
-
-DPDK_18.02 {
- global:
-
rte_vhost_vring_call;
-} DPDK_17.08;
+ local: *;
+};
EXPERIMENTAL {
global:
--
2.17.1
^ permalink raw reply [relevance 2%]
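All of the version-map hunks in this patch converge on the same shape. For readers unfamiliar with GNU ld version scripts, a minimal sketch of what a consolidated map looks like after the change (symbol names borrowed from the vhost diff above; the layout is illustrative, not a verbatim result file):

```text
DPDK_20.0 {
	global:

	rte_vhost_dequeue_burst;
	rte_vhost_enqueue_burst;
	/* ... all other stable symbols, one sorted list ... */

	local: *;  /* everything unlisted stays internal */
};

EXPERIMENTAL {
	global:

	/* experimental symbols stay in their own node until they stabilize */
};
```

The effect is that stable symbols are all tagged with the single DPDK_20.0 version node instead of a chain of per-release nodes (DPDK_2.0, DPDK_17.05, ...) inheriting from one another.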
* [dpdk-dev] [PATCH v3 9/9] buildtools: add ABI versioning check script
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
` (8 preceding siblings ...)
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 8/9] build: change ABI version to 20.0 Anatoly Burakov
@ 2019-10-16 17:03 23% ` Anatoly Burakov
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, john.mcnamara, bruce.richardson, thomas,
david.marchand, Pawel Modrak
From: Marcin Baran <marcinx.baran@intel.com>
Add a shell script that checks whether built libraries are
versioned with the expected ABI (current ABI, current ABI + 1,
or EXPERIMENTAL).
The following command was used to verify the current source tree
(assuming the build directory is in ./build):
find ./build/lib ./build/drivers -name \*.so \
-exec ./buildtools/check-abi-version.sh {} \; -print
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Moved this to the end of the patchset
- Fixed bug when ABI symbols were not found because the .so
did not declare any public symbols
buildtools/check-abi-version.sh | 54 +++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
create mode 100755 buildtools/check-abi-version.sh
diff --git a/buildtools/check-abi-version.sh b/buildtools/check-abi-version.sh
new file mode 100755
index 0000000000..29aea97735
--- /dev/null
+++ b/buildtools/check-abi-version.sh
@@ -0,0 +1,54 @@
+#!/bin/sh
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+# Check whether library symbols have correct
+# version (provided ABI number or provided ABI
+# number + 1 or EXPERIMENTAL).
+# Args:
+# $1: path of the library .so file
+# $2: ABI major version number to check
+# (defaults to ABI_VERSION file value)
+
+if [ -z "$1" ]; then
+ echo "Script checks whether library symbols have"
+ echo "correct version (ABI_VER/ABI_VER+1/EXPERIMENTAL)"
+ echo "Usage:"
+ echo " $0 SO_FILE_PATH [ABI_VER]"
+ exit 1
+fi
+
+LIB="$1"
+DEFAULT_ABI=$(cat "$(dirname \
+ $(readlink -f $0))/../config/ABI_VERSION" | \
+ cut -d'.' -f 1)
+ABIVER="DPDK_${2-$DEFAULT_ABI}"
+NEXT_ABIVER="DPDK_$((${2-$DEFAULT_ABI}+1))"
+
+ret=0
+
+# get output of objdump
+OBJ_DUMP_OUTPUT=`objdump -TC --section=.text ${LIB} 2>&1 | grep ".text"`
+
+# there may not be any .text sections in the .so file, in which case exit early
+echo "${OBJ_DUMP_OUTPUT}" | grep "not found in any input file" -q
+if [ "$?" -eq 0 ]; then
+ exit 0
+fi
+
+# we have symbols, so let's see if the versions are correct
+for SYM in `echo "${OBJ_DUMP_OUTPUT}" | awk '{print $(NF-1) "-" $NF}'`
+do
+ version=$(echo $SYM | cut -d'-' -f 1)
+ symbol=$(echo $SYM | cut -d'-' -f 2)
+ case $version in (*"$ABIVER"*|*"$NEXT_ABIVER"*|"EXPERIMENTAL")
+ ;;
+ (*)
+ echo "Warning: symbol $symbol ($version) should be annotated " \
+ "as ABI version $ABIVER / $NEXT_ABIVER, or EXPERIMENTAL."
+ ret=1
+ ;;
+ esac
+done
+
+exit $ret
--
2.17.1
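As a quick sanity check of the matching logic, the `case` statement at the heart of check-abi-version.sh can be exercised on hand-written version-symbol pairs instead of real `objdump` output (the pairs and symbol names below are made up for illustration):

```shell
# Exercise the version-classification logic from check-abi-version.sh
# on synthetic "VERSION-symbol" pairs instead of real objdump output.
ABIVER="DPDK_20"
NEXT_ABIVER="DPDK_21"

ok_count=0
warn_count=0
for SYM in "DPDK_20-rte_ok" "EXPERIMENTAL-rte_new" "DPDK_17.05-rte_stale"; do
	version=$(echo "$SYM" | cut -d'-' -f 1)
	symbol=$(echo "$SYM" | cut -d'-' -f 2)
	case $version in (*"$ABIVER"*|*"$NEXT_ABIVER"*|"EXPERIMENTAL")
		# accepted: current ABI, next ABI, or experimental
		echo "ok: $symbol ($version)"
		ok_count=$((ok_count + 1))
		;;
	(*)
		# anything else is flagged, exactly as the script warns
		echo "warn: $symbol ($version)"
		warn_count=$((warn_count + 1))
		;;
	esac
done
```

Here `rte_ok` and `rte_new` pass while `rte_stale` (still tagged DPDK_17.05) is flagged, which mirrors the warning the script emits on an unbumped library.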
* [dpdk-dev] [PATCH v3 7/9] drivers/octeontx: add missing public symbol
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
` (6 preceding siblings ...)
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 6/9] distributor: " Anatoly Burakov
@ 2019-10-16 17:03 3% ` Anatoly Burakov
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 8/9] build: change ABI version to 20.0 Anatoly Burakov
2019-10-16 17:03 23% ` [dpdk-dev] [PATCH v3 9/9] buildtools: add ABI versioning check script Anatoly Burakov
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev
Cc: Jerin Jacob, john.mcnamara, bruce.richardson, thomas,
david.marchand, pbhagavatula, stable
The logtype symbol was missing from the .map file, so add it.
Fixes: d8dd31652cf4 ("common/octeontx: move mbox to common folder")
Cc: pbhagavatula@caviumnetworks.com
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- add this patch to avoid compile breakage when bumping ABI
drivers/common/octeontx/rte_common_octeontx_version.map | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index f04b3b7f8a..a9b3cff9bc 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,6 +1,7 @@
DPDK_18.05 {
global:
+ octeontx_logtype_mbox;
octeontx_mbox_set_ram_mbox_base;
octeontx_mbox_set_reg;
octeontx_mbox_send;
--
2.17.1
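A symbol such as octeontx_logtype_mbox going missing from a map file is easy to catch mechanically: compare the exported-symbol list of the built object against the symbols named in the map. A minimal sketch, with hand-written lists standing in for real `nm`/map output (the file handling and lists are illustrative assumptions, not part of the patch):

```shell
# Symbols only in the first list are exported by the object but absent
# from the version map -- i.e. candidates for the fix in this patch.
obj_syms=$(mktemp)
map_syms=$(mktemp)
printf '%s\n' octeontx_logtype_mbox octeontx_mbox_send | sort > "$obj_syms"
printf '%s\n' octeontx_mbox_send | sort > "$map_syms"
missing=$(comm -23 "$obj_syms" "$map_syms")   # lines unique to obj_syms
echo "missing from map: $missing"
rm -f "$obj_syms" "$map_syms"
```

In a real tree the first list would come from something like `nm -D --defined-only` on the .so and the second from parsing the .map file.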
* [dpdk-dev] [PATCH v3 6/9] distributor: remove deprecated code
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
` (5 preceding siblings ...)
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 5/9] lpm: " Anatoly Burakov
@ 2019-10-16 17:03 2% ` Anatoly Burakov
2019-10-17 10:53 0% ` Hunt, David
2019-10-16 17:03 3% ` [dpdk-dev] [PATCH v3 7/9] drivers/octeontx: add missing public symbol Anatoly Burakov
` (2 subsequent siblings)
9 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, David Hunt, john.mcnamara, bruce.richardson,
thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of the ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v3:
- Removed single mode from distributor as per Dave's comments
v2:
- Moved this to before ABI version bump to avoid compile breakage
app/test/test_distributor.c | 102 ++---
app/test/test_distributor_perf.c | 12 -
lib/librte_distributor/Makefile | 1 -
lib/librte_distributor/meson.build | 2 +-
lib/librte_distributor/rte_distributor.c | 126 +-----
lib/librte_distributor/rte_distributor.h | 1 -
.../rte_distributor_private.h | 35 --
.../rte_distributor_v1705.h | 61 ---
lib/librte_distributor/rte_distributor_v20.c | 402 ------------------
lib/librte_distributor/rte_distributor_v20.h | 218 ----------
.../rte_distributor_version.map | 16 +-
11 files changed, 38 insertions(+), 938 deletions(-)
delete mode 100644 lib/librte_distributor/rte_distributor_v1705.h
delete mode 100644 lib/librte_distributor/rte_distributor_v20.c
delete mode 100644 lib/librte_distributor/rte_distributor_v20.h
diff --git a/app/test/test_distributor.c b/app/test/test_distributor.c
index 7090b55f88..af42f3a991 100644
--- a/app/test/test_distributor.c
+++ b/app/test/test_distributor.c
@@ -511,18 +511,9 @@ test_flush_with_worker_shutdown(struct worker_params *wp,
static
int test_error_distributor_create_name(void)
{
- struct rte_distributor *d = NULL;
struct rte_distributor *db = NULL;
char *name = NULL;
- d = rte_distributor_create(name, rte_socket_id(),
- rte_lcore_count() - 1,
- RTE_DIST_ALG_SINGLE);
- if (d != NULL || rte_errno != EINVAL) {
- printf("ERROR: No error on create() with NULL name param\n");
- return -1;
- }
-
db = rte_distributor_create(name, rte_socket_id(),
rte_lcore_count() - 1,
RTE_DIST_ALG_BURST);
@@ -538,17 +529,8 @@ int test_error_distributor_create_name(void)
static
int test_error_distributor_create_numworkers(void)
{
- struct rte_distributor *ds = NULL;
struct rte_distributor *db = NULL;
- ds = rte_distributor_create("test_numworkers", rte_socket_id(),
- RTE_MAX_LCORE + 10,
- RTE_DIST_ALG_SINGLE);
- if (ds != NULL || rte_errno != EINVAL) {
- printf("ERROR: No error on create() with num_workers > MAX\n");
- return -1;
- }
-
db = rte_distributor_create("test_numworkers", rte_socket_id(),
RTE_MAX_LCORE + 10,
RTE_DIST_ALG_BURST);
@@ -589,11 +571,8 @@ quit_workers(struct worker_params *wp, struct rte_mempool *p)
static int
test_distributor(void)
{
- static struct rte_distributor *ds;
static struct rte_distributor *db;
- static struct rte_distributor *dist[2];
static struct rte_mempool *p;
- int i;
if (rte_lcore_count() < 2) {
printf("Not enough cores for distributor_autotest, expecting at least 2\n");
@@ -613,20 +592,6 @@ test_distributor(void)
rte_distributor_clear_returns(db);
}
- if (ds == NULL) {
- ds = rte_distributor_create("Test_dist_single",
- rte_socket_id(),
- rte_lcore_count() - 1,
- RTE_DIST_ALG_SINGLE);
- if (ds == NULL) {
- printf("Error creating single distributor\n");
- return -1;
- }
- } else {
- rte_distributor_flush(ds);
- rte_distributor_clear_returns(ds);
- }
-
const unsigned nb_bufs = (511 * rte_lcore_count()) < BIG_BATCH ?
(BIG_BATCH * 2) - 1 : (511 * rte_lcore_count());
if (p == NULL) {
@@ -638,52 +603,39 @@ test_distributor(void)
}
}
- dist[0] = ds;
- dist[1] = db;
-
- for (i = 0; i < 2; i++) {
-
- worker_params.dist = dist[i];
- if (i)
- strlcpy(worker_params.name, "burst",
- sizeof(worker_params.name));
- else
- strlcpy(worker_params.name, "single",
- sizeof(worker_params.name));
-
- rte_eal_mp_remote_launch(handle_work,
- &worker_params, SKIP_MASTER);
- if (sanity_test(&worker_params, p) < 0)
+ worker_params.dist = db;
+
+ rte_eal_mp_remote_launch(handle_work,
+ &worker_params, SKIP_MASTER);
+ if (sanity_test(&worker_params, p) < 0)
+ goto err;
+ quit_workers(&worker_params, p);
+
+ rte_eal_mp_remote_launch(handle_work_with_free_mbufs,
+ &worker_params, SKIP_MASTER);
+ if (sanity_test_with_mbuf_alloc(&worker_params, p) < 0)
+ goto err;
+ quit_workers(&worker_params, p);
+
+ if (rte_lcore_count() > 2) {
+ rte_eal_mp_remote_launch(handle_work_for_shutdown_test,
+ &worker_params,
+ SKIP_MASTER);
+ if (sanity_test_with_worker_shutdown(&worker_params,
+ p) < 0)
goto err;
quit_workers(&worker_params, p);
- rte_eal_mp_remote_launch(handle_work_with_free_mbufs,
- &worker_params, SKIP_MASTER);
- if (sanity_test_with_mbuf_alloc(&worker_params, p) < 0)
+ rte_eal_mp_remote_launch(handle_work_for_shutdown_test,
+ &worker_params,
+ SKIP_MASTER);
+ if (test_flush_with_worker_shutdown(&worker_params,
+ p) < 0)
goto err;
quit_workers(&worker_params, p);
- if (rte_lcore_count() > 2) {
- rte_eal_mp_remote_launch(handle_work_for_shutdown_test,
- &worker_params,
- SKIP_MASTER);
- if (sanity_test_with_worker_shutdown(&worker_params,
- p) < 0)
- goto err;
- quit_workers(&worker_params, p);
-
- rte_eal_mp_remote_launch(handle_work_for_shutdown_test,
- &worker_params,
- SKIP_MASTER);
- if (test_flush_with_worker_shutdown(&worker_params,
- p) < 0)
- goto err;
- quit_workers(&worker_params, p);
-
- } else {
- printf("Too few cores to run worker shutdown test\n");
- }
-
+ } else {
+ printf("Too few cores to run worker shutdown test\n");
}
if (test_error_distributor_create_numworkers() == -1 ||
diff --git a/app/test/test_distributor_perf.c b/app/test/test_distributor_perf.c
index 664530ff9e..a0bbae1a16 100644
--- a/app/test/test_distributor_perf.c
+++ b/app/test/test_distributor_perf.c
@@ -215,18 +215,6 @@ test_distributor_perf(void)
/* first time how long it takes to round-trip a cache line */
time_cache_line_switch();
- if (ds == NULL) {
- ds = rte_distributor_create("Test_perf", rte_socket_id(),
- rte_lcore_count() - 1,
- RTE_DIST_ALG_SINGLE);
- if (ds == NULL) {
- printf("Error creating distributor\n");
- return -1;
- }
- } else {
- rte_distributor_clear_returns(ds);
- }
-
if (db == NULL) {
db = rte_distributor_create("Test_burst", rte_socket_id(),
rte_lcore_count() - 1,
diff --git a/lib/librte_distributor/Makefile b/lib/librte_distributor/Makefile
index 0ef80dcff4..54e9b0cc27 100644
--- a/lib/librte_distributor/Makefile
+++ b/lib/librte_distributor/Makefile
@@ -15,7 +15,6 @@ EXPORT_MAP := rte_distributor_version.map
LIBABIVER := 1
# all source are stored in SRCS-y
-SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) := rte_distributor_v20.c
SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += rte_distributor.c
ifeq ($(CONFIG_RTE_ARCH_X86),y)
SRCS-$(CONFIG_RTE_LIBRTE_DISTRIBUTOR) += rte_distributor_match_sse.c
diff --git a/lib/librte_distributor/meson.build b/lib/librte_distributor/meson.build
index dba7e3b2aa..d3e2aaa9e0 100644
--- a/lib/librte_distributor/meson.build
+++ b/lib/librte_distributor/meson.build
@@ -1,7 +1,7 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright(c) 2017 Intel Corporation
-sources = files('rte_distributor.c', 'rte_distributor_v20.c')
+sources = files('rte_distributor.c')
if arch_subdir == 'x86'
sources += files('rte_distributor_match_sse.c')
else
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index 21eb1fb0a1..d74fa468c8 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -18,8 +18,6 @@
#include "rte_distributor_private.h"
#include "rte_distributor.h"
-#include "rte_distributor_v20.h"
-#include "rte_distributor_v1705.h"
TAILQ_HEAD(rte_dist_burst_list, rte_distributor);
@@ -33,7 +31,7 @@ EAL_REGISTER_TAILQ(rte_dist_burst_tailq)
/**** Burst Packet APIs called by workers ****/
void
-rte_distributor_request_pkt_v1705(struct rte_distributor *d,
+rte_distributor_request_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **oldpkt,
unsigned int count)
{
@@ -42,12 +40,6 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
volatile int64_t *retptr64;
- if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
- rte_distributor_request_pkt_v20(d->d_v20,
- worker_id, oldpkt[0]);
- return;
- }
-
retptr64 = &(buf->retptr64[0]);
/* Spin while handshake bits are set (scheduler clears it) */
while (unlikely(*retptr64 & RTE_DISTRIB_GET_BUF)) {
@@ -78,14 +70,9 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
*/
*retptr64 |= RTE_DISTRIB_GET_BUF;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_request_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(void rte_distributor_request_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt,
- unsigned int count),
- rte_distributor_request_pkt_v1705);
int
-rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
+rte_distributor_poll_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **pkts)
{
struct rte_distributor_buffer *buf = &d->bufs[worker_id];
@@ -93,11 +80,6 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
int count = 0;
unsigned int i;
- if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
- pkts[0] = rte_distributor_poll_pkt_v20(d->d_v20, worker_id);
- return (pkts[0]) ? 1 : 0;
- }
-
/* If bit is set, return */
if (buf->bufptr64[0] & RTE_DISTRIB_GET_BUF)
return -1;
@@ -119,27 +101,14 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
return count;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_poll_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_poll_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts),
- rte_distributor_poll_pkt_v1705);
int
-rte_distributor_get_pkt_v1705(struct rte_distributor *d,
+rte_distributor_get_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **pkts,
struct rte_mbuf **oldpkt, unsigned int return_count)
{
int count;
- if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
- if (return_count <= 1) {
- pkts[0] = rte_distributor_get_pkt_v20(d->d_v20,
- worker_id, oldpkt[0]);
- return (pkts[0]) ? 1 : 0;
- } else
- return -EINVAL;
- }
-
rte_distributor_request_pkt(d, worker_id, oldpkt, return_count);
count = rte_distributor_poll_pkt(d, worker_id, pkts);
@@ -153,27 +122,14 @@ rte_distributor_get_pkt_v1705(struct rte_distributor *d,
}
return count;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_get_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_get_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts,
- struct rte_mbuf **oldpkt, unsigned int return_count),
- rte_distributor_get_pkt_v1705);
int
-rte_distributor_return_pkt_v1705(struct rte_distributor *d,
+rte_distributor_return_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **oldpkt, int num)
{
struct rte_distributor_buffer *buf = &d->bufs[worker_id];
unsigned int i;
- if (unlikely(d->alg_type == RTE_DIST_ALG_SINGLE)) {
- if (num == 1)
- return rte_distributor_return_pkt_v20(d->d_v20,
- worker_id, oldpkt[0]);
- else
- return -EINVAL;
- }
-
for (i = 0; i < RTE_DIST_BURST_SIZE; i++)
/* Switch off the return bit first */
buf->retptr64[i] &= ~RTE_DISTRIB_RETURN_BUF;
@@ -187,10 +143,6 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_return_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_return_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt, int num),
- rte_distributor_return_pkt_v1705);
/**** APIs called on distributor core ***/
@@ -336,7 +288,7 @@ release(struct rte_distributor *d, unsigned int wkr)
/* process a set of packets to distribute them to workers */
int
-rte_distributor_process_v1705(struct rte_distributor *d,
+rte_distributor_process(struct rte_distributor *d,
struct rte_mbuf **mbufs, unsigned int num_mbufs)
{
unsigned int next_idx = 0;
@@ -347,11 +299,6 @@ rte_distributor_process_v1705(struct rte_distributor *d,
uint16_t flows[RTE_DIST_BURST_SIZE] __rte_cache_aligned;
unsigned int i, j, w, wid;
- if (d->alg_type == RTE_DIST_ALG_SINGLE) {
- /* Call the old API */
- return rte_distributor_process_v20(d->d_v20, mbufs, num_mbufs);
- }
-
if (unlikely(num_mbufs == 0)) {
/* Flush out all non-full cache-lines to workers. */
for (wid = 0 ; wid < d->num_workers; wid++) {
@@ -470,14 +417,10 @@ rte_distributor_process_v1705(struct rte_distributor *d,
return num_mbufs;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_process, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_process(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs),
- rte_distributor_process_v1705);
/* return to the caller, packets returned from workers */
int
-rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
+rte_distributor_returned_pkts(struct rte_distributor *d,
struct rte_mbuf **mbufs, unsigned int max_mbufs)
{
struct rte_distributor_returned_pkts *returns = &d->returns;
@@ -485,12 +428,6 @@ rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
max_mbufs : returns->count;
unsigned int i;
- if (d->alg_type == RTE_DIST_ALG_SINGLE) {
- /* Call the old API */
- return rte_distributor_returned_pkts_v20(d->d_v20,
- mbufs, max_mbufs);
- }
-
for (i = 0; i < retval; i++) {
unsigned int idx = (returns->start + i) &
RTE_DISTRIB_RETURNS_MASK;
@@ -502,10 +439,6 @@ rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
return retval;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_returned_pkts, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_returned_pkts(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs),
- rte_distributor_returned_pkts_v1705);
/*
* Return the number of packets in-flight in a distributor, i.e. packets
@@ -527,16 +460,11 @@ total_outstanding(const struct rte_distributor *d)
* queued up.
*/
int
-rte_distributor_flush_v1705(struct rte_distributor *d)
+rte_distributor_flush(struct rte_distributor *d)
{
unsigned int flushed;
unsigned int wkr;
- if (d->alg_type == RTE_DIST_ALG_SINGLE) {
- /* Call the old API */
- return rte_distributor_flush_v20(d->d_v20);
- }
-
flushed = total_outstanding(d);
while (total_outstanding(d) > 0)
@@ -556,33 +484,21 @@ rte_distributor_flush_v1705(struct rte_distributor *d)
return flushed;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_flush, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_flush(struct rte_distributor *d),
- rte_distributor_flush_v1705);
/* clears the internal returns array in the distributor */
void
-rte_distributor_clear_returns_v1705(struct rte_distributor *d)
+rte_distributor_clear_returns(struct rte_distributor *d)
{
unsigned int wkr;
- if (d->alg_type == RTE_DIST_ALG_SINGLE) {
- /* Call the old API */
- rte_distributor_clear_returns_v20(d->d_v20);
- return;
- }
-
/* throw away returns, so workers can exit */
for (wkr = 0; wkr < d->num_workers; wkr++)
d->bufs[wkr].retptr64[0] = 0;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_clear_returns, _v1705, 17.05);
-MAP_STATIC_SYMBOL(void rte_distributor_clear_returns(struct rte_distributor *d),
- rte_distributor_clear_returns_v1705);
/* creates a distributor instance */
struct rte_distributor *
-rte_distributor_create_v1705(const char *name,
+rte_distributor_create(const char *name,
unsigned int socket_id,
unsigned int num_workers,
unsigned int alg_type)
@@ -593,8 +509,6 @@ rte_distributor_create_v1705(const char *name,
const struct rte_memzone *mz;
unsigned int i;
- /* TODO Reorganise function properly around RTE_DIST_ALG_SINGLE/BURST */
-
/* compilation-time checks */
RTE_BUILD_BUG_ON((sizeof(*d) & RTE_CACHE_LINE_MASK) != 0);
RTE_BUILD_BUG_ON((RTE_DISTRIB_MAX_WORKERS & 7) != 0);
@@ -605,23 +519,6 @@ rte_distributor_create_v1705(const char *name,
return NULL;
}
- if (alg_type == RTE_DIST_ALG_SINGLE) {
- d = malloc(sizeof(struct rte_distributor));
- if (d == NULL) {
- rte_errno = ENOMEM;
- return NULL;
- }
- d->d_v20 = rte_distributor_create_v20(name,
- socket_id, num_workers);
- if (d->d_v20 == NULL) {
- free(d);
- /* rte_errno will have been set */
- return NULL;
- }
- d->alg_type = alg_type;
- return d;
- }
-
snprintf(mz_name, sizeof(mz_name), RTE_DISTRIB_PREFIX"%s", name);
mz = rte_memzone_reserve(mz_name, sizeof(*d), socket_id, NO_FLAGS);
if (mz == NULL) {
@@ -656,8 +553,3 @@ rte_distributor_create_v1705(const char *name,
return d;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_create, _v1705, 17.05);
-MAP_STATIC_SYMBOL(struct rte_distributor *rte_distributor_create(
- const char *name, unsigned int socket_id,
- unsigned int num_workers, unsigned int alg_type),
- rte_distributor_create_v1705);
diff --git a/lib/librte_distributor/rte_distributor.h b/lib/librte_distributor/rte_distributor.h
index 327c0c4ab2..41c06093ee 100644
--- a/lib/librte_distributor/rte_distributor.h
+++ b/lib/librte_distributor/rte_distributor.h
@@ -20,7 +20,6 @@ extern "C" {
/* Type of distribution (burst/single) */
enum rte_distributor_alg_type {
RTE_DIST_ALG_BURST = 0,
- RTE_DIST_ALG_SINGLE,
RTE_DIST_NUM_ALG_TYPES
};
diff --git a/lib/librte_distributor/rte_distributor_private.h b/lib/librte_distributor/rte_distributor_private.h
index 33cd89410c..552eecc88f 100644
--- a/lib/librte_distributor/rte_distributor_private.h
+++ b/lib/librte_distributor/rte_distributor_private.h
@@ -48,18 +48,6 @@ extern "C" {
#define RTE_DISTRIBUTOR_NAMESIZE 32 /**< Length of name for instance */
-/**
- * Buffer structure used to pass the pointer data between cores. This is cache
- * line aligned, but to improve performance and prevent adjacent cache-line
- * prefetches of buffers for other workers, e.g. when worker 1's buffer is on
- * the next cache line to worker 0, we pad this out to three cache lines.
- * Only 64-bits of the memory is actually used though.
- */
-union rte_distributor_buffer_v20 {
- volatile int64_t bufptr64;
- char pad[RTE_CACHE_LINE_SIZE*3];
-} __rte_cache_aligned;
-
/*
* Transfer up to 8 mbufs at a time to/from workers, and
* flow matching algorithm optimized for 8 flow IDs at a time
@@ -80,27 +68,6 @@ struct rte_distributor_returned_pkts {
struct rte_mbuf *mbufs[RTE_DISTRIB_MAX_RETURNS];
};
-struct rte_distributor_v20 {
- TAILQ_ENTRY(rte_distributor_v20) next; /**< Next in list. */
-
- char name[RTE_DISTRIBUTOR_NAMESIZE]; /**< Name of the ring. */
- unsigned int num_workers; /**< Number of workers polling */
-
- uint32_t in_flight_tags[RTE_DISTRIB_MAX_WORKERS];
- /**< Tracks the tag being processed per core */
- uint64_t in_flight_bitmask;
- /**< on/off bits for in-flight tags.
- * Note that if RTE_DISTRIB_MAX_WORKERS is larger than 64 then
- * the bitmask has to expand.
- */
-
- struct rte_distributor_backlog backlog[RTE_DISTRIB_MAX_WORKERS];
-
- union rte_distributor_buffer_v20 bufs[RTE_DISTRIB_MAX_WORKERS];
-
- struct rte_distributor_returned_pkts returns;
-};
-
/* All different signature compare functions */
enum rte_distributor_match_function {
RTE_DIST_MATCH_SCALAR = 0,
@@ -153,8 +120,6 @@ struct rte_distributor {
struct rte_distributor_returned_pkts returns;
enum rte_distributor_match_function dist_match_fn;
-
- struct rte_distributor_v20 *d_v20;
};
void
diff --git a/lib/librte_distributor/rte_distributor_v1705.h b/lib/librte_distributor/rte_distributor_v1705.h
deleted file mode 100644
index df4d9e8150..0000000000
--- a/lib/librte_distributor/rte_distributor_v1705.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Intel Corporation
- */
-
-#ifndef _RTE_DISTRIB_V1705_H_
-#define _RTE_DISTRIB_V1705_H_
-
-/**
- * @file
- * RTE distributor
- *
- * The distributor is a component which is designed to pass packets
- * one-at-a-time to workers, with dynamic load balancing.
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-struct rte_distributor *
-rte_distributor_create_v1705(const char *name, unsigned int socket_id,
- unsigned int num_workers,
- unsigned int alg_type);
-
-int
-rte_distributor_process_v1705(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs);
-
-int
-rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs);
-
-int
-rte_distributor_flush_v1705(struct rte_distributor *d);
-
-void
-rte_distributor_clear_returns_v1705(struct rte_distributor *d);
-
-int
-rte_distributor_get_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts,
- struct rte_mbuf **oldpkt, unsigned int retcount);
-
-int
-rte_distributor_return_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt, int num);
-
-void
-rte_distributor_request_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt,
- unsigned int count);
-
-int
-rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **mbufs);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
diff --git a/lib/librte_distributor/rte_distributor_v20.c b/lib/librte_distributor/rte_distributor_v20.c
deleted file mode 100644
index cdc0969a89..0000000000
--- a/lib/librte_distributor/rte_distributor_v20.c
+++ /dev/null
@@ -1,402 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#include <stdio.h>
-#include <sys/queue.h>
-#include <string.h>
-#include <rte_mbuf.h>
-#include <rte_memory.h>
-#include <rte_memzone.h>
-#include <rte_errno.h>
-#include <rte_compat.h>
-#include <rte_string_fns.h>
-#include <rte_eal_memconfig.h>
-#include <rte_pause.h>
-#include <rte_tailq.h>
-
-#include "rte_distributor_v20.h"
-#include "rte_distributor_private.h"
-
-TAILQ_HEAD(rte_distributor_list, rte_distributor_v20);
-
-static struct rte_tailq_elem rte_distributor_tailq = {
- .name = "RTE_DISTRIBUTOR",
-};
-EAL_REGISTER_TAILQ(rte_distributor_tailq)
-
-/**** APIs called by workers ****/
-
-void
-rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
- unsigned worker_id, struct rte_mbuf *oldpkt)
-{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
- int64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
- | RTE_DISTRIB_GET_BUF;
- while (unlikely(buf->bufptr64 & RTE_DISTRIB_FLAGS_MASK))
- rte_pause();
- buf->bufptr64 = req;
-}
-VERSION_SYMBOL(rte_distributor_request_pkt, _v20, 2.0);
-
-struct rte_mbuf *
-rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
- unsigned worker_id)
-{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
- if (buf->bufptr64 & RTE_DISTRIB_GET_BUF)
- return NULL;
-
- /* since bufptr64 is signed, this should be an arithmetic shift */
- int64_t ret = buf->bufptr64 >> RTE_DISTRIB_FLAG_BITS;
- return (struct rte_mbuf *)((uintptr_t)ret);
-}
-VERSION_SYMBOL(rte_distributor_poll_pkt, _v20, 2.0);
-
-struct rte_mbuf *
-rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
- unsigned worker_id, struct rte_mbuf *oldpkt)
-{
- struct rte_mbuf *ret;
- rte_distributor_request_pkt_v20(d, worker_id, oldpkt);
- while ((ret = rte_distributor_poll_pkt_v20(d, worker_id)) == NULL)
- rte_pause();
- return ret;
-}
-VERSION_SYMBOL(rte_distributor_get_pkt, _v20, 2.0);
-
-int
-rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
- unsigned worker_id, struct rte_mbuf *oldpkt)
-{
- union rte_distributor_buffer_v20 *buf = &d->bufs[worker_id];
- uint64_t req = (((int64_t)(uintptr_t)oldpkt) << RTE_DISTRIB_FLAG_BITS)
- | RTE_DISTRIB_RETURN_BUF;
- buf->bufptr64 = req;
- return 0;
-}
-VERSION_SYMBOL(rte_distributor_return_pkt, _v20, 2.0);
-
-/**** APIs called on distributor core ***/
-
-/* as name suggests, adds a packet to the backlog for a particular worker */
-static int
-add_to_backlog(struct rte_distributor_backlog *bl, int64_t item)
-{
- if (bl->count == RTE_DISTRIB_BACKLOG_SIZE)
- return -1;
-
- bl->pkts[(bl->start + bl->count++) & (RTE_DISTRIB_BACKLOG_MASK)]
- = item;
- return 0;
-}
-
-/* takes the next packet for a worker off the backlog */
-static int64_t
-backlog_pop(struct rte_distributor_backlog *bl)
-{
- bl->count--;
- return bl->pkts[bl->start++ & RTE_DISTRIB_BACKLOG_MASK];
-}
-
-/* stores a packet returned from a worker inside the returns array */
-static inline void
-store_return(uintptr_t oldbuf, struct rte_distributor_v20 *d,
- unsigned *ret_start, unsigned *ret_count)
-{
- /* store returns in a circular buffer - code is branch-free */
- d->returns.mbufs[(*ret_start + *ret_count) & RTE_DISTRIB_RETURNS_MASK]
- = (void *)oldbuf;
- *ret_start += (*ret_count == RTE_DISTRIB_RETURNS_MASK) & !!(oldbuf);
- *ret_count += (*ret_count != RTE_DISTRIB_RETURNS_MASK) & !!(oldbuf);
-}
-
-static inline void
-handle_worker_shutdown(struct rte_distributor_v20 *d, unsigned int wkr)
-{
- d->in_flight_tags[wkr] = 0;
- d->in_flight_bitmask &= ~(1UL << wkr);
- d->bufs[wkr].bufptr64 = 0;
- if (unlikely(d->backlog[wkr].count != 0)) {
- /* On return of a packet, we need to move the
- * queued packets for this core elsewhere.
- * Easiest solution is to set things up for
- * a recursive call. That will cause those
- * packets to be queued up for the next free
- * core, i.e. it will return as soon as a
- * core becomes free to accept the first
- * packet, as subsequent ones will be added to
- * the backlog for that core.
- */
- struct rte_mbuf *pkts[RTE_DISTRIB_BACKLOG_SIZE];
- unsigned i;
- struct rte_distributor_backlog *bl = &d->backlog[wkr];
-
- for (i = 0; i < bl->count; i++) {
- unsigned idx = (bl->start + i) &
- RTE_DISTRIB_BACKLOG_MASK;
- pkts[i] = (void *)((uintptr_t)(bl->pkts[idx] >>
- RTE_DISTRIB_FLAG_BITS));
- }
- /* recursive call.
- * Note that the tags were set before first level call
- * to rte_distributor_process.
- */
- rte_distributor_process_v20(d, pkts, i);
- bl->count = bl->start = 0;
- }
-}
-
-/* this function is called when process() fn is called without any new
- * packets. It goes through all the workers and clears any returned packets
- * to do a partial flush.
- */
-static int
-process_returns(struct rte_distributor_v20 *d)
-{
- unsigned wkr;
- unsigned flushed = 0;
- unsigned ret_start = d->returns.start,
- ret_count = d->returns.count;
-
- for (wkr = 0; wkr < d->num_workers; wkr++) {
-
- const int64_t data = d->bufs[wkr].bufptr64;
- uintptr_t oldbuf = 0;
-
- if (data & RTE_DISTRIB_GET_BUF) {
- flushed++;
- if (d->backlog[wkr].count)
- d->bufs[wkr].bufptr64 =
- backlog_pop(&d->backlog[wkr]);
- else {
- d->bufs[wkr].bufptr64 = RTE_DISTRIB_GET_BUF;
- d->in_flight_tags[wkr] = 0;
- d->in_flight_bitmask &= ~(1UL << wkr);
- }
- oldbuf = data >> RTE_DISTRIB_FLAG_BITS;
- } else if (data & RTE_DISTRIB_RETURN_BUF) {
- handle_worker_shutdown(d, wkr);
- oldbuf = data >> RTE_DISTRIB_FLAG_BITS;
- }
-
- store_return(oldbuf, d, &ret_start, &ret_count);
- }
-
- d->returns.start = ret_start;
- d->returns.count = ret_count;
-
- return flushed;
-}
-
-/* process a set of packets to distribute them to workers */
-int
-rte_distributor_process_v20(struct rte_distributor_v20 *d,
- struct rte_mbuf **mbufs, unsigned num_mbufs)
-{
- unsigned next_idx = 0;
- unsigned wkr = 0;
- struct rte_mbuf *next_mb = NULL;
- int64_t next_value = 0;
- uint32_t new_tag = 0;
- unsigned ret_start = d->returns.start,
- ret_count = d->returns.count;
-
- if (unlikely(num_mbufs == 0))
- return process_returns(d);
-
- while (next_idx < num_mbufs || next_mb != NULL) {
-
- int64_t data = d->bufs[wkr].bufptr64;
- uintptr_t oldbuf = 0;
-
- if (!next_mb) {
- next_mb = mbufs[next_idx++];
- next_value = (((int64_t)(uintptr_t)next_mb)
- << RTE_DISTRIB_FLAG_BITS);
- /*
- * User is advocated to set tag value for each
- * mbuf before calling rte_distributor_process.
- * User defined tags are used to identify flows,
- * or sessions.
- */
- new_tag = next_mb->hash.usr;
-
- /*
- * Note that if RTE_DISTRIB_MAX_WORKERS is larger than 64
- * then the size of match has to be expanded.
- */
- uint64_t match = 0;
- unsigned i;
- /*
- * to scan for a match use "xor" and "not" to get a 0/1
- * value, then use shifting to merge to single "match"
- * variable, where a one-bit indicates a match for the
- * worker given by the bit-position
- */
- for (i = 0; i < d->num_workers; i++)
- match |= (!(d->in_flight_tags[i] ^ new_tag)
- << i);
-
- /* Only turned-on bits are considered as match */
- match &= d->in_flight_bitmask;
-
- if (match) {
- next_mb = NULL;
- unsigned worker = __builtin_ctzl(match);
- if (add_to_backlog(&d->backlog[worker],
- next_value) < 0)
- next_idx--;
- }
- }
-
- if ((data & RTE_DISTRIB_GET_BUF) &&
- (d->backlog[wkr].count || next_mb)) {
-
- if (d->backlog[wkr].count)
- d->bufs[wkr].bufptr64 =
- backlog_pop(&d->backlog[wkr]);
-
- else {
- d->bufs[wkr].bufptr64 = next_value;
- d->in_flight_tags[wkr] = new_tag;
- d->in_flight_bitmask |= (1UL << wkr);
- next_mb = NULL;
- }
- oldbuf = data >> RTE_DISTRIB_FLAG_BITS;
- } else if (data & RTE_DISTRIB_RETURN_BUF) {
- handle_worker_shutdown(d, wkr);
- oldbuf = data >> RTE_DISTRIB_FLAG_BITS;
- }
-
- /* store returns in a circular buffer */
- store_return(oldbuf, d, &ret_start, &ret_count);
-
- if (++wkr == d->num_workers)
- wkr = 0;
- }
- /* to finish, check all workers for backlog and schedule work for them
- * if they are ready */
- for (wkr = 0; wkr < d->num_workers; wkr++)
- if (d->backlog[wkr].count &&
- (d->bufs[wkr].bufptr64 & RTE_DISTRIB_GET_BUF)) {
-
- int64_t oldbuf = d->bufs[wkr].bufptr64 >>
- RTE_DISTRIB_FLAG_BITS;
- store_return(oldbuf, d, &ret_start, &ret_count);
-
- d->bufs[wkr].bufptr64 = backlog_pop(&d->backlog[wkr]);
- }
-
- d->returns.start = ret_start;
- d->returns.count = ret_count;
- return num_mbufs;
-}
-VERSION_SYMBOL(rte_distributor_process, _v20, 2.0);
-
-/* return to the caller, packets returned from workers */
-int
-rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
- struct rte_mbuf **mbufs, unsigned max_mbufs)
-{
- struct rte_distributor_returned_pkts *returns = &d->returns;
- unsigned retval = (max_mbufs < returns->count) ?
- max_mbufs : returns->count;
- unsigned i;
-
- for (i = 0; i < retval; i++) {
- unsigned idx = (returns->start + i) & RTE_DISTRIB_RETURNS_MASK;
- mbufs[i] = returns->mbufs[idx];
- }
- returns->start += i;
- returns->count -= i;
-
- return retval;
-}
-VERSION_SYMBOL(rte_distributor_returned_pkts, _v20, 2.0);
-
-/* return the number of packets in-flight in a distributor, i.e. packets
- * being worked on or queued up in a backlog.
- */
-static inline unsigned
-total_outstanding(const struct rte_distributor_v20 *d)
-{
- unsigned wkr, total_outstanding;
-
- total_outstanding = __builtin_popcountl(d->in_flight_bitmask);
-
- for (wkr = 0; wkr < d->num_workers; wkr++)
- total_outstanding += d->backlog[wkr].count;
-
- return total_outstanding;
-}
-
-/* flush the distributor, so that there are no outstanding packets in flight or
- * queued up. */
-int
-rte_distributor_flush_v20(struct rte_distributor_v20 *d)
-{
- const unsigned flushed = total_outstanding(d);
-
- while (total_outstanding(d) > 0)
- rte_distributor_process_v20(d, NULL, 0);
-
- return flushed;
-}
-VERSION_SYMBOL(rte_distributor_flush, _v20, 2.0);
-
-/* clears the internal returns array in the distributor */
-void
-rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d)
-{
- d->returns.start = d->returns.count = 0;
-#ifndef __OPTIMIZE__
- memset(d->returns.mbufs, 0, sizeof(d->returns.mbufs));
-#endif
-}
-VERSION_SYMBOL(rte_distributor_clear_returns, _v20, 2.0);
-
-/* creates a distributor instance */
-struct rte_distributor_v20 *
-rte_distributor_create_v20(const char *name,
- unsigned socket_id,
- unsigned num_workers)
-{
- struct rte_distributor_v20 *d;
- struct rte_distributor_list *distributor_list;
- char mz_name[RTE_MEMZONE_NAMESIZE];
- const struct rte_memzone *mz;
-
- /* compilation-time checks */
- RTE_BUILD_BUG_ON((sizeof(*d) & RTE_CACHE_LINE_MASK) != 0);
- RTE_BUILD_BUG_ON((RTE_DISTRIB_MAX_WORKERS & 7) != 0);
- RTE_BUILD_BUG_ON(RTE_DISTRIB_MAX_WORKERS >
- sizeof(d->in_flight_bitmask) * CHAR_BIT);
-
- if (name == NULL || num_workers >= RTE_DISTRIB_MAX_WORKERS) {
- rte_errno = EINVAL;
- return NULL;
- }
-
- snprintf(mz_name, sizeof(mz_name), RTE_DISTRIB_PREFIX"%s", name);
- mz = rte_memzone_reserve(mz_name, sizeof(*d), socket_id, NO_FLAGS);
- if (mz == NULL) {
- rte_errno = ENOMEM;
- return NULL;
- }
-
- d = mz->addr;
- strlcpy(d->name, name, sizeof(d->name));
- d->num_workers = num_workers;
-
- distributor_list = RTE_TAILQ_CAST(rte_distributor_tailq.head,
- rte_distributor_list);
-
- rte_mcfg_tailq_write_lock();
- TAILQ_INSERT_TAIL(distributor_list, d, next);
- rte_mcfg_tailq_write_unlock();
-
- return d;
-}
-VERSION_SYMBOL(rte_distributor_create, _v20, 2.0);
diff --git a/lib/librte_distributor/rte_distributor_v20.h b/lib/librte_distributor/rte_distributor_v20.h
deleted file mode 100644
index 12865658ba..0000000000
--- a/lib/librte_distributor/rte_distributor_v20.h
+++ /dev/null
@@ -1,218 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2010-2014 Intel Corporation
- */
-
-#ifndef _RTE_DISTRIB_V20_H_
-#define _RTE_DISTRIB_V20_H_
-
-/**
- * @file
- * RTE distributor
- *
- * The distributor is a component which is designed to pass packets
- * one-at-a-time to workers, with dynamic load balancing.
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#define RTE_DISTRIBUTOR_NAMESIZE 32 /**< Length of name for instance */
-
-struct rte_distributor_v20;
-struct rte_mbuf;
-
-/**
- * Function to create a new distributor instance
- *
- * Reserves the memory needed for the distributor operation and
- * initializes the distributor to work with the configured number of workers.
- *
- * @param name
- * The name to be given to the distributor instance.
- * @param socket_id
- * The NUMA node on which the memory is to be allocated
- * @param num_workers
- * The maximum number of workers that will request packets from this
- * distributor
- * @return
- * The newly created distributor instance
- */
-struct rte_distributor_v20 *
-rte_distributor_create_v20(const char *name, unsigned int socket_id,
- unsigned int num_workers);
-
-/* *** APIS to be called on the distributor lcore *** */
-/*
- * The following APIs are the public APIs which are designed for use on a
- * single lcore which acts as the distributor lcore for a given distributor
- * instance. These functions cannot be called on multiple cores simultaneously
- * without using locking to protect access to the internals of the distributor.
- *
- * NOTE: a given lcore cannot act as both a distributor lcore and a worker lcore
- * for the same distributor instance, otherwise deadlock will result.
- */
-
-/**
- * Process a set of packets by distributing them among workers that request
- * packets. The distributor will ensure that no two packets that have the
- * same flow id, or tag, in the mbuf will be processed at the same time.
- *
- * The user is advocated to set tag for each mbuf before calling this function.
- * If user doesn't set the tag, the tag value can be various values depending on
- * driver implementation and configuration.
- *
- * This is not multi-thread safe and should only be called on a single lcore.
- *
- * @param d
- * The distributor instance to be used
- * @param mbufs
- * The mbufs to be distributed
- * @param num_mbufs
- * The number of mbufs in the mbufs array
- * @return
- * The number of mbufs processed.
- */
-int
-rte_distributor_process_v20(struct rte_distributor_v20 *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs);
-
-/**
- * Get a set of mbufs that have been returned to the distributor by workers
- *
- * This should only be called on the same lcore as rte_distributor_process()
- *
- * @param d
- * The distributor instance to be used
- * @param mbufs
- * The mbufs pointer array to be filled in
- * @param max_mbufs
- * The size of the mbufs array
- * @return
- * The number of mbufs returned in the mbufs array.
- */
-int
-rte_distributor_returned_pkts_v20(struct rte_distributor_v20 *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs);
-
-/**
- * Flush the distributor component, so that there are no in-flight or
- * backlogged packets awaiting processing
- *
- * This should only be called on the same lcore as rte_distributor_process()
- *
- * @param d
- * The distributor instance to be used
- * @return
- * The number of queued/in-flight packets that were completed by this call.
- */
-int
-rte_distributor_flush_v20(struct rte_distributor_v20 *d);
-
-/**
- * Clears the array of returned packets used as the source for the
- * rte_distributor_returned_pkts() API call.
- *
- * This should only be called on the same lcore as rte_distributor_process()
- *
- * @param d
- * The distributor instance to be used
- */
-void
-rte_distributor_clear_returns_v20(struct rte_distributor_v20 *d);
-
-/* *** APIS to be called on the worker lcores *** */
-/*
- * The following APIs are the public APIs which are designed for use on
- * multiple lcores which act as workers for a distributor. Each lcore should use
- * a unique worker id when requesting packets.
- *
- * NOTE: a given lcore cannot act as both a distributor lcore and a worker lcore
- * for the same distributor instance, otherwise deadlock will result.
- */
-
-/**
- * API called by a worker to get a new packet to process. Any previous packet
- * given to the worker is assumed to have completed processing, and may be
- * optionally returned to the distributor via the oldpkt parameter.
- *
- * @param d
- * The distributor instance to be used
- * @param worker_id
- * The worker instance number to use - must be less that num_workers passed
- * at distributor creation time.
- * @param oldpkt
- * The previous packet, if any, being processed by the worker
- *
- * @return
- * A new packet to be processed by the worker thread.
- */
-struct rte_mbuf *
-rte_distributor_get_pkt_v20(struct rte_distributor_v20 *d,
- unsigned int worker_id, struct rte_mbuf *oldpkt);
-
-/**
- * API called by a worker to return a completed packet without requesting a
- * new packet, for example, because a worker thread is shutting down
- *
- * @param d
- * The distributor instance to be used
- * @param worker_id
- * The worker instance number to use - must be less that num_workers passed
- * at distributor creation time.
- * @param mbuf
- * The previous packet being processed by the worker
- */
-int
-rte_distributor_return_pkt_v20(struct rte_distributor_v20 *d,
- unsigned int worker_id, struct rte_mbuf *mbuf);
-
-/**
- * API called by a worker to request a new packet to process.
- * Any previous packet given to the worker is assumed to have completed
- * processing, and may be optionally returned to the distributor via
- * the oldpkt parameter.
- * Unlike rte_distributor_get_pkt(), this function does not wait for a new
- * packet to be provided by the distributor.
- *
- * NOTE: after calling this function, rte_distributor_poll_pkt() should
- * be used to poll for the packet requested. The rte_distributor_get_pkt()
- * API should *not* be used to try and retrieve the new packet.
- *
- * @param d
- * The distributor instance to be used
- * @param worker_id
- * The worker instance number to use - must be less that num_workers passed
- * at distributor creation time.
- * @param oldpkt
- * The previous packet, if any, being processed by the worker
- */
-void
-rte_distributor_request_pkt_v20(struct rte_distributor_v20 *d,
- unsigned int worker_id, struct rte_mbuf *oldpkt);
-
-/**
- * API called by a worker to check for a new packet that was previously
- * requested by a call to rte_distributor_request_pkt(). It does not wait
- * for the new packet to be available, but returns NULL if the request has
- * not yet been fulfilled by the distributor.
- *
- * @param d
- * The distributor instance to be used
- * @param worker_id
- * The worker instance number to use - must be less that num_workers passed
- * at distributor creation time.
- *
- * @return
- * A new packet to be processed by the worker thread, or NULL if no
- * packet is yet available.
- */
-struct rte_mbuf *
-rte_distributor_poll_pkt_v20(struct rte_distributor_v20 *d,
- unsigned int worker_id);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
diff --git a/lib/librte_distributor/rte_distributor_version.map b/lib/librte_distributor/rte_distributor_version.map
index 3a285b394e..5643ab85fb 100644
--- a/lib/librte_distributor/rte_distributor_version.map
+++ b/lib/librte_distributor/rte_distributor_version.map
@@ -1,4 +1,4 @@
-DPDK_2.0 {
+DPDK_17.05 {
global:
rte_distributor_clear_returns;
@@ -13,17 +13,3 @@ DPDK_2.0 {
local: *;
};
-
-DPDK_17.05 {
- global:
-
- rte_distributor_clear_returns;
- rte_distributor_create;
- rte_distributor_flush;
- rte_distributor_get_pkt;
- rte_distributor_poll_pkt;
- rte_distributor_process;
- rte_distributor_request_pkt;
- rte_distributor_return_pkt;
- rte_distributor_returned_pkts;
-} DPDK_2.0;
--
2.17.1
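
For readers reconstructing the end state of the map change above: after the two hunks are applied, rte_distributor_version.map collapses to a single version node. Reconstructed in full from the hunks (the middle symbol lines are taken from the deleted DPDK_17.05 node; exact indentation assumed), the resulting GNU ld version script would read:

```text
DPDK_17.05 {
	global:

	rte_distributor_clear_returns;
	rte_distributor_create;
	rte_distributor_flush;
	rte_distributor_get_pkt;
	rte_distributor_poll_pkt;
	rte_distributor_process;
	rte_distributor_request_pkt;
	rte_distributor_return_pkt;
	rte_distributor_returned_pkts;

	local: *;
};
```

With only one node and no `} DPDK_2.0;` inheritance clause, the shared object exports exactly one version of each distributor symbol.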
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v3 5/9] lpm: remove deprecated code
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
` (4 preceding siblings ...)
2019-10-16 17:03 4% ` [dpdk-dev] [PATCH v3 4/9] timer: remove deprecated code Anatoly Burakov
@ 2019-10-16 17:03 2% ` Anatoly Burakov
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 6/9] distributor: " Anatoly Burakov
` (3 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Bruce Richardson, Vladimir Medvedkin,
john.mcnamara, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_lpm/rte_lpm.c | 996 ++------------------------------------
lib/librte_lpm/rte_lpm.h | 88 ----
lib/librte_lpm/rte_lpm6.c | 132 +----
lib/librte_lpm/rte_lpm6.h | 25 -
4 files changed, 48 insertions(+), 1193 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 3a929a1b16..2687564194 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -89,34 +89,8 @@ depth_to_range(uint8_t depth)
/*
* Find an existing lpm table and return a pointer to it.
*/
-struct rte_lpm_v20 *
-rte_lpm_find_existing_v20(const char *name)
-{
- struct rte_lpm_v20 *l = NULL;
- struct rte_tailq_entry *te;
- struct rte_lpm_list *lpm_list;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- rte_mcfg_tailq_read_lock();
- TAILQ_FOREACH(te, lpm_list, next) {
- l = te->data;
- if (strncmp(name, l->name, RTE_LPM_NAMESIZE) == 0)
- break;
- }
- rte_mcfg_tailq_read_unlock();
-
- if (te == NULL) {
- rte_errno = ENOENT;
- return NULL;
- }
-
- return l;
-}
-VERSION_SYMBOL(rte_lpm_find_existing, _v20, 2.0);
-
struct rte_lpm *
-rte_lpm_find_existing_v1604(const char *name)
+rte_lpm_find_existing(const char *name)
{
struct rte_lpm *l = NULL;
struct rte_tailq_entry *te;
@@ -139,88 +113,12 @@ rte_lpm_find_existing_v1604(const char *name)
return l;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_find_existing, _v1604, 16.04);
-MAP_STATIC_SYMBOL(struct rte_lpm *rte_lpm_find_existing(const char *name),
- rte_lpm_find_existing_v1604);
/*
* Allocates memory for LPM object
*/
-struct rte_lpm_v20 *
-rte_lpm_create_v20(const char *name, int socket_id, int max_rules,
- __rte_unused int flags)
-{
- char mem_name[RTE_LPM_NAMESIZE];
- struct rte_lpm_v20 *lpm = NULL;
- struct rte_tailq_entry *te;
- uint32_t mem_size;
- struct rte_lpm_list *lpm_list;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry_v20) != 2);
-
- /* Check user arguments. */
- if ((name == NULL) || (socket_id < -1) || (max_rules == 0)) {
- rte_errno = EINVAL;
- return NULL;
- }
-
- snprintf(mem_name, sizeof(mem_name), "LPM_%s", name);
-
- /* Determine the amount of memory to allocate. */
- mem_size = sizeof(*lpm) + (sizeof(lpm->rules_tbl[0]) * max_rules);
-
- rte_mcfg_tailq_write_lock();
-
- /* guarantee there's no existing */
- TAILQ_FOREACH(te, lpm_list, next) {
- lpm = te->data;
- if (strncmp(name, lpm->name, RTE_LPM_NAMESIZE) == 0)
- break;
- }
-
- if (te != NULL) {
- lpm = NULL;
- rte_errno = EEXIST;
- goto exit;
- }
-
- /* allocate tailq entry */
- te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0);
- if (te == NULL) {
- RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n");
- rte_errno = ENOMEM;
- goto exit;
- }
-
- /* Allocate memory to store the LPM data structures. */
- lpm = rte_zmalloc_socket(mem_name, mem_size,
- RTE_CACHE_LINE_SIZE, socket_id);
- if (lpm == NULL) {
- RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
- rte_free(te);
- rte_errno = ENOMEM;
- goto exit;
- }
-
- /* Save user arguments. */
- lpm->max_rules = max_rules;
- strlcpy(lpm->name, name, sizeof(lpm->name));
-
- te->data = lpm;
-
- TAILQ_INSERT_TAIL(lpm_list, te, next);
-
-exit:
- rte_mcfg_tailq_write_unlock();
-
- return lpm;
-}
-VERSION_SYMBOL(rte_lpm_create, _v20, 2.0);
-
struct rte_lpm *
-rte_lpm_create_v1604(const char *name, int socket_id,
+rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config)
{
char mem_name[RTE_LPM_NAMESIZE];
@@ -320,45 +218,12 @@ rte_lpm_create_v1604(const char *name, int socket_id,
return lpm;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_create, _v1604, 16.04);
-MAP_STATIC_SYMBOL(
- struct rte_lpm *rte_lpm_create(const char *name, int socket_id,
- const struct rte_lpm_config *config), rte_lpm_create_v1604);
/*
* Deallocates memory for given LPM table.
*/
void
-rte_lpm_free_v20(struct rte_lpm_v20 *lpm)
-{
- struct rte_lpm_list *lpm_list;
- struct rte_tailq_entry *te;
-
- /* Check user arguments. */
- if (lpm == NULL)
- return;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- rte_mcfg_tailq_write_lock();
-
- /* find our tailq entry */
- TAILQ_FOREACH(te, lpm_list, next) {
- if (te->data == (void *) lpm)
- break;
- }
- if (te != NULL)
- TAILQ_REMOVE(lpm_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- rte_free(lpm);
- rte_free(te);
-}
-VERSION_SYMBOL(rte_lpm_free, _v20, 2.0);
-
-void
-rte_lpm_free_v1604(struct rte_lpm *lpm)
+rte_lpm_free(struct rte_lpm *lpm)
{
struct rte_lpm_list *lpm_list;
struct rte_tailq_entry *te;
@@ -386,9 +251,6 @@ rte_lpm_free_v1604(struct rte_lpm *lpm)
rte_free(lpm);
rte_free(te);
}
-BIND_DEFAULT_SYMBOL(rte_lpm_free, _v1604, 16.04);
-MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
- rte_lpm_free_v1604);
/*
* Adds a rule to the rule table.
@@ -401,79 +263,7 @@ MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t rule_gindex, rule_index, last_rule;
- int i;
-
- VERIFY_DEPTH(depth);
-
- /* Scan through rule group to see if rule already exists. */
- if (lpm->rule_info[depth - 1].used_rules > 0) {
-
- /* rule_gindex stands for rule group index. */
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- /* Initialise rule_index to point to start of rule group. */
- rule_index = rule_gindex;
- /* Last rule = Last used rule in this rule group. */
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- for (; rule_index < last_rule; rule_index++) {
-
- /* If rule already exists update its next_hop and return. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked) {
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- return rule_index;
- }
- }
-
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
- } else {
- /* Calculate the position in which the rule will be stored. */
- rule_index = 0;
-
- for (i = depth - 1; i > 0; i--) {
- if (lpm->rule_info[i - 1].used_rules > 0) {
- rule_index = lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules;
- break;
- }
- }
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
-
- lpm->rule_info[depth - 1].first_rule = rule_index;
- }
-
- /* Make room for the new rule in the array. */
- for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
- if (lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
- return -ENOSPC;
-
- if (lpm->rule_info[i - 1].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules]
- = lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
- lpm->rule_info[i - 1].first_rule++;
- }
- }
-
- /* Add the new rule. */
- lpm->rules_tbl[rule_index].ip = ip_masked;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- /* Increment the used rules counter for this rule group. */
- lpm->rule_info[depth - 1].used_rules++;
-
- return rule_index;
-}
-
-static int32_t
-rule_add_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
uint32_t rule_gindex, rule_index, last_rule;
@@ -549,30 +339,7 @@ rule_add_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static void
-rule_delete_v20(struct rte_lpm_v20 *lpm, int32_t rule_index, uint8_t depth)
-{
- int i;
-
- VERIFY_DEPTH(depth);
-
- lpm->rules_tbl[rule_index] =
- lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
- + lpm->rule_info[depth - 1].used_rules - 1];
-
- for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
- if (lpm->rule_info[i].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
- lpm->rules_tbl[lpm->rule_info[i].first_rule
- + lpm->rule_info[i].used_rules - 1];
- lpm->rule_info[i].first_rule--;
- }
- }
-
- lpm->rule_info[depth - 1].used_rules--;
-}
-
-static void
-rule_delete_v1604(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
+rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
{
int i;
@@ -599,28 +366,7 @@ rule_delete_v1604(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_find_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth)
-{
- uint32_t rule_gindex, last_rule, rule_index;
-
- VERIFY_DEPTH(depth);
-
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- /* Scan used rules at given depth to find rule. */
- for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
- /* If rule is found return the rule index. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked)
- return rule_index;
- }
-
- /* If rule is not found return -EINVAL. */
- return -EINVAL;
-}
-
-static int32_t
-rule_find_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
{
uint32_t rule_gindex, last_rule, rule_index;
@@ -644,42 +390,7 @@ rule_find_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
* Find, clean and allocate a tbl8.
*/
static int32_t
-tbl8_alloc_v20(struct rte_lpm_tbl_entry_v20 *tbl8)
-{
- uint32_t group_idx; /* tbl8 group index. */
- struct rte_lpm_tbl_entry_v20 *tbl8_entry;
-
- /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
- for (group_idx = 0; group_idx < RTE_LPM_TBL8_NUM_GROUPS;
- group_idx++) {
- tbl8_entry = &tbl8[group_idx * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
- /* If a free tbl8 group is found clean it and set as VALID. */
- if (!tbl8_entry->valid_group) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = VALID,
- };
- new_tbl8_entry.next_hop = 0;
-
- memset(&tbl8_entry[0], 0,
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
- sizeof(tbl8_entry[0]));
-
- __atomic_store(tbl8_entry, &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- /* Return group index for allocated tbl8 group. */
- return group_idx;
- }
- }
-
- /* If there are no tbl8 groups free then return error. */
- return -ENOSPC;
-}
-
-static int32_t
-tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
+tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
{
uint32_t group_idx; /* tbl8 group index. */
struct rte_lpm_tbl_entry *tbl8_entry;
@@ -713,22 +424,7 @@ tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
}
static void
-tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start)
-{
- /* Set tbl8 group invalid*/
- struct rte_lpm_tbl_entry_v20 zero_tbl8_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = INVALID,
- };
- zero_tbl8_entry.next_hop = 0;
-
- __atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
- __ATOMIC_RELAXED);
-}
-
-static void
-tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
{
/* Set tbl8 group invalid*/
struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
@@ -738,78 +434,7 @@ tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
}
static __rte_noinline int32_t
-add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
-
- /* Calculate the index into Table24. */
- tbl24_index = ip >> 8;
- tbl24_range = depth_to_range(depth);
-
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
- /*
- * For invalid OR valid and non-extended tbl 24 entries set
- * entry.
- */
- if (!lpm->tbl24[i].valid || (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth)) {
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .valid = VALID,
- .valid_group = 0,
- .depth = depth,
- };
- new_tbl24_entry.next_hop = next_hop;
-
- /* Setting tbl24 entry in one go to avoid race
- * conditions
- */
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- continue;
- }
-
- if (lpm->tbl24[i].valid_group == 1) {
- /* If tbl24 entry is valid and extended calculate the
- * index into tbl8.
- */
- tbl8_index = lpm->tbl24[i].group_idx *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < tbl8_group_end; j++) {
- if (!lpm->tbl8[j].valid ||
- lpm->tbl8[j].depth <= depth) {
- struct rte_lpm_tbl_entry_v20
- new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = depth,
- };
- new_tbl8_entry.next_hop = next_hop;
-
- /*
- * Setting tbl8 entry in one go to avoid
- * race conditions
- */
- __atomic_store(&lpm->tbl8[j],
- &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- continue;
- }
- }
- }
- }
-
- return 0;
-}
-
-static __rte_noinline int32_t
-add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -881,150 +506,7 @@ add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
static __rte_noinline int32_t
-add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t tbl24_index;
- int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
- tbl8_range, i;
-
- tbl24_index = (ip_masked >> 8);
- tbl8_range = depth_to_range(depth);
-
- if (!lpm->tbl24[tbl24_index].valid) {
- /* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v20(lpm->tbl8);
-
- /* Check tbl8 allocation was successful. */
- if (tbl8_group_index < 0) {
- return tbl8_group_index;
- }
-
- /* Find index into tbl8 and range. */
- tbl8_index = (tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES) +
- (ip_masked & 0xFF);
-
- /* Set tbl8 entry. */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- /*
- * Update tbl24 entry to point to new tbl8 entry. Note: The
- * ext_flag and tbl8_index need to be updated simultaneously,
- * so assign whole structure in one go
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .group_idx = (uint8_t)tbl8_group_index,
- .valid = VALID,
- .valid_group = 1,
- .depth = 0,
- };
-
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- } /* If valid entry but not extended calculate the index into Table8. */
- else if (lpm->tbl24[tbl24_index].valid_group == 0) {
- /* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v20(lpm->tbl8);
-
- if (tbl8_group_index < 0) {
- return tbl8_group_index;
- }
-
- tbl8_group_start = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_group_start +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- /* Populate new tbl8 with tbl24 value. */
- for (i = tbl8_group_start; i < tbl8_group_end; i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = lpm->tbl24[tbl24_index].depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop =
- lpm->tbl24[tbl24_index].next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
-
- /* Insert new rule into the tbl8 entry. */
- for (i = tbl8_index; i < tbl8_index + tbl8_range; i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- /*
- * Update tbl24 entry to point to new tbl8 entry. Note: The
- * ext_flag and tbl8_index need to be updated simultaneously,
- * so assign whole structure in one go.
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .group_idx = (uint8_t)tbl8_group_index,
- .valid = VALID,
- .valid_group = 1,
- .depth = 0,
- };
-
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- } else { /*
- * If it is valid, extended entry calculate the index into tbl8.
- */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
- tbl8_group_start = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
-
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-
- if (!lpm->tbl8[i].valid ||
- lpm->tbl8[i].depth <= depth) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- /*
- * Setting tbl8 entry in one go to avoid race
- * condition
- */
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- continue;
- }
- }
- }
-
- return 0;
-}
-
-static __rte_noinline int32_t
-add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -1037,7 +519,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
if (!lpm->tbl24[tbl24_index].valid) {
/* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+ tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
/* Check tbl8 allocation was successful. */
if (tbl8_group_index < 0) {
@@ -1083,7 +565,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
} /* If valid entry but not extended calculate the index into Table8. */
else if (lpm->tbl24[tbl24_index].valid_group == 0) {
/* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+ tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
if (tbl8_group_index < 0) {
return tbl8_group_index;
@@ -1177,48 +659,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* Add a route
*/
int
-rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
-{
- int32_t rule_index, status = 0;
- uint32_t ip_masked;
-
- /* Check user arguments. */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
- return -EINVAL;
-
- ip_masked = ip & depth_to_mask(depth);
-
- /* Add the rule to the rule table. */
- rule_index = rule_add_v20(lpm, ip_masked, depth, next_hop);
-
- /* If the is no space available for new rule return error. */
- if (rule_index < 0) {
- return rule_index;
- }
-
- if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small_v20(lpm, ip_masked, depth, next_hop);
- } else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big_v20(lpm, ip_masked, depth, next_hop);
-
- /*
- * If add fails due to exhaustion of tbl8 extensions delete
- * rule that was added to rule table.
- */
- if (status < 0) {
- rule_delete_v20(lpm, rule_index, depth);
-
- return status;
- }
- }
-
- return 0;
-}
-VERSION_SYMBOL(rte_lpm_add, _v20, 2.0);
-
-int
-rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
int32_t rule_index, status = 0;
@@ -1231,7 +672,7 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
ip_masked = ip & depth_to_mask(depth);
/* Add the rule to the rule table. */
- rule_index = rule_add_v1604(lpm, ip_masked, depth, next_hop);
+ rule_index = rule_add(lpm, ip_masked, depth, next_hop);
/* If the is no space available for new rule return error. */
if (rule_index < 0) {
@@ -1239,16 +680,16 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small_v1604(lpm, ip_masked, depth, next_hop);
+ status = add_depth_small(lpm, ip_masked, depth, next_hop);
} else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big_v1604(lpm, ip_masked, depth, next_hop);
+ status = add_depth_big(lpm, ip_masked, depth, next_hop);
/*
* If add fails due to exhaustion of tbl8 extensions delete
* rule that was added to rule table.
*/
if (status < 0) {
- rule_delete_v1604(lpm, rule_index, depth);
+ rule_delete(lpm, rule_index, depth);
return status;
}
@@ -1256,42 +697,12 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_add, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth, uint32_t next_hop), rte_lpm_add_v1604);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
-{
- uint32_t ip_masked;
- int32_t rule_index;
-
- /* Check user arguments. */
- if ((lpm == NULL) ||
- (next_hop == NULL) ||
- (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
- return -EINVAL;
-
- /* Look for the rule using rule_find. */
- ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find_v20(lpm, ip_masked, depth);
-
- if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
- return 1;
- }
-
- /* If rule is not found return 0. */
- return 0;
-}
-VERSION_SYMBOL(rte_lpm_is_rule_present, _v20, 2.0);
-
-int
-rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop)
{
uint32_t ip_masked;
@@ -1305,7 +716,7 @@ uint32_t *next_hop)
/* Look for the rule using rule_find. */
ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find_v1604(lpm, ip_masked, depth);
+ rule_index = rule_find(lpm, ip_masked, depth);
if (rule_index >= 0) {
*next_hop = lpm->rules_tbl[rule_index].next_hop;
@@ -1315,12 +726,9 @@ uint32_t *next_hop)
/* If rule is not found return 0. */
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_is_rule_present, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth, uint32_t *next_hop), rte_lpm_is_rule_present_v1604);
static int32_t
-find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
+find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint8_t *sub_rule_depth)
{
int32_t rule_index;
@@ -1330,7 +738,7 @@ find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
ip_masked = ip & depth_to_mask(prev_depth);
- rule_index = rule_find_v20(lpm, ip_masked, prev_depth);
+ rule_index = rule_find(lpm, ip_masked, prev_depth);
if (rule_index >= 0) {
*sub_rule_depth = prev_depth;
@@ -1342,133 +750,7 @@ find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
}
static int32_t
-find_previous_rule_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t *sub_rule_depth)
-{
- int32_t rule_index;
- uint32_t ip_masked;
- uint8_t prev_depth;
-
- for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
- ip_masked = ip & depth_to_mask(prev_depth);
-
- rule_index = rule_find_v1604(lpm, ip_masked, prev_depth);
-
- if (rule_index >= 0) {
- *sub_rule_depth = prev_depth;
- return rule_index;
- }
- }
-
- return -1;
-}
-
-static int32_t
-delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
-{
- uint32_t tbl24_range, tbl24_index, tbl8_group_index, tbl8_index, i, j;
-
- /* Calculate the range and index into Table24. */
- tbl24_range = depth_to_range(depth);
- tbl24_index = (ip_masked >> 8);
-
- /*
- * Firstly check the sub_rule_index. A -1 indicates no replacement rule
- * and a positive number indicates a sub_rule_index.
- */
- if (sub_rule_index < 0) {
- /*
- * If no replacement rule exists then invalidate entries
- * associated with this rule.
- */
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- struct rte_lpm_tbl_entry_v20
- zero_tbl24_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = 0,
- };
- zero_tbl24_entry.next_hop = 0;
- __atomic_store(&lpm->tbl24[i],
- &zero_tbl24_entry, __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
- /*
- * If TBL24 entry is extended, then there has
- * to be a rule with depth >= 25 in the
- * associated TBL8 group.
- */
-
- tbl8_group_index = lpm->tbl24[i].group_idx;
- tbl8_index = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < (tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
- lpm->tbl8[j].valid = INVALID;
- }
- }
- }
- } else {
- /*
- * If a replacement rule exists then modify entries
- * associated with this rule.
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
- .valid = VALID,
- .valid_group = 0,
- .depth = sub_rule_depth,
- };
-
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = sub_rule_depth,
- };
- new_tbl8_entry.next_hop =
- lpm->rules_tbl[sub_rule_index].next_hop;
-
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
- __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
- /*
- * If TBL24 entry is extended, then there has
- * to be a rule with depth >= 25 in the
- * associated TBL8 group.
- */
-
- tbl8_group_index = lpm->tbl24[i].group_idx;
- tbl8_index = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < (tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
- __atomic_store(&lpm->tbl8[j],
- &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
- }
- }
- }
-
- return 0;
-}
-
-static int32_t
-delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1575,7 +857,7 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* thus can be recycled
*/
static int32_t
-tbl8_recycle_check_v20(struct rte_lpm_tbl_entry_v20 *tbl8,
+tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8,
uint32_t tbl8_group_start)
{
uint32_t tbl8_group_end, i;
@@ -1622,140 +904,7 @@ tbl8_recycle_check_v20(struct rte_lpm_tbl_entry_v20 *tbl8,
}
static int32_t
-tbl8_recycle_check_v1604(struct rte_lpm_tbl_entry *tbl8,
- uint32_t tbl8_group_start)
-{
- uint32_t tbl8_group_end, i;
- tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- /*
- * Check the first entry of the given tbl8. If it is invalid we know
- * this tbl8 does not contain any rule with a depth < RTE_LPM_MAX_DEPTH
- * (As they would affect all entries in a tbl8) and thus this table
- * can not be recycled.
- */
- if (tbl8[tbl8_group_start].valid) {
- /*
- * If first entry is valid check if the depth is less than 24
- * and if so check the rest of the entries to verify that they
- * are all of this depth.
- */
- if (tbl8[tbl8_group_start].depth <= MAX_DEPTH_TBL24) {
- for (i = (tbl8_group_start + 1); i < tbl8_group_end;
- i++) {
-
- if (tbl8[i].depth !=
- tbl8[tbl8_group_start].depth) {
-
- return -EEXIST;
- }
- }
- /* If all entries are the same return the tb8 index */
- return tbl8_group_start;
- }
-
- return -EEXIST;
- }
- /*
- * If the first entry is invalid check if the rest of the entries in
- * the tbl8 are invalid.
- */
- for (i = (tbl8_group_start + 1); i < tbl8_group_end; i++) {
- if (tbl8[i].valid)
- return -EEXIST;
- }
- /* If no valid entries are found then return -EINVAL. */
- return -EINVAL;
-}
-
-static int32_t
-delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
-{
- uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
- tbl8_range, i;
- int32_t tbl8_recycle_index;
-
- /*
- * Calculate the index into tbl24 and range. Note: All depths larger
- * than MAX_DEPTH_TBL24 are associated with only one tbl24 entry.
- */
- tbl24_index = ip_masked >> 8;
-
- /* Calculate the index into tbl8 and range. */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
- tbl8_group_start = tbl8_group_index * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
- tbl8_range = depth_to_range(depth);
-
- if (sub_rule_index < 0) {
- /*
- * Loop through the range of entries on tbl8 for which the
- * rule_to_delete must be removed or modified.
- */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- lpm->tbl8[i].valid = INVALID;
- }
- } else {
- /* Set new tbl8 entry. */
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = sub_rule_depth,
- .valid_group = lpm->tbl8[tbl8_group_start].valid_group,
- };
-
- new_tbl8_entry.next_hop =
- lpm->rules_tbl[sub_rule_index].next_hop;
- /*
- * Loop through the range of entries on tbl8 for which the
- * rule_to_delete must be modified.
- */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
- }
-
- /*
- * Check if there are any valid entries in this tbl8 group. If all
- * tbl8 entries are invalid we can free the tbl8 and invalidate the
- * associated tbl24 entry.
- */
-
- tbl8_recycle_index = tbl8_recycle_check_v20(lpm->tbl8, tbl8_group_start);
-
- if (tbl8_recycle_index == -EINVAL) {
- /* Set tbl24 before freeing tbl8 to avoid race condition.
- * Prevent the free of the tbl8 group from hoisting.
- */
- lpm->tbl24[tbl24_index].valid = 0;
- __atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v20(lpm->tbl8, tbl8_group_start);
- } else if (tbl8_recycle_index > -1) {
- /* Update tbl24 entry. */
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
- .valid = VALID,
- .valid_group = 0,
- .depth = lpm->tbl8[tbl8_recycle_index].depth,
- };
-
- /* Set tbl24 before freeing tbl8 to avoid race condition.
- * Prevent the free of the tbl8 group from hoisting.
- */
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELAXED);
- __atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v20(lpm->tbl8, tbl8_group_start);
- }
-
- return 0;
-}
-
-static int32_t
-delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1810,7 +959,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* associated tbl24 entry.
*/
- tbl8_recycle_index = tbl8_recycle_check_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_recycle_index = tbl8_recycle_check(lpm->tbl8, tbl8_group_start);
if (tbl8_recycle_index == -EINVAL) {
/* Set tbl24 before freeing tbl8 to avoid race condition.
@@ -1818,7 +967,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
*/
lpm->tbl24[tbl24_index].valid = 0;
__atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_free(lpm->tbl8, tbl8_group_start);
} else if (tbl8_recycle_index > -1) {
/* Update tbl24 entry. */
struct rte_lpm_tbl_entry new_tbl24_entry = {
@@ -1834,7 +983,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
__ATOMIC_RELAXED);
__atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_free(lpm->tbl8, tbl8_group_start);
}
#undef group_idx
return 0;
@@ -1844,7 +993,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* Deletes a rule
*/
int
-rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
{
int32_t rule_to_delete_index, sub_rule_index;
uint32_t ip_masked;
@@ -1863,7 +1012,7 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
* Find the index of the input rule, that needs to be deleted, in the
* rule table.
*/
- rule_to_delete_index = rule_find_v20(lpm, ip_masked, depth);
+ rule_to_delete_index = rule_find(lpm, ip_masked, depth);
/*
* Check if rule_to_delete_index was found. If no rule was found the
@@ -1873,7 +1022,7 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
return -EINVAL;
/* Delete the rule from the rule table. */
- rule_delete_v20(lpm, rule_to_delete_index, depth);
+ rule_delete(lpm, rule_to_delete_index, depth);
/*
* Find rule to replace the rule_to_delete. If there is no rule to
@@ -1881,100 +1030,26 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
* entries associated with this rule.
*/
sub_rule_depth = 0;
- sub_rule_index = find_previous_rule_v20(lpm, ip, depth, &sub_rule_depth);
+ sub_rule_index = find_previous_rule(lpm, ip, depth, &sub_rule_depth);
/*
* If the input depth value is less than 25 use function
* delete_depth_small otherwise use delete_depth_big.
*/
if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small_v20(lpm, ip_masked, depth,
+ return delete_depth_small(lpm, ip_masked, depth,
sub_rule_index, sub_rule_depth);
} else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big_v20(lpm, ip_masked, depth, sub_rule_index,
+ return delete_depth_big(lpm, ip_masked, depth, sub_rule_index,
sub_rule_depth);
}
}
-VERSION_SYMBOL(rte_lpm_delete, _v20, 2.0);
-
-int
-rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
-{
- int32_t rule_to_delete_index, sub_rule_index;
- uint32_t ip_masked;
- uint8_t sub_rule_depth;
- /*
- * Check input arguments. Note: IP must be a positive integer of 32
- * bits in length therefore it need not be checked.
- */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH)) {
- return -EINVAL;
- }
-
- ip_masked = ip & depth_to_mask(depth);
-
- /*
- * Find the index of the input rule, that needs to be deleted, in the
- * rule table.
- */
- rule_to_delete_index = rule_find_v1604(lpm, ip_masked, depth);
-
- /*
- * Check if rule_to_delete_index was found. If no rule was found the
- * function rule_find returns -EINVAL.
- */
- if (rule_to_delete_index < 0)
- return -EINVAL;
-
- /* Delete the rule from the rule table. */
- rule_delete_v1604(lpm, rule_to_delete_index, depth);
-
- /*
- * Find rule to replace the rule_to_delete. If there is no rule to
- * replace the rule_to_delete we return -1 and invalidate the table
- * entries associated with this rule.
- */
- sub_rule_depth = 0;
- sub_rule_index = find_previous_rule_v1604(lpm, ip, depth, &sub_rule_depth);
-
- /*
- * If the input depth value is less than 25 use function
- * delete_depth_small otherwise use delete_depth_big.
- */
- if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small_v1604(lpm, ip_masked, depth,
- sub_rule_index, sub_rule_depth);
- } else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big_v1604(lpm, ip_masked, depth, sub_rule_index,
- sub_rule_depth);
- }
-}
-BIND_DEFAULT_SYMBOL(rte_lpm_delete, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth), rte_lpm_delete_v1604);
/*
* Delete all rules from the LPM table.
*/
void
-rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm)
-{
- /* Zero rule information. */
- memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
-
- /* Zero tbl24. */
- memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
-
- /* Zero tbl8. */
- memset(lpm->tbl8, 0, sizeof(lpm->tbl8));
-
- /* Delete all rules form the rules table. */
- memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
-}
-VERSION_SYMBOL(rte_lpm_delete_all, _v20, 2.0);
-
-void
-rte_lpm_delete_all_v1604(struct rte_lpm *lpm)
+rte_lpm_delete_all(struct rte_lpm *lpm)
{
/* Zero rule information. */
memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
@@ -1989,6 +1064,3 @@ rte_lpm_delete_all_v1604(struct rte_lpm *lpm)
/* Delete all rules form the rules table. */
memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
}
-BIND_DEFAULT_SYMBOL(rte_lpm_delete_all, _v1604, 16.04);
-MAP_STATIC_SYMBOL(void rte_lpm_delete_all(struct rte_lpm *lpm),
- rte_lpm_delete_all_v1604);
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 906ec44830..ca9627a141 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -65,31 +65,6 @@ extern "C" {
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
-__extension__
-struct rte_lpm_tbl_entry_v20 {
- /**
- * Stores Next hop (tbl8 or tbl24 when valid_group is not set) or
- * a group index pointing to a tbl8 structure (tbl24 only, when
- * valid_group is set)
- */
- RTE_STD_C11
- union {
- uint8_t next_hop;
- uint8_t group_idx;
- };
- /* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- /**
- * For tbl24:
- * - valid_group == 0: entry stores a next hop
- * - valid_group == 1: entry stores a group_index pointing to a tbl8
- * For tbl8:
- * - valid_group indicates whether the current tbl8 is in use or not
- */
- uint8_t valid_group :1;
- uint8_t depth :6; /**< Rule depth. */
-} __rte_aligned(sizeof(uint16_t));
-
__extension__
struct rte_lpm_tbl_entry {
/**
@@ -112,16 +87,6 @@ struct rte_lpm_tbl_entry {
};
#else
-__extension__
-struct rte_lpm_tbl_entry_v20 {
- uint8_t depth :6;
- uint8_t valid_group :1;
- uint8_t valid :1;
- union {
- uint8_t group_idx;
- uint8_t next_hop;
- };
-} __rte_aligned(sizeof(uint16_t));
__extension__
struct rte_lpm_tbl_entry {
@@ -142,11 +107,6 @@ struct rte_lpm_config {
};
/** @internal Rule structure. */
-struct rte_lpm_rule_v20 {
- uint32_t ip; /**< Rule IP address. */
- uint8_t next_hop; /**< Rule next hop. */
-};
-
struct rte_lpm_rule {
uint32_t ip; /**< Rule IP address. */
uint32_t next_hop; /**< Rule next hop. */
@@ -159,21 +119,6 @@ struct rte_lpm_rule_info {
};
/** @internal LPM structure. */
-struct rte_lpm_v20 {
- /* LPM metadata. */
- char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
- uint32_t max_rules; /**< Max. balanced rules per lpm. */
- struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
-
- /* LPM Tables. */
- struct rte_lpm_tbl_entry_v20 tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
- __rte_cache_aligned; /**< LPM tbl24 table. */
- struct rte_lpm_tbl_entry_v20 tbl8[RTE_LPM_TBL8_NUM_ENTRIES]
- __rte_cache_aligned; /**< LPM tbl8 table. */
- struct rte_lpm_rule_v20 rules_tbl[]
- __rte_cache_aligned; /**< LPM rules. */
-};
-
struct rte_lpm {
/* LPM metadata. */
char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
@@ -210,11 +155,6 @@ struct rte_lpm {
struct rte_lpm *
rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config);
-struct rte_lpm_v20 *
-rte_lpm_create_v20(const char *name, int socket_id, int max_rules, int flags);
-struct rte_lpm *
-rte_lpm_create_v1604(const char *name, int socket_id,
- const struct rte_lpm_config *config);
/**
* Find an existing LPM object and return a pointer to it.
@@ -228,10 +168,6 @@ rte_lpm_create_v1604(const char *name, int socket_id,
*/
struct rte_lpm *
rte_lpm_find_existing(const char *name);
-struct rte_lpm_v20 *
-rte_lpm_find_existing_v20(const char *name);
-struct rte_lpm *
-rte_lpm_find_existing_v1604(const char *name);
/**
* Free an LPM object.
@@ -243,10 +179,6 @@ rte_lpm_find_existing_v1604(const char *name);
*/
void
rte_lpm_free(struct rte_lpm *lpm);
-void
-rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
-void
-rte_lpm_free_v1604(struct rte_lpm *lpm);
/**
* Add a rule to the LPM table.
@@ -264,12 +196,6 @@ rte_lpm_free_v1604(struct rte_lpm *lpm);
*/
int
rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
-int
-rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop);
-int
-rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -289,12 +215,6 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop);
-int
-rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
-int
-rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -310,10 +230,6 @@ uint32_t *next_hop);
*/
int
rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
-int
-rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth);
-int
-rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
/**
* Delete all rules from the LPM table.
@@ -323,10 +239,6 @@ rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
*/
void
rte_lpm_delete_all(struct rte_lpm *lpm);
-void
-rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm);
-void
-rte_lpm_delete_all_v1604(struct rte_lpm *lpm);
/**
* Lookup an IP into the LPM table.
diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 9b8aeb9721..b981e40714 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -808,18 +808,6 @@ add_step(struct rte_lpm6 *lpm, struct rte_lpm6_tbl_entry *tbl,
return 1;
}
-/*
- * Add a route
- */
-int
-rte_lpm6_add_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop)
-{
- return rte_lpm6_add_v1705(lpm, ip, depth, next_hop);
-}
-VERSION_SYMBOL(rte_lpm6_add, _v20, 2.0);
-
-
/*
* Simulate adding a route to LPM
*
@@ -841,7 +829,7 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
/* Inspect the first three bytes through tbl24 on the first step. */
ret = simulate_add_step(lpm, lpm->tbl24, &tbl_next, masked_ip,
- ADD_FIRST_BYTE, 1, depth, &need_tbl_nb);
+ ADD_FIRST_BYTE, 1, depth, &need_tbl_nb);
total_need_tbl_nb = need_tbl_nb;
/*
* Inspect one by one the rest of the bytes until
@@ -850,7 +838,7 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
for (i = ADD_FIRST_BYTE; i < RTE_LPM6_IPV6_ADDR_SIZE && ret == 1; i++) {
tbl = tbl_next;
ret = simulate_add_step(lpm, tbl, &tbl_next, masked_ip, 1,
- (uint8_t)(i+1), depth, &need_tbl_nb);
+ (uint8_t)(i + 1), depth, &need_tbl_nb);
total_need_tbl_nb += need_tbl_nb;
}
@@ -861,9 +849,12 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
return 0;
}
+/*
+ * Add a route
+ */
int
-rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t next_hop)
+rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+ uint32_t next_hop)
{
struct rte_lpm6_tbl_entry *tbl;
struct rte_lpm6_tbl_entry *tbl_next = NULL;
@@ -895,8 +886,8 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
/* Inspect the first three bytes through tbl24 on the first step. */
tbl = lpm->tbl24;
status = add_step(lpm, tbl, TBL24_IND, &tbl_next, &tbl_next_num,
- masked_ip, ADD_FIRST_BYTE, 1, depth, next_hop,
- is_new_rule);
+ masked_ip, ADD_FIRST_BYTE, 1, depth, next_hop,
+ is_new_rule);
assert(status >= 0);
/*
@@ -906,17 +897,13 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
for (i = ADD_FIRST_BYTE; i < RTE_LPM6_IPV6_ADDR_SIZE && status == 1; i++) {
tbl = tbl_next;
status = add_step(lpm, tbl, tbl_next_num, &tbl_next,
- &tbl_next_num, masked_ip, 1, (uint8_t)(i+1),
- depth, next_hop, is_new_rule);
+ &tbl_next_num, masked_ip, 1, (uint8_t)(i + 1),
+ depth, next_hop, is_new_rule);
assert(status >= 0);
}
return status;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_add, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip,
- uint8_t depth, uint32_t next_hop),
- rte_lpm6_add_v1705);
/*
* Takes a pointer to a table entry and inspect one level.
@@ -955,25 +942,7 @@ lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
* Looks up an IP
*/
int
-rte_lpm6_lookup_v20(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop)
-{
- uint32_t next_hop32 = 0;
- int32_t status;
-
- /* DEBUG: Check user input arguments. */
- if (next_hop == NULL)
- return -EINVAL;
-
- status = rte_lpm6_lookup_v1705(lpm, ip, &next_hop32);
- if (status == 0)
- *next_hop = (uint8_t)next_hop32;
-
- return status;
-}
-VERSION_SYMBOL(rte_lpm6_lookup, _v20, 2.0);
-
-int
-rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
+rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip,
uint32_t *next_hop)
{
const struct rte_lpm6_tbl_entry *tbl;
@@ -1000,56 +969,12 @@ rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
return status;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_lookup, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip,
- uint32_t *next_hop), rte_lpm6_lookup_v1705);
/*
* Looks up a group of IP addresses
*/
int
-rte_lpm6_lookup_bulk_func_v20(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t * next_hops, unsigned n)
-{
- unsigned i;
- const struct rte_lpm6_tbl_entry *tbl;
- const struct rte_lpm6_tbl_entry *tbl_next = NULL;
- uint32_t tbl24_index, next_hop;
- uint8_t first_byte;
- int status;
-
- /* DEBUG: Check user input arguments. */
- if ((lpm == NULL) || (ips == NULL) || (next_hops == NULL))
- return -EINVAL;
-
- for (i = 0; i < n; i++) {
- first_byte = LOOKUP_FIRST_BYTE;
- tbl24_index = (ips[i][0] << BYTES2_SIZE) |
- (ips[i][1] << BYTE_SIZE) | ips[i][2];
-
- /* Calculate pointer to the first entry to be inspected */
- tbl = &lpm->tbl24[tbl24_index];
-
- do {
- /* Continue inspecting following levels until success or failure */
- status = lookup_step(lpm, tbl, &tbl_next, ips[i], first_byte++,
- &next_hop);
- tbl = tbl_next;
- } while (status == 1);
-
- if (status < 0)
- next_hops[i] = -1;
- else
- next_hops[i] = (int16_t)next_hop;
- }
-
- return 0;
-}
-VERSION_SYMBOL(rte_lpm6_lookup_bulk_func, _v20, 2.0);
-
-int
-rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
+rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
int32_t *next_hops, unsigned int n)
{
@@ -1089,37 +1014,12 @@ rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_lookup_bulk_func, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int32_t *next_hops, unsigned int n),
- rte_lpm6_lookup_bulk_func_v1705);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm6_is_rule_present_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t *next_hop)
-{
- uint32_t next_hop32 = 0;
- int32_t status;
-
- /* DEBUG: Check user input arguments. */
- if (next_hop == NULL)
- return -EINVAL;
-
- status = rte_lpm6_is_rule_present_v1705(lpm, ip, depth, &next_hop32);
- if (status > 0)
- *next_hop = (uint8_t)next_hop32;
-
- return status;
-
-}
-VERSION_SYMBOL(rte_lpm6_is_rule_present, _v20, 2.0);
-
-int
-rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t *next_hop)
{
uint8_t masked_ip[RTE_LPM6_IPV6_ADDR_SIZE];
@@ -1135,10 +1035,6 @@ rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
return rule_find(lpm, masked_ip, depth, next_hop);
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_is_rule_present, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_is_rule_present(struct rte_lpm6 *lpm,
- uint8_t *ip, uint8_t depth, uint32_t *next_hop),
- rte_lpm6_is_rule_present_v1705);
/*
* Delete a rule from the rule table.
diff --git a/lib/librte_lpm/rte_lpm6.h b/lib/librte_lpm/rte_lpm6.h
index 5d59ccb1fe..37dfb20249 100644
--- a/lib/librte_lpm/rte_lpm6.h
+++ b/lib/librte_lpm/rte_lpm6.h
@@ -96,12 +96,6 @@ rte_lpm6_free(struct rte_lpm6 *lpm);
int
rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t next_hop);
-int
-rte_lpm6_add_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop);
-int
-rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -121,12 +115,6 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
int
rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t *next_hop);
-int
-rte_lpm6_is_rule_present_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t *next_hop);
-int
-rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -184,11 +172,6 @@ rte_lpm6_delete_all(struct rte_lpm6 *lpm);
*/
int
rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint32_t *next_hop);
-int
-rte_lpm6_lookup_v20(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop);
-int
-rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
- uint32_t *next_hop);
/**
* Lookup multiple IP addresses in an LPM table.
@@ -210,14 +193,6 @@ int
rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
int32_t *next_hops, unsigned int n);
-int
-rte_lpm6_lookup_bulk_func_v20(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t *next_hops, unsigned int n);
-int
-rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int32_t *next_hops, unsigned int n);
#ifdef __cplusplus
}
--
2.17.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v3 4/9] timer: remove deprecated code
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
` (3 preceding siblings ...)
2019-10-16 17:03 23% ` [dpdk-dev] [PATCH v3 3/9] buildtools: add ABI update shell script Anatoly Burakov
@ 2019-10-16 17:03 4% ` Anatoly Burakov
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 5/9] lpm: " Anatoly Burakov
` (4 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, bruce.richardson, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_timer/rte_timer.c | 90 ++----------------------------------
lib/librte_timer/rte_timer.h | 15 ------
2 files changed, 5 insertions(+), 100 deletions(-)
diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c
index bdcf05d06b..de6959b809 100644
--- a/lib/librte_timer/rte_timer.c
+++ b/lib/librte_timer/rte_timer.c
@@ -68,9 +68,6 @@ static struct rte_timer_data *rte_timer_data_arr;
static const uint32_t default_data_id;
static uint32_t rte_timer_subsystem_initialized;
-/* For maintaining older interfaces for a period */
-static struct rte_timer_data default_timer_data;
-
/* when debug is enabled, store some statistics */
#ifdef RTE_LIBRTE_TIMER_DEBUG
#define __TIMER_STAT_ADD(priv_timer, name, n) do { \
@@ -131,22 +128,6 @@ rte_timer_data_dealloc(uint32_t id)
return 0;
}
-void
-rte_timer_subsystem_init_v20(void)
-{
- unsigned lcore_id;
- struct priv_timer *priv_timer = default_timer_data.priv_timer;
-
- /* since priv_timer is static, it's zeroed by default, so only init some
- * fields.
- */
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id ++) {
- rte_spinlock_init(&priv_timer[lcore_id].list_lock);
- priv_timer[lcore_id].prev_lcore = lcore_id;
- }
-}
-VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
-
/* Init the timer library. Allocate an array of timer data structs in shared
* memory, and allocate the zeroth entry for use with original timer
* APIs. Since the intersection of the sets of lcore ids in primary and
@@ -154,7 +135,7 @@ VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
* multiple processes.
*/
int
-rte_timer_subsystem_init_v1905(void)
+rte_timer_subsystem_init(void)
{
const struct rte_memzone *mz;
struct rte_timer_data *data;
@@ -209,9 +190,6 @@ rte_timer_subsystem_init_v1905(void)
return 0;
}
-MAP_STATIC_SYMBOL(int rte_timer_subsystem_init(void),
- rte_timer_subsystem_init_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_subsystem_init, _v1905, 19.05);
void
rte_timer_subsystem_finalize(void)
@@ -552,42 +530,13 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,
/* Reset and start the timer associated with the timer handle tim */
int
-rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg)
-{
- uint64_t cur_time = rte_get_timer_cycles();
- uint64_t period;
-
- if (unlikely((tim_lcore != (unsigned)LCORE_ID_ANY) &&
- !(rte_lcore_is_enabled(tim_lcore) ||
- rte_lcore_has_role(tim_lcore, ROLE_SERVICE))))
- return -1;
-
- if (type == PERIODICAL)
- period = ticks;
- else
- period = 0;
-
- return __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,
- fct, arg, 0, &default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_reset, _v20, 2.0);
-
-int
-rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
+rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned int tim_lcore,
rte_timer_cb_t fct, void *arg)
{
return rte_timer_alt_reset(default_data_id, tim, ticks, type,
tim_lcore, fct, arg);
}
-MAP_STATIC_SYMBOL(int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type,
- unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg),
- rte_timer_reset_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_reset, _v1905, 19.05);
int
rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
@@ -658,20 +607,10 @@ __rte_timer_stop(struct rte_timer *tim, int local_is_locked,
/* Stop the timer associated with the timer handle tim */
int
-rte_timer_stop_v20(struct rte_timer *tim)
-{
- return __rte_timer_stop(tim, 0, &default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_stop, _v20, 2.0);
-
-int
-rte_timer_stop_v1905(struct rte_timer *tim)
+rte_timer_stop(struct rte_timer *tim)
{
return rte_timer_alt_stop(default_data_id, tim);
}
-MAP_STATIC_SYMBOL(int rte_timer_stop(struct rte_timer *tim),
- rte_timer_stop_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_stop, _v1905, 19.05);
int
rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
@@ -817,15 +756,8 @@ __rte_timer_manage(struct rte_timer_data *timer_data)
priv_timer[lcore_id].running_tim = NULL;
}
-void
-rte_timer_manage_v20(void)
-{
- __rte_timer_manage(&default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_manage, _v20, 2.0);
-
int
-rte_timer_manage_v1905(void)
+rte_timer_manage(void)
{
struct rte_timer_data *timer_data;
@@ -835,8 +767,6 @@ rte_timer_manage_v1905(void)
return 0;
}
-MAP_STATIC_SYMBOL(int rte_timer_manage(void), rte_timer_manage_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_manage, _v1905, 19.05);
int
rte_timer_alt_manage(uint32_t timer_data_id,
@@ -1074,21 +1004,11 @@ __rte_timer_dump_stats(struct rte_timer_data *timer_data __rte_unused, FILE *f)
#endif
}
-void
-rte_timer_dump_stats_v20(FILE *f)
-{
- __rte_timer_dump_stats(&default_timer_data, f);
-}
-VERSION_SYMBOL(rte_timer_dump_stats, _v20, 2.0);
-
int
-rte_timer_dump_stats_v1905(FILE *f)
+rte_timer_dump_stats(FILE *f)
{
return rte_timer_alt_dump_stats(default_data_id, f);
}
-MAP_STATIC_SYMBOL(int rte_timer_dump_stats(FILE *f),
- rte_timer_dump_stats_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_dump_stats, _v1905, 19.05);
int
rte_timer_alt_dump_stats(uint32_t timer_data_id __rte_unused, FILE *f)
diff --git a/lib/librte_timer/rte_timer.h b/lib/librte_timer/rte_timer.h
index 05d287d8f2..9dc5fc3092 100644
--- a/lib/librte_timer/rte_timer.h
+++ b/lib/librte_timer/rte_timer.h
@@ -181,8 +181,6 @@ int rte_timer_data_dealloc(uint32_t id);
* subsystem
*/
int rte_timer_subsystem_init(void);
-int rte_timer_subsystem_init_v1905(void);
-void rte_timer_subsystem_init_v20(void);
/**
* @warning
@@ -250,13 +248,6 @@ void rte_timer_init(struct rte_timer *tim);
int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned tim_lcore,
rte_timer_cb_t fct, void *arg);
-int rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg);
-int rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg);
-
/**
* Loop until rte_timer_reset() succeeds.
@@ -313,8 +304,6 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
* - (-1): The timer is in the RUNNING or CONFIG state.
*/
int rte_timer_stop(struct rte_timer *tim);
-int rte_timer_stop_v1905(struct rte_timer *tim);
-int rte_timer_stop_v20(struct rte_timer *tim);
/**
* Loop until rte_timer_stop() succeeds.
@@ -358,8 +347,6 @@ int rte_timer_pending(struct rte_timer *tim);
* - -EINVAL: timer subsystem not yet initialized
*/
int rte_timer_manage(void);
-int rte_timer_manage_v1905(void);
-void rte_timer_manage_v20(void);
/**
* Dump statistics about timers.
@@ -371,8 +358,6 @@ void rte_timer_manage_v20(void);
* - -EINVAL: timer subsystem not yet initialized
*/
int rte_timer_dump_stats(FILE *f);
-int rte_timer_dump_stats_v1905(FILE *f);
-void rte_timer_dump_stats_v20(FILE *f);
/**
* @warning
--
2.17.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v3 3/9] buildtools: add ABI update shell script
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
` (2 preceding siblings ...)
2019-10-16 17:03 14% ` [dpdk-dev] [PATCH v3 2/9] buildtools: add script for updating symbols abi version Anatoly Burakov
@ 2019-10-16 17:03 23% ` Anatoly Burakov
2019-10-16 17:03 4% ` [dpdk-dev] [PATCH v3 4/9] timer: remove deprecated code Anatoly Burakov
` (5 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, bruce.richardson, thomas, david.marchand
In order to facilitate mass updating of version files, add a shell
script that recurses into lib/ and drivers/ directories and calls
the ABI version update script.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v3:
- Switch to sh rather than bash, and remove bash-isms
- Address review comments
v2:
- Add this patch to split the shell script from previous commit
- Fixup miscellaneous bugs
buildtools/update-abi.sh | 42 ++++++++++++++++++++++++++++++++++++++++
1 file changed, 42 insertions(+)
create mode 100755 buildtools/update-abi.sh
diff --git a/buildtools/update-abi.sh b/buildtools/update-abi.sh
new file mode 100755
index 0000000000..89ba5804a6
--- /dev/null
+++ b/buildtools/update-abi.sh
@@ -0,0 +1,42 @@
+#!/bin/sh
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+abi_version=$1
+abi_version_file="./config/ABI_VERSION"
+update_path="lib drivers"
+
+if [ -z "$1" ]; then
+ # output to stderr
+ >&2 echo "Please provide ABI version"
+ exit 1
+fi
+
+# check version string format
+echo $abi_version | grep -q -e "^[[:digit:]]\{1,2\}\.[[:digit:]]\{1,2\}$"
+if [ "$?" -ne 0 ]; then
+ # output to stderr
+ >&2 echo "ABI version must be formatted as MAJOR.MINOR version"
+ exit 1
+fi
+
+if [ -n "$2" ]; then
+ abi_version_file=$2
+fi
+
+if [ -n "$3" ]; then
+ # drop $1 and $2
+ shift 2
+ # assign all other arguments as update paths
+ update_path=$@
+fi
+
+echo "New ABI version:" $abi_version
+echo "ABI_VERSION path:" $abi_version_file
+echo "Path to update:" $update_path
+
+echo $abi_version > $abi_version_file
+
+find $update_path -name \*version.map -exec \
+ ./buildtools/update_version_map_abi.py {} \
+ $abi_version \; -print
--
2.17.1
* [dpdk-dev] [PATCH v3 2/9] buildtools: add script for updating symbols abi version
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
2019-10-16 17:03 7% ` [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global Anatoly Burakov
@ 2019-10-16 17:03 14% ` Anatoly Burakov
2019-10-16 17:03 23% ` [dpdk-dev] [PATCH v3 3/9] buildtools: add ABI update shell script Anatoly Burakov
` (6 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev; +Cc: Pawel Modrak, john.mcnamara, bruce.richardson, thomas, david.marchand
From: Pawel Modrak <pawelx.modrak@intel.com>
Add a script that automatically merges all stable ABIs under one
ABI section with the new version, while leaving the experimental
section exactly as it is.
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v3:
- Add comments to regex patterns
v2:
- Reworked script to be pep8-compliant and more reliable
buildtools/update_version_map_abi.py | 170 +++++++++++++++++++++++++++
1 file changed, 170 insertions(+)
create mode 100755 buildtools/update_version_map_abi.py
diff --git a/buildtools/update_version_map_abi.py b/buildtools/update_version_map_abi.py
new file mode 100755
index 0000000000..50283e6a3d
--- /dev/null
+++ b/buildtools/update_version_map_abi.py
@@ -0,0 +1,170 @@
+#!/usr/bin/env python
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+"""
+A Python program to update the ABI version and function names in a DPDK
+lib_*_version.map file. Called from the buildtools/update_abi.sh utility.
+"""
+
+from __future__ import print_function
+import argparse
+import sys
+import re
+
+
+def __parse_map_file(f_in):
+ # match function name, followed by semicolon, followed by EOL, optionally
+ # with whitespace inbetween each item
+ func_line_regex = re.compile(r"\s*"
+ r"(?P<func>[a-zA-Z_0-9]+)"
+ r"\s*"
+ r";"
+ r"\s*"
+ r"$")
+ # match section name, followed by opening bracked, followed by EOL,
+ # optionally with whitespace inbetween each item
+ section_begin_regex = re.compile(r"\s*"
+ r"(?P<version>[a-zA-Z0-9_\.]+)"
+ r"\s*"
+ r"{"
+ r"\s*"
+ r"$")
+ # match closing bracket, optionally followed by section name (for when we
+ # inherit from another ABI version), followed by semicolon, followed by
+ # EOL, optionally with whitespace inbetween each item
+ section_end_regex = re.compile(r"\s*"
+ r"}"
+ r"\s*"
+ r"(?P<parent>[a-zA-Z0-9_\.]+)?"
+ r"\s*"
+ r";"
+ r"\s*"
+ r"$")
+
+ # for stable ABI, we don't care about which version introduced which
+ # function, we just flatten the list. there are dupes in certain files, so
+ # use a set instead of a list
+ stable_lines = set()
+ # copy experimental section as is
+ experimental_lines = []
+ is_experimental = False
+
+ # gather all functions
+ for line in f_in:
+ # clean up the line
+ line = line.strip('\n').strip()
+
+ # is this an end of section?
+ match = section_end_regex.match(line)
+ if match:
+ # whatever section this was, it's not active any more
+ is_experimental = False
+ continue
+
+ # if we're in the middle of experimental section, we need to copy
+ # the section verbatim, so just add the line
+ if is_experimental:
+ experimental_lines += [line]
+ continue
+
+ # skip empty lines
+ if not line:
+ continue
+
+ # is this a beginning of a new section?
+ match = section_begin_regex.match(line)
+ if match:
+ cur_section = match.group("version")
+ # is it experimental?
+ is_experimental = cur_section == "EXPERIMENTAL"
+ continue
+
+ # is this a function?
+ match = func_line_regex.match(line)
+ if match:
+ stable_lines.add(match.group("func"))
+
+ return stable_lines, experimental_lines
+
+
+def __regenerate_map_file(f_out, abi_version, stable_lines,
+ experimental_lines):
+ # print ABI version header
+ print("DPDK_{} {{".format(abi_version), file=f_out)
+
+ if stable_lines:
+ # print global section
+ print("\tglobal:", file=f_out)
+ # blank line
+ print(file=f_out)
+
+ # print all stable lines, alphabetically sorted
+ for line in sorted(stable_lines):
+ print("\t{};".format(line), file=f_out)
+
+ # another blank line
+ print(file=f_out)
+
+ # print local section
+ print("\tlocal: *;", file=f_out)
+
+ # end stable version
+ print("};", file=f_out)
+
+ # do we have experimental lines?
+ if not experimental_lines:
+ return
+
+ # another blank line
+ print(file=f_out)
+
+ # start experimental section
+ print("EXPERIMENTAL {", file=f_out)
+
+ # print all experimental lines as they were
+ for line in experimental_lines:
+ # don't print empty whitespace
+ if not line:
+ print("", file=f_out)
+ else:
+ print("\t{}".format(line), file=f_out)
+
+ # end section
+ print("};", file=f_out)
+
+
+def __main():
+ arg_parser = argparse.ArgumentParser(
+ description='Merge versions in linker version script.')
+
+ arg_parser.add_argument("map_file", type=str,
+ help='path to linker version script file '
+ '(pattern: *version.map)')
+ arg_parser.add_argument("abi_version", type=str,
+ help='target ABI version (pattern: MAJOR.MINOR)')
+
+ parsed = arg_parser.parse_args()
+
+ if not parsed.map_file.endswith('version.map'):
+ print("Invalid input file: {}".format(parsed.map_file),
+ file=sys.stderr)
+ arg_parser.print_help()
+ sys.exit(1)
+
+ if not re.match(r"\d{1,2}\.\d{1,2}", parsed.abi_version):
+ print("Invalid ABI version: {}".format(parsed.abi_version),
+ file=sys.stderr)
+ arg_parser.print_help()
+ sys.exit(1)
+
+ with open(parsed.map_file) as f_in:
+ stable_lines, experimental_lines = __parse_map_file(f_in)
+
+ with open(parsed.map_file, 'w') as f_out:
+ __regenerate_map_file(f_out, parsed.abi_version, stable_lines,
+ experimental_lines)
+
+
+if __name__ == "__main__":
+ __main()
--
2.17.1
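To make the flattening concrete: given a hypothetical map file (the `rte_foo` library and its symbols are invented for illustration) such as:

```
DPDK_17.05 {
	global:

	rte_foo_init;

	local: *;
};

DPDK_18.08 {
	global:

	rte_foo_poll;
} DPDK_17.05;

EXPERIMENTAL {
	global:

	rte_foo_dump;
};
```

running `update_version_map_abi.py rte_foo_version.map 20.0` would, per the parsing and regeneration logic above, collapse the stable sections into a single sorted `DPDK_20.0` block while copying the experimental section as-is:

```
DPDK_20.0 {
	global:

	rte_foo_init;
	rte_foo_poll;

	local: *;
};

EXPERIMENTAL {
	global:

	rte_foo_dump;
};
```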
* [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
@ 2019-10-16 17:03 7% ` Anatoly Burakov
2019-10-17 8:44 9% ` Bruce Richardson
2019-10-16 17:03 14% ` [dpdk-dev] [PATCH v3 2/9] buildtools: add script for updating symbols abi version Anatoly Burakov
` (7 subsequent siblings)
9 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Thomas Monjalon, Bruce Richardson, john.mcnamara,
david.marchand, Pawel Modrak
From: Marcin Baran <marcinx.baran@intel.com>
As per new ABI policy, all of the libraries are now versioned using
one global ABI version. Changes in this patch implement the
necessary steps to enable that.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v3:
- Removed Windows support from Makefile changes
- Removed unneeded path conversions from meson files
buildtools/meson.build | 2 ++
config/ABI_VERSION | 1 +
config/meson.build | 5 +++--
drivers/meson.build | 20 ++++++++++++--------
lib/meson.build | 18 +++++++++++-------
meson_options.txt | 2 --
mk/rte.lib.mk | 13 ++++---------
7 files changed, 33 insertions(+), 28 deletions(-)
create mode 100644 config/ABI_VERSION
diff --git a/buildtools/meson.build b/buildtools/meson.build
index 32c79c1308..78ce69977d 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -12,3 +12,5 @@ if python3.found()
else
map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
endif
+
+is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
diff --git a/config/ABI_VERSION b/config/ABI_VERSION
new file mode 100644
index 0000000000..9a7c1e503f
--- /dev/null
+++ b/config/ABI_VERSION
@@ -0,0 +1 @@
+20.0
diff --git a/config/meson.build b/config/meson.build
index a27f731f85..3cfc02406c 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -17,7 +17,8 @@ endforeach
# set the major version, which might be used by drivers and libraries
# depending on the configuration options
pver = meson.project_version().split('.')
-major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
+abi_version = run_command(find_program('cat', 'more'),
+ files('ABI_VERSION')).stdout().strip()
# extract all version information into the build configuration
dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
@@ -37,7 +38,7 @@ endif
pmd_subdir_opt = get_option('drivers_install_subdir')
if pmd_subdir_opt.contains('<VERSION>')
- pmd_subdir_opt = major_version.join(pmd_subdir_opt.split('<VERSION>'))
+ pmd_subdir_opt = abi_version.join(pmd_subdir_opt.split('<VERSION>'))
endif
driver_install_path = join_paths(get_option('libdir'), pmd_subdir_opt)
eal_pmd_path = join_paths(get_option('prefix'), driver_install_path)
diff --git a/drivers/meson.build b/drivers/meson.build
index 2ed2e95411..fd628d9587 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -110,12 +110,19 @@ foreach class:dpdk_driver_classes
output: out_filename,
depends: [pmdinfogen, tmp_lib])
- if get_option('per_library_versions')
- lib_version = '@0@.1'.format(version)
- so_version = '@0@'.format(version)
+ version_map = '@0@/@1@/@2@_version.map'.format(
+ meson.current_source_dir(),
+ drv_path, lib_name)
+
+ is_experimental = run_command(is_experimental_cmd,
+ files(version_map)).returncode()
+
+ if is_experimental != 0
+ lib_version = '0.1'
+ so_version = '0'
else
- lib_version = major_version
- so_version = major_version
+ lib_version = abi_version
+ so_version = abi_version
endif
# now build the static driver
@@ -128,9 +135,6 @@ foreach class:dpdk_driver_classes
install: true)
# now build the shared driver
- version_map = '@0@/@1@/@2@_version.map'.format(
- meson.current_source_dir(),
- drv_path, lib_name)
shared_lib = shared_library(lib_name,
sources,
objects: objs,
diff --git a/lib/meson.build b/lib/meson.build
index e5ff838934..e626da778c 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -97,12 +97,18 @@ foreach l:libraries
cflags += '-DALLOW_EXPERIMENTAL_API'
endif
- if get_option('per_library_versions')
- lib_version = '@0@.1'.format(version)
- so_version = '@0@'.format(version)
+ version_map = '@0@/@1@/rte_@2@_version.map'.format(
+ meson.current_source_dir(), dir_name, name)
+
+ is_experimental = run_command(is_experimental_cmd,
+ files(version_map)).returncode()
+
+ if is_experimental != 0
+ lib_version = '0.1'
+ so_version = '0'
else
- lib_version = major_version
- so_version = major_version
+ lib_version = abi_version
+ so_version = abi_version
endif
# first build static lib
@@ -120,8 +126,6 @@ foreach l:libraries
# then use pre-build objects to build shared lib
sources = []
objs += static_lib.extract_all_objects(recursive: false)
- version_map = '@0@/@1@/rte_@2@_version.map'.format(
- meson.current_source_dir(), dir_name, name)
implib = dir_name + '.dll.a'
def_file = custom_target(name + '_def',
diff --git a/meson_options.txt b/meson_options.txt
index 448f3e63dc..000e38fd98 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -28,8 +28,6 @@ option('max_lcores', type: 'integer', value: 128,
description: 'maximum number of cores/threads supported by EAL')
option('max_numa_nodes', type: 'integer', value: 4,
description: 'maximum number of NUMA nodes supported by EAL')
-option('per_library_versions', type: 'boolean', value: true,
- description: 'true: each lib gets its own version number, false: DPDK version used for each lib')
option('tests', type: 'boolean', value: true,
description: 'build unit tests')
option('use_hpet', type: 'boolean', value: false,
diff --git a/mk/rte.lib.mk b/mk/rte.lib.mk
index 4df8849a08..e1ea292b6e 100644
--- a/mk/rte.lib.mk
+++ b/mk/rte.lib.mk
@@ -11,20 +11,15 @@ EXTLIB_BUILD ?= n
# VPATH contains at least SRCDIR
VPATH += $(SRCDIR)
-ifneq ($(CONFIG_RTE_MAJOR_ABI),)
-ifneq ($(LIBABIVER),)
-LIBABIVER := $(CONFIG_RTE_MAJOR_ABI)
-endif
+ifneq ($(shell grep "^DPDK_" $(SRCDIR)/$(EXPORT_MAP)),)
+LIBABIVER := $(shell cat $(RTE_SRCDIR)/config/ABI_VERSION)
+else
+LIBABIVER := 0
endif
ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
LIB := $(patsubst %.a,%.so.$(LIBABIVER),$(LIB))
ifeq ($(EXTLIB_BUILD),n)
-ifeq ($(CONFIG_RTE_MAJOR_ABI),)
-ifeq ($(CONFIG_RTE_NEXT_ABI),y)
-LIB := $(LIB).1
-endif
-endif
CPU_LDFLAGS += --version-script=$(SRCDIR)/$(EXPORT_MAP)
endif
endif
--
2.17.1
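The experimental check introduced above, in both the meson and make paths, reduces to grepping the map file for a stable `DPDK_` section. A minimal POSIX-sh sketch (the file name and contents are invented for illustration):

```shell
#!/bin/sh
# Write a hypothetical experimental-only map file.
map_file=/tmp/rte_foo_version.map
cat > "$map_file" <<'EOF'
EXPERIMENTAL {
	global:
	rte_foo_dump;
};
EOF

# Same test the build system performs: a stable DPDK_* section
# present means the global ABI version is used, otherwise the
# library gets so_version 0 (experimental).
if grep -q '^DPDK_' "$map_file"; then
	so_version=20.0
else
	so_version=0
fi
echo "$so_version"
```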
* [dpdk-dev] [PATCH v3 0/9] Implement the new ABI policy and add helper scripts
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
@ 2019-10-16 17:03 8% ` Anatoly Burakov
2019-10-17 8:50 4% ` Bruce Richardson
` (11 more replies)
2019-10-16 17:03 7% ` [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global Anatoly Burakov
` (8 subsequent siblings)
9 siblings, 12 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 17:03 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, bruce.richardson, thomas, david.marchand
This patchset prepares the codebase for the new ABI policy and
adds a few helper scripts.
There are two new scripts for managing ABI versions added. The
first one is a Python script that will read in a .map file,
flatten it and update the ABI version to the ABI version
specified on the command-line.
The second one is a shell script that will run the above-mentioned
Python script recursively over the source tree and set the ABI
version to either that which is defined in config/ABI_VERSION, or
a user-specified one.
Example of its usage: buildtools/update-abi.sh 20.0
This will recurse into lib/ and drivers/ directory and update
whatever .map files it can find.
The other shell script that's added is one that can take in a .so
file and ensure that its declared public ABI matches either
current ABI, next ABI, or EXPERIMENTAL. This was moved to the
last commit because it made no sense to have it beforehand.
The source tree was verified to follow the new ABI policy using
the following command (assuming built binaries are in build/):
find ./build/lib ./build/drivers -name \*.so \
-exec ./buildtools/check-abi-version.sh {} \; -print
This returns 0.
Changes since v2:
- Addressed Bruce's review comments
- Removed single distributor mode as per Dave's suggestion
Changes since v1:
- Reordered patchset to have removal of old ABI's before introducing
the new one to avoid compile breakages between patches
- Added a new patch fixing missing symbol in octeontx common
- Split script commits into multiple commits and reordered them
- Re-generated the ABI bump commit
- Verified all scripts to work
Anatoly Burakov (2):
buildtools: add ABI update shell script
drivers/octeontx: add missing public symbol
Marcin Baran (5):
config: change ABI versioning to global
timer: remove deprecated code
lpm: remove deprecated code
distributor: remove deprecated code
buildtools: add ABI versioning check script
Pawel Modrak (2):
buildtools: add script for updating symbols abi version
build: change ABI version to 20.0
app/test/test_distributor.c | 102 +-
app/test/test_distributor_perf.c | 12 -
buildtools/check-abi-version.sh | 54 +
buildtools/meson.build | 2 +
buildtools/update-abi.sh | 42 +
buildtools/update_version_map_abi.py | 170 +++
config/ABI_VERSION | 1 +
config/meson.build | 5 +-
.../rte_pmd_bbdev_fpga_lte_fec_version.map | 8 +-
.../null/rte_pmd_bbdev_null_version.map | 2 +-
.../rte_pmd_bbdev_turbo_sw_version.map | 2 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 115 +-
drivers/bus/fslmc/rte_bus_fslmc_version.map | 154 ++-
drivers/bus/ifpga/rte_bus_ifpga_version.map | 14 +-
drivers/bus/pci/rte_bus_pci_version.map | 2 +-
drivers/bus/vdev/rte_bus_vdev_version.map | 12 +-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 12 +-
drivers/common/cpt/rte_common_cpt_version.map | 4 +-
.../common/dpaax/rte_common_dpaax_version.map | 4 +-
.../common/mvep/rte_common_mvep_version.map | 6 +-
.../octeontx/rte_common_octeontx_version.map | 7 +-
.../rte_common_octeontx2_version.map | 16 +-
.../compress/isal/rte_pmd_isal_version.map | 2 +-
.../rte_pmd_octeontx_compress_version.map | 2 +-
drivers/compress/qat/rte_pmd_qat_version.map | 2 +-
.../compress/zlib/rte_pmd_zlib_version.map | 2 +-
.../aesni_gcm/rte_pmd_aesni_gcm_version.map | 2 +-
.../aesni_mb/rte_pmd_aesni_mb_version.map | 2 +-
.../crypto/armv8/rte_pmd_armv8_version.map | 2 +-
.../caam_jr/rte_pmd_caam_jr_version.map | 3 +-
drivers/crypto/ccp/rte_pmd_ccp_version.map | 3 +-
.../dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 10 +-
.../dpaa_sec/rte_pmd_dpaa_sec_version.map | 10 +-
.../crypto/kasumi/rte_pmd_kasumi_version.map | 2 +-
.../crypto/mvsam/rte_pmd_mvsam_version.map | 2 +-
.../crypto/nitrox/rte_pmd_nitrox_version.map | 2 +-
.../null/rte_pmd_null_crypto_version.map | 2 +-
.../rte_pmd_octeontx_crypto_version.map | 3 +-
.../openssl/rte_pmd_openssl_version.map | 2 +-
.../rte_pmd_crypto_scheduler_version.map | 19 +-
.../crypto/snow3g/rte_pmd_snow3g_version.map | 2 +-
.../virtio/rte_pmd_virtio_crypto_version.map | 2 +-
drivers/crypto/zuc/rte_pmd_zuc_version.map | 2 +-
.../event/dpaa/rte_pmd_dpaa_event_version.map | 3 +-
.../dpaa2/rte_pmd_dpaa2_event_version.map | 2 +-
.../event/dsw/rte_pmd_dsw_event_version.map | 2 +-
.../rte_pmd_octeontx_event_version.map | 2 +-
.../rte_pmd_octeontx2_event_version.map | 3 +-
.../event/opdl/rte_pmd_opdl_event_version.map | 2 +-
.../rte_pmd_skeleton_event_version.map | 3 +-
drivers/event/sw/rte_pmd_sw_event_version.map | 2 +-
.../bucket/rte_mempool_bucket_version.map | 3 +-
.../mempool/dpaa/rte_mempool_dpaa_version.map | 2 +-
.../dpaa2/rte_mempool_dpaa2_version.map | 12 +-
.../octeontx/rte_mempool_octeontx_version.map | 2 +-
.../rte_mempool_octeontx2_version.map | 4 +-
.../mempool/ring/rte_mempool_ring_version.map | 3 +-
.../stack/rte_mempool_stack_version.map | 3 +-
drivers/meson.build | 20 +-
.../af_packet/rte_pmd_af_packet_version.map | 3 +-
drivers/net/af_xdp/rte_pmd_af_xdp_version.map | 2 +-
drivers/net/ark/rte_pmd_ark_version.map | 5 +-
.../net/atlantic/rte_pmd_atlantic_version.map | 4 +-
drivers/net/avp/rte_pmd_avp_version.map | 2 +-
drivers/net/axgbe/rte_pmd_axgbe_version.map | 2 +-
drivers/net/bnx2x/rte_pmd_bnx2x_version.map | 3 +-
drivers/net/bnxt/rte_pmd_bnxt_version.map | 4 +-
drivers/net/bonding/rte_pmd_bond_version.map | 47 +-
drivers/net/cxgbe/rte_pmd_cxgbe_version.map | 3 +-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 11 +-
drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +-
drivers/net/e1000/rte_pmd_e1000_version.map | 3 +-
drivers/net/ena/rte_pmd_ena_version.map | 3 +-
drivers/net/enetc/rte_pmd_enetc_version.map | 3 +-
drivers/net/enic/rte_pmd_enic_version.map | 3 +-
.../net/failsafe/rte_pmd_failsafe_version.map | 3 +-
drivers/net/fm10k/rte_pmd_fm10k_version.map | 3 +-
drivers/net/hinic/rte_pmd_hinic_version.map | 3 +-
drivers/net/hns3/rte_pmd_hns3_version.map | 4 +-
drivers/net/i40e/rte_pmd_i40e_version.map | 65 +-
drivers/net/iavf/rte_pmd_iavf_version.map | 3 +-
drivers/net/ice/rte_pmd_ice_version.map | 3 +-
drivers/net/ifc/rte_pmd_ifc_version.map | 3 +-
drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 3 +-
drivers/net/ixgbe/rte_pmd_ixgbe_version.map | 62 +-
drivers/net/kni/rte_pmd_kni_version.map | 3 +-
.../net/liquidio/rte_pmd_liquidio_version.map | 3 +-
drivers/net/memif/rte_pmd_memif_version.map | 5 +-
drivers/net/mlx4/rte_pmd_mlx4_version.map | 3 +-
drivers/net/mlx5/rte_pmd_mlx5_version.map | 2 +-
drivers/net/mvneta/rte_pmd_mvneta_version.map | 2 +-
drivers/net/mvpp2/rte_pmd_mvpp2_version.map | 2 +-
drivers/net/netvsc/rte_pmd_netvsc_version.map | 4 +-
drivers/net/nfb/rte_pmd_nfb_version.map | 3 +-
drivers/net/nfp/rte_pmd_nfp_version.map | 2 +-
drivers/net/null/rte_pmd_null_version.map | 3 +-
.../net/octeontx/rte_pmd_octeontx_version.map | 10 +-
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +-
drivers/net/pcap/rte_pmd_pcap_version.map | 3 +-
drivers/net/qede/rte_pmd_qede_version.map | 3 +-
drivers/net/ring/rte_pmd_ring_version.map | 10 +-
drivers/net/sfc/rte_pmd_sfc_version.map | 3 +-
.../net/softnic/rte_pmd_softnic_version.map | 2 +-
.../net/szedata2/rte_pmd_szedata2_version.map | 2 +-
drivers/net/tap/rte_pmd_tap_version.map | 3 +-
.../net/thunderx/rte_pmd_thunderx_version.map | 3 +-
.../rte_pmd_vdev_netvsc_version.map | 3 +-
drivers/net/vhost/rte_pmd_vhost_version.map | 11 +-
drivers/net/virtio/rte_pmd_virtio_version.map | 3 +-
.../net/vmxnet3/rte_pmd_vmxnet3_version.map | 3 +-
.../rte_rawdev_dpaa2_cmdif_version.map | 3 +-
.../rte_rawdev_dpaa2_qdma_version.map | 4 +-
.../raw/ifpga/rte_rawdev_ifpga_version.map | 3 +-
drivers/raw/ioat/rte_rawdev_ioat_version.map | 3 +-
drivers/raw/ntb/rte_rawdev_ntb_version.map | 5 +-
.../rte_rawdev_octeontx2_dma_version.map | 3 +-
.../skeleton/rte_rawdev_skeleton_version.map | 3 +-
lib/librte_acl/rte_acl_version.map | 2 +-
lib/librte_bbdev/rte_bbdev_version.map | 4 +
.../rte_bitratestats_version.map | 2 +-
lib/librte_bpf/rte_bpf_version.map | 4 +
lib/librte_cfgfile/rte_cfgfile_version.map | 34 +-
lib/librte_cmdline/rte_cmdline_version.map | 10 +-
.../rte_compressdev_version.map | 4 +
.../rte_cryptodev_version.map | 102 +-
lib/librte_distributor/Makefile | 1 -
lib/librte_distributor/meson.build | 2 +-
lib/librte_distributor/rte_distributor.c | 126 +--
lib/librte_distributor/rte_distributor.h | 1 -
.../rte_distributor_private.h | 35 -
.../rte_distributor_v1705.h | 61 --
lib/librte_distributor/rte_distributor_v20.c | 402 -------
lib/librte_distributor/rte_distributor_v20.h | 218 ----
.../rte_distributor_version.map | 16 +-
lib/librte_eal/rte_eal_version.map | 310 ++----
lib/librte_efd/rte_efd_version.map | 2 +-
lib/librte_ethdev/rte_ethdev_version.map | 160 +--
lib/librte_eventdev/rte_eventdev_version.map | 130 +--
.../rte_flow_classify_version.map | 4 +
lib/librte_gro/rte_gro_version.map | 2 +-
lib/librte_gso/rte_gso_version.map | 2 +-
lib/librte_hash/rte_hash_version.map | 43 +-
lib/librte_ip_frag/rte_ip_frag_version.map | 10 +-
lib/librte_ipsec/rte_ipsec_version.map | 4 +
lib/librte_jobstats/rte_jobstats_version.map | 10 +-
lib/librte_kni/rte_kni_version.map | 2 +-
lib/librte_kvargs/rte_kvargs_version.map | 4 +-
.../rte_latencystats_version.map | 2 +-
lib/librte_lpm/rte_lpm.c | 996 +-----------------
lib/librte_lpm/rte_lpm.h | 88 --
lib/librte_lpm/rte_lpm6.c | 132 +--
lib/librte_lpm/rte_lpm6.h | 25 -
lib/librte_lpm/rte_lpm_version.map | 39 +-
lib/librte_mbuf/rte_mbuf_version.map | 41 +-
lib/librte_member/rte_member_version.map | 2 +-
lib/librte_mempool/rte_mempool_version.map | 44 +-
lib/librte_meter/rte_meter_version.map | 13 +-
lib/librte_metrics/rte_metrics_version.map | 2 +-
lib/librte_net/rte_net_version.map | 23 +-
lib/librte_pci/rte_pci_version.map | 2 +-
lib/librte_pdump/rte_pdump_version.map | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 36 +-
lib/librte_port/rte_port_version.map | 64 +-
lib/librte_power/rte_power_version.map | 24 +-
lib/librte_rawdev/rte_rawdev_version.map | 4 +-
lib/librte_rcu/rte_rcu_version.map | 4 +
lib/librte_reorder/rte_reorder_version.map | 8 +-
lib/librte_ring/rte_ring_version.map | 10 +-
lib/librte_sched/rte_sched_version.map | 14 +-
lib/librte_security/rte_security_version.map | 2 +-
lib/librte_stack/rte_stack_version.map | 4 +
lib/librte_table/rte_table_version.map | 2 +-
.../rte_telemetry_version.map | 4 +
lib/librte_timer/rte_timer.c | 90 +-
lib/librte_timer/rte_timer.h | 15 -
lib/librte_timer/rte_timer_version.map | 12 +-
lib/librte_vhost/rte_vhost_version.map | 52 +-
lib/meson.build | 18 +-
meson_options.txt | 2 -
mk/rte.lib.mk | 13 +-
180 files changed, 1111 insertions(+), 3657 deletions(-)
create mode 100755 buildtools/check-abi-version.sh
create mode 100755 buildtools/update-abi.sh
create mode 100755 buildtools/update_version_map_abi.py
create mode 100644 config/ABI_VERSION
delete mode 100644 lib/librte_distributor/rte_distributor_v1705.h
delete mode 100644 lib/librte_distributor/rte_distributor_v20.c
delete mode 100644 lib/librte_distributor/rte_distributor_v20.h
--
2.17.1
* Re: [dpdk-dev] [PATCH v2 03/10] buildtools: add ABI update shell script
2019-10-16 12:43 22% ` [dpdk-dev] [PATCH v2 03/10] buildtools: add ABI update shell script Anatoly Burakov
@ 2019-10-16 13:33 4% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2019-10-16 13:33 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, john.mcnamara, thomas, david.marchand
On Wed, Oct 16, 2019 at 01:43:18PM +0100, Anatoly Burakov wrote:
> In order to facilitate mass updating of version files, add a shell
> script that recurses into lib/ and drivers/ directories and calls
> the ABI version update script.
>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>
> Notes:
> v2:
> - Add this patch to split the shell script from previous commit
> - Fixup miscellaneous bugs
>
> buildtools/update-abi.sh | 36 ++++++++++++++++++++++++++++++++++++
> 1 file changed, 36 insertions(+)
> create mode 100755 buildtools/update-abi.sh
>
> diff --git a/buildtools/update-abi.sh b/buildtools/update-abi.sh
> new file mode 100755
> index 0000000000..a6f916a437
> --- /dev/null
> +++ b/buildtools/update-abi.sh
> @@ -0,0 +1,36 @@
> +#!/bin/bash
Does this actually need to be bash? Most of our scripts use plain "sh".
Also on FreeBSD bash is generally in /usr/local/bin not /bin.
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2019 Intel Corporation
> +
> +abi_version=""
> +abi_version_file="./config/ABI_VERSION"
> +update_path="lib drivers"
> +
> +if [ -z "$1" ]
> +then
While there are a few scripts in DPDK putting the "then" on the next line,
most scripts put it on the same line as the "if", after a ";".
> + # output to stderr
> + >&2 echo "provide ABI version"
> + exit 1
> +fi
> +
> +abi_version=$1
I think you can just do this assignment at the top when you define
abi_version in the first place. Using $1 when it doesn't exist isn't a
problem.
> +
> +if [ -n "$2" ]
> +then
> + abi_version_file=$2
> +fi
> +
> +if [ -n "$3" ]
> +then
> + update_path=${@:3}
I think this might be a bash-ism, right? If so, I think using "shift" and
then directly using $@ should work instead to make it sh-compatible.
> +fi
> +
> +echo "New ABI version:" $abi_version
> +echo "ABI_VERSION path:" $abi_version_file
> +echo "Path to update:" $update_path
> +
> +echo $abi_version > $abi_version_file
Do we need to check that the abi_version provided is in the correct format?
Should it have both major and minor components, or just major? I think the
former, so we can do minor bumps while keeping major compatibility.
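One way to enforce a MAJOR.MINOR format in plain sh, sketched here as a helper function (this mirrors the grep-based check that the v3 revision of the patch adopts):

```shell
#!/bin/sh
# Return success iff the argument looks like MAJOR.MINOR
# (one or two digits on each side of the dot).
is_valid_abi() {
    echo "$1" | grep -q -e '^[0-9]\{1,2\}\.[0-9]\{1,2\}$'
}

is_valid_abi "20.0" && echo "20.0 accepted"
is_valid_abi "20" || echo "20 rejected"
```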
> +
> +find $update_path -name \*version.map -exec \
> + ./buildtools/update_version_map_abi.py {} \
> + $abi_version \; -print
> --
> 2.17.1
* Re: [dpdk-dev] [PATCH v2 02/10] buildtools: add script for updating symbols abi version
2019-10-16 12:43 14% ` [dpdk-dev] [PATCH v2 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
@ 2019-10-16 13:25 4% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2019-10-16 13:25 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: dev, Pawel Modrak, john.mcnamara, thomas, david.marchand
On Wed, Oct 16, 2019 at 01:43:17PM +0100, Anatoly Burakov wrote:
> From: Pawel Modrak <pawelx.modrak@intel.com>
>
> Add a script that automatically merges all stable ABI's under one
> ABI section with the new version, while leaving experimental
> section exactly as it is.
>
> Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
>
> Notes:
> v2:
> - Reworked script to be pep8-compliant and more reliable
>
> buildtools/update_version_map_abi.py | 148 +++++++++++++++++++++++++++
> 1 file changed, 148 insertions(+)
> create mode 100755 buildtools/update_version_map_abi.py
>
> diff --git a/buildtools/update_version_map_abi.py b/buildtools/update_version_map_abi.py
> new file mode 100755
> index 0000000000..ea9044cc81
> --- /dev/null
> +++ b/buildtools/update_version_map_abi.py
> @@ -0,0 +1,148 @@
> +#!/usr/bin/env python
> +# SPDX-License-Identifier: BSD-3-Clause
> +# Copyright(c) 2019 Intel Corporation
> +
> +"""
> +A Python program to update the ABI version and function names in a DPDK
> +lib_*_version.map file. Called from the buildtools/update_abi.sh utility.
> +"""
> +
> +from __future__ import print_function
> +import argparse
> +import sys
> +import re
> +
> +
> +def __parse_map_file(f_in):
> + func_line_regex = re.compile(r"\s*(?P<func>[a-zA-Z_0-9]+)\s*;\s*$")
> + section_begin_regex = re.compile(
> + r"\s*(?P<version>[a-zA-Z0-9_\.]+)\s*{\s*$")
> + section_end_regex = re.compile(
> + r"\s*}\s*(?P<parent>[a-zA-Z0-9_\.]+)?\s*;\s*$")
> +
To help readers of the code, can you put in a line or two of comment
explaining each regex a bit above.
* Re: [dpdk-dev] [PATCH v2 01/10] config: change ABI versioning for global
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 01/10] config: change ABI versioning for global Anatoly Burakov
@ 2019-10-16 13:22 4% ` Bruce Richardson
0 siblings, 0 replies; 200+ results
From: Bruce Richardson @ 2019-10-16 13:22 UTC (permalink / raw)
To: Anatoly Burakov
Cc: dev, Marcin Baran, Thomas Monjalon, john.mcnamara,
david.marchand, Pawel Modrak
On Wed, Oct 16, 2019 at 01:43:16PM +0100, Anatoly Burakov wrote:
> From: Marcin Baran <marcinx.baran@intel.com>
>
> The libraries should be maintained using global
> ABI versioning. The changes include adding global
> ABI version support to both the makefile and meson
> build systems. Experimental libraries should be
> marked as 0.
>
> Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
> Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
Some comments inline below.
/Bruce
> buildtools/meson.build | 2 ++
> config/ABI_VERSION | 1 +
> config/meson.build | 3 ++-
> drivers/meson.build | 20 ++++++++++++++------
> lib/meson.build | 18 +++++++++++++-----
> meson_options.txt | 2 --
> mk/rte.lib.mk | 19 +++++++++++--------
> 7 files changed, 43 insertions(+), 22 deletions(-)
> create mode 100644 config/ABI_VERSION
>
> diff --git a/buildtools/meson.build b/buildtools/meson.build
> index 32c79c1308..78ce69977d 100644
> --- a/buildtools/meson.build
> +++ b/buildtools/meson.build
> @@ -12,3 +12,5 @@ if python3.found()
> else
> map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
> endif
> +
> +is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
> diff --git a/config/ABI_VERSION b/config/ABI_VERSION
> new file mode 100644
> index 0000000000..9a7c1e503f
> --- /dev/null
> +++ b/config/ABI_VERSION
> @@ -0,0 +1 @@
> +20.0
> diff --git a/config/meson.build b/config/meson.build
> index a27f731f85..25ecf928e4 100644
> --- a/config/meson.build
> +++ b/config/meson.build
> @@ -17,7 +17,8 @@ endforeach
> # set the major version, which might be used by drivers and libraries
> # depending on the configuration options
> pver = meson.project_version().split('.')
> -major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
> +major_version = run_command(find_program('cat', 'more'),
> + files('ABI_VERSION')).stdout().strip()
>
I wonder if we should rename this to abi_version rather than major_version?
> # extract all version information into the build configuration
> dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
> diff --git a/drivers/meson.build b/drivers/meson.build
> index 2ed2e95411..5c5fe87c7e 100644
> --- a/drivers/meson.build
> +++ b/drivers/meson.build
> @@ -110,9 +110,20 @@ foreach class:dpdk_driver_classes
> output: out_filename,
> depends: [pmdinfogen, tmp_lib])
>
> - if get_option('per_library_versions')
> - lib_version = '@0@.1'.format(version)
> - so_version = '@0@'.format(version)
> + version_map = '@0@/@1@/@2@_version.map'.format(
> + meson.current_source_dir(),
> + drv_path, lib_name)
> +
> + if is_windows
> + version_map = '\\'.join(version_map.split('/'))
> + endif
I don't think this block should be needed. Windows generally supports using
"/" as a path separator, even though "\" was traditionally used.
> +
> + is_experimental = run_command(is_experimental_cmd,
> + files(version_map)).returncode()
> +
> + if is_experimental != 0
> + lib_version = '0.1'
> + so_version = '0'
> else
> lib_version = major_version
> so_version = major_version
> @@ -128,9 +139,6 @@ foreach class:dpdk_driver_classes
> install: true)
>
> # now build the shared driver
> - version_map = '@0@/@1@/@2@_version.map'.format(
> - meson.current_source_dir(),
> - drv_path, lib_name)
> shared_lib = shared_library(lib_name,
> sources,
> objects: objs,
> diff --git a/lib/meson.build b/lib/meson.build
> index e5ff838934..3892c16e8f 100644
> --- a/lib/meson.build
> +++ b/lib/meson.build
> @@ -97,9 +97,19 @@ foreach l:libraries
> cflags += '-DALLOW_EXPERIMENTAL_API'
> endif
>
> - if get_option('per_library_versions')
> - lib_version = '@0@.1'.format(version)
> - so_version = '@0@'.format(version)
> + version_map = '@0@/@1@/rte_@2@_version.map'.format(
> + meson.current_source_dir(), dir_name, name)
> +
> + if is_windows
> + version_map = '\\'.join(version_map.split('/'))
> + endif
As above.
> +
> + is_experimental = run_command(is_experimental_cmd,
> + files(version_map)).returncode()
> +
> + if is_experimental != 0
> + lib_version = '0.1'
> + so_version = '0'
> else
> lib_version = major_version
> so_version = major_version
> @@ -120,8 +130,6 @@ foreach l:libraries
> # then use pre-build objects to build shared lib
> sources = []
> objs += static_lib.extract_all_objects(recursive: false)
> - version_map = '@0@/@1@/rte_@2@_version.map'.format(
> - meson.current_source_dir(), dir_name, name)
> implib = dir_name + '.dll.a'
>
> def_file = custom_target(name + '_def',
> diff --git a/meson_options.txt b/meson_options.txt
> index 448f3e63dc..000e38fd98 100644
> --- a/meson_options.txt
> +++ b/meson_options.txt
> @@ -28,8 +28,6 @@ option('max_lcores', type: 'integer', value: 128,
> description: 'maximum number of cores/threads supported by EAL')
> option('max_numa_nodes', type: 'integer', value: 4,
> description: 'maximum number of NUMA nodes supported by EAL')
> -option('per_library_versions', type: 'boolean', value: true,
> - description: 'true: each lib gets its own version number, false: DPDK version used for each lib')
> option('tests', type: 'boolean', value: true,
> description: 'build unit tests')
> option('use_hpet', type: 'boolean', value: false,
> diff --git a/mk/rte.lib.mk b/mk/rte.lib.mk
> index 4df8849a08..f84161c6d5 100644
> --- a/mk/rte.lib.mk
> +++ b/mk/rte.lib.mk
> @@ -11,20 +11,23 @@ EXTLIB_BUILD ?= n
> # VPATH contains at least SRCDIR
> VPATH += $(SRCDIR)
>
> -ifneq ($(CONFIG_RTE_MAJOR_ABI),)
> -ifneq ($(LIBABIVER),)
> -LIBABIVER := $(CONFIG_RTE_MAJOR_ABI)
> +ifeq ($(OS), Windows_NT)
> +search_cmd = findstr
> +print_cmd = more
> +else
> +search_cmd = grep
> +print_cmd = cat
We don't support make on Windows, so there is no need to use findstr.
> endif
> +
> +ifneq ($(shell $(search_cmd) "^DPDK_" $(SRCDIR)/$(EXPORT_MAP)),)
> +LIBABIVER := $(shell $(print_cmd) $(RTE_SRCDIR)/config/ABI_VERSION)
> +else
> +LIBABIVER := 0
> endif
>
> ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
> LIB := $(patsubst %.a,%.so.$(LIBABIVER),$(LIB))
> ifeq ($(EXTLIB_BUILD),n)
> -ifeq ($(CONFIG_RTE_MAJOR_ABI),)
> -ifeq ($(CONFIG_RTE_NEXT_ABI),y)
> -LIB := $(LIB).1
> -endif
> -endif
> CPU_LDFLAGS += --version-script=$(SRCDIR)/$(EXPORT_MAP)
> endif
> endif
> --
> 2.17.1
^ permalink raw reply [relevance 4%]
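The per-library logic discussed above boils down to grepping the version map for a stable `DPDK_` block: if none is found, the library is treated as experimental and gets so_version 0. A minimal standalone sketch of that check (the map file path and its contents are hypothetical, not from the patch):

```shell
#!/bin/sh
# Sketch of the is_experimental check: an experimental library's version
# map has only an EXPERIMENTAL block, so 'grep ^DPDK_' finds nothing.
cat > /tmp/rte_demo_version.map <<'EOF'
EXPERIMENTAL {
	global:
	rte_demo_do_thing;
	local: *;
};
EOF

# Same test the build system runs: non-zero grep status => experimental.
if grep -q '^DPDK_' /tmp/rte_demo_version.map; then
	echo "stable: version taken from config/ABI_VERSION"
else
	echo "experimental: so_version 0"
fi
```

Adding a `DPDK_20.0 { ... };` block to the map would flip the result to the stable branch.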
* [dpdk-dev] [PATCH v2 10/10] buildtools: add ABI versioning check script
` (8 preceding siblings ...)
2019-10-16 12:43 2% ` [dpdk-dev] [PATCH v2 09/10] build: change ABI version to 20.0 Anatoly Burakov
@ 2019-10-16 12:43 23% ` Anatoly Burakov
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, john.mcnamara, bruce.richardson, thomas,
david.marchand, Pawel Modrak
From: Marcin Baran <marcinx.baran@intel.com>
Add a shell script that checks whether built libraries are
versioned with the expected ABI (current ABI, current ABI + 1,
or EXPERIMENTAL).
The following command was used to verify current source tree
(assuming build directory is in ./build):
find ./build/lib ./build/drivers -name \*.so \
-exec ./buildtools/check-abi-version.sh {} \; -print
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Moved this to the end of the patchset
- Fixed bug when ABI symbols were not found because the .so
did not declare any public symbols
buildtools/check-abi-version.sh | 54 +++++++++++++++++++++++++++++++++
1 file changed, 54 insertions(+)
create mode 100755 buildtools/check-abi-version.sh
diff --git a/buildtools/check-abi-version.sh b/buildtools/check-abi-version.sh
new file mode 100755
index 0000000000..29aea97735
--- /dev/null
+++ b/buildtools/check-abi-version.sh
@@ -0,0 +1,54 @@
+#!/bin/sh
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+# Check whether library symbols have correct
+# version (provided ABI number or provided ABI
+# number + 1 or EXPERIMENTAL).
+# Args:
+# $1: path of the library .so file
+# $2: ABI major version number to check
+# (defaults to ABI_VERSION file value)
+
+if [ -z "$1" ]; then
+ echo "Script checks whether library symbols have"
+ echo "correct version (ABI_VER/ABI_VER+1/EXPERIMENTAL)"
+ echo "Usage:"
+ echo " $0 SO_FILE_PATH [ABI_VER]"
+ exit 1
+fi
+
+LIB="$1"
+DEFAULT_ABI=$(cat "$(dirname \
+ $(readlink -f $0))/../config/ABI_VERSION" | \
+ cut -d'.' -f 1)
+ABIVER="DPDK_${2-$DEFAULT_ABI}"
+NEXT_ABIVER="DPDK_$((${2-$DEFAULT_ABI}+1))"
+
+ret=0
+
+# get output of objdump
+OBJ_DUMP_OUTPUT=`objdump -TC --section=.text ${LIB} 2>&1 | grep ".text"`
+
+# there may not be any .text sections in the .so file, in which case exit early
+echo "${OBJ_DUMP_OUTPUT}" | grep "not found in any input file" -q
+if [ "$?" -eq 0 ]; then
+ exit 0
+fi
+
+# we have symbols, so let's see if the versions are correct
+for SYM in `echo "${OBJ_DUMP_OUTPUT}" | awk '{print $(NF-1) "-" $NF}'`
+do
+ version=$(echo $SYM | cut -d'-' -f 1)
+ symbol=$(echo $SYM | cut -d'-' -f 2)
+ case $version in (*"$ABIVER"*|*"$NEXT_ABIVER"*|"EXPERIMENTAL")
+ ;;
+ (*)
+ echo "Warning: symbol $symbol ($version) should be annotated " \
+ "as ABI version $ABIVER / $NEXT_ABIVER, or EXPERIMENTAL."
+ ret=1
+ ;;
+ esac
+done
+
+exit $ret
--
2.17.1
^ permalink raw reply [relevance 23%]
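The awk/cut pipeline in the script above joins the last two columns of each `objdump -TC` line (version and demangled symbol name) with a "-" and then splits them back apart. A standalone sketch of that parsing on one sample line (the address and symbol name are made up for illustration):

```shell
#!/bin/sh
# One line in the shape objdump -TC --section=.text emits for a
# dynamic symbol: ... <version> <symbol>. Hypothetical sample values.
LINE='0000000000001000 g    DF .text  0000000000000040  DPDK_20.0   rte_demo_func'

# Same extraction as the script: last-but-one field is the version,
# last field is the symbol name.
SYM=$(echo "$LINE" | awk '{print $(NF-1) "-" $NF}')
version=$(echo "$SYM" | cut -d'-' -f 1)   # DPDK_20.0
symbol=$(echo "$SYM" | cut -d'-' -f 2)    # rte_demo_func
echo "$symbol is tagged $version"
```

Note this relies on neither the version node nor the symbol name containing a "-", which holds for `DPDK_*` and `EXPERIMENTAL` nodes and C identifiers.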
* [dpdk-dev] [PATCH v2 08/10] drivers/octeontx: add missing public symbol
` (6 preceding siblings ...)
2019-10-16 12:43 4% ` [dpdk-dev] [PATCH v2 06/10] distributor: " Anatoly Burakov
@ 2019-10-16 12:43 3% ` Anatoly Burakov
2019-10-16 12:43 2% ` [dpdk-dev] [PATCH v2 09/10] build: change ABI version to 20.0 Anatoly Burakov
2019-10-16 12:43 23% ` [dpdk-dev] [PATCH v2 10/10] buildtools: add ABI versioning check script Anatoly Burakov
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev
Cc: Jerin Jacob, john.mcnamara, bruce.richardson, thomas,
david.marchand, pbhagavatula, stable
The logtype symbol was missing from the .map file. Add it.
Fixes: d8dd31652cf4 ("common/octeontx: move mbox to common folder")
Cc: pbhagavatula@caviumnetworks.com
Cc: stable@dpdk.org
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- add this patch to avoid compile breakage when bumping ABI
drivers/common/octeontx/rte_common_octeontx_version.map | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/common/octeontx/rte_common_octeontx_version.map b/drivers/common/octeontx/rte_common_octeontx_version.map
index f04b3b7f8a..a9b3cff9bc 100644
--- a/drivers/common/octeontx/rte_common_octeontx_version.map
+++ b/drivers/common/octeontx/rte_common_octeontx_version.map
@@ -1,6 +1,7 @@
DPDK_18.05 {
global:
+ octeontx_logtype_mbox;
octeontx_mbox_set_ram_mbox_base;
octeontx_mbox_set_reg;
octeontx_mbox_send;
--
2.17.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v2 06/10] distributor: remove deprecated code
` (5 preceding siblings ...)
2019-10-16 12:43 2% ` [dpdk-dev] [PATCH v2 05/10] lpm: " Anatoly Burakov
@ 2019-10-16 12:43 4% ` Anatoly Burakov
2019-10-16 12:43 3% ` [dpdk-dev] [PATCH v2 08/10] drivers/octeontx: add missing public symbol Anatoly Burakov
` (2 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, David Hunt, john.mcnamara, bruce.richardson,
thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_distributor/rte_distributor.c | 56 +++--------------
.../rte_distributor_v1705.h | 61 -------------------
2 files changed, 9 insertions(+), 108 deletions(-)
delete mode 100644 lib/librte_distributor/rte_distributor_v1705.h
diff --git a/lib/librte_distributor/rte_distributor.c b/lib/librte_distributor/rte_distributor.c
index 21eb1fb0a1..ca3f21b833 100644
--- a/lib/librte_distributor/rte_distributor.c
+++ b/lib/librte_distributor/rte_distributor.c
@@ -19,7 +19,6 @@
#include "rte_distributor_private.h"
#include "rte_distributor.h"
#include "rte_distributor_v20.h"
-#include "rte_distributor_v1705.h"
TAILQ_HEAD(rte_dist_burst_list, rte_distributor);
@@ -33,7 +32,7 @@ EAL_REGISTER_TAILQ(rte_dist_burst_tailq)
/**** Burst Packet APIs called by workers ****/
void
-rte_distributor_request_pkt_v1705(struct rte_distributor *d,
+rte_distributor_request_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **oldpkt,
unsigned int count)
{
@@ -78,14 +77,9 @@ rte_distributor_request_pkt_v1705(struct rte_distributor *d,
*/
*retptr64 |= RTE_DISTRIB_GET_BUF;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_request_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(void rte_distributor_request_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt,
- unsigned int count),
- rte_distributor_request_pkt_v1705);
int
-rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
+rte_distributor_poll_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **pkts)
{
struct rte_distributor_buffer *buf = &d->bufs[worker_id];
@@ -119,13 +113,9 @@ rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
return count;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_poll_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_poll_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts),
- rte_distributor_poll_pkt_v1705);
int
-rte_distributor_get_pkt_v1705(struct rte_distributor *d,
+rte_distributor_get_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **pkts,
struct rte_mbuf **oldpkt, unsigned int return_count)
{
@@ -153,14 +143,9 @@ rte_distributor_get_pkt_v1705(struct rte_distributor *d,
}
return count;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_get_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_get_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts,
- struct rte_mbuf **oldpkt, unsigned int return_count),
- rte_distributor_get_pkt_v1705);
int
-rte_distributor_return_pkt_v1705(struct rte_distributor *d,
+rte_distributor_return_pkt(struct rte_distributor *d,
unsigned int worker_id, struct rte_mbuf **oldpkt, int num)
{
struct rte_distributor_buffer *buf = &d->bufs[worker_id];
@@ -187,10 +172,6 @@ rte_distributor_return_pkt_v1705(struct rte_distributor *d,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_return_pkt, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_return_pkt(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt, int num),
- rte_distributor_return_pkt_v1705);
/**** APIs called on distributor core ***/
@@ -336,7 +317,7 @@ release(struct rte_distributor *d, unsigned int wkr)
/* process a set of packets to distribute them to workers */
int
-rte_distributor_process_v1705(struct rte_distributor *d,
+rte_distributor_process(struct rte_distributor *d,
struct rte_mbuf **mbufs, unsigned int num_mbufs)
{
unsigned int next_idx = 0;
@@ -470,14 +451,10 @@ rte_distributor_process_v1705(struct rte_distributor *d,
return num_mbufs;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_process, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_process(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs),
- rte_distributor_process_v1705);
/* return to the caller, packets returned from workers */
int
-rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
+rte_distributor_returned_pkts(struct rte_distributor *d,
struct rte_mbuf **mbufs, unsigned int max_mbufs)
{
struct rte_distributor_returned_pkts *returns = &d->returns;
@@ -502,10 +479,6 @@ rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
return retval;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_returned_pkts, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_returned_pkts(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs),
- rte_distributor_returned_pkts_v1705);
/*
* Return the number of packets in-flight in a distributor, i.e. packets
@@ -527,7 +500,7 @@ total_outstanding(const struct rte_distributor *d)
* queued up.
*/
int
-rte_distributor_flush_v1705(struct rte_distributor *d)
+rte_distributor_flush(struct rte_distributor *d)
{
unsigned int flushed;
unsigned int wkr;
@@ -556,13 +529,10 @@ rte_distributor_flush_v1705(struct rte_distributor *d)
return flushed;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_flush, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_distributor_flush(struct rte_distributor *d),
- rte_distributor_flush_v1705);
/* clears the internal returns array in the distributor */
void
-rte_distributor_clear_returns_v1705(struct rte_distributor *d)
+rte_distributor_clear_returns(struct rte_distributor *d)
{
unsigned int wkr;
@@ -576,13 +546,10 @@ rte_distributor_clear_returns_v1705(struct rte_distributor *d)
for (wkr = 0; wkr < d->num_workers; wkr++)
d->bufs[wkr].retptr64[0] = 0;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_clear_returns, _v1705, 17.05);
-MAP_STATIC_SYMBOL(void rte_distributor_clear_returns(struct rte_distributor *d),
- rte_distributor_clear_returns_v1705);
/* creates a distributor instance */
struct rte_distributor *
-rte_distributor_create_v1705(const char *name,
+rte_distributor_create(const char *name,
unsigned int socket_id,
unsigned int num_workers,
unsigned int alg_type)
@@ -656,8 +623,3 @@ rte_distributor_create_v1705(const char *name,
return d;
}
-BIND_DEFAULT_SYMBOL(rte_distributor_create, _v1705, 17.05);
-MAP_STATIC_SYMBOL(struct rte_distributor *rte_distributor_create(
- const char *name, unsigned int socket_id,
- unsigned int num_workers, unsigned int alg_type),
- rte_distributor_create_v1705);
diff --git a/lib/librte_distributor/rte_distributor_v1705.h b/lib/librte_distributor/rte_distributor_v1705.h
deleted file mode 100644
index df4d9e8150..0000000000
--- a/lib/librte_distributor/rte_distributor_v1705.h
+++ /dev/null
@@ -1,61 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Intel Corporation
- */
-
-#ifndef _RTE_DISTRIB_V1705_H_
-#define _RTE_DISTRIB_V1705_H_
-
-/**
- * @file
- * RTE distributor
- *
- * The distributor is a component which is designed to pass packets
- * one-at-a-time to workers, with dynamic load balancing.
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-struct rte_distributor *
-rte_distributor_create_v1705(const char *name, unsigned int socket_id,
- unsigned int num_workers,
- unsigned int alg_type);
-
-int
-rte_distributor_process_v1705(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int num_mbufs);
-
-int
-rte_distributor_returned_pkts_v1705(struct rte_distributor *d,
- struct rte_mbuf **mbufs, unsigned int max_mbufs);
-
-int
-rte_distributor_flush_v1705(struct rte_distributor *d);
-
-void
-rte_distributor_clear_returns_v1705(struct rte_distributor *d);
-
-int
-rte_distributor_get_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **pkts,
- struct rte_mbuf **oldpkt, unsigned int retcount);
-
-int
-rte_distributor_return_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt, int num);
-
-void
-rte_distributor_request_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **oldpkt,
- unsigned int count);
-
-int
-rte_distributor_poll_pkt_v1705(struct rte_distributor *d,
- unsigned int worker_id, struct rte_mbuf **mbufs);
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif
--
2.17.1
^ permalink raw reply [relevance 4%]
* [dpdk-dev] [PATCH v2 05/10] lpm: remove deprecated code
` (4 preceding siblings ...)
2019-10-16 12:43 4% ` [dpdk-dev] [PATCH v2 04/10] timer: remove deprecated code Anatoly Burakov
@ 2019-10-16 12:43 2% ` Anatoly Burakov
2019-10-16 12:43 4% ` [dpdk-dev] [PATCH v2 06/10] distributor: " Anatoly Burakov
` (3 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Bruce Richardson, Vladimir Medvedkin,
john.mcnamara, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_lpm/rte_lpm.c | 996 ++------------------------------------
lib/librte_lpm/rte_lpm.h | 88 ----
lib/librte_lpm/rte_lpm6.c | 132 +----
lib/librte_lpm/rte_lpm6.h | 25 -
4 files changed, 48 insertions(+), 1193 deletions(-)
diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 3a929a1b16..2687564194 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -89,34 +89,8 @@ depth_to_range(uint8_t depth)
/*
* Find an existing lpm table and return a pointer to it.
*/
-struct rte_lpm_v20 *
-rte_lpm_find_existing_v20(const char *name)
-{
- struct rte_lpm_v20 *l = NULL;
- struct rte_tailq_entry *te;
- struct rte_lpm_list *lpm_list;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- rte_mcfg_tailq_read_lock();
- TAILQ_FOREACH(te, lpm_list, next) {
- l = te->data;
- if (strncmp(name, l->name, RTE_LPM_NAMESIZE) == 0)
- break;
- }
- rte_mcfg_tailq_read_unlock();
-
- if (te == NULL) {
- rte_errno = ENOENT;
- return NULL;
- }
-
- return l;
-}
-VERSION_SYMBOL(rte_lpm_find_existing, _v20, 2.0);
-
struct rte_lpm *
-rte_lpm_find_existing_v1604(const char *name)
+rte_lpm_find_existing(const char *name)
{
struct rte_lpm *l = NULL;
struct rte_tailq_entry *te;
@@ -139,88 +113,12 @@ rte_lpm_find_existing_v1604(const char *name)
return l;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_find_existing, _v1604, 16.04);
-MAP_STATIC_SYMBOL(struct rte_lpm *rte_lpm_find_existing(const char *name),
- rte_lpm_find_existing_v1604);
/*
* Allocates memory for LPM object
*/
-struct rte_lpm_v20 *
-rte_lpm_create_v20(const char *name, int socket_id, int max_rules,
- __rte_unused int flags)
-{
- char mem_name[RTE_LPM_NAMESIZE];
- struct rte_lpm_v20 *lpm = NULL;
- struct rte_tailq_entry *te;
- uint32_t mem_size;
- struct rte_lpm_list *lpm_list;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- RTE_BUILD_BUG_ON(sizeof(struct rte_lpm_tbl_entry_v20) != 2);
-
- /* Check user arguments. */
- if ((name == NULL) || (socket_id < -1) || (max_rules == 0)) {
- rte_errno = EINVAL;
- return NULL;
- }
-
- snprintf(mem_name, sizeof(mem_name), "LPM_%s", name);
-
- /* Determine the amount of memory to allocate. */
- mem_size = sizeof(*lpm) + (sizeof(lpm->rules_tbl[0]) * max_rules);
-
- rte_mcfg_tailq_write_lock();
-
- /* guarantee there's no existing */
- TAILQ_FOREACH(te, lpm_list, next) {
- lpm = te->data;
- if (strncmp(name, lpm->name, RTE_LPM_NAMESIZE) == 0)
- break;
- }
-
- if (te != NULL) {
- lpm = NULL;
- rte_errno = EEXIST;
- goto exit;
- }
-
- /* allocate tailq entry */
- te = rte_zmalloc("LPM_TAILQ_ENTRY", sizeof(*te), 0);
- if (te == NULL) {
- RTE_LOG(ERR, LPM, "Failed to allocate tailq entry\n");
- rte_errno = ENOMEM;
- goto exit;
- }
-
- /* Allocate memory to store the LPM data structures. */
- lpm = rte_zmalloc_socket(mem_name, mem_size,
- RTE_CACHE_LINE_SIZE, socket_id);
- if (lpm == NULL) {
- RTE_LOG(ERR, LPM, "LPM memory allocation failed\n");
- rte_free(te);
- rte_errno = ENOMEM;
- goto exit;
- }
-
- /* Save user arguments. */
- lpm->max_rules = max_rules;
- strlcpy(lpm->name, name, sizeof(lpm->name));
-
- te->data = lpm;
-
- TAILQ_INSERT_TAIL(lpm_list, te, next);
-
-exit:
- rte_mcfg_tailq_write_unlock();
-
- return lpm;
-}
-VERSION_SYMBOL(rte_lpm_create, _v20, 2.0);
-
struct rte_lpm *
-rte_lpm_create_v1604(const char *name, int socket_id,
+rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config)
{
char mem_name[RTE_LPM_NAMESIZE];
@@ -320,45 +218,12 @@ rte_lpm_create_v1604(const char *name, int socket_id,
return lpm;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_create, _v1604, 16.04);
-MAP_STATIC_SYMBOL(
- struct rte_lpm *rte_lpm_create(const char *name, int socket_id,
- const struct rte_lpm_config *config), rte_lpm_create_v1604);
/*
* Deallocates memory for given LPM table.
*/
void
-rte_lpm_free_v20(struct rte_lpm_v20 *lpm)
-{
- struct rte_lpm_list *lpm_list;
- struct rte_tailq_entry *te;
-
- /* Check user arguments. */
- if (lpm == NULL)
- return;
-
- lpm_list = RTE_TAILQ_CAST(rte_lpm_tailq.head, rte_lpm_list);
-
- rte_mcfg_tailq_write_lock();
-
- /* find our tailq entry */
- TAILQ_FOREACH(te, lpm_list, next) {
- if (te->data == (void *) lpm)
- break;
- }
- if (te != NULL)
- TAILQ_REMOVE(lpm_list, te, next);
-
- rte_mcfg_tailq_write_unlock();
-
- rte_free(lpm);
- rte_free(te);
-}
-VERSION_SYMBOL(rte_lpm_free, _v20, 2.0);
-
-void
-rte_lpm_free_v1604(struct rte_lpm *lpm)
+rte_lpm_free(struct rte_lpm *lpm)
{
struct rte_lpm_list *lpm_list;
struct rte_tailq_entry *te;
@@ -386,9 +251,6 @@ rte_lpm_free_v1604(struct rte_lpm *lpm)
rte_free(lpm);
rte_free(te);
}
-BIND_DEFAULT_SYMBOL(rte_lpm_free, _v1604, 16.04);
-MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
- rte_lpm_free_v1604);
/*
* Adds a rule to the rule table.
@@ -401,79 +263,7 @@ MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t rule_gindex, rule_index, last_rule;
- int i;
-
- VERIFY_DEPTH(depth);
-
- /* Scan through rule group to see if rule already exists. */
- if (lpm->rule_info[depth - 1].used_rules > 0) {
-
- /* rule_gindex stands for rule group index. */
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- /* Initialise rule_index to point to start of rule group. */
- rule_index = rule_gindex;
- /* Last rule = Last used rule in this rule group. */
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- for (; rule_index < last_rule; rule_index++) {
-
- /* If rule already exists update its next_hop and return. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked) {
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- return rule_index;
- }
- }
-
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
- } else {
- /* Calculate the position in which the rule will be stored. */
- rule_index = 0;
-
- for (i = depth - 1; i > 0; i--) {
- if (lpm->rule_info[i - 1].used_rules > 0) {
- rule_index = lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules;
- break;
- }
- }
- if (rule_index == lpm->max_rules)
- return -ENOSPC;
-
- lpm->rule_info[depth - 1].first_rule = rule_index;
- }
-
- /* Make room for the new rule in the array. */
- for (i = RTE_LPM_MAX_DEPTH; i > depth; i--) {
- if (lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules == lpm->max_rules)
- return -ENOSPC;
-
- if (lpm->rule_info[i - 1].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i - 1].first_rule
- + lpm->rule_info[i - 1].used_rules]
- = lpm->rules_tbl[lpm->rule_info[i - 1].first_rule];
- lpm->rule_info[i - 1].first_rule++;
- }
- }
-
- /* Add the new rule. */
- lpm->rules_tbl[rule_index].ip = ip_masked;
- lpm->rules_tbl[rule_index].next_hop = next_hop;
-
- /* Increment the used rules counter for this rule group. */
- lpm->rule_info[depth - 1].used_rules++;
-
- return rule_index;
-}
-
-static int32_t
-rule_add_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+rule_add(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
uint32_t rule_gindex, rule_index, last_rule;
@@ -549,30 +339,7 @@ rule_add_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static void
-rule_delete_v20(struct rte_lpm_v20 *lpm, int32_t rule_index, uint8_t depth)
-{
- int i;
-
- VERIFY_DEPTH(depth);
-
- lpm->rules_tbl[rule_index] =
- lpm->rules_tbl[lpm->rule_info[depth - 1].first_rule
- + lpm->rule_info[depth - 1].used_rules - 1];
-
- for (i = depth; i < RTE_LPM_MAX_DEPTH; i++) {
- if (lpm->rule_info[i].used_rules > 0) {
- lpm->rules_tbl[lpm->rule_info[i].first_rule - 1] =
- lpm->rules_tbl[lpm->rule_info[i].first_rule
- + lpm->rule_info[i].used_rules - 1];
- lpm->rule_info[i].first_rule--;
- }
- }
-
- lpm->rule_info[depth - 1].used_rules--;
-}
-
-static void
-rule_delete_v1604(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
+rule_delete(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
{
int i;
@@ -599,28 +366,7 @@ rule_delete_v1604(struct rte_lpm *lpm, int32_t rule_index, uint8_t depth)
* NOTE: Valid range for depth parameter is 1 .. 32 inclusive.
*/
static int32_t
-rule_find_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth)
-{
- uint32_t rule_gindex, last_rule, rule_index;
-
- VERIFY_DEPTH(depth);
-
- rule_gindex = lpm->rule_info[depth - 1].first_rule;
- last_rule = rule_gindex + lpm->rule_info[depth - 1].used_rules;
-
- /* Scan used rules at given depth to find rule. */
- for (rule_index = rule_gindex; rule_index < last_rule; rule_index++) {
- /* If rule is found return the rule index. */
- if (lpm->rules_tbl[rule_index].ip == ip_masked)
- return rule_index;
- }
-
- /* If rule is not found return -EINVAL. */
- return -EINVAL;
-}
-
-static int32_t
-rule_find_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
+rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
{
uint32_t rule_gindex, last_rule, rule_index;
@@ -644,42 +390,7 @@ rule_find_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)
* Find, clean and allocate a tbl8.
*/
static int32_t
-tbl8_alloc_v20(struct rte_lpm_tbl_entry_v20 *tbl8)
-{
- uint32_t group_idx; /* tbl8 group index. */
- struct rte_lpm_tbl_entry_v20 *tbl8_entry;
-
- /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
- for (group_idx = 0; group_idx < RTE_LPM_TBL8_NUM_GROUPS;
- group_idx++) {
- tbl8_entry = &tbl8[group_idx * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
- /* If a free tbl8 group is found clean it and set as VALID. */
- if (!tbl8_entry->valid_group) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = VALID,
- };
- new_tbl8_entry.next_hop = 0;
-
- memset(&tbl8_entry[0], 0,
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES *
- sizeof(tbl8_entry[0]));
-
- __atomic_store(tbl8_entry, &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- /* Return group index for allocated tbl8 group. */
- return group_idx;
- }
- }
-
- /* If there are no tbl8 groups free then return error. */
- return -ENOSPC;
-}
-
-static int32_t
-tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
+tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
{
uint32_t group_idx; /* tbl8 group index. */
struct rte_lpm_tbl_entry *tbl8_entry;
@@ -713,22 +424,7 @@ tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
}
static void
-tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start)
-{
- /* Set tbl8 group invalid*/
- struct rte_lpm_tbl_entry_v20 zero_tbl8_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = INVALID,
- };
- zero_tbl8_entry.next_hop = 0;
-
- __atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
- __ATOMIC_RELAXED);
-}
-
-static void
-tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
+tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
{
/* Set tbl8 group invalid*/
struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
@@ -738,78 +434,7 @@ tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)
}
static __rte_noinline int32_t
-add_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t tbl24_index, tbl24_range, tbl8_index, tbl8_group_end, i, j;
-
- /* Calculate the index into Table24. */
- tbl24_index = ip >> 8;
- tbl24_range = depth_to_range(depth);
-
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
- /*
- * For invalid OR valid and non-extended tbl 24 entries set
- * entry.
- */
- if (!lpm->tbl24[i].valid || (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth)) {
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .valid = VALID,
- .valid_group = 0,
- .depth = depth,
- };
- new_tbl24_entry.next_hop = next_hop;
-
- /* Setting tbl24 entry in one go to avoid race
- * conditions
- */
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- continue;
- }
-
- if (lpm->tbl24[i].valid_group == 1) {
- /* If tbl24 entry is valid and extended calculate the
- * index into tbl8.
- */
- tbl8_index = lpm->tbl24[i].group_idx *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < tbl8_group_end; j++) {
- if (!lpm->tbl8[j].valid ||
- lpm->tbl8[j].depth <= depth) {
- struct rte_lpm_tbl_entry_v20
- new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = depth,
- };
- new_tbl8_entry.next_hop = next_hop;
-
- /*
- * Setting tbl8 entry in one go to avoid
- * race conditions
- */
- __atomic_store(&lpm->tbl8[j],
- &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- continue;
- }
- }
- }
- }
-
- return 0;
-}
-
-static __rte_noinline int32_t
-add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+add_depth_small(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -881,150 +506,7 @@ add_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
static __rte_noinline int32_t
-add_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked, uint8_t depth,
- uint8_t next_hop)
-{
- uint32_t tbl24_index;
- int32_t tbl8_group_index, tbl8_group_start, tbl8_group_end, tbl8_index,
- tbl8_range, i;
-
- tbl24_index = (ip_masked >> 8);
- tbl8_range = depth_to_range(depth);
-
- if (!lpm->tbl24[tbl24_index].valid) {
- /* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v20(lpm->tbl8);
-
- /* Check tbl8 allocation was successful. */
- if (tbl8_group_index < 0) {
- return tbl8_group_index;
- }
-
- /* Find index into tbl8 and range. */
- tbl8_index = (tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES) +
- (ip_masked & 0xFF);
-
- /* Set tbl8 entry. */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- /*
- * Update tbl24 entry to point to new tbl8 entry. Note: The
- * ext_flag and tbl8_index need to be updated simultaneously,
- * so assign whole structure in one go
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .group_idx = (uint8_t)tbl8_group_index,
- .valid = VALID,
- .valid_group = 1,
- .depth = 0,
- };
-
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- } /* If valid entry but not extended calculate the index into Table8. */
- else if (lpm->tbl24[tbl24_index].valid_group == 0) {
- /* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v20(lpm->tbl8);
-
- if (tbl8_group_index < 0) {
- return tbl8_group_index;
- }
-
- tbl8_group_start = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_group_end = tbl8_group_start +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- /* Populate new tbl8 with tbl24 value. */
- for (i = tbl8_group_start; i < tbl8_group_end; i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = lpm->tbl24[tbl24_index].depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop =
- lpm->tbl24[tbl24_index].next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
-
- /* Insert new rule into the tbl8 entry. */
- for (i = tbl8_index; i < tbl8_index + tbl8_range; i++) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
-
- /*
- * Update tbl24 entry to point to new tbl8 entry. Note: The
- * ext_flag and tbl8_index need to be updated simultaneously,
- * so assign whole structure in one go.
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .group_idx = (uint8_t)tbl8_group_index,
- .valid = VALID,
- .valid_group = 1,
- .depth = 0,
- };
-
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELEASE);
-
- } else { /*
- * If it is valid, extended entry calculate the index into tbl8.
- */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
- tbl8_group_start = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
-
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
-
- if (!lpm->tbl8[i].valid ||
- lpm->tbl8[i].depth <= depth) {
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = depth,
- .valid_group = lpm->tbl8[i].valid_group,
- };
- new_tbl8_entry.next_hop = next_hop;
- /*
- * Setting tbl8 entry in one go to avoid race
- * condition
- */
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
-
- continue;
- }
- }
- }
-
- return 0;
-}
-
-static __rte_noinline int32_t
-add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
+add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
uint32_t next_hop)
{
#define group_idx next_hop
@@ -1037,7 +519,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
if (!lpm->tbl24[tbl24_index].valid) {
/* Search for a free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+ tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
/* Check tbl8 allocation was successful. */
if (tbl8_group_index < 0) {
@@ -1083,7 +565,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
} /* If valid entry but not extended calculate the index into Table8. */
else if (lpm->tbl24[tbl24_index].valid_group == 0) {
/* Search for free tbl8 group. */
- tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm->number_tbl8s);
+ tbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);
if (tbl8_group_index < 0) {
return tbl8_group_index;
@@ -1177,48 +659,7 @@ add_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,
* Add a route
*/
int
-rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop)
-{
- int32_t rule_index, status = 0;
- uint32_t ip_masked;
-
- /* Check user arguments. */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
- return -EINVAL;
-
- ip_masked = ip & depth_to_mask(depth);
-
- /* Add the rule to the rule table. */
- rule_index = rule_add_v20(lpm, ip_masked, depth, next_hop);
-
- /* If the is no space available for new rule return error. */
- if (rule_index < 0) {
- return rule_index;
- }
-
- if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small_v20(lpm, ip_masked, depth, next_hop);
- } else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big_v20(lpm, ip_masked, depth, next_hop);
-
- /*
- * If add fails due to exhaustion of tbl8 extensions delete
- * rule that was added to rule table.
- */
- if (status < 0) {
- rule_delete_v20(lpm, rule_index, depth);
-
- return status;
- }
- }
-
- return 0;
-}
-VERSION_SYMBOL(rte_lpm_add, _v20, 2.0);
-
-int
-rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t next_hop)
{
int32_t rule_index, status = 0;
@@ -1231,7 +672,7 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
ip_masked = ip & depth_to_mask(depth);
/* Add the rule to the rule table. */
- rule_index = rule_add_v1604(lpm, ip_masked, depth, next_hop);
+ rule_index = rule_add(lpm, ip_masked, depth, next_hop);
/* If the is no space available for new rule return error. */
if (rule_index < 0) {
@@ -1239,16 +680,16 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
}
if (depth <= MAX_DEPTH_TBL24) {
- status = add_depth_small_v1604(lpm, ip_masked, depth, next_hop);
+ status = add_depth_small(lpm, ip_masked, depth, next_hop);
} else { /* If depth > RTE_LPM_MAX_DEPTH_TBL24 */
- status = add_depth_big_v1604(lpm, ip_masked, depth, next_hop);
+ status = add_depth_big(lpm, ip_masked, depth, next_hop);
/*
* If add fails due to exhaustion of tbl8 extensions delete
* rule that was added to rule table.
*/
if (status < 0) {
- rule_delete_v1604(lpm, rule_index, depth);
+ rule_delete(lpm, rule_index, depth);
return status;
}
@@ -1256,42 +697,12 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_add, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_add(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth, uint32_t next_hop), rte_lpm_add_v1604);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop)
-{
- uint32_t ip_masked;
- int32_t rule_index;
-
- /* Check user arguments. */
- if ((lpm == NULL) ||
- (next_hop == NULL) ||
- (depth < 1) || (depth > RTE_LPM_MAX_DEPTH))
- return -EINVAL;
-
- /* Look for the rule using rule_find. */
- ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find_v20(lpm, ip_masked, depth);
-
- if (rule_index >= 0) {
- *next_hop = lpm->rules_tbl[rule_index].next_hop;
- return 1;
- }
-
- /* If rule is not found return 0. */
- return 0;
-}
-VERSION_SYMBOL(rte_lpm_is_rule_present, _v20, 2.0);
-
-int
-rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
+rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop)
{
uint32_t ip_masked;
@@ -1305,7 +716,7 @@ uint32_t *next_hop)
/* Look for the rule using rule_find. */
ip_masked = ip & depth_to_mask(depth);
- rule_index = rule_find_v1604(lpm, ip_masked, depth);
+ rule_index = rule_find(lpm, ip_masked, depth);
if (rule_index >= 0) {
*next_hop = lpm->rules_tbl[rule_index].next_hop;
@@ -1315,12 +726,9 @@ uint32_t *next_hop)
/* If rule is not found return 0. */
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm_is_rule_present, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth, uint32_t *next_hop), rte_lpm_is_rule_present_v1604);
static int32_t
-find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
+find_previous_rule(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint8_t *sub_rule_depth)
{
int32_t rule_index;
@@ -1330,7 +738,7 @@ find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
ip_masked = ip & depth_to_mask(prev_depth);
- rule_index = rule_find_v20(lpm, ip_masked, prev_depth);
+ rule_index = rule_find(lpm, ip_masked, prev_depth);
if (rule_index >= 0) {
*sub_rule_depth = prev_depth;
@@ -1342,133 +750,7 @@ find_previous_rule_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
}
static int32_t
-find_previous_rule_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint8_t *sub_rule_depth)
-{
- int32_t rule_index;
- uint32_t ip_masked;
- uint8_t prev_depth;
-
- for (prev_depth = (uint8_t)(depth - 1); prev_depth > 0; prev_depth--) {
- ip_masked = ip & depth_to_mask(prev_depth);
-
- rule_index = rule_find_v1604(lpm, ip_masked, prev_depth);
-
- if (rule_index >= 0) {
- *sub_rule_depth = prev_depth;
- return rule_index;
- }
- }
-
- return -1;
-}
-
-static int32_t
-delete_depth_small_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
-{
- uint32_t tbl24_range, tbl24_index, tbl8_group_index, tbl8_index, i, j;
-
- /* Calculate the range and index into Table24. */
- tbl24_range = depth_to_range(depth);
- tbl24_index = (ip_masked >> 8);
-
- /*
- * Firstly check the sub_rule_index. A -1 indicates no replacement rule
- * and a positive number indicates a sub_rule_index.
- */
- if (sub_rule_index < 0) {
- /*
- * If no replacement rule exists then invalidate entries
- * associated with this rule.
- */
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- struct rte_lpm_tbl_entry_v20
- zero_tbl24_entry = {
- .valid = INVALID,
- .depth = 0,
- .valid_group = 0,
- };
- zero_tbl24_entry.next_hop = 0;
- __atomic_store(&lpm->tbl24[i],
- &zero_tbl24_entry, __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
- /*
- * If TBL24 entry is extended, then there has
- * to be a rule with depth >= 25 in the
- * associated TBL8 group.
- */
-
- tbl8_group_index = lpm->tbl24[i].group_idx;
- tbl8_index = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < (tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
- lpm->tbl8[j].valid = INVALID;
- }
- }
- }
- } else {
- /*
- * If a replacement rule exists then modify entries
- * associated with this rule.
- */
-
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .next_hop = lpm->rules_tbl[sub_rule_index].next_hop,
- .valid = VALID,
- .valid_group = 0,
- .depth = sub_rule_depth,
- };
-
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .valid_group = VALID,
- .depth = sub_rule_depth,
- };
- new_tbl8_entry.next_hop =
- lpm->rules_tbl[sub_rule_index].next_hop;
-
- for (i = tbl24_index; i < (tbl24_index + tbl24_range); i++) {
-
- if (lpm->tbl24[i].valid_group == 0 &&
- lpm->tbl24[i].depth <= depth) {
- __atomic_store(&lpm->tbl24[i], &new_tbl24_entry,
- __ATOMIC_RELEASE);
- } else if (lpm->tbl24[i].valid_group == 1) {
- /*
- * If TBL24 entry is extended, then there has
- * to be a rule with depth >= 25 in the
- * associated TBL8 group.
- */
-
- tbl8_group_index = lpm->tbl24[i].group_idx;
- tbl8_index = tbl8_group_index *
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- for (j = tbl8_index; j < (tbl8_index +
- RTE_LPM_TBL8_GROUP_NUM_ENTRIES); j++) {
-
- if (lpm->tbl8[j].depth <= depth)
- __atomic_store(&lpm->tbl8[j],
- &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
- }
- }
- }
-
- return 0;
-}
-
-static int32_t
-delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_small(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1575,7 +857,7 @@ delete_depth_small_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* thus can be recycled
*/
static int32_t
-tbl8_recycle_check_v20(struct rte_lpm_tbl_entry_v20 *tbl8,
+tbl8_recycle_check(struct rte_lpm_tbl_entry *tbl8,
uint32_t tbl8_group_start)
{
uint32_t tbl8_group_end, i;
@@ -1622,140 +904,7 @@ tbl8_recycle_check_v20(struct rte_lpm_tbl_entry_v20 *tbl8,
}
static int32_t
-tbl8_recycle_check_v1604(struct rte_lpm_tbl_entry *tbl8,
- uint32_t tbl8_group_start)
-{
- uint32_t tbl8_group_end, i;
- tbl8_group_end = tbl8_group_start + RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
-
- /*
- * Check the first entry of the given tbl8. If it is invalid we know
- * this tbl8 does not contain any rule with a depth < RTE_LPM_MAX_DEPTH
- * (As they would affect all entries in a tbl8) and thus this table
- * can not be recycled.
- */
- if (tbl8[tbl8_group_start].valid) {
- /*
- * If first entry is valid check if the depth is less than 24
- * and if so check the rest of the entries to verify that they
- * are all of this depth.
- */
- if (tbl8[tbl8_group_start].depth <= MAX_DEPTH_TBL24) {
- for (i = (tbl8_group_start + 1); i < tbl8_group_end;
- i++) {
-
- if (tbl8[i].depth !=
- tbl8[tbl8_group_start].depth) {
-
- return -EEXIST;
- }
- }
- /* If all entries are the same return the tb8 index */
- return tbl8_group_start;
- }
-
- return -EEXIST;
- }
- /*
- * If the first entry is invalid check if the rest of the entries in
- * the tbl8 are invalid.
- */
- for (i = (tbl8_group_start + 1); i < tbl8_group_end; i++) {
- if (tbl8[i].valid)
- return -EEXIST;
- }
- /* If no valid entries are found then return -EINVAL. */
- return -EINVAL;
-}
-
-static int32_t
-delete_depth_big_v20(struct rte_lpm_v20 *lpm, uint32_t ip_masked,
- uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
-{
- uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
- tbl8_range, i;
- int32_t tbl8_recycle_index;
-
- /*
- * Calculate the index into tbl24 and range. Note: All depths larger
- * than MAX_DEPTH_TBL24 are associated with only one tbl24 entry.
- */
- tbl24_index = ip_masked >> 8;
-
- /* Calculate the index into tbl8 and range. */
- tbl8_group_index = lpm->tbl24[tbl24_index].group_idx;
- tbl8_group_start = tbl8_group_index * RTE_LPM_TBL8_GROUP_NUM_ENTRIES;
- tbl8_index = tbl8_group_start + (ip_masked & 0xFF);
- tbl8_range = depth_to_range(depth);
-
- if (sub_rule_index < 0) {
- /*
- * Loop through the range of entries on tbl8 for which the
- * rule_to_delete must be removed or modified.
- */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- lpm->tbl8[i].valid = INVALID;
- }
- } else {
- /* Set new tbl8 entry. */
- struct rte_lpm_tbl_entry_v20 new_tbl8_entry = {
- .valid = VALID,
- .depth = sub_rule_depth,
- .valid_group = lpm->tbl8[tbl8_group_start].valid_group,
- };
-
- new_tbl8_entry.next_hop =
- lpm->rules_tbl[sub_rule_index].next_hop;
- /*
- * Loop through the range of entries on tbl8 for which the
- * rule_to_delete must be modified.
- */
- for (i = tbl8_index; i < (tbl8_index + tbl8_range); i++) {
- if (lpm->tbl8[i].depth <= depth)
- __atomic_store(&lpm->tbl8[i], &new_tbl8_entry,
- __ATOMIC_RELAXED);
- }
- }
-
- /*
- * Check if there are any valid entries in this tbl8 group. If all
- * tbl8 entries are invalid we can free the tbl8 and invalidate the
- * associated tbl24 entry.
- */
-
- tbl8_recycle_index = tbl8_recycle_check_v20(lpm->tbl8, tbl8_group_start);
-
- if (tbl8_recycle_index == -EINVAL) {
- /* Set tbl24 before freeing tbl8 to avoid race condition.
- * Prevent the free of the tbl8 group from hoisting.
- */
- lpm->tbl24[tbl24_index].valid = 0;
- __atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v20(lpm->tbl8, tbl8_group_start);
- } else if (tbl8_recycle_index > -1) {
- /* Update tbl24 entry. */
- struct rte_lpm_tbl_entry_v20 new_tbl24_entry = {
- .next_hop = lpm->tbl8[tbl8_recycle_index].next_hop,
- .valid = VALID,
- .valid_group = 0,
- .depth = lpm->tbl8[tbl8_recycle_index].depth,
- };
-
- /* Set tbl24 before freeing tbl8 to avoid race condition.
- * Prevent the free of the tbl8 group from hoisting.
- */
- __atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
- __ATOMIC_RELAXED);
- __atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v20(lpm->tbl8, tbl8_group_start);
- }
-
- return 0;
-}
-
-static int32_t
-delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
+delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
uint8_t depth, int32_t sub_rule_index, uint8_t sub_rule_depth)
{
#define group_idx next_hop
@@ -1810,7 +959,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* associated tbl24 entry.
*/
- tbl8_recycle_index = tbl8_recycle_check_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_recycle_index = tbl8_recycle_check(lpm->tbl8, tbl8_group_start);
if (tbl8_recycle_index == -EINVAL) {
/* Set tbl24 before freeing tbl8 to avoid race condition.
@@ -1818,7 +967,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
*/
lpm->tbl24[tbl24_index].valid = 0;
__atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_free(lpm->tbl8, tbl8_group_start);
} else if (tbl8_recycle_index > -1) {
/* Update tbl24 entry. */
struct rte_lpm_tbl_entry new_tbl24_entry = {
@@ -1834,7 +983,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
__ATOMIC_RELAXED);
__atomic_thread_fence(__ATOMIC_RELEASE);
- tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
+ tbl8_free(lpm->tbl8, tbl8_group_start);
}
#undef group_idx
return 0;
@@ -1844,7 +993,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
* Deletes a rule
*/
int
-rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
+rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
{
int32_t rule_to_delete_index, sub_rule_index;
uint32_t ip_masked;
@@ -1863,7 +1012,7 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
* Find the index of the input rule, that needs to be deleted, in the
* rule table.
*/
- rule_to_delete_index = rule_find_v20(lpm, ip_masked, depth);
+ rule_to_delete_index = rule_find(lpm, ip_masked, depth);
/*
* Check if rule_to_delete_index was found. If no rule was found the
@@ -1873,7 +1022,7 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
return -EINVAL;
/* Delete the rule from the rule table. */
- rule_delete_v20(lpm, rule_to_delete_index, depth);
+ rule_delete(lpm, rule_to_delete_index, depth);
/*
* Find rule to replace the rule_to_delete. If there is no rule to
@@ -1881,100 +1030,26 @@ rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth)
* entries associated with this rule.
*/
sub_rule_depth = 0;
- sub_rule_index = find_previous_rule_v20(lpm, ip, depth, &sub_rule_depth);
+ sub_rule_index = find_previous_rule(lpm, ip, depth, &sub_rule_depth);
/*
* If the input depth value is less than 25 use function
* delete_depth_small otherwise use delete_depth_big.
*/
if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small_v20(lpm, ip_masked, depth,
+ return delete_depth_small(lpm, ip_masked, depth,
sub_rule_index, sub_rule_depth);
} else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big_v20(lpm, ip_masked, depth, sub_rule_index,
+ return delete_depth_big(lpm, ip_masked, depth, sub_rule_index,
sub_rule_depth);
}
}
-VERSION_SYMBOL(rte_lpm_delete, _v20, 2.0);
-
-int
-rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
-{
- int32_t rule_to_delete_index, sub_rule_index;
- uint32_t ip_masked;
- uint8_t sub_rule_depth;
- /*
- * Check input arguments. Note: IP must be a positive integer of 32
- * bits in length therefore it need not be checked.
- */
- if ((lpm == NULL) || (depth < 1) || (depth > RTE_LPM_MAX_DEPTH)) {
- return -EINVAL;
- }
-
- ip_masked = ip & depth_to_mask(depth);
-
- /*
- * Find the index of the input rule, that needs to be deleted, in the
- * rule table.
- */
- rule_to_delete_index = rule_find_v1604(lpm, ip_masked, depth);
-
- /*
- * Check if rule_to_delete_index was found. If no rule was found the
- * function rule_find returns -EINVAL.
- */
- if (rule_to_delete_index < 0)
- return -EINVAL;
-
- /* Delete the rule from the rule table. */
- rule_delete_v1604(lpm, rule_to_delete_index, depth);
-
- /*
- * Find rule to replace the rule_to_delete. If there is no rule to
- * replace the rule_to_delete we return -1 and invalidate the table
- * entries associated with this rule.
- */
- sub_rule_depth = 0;
- sub_rule_index = find_previous_rule_v1604(lpm, ip, depth, &sub_rule_depth);
-
- /*
- * If the input depth value is less than 25 use function
- * delete_depth_small otherwise use delete_depth_big.
- */
- if (depth <= MAX_DEPTH_TBL24) {
- return delete_depth_small_v1604(lpm, ip_masked, depth,
- sub_rule_index, sub_rule_depth);
- } else { /* If depth > MAX_DEPTH_TBL24 */
- return delete_depth_big_v1604(lpm, ip_masked, depth, sub_rule_index,
- sub_rule_depth);
- }
-}
-BIND_DEFAULT_SYMBOL(rte_lpm_delete, _v1604, 16.04);
-MAP_STATIC_SYMBOL(int rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip,
- uint8_t depth), rte_lpm_delete_v1604);
/*
* Delete all rules from the LPM table.
*/
void
-rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm)
-{
- /* Zero rule information. */
- memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
-
- /* Zero tbl24. */
- memset(lpm->tbl24, 0, sizeof(lpm->tbl24));
-
- /* Zero tbl8. */
- memset(lpm->tbl8, 0, sizeof(lpm->tbl8));
-
- /* Delete all rules form the rules table. */
- memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
-}
-VERSION_SYMBOL(rte_lpm_delete_all, _v20, 2.0);
-
-void
-rte_lpm_delete_all_v1604(struct rte_lpm *lpm)
+rte_lpm_delete_all(struct rte_lpm *lpm)
{
/* Zero rule information. */
memset(lpm->rule_info, 0, sizeof(lpm->rule_info));
@@ -1989,6 +1064,3 @@ rte_lpm_delete_all_v1604(struct rte_lpm *lpm)
/* Delete all rules form the rules table. */
memset(lpm->rules_tbl, 0, sizeof(lpm->rules_tbl[0]) * lpm->max_rules);
}
-BIND_DEFAULT_SYMBOL(rte_lpm_delete_all, _v1604, 16.04);
-MAP_STATIC_SYMBOL(void rte_lpm_delete_all(struct rte_lpm *lpm),
- rte_lpm_delete_all_v1604);
diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h
index 906ec44830..ca9627a141 100644
--- a/lib/librte_lpm/rte_lpm.h
+++ b/lib/librte_lpm/rte_lpm.h
@@ -65,31 +65,6 @@ extern "C" {
#if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN
/** @internal Tbl24 entry structure. */
-__extension__
-struct rte_lpm_tbl_entry_v20 {
- /**
- * Stores Next hop (tbl8 or tbl24 when valid_group is not set) or
- * a group index pointing to a tbl8 structure (tbl24 only, when
- * valid_group is set)
- */
- RTE_STD_C11
- union {
- uint8_t next_hop;
- uint8_t group_idx;
- };
- /* Using single uint8_t to store 3 values. */
- uint8_t valid :1; /**< Validation flag. */
- /**
- * For tbl24:
- * - valid_group == 0: entry stores a next hop
- * - valid_group == 1: entry stores a group_index pointing to a tbl8
- * For tbl8:
- * - valid_group indicates whether the current tbl8 is in use or not
- */
- uint8_t valid_group :1;
- uint8_t depth :6; /**< Rule depth. */
-} __rte_aligned(sizeof(uint16_t));
-
__extension__
struct rte_lpm_tbl_entry {
/**
@@ -112,16 +87,6 @@ struct rte_lpm_tbl_entry {
};
#else
-__extension__
-struct rte_lpm_tbl_entry_v20 {
- uint8_t depth :6;
- uint8_t valid_group :1;
- uint8_t valid :1;
- union {
- uint8_t group_idx;
- uint8_t next_hop;
- };
-} __rte_aligned(sizeof(uint16_t));
__extension__
struct rte_lpm_tbl_entry {
@@ -142,11 +107,6 @@ struct rte_lpm_config {
};
/** @internal Rule structure. */
-struct rte_lpm_rule_v20 {
- uint32_t ip; /**< Rule IP address. */
- uint8_t next_hop; /**< Rule next hop. */
-};
-
struct rte_lpm_rule {
uint32_t ip; /**< Rule IP address. */
uint32_t next_hop; /**< Rule next hop. */
@@ -159,21 +119,6 @@ struct rte_lpm_rule_info {
};
/** @internal LPM structure. */
-struct rte_lpm_v20 {
- /* LPM metadata. */
- char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
- uint32_t max_rules; /**< Max. balanced rules per lpm. */
- struct rte_lpm_rule_info rule_info[RTE_LPM_MAX_DEPTH]; /**< Rule info table. */
-
- /* LPM Tables. */
- struct rte_lpm_tbl_entry_v20 tbl24[RTE_LPM_TBL24_NUM_ENTRIES]
- __rte_cache_aligned; /**< LPM tbl24 table. */
- struct rte_lpm_tbl_entry_v20 tbl8[RTE_LPM_TBL8_NUM_ENTRIES]
- __rte_cache_aligned; /**< LPM tbl8 table. */
- struct rte_lpm_rule_v20 rules_tbl[]
- __rte_cache_aligned; /**< LPM rules. */
-};
-
struct rte_lpm {
/* LPM metadata. */
char name[RTE_LPM_NAMESIZE]; /**< Name of the lpm. */
@@ -210,11 +155,6 @@ struct rte_lpm {
struct rte_lpm *
rte_lpm_create(const char *name, int socket_id,
const struct rte_lpm_config *config);
-struct rte_lpm_v20 *
-rte_lpm_create_v20(const char *name, int socket_id, int max_rules, int flags);
-struct rte_lpm *
-rte_lpm_create_v1604(const char *name, int socket_id,
- const struct rte_lpm_config *config);
/**
* Find an existing LPM object and return a pointer to it.
@@ -228,10 +168,6 @@ rte_lpm_create_v1604(const char *name, int socket_id,
*/
struct rte_lpm *
rte_lpm_find_existing(const char *name);
-struct rte_lpm_v20 *
-rte_lpm_find_existing_v20(const char *name);
-struct rte_lpm *
-rte_lpm_find_existing_v1604(const char *name);
/**
* Free an LPM object.
@@ -243,10 +179,6 @@ rte_lpm_find_existing_v1604(const char *name);
*/
void
rte_lpm_free(struct rte_lpm *lpm);
-void
-rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
-void
-rte_lpm_free_v1604(struct rte_lpm *lpm);
/**
* Add a rule to the LPM table.
@@ -264,12 +196,6 @@ rte_lpm_free_v1604(struct rte_lpm *lpm);
*/
int
rte_lpm_add(struct rte_lpm *lpm, uint32_t ip, uint8_t depth, uint32_t next_hop);
-int
-rte_lpm_add_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
- uint8_t next_hop);
-int
-rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
- uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -289,12 +215,6 @@ rte_lpm_add_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
int
rte_lpm_is_rule_present(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
uint32_t *next_hop);
-int
-rte_lpm_is_rule_present_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth,
-uint8_t *next_hop);
-int
-rte_lpm_is_rule_present_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth,
-uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -310,10 +230,6 @@ uint32_t *next_hop);
*/
int
rte_lpm_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
-int
-rte_lpm_delete_v20(struct rte_lpm_v20 *lpm, uint32_t ip, uint8_t depth);
-int
-rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
/**
* Delete all rules from the LPM table.
@@ -323,10 +239,6 @@ rte_lpm_delete_v1604(struct rte_lpm *lpm, uint32_t ip, uint8_t depth);
*/
void
rte_lpm_delete_all(struct rte_lpm *lpm);
-void
-rte_lpm_delete_all_v20(struct rte_lpm_v20 *lpm);
-void
-rte_lpm_delete_all_v1604(struct rte_lpm *lpm);
/**
* Lookup an IP into the LPM table.
diff --git a/lib/librte_lpm/rte_lpm6.c b/lib/librte_lpm/rte_lpm6.c
index 9b8aeb9721..b981e40714 100644
--- a/lib/librte_lpm/rte_lpm6.c
+++ b/lib/librte_lpm/rte_lpm6.c
@@ -808,18 +808,6 @@ add_step(struct rte_lpm6 *lpm, struct rte_lpm6_tbl_entry *tbl,
return 1;
}
-/*
- * Add a route
- */
-int
-rte_lpm6_add_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop)
-{
- return rte_lpm6_add_v1705(lpm, ip, depth, next_hop);
-}
-VERSION_SYMBOL(rte_lpm6_add, _v20, 2.0);
-
-
/*
* Simulate adding a route to LPM
*
@@ -841,7 +829,7 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
/* Inspect the first three bytes through tbl24 on the first step. */
ret = simulate_add_step(lpm, lpm->tbl24, &tbl_next, masked_ip,
- ADD_FIRST_BYTE, 1, depth, &need_tbl_nb);
+ ADD_FIRST_BYTE, 1, depth, &need_tbl_nb);
total_need_tbl_nb = need_tbl_nb;
/*
* Inspect one by one the rest of the bytes until
@@ -850,7 +838,7 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
for (i = ADD_FIRST_BYTE; i < RTE_LPM6_IPV6_ADDR_SIZE && ret == 1; i++) {
tbl = tbl_next;
ret = simulate_add_step(lpm, tbl, &tbl_next, masked_ip, 1,
- (uint8_t)(i+1), depth, &need_tbl_nb);
+ (uint8_t)(i + 1), depth, &need_tbl_nb);
total_need_tbl_nb += need_tbl_nb;
}
@@ -861,9 +849,12 @@ simulate_add(struct rte_lpm6 *lpm, const uint8_t *masked_ip, uint8_t depth)
return 0;
}
+/*
+ * Add a route
+ */
int
-rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t next_hop)
+rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+ uint32_t next_hop)
{
struct rte_lpm6_tbl_entry *tbl;
struct rte_lpm6_tbl_entry *tbl_next = NULL;
@@ -895,8 +886,8 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
/* Inspect the first three bytes through tbl24 on the first step. */
tbl = lpm->tbl24;
status = add_step(lpm, tbl, TBL24_IND, &tbl_next, &tbl_next_num,
- masked_ip, ADD_FIRST_BYTE, 1, depth, next_hop,
- is_new_rule);
+ masked_ip, ADD_FIRST_BYTE, 1, depth, next_hop,
+ is_new_rule);
assert(status >= 0);
/*
@@ -906,17 +897,13 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
for (i = ADD_FIRST_BYTE; i < RTE_LPM6_IPV6_ADDR_SIZE && status == 1; i++) {
tbl = tbl_next;
status = add_step(lpm, tbl, tbl_next_num, &tbl_next,
- &tbl_next_num, masked_ip, 1, (uint8_t)(i+1),
- depth, next_hop, is_new_rule);
+ &tbl_next_num, masked_ip, 1, (uint8_t)(i + 1),
+ depth, next_hop, is_new_rule);
assert(status >= 0);
}
return status;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_add, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip,
- uint8_t depth, uint32_t next_hop),
- rte_lpm6_add_v1705);
/*
* Takes a pointer to a table entry and inspect one level.
@@ -955,25 +942,7 @@ lookup_step(const struct rte_lpm6 *lpm, const struct rte_lpm6_tbl_entry *tbl,
* Looks up an IP
*/
int
-rte_lpm6_lookup_v20(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop)
-{
- uint32_t next_hop32 = 0;
- int32_t status;
-
- /* DEBUG: Check user input arguments. */
- if (next_hop == NULL)
- return -EINVAL;
-
- status = rte_lpm6_lookup_v1705(lpm, ip, &next_hop32);
- if (status == 0)
- *next_hop = (uint8_t)next_hop32;
-
- return status;
-}
-VERSION_SYMBOL(rte_lpm6_lookup, _v20, 2.0);
-
-int
-rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
+rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip,
uint32_t *next_hop)
{
const struct rte_lpm6_tbl_entry *tbl;
@@ -1000,56 +969,12 @@ rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
return status;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_lookup, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip,
- uint32_t *next_hop), rte_lpm6_lookup_v1705);
/*
* Looks up a group of IP addresses
*/
int
-rte_lpm6_lookup_bulk_func_v20(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t * next_hops, unsigned n)
-{
- unsigned i;
- const struct rte_lpm6_tbl_entry *tbl;
- const struct rte_lpm6_tbl_entry *tbl_next = NULL;
- uint32_t tbl24_index, next_hop;
- uint8_t first_byte;
- int status;
-
- /* DEBUG: Check user input arguments. */
- if ((lpm == NULL) || (ips == NULL) || (next_hops == NULL))
- return -EINVAL;
-
- for (i = 0; i < n; i++) {
- first_byte = LOOKUP_FIRST_BYTE;
- tbl24_index = (ips[i][0] << BYTES2_SIZE) |
- (ips[i][1] << BYTE_SIZE) | ips[i][2];
-
- /* Calculate pointer to the first entry to be inspected */
- tbl = &lpm->tbl24[tbl24_index];
-
- do {
- /* Continue inspecting following levels until success or failure */
- status = lookup_step(lpm, tbl, &tbl_next, ips[i], first_byte++,
- &next_hop);
- tbl = tbl_next;
- } while (status == 1);
-
- if (status < 0)
- next_hops[i] = -1;
- else
- next_hops[i] = (int16_t)next_hop;
- }
-
- return 0;
-}
-VERSION_SYMBOL(rte_lpm6_lookup_bulk_func, _v20, 2.0);
-
-int
-rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
+rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
int32_t *next_hops, unsigned int n)
{
@@ -1089,37 +1014,12 @@ rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
return 0;
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_lookup_bulk_func, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int32_t *next_hops, unsigned int n),
- rte_lpm6_lookup_bulk_func_v1705);
/*
* Look for a rule in the high-level rules table
*/
int
-rte_lpm6_is_rule_present_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t *next_hop)
-{
- uint32_t next_hop32 = 0;
- int32_t status;
-
- /* DEBUG: Check user input arguments. */
- if (next_hop == NULL)
- return -EINVAL;
-
- status = rte_lpm6_is_rule_present_v1705(lpm, ip, depth, &next_hop32);
- if (status > 0)
- *next_hop = (uint8_t)next_hop32;
-
- return status;
-
-}
-VERSION_SYMBOL(rte_lpm6_is_rule_present, _v20, 2.0);
-
-int
-rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
+rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t *next_hop)
{
uint8_t masked_ip[RTE_LPM6_IPV6_ADDR_SIZE];
@@ -1135,10 +1035,6 @@ rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
return rule_find(lpm, masked_ip, depth, next_hop);
}
-BIND_DEFAULT_SYMBOL(rte_lpm6_is_rule_present, _v1705, 17.05);
-MAP_STATIC_SYMBOL(int rte_lpm6_is_rule_present(struct rte_lpm6 *lpm,
- uint8_t *ip, uint8_t depth, uint32_t *next_hop),
- rte_lpm6_is_rule_present_v1705);
/*
* Delete a rule from the rule table.
diff --git a/lib/librte_lpm/rte_lpm6.h b/lib/librte_lpm/rte_lpm6.h
index 5d59ccb1fe..37dfb20249 100644
--- a/lib/librte_lpm/rte_lpm6.h
+++ b/lib/librte_lpm/rte_lpm6.h
@@ -96,12 +96,6 @@ rte_lpm6_free(struct rte_lpm6 *lpm);
int
rte_lpm6_add(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t next_hop);
-int
-rte_lpm6_add_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t next_hop);
-int
-rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t next_hop);
/**
* Check if a rule is present in the LPM table,
@@ -121,12 +115,6 @@ rte_lpm6_add_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
int
rte_lpm6_is_rule_present(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
uint32_t *next_hop);
-int
-rte_lpm6_is_rule_present_v20(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint8_t *next_hop);
-int
-rte_lpm6_is_rule_present_v1705(struct rte_lpm6 *lpm, uint8_t *ip, uint8_t depth,
- uint32_t *next_hop);
/**
* Delete a rule from the LPM table.
@@ -184,11 +172,6 @@ rte_lpm6_delete_all(struct rte_lpm6 *lpm);
*/
int
rte_lpm6_lookup(const struct rte_lpm6 *lpm, uint8_t *ip, uint32_t *next_hop);
-int
-rte_lpm6_lookup_v20(const struct rte_lpm6 *lpm, uint8_t *ip, uint8_t *next_hop);
-int
-rte_lpm6_lookup_v1705(const struct rte_lpm6 *lpm, uint8_t *ip,
- uint32_t *next_hop);
/**
* Lookup multiple IP addresses in an LPM table.
@@ -210,14 +193,6 @@ int
rte_lpm6_lookup_bulk_func(const struct rte_lpm6 *lpm,
uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
int32_t *next_hops, unsigned int n);
-int
-rte_lpm6_lookup_bulk_func_v20(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int16_t *next_hops, unsigned int n);
-int
-rte_lpm6_lookup_bulk_func_v1705(const struct rte_lpm6 *lpm,
- uint8_t ips[][RTE_LPM6_IPV6_ADDR_SIZE],
- int32_t *next_hops, unsigned int n);
#ifdef __cplusplus
}
--
2.17.1
^ permalink raw reply [relevance 2%]
* [dpdk-dev] [PATCH v2 04/10] timer: remove deprecated code
` (3 preceding siblings ...)
2019-10-16 12:43 22% ` [dpdk-dev] [PATCH v2 03/10] buildtools: add ABI update shell script Anatoly Burakov
@ 2019-10-16 12:43 4% ` Anatoly Burakov
2019-10-16 12:43 2% ` [dpdk-dev] [PATCH v2 05/10] lpm: " Anatoly Burakov
` (4 subsequent siblings)
9 siblings, 0 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Robert Sanford, Erik Gabriel Carrillo,
john.mcnamara, bruce.richardson, thomas, david.marchand
From: Marcin Baran <marcinx.baran@intel.com>
Remove code for old ABI versions ahead of ABI version bump.
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Moved this to before ABI version bump to avoid compile breakage
lib/librte_timer/rte_timer.c | 90 ++----------------------------------
lib/librte_timer/rte_timer.h | 15 ------
2 files changed, 5 insertions(+), 100 deletions(-)
diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c
index bdcf05d06b..de6959b809 100644
--- a/lib/librte_timer/rte_timer.c
+++ b/lib/librte_timer/rte_timer.c
@@ -68,9 +68,6 @@ static struct rte_timer_data *rte_timer_data_arr;
static const uint32_t default_data_id;
static uint32_t rte_timer_subsystem_initialized;
-/* For maintaining older interfaces for a period */
-static struct rte_timer_data default_timer_data;
-
/* when debug is enabled, store some statistics */
#ifdef RTE_LIBRTE_TIMER_DEBUG
#define __TIMER_STAT_ADD(priv_timer, name, n) do { \
@@ -131,22 +128,6 @@ rte_timer_data_dealloc(uint32_t id)
return 0;
}
-void
-rte_timer_subsystem_init_v20(void)
-{
- unsigned lcore_id;
- struct priv_timer *priv_timer = default_timer_data.priv_timer;
-
- /* since priv_timer is static, it's zeroed by default, so only init some
- * fields.
- */
- for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id ++) {
- rte_spinlock_init(&priv_timer[lcore_id].list_lock);
- priv_timer[lcore_id].prev_lcore = lcore_id;
- }
-}
-VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
-
/* Init the timer library. Allocate an array of timer data structs in shared
* memory, and allocate the zeroth entry for use with original timer
* APIs. Since the intersection of the sets of lcore ids in primary and
@@ -154,7 +135,7 @@ VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);
* multiple processes.
*/
int
-rte_timer_subsystem_init_v1905(void)
+rte_timer_subsystem_init(void)
{
const struct rte_memzone *mz;
struct rte_timer_data *data;
@@ -209,9 +190,6 @@ rte_timer_subsystem_init_v1905(void)
return 0;
}
-MAP_STATIC_SYMBOL(int rte_timer_subsystem_init(void),
- rte_timer_subsystem_init_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_subsystem_init, _v1905, 19.05);
void
rte_timer_subsystem_finalize(void)
@@ -552,42 +530,13 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,
/* Reset and start the timer associated with the timer handle tim */
int
-rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg)
-{
- uint64_t cur_time = rte_get_timer_cycles();
- uint64_t period;
-
- if (unlikely((tim_lcore != (unsigned)LCORE_ID_ANY) &&
- !(rte_lcore_is_enabled(tim_lcore) ||
- rte_lcore_has_role(tim_lcore, ROLE_SERVICE))))
- return -1;
-
- if (type == PERIODICAL)
- period = ticks;
- else
- period = 0;
-
- return __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,
- fct, arg, 0, &default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_reset, _v20, 2.0);
-
-int
-rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
+rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned int tim_lcore,
rte_timer_cb_t fct, void *arg)
{
return rte_timer_alt_reset(default_data_id, tim, ticks, type,
tim_lcore, fct, arg);
}
-MAP_STATIC_SYMBOL(int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type,
- unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg),
- rte_timer_reset_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_reset, _v1905, 19.05);
int
rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,
@@ -658,20 +607,10 @@ __rte_timer_stop(struct rte_timer *tim, int local_is_locked,
/* Stop the timer associated with the timer handle tim */
int
-rte_timer_stop_v20(struct rte_timer *tim)
-{
- return __rte_timer_stop(tim, 0, &default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_stop, _v20, 2.0);
-
-int
-rte_timer_stop_v1905(struct rte_timer *tim)
+rte_timer_stop(struct rte_timer *tim)
{
return rte_timer_alt_stop(default_data_id, tim);
}
-MAP_STATIC_SYMBOL(int rte_timer_stop(struct rte_timer *tim),
- rte_timer_stop_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_stop, _v1905, 19.05);
int
rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)
@@ -817,15 +756,8 @@ __rte_timer_manage(struct rte_timer_data *timer_data)
priv_timer[lcore_id].running_tim = NULL;
}
-void
-rte_timer_manage_v20(void)
-{
- __rte_timer_manage(&default_timer_data);
-}
-VERSION_SYMBOL(rte_timer_manage, _v20, 2.0);
-
int
-rte_timer_manage_v1905(void)
+rte_timer_manage(void)
{
struct rte_timer_data *timer_data;
@@ -835,8 +767,6 @@ rte_timer_manage_v1905(void)
return 0;
}
-MAP_STATIC_SYMBOL(int rte_timer_manage(void), rte_timer_manage_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_manage, _v1905, 19.05);
int
rte_timer_alt_manage(uint32_t timer_data_id,
@@ -1074,21 +1004,11 @@ __rte_timer_dump_stats(struct rte_timer_data *timer_data __rte_unused, FILE *f)
#endif
}
-void
-rte_timer_dump_stats_v20(FILE *f)
-{
- __rte_timer_dump_stats(&default_timer_data, f);
-}
-VERSION_SYMBOL(rte_timer_dump_stats, _v20, 2.0);
-
int
-rte_timer_dump_stats_v1905(FILE *f)
+rte_timer_dump_stats(FILE *f)
{
return rte_timer_alt_dump_stats(default_data_id, f);
}
-MAP_STATIC_SYMBOL(int rte_timer_dump_stats(FILE *f),
- rte_timer_dump_stats_v1905);
-BIND_DEFAULT_SYMBOL(rte_timer_dump_stats, _v1905, 19.05);
int
rte_timer_alt_dump_stats(uint32_t timer_data_id __rte_unused, FILE *f)
diff --git a/lib/librte_timer/rte_timer.h b/lib/librte_timer/rte_timer.h
index 05d287d8f2..9dc5fc3092 100644
--- a/lib/librte_timer/rte_timer.h
+++ b/lib/librte_timer/rte_timer.h
@@ -181,8 +181,6 @@ int rte_timer_data_dealloc(uint32_t id);
* subsystem
*/
int rte_timer_subsystem_init(void);
-int rte_timer_subsystem_init_v1905(void);
-void rte_timer_subsystem_init_v20(void);
/**
* @warning
@@ -250,13 +248,6 @@ void rte_timer_init(struct rte_timer *tim);
int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,
enum rte_timer_type type, unsigned tim_lcore,
rte_timer_cb_t fct, void *arg);
-int rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg);
-int rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,
- enum rte_timer_type type, unsigned int tim_lcore,
- rte_timer_cb_t fct, void *arg);
-
/**
* Loop until rte_timer_reset() succeeds.
@@ -313,8 +304,6 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,
* - (-1): The timer is in the RUNNING or CONFIG state.
*/
int rte_timer_stop(struct rte_timer *tim);
-int rte_timer_stop_v1905(struct rte_timer *tim);
-int rte_timer_stop_v20(struct rte_timer *tim);
/**
* Loop until rte_timer_stop() succeeds.
@@ -358,8 +347,6 @@ int rte_timer_pending(struct rte_timer *tim);
* - -EINVAL: timer subsystem not yet initialized
*/
int rte_timer_manage(void);
-int rte_timer_manage_v1905(void);
-void rte_timer_manage_v20(void);
/**
* Dump statistics about timers.
@@ -371,8 +358,6 @@ void rte_timer_manage_v20(void);
* - -EINVAL: timer subsystem not yet initialized
*/
int rte_timer_dump_stats(FILE *f);
-int rte_timer_dump_stats_v1905(FILE *f);
-void rte_timer_dump_stats_v20(FILE *f);
/**
* @warning
--
2.17.1
* [dpdk-dev] [PATCH v2 03/10] buildtools: add ABI update shell script
` (2 preceding siblings ...)
2019-10-16 12:43 14% ` [dpdk-dev] [PATCH v2 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
@ 2019-10-16 12:43 22% ` Anatoly Burakov
2019-10-16 13:33 4% ` Bruce Richardson
2019-10-16 12:43 4% ` [dpdk-dev] [PATCH v2 04/10] timer: remove deprecated code Anatoly Burakov
` (5 subsequent siblings)
9 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, bruce.richardson, thomas, david.marchand
In order to facilitate mass updating of version files, add a shell
script that recurses into the lib/ and drivers/ directories and calls
the ABI version update script.
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Add this patch to split the shell script from previous commit
- Fixup miscellaneous bugs
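
A rough Python equivalent of the directory walk this script performs is
sketched below; the helper name find_map_files is illustrative and not
part of the patch.

```python
# Mirrors: find lib drivers -name '*version.map'
import pathlib

def find_map_files(roots):
    """Collect all *version.map files under the given directories."""
    found = []
    for root in roots:
        found.extend(sorted(str(p)
                            for p in pathlib.Path(root).rglob("*version.map")))
    return found
```

Each file found would then be passed, together with the new ABI version,
to the per-file updater script.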
buildtools/update-abi.sh | 36 ++++++++++++++++++++++++++++++++++++
1 file changed, 36 insertions(+)
create mode 100755 buildtools/update-abi.sh
diff --git a/buildtools/update-abi.sh b/buildtools/update-abi.sh
new file mode 100755
index 0000000000..a6f916a437
--- /dev/null
+++ b/buildtools/update-abi.sh
@@ -0,0 +1,36 @@
+#!/bin/bash
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+abi_version=""
+abi_version_file="./config/ABI_VERSION"
+update_path="lib drivers"
+
+if [ -z "$1" ]
+then
+ # output to stderr
+ >&2 echo "provide ABI version"
+ exit 1
+fi
+
+abi_version=$1
+
+if [ -n "$2" ]
+then
+ abi_version_file=$2
+fi
+
+if [ -n "$3" ]
+then
+ update_path=${@:3}
+fi
+
+echo "New ABI version:" $abi_version
+echo "ABI_VERSION path:" $abi_version_file
+echo "Path to update:" $update_path
+
+echo $abi_version > $abi_version_file
+
+find $update_path -name \*version.map -exec \
+ ./buildtools/update_version_map_abi.py {} \
+ $abi_version \; -print
--
2.17.1
* [dpdk-dev] [PATCH v2 02/10] buildtools: add script for updating symbols abi version
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 01/10] config: change ABI versioning for global Anatoly Burakov
@ 2019-10-16 12:43 14% ` Anatoly Burakov
2019-10-16 13:25 4% ` Bruce Richardson
2019-10-16 12:43 22% ` [dpdk-dev] [PATCH v2 03/10] buildtools: add ABI update shell script Anatoly Burakov
` (6 subsequent siblings)
9 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev; +Cc: Pawel Modrak, john.mcnamara, bruce.richardson, thomas, david.marchand
From: Pawel Modrak <pawelx.modrak@intel.com>
Add a script that automatically merges all stable ABI sections into
one section with the new version, while leaving the experimental
section exactly as it is.
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
Notes:
v2:
- Reworked script to be pep8-compliant and more reliable
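
A rough, self-contained illustration of the merge the script added by
this patch performs; the map text below is hypothetical. Stable DPDK_*
sections are flattened into one symbol set, while the EXPERIMENTAL
section is kept verbatim.

```python
import re

old_map = """
DPDK_2.0 {
    global:
    rte_foo;
    local: *;
};

DPDK_17.05 {
    global:
    rte_bar;
} DPDK_2.0;

EXPERIMENTAL {
    global:
    rte_baz;
};
"""

stable = set()
experimental = []
in_exp = False
for line in old_map.splitlines():
    line = line.strip()
    if re.match(r"}\s*([a-zA-Z0-9_.]+)?\s*;$", line):
        in_exp = False             # end of whatever section was open
        continue
    if in_exp:
        experimental.append(line)  # copy experimental content verbatim
        continue
    m = re.match(r"([a-zA-Z0-9_.]+)\s*{$", line)
    if m:
        in_exp = m.group(1) == "EXPERIMENTAL"
        continue
    m = re.match(r"([a-zA-Z_0-9]+)\s*;$", line)
    if m:
        stable.add(m.group(1))     # a symbol from a stable section

print(sorted(stable))  # ['rte_bar', 'rte_foo'] -> one DPDK_20.0 section
```

The flattened symbol set would then be re-emitted under a single
DPDK_20.0 block, followed by the preserved EXPERIMENTAL block.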
buildtools/update_version_map_abi.py | 148 +++++++++++++++++++++++++++
1 file changed, 148 insertions(+)
create mode 100755 buildtools/update_version_map_abi.py
diff --git a/buildtools/update_version_map_abi.py b/buildtools/update_version_map_abi.py
new file mode 100755
index 0000000000..ea9044cc81
--- /dev/null
+++ b/buildtools/update_version_map_abi.py
@@ -0,0 +1,148 @@
+#!/usr/bin/env python
+# SPDX-License-Identifier: BSD-3-Clause
+# Copyright(c) 2019 Intel Corporation
+
+"""
+A Python program to update the ABI version and function names in a DPDK
+lib_*_version.map file. Called from the buildtools/update_abi.sh utility.
+"""
+
+from __future__ import print_function
+import argparse
+import sys
+import re
+
+
+def __parse_map_file(f_in):
+ func_line_regex = re.compile(r"\s*(?P<func>[a-zA-Z_0-9]+)\s*;\s*$")
+ section_begin_regex = re.compile(
+ r"\s*(?P<version>[a-zA-Z0-9_\.]+)\s*{\s*$")
+ section_end_regex = re.compile(
+ r"\s*}\s*(?P<parent>[a-zA-Z0-9_\.]+)?\s*;\s*$")
+
+ # for stable ABI, we don't care about which version introduced which
+ # function, we just flatten the list. there are dupes in certain files, so
+ # use a set instead of a list
+ stable_lines = set()
+ # copy experimental section as is
+ experimental_lines = []
+ is_experimental = False
+
+ # gather all functions
+ for line in f_in:
+ # clean up the line
+ line = line.strip('\n').strip()
+
+ # is this an end of section?
+ match = section_end_regex.match(line)
+ if match:
+ # whatever section this was, it's not active any more
+ is_experimental = False
+ continue
+
+ # if we're in the middle of experimental section, we need to copy
+ # the section verbatim, so just add the line
+ if is_experimental:
+ experimental_lines += [line]
+ continue
+
+ # skip empty lines
+ if not line:
+ continue
+
+ # is this a beginning of a new section?
+ match = section_begin_regex.match(line)
+ if match:
+ cur_section = match.group("version")
+ # is it experimental?
+ is_experimental = cur_section == "EXPERIMENTAL"
+ continue
+
+ # is this a function?
+ match = func_line_regex.match(line)
+ if match:
+ stable_lines.add(match.group("func"))
+
+ return stable_lines, experimental_lines
+
+
+def __regenerate_map_file(f_out, abi_version, stable_lines,
+ experimental_lines):
+ # print ABI version header
+ print("DPDK_{} {{".format(abi_version), file=f_out)
+
+ if stable_lines:
+ # print global section
+ print("\tglobal:", file=f_out)
+ # blank line
+ print(file=f_out)
+
+ # print all stable lines, alphabetically sorted
+ for line in sorted(stable_lines):
+ print("\t{};".format(line), file=f_out)
+
+ # another blank line
+ print(file=f_out)
+
+ # print local section
+ print("\tlocal: *;", file=f_out)
+
+ # end stable version
+ print("};", file=f_out)
+
+ # do we have experimental lines?
+ if not experimental_lines:
+ return
+
+ # another blank line
+ print(file=f_out)
+
+ # start experimental section
+ print("EXPERIMENTAL {", file=f_out)
+
+ # print all experimental lines as they were
+ for line in experimental_lines:
+ # don't print empty whitespace
+ if not line:
+ print("", file=f_out)
+ else:
+ print("\t{}".format(line), file=f_out)
+
+ # end section
+ print("};", file=f_out)
+
+
+def __main():
+ arg_parser = argparse.ArgumentParser(
+ description='Merge versions in linker version script.')
+
+ arg_parser.add_argument("map_file", type=str,
+ help='path to linker version script file '
+ '(pattern: *version.map)')
+ arg_parser.add_argument("abi_version", type=str,
+ help='target ABI version (pattern: MAJOR.MINOR)')
+
+ parsed = arg_parser.parse_args()
+
+ if not parsed.map_file.endswith('version.map'):
+ print("Invalid input file: {}".format(parsed.map_file),
+ file=sys.stderr)
+ arg_parser.print_help()
+ sys.exit(1)
+
+ if not re.match(r"\d{1,2}\.\d{1,2}", parsed.abi_version):
+ print("Invalid ABI version: {}".format(parsed.abi_version),
+ file=sys.stderr)
+ arg_parser.print_help()
+ sys.exit(1)
+
+ with open(parsed.map_file) as f_in:
+ stable_lines, experimental_lines = __parse_map_file(f_in)
+
+ with open(parsed.map_file, 'w') as f_out:
+ __regenerate_map_file(f_out, parsed.abi_version, stable_lines,
+ experimental_lines)
+
+
+if __name__ == "__main__":
+ __main()
--
2.17.1
* [dpdk-dev] [PATCH v2 01/10] config: change ABI versioning for global
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
@ 2019-10-16 12:43 8% ` Anatoly Burakov
2019-10-16 13:22 4% ` Bruce Richardson
2019-10-16 12:43 14% ` [dpdk-dev] [PATCH v2 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
` (7 subsequent siblings)
9 siblings, 1 reply; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev
Cc: Marcin Baran, Thomas Monjalon, Bruce Richardson, john.mcnamara,
david.marchand, Pawel Modrak
From: Marcin Baran <marcinx.baran@intel.com>
The libraries should be maintained using global
ABI versioning. The changes include adding global
ABI version support for both the makefile and
meson build systems. Experimental libraries
should be marked as version 0.
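A sketch of the experimental-library check this patch adds to both build
systems: a version.map with no "DPDK_" section (i.e. experimental-only)
gets library version 0, everything else gets the single global ABI
version read from config/ABI_VERSION. The value below is illustrative.

```python
ABI_VERSION = "20.0"  # contents of config/ABI_VERSION

def lib_so_version(map_text):
    # mirrors: grep "^DPDK_" <map>; a non-zero exit (no match)
    # means the library exports only experimental symbols
    has_stable = any(line.startswith("DPDK_")
                     for line in map_text.splitlines())
    return ABI_VERSION if has_stable else "0"
```

This is why, in the meson and makefile hunks below, the computed
so_version is either the global ABI version or "0".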
Signed-off-by: Marcin Baran <marcinx.baran@intel.com>
Signed-off-by: Pawel Modrak <pawelx.modrak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
buildtools/meson.build | 2 ++
config/ABI_VERSION | 1 +
config/meson.build | 3 ++-
drivers/meson.build | 20 ++++++++++++++------
lib/meson.build | 18 +++++++++++++-----
meson_options.txt | 2 --
mk/rte.lib.mk | 19 +++++++++++--------
7 files changed, 43 insertions(+), 22 deletions(-)
create mode 100644 config/ABI_VERSION
diff --git a/buildtools/meson.build b/buildtools/meson.build
index 32c79c1308..78ce69977d 100644
--- a/buildtools/meson.build
+++ b/buildtools/meson.build
@@ -12,3 +12,5 @@ if python3.found()
else
map_to_def_cmd = ['meson', 'runpython', files('map_to_def.py')]
endif
+
+is_experimental_cmd = [find_program('grep', 'findstr'), '^DPDK_']
diff --git a/config/ABI_VERSION b/config/ABI_VERSION
new file mode 100644
index 0000000000..9a7c1e503f
--- /dev/null
+++ b/config/ABI_VERSION
@@ -0,0 +1 @@
+20.0
diff --git a/config/meson.build b/config/meson.build
index a27f731f85..25ecf928e4 100644
--- a/config/meson.build
+++ b/config/meson.build
@@ -17,7 +17,8 @@ endforeach
# set the major version, which might be used by drivers and libraries
# depending on the configuration options
pver = meson.project_version().split('.')
-major_version = '@0@.@1@'.format(pver.get(0), pver.get(1))
+major_version = run_command(find_program('cat', 'more'),
+ files('ABI_VERSION')).stdout().strip()
# extract all version information into the build configuration
dpdk_conf.set('RTE_VER_YEAR', pver.get(0).to_int())
diff --git a/drivers/meson.build b/drivers/meson.build
index 2ed2e95411..5c5fe87c7e 100644
--- a/drivers/meson.build
+++ b/drivers/meson.build
@@ -110,9 +110,20 @@ foreach class:dpdk_driver_classes
output: out_filename,
depends: [pmdinfogen, tmp_lib])
- if get_option('per_library_versions')
- lib_version = '@0@.1'.format(version)
- so_version = '@0@'.format(version)
+ version_map = '@0@/@1@/@2@_version.map'.format(
+ meson.current_source_dir(),
+ drv_path, lib_name)
+
+ if is_windows
+ version_map = '\\'.join(version_map.split('/'))
+ endif
+
+ is_experimental = run_command(is_experimental_cmd,
+ files(version_map)).returncode()
+
+ if is_experimental != 0
+ lib_version = '0.1'
+ so_version = '0'
else
lib_version = major_version
so_version = major_version
@@ -128,9 +139,6 @@ foreach class:dpdk_driver_classes
install: true)
# now build the shared driver
- version_map = '@0@/@1@/@2@_version.map'.format(
- meson.current_source_dir(),
- drv_path, lib_name)
shared_lib = shared_library(lib_name,
sources,
objects: objs,
diff --git a/lib/meson.build b/lib/meson.build
index e5ff838934..3892c16e8f 100644
--- a/lib/meson.build
+++ b/lib/meson.build
@@ -97,9 +97,19 @@ foreach l:libraries
cflags += '-DALLOW_EXPERIMENTAL_API'
endif
- if get_option('per_library_versions')
- lib_version = '@0@.1'.format(version)
- so_version = '@0@'.format(version)
+ version_map = '@0@/@1@/rte_@2@_version.map'.format(
+ meson.current_source_dir(), dir_name, name)
+
+ if is_windows
+ version_map = '\\'.join(version_map.split('/'))
+ endif
+
+ is_experimental = run_command(is_experimental_cmd,
+ files(version_map)).returncode()
+
+ if is_experimental != 0
+ lib_version = '0.1'
+ so_version = '0'
else
lib_version = major_version
so_version = major_version
@@ -120,8 +130,6 @@ foreach l:libraries
# then use pre-build objects to build shared lib
sources = []
objs += static_lib.extract_all_objects(recursive: false)
- version_map = '@0@/@1@/rte_@2@_version.map'.format(
- meson.current_source_dir(), dir_name, name)
implib = dir_name + '.dll.a'
def_file = custom_target(name + '_def',
diff --git a/meson_options.txt b/meson_options.txt
index 448f3e63dc..000e38fd98 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -28,8 +28,6 @@ option('max_lcores', type: 'integer', value: 128,
description: 'maximum number of cores/threads supported by EAL')
option('max_numa_nodes', type: 'integer', value: 4,
description: 'maximum number of NUMA nodes supported by EAL')
-option('per_library_versions', type: 'boolean', value: true,
- description: 'true: each lib gets its own version number, false: DPDK version used for each lib')
option('tests', type: 'boolean', value: true,
description: 'build unit tests')
option('use_hpet', type: 'boolean', value: false,
diff --git a/mk/rte.lib.mk b/mk/rte.lib.mk
index 4df8849a08..f84161c6d5 100644
--- a/mk/rte.lib.mk
+++ b/mk/rte.lib.mk
@@ -11,20 +11,23 @@ EXTLIB_BUILD ?= n
# VPATH contains at least SRCDIR
VPATH += $(SRCDIR)
-ifneq ($(CONFIG_RTE_MAJOR_ABI),)
-ifneq ($(LIBABIVER),)
-LIBABIVER := $(CONFIG_RTE_MAJOR_ABI)
+ifeq ($(OS), Windows_NT)
+search_cmd = findstr
+print_cmd = more
+else
+search_cmd = grep
+print_cmd = cat
endif
+
+ifneq ($(shell $(search_cmd) "^DPDK_" $(SRCDIR)/$(EXPORT_MAP)),)
+LIBABIVER := $(shell $(print_cmd) $(RTE_SRCDIR)/config/ABI_VERSION)
+else
+LIBABIVER := 0
endif
ifeq ($(CONFIG_RTE_BUILD_SHARED_LIB),y)
LIB := $(patsubst %.a,%.so.$(LIBABIVER),$(LIB))
ifeq ($(EXTLIB_BUILD),n)
-ifeq ($(CONFIG_RTE_MAJOR_ABI),)
-ifeq ($(CONFIG_RTE_NEXT_ABI),y)
-LIB := $(LIB).1
-endif
-endif
CPU_LDFLAGS += --version-script=$(SRCDIR)/$(EXPORT_MAP)
endif
endif
--
2.17.1
* [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts
@ 2019-10-16 12:43 8% ` Anatoly Burakov
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
` (9 more replies)
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 01/10] config: change ABI versioning for global Anatoly Burakov
` (8 subsequent siblings)
9 siblings, 10 replies; 200+ results
From: Anatoly Burakov @ 2019-10-16 12:43 UTC (permalink / raw)
To: dev; +Cc: john.mcnamara, bruce.richardson, thomas, david.marchand
This patchset prepares the codebase for the new ABI policy and
adds a few helper scripts.
Two new scripts for managing ABI versions are added. The
first is a Python script that reads in a .map file, flattens
it and updates the ABI version to the one specified on the
command line.
The second is a shell script that runs the above-mentioned
Python script recursively over the source tree and sets the ABI
version to either the one defined in config/ABI_VERSION or
a user-specified one.
Example of its usage: buildtools/update-abi.sh 20.0
This will recurse into lib/ and drivers/ directory and update
whatever .map files it can find.
The other shell script added takes in a .so file and ensures
that its declared public ABI matches either the current ABI,
the next ABI, or EXPERIMENTAL. This was moved to the last
commit because it made no sense to have it earlier.
The source tree was verified to follow the new ABI policy using
the following command (assuming built binaries are in build/):
find ./build/lib ./build/drivers -name \*.so \
-exec ./buildtools/check-abi-version.sh {} \; -print
This returns 0.
Changes since v1:
- Reordered patchset to have removal of old ABI's before introducing
the new one to avoid compile breakages between patches
- Added a new patch fixing missing symbol in octeontx common
- Split script commits into multiple commits and reordered them
- Re-generated the ABI bump commit
- Verified all scripts to work
Anatoly Burakov (2):
buildtools: add ABI update shell script
drivers/octeontx: add missing public symbol
Marcin Baran (6):
config: change ABI versioning for global
timer: remove deprecated code
lpm: remove deprecated code
distributor: remove deprecated code
lib: change function suffix in distributor
buildtools: add ABI versioning check script
Pawel Modrak (2):
buildtools: add script for updating symbols abi version
build: change ABI version to 20.0
buildtools/check-abi-version.sh | 54 +
buildtools/meson.build | 2 +
buildtools/update-abi.sh | 36 +
buildtools/update_version_map_abi.py | 148 +++
config/ABI_VERSION | 1 +
config/meson.build | 3 +-
.../rte_pmd_bbdev_fpga_lte_fec_version.map | 8 +-
.../null/rte_pmd_bbdev_null_version.map | 2 +-
.../rte_pmd_bbdev_turbo_sw_version.map | 2 +-
drivers/bus/dpaa/rte_bus_dpaa_version.map | 115 +-
drivers/bus/fslmc/rte_bus_fslmc_version.map | 154 ++-
drivers/bus/ifpga/rte_bus_ifpga_version.map | 14 +-
drivers/bus/pci/rte_bus_pci_version.map | 2 +-
drivers/bus/vdev/rte_bus_vdev_version.map | 12 +-
drivers/bus/vmbus/rte_bus_vmbus_version.map | 12 +-
drivers/common/cpt/rte_common_cpt_version.map | 4 +-
.../common/dpaax/rte_common_dpaax_version.map | 4 +-
.../common/mvep/rte_common_mvep_version.map | 6 +-
.../octeontx/rte_common_octeontx_version.map | 7 +-
.../rte_common_octeontx2_version.map | 16 +-
.../compress/isal/rte_pmd_isal_version.map | 2 +-
.../rte_pmd_octeontx_compress_version.map | 2 +-
drivers/compress/qat/rte_pmd_qat_version.map | 2 +-
.../compress/zlib/rte_pmd_zlib_version.map | 2 +-
.../aesni_gcm/rte_pmd_aesni_gcm_version.map | 2 +-
.../aesni_mb/rte_pmd_aesni_mb_version.map | 2 +-
.../crypto/armv8/rte_pmd_armv8_version.map | 2 +-
.../caam_jr/rte_pmd_caam_jr_version.map | 3 +-
drivers/crypto/ccp/rte_pmd_ccp_version.map | 3 +-
.../dpaa2_sec/rte_pmd_dpaa2_sec_version.map | 10 +-
.../dpaa_sec/rte_pmd_dpaa_sec_version.map | 10 +-
.../crypto/kasumi/rte_pmd_kasumi_version.map | 2 +-
.../crypto/mvsam/rte_pmd_mvsam_version.map | 2 +-
.../crypto/nitrox/rte_pmd_nitrox_version.map | 2 +-
.../null/rte_pmd_null_crypto_version.map | 2 +-
.../rte_pmd_octeontx_crypto_version.map | 3 +-
.../openssl/rte_pmd_openssl_version.map | 2 +-
.../rte_pmd_crypto_scheduler_version.map | 19 +-
.../crypto/snow3g/rte_pmd_snow3g_version.map | 2 +-
.../virtio/rte_pmd_virtio_crypto_version.map | 2 +-
drivers/crypto/zuc/rte_pmd_zuc_version.map | 2 +-
.../event/dpaa/rte_pmd_dpaa_event_version.map | 3 +-
.../dpaa2/rte_pmd_dpaa2_event_version.map | 2 +-
.../event/dsw/rte_pmd_dsw_event_version.map | 2 +-
.../rte_pmd_octeontx_event_version.map | 2 +-
.../rte_pmd_octeontx2_event_version.map | 3 +-
.../event/opdl/rte_pmd_opdl_event_version.map | 2 +-
.../rte_pmd_skeleton_event_version.map | 3 +-
drivers/event/sw/rte_pmd_sw_event_version.map | 2 +-
.../bucket/rte_mempool_bucket_version.map | 3 +-
.../mempool/dpaa/rte_mempool_dpaa_version.map | 2 +-
.../dpaa2/rte_mempool_dpaa2_version.map | 12 +-
.../octeontx/rte_mempool_octeontx_version.map | 2 +-
.../rte_mempool_octeontx2_version.map | 4 +-
.../mempool/ring/rte_mempool_ring_version.map | 3 +-
.../stack/rte_mempool_stack_version.map | 3 +-
drivers/meson.build | 20 +-
.../af_packet/rte_pmd_af_packet_version.map | 3 +-
drivers/net/af_xdp/rte_pmd_af_xdp_version.map | 2 +-
drivers/net/ark/rte_pmd_ark_version.map | 5 +-
.../net/atlantic/rte_pmd_atlantic_version.map | 4 +-
drivers/net/avp/rte_pmd_avp_version.map | 2 +-
drivers/net/axgbe/rte_pmd_axgbe_version.map | 2 +-
drivers/net/bnx2x/rte_pmd_bnx2x_version.map | 3 +-
drivers/net/bnxt/rte_pmd_bnxt_version.map | 4 +-
drivers/net/bonding/rte_pmd_bond_version.map | 47 +-
drivers/net/cxgbe/rte_pmd_cxgbe_version.map | 3 +-
drivers/net/dpaa/rte_pmd_dpaa_version.map | 11 +-
drivers/net/dpaa2/rte_pmd_dpaa2_version.map | 12 +-
drivers/net/e1000/rte_pmd_e1000_version.map | 3 +-
drivers/net/ena/rte_pmd_ena_version.map | 3 +-
drivers/net/enetc/rte_pmd_enetc_version.map | 3 +-
drivers/net/enic/rte_pmd_enic_version.map | 3 +-
.../net/failsafe/rte_pmd_failsafe_version.map | 3 +-
drivers/net/fm10k/rte_pmd_fm10k_version.map | 3 +-
drivers/net/hinic/rte_pmd_hinic_version.map | 3 +-
drivers/net/hns3/rte_pmd_hns3_version.map | 4 +-
drivers/net/i40e/rte_pmd_i40e_version.map | 65 +-
drivers/net/iavf/rte_pmd_iavf_version.map | 3 +-
drivers/net/ice/rte_pmd_ice_version.map | 3 +-
drivers/net/ifc/rte_pmd_ifc_version.map | 3 +-
drivers/net/ipn3ke/rte_pmd_ipn3ke_version.map | 3 +-
drivers/net/ixgbe/rte_pmd_ixgbe_version.map | 62 +-
drivers/net/kni/rte_pmd_kni_version.map | 3 +-
.../net/liquidio/rte_pmd_liquidio_version.map | 3 +-
drivers/net/memif/rte_pmd_memif_version.map | 5 +-
drivers/net/mlx4/rte_pmd_mlx4_version.map | 3 +-
drivers/net/mlx5/rte_pmd_mlx5_version.map | 2 +-
drivers/net/mvneta/rte_pmd_mvneta_version.map | 2 +-
drivers/net/mvpp2/rte_pmd_mvpp2_version.map | 2 +-
drivers/net/netvsc/rte_pmd_netvsc_version.map | 4 +-
drivers/net/nfb/rte_pmd_nfb_version.map | 3 +-
drivers/net/nfp/rte_pmd_nfp_version.map | 2 +-
drivers/net/null/rte_pmd_null_version.map | 3 +-
.../net/octeontx/rte_pmd_octeontx_version.map | 10 +-
.../octeontx2/rte_pmd_octeontx2_version.map | 3 +-
drivers/net/pcap/rte_pmd_pcap_version.map | 3 +-
drivers/net/qede/rte_pmd_qede_version.map | 3 +-
drivers/net/ring/rte_pmd_ring_version.map | 10 +-
drivers/net/sfc/rte_pmd_sfc_version.map | 3 +-
.../net/softnic/rte_pmd_softnic_version.map | 2 +-
.../net/szedata2/rte_pmd_szedata2_version.map | 2 +-
drivers/net/tap/rte_pmd_tap_version.map | 3 +-
.../net/thunderx/rte_pmd_thunderx_version.map | 3 +-
.../rte_pmd_vdev_netvsc_version.map | 3 +-
drivers/net/vhost/rte_pmd_vhost_version.map | 11 +-
drivers/net/virtio/rte_pmd_virtio_version.map | 3 +-
.../net/vmxnet3/rte_pmd_vmxnet3_version.map | 3 +-
.../rte_rawdev_dpaa2_cmdif_version.map | 3 +-
.../rte_rawdev_dpaa2_qdma_version.map | 4 +-
.../raw/ifpga/rte_rawdev_ifpga_version.map | 3 +-
drivers/raw/ioat/rte_rawdev_ioat_version.map | 3 +-
drivers/raw/ntb/rte_rawdev_ntb_version.map | 5 +-
.../rte_rawdev_octeontx2_dma_version.map | 3 +-
.../skeleton/rte_rawdev_skeleton_version.map | 3 +-
lib/librte_acl/rte_acl_version.map | 2 +-
lib/librte_bbdev/rte_bbdev_version.map | 4 +
.../rte_bitratestats_version.map | 2 +-
lib/librte_bpf/rte_bpf_version.map | 4 +
lib/librte_cfgfile/rte_cfgfile_version.map | 34 +-
lib/librte_cmdline/rte_cmdline_version.map | 10 +-
.../rte_compressdev_version.map | 4 +
.../rte_cryptodev_version.map | 102 +-
lib/librte_distributor/Makefile | 2 +-
lib/librte_distributor/meson.build | 2 +-
lib/librte_distributor/rte_distributor.c | 80 +-
.../rte_distributor_private.h | 10 +-
...ributor_v20.c => rte_distributor_single.c} | 48 +-
...ributor_v20.h => rte_distributor_single.h} | 26 +-
.../rte_distributor_v1705.h | 61 --
.../rte_distributor_version.map | 16 +-
lib/librte_eal/rte_eal_version.map | 310 ++----
lib/librte_efd/rte_efd_version.map | 2 +-
lib/librte_ethdev/rte_ethdev_version.map | 160 +--
lib/librte_eventdev/rte_eventdev_version.map | 130 +--
.../rte_flow_classify_version.map | 4 +
lib/librte_gro/rte_gro_version.map | 2 +-
lib/librte_gso/rte_gso_version.map | 2 +-
lib/librte_hash/rte_hash_version.map | 43 +-
lib/librte_ip_frag/rte_ip_frag_version.map | 10 +-
lib/librte_ipsec/rte_ipsec_version.map | 4 +
lib/librte_jobstats/rte_jobstats_version.map | 10 +-
lib/librte_kni/rte_kni_version.map | 2 +-
lib/librte_kvargs/rte_kvargs_version.map | 4 +-
.../rte_latencystats_version.map | 2 +-
lib/librte_lpm/rte_lpm.c | 996 +-----------------
lib/librte_lpm/rte_lpm.h | 88 --
lib/librte_lpm/rte_lpm6.c | 132 +--
lib/librte_lpm/rte_lpm6.h | 25 -
lib/librte_lpm/rte_lpm_version.map | 39 +-
lib/librte_mbuf/rte_mbuf_version.map | 41 +-
lib/librte_member/rte_member_version.map | 2 +-
lib/librte_mempool/rte_mempool_version.map | 44 +-
lib/librte_meter/rte_meter_version.map | 13 +-
lib/librte_metrics/rte_metrics_version.map | 2 +-
lib/librte_net/rte_net_version.map | 23 +-
lib/librte_pci/rte_pci_version.map | 2 +-
lib/librte_pdump/rte_pdump_version.map | 2 +-
lib/librte_pipeline/rte_pipeline_version.map | 36 +-
lib/librte_port/rte_port_version.map | 64 +-
lib/librte_power/rte_power_version.map | 24 +-
lib/librte_rawdev/rte_rawdev_version.map | 4 +-
lib/librte_rcu/rte_rcu_version.map | 4 +
lib/librte_reorder/rte_reorder_version.map | 8 +-
lib/librte_ring/rte_ring_version.map | 10 +-
lib/librte_sched/rte_sched_version.map | 14 +-
lib/librte_security/rte_security_version.map | 2 +-
lib/librte_stack/rte_stack_version.map | 4 +
lib/librte_table/rte_table_version.map | 2 +-
.../rte_telemetry_version.map | 4 +
lib/librte_timer/rte_timer.c | 90 +-
lib/librte_timer/rte_timer.h | 15 -
lib/librte_timer/rte_timer_version.map | 12 +-
lib/librte_vhost/rte_vhost_version.map | 52 +-
lib/meson.build | 18 +-
meson_options.txt | 2 -
mk/rte.lib.mk | 19 +-
177 files changed, 1122 insertions(+), 2891 deletions(-)
create mode 100755 buildtools/check-abi-version.sh
create mode 100755 buildtools/update-abi.sh
create mode 100755 buildtools/update_version_map_abi.py
create mode 100644 config/ABI_VERSION
rename lib/librte_distributor/{rte_distributor_v20.c => rte_distributor_single.c} (87%)
rename lib/librte_distributor/{rte_distributor_v20.h => rte_distributor_single.h} (89%)
delete mode 100644 lib/librte_distributor/rte_distributor_v1705.h
--
2.17.1
* Re: [dpdk-dev] [PATCH v9 1/3] eal/arm64: add 128-bit atomic compare exchange
@ 2019-10-16 9:04 4% ` Phil Yang (Arm Technology China)
2019-10-17 12:45 0% ` David Marchand
From: Phil Yang (Arm Technology China) @ 2019-10-16 9:04 UTC (permalink / raw)
To: David Marchand
Cc: thomas, jerinj, Gage Eads, dev, hemant.agrawal,
Honnappa Nagarahalli, Gavin Hu (Arm Technology China),
nd, nd
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, October 15, 2019 8:16 PM
> To: Phil Yang (Arm Technology China) <Phil.Yang@arm.com>
> Cc: thomas@monjalon.net; jerinj@marvell.com; Gage Eads
> <gage.eads@intel.com>; dev <dev@dpdk.org>; hemant.agrawal@nxp.com;
> Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Gavin Hu (Arm
> Technology China) <Gavin.Hu@arm.com>; nd <nd@arm.com>
> Subject: Re: [dpdk-dev] [PATCH v9 1/3] eal/arm64: add 128-bit atomic
> compare exchange
>
> On Tue, Oct 15, 2019 at 1:32 PM Phil Yang (Arm Technology China)
> <Phil.Yang@arm.com> wrote:
> > > -----Original Message-----
> > > From: David Marchand <david.marchand@redhat.com>
> > > If LSE is available, we expose __rte_cas_XX (explicitely) *non*
> > > inlined functions, while without LSE, we expose inlined __rte_ldr_XX
> > > and __rte_stx_XX functions.
> > > So we have a first disparity with non-inlined vs inlined functions
> > > depending on a #ifdef.
>
> You did not comment on the inline / no inline part and I still see
> this in the v10.
> Is this __rte_noinline on the CAS function intentional?
Apologies for missing this item. Yes, it is intentional, to avoid an ABI break.
Please see commit
5b40ec6b966260e0ff66a8a2c689664f75d6a0e6 ("mempool/octeontx2: fix possible arm64 ABI break")
>
>
> > > Then, we have a second disparity with two sets of "apis" depending on
> > > this #ifdef.
> > >
> > > And we expose those sets with a rte_ prefix, meaning people will try
> > > to use them, but those are not part of a public api.
> > >
> > > Can't we do without them ? (see below [2] for a proposal with ldr/stx,
> > > cas should be the same)
> >
> > No, it doesn't work, because we need to verify the return value at the end
> > of the loop for these macros.
>
> Do you mean the return value for the stores?
My mistake; I missed the ret parameter in the macro, so this approach does work.
However, I suggest keeping them as static inline functions rather than as a macro inside the rte_atomic128_cmp_exchange API.
One reason is that the function names indicate the memory ordering of these operations.
Moreover, the inline functions pass the value in registers, so they should not cost much more than the macro.
I also think these 128-bit load and store functions could be used in other places once they have proven valuable in the rte_atomic128_cmp_exchange API, but let's keep them private for now.
BTW, the Linux kernel implements it the same way: https://github.com/torvalds/linux/blob/master/arch/arm64/include/asm/atomic_lse.h#L19
> > > #define __STORE_128(op_string, dst, val, ret) \
> > > asm volatile( \
> > > op_string " %w0, %1, %2, %3" \
> > > : "=&r" (ret) \
> > > : "r" (val.val[0]), \
> > > "r" (val.val[1]), \
> > > "Q" (dst->val[0]) \
> > > : "memory")
>
> The ret variable is still passed in this macro and the while loop can
> check it later.
>
>
> > > > diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h
> > > b/lib/librte_eal/common/include/generic/rte_atomic.h
> > > > index 24ff7dc..e6ab15a 100644
> > > > --- a/lib/librte_eal/common/include/generic/rte_atomic.h
> > > > +++ b/lib/librte_eal/common/include/generic/rte_atomic.h
> > > > @@ -1081,6 +1081,20 @@ static inline void
> > > rte_atomic64_clear(rte_atomic64_t *v)
> > > >
> > > > /*------------------------ 128 bit atomic operations -------------------------*/
> > > >
> > > > +/**
> > > > + * 128-bit integer structure.
> > > > + */
> > > > +RTE_STD_C11
> > > > +typedef struct {
> > > > + RTE_STD_C11
> > > > + union {
> > > > + uint64_t val[2];
> > > > +#ifdef RTE_ARCH_64
> > > > + __extension__ __int128 int128;
> > > > +#endif
> > >
> > > You hid this field for x86.
> > > What is the reason?
> > No, it is not hidden for x86. The RTE_ARCH_64 flag covers x86 as well.
>
> Ah indeed, I read it wrong, ARCH_64 ... AARCH64 ... :-)
>
>
>
> --
> David Marchand
* Re: [dpdk-dev] [PATCH 2/2] sched: modify internal structs and functions for 64 bit values
2019-10-15 15:47 4% ` Dumitrescu, Cristian
@ 2019-10-15 16:01 0% ` Singh, Jasvinder
From: Singh, Jasvinder @ 2019-10-15 16:01 UTC (permalink / raw)
To: Dumitrescu, Cristian, dev; +Cc: Krakowiak, LukaszX
> -----Original Message-----
> From: Dumitrescu, Cristian
> Sent: Tuesday, October 15, 2019 4:47 PM
> To: Singh, Jasvinder <jasvinder.singh@intel.com>; dev@dpdk.org
> Cc: Krakowiak, LukaszX <lukaszx.krakowiak@intel.com>
> Subject: RE: [PATCH 2/2] sched: modify internal structs and functions for 64 bit
> values
>
> Hi Jasvinder,
>
> > -----Original Message-----
> > From: Singh, Jasvinder
> > Sent: Monday, October 14, 2019 6:25 PM
> > To: dev@dpdk.org
> > Cc: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Krakowiak,
> > LukaszX <lukaszx.krakowiak@intel.com>
> > Subject: [PATCH 2/2] sched: modify internal structs and functions for
> > 64 bit values
> >
> > Modify internal structure and functions to support 64-bit values for
> > rates and stats parameters.
> >
> > Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> > Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
> > ---
> > lib/librte_sched/rte_approx.c | 57 ++++----
> > lib/librte_sched/rte_approx.h | 3 +-
> > lib/librte_sched/rte_sched.c | 211 +++++++++++++++-------------
> > lib/librte_sched/rte_sched_common.h | 12 +-
> > 4 files changed, 156 insertions(+), 127 deletions(-)
> >
> > diff --git a/lib/librte_sched/rte_approx.c
> > b/lib/librte_sched/rte_approx.c index 30620b83d..4883d3969 100644
> > --- a/lib/librte_sched/rte_approx.c
> > +++ b/lib/librte_sched/rte_approx.c
> > @@ -18,22 +18,23 @@
> > */
> >
> > /* fraction comparison: compare (a/b) and (c/d) */ -static inline
> > uint32_t -less(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
> > +static inline sched_counter_t
> > +less(sched_counter_t a, sched_counter_t b, sched_counter_t c,
> > sched_counter_t d)
> > {
> > return a*d < b*c;
> > }
> >
> > -static inline uint32_t
> > -less_or_equal(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
> > +static inline sched_counter_t
> > +less_or_equal(sched_counter_t a, sched_counter_t b, sched_counter_t c,
> > + sched_counter_t d)
> > {
> > return a*d <= b*c;
> > }
> >
> > /* check whether a/b is a valid approximation */ -static inline
> > uint32_t -matches(uint32_t a, uint32_t b,
> > - uint32_t alpha_num, uint32_t d_num, uint32_t denum)
> > +static inline sched_counter_t
> > +matches(sched_counter_t a, sched_counter_t b,
> > + sched_counter_t alpha_num, sched_counter_t d_num,
> > sched_counter_t denum)
> > {
> > if (less_or_equal(a, b, alpha_num - d_num, denum))
> > return 0;
> > @@ -45,33 +46,39 @@ matches(uint32_t a, uint32_t b, }
> >
> > static inline void
> > -find_exact_solution_left(uint32_t p_a, uint32_t q_a, uint32_t p_b,
> > uint32_t q_b,
> > - uint32_t alpha_num, uint32_t d_num, uint32_t denum, uint32_t *p,
> > uint32_t *q)
> > +find_exact_solution_left(sched_counter_t p_a, sched_counter_t q_a,
> > + sched_counter_t p_b, sched_counter_t q_b, sched_counter_t
> > alpha_num,
> > + sched_counter_t d_num, sched_counter_t denum,
> > sched_counter_t *p,
> > + sched_counter_t *q)
> > {
> > - uint32_t k_num = denum * p_b - (alpha_num + d_num) * q_b;
> > - uint32_t k_denum = (alpha_num + d_num) * q_a - denum * p_a;
> > - uint32_t k = (k_num / k_denum) + 1;
> > + sched_counter_t k_num = denum * p_b - (alpha_num + d_num) *
> > q_b;
> > + sched_counter_t k_denum = (alpha_num + d_num) * q_a - denum *
> > p_a;
> > + sched_counter_t k = (k_num / k_denum) + 1;
> >
> > *p = p_b + k * p_a;
> > *q = q_b + k * q_a;
> > }
> >
> > static inline void
> > -find_exact_solution_right(uint32_t p_a, uint32_t q_a, uint32_t p_b,
> > uint32_t q_b,
> > - uint32_t alpha_num, uint32_t d_num, uint32_t denum, uint32_t *p,
> > uint32_t *q)
> > +find_exact_solution_right(sched_counter_t p_a, sched_counter_t q_a,
> > + sched_counter_t p_b, sched_counter_t q_b, sched_counter_t
> > alpha_num,
> > + sched_counter_t d_num, sched_counter_t denum,
> > sched_counter_t *p,
> > + sched_counter_t *q)
> > {
> > - uint32_t k_num = - denum * p_b + (alpha_num - d_num) * q_b;
> > - uint32_t k_denum = - (alpha_num - d_num) * q_a + denum * p_a;
> > - uint32_t k = (k_num / k_denum) + 1;
> > + sched_counter_t k_num = -denum * p_b + (alpha_num - d_num) *
> > q_b;
> > + sched_counter_t k_denum = -(alpha_num - d_num) * q_a + denum
> > * p_a;
> > + sched_counter_t k = (k_num / k_denum) + 1;
> >
> > *p = p_b + k * p_a;
> > *q = q_b + k * q_a;
> > }
> >
> > static int
> > -find_best_rational_approximation(uint32_t alpha_num, uint32_t d_num,
> > uint32_t denum, uint32_t *p, uint32_t *q)
> > +find_best_rational_approximation(sched_counter_t alpha_num,
> > + sched_counter_t d_num, sched_counter_t denum,
> > sched_counter_t *p,
> > + sched_counter_t *q)
> > {
> > - uint32_t p_a, q_a, p_b, q_b;
> > + sched_counter_t p_a, q_a, p_b, q_b;
> >
> > /* check assumptions on the inputs */
> > if (!((0 < d_num) && (d_num < alpha_num) && (alpha_num <
> > denum) && (d_num + alpha_num < denum))) { @@ -85,8 +92,8 @@
> > find_best_rational_approximation(uint32_t alpha_num, uint32_t d_num,
> > uint32_t de
> > q_b = 1;
> >
> > while (1) {
> > - uint32_t new_p_a, new_q_a, new_p_b, new_q_b;
> > - uint32_t x_num, x_denum, x;
> > + sched_counter_t new_p_a, new_q_a, new_p_b, new_q_b;
> > + sched_counter_t x_num, x_denum, x;
> > int aa, bb;
> >
> > /* compute the number of steps to the left */ @@ -139,9
> +146,9 @@
> > find_best_rational_approximation(uint32_t
> > alpha_num, uint32_t d_num, uint32_t de
> > }
> > }
> >
> > -int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q)
> > +int rte_approx(double alpha, double d, sched_counter_t *p,
> > sched_counter_t *q)
> > {
> > - uint32_t alpha_num, d_num, denum;
> > + sched_counter_t alpha_num, d_num, denum;
> >
> > /* Check input arguments */
> > if (!((0.0 < d) && (d < alpha) && (alpha < 1.0))) { @@ -159,8 +166,8
> > @@ int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q)
> > d *= 10;
> > denum *= 10;
> > }
> > - alpha_num = (uint32_t) alpha;
> > - d_num = (uint32_t) d;
> > + alpha_num = (sched_counter_t) alpha;
> > + d_num = (sched_counter_t) d;
> >
> > /* Perform approximation */
> > return find_best_rational_approximation(alpha_num, d_num, denum,
> p,
> > q); diff --git a/lib/librte_sched/rte_approx.h
> > b/lib/librte_sched/rte_approx.h index 0244d98f1..e591e122d 100644
> > --- a/lib/librte_sched/rte_approx.h
> > +++ b/lib/librte_sched/rte_approx.h
> > @@ -20,6 +20,7 @@ extern "C" {
> > ***/
> >
> > #include <stdint.h>
> > +#include "rte_sched_common.h"
> >
> > /**
> > * Find best rational approximation
> > @@ -37,7 +38,7 @@ extern "C" {
> > * @return
> > * 0 upon success, error code otherwise
> > */
> > -int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q);
> > +int rte_approx(double alpha, double d, sched_counter_t *p,
> > sched_counter_t *q);
> >
> > #ifdef __cplusplus
> > }
>
> Please keep the rte_approx.[hc] independent of the librte_sched library, so use
> uint32_t or uint64_t instead of sched_counter_t, which is librte_sched dependent.
> Also, for the same reason, remove the above inclusion of rte_sched_common.h.
Ok, will make these changes.
> Please keep the existing 32-bit functions with their current name & prototype
> and create new 64-bit functions that have the "64" suffix to their name, and use
> the 64-bit versions in the rte_sched.c implementation. Makes sense?
Yes, will add new functions with suffix "64" in rte_approx.[hc].
> The rte_approx.[hc] files represent the implementation of an arithmetic
> algorithm that is completely independent of the scheduler library. In fact, they
> could be moved to a more generic location in DPDK where they could be
> leveraged by other libraries without the need to create a (fake) dependency on
> librte_sched.
>
> > diff --git a/lib/librte_sched/rte_sched.c
> > b/lib/librte_sched/rte_sched.c index 710ecf65a..11d1febe2 100644
> > --- a/lib/librte_sched/rte_sched.c
> > +++ b/lib/librte_sched/rte_sched.c
> > @@ -49,13 +49,13 @@
> >
> > struct rte_sched_pipe_profile {
> > /* Token bucket (TB) */
> > - uint32_t tb_period;
> > - uint32_t tb_credits_per_period;
> > - uint32_t tb_size;
> > + sched_counter_t tb_period;
> > + sched_counter_t tb_credits_per_period;
> > + sched_counter_t tb_size;
> >
> > /* Pipe traffic classes */
> > - uint32_t tc_period;
> > - uint32_t
> > tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > + sched_counter_t tc_period;
> > + sched_counter_t
> > tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > uint8_t tc_ov_weight;
> >
> > /* Pipe best-effort traffic class queues */ @@ -65,20 +65,20 @@
> > struct rte_sched_pipe_profile { struct rte_sched_pipe {
> > /* Token bucket (TB) */
> > uint64_t tb_time; /* time of last update */
> > - uint32_t tb_credits;
> > + sched_counter_t tb_credits;
> >
> > /* Pipe profile and flags */
> > uint32_t profile;
> >
> > /* Traffic classes (TCs) */
> > uint64_t tc_time; /* time of next update */
> > - uint32_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > + sched_counter_t
> > tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> >
> > /* Weighted Round Robin (WRR) */
> > uint8_t wrr_tokens[RTE_SCHED_BE_QUEUES_PER_PIPE];
> >
> > /* TC oversubscription */
> > - uint32_t tc_ov_credits;
> > + sched_counter_t tc_ov_credits;
> > uint8_t tc_ov_period_id;
> > } __rte_cache_aligned;
> >
> > @@ -141,28 +141,28 @@ struct rte_sched_grinder { struct
> > rte_sched_subport {
> > /* Token bucket (TB) */
> > uint64_t tb_time; /* time of last update */
> > - uint32_t tb_period;
> > - uint32_t tb_credits_per_period;
> > - uint32_t tb_size;
> > - uint32_t tb_credits;
> > + sched_counter_t tb_period;
> > + sched_counter_t tb_credits_per_period;
> > + sched_counter_t tb_size;
> > + sched_counter_t tb_credits;
> >
> > /* Traffic classes (TCs) */
> > uint64_t tc_time; /* time of next update */
> > - uint32_t
> > tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > - uint32_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > - uint32_t tc_period;
> > + sched_counter_t
> > tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > + sched_counter_t
> > tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > + sched_counter_t tc_period;
> >
> > /* TC oversubscription */
> > - uint32_t tc_ov_wm;
> > - uint32_t tc_ov_wm_min;
> > - uint32_t tc_ov_wm_max;
> > + sched_counter_t tc_ov_wm;
> > + sched_counter_t tc_ov_wm_min;
> > + sched_counter_t tc_ov_wm_max;
> > uint8_t tc_ov_period_id;
> > uint8_t tc_ov;
> > uint32_t tc_ov_n;
> > double tc_ov_rate;
> >
> > /* Statistics */
> > - struct rte_sched_subport_stats stats;
> > + struct rte_sched_subport_stats stats __rte_cache_aligned;
> >
> > /* Subport pipes */
> > uint32_t n_pipes_per_subport_enabled; @@ -170,7 +170,7 @@ struct
> > rte_sched_subport {
> > uint32_t n_max_pipe_profiles;
> >
> > /* Pipe best-effort TC rate */
> > - uint32_t pipe_tc_be_rate_max;
> > + sched_counter_t pipe_tc_be_rate_max;
> >
> > /* Pipe queues size */
> > uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > @@ -212,7 +212,7 @@ struct rte_sched_port {
> > uint16_t pipe_queue[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > uint8_t pipe_tc[RTE_SCHED_QUEUES_PER_PIPE];
> > uint8_t tc_queue[RTE_SCHED_QUEUES_PER_PIPE];
> > - uint32_t rate;
> > + sched_counter_t rate;
> > uint32_t mtu;
> > uint32_t frame_overhead;
> > int socket;
> > @@ -517,33 +517,35 @@ rte_sched_port_log_pipe_profile(struct
> > rte_sched_subport *subport, uint32_t i)
> > struct rte_sched_pipe_profile *p = subport->pipe_profiles + i;
> >
> > RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n"
> > - " Token bucket: period = %u, credits per period = %u,
> > size = %u\n"
> > - " Traffic classes: period = %u,\n"
> > - " credits per period = [%u, %u, %u, %u, %u, %u, %u,
> > %u, %u, %u, %u, %u, %u]\n"
> > + " Token bucket: period = %"PRIu64", credits per period
> > = %"PRIu64", size = %"PRIu64"\n"
> > + " Traffic classes: period = %"PRIu64",\n"
> > + " credits per period = [%"PRIu64", %"PRIu64",
> > %"PRIu64", %"PRIu64
> > + ", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64",
> > %"PRIu64
> > + ", %"PRIu64", %"PRIu64", %"PRIu64"]\n"
> > " Best-effort traffic class oversubscription: weight =
> > %hhu\n"
> > " WRR cost: [%hhu, %hhu, %hhu, %hhu]\n",
> > i,
> >
> > /* Token bucket */
> > - p->tb_period,
> > - p->tb_credits_per_period,
> > - p->tb_size,
> > + (uint64_t)p->tb_period,
> > + (uint64_t)p->tb_credits_per_period,
> > + (uint64_t)p->tb_size,
> >
> > /* Traffic classes */
> > - p->tc_period,
> > - p->tc_credits_per_period[0],
> > - p->tc_credits_per_period[1],
> > - p->tc_credits_per_period[2],
> > - p->tc_credits_per_period[3],
> > - p->tc_credits_per_period[4],
> > - p->tc_credits_per_period[5],
> > - p->tc_credits_per_period[6],
> > - p->tc_credits_per_period[7],
> > - p->tc_credits_per_period[8],
> > - p->tc_credits_per_period[9],
> > - p->tc_credits_per_period[10],
> > - p->tc_credits_per_period[11],
> > - p->tc_credits_per_period[12],
> > + (uint64_t)p->tc_period,
> > + (uint64_t)p->tc_credits_per_period[0],
> > + (uint64_t)p->tc_credits_per_period[1],
> > + (uint64_t)p->tc_credits_per_period[2],
> > + (uint64_t)p->tc_credits_per_period[3],
> > + (uint64_t)p->tc_credits_per_period[4],
> > + (uint64_t)p->tc_credits_per_period[5],
> > + (uint64_t)p->tc_credits_per_period[6],
> > + (uint64_t)p->tc_credits_per_period[7],
> > + (uint64_t)p->tc_credits_per_period[8],
> > + (uint64_t)p->tc_credits_per_period[9],
> > + (uint64_t)p->tc_credits_per_period[10],
> > + (uint64_t)p->tc_credits_per_period[11],
> > + (uint64_t)p->tc_credits_per_period[12],
> >
> > /* Best-effort traffic class oversubscription */
> > p->tc_ov_weight,
> > @@ -553,7 +555,7 @@ rte_sched_port_log_pipe_profile(struct
> > rte_sched_subport *subport, uint32_t i) }
> >
> > static inline uint64_t
> > -rte_sched_time_ms_to_bytes(uint32_t time_ms, uint32_t rate)
> > +rte_sched_time_ms_to_bytes(sched_counter_t time_ms,
> > sched_counter_t rate)
> > {
> > uint64_t time = time_ms;
> >
> > @@ -566,7 +568,7 @@ static void
> > rte_sched_pipe_profile_convert(struct rte_sched_subport *subport,
> > struct rte_sched_pipe_params *src,
> > struct rte_sched_pipe_profile *dst,
> > - uint32_t rate)
> > + sched_counter_t rate)
> > {
> > uint32_t wrr_cost[RTE_SCHED_BE_QUEUES_PER_PIPE];
> > uint32_t lcd1, lcd2, lcd;
> > @@ -581,8 +583,8 @@ rte_sched_pipe_profile_convert(struct
> > rte_sched_subport *subport,
> > / (double) rate;
> > double d = RTE_SCHED_TB_RATE_CONFIG_ERR;
> >
> > - rte_approx(tb_rate, d,
> > - &dst->tb_credits_per_period, &dst->tb_period);
> > + rte_approx(tb_rate, d, &dst->tb_credits_per_period,
> > + &dst->tb_period);
> > }
> >
> > dst->tb_size = src->tb_size;
> > @@ -594,8 +596,8 @@ rte_sched_pipe_profile_convert(struct
> > rte_sched_subport *subport,
> > for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
> > if (subport->qsize[i])
> > dst->tc_credits_per_period[i]
> > - = rte_sched_time_ms_to_bytes(src-
> > >tc_period,
> > - src->tc_rate[i]);
> > + = (sched_counter_t)
> > rte_sched_time_ms_to_bytes(
> > + src->tc_period, src->tc_rate[i]);
> >
> > dst->tc_ov_weight = src->tc_ov_weight;
> >
> > @@ -637,7 +639,8 @@ rte_sched_subport_config_pipe_profile_table(struct
> > rte_sched_subport *subport,
> > subport->pipe_tc_be_rate_max = 0;
> > for (i = 0; i < subport->n_pipe_profiles; i++) {
> > struct rte_sched_pipe_params *src = params->pipe_profiles
> > + i;
> > - uint32_t pipe_tc_be_rate = src-
> > >tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE];
> > + sched_counter_t pipe_tc_be_rate =
> > + src->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE];
> >
> > if (subport->pipe_tc_be_rate_max < pipe_tc_be_rate)
> > subport->pipe_tc_be_rate_max = pipe_tc_be_rate;
> @@ -647,7 +650,7
> > @@ rte_sched_subport_config_pipe_profile_table(struct
> > rte_sched_subport *subport,
> > static int
> > rte_sched_subport_check_params(struct rte_sched_subport_params
> > *params,
> > uint32_t n_max_pipes_per_subport,
> > - uint32_t rate)
> > + sched_counter_t rate)
> > {
> > uint32_t i;
> >
> > @@ -684,7 +687,7 @@ rte_sched_subport_check_params(struct
> > rte_sched_subport_params *params,
> > }
> >
> > for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
> > - uint32_t tc_rate = params->tc_rate[i];
> > + sched_counter_t tc_rate = params->tc_rate[i];
> > uint16_t qsize = params->qsize[i];
> >
> > if ((qsize == 0 && tc_rate != 0) || @@ -910,36 +913,40 @@
> > rte_sched_port_log_subport_config(struct
> > rte_sched_port *port, uint32_t i)
> > struct rte_sched_subport *s = port->subports[i];
> >
> > RTE_LOG(DEBUG, SCHED, "Low level config for subport %u:\n"
> > - " Token bucket: period = %u, credits per period = %u,
> > size = %u\n"
> > - " Traffic classes: period = %u\n"
> > - " credits per period = [%u, %u, %u, %u, %u, %u, %u,
> > %u, %u, %u, %u, %u, %u]\n"
> > - " Best effort traffic class oversubscription: wm min =
> > %u, wm max = %u\n",
> > + " Token bucket: period = %"PRIu64", credits per period
> > = %"PRIu64
> > + ", size = %"PRIu64"\n"
> > + " Traffic classes: period = %"PRIu64"\n"
> > + " credits per period = [%"PRIu64", %"PRIu64",
> > %"PRIu64", %"PRIu64
> > + ", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64",
> > %"PRIu64
> > + ", %"PRIu64", %"PRIu64", %"PRIu64"]\n"
> > + " Best effort traffic class oversubscription: wm min =
> > %"PRIu64
> > + ", wm max = %"PRIu64"\n",
> > i,
> >
> > /* Token bucket */
> > - s->tb_period,
> > - s->tb_credits_per_period,
> > - s->tb_size,
> > + (uint64_t)s->tb_period,
> > + (uint64_t)s->tb_credits_per_period,
> > + (uint64_t)s->tb_size,
> >
> > /* Traffic classes */
> > - s->tc_period,
> > - s->tc_credits_per_period[0],
> > - s->tc_credits_per_period[1],
> > - s->tc_credits_per_period[2],
> > - s->tc_credits_per_period[3],
> > - s->tc_credits_per_period[4],
> > - s->tc_credits_per_period[5],
> > - s->tc_credits_per_period[6],
> > - s->tc_credits_per_period[7],
> > - s->tc_credits_per_period[8],
> > - s->tc_credits_per_period[9],
> > - s->tc_credits_per_period[10],
> > - s->tc_credits_per_period[11],
> > - s->tc_credits_per_period[12],
> > + (uint64_t)s->tc_period,
> > + (uint64_t)s->tc_credits_per_period[0],
> > + (uint64_t)s->tc_credits_per_period[1],
> > + (uint64_t)s->tc_credits_per_period[2],
> > + (uint64_t)s->tc_credits_per_period[3],
> > + (uint64_t)s->tc_credits_per_period[4],
> > + (uint64_t)s->tc_credits_per_period[5],
> > + (uint64_t)s->tc_credits_per_period[6],
> > + (uint64_t)s->tc_credits_per_period[7],
> > + (uint64_t)s->tc_credits_per_period[8],
> > + (uint64_t)s->tc_credits_per_period[9],
> > + (uint64_t)s->tc_credits_per_period[10],
> > + (uint64_t)s->tc_credits_per_period[11],
> > + (uint64_t)s->tc_credits_per_period[12],
> >
> > /* Best effort traffic class oversubscription */
> > - s->tc_ov_wm_min,
> > - s->tc_ov_wm_max);
> > + (uint64_t)s->tc_ov_wm_min,
> > + (uint64_t)s->tc_ov_wm_max);
> > }
> >
> > static void
> > @@ -1023,7 +1030,8 @@ rte_sched_subport_config(struct rte_sched_port
> > *port,
> > double tb_rate = ((double) params->tb_rate) / ((double)
> > port->rate);
> > double d = RTE_SCHED_TB_RATE_CONFIG_ERR;
> >
> > - rte_approx(tb_rate, d, &s->tb_credits_per_period, &s-
> > >tb_period);
> > + rte_approx(tb_rate, d, &s->tb_credits_per_period,
> > + &s->tb_period);
> > }
> >
> > s->tb_size = params->tb_size;
> > @@ -1035,8 +1043,8 @@ rte_sched_subport_config(struct rte_sched_port
> > *port,
> > for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
> > if (params->qsize[i])
> > s->tc_credits_per_period[i]
> > - = rte_sched_time_ms_to_bytes(params-
> > >tc_period,
> > - params->tc_rate[i]);
> > + = (sched_counter_t)
> > rte_sched_time_ms_to_bytes(
> > + params->tc_period, params-
> > >tc_rate[i]);
> > }
> > s->tc_time = port->time + s->tc_period;
> > for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) @@ -
> 1970,13
> > +1978,15 @@ grinder_credits_update(struct rte_sched_port *port,
> > /* Subport TB */
> > n_periods = (port->time - subport->tb_time) / subport->tb_period;
> > subport->tb_credits += n_periods * subport-
> > >tb_credits_per_period;
> > - subport->tb_credits = rte_sched_min_val_2_u32(subport-
> > >tb_credits, subport->tb_size);
> > + subport->tb_credits = rte_sched_min_val_2(subport->tb_credits,
> > + subport->tb_size);
> > subport->tb_time += n_periods * subport->tb_period;
> >
> > /* Pipe TB */
> > n_periods = (port->time - pipe->tb_time) / params->tb_period;
> > pipe->tb_credits += n_periods * params->tb_credits_per_period;
> > - pipe->tb_credits = rte_sched_min_val_2_u32(pipe->tb_credits,
> > params->tb_size);
> > + pipe->tb_credits = rte_sched_min_val_2(pipe->tb_credits,
> > + params->tb_size);
>
> Can we remove all the usages of rte_sched_min_val() (including its definition in
> rte_sched_common.h) and replace it with RTE_MIN, please?
>
> > pipe->tb_time += n_periods * params->tb_period;
> >
> > /* Subport TCs */
> > @@ -1998,13 +2008,13 @@ grinder_credits_update(struct rte_sched_port
> > *port,
> >
> > #else
> >
> > -static inline uint32_t
> > +static inline sched_counter_t
> > grinder_tc_ov_credits_update(struct rte_sched_port *port,
> > struct rte_sched_subport *subport)
> > {
> > - uint32_t
> > tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > - uint32_t tc_consumption = 0, tc_ov_consumption_max;
> > - uint32_t tc_ov_wm = subport->tc_ov_wm;
> > + sched_counter_t
> > tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > + sched_counter_t tc_consumption = 0, tc_ov_consumption_max;
> > + sched_counter_t tc_ov_wm = subport->tc_ov_wm;
> > uint32_t i;
> >
> > if (subport->tc_ov == 0)
> > @@ -2053,13 +2063,15 @@ grinder_credits_update(struct rte_sched_port
> > *port,
> > /* Subport TB */
> > n_periods = (port->time - subport->tb_time) / subport->tb_period;
> > subport->tb_credits += n_periods * subport-
> > >tb_credits_per_period;
> > - subport->tb_credits = rte_sched_min_val_2_u32(subport-
> > >tb_credits, subport->tb_size);
> > + subport->tb_credits = rte_sched_min_val_2(subport->tb_credits,
> > + subport->tb_size);
> > subport->tb_time += n_periods * subport->tb_period;
> >
> > /* Pipe TB */
> > n_periods = (port->time - pipe->tb_time) / params->tb_period;
> > pipe->tb_credits += n_periods * params->tb_credits_per_period;
> > - pipe->tb_credits = rte_sched_min_val_2_u32(pipe->tb_credits,
> > params->tb_size);
> > + pipe->tb_credits = rte_sched_min_val_2(pipe->tb_credits,
> > + params->tb_size);
> > pipe->tb_time += n_periods * params->tb_period;
> >
> > /* Subport TCs */
> > @@ -2101,11 +2113,11 @@ grinder_credits_check(struct rte_sched_port
> > *port,
> > struct rte_sched_pipe *pipe = grinder->pipe;
> > struct rte_mbuf *pkt = grinder->pkt;
> > uint32_t tc_index = grinder->tc_index;
> > - uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
> > - uint32_t subport_tb_credits = subport->tb_credits;
> > - uint32_t subport_tc_credits = subport->tc_credits[tc_index];
> > - uint32_t pipe_tb_credits = pipe->tb_credits;
> > - uint32_t pipe_tc_credits = pipe->tc_credits[tc_index];
> > + sched_counter_t pkt_len = pkt->pkt_len + port->frame_overhead;
> > + sched_counter_t subport_tb_credits = subport->tb_credits;
> > + sched_counter_t subport_tc_credits = subport-
> > >tc_credits[tc_index];
> > + sched_counter_t pipe_tb_credits = pipe->tb_credits;
> > + sched_counter_t pipe_tc_credits = pipe->tc_credits[tc_index];
> > int enough_credits;
> >
> > /* Check queue credits */
> > @@ -2136,21 +2148,22 @@ grinder_credits_check(struct rte_sched_port
> > *port,
> > struct rte_sched_pipe *pipe = grinder->pipe;
> > struct rte_mbuf *pkt = grinder->pkt;
> > uint32_t tc_index = grinder->tc_index;
> > - uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
> > - uint32_t subport_tb_credits = subport->tb_credits;
> > - uint32_t subport_tc_credits = subport->tc_credits[tc_index];
> > - uint32_t pipe_tb_credits = pipe->tb_credits;
> > - uint32_t pipe_tc_credits = pipe->tc_credits[tc_index];
> > - uint32_t
> > pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > - uint32_t
> > pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE] = {0};
> > - uint32_t pipe_tc_ov_credits, i;
> > + sched_counter_t pkt_len = pkt->pkt_len + port->frame_overhead;
> > + sched_counter_t subport_tb_credits = subport->tb_credits;
> > + sched_counter_t subport_tc_credits = subport-
> > >tc_credits[tc_index];
> > + sched_counter_t pipe_tb_credits = pipe->tb_credits;
> > + sched_counter_t pipe_tc_credits = pipe->tc_credits[tc_index];
> > + sched_counter_t
> > pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> > + sched_counter_t
> > pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE] = {0};
> > + sched_counter_t pipe_tc_ov_credits;
> > + uint32_t i;
> > int enough_credits;
> >
> > for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
> > - pipe_tc_ov_mask1[i] = UINT32_MAX;
> > + pipe_tc_ov_mask1[i] = ~0;
>
> Please use ~0LLU (or UINT64_MAX) to cover the 64-bit case gracefully. Please
> also double-check that there are no usages of UINT32_MAX left in this code,
> unless there is a reason for it. Translation from 32-bit to 64-bit arithmetic can
> be very tricky and yield some very difficult to debug issues.
Ok, will make this change. Also will make sure there are no usages of UINT32_MAX left.
> >
> > pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASS_BE] = pipe-
> > >tc_ov_credits;
> > - pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASS_BE] =
> > UINT32_MAX;
> > + pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASS_BE] = ~0;
> > pipe_tc_ov_credits = pipe_tc_ov_mask1[tc_index];
> >
> > /* Check pipe and subport credits */ diff --git
> > a/lib/librte_sched/rte_sched_common.h
> > b/lib/librte_sched/rte_sched_common.h
> > index 8c191a9b8..06520a686 100644
> > --- a/lib/librte_sched/rte_sched_common.h
> > +++ b/lib/librte_sched/rte_sched_common.h
> > @@ -14,8 +14,16 @@ extern "C" {
> >
> > #define __rte_aligned_16 __attribute__((__aligned__(16)))
> >
> > -static inline uint32_t
> > -rte_sched_min_val_2_u32(uint32_t x, uint32_t y)
> > +//#define COUNTER_SIZE_64
> > +
> > +#ifdef COUNTER_SIZE_64
> > +typedef uint64_t sched_counter_t;
> > +#else
> > +typedef uint32_t sched_counter_t;
> > +#endif
> > +
> > +static inline sched_counter_t
> > +rte_sched_min_val_2(sched_counter_t x, sched_counter_t y)
> > {
> > return (x < y)? x : y;
> > }
> > --
> > 2.21.0
>
> I know I have previously suggested the creation of sched_counter_t, but this
> was meant to be a temporary solution until the full implementation is made
> available. Now that 19.11 is meant to be an ABI stable release, we cannot
> really afford this trick (which might not be necessary either, since you have the
> full implementation), as this #ifdef COUNTER_SIZE is a massive ABI breakage.
>
> Therefore, I strongly suggest we remove the sched_counter_t and use uint64_t
> everywhere throughout the implementation. Agree?
Yes, will make the changes.
Thank you for the detailed review. I'll send a revised version.
Regards,
Jasvinder
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH 2/2] sched: modify internal structs and functions for 64 bit values
@ 2019-10-15 15:47 4% ` Dumitrescu, Cristian
2019-10-15 16:01 0% ` Singh, Jasvinder
0 siblings, 1 reply; 200+ results
From: Dumitrescu, Cristian @ 2019-10-15 15:47 UTC (permalink / raw)
To: Singh, Jasvinder, dev; +Cc: Krakowiak, LukaszX
Hi Jasvinder,
> -----Original Message-----
> From: Singh, Jasvinder
> Sent: Monday, October 14, 2019 6:25 PM
> To: dev@dpdk.org
> Cc: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; Krakowiak,
> LukaszX <lukaszx.krakowiak@intel.com>
> Subject: [PATCH 2/2] sched: modify internal structs and functions for 64 bit
> values
>
> Modify internal structure and functions to support 64-bit
> values for rates and stats parameters.
>
> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
> ---
> lib/librte_sched/rte_approx.c | 57 ++++----
> lib/librte_sched/rte_approx.h | 3 +-
> lib/librte_sched/rte_sched.c | 211 +++++++++++++++-------------
> lib/librte_sched/rte_sched_common.h | 12 +-
> 4 files changed, 156 insertions(+), 127 deletions(-)
>
> diff --git a/lib/librte_sched/rte_approx.c b/lib/librte_sched/rte_approx.c
> index 30620b83d..4883d3969 100644
> --- a/lib/librte_sched/rte_approx.c
> +++ b/lib/librte_sched/rte_approx.c
> @@ -18,22 +18,23 @@
> */
>
> /* fraction comparison: compare (a/b) and (c/d) */
> -static inline uint32_t
> -less(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
> +static inline sched_counter_t
> +less(sched_counter_t a, sched_counter_t b, sched_counter_t c,
> sched_counter_t d)
> {
> return a*d < b*c;
> }
>
> -static inline uint32_t
> -less_or_equal(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
> +static inline sched_counter_t
> +less_or_equal(sched_counter_t a, sched_counter_t b, sched_counter_t c,
> + sched_counter_t d)
> {
> return a*d <= b*c;
> }
>
> /* check whether a/b is a valid approximation */
> -static inline uint32_t
> -matches(uint32_t a, uint32_t b,
> - uint32_t alpha_num, uint32_t d_num, uint32_t denum)
> +static inline sched_counter_t
> +matches(sched_counter_t a, sched_counter_t b,
> + sched_counter_t alpha_num, sched_counter_t d_num,
> sched_counter_t denum)
> {
> if (less_or_equal(a, b, alpha_num - d_num, denum))
> return 0;
> @@ -45,33 +46,39 @@ matches(uint32_t a, uint32_t b,
> }
>
> static inline void
> -find_exact_solution_left(uint32_t p_a, uint32_t q_a, uint32_t p_b, uint32_t
> q_b,
> - uint32_t alpha_num, uint32_t d_num, uint32_t denum, uint32_t *p,
> uint32_t *q)
> +find_exact_solution_left(sched_counter_t p_a, sched_counter_t q_a,
> + sched_counter_t p_b, sched_counter_t q_b, sched_counter_t
> alpha_num,
> + sched_counter_t d_num, sched_counter_t denum,
> sched_counter_t *p,
> + sched_counter_t *q)
> {
> - uint32_t k_num = denum * p_b - (alpha_num + d_num) * q_b;
> - uint32_t k_denum = (alpha_num + d_num) * q_a - denum * p_a;
> - uint32_t k = (k_num / k_denum) + 1;
> + sched_counter_t k_num = denum * p_b - (alpha_num + d_num) *
> q_b;
> + sched_counter_t k_denum = (alpha_num + d_num) * q_a - denum *
> p_a;
> + sched_counter_t k = (k_num / k_denum) + 1;
>
> *p = p_b + k * p_a;
> *q = q_b + k * q_a;
> }
>
> static inline void
> -find_exact_solution_right(uint32_t p_a, uint32_t q_a, uint32_t p_b,
> uint32_t q_b,
> - uint32_t alpha_num, uint32_t d_num, uint32_t denum, uint32_t *p,
> uint32_t *q)
> +find_exact_solution_right(sched_counter_t p_a, sched_counter_t q_a,
> + sched_counter_t p_b, sched_counter_t q_b, sched_counter_t
> alpha_num,
> + sched_counter_t d_num, sched_counter_t denum,
> sched_counter_t *p,
> + sched_counter_t *q)
> {
> - uint32_t k_num = - denum * p_b + (alpha_num - d_num) * q_b;
> - uint32_t k_denum = - (alpha_num - d_num) * q_a + denum * p_a;
> - uint32_t k = (k_num / k_denum) + 1;
> + sched_counter_t k_num = -denum * p_b + (alpha_num - d_num) *
> q_b;
> + sched_counter_t k_denum = -(alpha_num - d_num) * q_a + denum
> * p_a;
> + sched_counter_t k = (k_num / k_denum) + 1;
>
> *p = p_b + k * p_a;
> *q = q_b + k * q_a;
> }
>
> static int
> -find_best_rational_approximation(uint32_t alpha_num, uint32_t d_num,
> uint32_t denum, uint32_t *p, uint32_t *q)
> +find_best_rational_approximation(sched_counter_t alpha_num,
> + sched_counter_t d_num, sched_counter_t denum,
> sched_counter_t *p,
> + sched_counter_t *q)
> {
> - uint32_t p_a, q_a, p_b, q_b;
> + sched_counter_t p_a, q_a, p_b, q_b;
>
> /* check assumptions on the inputs */
> if (!((0 < d_num) && (d_num < alpha_num) && (alpha_num <
> denum) && (d_num + alpha_num < denum))) {
> @@ -85,8 +92,8 @@ find_best_rational_approximation(uint32_t alpha_num,
> uint32_t d_num, uint32_t de
> q_b = 1;
>
> while (1) {
> - uint32_t new_p_a, new_q_a, new_p_b, new_q_b;
> - uint32_t x_num, x_denum, x;
> + sched_counter_t new_p_a, new_q_a, new_p_b, new_q_b;
> + sched_counter_t x_num, x_denum, x;
> int aa, bb;
>
> /* compute the number of steps to the left */
> @@ -139,9 +146,9 @@ find_best_rational_approximation(uint32_t
> alpha_num, uint32_t d_num, uint32_t de
> }
> }
>
> -int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q)
> +int rte_approx(double alpha, double d, sched_counter_t *p,
> sched_counter_t *q)
> {
> - uint32_t alpha_num, d_num, denum;
> + sched_counter_t alpha_num, d_num, denum;
>
> /* Check input arguments */
> if (!((0.0 < d) && (d < alpha) && (alpha < 1.0))) {
> @@ -159,8 +166,8 @@ int rte_approx(double alpha, double d, uint32_t *p,
> uint32_t *q)
> d *= 10;
> denum *= 10;
> }
> - alpha_num = (uint32_t) alpha;
> - d_num = (uint32_t) d;
> + alpha_num = (sched_counter_t) alpha;
> + d_num = (sched_counter_t) d;
>
> /* Perform approximation */
> return find_best_rational_approximation(alpha_num, d_num,
> denum, p, q);
> diff --git a/lib/librte_sched/rte_approx.h b/lib/librte_sched/rte_approx.h
> index 0244d98f1..e591e122d 100644
> --- a/lib/librte_sched/rte_approx.h
> +++ b/lib/librte_sched/rte_approx.h
> @@ -20,6 +20,7 @@ extern "C" {
> ***/
>
> #include <stdint.h>
> +#include "rte_sched_common.h"
>
> /**
> * Find best rational approximation
> @@ -37,7 +38,7 @@ extern "C" {
> * @return
> * 0 upon success, error code otherwise
> */
> -int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q);
> +int rte_approx(double alpha, double d, sched_counter_t *p,
> sched_counter_t *q);
>
> #ifdef __cplusplus
> }
Please keep the rte_approx.[hc] files independent of the librte_sched library, so use uint32_t or uint64_t instead of sched_counter_t, which is librte_sched-dependent. Also, for the same reason, remove the above inclusion of rte_sched_common.h.
Please keep the existing 32-bit functions with their current name & prototype and create new 64-bit functions that carry the "64" suffix in their names, then use the 64-bit versions in the rte_sched.c implementation. Makes sense?
The rte_approx.[hc] files represent the implementation of an arithmetic algorithm that is completely independent of the scheduler library. In fact, they could be moved to a more generic location in DPDK where they could be leveraged by other libraries without the need to create a (fake) dependency to librte_sched.
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index 710ecf65a..11d1febe2 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -49,13 +49,13 @@
>
> struct rte_sched_pipe_profile {
> /* Token bucket (TB) */
> - uint32_t tb_period;
> - uint32_t tb_credits_per_period;
> - uint32_t tb_size;
> + sched_counter_t tb_period;
> + sched_counter_t tb_credits_per_period;
> + sched_counter_t tb_size;
>
> /* Pipe traffic classes */
> - uint32_t tc_period;
> - uint32_t
> tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> + sched_counter_t tc_period;
> + sched_counter_t
> tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> uint8_t tc_ov_weight;
>
> /* Pipe best-effort traffic class queues */
> @@ -65,20 +65,20 @@ struct rte_sched_pipe_profile {
> struct rte_sched_pipe {
> /* Token bucket (TB) */
> uint64_t tb_time; /* time of last update */
> - uint32_t tb_credits;
> + sched_counter_t tb_credits;
>
> /* Pipe profile and flags */
> uint32_t profile;
>
> /* Traffic classes (TCs) */
> uint64_t tc_time; /* time of next update */
> - uint32_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> + sched_counter_t
> tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
>
> /* Weighted Round Robin (WRR) */
> uint8_t wrr_tokens[RTE_SCHED_BE_QUEUES_PER_PIPE];
>
> /* TC oversubscription */
> - uint32_t tc_ov_credits;
> + sched_counter_t tc_ov_credits;
> uint8_t tc_ov_period_id;
> } __rte_cache_aligned;
>
> @@ -141,28 +141,28 @@ struct rte_sched_grinder {
> struct rte_sched_subport {
> /* Token bucket (TB) */
> uint64_t tb_time; /* time of last update */
> - uint32_t tb_period;
> - uint32_t tb_credits_per_period;
> - uint32_t tb_size;
> - uint32_t tb_credits;
> + sched_counter_t tb_period;
> + sched_counter_t tb_credits_per_period;
> + sched_counter_t tb_size;
> + sched_counter_t tb_credits;
>
> /* Traffic classes (TCs) */
> uint64_t tc_time; /* time of next update */
> - uint32_t
> tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> - uint32_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> - uint32_t tc_period;
> + sched_counter_t
> tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> + sched_counter_t
> tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> + sched_counter_t tc_period;
>
> /* TC oversubscription */
> - uint32_t tc_ov_wm;
> - uint32_t tc_ov_wm_min;
> - uint32_t tc_ov_wm_max;
> + sched_counter_t tc_ov_wm;
> + sched_counter_t tc_ov_wm_min;
> + sched_counter_t tc_ov_wm_max;
> uint8_t tc_ov_period_id;
> uint8_t tc_ov;
> uint32_t tc_ov_n;
> double tc_ov_rate;
>
> /* Statistics */
> - struct rte_sched_subport_stats stats;
> + struct rte_sched_subport_stats stats __rte_cache_aligned;
>
> /* Subport pipes */
> uint32_t n_pipes_per_subport_enabled;
> @@ -170,7 +170,7 @@ struct rte_sched_subport {
> uint32_t n_max_pipe_profiles;
>
> /* Pipe best-effort TC rate */
> - uint32_t pipe_tc_be_rate_max;
> + sched_counter_t pipe_tc_be_rate_max;
>
> /* Pipe queues size */
> uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> @@ -212,7 +212,7 @@ struct rte_sched_port {
> uint16_t pipe_queue[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> uint8_t pipe_tc[RTE_SCHED_QUEUES_PER_PIPE];
> uint8_t tc_queue[RTE_SCHED_QUEUES_PER_PIPE];
> - uint32_t rate;
> + sched_counter_t rate;
> uint32_t mtu;
> uint32_t frame_overhead;
> int socket;
> @@ -517,33 +517,35 @@ rte_sched_port_log_pipe_profile(struct
> rte_sched_subport *subport, uint32_t i)
> struct rte_sched_pipe_profile *p = subport->pipe_profiles + i;
>
> RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n"
> - " Token bucket: period = %u, credits per period = %u,
> size = %u\n"
> - " Traffic classes: period = %u,\n"
> - " credits per period = [%u, %u, %u, %u, %u, %u, %u,
> %u, %u, %u, %u, %u, %u]\n"
> + " Token bucket: period = %"PRIu64", credits per period
> = %"PRIu64", size = %"PRIu64"\n"
> + " Traffic classes: period = %"PRIu64",\n"
> + " credits per period = [%"PRIu64", %"PRIu64",
> %"PRIu64", %"PRIu64
> + ", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64",
> %"PRIu64
> + ", %"PRIu64", %"PRIu64", %"PRIu64"]\n"
> " Best-effort traffic class oversubscription: weight =
> %hhu\n"
> " WRR cost: [%hhu, %hhu, %hhu, %hhu]\n",
> i,
>
> /* Token bucket */
> - p->tb_period,
> - p->tb_credits_per_period,
> - p->tb_size,
> + (uint64_t)p->tb_period,
> + (uint64_t)p->tb_credits_per_period,
> + (uint64_t)p->tb_size,
>
> /* Traffic classes */
> - p->tc_period,
> - p->tc_credits_per_period[0],
> - p->tc_credits_per_period[1],
> - p->tc_credits_per_period[2],
> - p->tc_credits_per_period[3],
> - p->tc_credits_per_period[4],
> - p->tc_credits_per_period[5],
> - p->tc_credits_per_period[6],
> - p->tc_credits_per_period[7],
> - p->tc_credits_per_period[8],
> - p->tc_credits_per_period[9],
> - p->tc_credits_per_period[10],
> - p->tc_credits_per_period[11],
> - p->tc_credits_per_period[12],
> + (uint64_t)p->tc_period,
> + (uint64_t)p->tc_credits_per_period[0],
> + (uint64_t)p->tc_credits_per_period[1],
> + (uint64_t)p->tc_credits_per_period[2],
> + (uint64_t)p->tc_credits_per_period[3],
> + (uint64_t)p->tc_credits_per_period[4],
> + (uint64_t)p->tc_credits_per_period[5],
> + (uint64_t)p->tc_credits_per_period[6],
> + (uint64_t)p->tc_credits_per_period[7],
> + (uint64_t)p->tc_credits_per_period[8],
> + (uint64_t)p->tc_credits_per_period[9],
> + (uint64_t)p->tc_credits_per_period[10],
> + (uint64_t)p->tc_credits_per_period[11],
> + (uint64_t)p->tc_credits_per_period[12],
>
> /* Best-effort traffic class oversubscription */
> p->tc_ov_weight,
> @@ -553,7 +555,7 @@ rte_sched_port_log_pipe_profile(struct
> rte_sched_subport *subport, uint32_t i)
> }
>
> static inline uint64_t
> -rte_sched_time_ms_to_bytes(uint32_t time_ms, uint32_t rate)
> +rte_sched_time_ms_to_bytes(sched_counter_t time_ms,
> sched_counter_t rate)
> {
> uint64_t time = time_ms;
>
> @@ -566,7 +568,7 @@ static void
> rte_sched_pipe_profile_convert(struct rte_sched_subport *subport,
> struct rte_sched_pipe_params *src,
> struct rte_sched_pipe_profile *dst,
> - uint32_t rate)
> + sched_counter_t rate)
> {
> uint32_t wrr_cost[RTE_SCHED_BE_QUEUES_PER_PIPE];
> uint32_t lcd1, lcd2, lcd;
> @@ -581,8 +583,8 @@ rte_sched_pipe_profile_convert(struct
> rte_sched_subport *subport,
> / (double) rate;
> double d = RTE_SCHED_TB_RATE_CONFIG_ERR;
>
> - rte_approx(tb_rate, d,
> - &dst->tb_credits_per_period, &dst->tb_period);
> + rte_approx(tb_rate, d, &dst->tb_credits_per_period,
> + &dst->tb_period);
> }
>
> dst->tb_size = src->tb_size;
> @@ -594,8 +596,8 @@ rte_sched_pipe_profile_convert(struct
> rte_sched_subport *subport,
> for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
> if (subport->qsize[i])
> dst->tc_credits_per_period[i]
> - = rte_sched_time_ms_to_bytes(src-
> >tc_period,
> - src->tc_rate[i]);
> + = (sched_counter_t)
> rte_sched_time_ms_to_bytes(
> + src->tc_period, src->tc_rate[i]);
>
> dst->tc_ov_weight = src->tc_ov_weight;
>
> @@ -637,7 +639,8 @@ rte_sched_subport_config_pipe_profile_table(struct
> rte_sched_subport *subport,
> subport->pipe_tc_be_rate_max = 0;
> for (i = 0; i < subport->n_pipe_profiles; i++) {
> struct rte_sched_pipe_params *src = params->pipe_profiles
> + i;
> - uint32_t pipe_tc_be_rate = src-
> >tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE];
> + sched_counter_t pipe_tc_be_rate =
> + src->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE];
>
> if (subport->pipe_tc_be_rate_max < pipe_tc_be_rate)
> subport->pipe_tc_be_rate_max = pipe_tc_be_rate;
> @@ -647,7 +650,7 @@ rte_sched_subport_config_pipe_profile_table(struct
> rte_sched_subport *subport,
> static int
> rte_sched_subport_check_params(struct rte_sched_subport_params
> *params,
> uint32_t n_max_pipes_per_subport,
> - uint32_t rate)
> + sched_counter_t rate)
> {
> uint32_t i;
>
> @@ -684,7 +687,7 @@ rte_sched_subport_check_params(struct
> rte_sched_subport_params *params,
> }
>
> for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
> - uint32_t tc_rate = params->tc_rate[i];
> + sched_counter_t tc_rate = params->tc_rate[i];
> uint16_t qsize = params->qsize[i];
>
> if ((qsize == 0 && tc_rate != 0) ||
> @@ -910,36 +913,40 @@ rte_sched_port_log_subport_config(struct
> rte_sched_port *port, uint32_t i)
> struct rte_sched_subport *s = port->subports[i];
>
> RTE_LOG(DEBUG, SCHED, "Low level config for subport %u:\n"
> - " Token bucket: period = %u, credits per period = %u,
> size = %u\n"
> - " Traffic classes: period = %u\n"
> - " credits per period = [%u, %u, %u, %u, %u, %u, %u,
> %u, %u, %u, %u, %u, %u]\n"
> - " Best effort traffic class oversubscription: wm min =
> %u, wm max = %u\n",
> + " Token bucket: period = %"PRIu64", credits per period
> = %"PRIu64
> + ", size = %"PRIu64"\n"
> + " Traffic classes: period = %"PRIu64"\n"
> + " credits per period = [%"PRIu64", %"PRIu64",
> %"PRIu64", %"PRIu64
> + ", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64",
> %"PRIu64
> + ", %"PRIu64", %"PRIu64", %"PRIu64"]\n"
> + " Best effort traffic class oversubscription: wm min =
> %"PRIu64
> + ", wm max = %"PRIu64"\n",
> i,
>
> /* Token bucket */
> - s->tb_period,
> - s->tb_credits_per_period,
> - s->tb_size,
> + (uint64_t)s->tb_period,
> + (uint64_t)s->tb_credits_per_period,
> + (uint64_t)s->tb_size,
>
> /* Traffic classes */
> - s->tc_period,
> - s->tc_credits_per_period[0],
> - s->tc_credits_per_period[1],
> - s->tc_credits_per_period[2],
> - s->tc_credits_per_period[3],
> - s->tc_credits_per_period[4],
> - s->tc_credits_per_period[5],
> - s->tc_credits_per_period[6],
> - s->tc_credits_per_period[7],
> - s->tc_credits_per_period[8],
> - s->tc_credits_per_period[9],
> - s->tc_credits_per_period[10],
> - s->tc_credits_per_period[11],
> - s->tc_credits_per_period[12],
> + (uint64_t)s->tc_period,
> + (uint64_t)s->tc_credits_per_period[0],
> + (uint64_t)s->tc_credits_per_period[1],
> + (uint64_t)s->tc_credits_per_period[2],
> + (uint64_t)s->tc_credits_per_period[3],
> + (uint64_t)s->tc_credits_per_period[4],
> + (uint64_t)s->tc_credits_per_period[5],
> + (uint64_t)s->tc_credits_per_period[6],
> + (uint64_t)s->tc_credits_per_period[7],
> + (uint64_t)s->tc_credits_per_period[8],
> + (uint64_t)s->tc_credits_per_period[9],
> + (uint64_t)s->tc_credits_per_period[10],
> + (uint64_t)s->tc_credits_per_period[11],
> + (uint64_t)s->tc_credits_per_period[12],
>
> /* Best effort traffic class oversubscription */
> - s->tc_ov_wm_min,
> - s->tc_ov_wm_max);
> + (uint64_t)s->tc_ov_wm_min,
> + (uint64_t)s->tc_ov_wm_max);
> }
>
> static void
> @@ -1023,7 +1030,8 @@ rte_sched_subport_config(struct rte_sched_port
> *port,
> double tb_rate = ((double) params->tb_rate) / ((double)
> port->rate);
> double d = RTE_SCHED_TB_RATE_CONFIG_ERR;
>
> - rte_approx(tb_rate, d, &s->tb_credits_per_period, &s-
> >tb_period);
> + rte_approx(tb_rate, d, &s->tb_credits_per_period,
> + &s->tb_period);
> }
>
> s->tb_size = params->tb_size;
> @@ -1035,8 +1043,8 @@ rte_sched_subport_config(struct rte_sched_port
> *port,
> for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
> if (params->qsize[i])
> s->tc_credits_per_period[i]
> - = rte_sched_time_ms_to_bytes(params-
> >tc_period,
> - params->tc_rate[i]);
> + = (sched_counter_t)
> rte_sched_time_ms_to_bytes(
> + params->tc_period, params-
> >tc_rate[i]);
> }
> s->tc_time = port->time + s->tc_period;
> for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
> @@ -1970,13 +1978,15 @@ grinder_credits_update(struct rte_sched_port
> *port,
> /* Subport TB */
> n_periods = (port->time - subport->tb_time) / subport->tb_period;
> subport->tb_credits += n_periods * subport-
> >tb_credits_per_period;
> - subport->tb_credits = rte_sched_min_val_2_u32(subport-
> >tb_credits, subport->tb_size);
> + subport->tb_credits = rte_sched_min_val_2(subport->tb_credits,
> + subport->tb_size);
> subport->tb_time += n_periods * subport->tb_period;
>
> /* Pipe TB */
> n_periods = (port->time - pipe->tb_time) / params->tb_period;
> pipe->tb_credits += n_periods * params->tb_credits_per_period;
> - pipe->tb_credits = rte_sched_min_val_2_u32(pipe->tb_credits,
> params->tb_size);
> + pipe->tb_credits = rte_sched_min_val_2(pipe->tb_credits,
> + params->tb_size);
Can we remove all the usages of rte_sched_min_val() (including its definition in rte_sched_common.h) and replace them with RTE_MIN, please?
> pipe->tb_time += n_periods * params->tb_period;
>
> /* Subport TCs */
> @@ -1998,13 +2008,13 @@ grinder_credits_update(struct rte_sched_port
> *port,
>
> #else
>
> -static inline uint32_t
> +static inline sched_counter_t
> grinder_tc_ov_credits_update(struct rte_sched_port *port,
> struct rte_sched_subport *subport)
> {
> - uint32_t
> tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> - uint32_t tc_consumption = 0, tc_ov_consumption_max;
> - uint32_t tc_ov_wm = subport->tc_ov_wm;
> + sched_counter_t
> tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> + sched_counter_t tc_consumption = 0, tc_ov_consumption_max;
> + sched_counter_t tc_ov_wm = subport->tc_ov_wm;
> uint32_t i;
>
> if (subport->tc_ov == 0)
> @@ -2053,13 +2063,15 @@ grinder_credits_update(struct rte_sched_port
> *port,
> /* Subport TB */
> n_periods = (port->time - subport->tb_time) / subport->tb_period;
> subport->tb_credits += n_periods * subport-
> >tb_credits_per_period;
> - subport->tb_credits = rte_sched_min_val_2_u32(subport-
> >tb_credits, subport->tb_size);
> + subport->tb_credits = rte_sched_min_val_2(subport->tb_credits,
> + subport->tb_size);
> subport->tb_time += n_periods * subport->tb_period;
>
> /* Pipe TB */
> n_periods = (port->time - pipe->tb_time) / params->tb_period;
> pipe->tb_credits += n_periods * params->tb_credits_per_period;
> - pipe->tb_credits = rte_sched_min_val_2_u32(pipe->tb_credits,
> params->tb_size);
> + pipe->tb_credits = rte_sched_min_val_2(pipe->tb_credits,
> + params->tb_size);
> pipe->tb_time += n_periods * params->tb_period;
>
> /* Subport TCs */
> @@ -2101,11 +2113,11 @@ grinder_credits_check(struct rte_sched_port
> *port,
> struct rte_sched_pipe *pipe = grinder->pipe;
> struct rte_mbuf *pkt = grinder->pkt;
> uint32_t tc_index = grinder->tc_index;
> - uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
> - uint32_t subport_tb_credits = subport->tb_credits;
> - uint32_t subport_tc_credits = subport->tc_credits[tc_index];
> - uint32_t pipe_tb_credits = pipe->tb_credits;
> - uint32_t pipe_tc_credits = pipe->tc_credits[tc_index];
> + sched_counter_t pkt_len = pkt->pkt_len + port->frame_overhead;
> + sched_counter_t subport_tb_credits = subport->tb_credits;
> + sched_counter_t subport_tc_credits = subport-
> >tc_credits[tc_index];
> + sched_counter_t pipe_tb_credits = pipe->tb_credits;
> + sched_counter_t pipe_tc_credits = pipe->tc_credits[tc_index];
> int enough_credits;
>
> /* Check queue credits */
> @@ -2136,21 +2148,22 @@ grinder_credits_check(struct rte_sched_port
> *port,
> struct rte_sched_pipe *pipe = grinder->pipe;
> struct rte_mbuf *pkt = grinder->pkt;
> uint32_t tc_index = grinder->tc_index;
> - uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
> - uint32_t subport_tb_credits = subport->tb_credits;
> - uint32_t subport_tc_credits = subport->tc_credits[tc_index];
> - uint32_t pipe_tb_credits = pipe->tb_credits;
> - uint32_t pipe_tc_credits = pipe->tc_credits[tc_index];
> - uint32_t
> pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> - uint32_t
> pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE] = {0};
> - uint32_t pipe_tc_ov_credits, i;
> + sched_counter_t pkt_len = pkt->pkt_len + port->frame_overhead;
> + sched_counter_t subport_tb_credits = subport->tb_credits;
> + sched_counter_t subport_tc_credits = subport-
> >tc_credits[tc_index];
> + sched_counter_t pipe_tb_credits = pipe->tb_credits;
> + sched_counter_t pipe_tc_credits = pipe->tc_credits[tc_index];
> + sched_counter_t
> pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
> + sched_counter_t
> pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE] = {0};
> + sched_counter_t pipe_tc_ov_credits;
> + uint32_t i;
> int enough_credits;
>
> for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
> - pipe_tc_ov_mask1[i] = UINT32_MAX;
> + pipe_tc_ov_mask1[i] = ~0;
Please use ~0LLU (or UINT64_MAX) to cover the 64-bit case gracefully. Please also double-check that there are no usages of UINT32_MAX left in this code, unless there is a reason for it. Translation from 32-bit to 64-bit arithmetic can be very tricky and yield some very difficult to debug issues.
>
> pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASS_BE] = pipe-
> >tc_ov_credits;
> - pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASS_BE] =
> UINT32_MAX;
> + pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASS_BE] = ~0;
> pipe_tc_ov_credits = pipe_tc_ov_mask1[tc_index];
>
> /* Check pipe and subport credits */
> diff --git a/lib/librte_sched/rte_sched_common.h
> b/lib/librte_sched/rte_sched_common.h
> index 8c191a9b8..06520a686 100644
> --- a/lib/librte_sched/rte_sched_common.h
> +++ b/lib/librte_sched/rte_sched_common.h
> @@ -14,8 +14,16 @@ extern "C" {
>
> #define __rte_aligned_16 __attribute__((__aligned__(16)))
>
> -static inline uint32_t
> -rte_sched_min_val_2_u32(uint32_t x, uint32_t y)
> +//#define COUNTER_SIZE_64
> +
> +#ifdef COUNTER_SIZE_64
> +typedef uint64_t sched_counter_t;
> +#else
> +typedef uint32_t sched_counter_t;
> +#endif
> +
> +static inline sched_counter_t
> +rte_sched_min_val_2(sched_counter_t x, sched_counter_t y)
> {
> return (x < y)? x : y;
> }
> --
> 2.21.0
I know I have previously suggested the creation of sched_counter_t, but this was meant to be a temporary solution until the full implementation is made available. Now that 19.11 is meant to be an ABI stable release, we cannot really afford this trick (which might not be necessary either, since you have the full implementation), as this #ifdef COUNTER_SIZE is a massive ABI breakage.
Therefore, I strongly suggest we remove the sched_counter_t and use uint64_t everywhere throughout the implementation. Agree?
Regards,
Cristian
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v6 2/4] doc: changes to abi policy introducing major abi versions
@ 2019-10-15 15:11 5% ` David Marchand
2019-10-25 11:43 5% ` Ray Kinsella
2019-10-24 0:43 11% ` Thomas Monjalon
1 sibling, 1 reply; 200+ results
From: David Marchand @ 2019-10-15 15:11 UTC (permalink / raw)
To: Ray Kinsella
Cc: dev, Thomas Monjalon, Stephen Hemminger, Bruce Richardson, Yigit,
Ferruh, Ananyev, Konstantin, Jerin Jacob Kollanukkaran,
Olivier Matz, Neil Horman, Maxime Coquelin, Mcnamara, John,
Kovacevic, Marko, Hemant Agrawal, Kevin Traynor, Aaron Conole
Hello,
On Fri, Sep 27, 2019 at 6:55 PM Ray Kinsella <mdr@ashroe.eu> wrote:
>
> This policy change introduces major ABI versions, these are
> declared every year, typically aligned with the LTS release
> and are supported by subsequent releases in the following year.
> This change is intended to improve ABI stabilty for those projects
> consuming DPDK.
I spotted a few typos (far from being a complete report of them).
We can wait until later in the release to fix those, but it would be more
efficient if native speakers proofread these docs.
John and Marko?
Volunteers?
Thanks.
>
> Signed-off-by: Ray Kinsella <mdr@ashroe.eu>
> ---
> doc/guides/contributing/abi_policy.rst | 321 +++++++++++++++------
> .../contributing/img/abi_stability_policy.png | Bin 0 -> 61277 bytes
> doc/guides/contributing/img/what_is_an_abi.png | Bin 0 -> 151683 bytes
> doc/guides/contributing/stable.rst | 12 +-
> 4 files changed, 241 insertions(+), 92 deletions(-)
> create mode 100644 doc/guides/contributing/img/abi_stability_policy.png
> create mode 100644 doc/guides/contributing/img/what_is_an_abi.png
>
> diff --git a/doc/guides/contributing/abi_policy.rst b/doc/guides/contributing/abi_policy.rst
> index 55bacb4..8862d24 100644
> --- a/doc/guides/contributing/abi_policy.rst
> +++ b/doc/guides/contributing/abi_policy.rst
> @@ -1,33 +1,46 @@
> .. SPDX-License-Identifier: BSD-3-Clause
> - Copyright 2018 The DPDK contributors
> + Copyright 2019 The DPDK contributors
>
> -.. abi_api_policy:
> +.. _abi_policy:
>
> -DPDK ABI/API policy
> -===================
> +ABI Policy
> +==========
>
> Description
> -----------
>
> -This document details some methods for handling ABI management in the DPDK.
> +This document details the management policy that ensures the long-term stability
> +of the DPDK ABI and API.
>
> General Guidelines
> ------------------
>
> -#. Whenever possible, ABI should be preserved
> -#. ABI/API may be changed with a deprecation process
> -#. The modification of symbols can generally be managed with versioning
> -#. Libraries or APIs marked in ``experimental`` state may change without constraint
> -#. New APIs will be marked as ``experimental`` for at least one release to allow
> - any issues found by users of the new API to be fixed quickly
> -#. The addition of symbols is generally not problematic
> -#. The removal of symbols generally is an ABI break and requires bumping of the
> - LIBABIVER macro
> -#. Updates to the minimum hardware requirements, which drop support for hardware which
> - was previously supported, should be treated as an ABI change.
> -
> -What is an ABI
> -~~~~~~~~~~~~~~
> +#. Major ABI versions are declared every **year** and are then supported for one
> + year, typically aligned with the :ref:`LTS release <stable_lts_releases>`.
> +#. The ABI version is managed at a project level in DPDK, with the ABI version
> + reflected in all :ref:`library's soname <what_is_soname>`.
> +#. The ABI should be preserved and not changed lightly. ABI changes must follow
> + the outlined :ref:`deprecation process <abi_changes>`.
> +#. The addition of symbols is generally not problematic. The modification of
> + symbols is managed with :ref:`ABI Versioning <abi_versioning>`.
> +#. The removal of symbols is considered an :ref:`ABI breakage <abi_breakages>`,
> + once approved these will form part of the next ABI version.
> +#. Libraries or APIs marked as :ref:`Experimental <experimental_apis>` are not
> + considered part of an ABI version and may change without constraint.
> +#. Updates to the :ref:`minimum hardware requirements <hw_rqmts>`, which drop
> + support for hardware which was previously supported, should be treated as an
> + ABI change.
> +
> +.. note::
> +
> + In 2019, the DPDK community stated it's intention to move to ABI stable
its?
> + releases, over a number of release cycles. Beginning with maintaining ABI
> + stability through one year of DPDK releases starting from DPDK 19.11. This
sentence without a verb?
> + policy will be reviewed in 2020, with intention of lengthening the stability
> + period.
> +
> +What is an ABI?
> +~~~~~~~~~~~~~~~
>
> An ABI (Application Binary Interface) is the set of runtime interfaces exposed
> by a library. It is similar to an API (Application Programming Interface) but
> @@ -39,30 +52,80 @@ Therefore, in the case of dynamic linking, it is critical that an ABI is
> preserved, or (when modified), done in such a way that the application is unable
> to behave improperly or in an unexpected fashion.
>
> +.. _figure_what_is_an_abi:
> +
> +.. figure:: img/what_is_an_abi.*
> +
> +*Figure 1. Illustration of DPDK API and ABI .*
>
> -ABI/API Deprecation
> --------------------
> +
> +What is an ABI version?
> +~~~~~~~~~~~~~~~~~~~~~~~
> +
> +An ABI version is an instance of a library's ABI at a specific release. Certain
> +releases are considered by the community to be milestone releases, the yearly
> +LTS for example. Supporting those milestone release's ABI for some number of
> +subsequent releases is desirable to facilitate application upgrade. Those ABI
> +version's aligned with milestones release are therefore called 'ABI major
versions?
milestone releases
> +versions' and are supported for some number of releases.
> +
> +More details on major ABI version can be found in the :ref:`ABI versioning
> +<major_abi_versions>` guide.
>
> The DPDK ABI policy
> -~~~~~~~~~~~~~~~~~~~
> +-------------------
> +
> +A major ABI version is declared every year, aligned with that year's LTS
> +release, e.g. v19.11. This ABI version is then supported for one year by all
> +subsequent releases within that time period, until the next LTS release, e.g.
> +v20.11.
> +
> +At the declaration of a major ABI version, major version numbers encoded in
> +libraries soname's are bumped to indicate the new version, with the minor
> +version reset to ``0``. An example would be ``librte_eal.so.20.3`` would become
> +``librte_eal.so.21.0``.
>
> -ABI versions are set at the time of major release labeling, and the ABI may
> -change multiple times, without warning, between the last release label and the
> -HEAD label of the git tree.
> +The ABI may then change multiple times, without warning, between the last major
> +ABI version increment and the HEAD label of the git tree, with the condition
> +that ABI compatibility with the major ABI version is preserved and therefore
> +soname's do not change.
>
> -ABI versions, once released, are available until such time as their
> -deprecation has been noted in the Release Notes for at least one major release
> -cycle. For example consider the case where the ABI for DPDK 2.0 has been
> -shipped and then a decision is made to modify it during the development of
> -DPDK 2.1. The decision will be recorded in the Release Notes for the DPDK 2.1
> -release and the modification will be made available in the DPDK 2.2 release.
> +Minor versions are incremented to indicate the release of a new ABI compatible
> +DPDK release, typically the DPDK quarterly releases. An example of this, might
> +be that ``librte_eal.so.20.1`` would indicate the first ABI compatible DPDK
> +release, following the declaration of the new major ABI version ``20``.
>
> -ABI versions may be deprecated in whole or in part as needed by a given
> -update.
> +ABI versions, are supported by each release until such time as the next major
> +ABI version is declared. At that time, the deprecation of the previous major ABI
> +version will be noted in the Release Notes with guidance on individual symbol
> +depreciation and upgrade notes provided.
deprecation?
>
> -Some ABI changes may be too significant to reasonably maintain multiple
> -versions. In those cases ABI's may be updated without backward compatibility
> -being provided. The requirements for doing so are:
> +.. _figure_abi_stability_policy:
> +
> +.. figure:: img/abi_stability_policy.*
> +
> +*Figure 2. Mapping of new ABI versions and ABI version compatibility to DPDK
> +releases.*
> +
> +.. _abi_changes:
> +
> +ABI Changes
> +~~~~~~~~~~~
> +
> +The ABI may still change after the declaration of a major ABI version, that is
> +new APIs may be still added or existing APIs may be modified.
> +
> +.. Warning::
> +
> + Note that, this policy details the method by which the ABI may be changed,
> + with due regard to preserving compatibility and observing depreciation
deprecation?
> + notices. This process however should not be undertaken lightly, as a general
> + rule ABI stability is extremely important for downstream consumers of DPDK.
> + The ABI should only be changed for significant reasons, such as performance
> + enhancements. ABI breakages due to changes such as reorganizing public
> + structure fields for aesthetic or readability purposes should be avoided.
> +
> +The requirements for changing the ABI are:
[snip]
--
David Marchand
^ permalink raw reply [relevance 5%]
* [dpdk-dev] [PATCH v10 1/3] eal/arm64: add 128-bit atomic compare exchange
2019-10-14 15:43 0% ` David Marchand
@ 2019-10-15 11:38 2% ` Phil Yang
2019-10-18 11:21 4% ` [dpdk-dev] [PATCH v11 " Phil Yang
1 sibling, 1 reply; 200+ results
From: Phil Yang @ 2019-10-15 11:38 UTC (permalink / raw)
To: david.marchand, jerinj, gage.eads, dev
Cc: thomas, hemant.agrawal, Honnappa.Nagarahalli, gavin.hu, nd
This patch adds the implementation of the 128-bit atomic compare
exchange API on AArch64. The operation can be performed with paired
64-bit 'ldxp/stxp' instructions. Moreover, on platforms with the LSE
atomic extension, it is implemented with 'casp' instructions for
better performance.
Since the '__ARM_FEATURE_ATOMICS' flag is only supported from GCC 9, this
patch adds a new config flag 'RTE_ARM_FEATURE_ATOMICS' to enable the
'cas' version on older compilers.
Suggested-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Tested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
v10:
1.Removed all the rte tag for internal functions.
2.Removed __MO_LOAD and _MO_STORE macros and keep define __HAS_ACQ
and __HAS_REL under non LSE conditional branch.
3.Undef the macro once it is unused.
4.Reword the 1/3 and 2/3 patches' commitlog more specific.
v9:
Updated 19.11 release note.
v8:
Fixed "WARNING:LONG_LINE: line over 80 characters" warnings with latest kernel
checkpatch.pl
v7:
1. Adjust code comment.
v6:
1. Put the RTE_ARM_FEATURE_ATOMICS flag into the EAL group. (Jerin Jacob)
2. Keep rte_stack_lf_stubs.h doing nothing. (Gage Eads)
3. Fixed 32-bit build issue.
v5:
1. Enable RTE_ARM_FEATURE_ATOMICS on octeontx2 by default. (Jerin Jacob)
2. Record the reason for introducing "rte_stack_lf_stubs.h" in the git
commit. (Jerin Jacob)
3. Fixed a conditional MACRO error in rte_atomic128_cmp_exchange. (Jerin
Jacob)
v4:
1. Add RTE_ARM_FEATURE_ATOMICS flag to support LSE CASP instructions.
(Jerin Jacob)
2. Fix possible arm64 ABI break by making casp_op_name noinline. (Jerin
Jacob)
3. Add rte_stack_lf_stubs.h to reduce the ifdef clutter. (Gage
Eads/Jerin Jacob)
v3:
1. Avoid duplicated code with a macro. (Jerin Jacob)
2. Map an invalid memory order to the strongest barrier. (Jerin Jacob)
3. Update doc/guides/prog_guide/env_abstraction_layer.rst. (Gage Eads)
4. Fix 32-bit x86 builds issue. (Gage Eads)
5. Correct documentation issues in UT. (Gage Eads)
v2:
Initial version.
config/arm/meson.build | 2 +
config/common_base | 3 +
config/defconfig_arm64-octeontx2-linuxapp-gcc | 1 +
config/defconfig_arm64-thunderx2-linuxapp-gcc | 1 +
.../common/include/arch/arm/rte_atomic_64.h | 173 +++++++++++++++++++++
.../common/include/arch/x86/rte_atomic_64.h | 12 --
lib/librte_eal/common/include/generic/rte_atomic.h | 17 +-
7 files changed, 196 insertions(+), 13 deletions(-)
diff --git a/config/arm/meson.build b/config/arm/meson.build
index 979018e..9f28271 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -71,11 +71,13 @@ flags_thunderx2_extra = [
['RTE_CACHE_LINE_SIZE', 64],
['RTE_MAX_NUMA_NODES', 2],
['RTE_MAX_LCORE', 256],
+ ['RTE_ARM_FEATURE_ATOMICS', true],
['RTE_USE_C11_MEM_MODEL', true]]
flags_octeontx2_extra = [
['RTE_MACHINE', '"octeontx2"'],
['RTE_MAX_NUMA_NODES', 1],
['RTE_MAX_LCORE', 24],
+ ['RTE_ARM_FEATURE_ATOMICS', true],
['RTE_EAL_IGB_UIO', false],
['RTE_USE_C11_MEM_MODEL', true]]
diff --git a/config/common_base b/config/common_base
index e843a21..a96beb9 100644
--- a/config/common_base
+++ b/config/common_base
@@ -82,6 +82,9 @@ CONFIG_RTE_MAX_LCORE=128
CONFIG_RTE_MAX_NUMA_NODES=8
CONFIG_RTE_MAX_HEAPS=32
CONFIG_RTE_MAX_MEMSEG_LISTS=64
+
+# Use ARM LSE ATOMIC instructions
+CONFIG_RTE_ARM_FEATURE_ATOMICS=n
# each memseg list will be limited to either RTE_MAX_MEMSEG_PER_LIST pages
# or RTE_MAX_MEM_MB_PER_LIST megabytes worth of memory, whichever is smaller
CONFIG_RTE_MAX_MEMSEG_PER_LIST=8192
diff --git a/config/defconfig_arm64-octeontx2-linuxapp-gcc b/config/defconfig_arm64-octeontx2-linuxapp-gcc
index f20da24..7687dbe 100644
--- a/config/defconfig_arm64-octeontx2-linuxapp-gcc
+++ b/config/defconfig_arm64-octeontx2-linuxapp-gcc
@@ -9,6 +9,7 @@ CONFIG_RTE_MACHINE="octeontx2"
CONFIG_RTE_CACHE_LINE_SIZE=128
CONFIG_RTE_MAX_NUMA_NODES=1
CONFIG_RTE_MAX_LCORE=24
+CONFIG_RTE_ARM_FEATURE_ATOMICS=y
# Doesn't support NUMA
CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
diff --git a/config/defconfig_arm64-thunderx2-linuxapp-gcc b/config/defconfig_arm64-thunderx2-linuxapp-gcc
index cc5c64b..af4a89c 100644
--- a/config/defconfig_arm64-thunderx2-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx2-linuxapp-gcc
@@ -9,3 +9,4 @@ CONFIG_RTE_MACHINE="thunderx2"
CONFIG_RTE_CACHE_LINE_SIZE=64
CONFIG_RTE_MAX_NUMA_NODES=2
CONFIG_RTE_MAX_LCORE=256
+CONFIG_RTE_ARM_FEATURE_ATOMICS=y
diff --git a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
index 97060e4..7854c07 100644
--- a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
+++ b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
@@ -1,5 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright(c) 2015 Cavium, Inc
+ * Copyright(c) 2019 Arm Limited
*/
#ifndef _RTE_ATOMIC_ARM64_H_
@@ -14,6 +15,9 @@ extern "C" {
#endif
#include "generic/rte_atomic.h"
+#include <rte_branch_prediction.h>
+#include <rte_compat.h>
+#include <rte_debug.h>
#define dsb(opt) asm volatile("dsb " #opt : : : "memory")
#define dmb(opt) asm volatile("dmb " #opt : : : "memory")
@@ -40,6 +44,175 @@ extern "C" {
#define rte_cio_rmb() dmb(oshld)
+/*------------------------ 128 bit atomic operations -------------------------*/
+
+#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
+#define __ATOMIC128_CAS_OP(cas_op_name, op_string) \
+static __rte_noinline rte_int128_t \
+cas_op_name(rte_int128_t *dst, rte_int128_t old, \
+ rte_int128_t updated) \
+{ \
+ /* caspX instructions register pair must start from even-numbered
+ * register at operand 1.
+ * So, specify registers for local variables here.
+ */ \
+ register uint64_t x0 __asm("x0") = (uint64_t)old.val[0]; \
+ register uint64_t x1 __asm("x1") = (uint64_t)old.val[1]; \
+ register uint64_t x2 __asm("x2") = (uint64_t)updated.val[0]; \
+ register uint64_t x3 __asm("x3") = (uint64_t)updated.val[1]; \
+ asm volatile( \
+ op_string " %[old0], %[old1], %[upd0], %[upd1], [%[dst]]" \
+ : [old0] "+r" (x0), \
+ [old1] "+r" (x1) \
+ : [upd0] "r" (x2), \
+ [upd1] "r" (x3), \
+ [dst] "r" (dst) \
+ : "memory"); \
+ old.val[0] = x0; \
+ old.val[1] = x1; \
+ return old; \
+}
+
+__ATOMIC128_CAS_OP(__cas_relaxed, "casp")
+__ATOMIC128_CAS_OP(__cas_acquire, "caspa")
+__ATOMIC128_CAS_OP(__cas_release, "caspl")
+__ATOMIC128_CAS_OP(__cas_acq_rel, "caspal")
+
+#undef __ATOMIC128_CAS_OP
+
+#else
+#define __ATOMIC128_LDX_OP(ldx_op_name, op_string) \
+static inline rte_int128_t \
+ldx_op_name(const rte_int128_t *src) \
+{ \
+ rte_int128_t ret; \
+ asm volatile( \
+ op_string " %0, %1, %2" \
+ : "=&r" (ret.val[0]), \
+ "=&r" (ret.val[1]) \
+ : "Q" (src->val[0]) \
+ : "memory"); \
+ return ret; \
+}
+
+__ATOMIC128_LDX_OP(__ldx_relaxed, "ldxp")
+__ATOMIC128_LDX_OP(__ldx_acquire, "ldaxp")
+
+#undef __ATOMIC128_LDX_OP
+
+#define __ATOMIC128_STX_OP(stx_op_name, op_string) \
+static inline uint32_t \
+stx_op_name(rte_int128_t *dst, const rte_int128_t src) \
+{ \
+ uint32_t ret; \
+ asm volatile( \
+ op_string " %w0, %1, %2, %3" \
+ : "=&r" (ret) \
+ : "r" (src.val[0]), \
+ "r" (src.val[1]), \
+ "Q" (dst->val[0]) \
+ : "memory"); \
+ /* Return 0 on success, 1 on failure */ \
+ return ret; \
+}
+
+__ATOMIC128_STX_OP(__stx_relaxed, "stxp")
+__ATOMIC128_STX_OP(__stx_release, "stlxp")
+
+#undef __ATOMIC128_STX_OP
+
+#endif
+
+__rte_experimental
+static inline int
+rte_atomic128_cmp_exchange(rte_int128_t *dst,
+ rte_int128_t *exp,
+ const rte_int128_t *src,
+ unsigned int weak,
+ int success,
+ int failure)
+{
+ /* Always do strong CAS */
+ RTE_SET_USED(weak);
+ /* Ignore memory ordering for failure, memory order for
+ * success must be stronger or equal
+ */
+ RTE_SET_USED(failure);
+ /* Find invalid memory order */
+ RTE_ASSERT(success == __ATOMIC_RELAXED
+ || success == __ATOMIC_ACQUIRE
+ || success == __ATOMIC_RELEASE
+ || success == __ATOMIC_ACQ_REL
+ || success == __ATOMIC_SEQ_CST);
+
+#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
+ rte_int128_t expected = *exp;
+ rte_int128_t desired = *src;
+ rte_int128_t old;
+
+ if (success == __ATOMIC_RELAXED)
+ old = __cas_relaxed(dst, expected, desired);
+ else if (success == __ATOMIC_ACQUIRE)
+ old = __cas_acquire(dst, expected, desired);
+ else if (success == __ATOMIC_RELEASE)
+ old = __cas_release(dst, expected, desired);
+ else
+ old = __cas_acq_rel(dst, expected, desired);
+#else
+#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
+#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
+ (mo) == __ATOMIC_SEQ_CST)
+
+ int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
+ int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+
+#undef __HAS_ACQ
+#undef __HAS_RLS
+
+ uint32_t ret = 1;
+ register rte_int128_t expected = *exp;
+ register rte_int128_t desired = *src;
+ register rte_int128_t old;
+
+ /* ldx128 can not guarantee atomic,
+ * Must write back src or old to verify atomicity of ldx128;
+ */
+ do {
+ if (ldx_mo == __ATOMIC_RELAXED)
+ old = __ldx_relaxed(dst);
+ else
+ old = __ldx_acquire(dst);
+
+ if (likely(old.int128 == expected.int128)) {
+ if (stx_mo == __ATOMIC_RELAXED)
+ ret = __stx_relaxed(dst, desired);
+ else
+ ret = __stx_release(dst, desired);
+ } else {
+ /* In the failure case (since 'weak' is ignored and only
+ * weak == 0 is implemented), expected should contain
+ * the atomically read value of dst. This means, 'old'
+ * needs to be stored back to ensure it was read
+ * atomically.
+ */
+ if (stx_mo == __ATOMIC_RELAXED)
+ ret = __stx_relaxed(dst, old);
+ else
+ ret = __stx_release(dst, old);
+ }
+ } while (unlikely(ret));
+#endif
+
+ /* Unconditionally updating expected removes
+ * an 'if' statement.
+ * expected should already be in register if
+ * not in the cache.
+ */
+ *exp = old;
+
+ return (old.int128 == expected.int128);
+}
+
#ifdef __cplusplus
}
#endif
diff --git a/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h b/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
index 1335d92..cfe7067 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
@@ -183,18 +183,6 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
/*------------------------ 128 bit atomic operations -------------------------*/
-/**
- * 128-bit integer structure.
- */
-RTE_STD_C11
-typedef struct {
- RTE_STD_C11
- union {
- uint64_t val[2];
- __extension__ __int128 int128;
- };
-} __rte_aligned(16) rte_int128_t;
-
__rte_experimental
static inline int
rte_atomic128_cmp_exchange(rte_int128_t *dst,
diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
index 24ff7dc..e6ab15a 100644
--- a/lib/librte_eal/common/include/generic/rte_atomic.h
+++ b/lib/librte_eal/common/include/generic/rte_atomic.h
@@ -1081,6 +1081,20 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
/*------------------------ 128 bit atomic operations -------------------------*/
+/**
+ * 128-bit integer structure.
+ */
+RTE_STD_C11
+typedef struct {
+ RTE_STD_C11
+ union {
+ uint64_t val[2];
+#ifdef RTE_ARCH_64
+ __extension__ __int128 int128;
+#endif
+ };
+} __rte_aligned(16) rte_int128_t;
+
#ifdef __DOXYGEN__
/**
@@ -1093,7 +1107,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
* *exp = *dst
* @endcode
*
- * @note This function is currently only available for the x86-64 platform.
+ * @note This function is currently available for the x86-64 and aarch64
+ * platforms.
*
* @note The success and failure arguments must be one of the __ATOMIC_* values
* defined in the C++11 standard. For details on their behavior, refer to the
--
2.7.4
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH v3 1/3] lib/lpm: integrate RCU QSBR
2019-10-13 4:36 3% ` Honnappa Nagarahalli
@ 2019-10-15 11:15 0% ` Ananyev, Konstantin
2019-10-18 3:32 0% ` Honnappa Nagarahalli
0 siblings, 1 reply; 200+ results
From: Ananyev, Konstantin @ 2019-10-15 11:15 UTC (permalink / raw)
To: Honnappa Nagarahalli, Richardson, Bruce, Medvedkin, Vladimir,
olivier.matz
Cc: dev, stephen, paulmck, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Ruifeng Wang (Arm Technology China),
nd, Ruifeng Wang (Arm Technology China),
nd
> <snip>
>
> > Hi guys,
> I have tried to consolidate design related questions here. If I have missed anything, please add.
>
> >
> > >
> > > From: Ruifeng Wang <ruifeng.wang@arm.com>
> > >
> > > Currently, the tbl8 group is freed even though the readers might be
> > > using the tbl8 group entries. The freed tbl8 group can be reallocated
> > > quickly. This results in incorrect lookup results.
> > >
> > > RCU QSBR process is integrated for safe tbl8 group reclaim.
> > > Refer to RCU documentation to understand various aspects of
> > > integrating RCU library into other libraries.
> > >
> > > Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > > ---
> > > lib/librte_lpm/Makefile | 3 +-
> > > lib/librte_lpm/meson.build | 2 +
> > > lib/librte_lpm/rte_lpm.c | 102 +++++++++++++++++++++++++----
> > > lib/librte_lpm/rte_lpm.h | 21 ++++++
> > > lib/librte_lpm/rte_lpm_version.map | 6 ++
> > > 5 files changed, 122 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile index
> > > a7946a1c5..ca9e16312 100644
> > > --- a/lib/librte_lpm/Makefile
> > > +++ b/lib/librte_lpm/Makefile
> > > @@ -6,9 +6,10 @@ include $(RTE_SDK)/mk/rte.vars.mk # library name
> > > LIB = librte_lpm.a
> > >
> > > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > > CFLAGS += -O3
> > > CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -LDLIBS += -lrte_eal -lrte_hash
> > > +LDLIBS += -lrte_eal -lrte_hash -lrte_rcu
> > >
> > > EXPORT_MAP := rte_lpm_version.map
> > >
> > > diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build
> > > index a5176d8ae..19a35107f 100644
> > > --- a/lib/librte_lpm/meson.build
> > > +++ b/lib/librte_lpm/meson.build
> > > @@ -2,9 +2,11 @@
> > > # Copyright(c) 2017 Intel Corporation
> > >
> > > version = 2
> > > +allow_experimental_apis = true
> > > sources = files('rte_lpm.c', 'rte_lpm6.c') headers =
> > > files('rte_lpm.h', 'rte_lpm6.h') # since header files have different
> > > names, we can install all vector headers # without worrying about
> > > which architecture we actually need headers +=
> > > files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h') deps +=
> > > ['hash']
> > > +deps += ['rcu']
> > > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c index
> > > 3a929a1b1..ca58d4b35 100644
> > > --- a/lib/librte_lpm/rte_lpm.c
> > > +++ b/lib/librte_lpm/rte_lpm.c
> > > @@ -1,5 +1,6 @@
> > > /* SPDX-License-Identifier: BSD-3-Clause
> > > * Copyright(c) 2010-2014 Intel Corporation
> > > + * Copyright(c) 2019 Arm Limited
> > > */
> > >
> > > #include <string.h>
> > > @@ -381,6 +382,8 @@ rte_lpm_free_v1604(struct rte_lpm *lpm)
> > >
> > > rte_mcfg_tailq_write_unlock();
> > >
> > > + if (lpm->dq)
> > > + rte_rcu_qsbr_dq_delete(lpm->dq);
> > > rte_free(lpm->tbl8);
> > > rte_free(lpm->rules_tbl);
> > > rte_free(lpm);
> > > @@ -390,6 +393,59 @@ BIND_DEFAULT_SYMBOL(rte_lpm_free, _v1604,
> > 16.04);
> > > MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
> > > rte_lpm_free_v1604);
> > >
> > > +struct __rte_lpm_rcu_dq_entry {
> > > + uint32_t tbl8_group_index;
> > > + uint32_t pad;
> > > +};
> > > +
> > > +static void
> > > +__lpm_rcu_qsbr_free_resource(void *p, void *data) {
> > > + struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> > > + struct __rte_lpm_rcu_dq_entry *e =
> > > + (struct __rte_lpm_rcu_dq_entry *)data;
> > > + struct rte_lpm_tbl_entry *tbl8 = (struct rte_lpm_tbl_entry *)p;
> > > +
> > > + /* Set tbl8 group invalid */
> > > + __atomic_store(&tbl8[e->tbl8_group_index], &zero_tbl8_entry,
> > > + __ATOMIC_RELAXED);
> > > +}
> > > +
> > > +/* Associate QSBR variable with an LPM object.
> > > + */
> > > +int
> > > +rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_rcu_qsbr *v) {
> > > + char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
> > > + struct rte_rcu_qsbr_dq_parameters params;
> > > +
> > > + if ((lpm == NULL) || (v == NULL)) {
> > > + rte_errno = EINVAL;
> > > + return 1;
> > > + }
> > > +
> > > + if (lpm->dq) {
> > > + rte_errno = EEXIST;
> > > + return 1;
> > > + }
> > > +
> > > + /* Init QSBR defer queue. */
> > > + snprintf(rcu_dq_name, sizeof(rcu_dq_name), "LPM_RCU_%s", lpm-
> > >name);
> > > + params.name = rcu_dq_name;
> > > + params.size = lpm->number_tbl8s;
> > > + params.esize = sizeof(struct __rte_lpm_rcu_dq_entry);
> > > + params.f = __lpm_rcu_qsbr_free_resource;
> > > + params.p = lpm->tbl8;
> > > + params.v = v;
> > > + lpm->dq = rte_rcu_qsbr_dq_create(¶ms);
> > > + if (lpm->dq == NULL) {
> > > + RTE_LOG(ERR, LPM, "LPM QS defer queue creation failed\n");
> > > + return 1;
> > > + }
> >
> > Few thoughts about that function:
> Few things to keep in mind, the goal of the design is to make it easy for the applications to adopt lock-free algorithms. The reclamation
> process in the writer is a major portion of code one has to write for using lock-free algorithms. The current design is such that the writer
> does not have to change any code or write additional code other than calling 'rte_lpm_rcu_qsbr_add'.
>
> > It names rcu_qsbr_add() but in fact it allocates defer queue for give rcu var.
> > So first thought - is it always necessary?
> This is part of the design. If the application does not want to use this integrated logic then, it does not have to call this API. It can use the
> RCU defer APIs to implement its own logic. But, if I ask the question, does this integrated logic address most of the use cases of the LPM
> library, I think the answer is yes.
>
> > For some use-cases I suppose user might be ok to wait for quiescent state
> > change
> > inside tbl8_free()?
> Yes, that is a possibility (for ex: no frequent route changes). But, I think that is very trivial for the application to implement. Though, the LPM
> library has to separate the 'delete' and 'free' operations.
Exactly.
That's why it is not trivial with the current LPM library.
In fact, to do that himself right now, the user would have to implement and support his own version of the LPM code.
Honestly, I don't understand why you consider it a drawback.
From my perspective only a few things need to be changed:
1. Add 2 parameters to 'rte_lpm_rcu_qsbr_add():
number of elems in defer_queue
reclaim() threshold value.
If the user doesn't want to provide any values, that's fine we can use default ones here
(as you do it right now).
2. Make rte_lpm_rcu_qsbr_add() to return pointer to the defer_queue.
Again if user doesn't want to call reclaim() himself, he can just ignore return value.
These 2 changes will provide us with necessary flexibility that would help to cover more use-cases:
- user can decide how big should be the defer queue
- user can decide when/how he wants to do reclaim()
Konstantin
>Similar operations are provided in rte_hash library. IMO, we should follow
> consistent approach.
>
> > Another thing you do allocate defer queue, but it is internal, so user can't call
> > reclaim() manually, which looks strange.
> > Why not to return defer_queue pointer to the user, so he can call reclaim()
> > himself at appropriate time?
> The intention of the design is to take the complexity away from the user of LPM library. IMO, the current design will address most uses
> cases of LPM library. If we expose the 2 parameters (when to trigger reclamation and how much to reclaim) in the 'rte_lpm_rcu_qsbr_add'
> API, it should provide enough flexibility to the application.
>
> > Third thing - you always allocate defer queue with size equal to number of
> > tbl8.
> > Though I understand it could be up to 16M tbl8 groups inside the LPM.
> > Do we really need defer queue that long?
> No, we do not need it to be this long. It is this long today to avoid returning no-space on the defer queue error.
>
> > Especially considering that current rcu_defer_queue will start reclamation
> > when 1/8 of defer_quueue becomes full and wouldn't reclaim more then
> > 1/16 of it.
> > Probably better to let user to decide himself how long defer_queue he needs
> > for that LPM?
> It makes sense to expose it to the user if the writer-writer concurrency is lock-free (no memory allocation allowed to expand the defer
> queue size when the queue is full). However, LPM is not lock-free on the writer side. If we think the writer could be lock-free in the future, it
> has to be exposed to the user.
>
> >
> > Konstantin
> Pulling questions/comments from other threads:
> Can we leave reclamation to some other house-keeping thread to do (sort of garbage collector). Or such mode is not supported/planned?
>
> [Honnappa] If the reclamation cost is small, the current method provides advantages over having a separate thread to do reclamation. I did
> not plan to provide such an option. But may be it makes sense to keep the options open (especially from ABI perspective). May be we
> should add a flags field which will allow us to implement different methods in the future?
>
> >
> >
> > > +
> > > + return 0;
> > > +}
> > > +
> > > /*
> > > * Adds a rule to the rule table.
> > > *
> > > @@ -679,14 +735,15 @@ tbl8_alloc_v20(struct rte_lpm_tbl_entry_v20
> > > *tbl8) }
> > >
> > > static int32_t
> > > -tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > number_tbl8s)
> > > +__tbl8_alloc_v1604(struct rte_lpm *lpm)
> > > {
> > > uint32_t group_idx; /* tbl8 group index. */
> > > struct rte_lpm_tbl_entry *tbl8_entry;
> > >
> > > /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
> > > - for (group_idx = 0; group_idx < number_tbl8s; group_idx++) {
> > > - tbl8_entry = &tbl8[group_idx *
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > + for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
> > > + tbl8_entry = &lpm->tbl8[group_idx *
> > > +
> > RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > > /* If a free tbl8 group is found clean it and set as VALID. */
> > > if (!tbl8_entry->valid_group) {
> > > struct rte_lpm_tbl_entry new_tbl8_entry = { @@ -
> > 712,6 +769,21 @@
> > > tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
> > > return -ENOSPC;
> > > }
> > >
> > > +static int32_t
> > > +tbl8_alloc_v1604(struct rte_lpm *lpm) {
> > > + int32_t group_idx; /* tbl8 group index. */
> > > +
> > > + group_idx = __tbl8_alloc_v1604(lpm);
> > > + if ((group_idx < 0) && (lpm->dq != NULL)) {
> > > + /* If there are no tbl8 groups try to reclaim some. */
> > > + if (rte_rcu_qsbr_dq_reclaim(lpm->dq) == 0)
> > > + group_idx = __tbl8_alloc_v1604(lpm);
> > > + }
> > > +
> > > + return group_idx;
> > > +}
> > > +
> > > static void
> > > tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t
> > > tbl8_group_start) { @@ -728,13 +800,21 @@ tbl8_free_v20(struct
> > > rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start) }
> > >
> > > static void
> > > -tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > > tbl8_group_start)
> > > +tbl8_free_v1604(struct rte_lpm *lpm, uint32_t tbl8_group_start)
> > > {
> > > - /* Set tbl8 group invalid*/
> > > struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> > > + struct __rte_lpm_rcu_dq_entry e;
> > >
> > > - __atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
> > > - __ATOMIC_RELAXED);
> > > + if (lpm->dq != NULL) {
> > > + e.tbl8_group_index = tbl8_group_start;
> > > + e.pad = 0;
> > > + /* Push into QSBR defer queue. */
> > > + rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&e);
> > > + } else {
> > > + /* Set tbl8 group invalid*/
> > > + __atomic_store(&lpm->tbl8[tbl8_group_start],
> > &zero_tbl8_entry,
> > > + __ATOMIC_RELAXED);
> > > + }
> > > }
> > >
> > > static __rte_noinline int32_t
> > > @@ -1037,7 +1117,7 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> > > uint32_t ip_masked, uint8_t depth,
> > >
> > > if (!lpm->tbl24[tbl24_index].valid) {
> > > /* Search for a free tbl8 group. */
> > > - tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm-
> > >number_tbl8s);
> > > + tbl8_group_index = tbl8_alloc_v1604(lpm);
> > >
> > > /* Check tbl8 allocation was successful. */
> > > if (tbl8_group_index < 0) {
> > > @@ -1083,7 +1163,7 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> > uint32_t ip_masked, uint8_t depth,
> > > } /* If valid entry but not extended calculate the index into Table8. */
> > > else if (lpm->tbl24[tbl24_index].valid_group == 0) {
> > > /* Search for free tbl8 group. */
> > > - tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm-
> > >number_tbl8s);
> > > + tbl8_group_index = tbl8_alloc_v1604(lpm);
> > >
> > > if (tbl8_group_index < 0) {
> > > return tbl8_group_index;
> > > @@ -1818,7 +1898,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm,
> > uint32_t ip_masked,
> > > */
> > > lpm->tbl24[tbl24_index].valid = 0;
> > > __atomic_thread_fence(__ATOMIC_RELEASE);
> > > - tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
> > > + tbl8_free_v1604(lpm, tbl8_group_start);
> > > } else if (tbl8_recycle_index > -1) {
> > > /* Update tbl24 entry. */
> > > struct rte_lpm_tbl_entry new_tbl24_entry = { @@ -1834,7
> > +1914,7 @@
> > > delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
> > > __atomic_store(&lpm->tbl24[tbl24_index],
> > &new_tbl24_entry,
> > > __ATOMIC_RELAXED);
> > > __atomic_thread_fence(__ATOMIC_RELEASE);
> > > - tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
> > > + tbl8_free_v1604(lpm, tbl8_group_start);
> > > }
> > > #undef group_idx
> > > return 0;
> > > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h index
> > > 906ec4483..49c12a68d 100644
> > > --- a/lib/librte_lpm/rte_lpm.h
> > > +++ b/lib/librte_lpm/rte_lpm.h
> > > @@ -1,5 +1,6 @@
> > > /* SPDX-License-Identifier: BSD-3-Clause
> > > * Copyright(c) 2010-2014 Intel Corporation
> > > + * Copyright(c) 2019 Arm Limited
> > > */
> > >
> > > #ifndef _RTE_LPM_H_
> > > @@ -21,6 +22,7 @@
> > > #include <rte_common.h>
> > > #include <rte_vect.h>
> > > #include <rte_compat.h>
> > > +#include <rte_rcu_qsbr.h>
> > >
> > > #ifdef __cplusplus
> > > extern "C" {
> > > @@ -186,6 +188,7 @@ struct rte_lpm {
> > > __rte_cache_aligned; /**< LPM tbl24 table. */
> > > struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
> > > struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> > > + struct rte_rcu_qsbr_dq *dq; /**< RCU QSBR defer queue.*/
> > > };
> > >
> > > /**
> > > @@ -248,6 +251,24 @@ rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
> > void
> > > rte_lpm_free_v1604(struct rte_lpm *lpm);
> > >
> > > +/**
> > > + * Associate RCU QSBR variable with an LPM object.
> > > + *
> > > + * @param lpm
> > > + * the lpm object to add RCU QSBR
> > > + * @param v
> > > + * RCU QSBR variable
> > > + * @return
> > > + * On success - 0
> > > + * On error - 1 with error code set in rte_errno.
> > > + * Possible rte_errno codes are:
> > > + * - EINVAL - invalid pointer
> > > + * - EEXIST - already added QSBR
> > > + * - ENOMEM - memory allocation failure
> > > + */
> > > +__rte_experimental
> > > +int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_rcu_qsbr
> > > +*v);
> > > +
> > > /**
> > > * Add a rule to the LPM table.
> > > *
> > > diff --git a/lib/librte_lpm/rte_lpm_version.map
> > > b/lib/librte_lpm/rte_lpm_version.map
> > > index 90beac853..b353aabd2 100644
> > > --- a/lib/librte_lpm/rte_lpm_version.map
> > > +++ b/lib/librte_lpm/rte_lpm_version.map
> > > @@ -44,3 +44,9 @@ DPDK_17.05 {
> > > rte_lpm6_lookup_bulk_func;
> > >
> > > } DPDK_16.04;
> > > +
> > > +EXPERIMENTAL {
> > > + global:
> > > +
> > > + rte_lpm_rcu_qsbr_add;
> > > +};
> > > --
> > > 2.17.1
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v6 00/13] vhost packed ring performance optimization
2019-10-15 14:30 3% ` [dpdk-dev] [PATCH v5 00/13] " Marvin Liu
@ 2019-10-15 16:07 3% ` Marvin Liu
2019-10-17 7:31 0% ` Maxime Coquelin
2019-10-21 15:40 3% ` [dpdk-dev] [PATCH v7 " Marvin Liu
0 siblings, 2 replies; 200+ results
From: Marvin Liu @ 2019-10-15 16:07 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie, zhihong.wang, stephen, gavin.hu
Cc: dev, Marvin Liu
The packed ring has a more compact ring format and thus can significantly
reduce the number of cache misses, which can lead to better performance.
This has been proven in the virtio user driver: on a standard E5 Xeon CPU,
single-core performance can rise by 12%.
http://mails.dpdk.org/archives/dev/2018-April/095470.html
However, vhost performance with the packed ring decreased. Analysis
showed that most of the extra cost came from calculating each
descriptor flag, which depends on the ring wrap counter. Moreover, both
frontend and backend need to write the same descriptors, which causes
cache contention. In particular, while vhost is running its enqueue
function, the virtio packed ring refill function may write to the same
cache line. This extra cache cost reduces the benefit of the avoided
cache misses.
To optimize vhost packed ring performance, the vhost enqueue and dequeue
functions are split into a fast path and a normal path.
Several methods are used in the fast path:
Handle the descriptors in one cache line as a batch.
Split the loop into more pieces and unroll them.
Pre-check whether the I/O space can be copied directly into the mbuf
space and vice versa.
Pre-check whether the descriptor mapping is successful.
Distinguish the vhost used ring update function between enqueue and
dequeue.
Buffer as many dequeued used descriptors as possible.
Update enqueue used descriptors one cache line at a time.
With all these methods applied, single-core vhost PvP performance with
64B packets on a Xeon 8180 improves by 35%.
v6:
- Fix dequeue zcopy result check
v5:
- Remove disabling of SW prefetch as the performance impact is small
- Change unroll pragma macro format
- Rename shadow counter element names
- Clean dequeue update check condition
- Add inline functions in place of duplicated code
- Unify code style
v4:
- Support meson build
- Remove memory region cache due to no clear performance gain and an ABI break
- Do not assume the ring size is a power of two
v3:
- Check available index overflow
- Remove dequeue remained descs number check
- Remove changes in split ring datapath
- Call memory write barriers once when updating used flags
- Rename some functions and macros
- Code style optimization
v2:
- Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
- Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
- Optimize dequeue used ring update when in_order negotiated
Marvin Liu (13):
vhost: add packed ring indexes increasing function
vhost: add packed ring single enqueue
vhost: try to unroll for each loop
vhost: add packed ring batch enqueue
vhost: add packed ring single dequeue
vhost: add packed ring batch dequeue
vhost: flush enqueue updates by batch
vhost: flush batched enqueue descs directly
vhost: buffer packed ring dequeue updates
vhost: optimize packed ring enqueue
vhost: add packed ring zcopy batch and single dequeue
vhost: optimize packed ring dequeue
vhost: optimize packed ring dequeue when in-order
lib/librte_vhost/Makefile | 18 +
lib/librte_vhost/meson.build | 7 +
lib/librte_vhost/vhost.h | 57 +++
lib/librte_vhost/virtio_net.c | 924 +++++++++++++++++++++++++++-------
4 files changed, 812 insertions(+), 194 deletions(-)
--
2.17.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v5 00/13] vhost packed ring performance optimization
@ 2019-10-15 14:30 3% ` Marvin Liu
2019-10-15 16:07 3% ` [dpdk-dev] [PATCH v6 " Marvin Liu
0 siblings, 1 reply; 200+ results
From: Marvin Liu @ 2019-10-15 14:30 UTC (permalink / raw)
To: maxime.coquelin, tiwei.bie, zhihong.wang, stephen, gavin.hu
Cc: dev, Marvin Liu
The packed ring has a more compact ring format and thus can significantly
reduce the number of cache misses, which can lead to better performance.
This has been proven in the virtio user driver: on a standard E5 Xeon CPU,
single-core performance can rise by 12%.
http://mails.dpdk.org/archives/dev/2018-April/095470.html
However, vhost performance with the packed ring decreased. Analysis
showed that most of the extra cost came from calculating each
descriptor flag, which depends on the ring wrap counter. Moreover, both
frontend and backend need to write the same descriptors, which causes
cache contention. In particular, while vhost is running its enqueue
function, the virtio packed ring refill function may write to the same
cache line. This extra cache cost reduces the benefit of the avoided
cache misses.
To optimize vhost packed ring performance, the vhost enqueue and dequeue
functions are split into a fast path and a normal path.
Several methods are used in the fast path:
Handle the descriptors in one cache line as a batch.
Split the loop into more pieces and unroll them.
Pre-check whether the I/O space can be copied directly into the mbuf
space and vice versa.
Pre-check whether the descriptor mapping is successful.
Distinguish the vhost used ring update function between enqueue and
dequeue.
Buffer as many dequeued used descriptors as possible.
Update enqueue used descriptors one cache line at a time.
With all these methods applied, single-core vhost PvP performance with
64B packets on a Xeon 8180 improves by 35%.
v5:
- Remove disabling of SW prefetch as the performance impact is small
- Change unroll pragma macro format
- Rename shadow counter element names
- Clean dequeue update check condition
- Add inline functions in place of duplicated code
- Unify code style
v4:
- Support meson build
- Remove memory region cache due to no clear performance gain and an ABI break
- Do not assume the ring size is a power of two
v3:
- Check available index overflow
- Remove dequeue remained descs number check
- Remove changes in split ring datapath
- Call memory write barriers once when updating used flags
- Rename some functions and macros
- Code style optimization
v2:
- Utilize compiler's pragma to unroll loop, distinguish clang/icc/gcc
- Buffered dequeue used desc number changed to (RING_SZ - PKT_BURST)
- Optimize dequeue used ring update when in_order negotiated
Marvin Liu (13):
vhost: add packed ring indexes increasing function
vhost: add packed ring single enqueue
vhost: try to unroll for each loop
vhost: add packed ring batch enqueue
vhost: add packed ring single dequeue
vhost: add packed ring batch dequeue
vhost: flush enqueue updates by batch
vhost: flush batched enqueue descs flags directly
vhost: buffer packed ring dequeue updates
vhost: optimize packed ring enqueue
vhost: add packed ring zcopy batch and single dequeue
vhost: optimize packed ring dequeue
vhost: optimize packed ring dequeue when in-order
lib/librte_vhost/Makefile | 18 +
lib/librte_vhost/meson.build | 7 +
lib/librte_vhost/vhost.h | 57 +++
lib/librte_vhost/virtio_net.c | 927 +++++++++++++++++++++++++++-------
4 files changed, 814 insertions(+), 195 deletions(-)
--
2.17.1
^ permalink raw reply [relevance 3%]
* [dpdk-dev] [PATCH v5 15/15] sched: remove redundant code
@ 2019-10-14 17:23 4% ` Jasvinder Singh
1 sibling, 0 replies; 200+ results
From: Jasvinder Singh @ 2019-10-14 17:23 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, Lukasz Krakowiak
Remove redundant fields from the port-level data structures
and update the release notes.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
---
doc/guides/rel_notes/release_19_11.rst | 7 ++++-
lib/librte_sched/rte_sched.c | 42 +-------------------------
lib/librte_sched/rte_sched.h | 22 --------------
3 files changed, 7 insertions(+), 64 deletions(-)
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 23ceb8f67..87812b32c 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -172,6 +172,11 @@ API Changes
* ethdev: changed ``rte_eth_dev_owner_delete`` return value from ``void`` to
``int`` to provide a way to report various error conditions.
+* sched: The pipe nodes configuration parameters such as number of pipes,
+ pipe queue sizes, pipe profiles, etc., are moved from port level structure
+ to subport level. This allows different subports of the same port to
+ have different configuration for the pipe nodes.
+
ABI Changes
-----------
@@ -259,7 +264,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_rcu.so.1
librte_reorder.so.1
librte_ring.so.2
- librte_sched.so.3
+ + librte_sched.so.4
librte_security.so.2
librte_stack.so.1
librte_table.so.3
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 1faa580d0..710ecf65a 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -216,13 +216,6 @@ struct rte_sched_port {
uint32_t mtu;
uint32_t frame_overhead;
int socket;
- uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
- uint32_t n_pipe_profiles;
- uint32_t n_max_pipe_profiles;
- uint32_t pipe_tc_be_rate_max;
-#ifdef RTE_SCHED_RED
- struct rte_red_config red_config[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS];
-#endif
/* Timing */
uint64_t time_cpu_cycles; /* Current CPU time measured in CPU cyles */
@@ -230,48 +223,15 @@ struct rte_sched_port {
uint64_t time; /* Current NIC TX time measured in bytes */
struct rte_reciprocal inv_cycles_per_byte; /* CPU cycles per byte */
- /* Scheduling loop detection */
- uint32_t pipe_loop;
- uint32_t pipe_exhaustion;
-
- /* Bitmap */
- struct rte_bitmap *bmp;
- uint32_t grinder_base_bmp_pos[RTE_SCHED_PORT_N_GRINDERS] __rte_aligned_16;
-
/* Grinders */
- struct rte_sched_grinder grinder[RTE_SCHED_PORT_N_GRINDERS];
- uint32_t busy_grinders;
struct rte_mbuf **pkts_out;
uint32_t n_pkts_out;
uint32_t subport_id;
- /* Queue base calculation */
- uint32_t qsize_add[RTE_SCHED_QUEUES_PER_PIPE];
- uint32_t qsize_sum;
-
/* Large data structures */
- struct rte_sched_subport *subports[0];
- struct rte_sched_subport *subport;
- struct rte_sched_pipe *pipe;
- struct rte_sched_queue *queue;
- struct rte_sched_queue_extra *queue_extra;
- struct rte_sched_pipe_profile *pipe_profiles;
- uint8_t *bmp_array;
- struct rte_mbuf **queue_array;
- uint8_t memory[0] __rte_cache_aligned;
+ struct rte_sched_subport *subports[0] __rte_cache_aligned;
} __rte_cache_aligned;
-enum rte_sched_port_array {
- e_RTE_SCHED_PORT_ARRAY_SUBPORT = 0,
- e_RTE_SCHED_PORT_ARRAY_PIPE,
- e_RTE_SCHED_PORT_ARRAY_QUEUE,
- e_RTE_SCHED_PORT_ARRAY_QUEUE_EXTRA,
- e_RTE_SCHED_PORT_ARRAY_PIPE_PROFILES,
- e_RTE_SCHED_PORT_ARRAY_BMP_ARRAY,
- e_RTE_SCHED_PORT_ARRAY_QUEUE_ARRAY,
- e_RTE_SCHED_PORT_ARRAY_TOTAL,
-};
-
enum rte_sched_subport_array {
e_RTE_SCHED_SUBPORT_ARRAY_PIPE = 0,
e_RTE_SCHED_SUBPORT_ARRAY_QUEUE,
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 40f02f124..c82c23c14 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -260,28 +260,6 @@ struct rte_sched_port_params {
* the subports of the same port.
*/
uint32_t n_pipes_per_subport;
-
- /** Packet queue size for each traffic class.
- * All the pipes within the same subport share the similar
- * configuration for the queues.
- */
- uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
-
- /** Pipe profile table.
- * Every pipe is configured using one of the profiles from this table.
- */
- struct rte_sched_pipe_params *pipe_profiles;
-
- /** Profiles in the pipe profile table */
- uint32_t n_pipe_profiles;
-
- /** Max profiles allowed in the pipe profile table */
- uint32_t n_max_pipe_profiles;
-
-#ifdef RTE_SCHED_RED
- /** RED parameters */
- struct rte_red_params red_params[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE][RTE_COLORS];
-#endif
};
/*
--
2.21.0
^ permalink raw reply [relevance 4%]
* Re: [dpdk-dev] [PATCH v9 1/3] eal/arm64: add 128-bit atomic compare exchange
@ 2019-10-14 15:43 0% ` David Marchand
2019-10-15 11:38 2% ` [dpdk-dev] [PATCH v10 " Phil Yang
1 sibling, 1 reply; 200+ results
From: David Marchand @ 2019-10-14 15:43 UTC (permalink / raw)
To: Phil Yang
Cc: Thomas Monjalon, Jerin Jacob Kollanukkaran, Gage Eads, dev,
Hemant Agrawal, Honnappa Nagarahalli, Gavin Hu, nd
On Wed, Aug 14, 2019 at 10:29 AM Phil Yang <phil.yang@arm.com> wrote:
>
> Add 128-bit atomic compare exchange on aarch64.
A bit short, given the complexity of the code and the additional
RTE_ARM_FEATURE_ATOMICS config flag.
Comments inline.
>
> Suggested-by: Jerin Jacob <jerinj@marvell.com>
> Signed-off-by: Phil Yang <phil.yang@arm.com>
> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Tested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> Acked-by: Jerin Jacob <jerinj@marvell.com>
> ---
>
> v9:
> Updated 19.11 release note.
>
> v8:
> Fixed "WARNING:LONG_LINE: line over 80 characters" warnings with latest kernel
> checkpatch.pl
>
> v7:
> 1. Adjust code comment.
>
> v6:
> 1. Put the RTE_ARM_FEATURE_ATOMICS flag into EAL group. (Jerin Jacob)
> 2. Keep rte_stack_lf_stubs.h doing nothing. (Gage Eads)
> 3. Fixed 32 bit build issue.
>
> v5:
> 1. Enable RTE_ARM_FEATURE_ATOMICS on octeontx2 by default. (Jerin Jacob)
> 2. Record the reason for introducing "rte_stack_lf_stubs.h" in the git
> commit.
> (Jerin Jacob)
> 3. Fixed a conditional MACRO error in rte_atomic128_cmp_exchange. (Jerin
> Jacob)
>
> v4:
> 1. Add RTE_ARM_FEATURE_ATOMICS flag to support LSE CASP instructions.
> (Jerin Jacob)
> 2. Fix possible arm64 ABI break by making casp_op_name noinline. (Jerin
> Jacob)
> 3. Add rte_stack_lf_stubs.h to reduce the ifdef clutter. (Gage
> Eads/Jerin Jacob)
>
> v3:
> 1. Avoid duplicating code with a macro. (Jerin Jacob)
> 2. Map invalid memory orders to the strongest barrier. (Jerin Jacob)
> 3. Update doc/guides/prog_guide/env_abstraction_layer.rst. (Gage Eads)
> 4. Fix 32-bit x86 builds issue. (Gage Eads)
> 5. Correct documentation issues in UT. (Gage Eads)
>
> v2:
> Initial version.
>
> config/arm/meson.build | 2 +
> config/common_base | 3 +
> config/defconfig_arm64-octeontx2-linuxapp-gcc | 1 +
> config/defconfig_arm64-thunderx2-linuxapp-gcc | 1 +
> .../common/include/arch/arm/rte_atomic_64.h | 163 +++++++++++++++++++++
> .../common/include/arch/x86/rte_atomic_64.h | 12 --
> lib/librte_eal/common/include/generic/rte_atomic.h | 17 ++-
> 7 files changed, 186 insertions(+), 13 deletions(-)
>
> diff --git a/config/arm/meson.build b/config/arm/meson.build
> index 979018e..9f28271 100644
> --- a/config/arm/meson.build
> +++ b/config/arm/meson.build
> @@ -71,11 +71,13 @@ flags_thunderx2_extra = [
> ['RTE_CACHE_LINE_SIZE', 64],
> ['RTE_MAX_NUMA_NODES', 2],
> ['RTE_MAX_LCORE', 256],
> + ['RTE_ARM_FEATURE_ATOMICS', true],
> ['RTE_USE_C11_MEM_MODEL', true]]
> flags_octeontx2_extra = [
> ['RTE_MACHINE', '"octeontx2"'],
> ['RTE_MAX_NUMA_NODES', 1],
> ['RTE_MAX_LCORE', 24],
> + ['RTE_ARM_FEATURE_ATOMICS', true],
> ['RTE_EAL_IGB_UIO', false],
> ['RTE_USE_C11_MEM_MODEL', true]]
>
> diff --git a/config/common_base b/config/common_base
> index 8ef75c2..2054480 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -82,6 +82,9 @@ CONFIG_RTE_MAX_LCORE=128
> CONFIG_RTE_MAX_NUMA_NODES=8
> CONFIG_RTE_MAX_HEAPS=32
> CONFIG_RTE_MAX_MEMSEG_LISTS=64
> +
> +# Use ARM LSE ATOMIC instructions
> +CONFIG_RTE_ARM_FEATURE_ATOMICS=n
> # each memseg list will be limited to either RTE_MAX_MEMSEG_PER_LIST pages
> # or RTE_MAX_MEM_MB_PER_LIST megabytes worth of memory, whichever is smaller
> CONFIG_RTE_MAX_MEMSEG_PER_LIST=8192
> diff --git a/config/defconfig_arm64-octeontx2-linuxapp-gcc b/config/defconfig_arm64-octeontx2-linuxapp-gcc
> index f20da24..7687dbe 100644
> --- a/config/defconfig_arm64-octeontx2-linuxapp-gcc
> +++ b/config/defconfig_arm64-octeontx2-linuxapp-gcc
> @@ -9,6 +9,7 @@ CONFIG_RTE_MACHINE="octeontx2"
> CONFIG_RTE_CACHE_LINE_SIZE=128
> CONFIG_RTE_MAX_NUMA_NODES=1
> CONFIG_RTE_MAX_LCORE=24
> +CONFIG_RTE_ARM_FEATURE_ATOMICS=y
>
> # Doesn't support NUMA
> CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
> diff --git a/config/defconfig_arm64-thunderx2-linuxapp-gcc b/config/defconfig_arm64-thunderx2-linuxapp-gcc
> index cc5c64b..af4a89c 100644
> --- a/config/defconfig_arm64-thunderx2-linuxapp-gcc
> +++ b/config/defconfig_arm64-thunderx2-linuxapp-gcc
> @@ -9,3 +9,4 @@ CONFIG_RTE_MACHINE="thunderx2"
> CONFIG_RTE_CACHE_LINE_SIZE=64
> CONFIG_RTE_MAX_NUMA_NODES=2
> CONFIG_RTE_MAX_LCORE=256
> +CONFIG_RTE_ARM_FEATURE_ATOMICS=y
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
> index 97060e4..14d869b 100644
> --- a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
> +++ b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
> @@ -1,5 +1,6 @@
> /* SPDX-License-Identifier: BSD-3-Clause
> * Copyright(c) 2015 Cavium, Inc
> + * Copyright(c) 2019 Arm Limited
> */
>
> #ifndef _RTE_ATOMIC_ARM64_H_
> @@ -14,6 +15,9 @@ extern "C" {
> #endif
>
> #include "generic/rte_atomic.h"
> +#include <rte_branch_prediction.h>
> +#include <rte_compat.h>
> +#include <rte_debug.h>
>
> #define dsb(opt) asm volatile("dsb " #opt : : : "memory")
> #define dmb(opt) asm volatile("dmb " #opt : : : "memory")
> @@ -40,6 +44,165 @@ extern "C" {
>
> #define rte_cio_rmb() dmb(oshld)
>
> +/*------------------------ 128 bit atomic operations -------------------------*/
> +
> +#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
> +#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
> + (mo) == __ATOMIC_SEQ_CST)
> +
> +#define __MO_LOAD(mo) (__HAS_ACQ((mo)) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED)
> +#define __MO_STORE(mo) (__HAS_RLS((mo)) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED)
The first four macros only make sense when LSE is not available (see below [1]).
Besides, they are used only once; why not use those
conditions directly where needed?
> +
> +#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
> +#define __ATOMIC128_CAS_OP(cas_op_name, op_string) \
> +static __rte_noinline rte_int128_t \
> +cas_op_name(rte_int128_t *dst, rte_int128_t old, \
> + rte_int128_t updated) \
> +{ \
> + /* caspX instructions register pair must start from even-numbered
> + * register at operand 1.
> + * So, specify registers for local variables here.
> + */ \
> + register uint64_t x0 __asm("x0") = (uint64_t)old.val[0]; \
> + register uint64_t x1 __asm("x1") = (uint64_t)old.val[1]; \
> + register uint64_t x2 __asm("x2") = (uint64_t)updated.val[0]; \
> + register uint64_t x3 __asm("x3") = (uint64_t)updated.val[1]; \
> + asm volatile( \
> + op_string " %[old0], %[old1], %[upd0], %[upd1], [%[dst]]" \
> + : [old0] "+r" (x0), \
> + [old1] "+r" (x1) \
> + : [upd0] "r" (x2), \
> + [upd1] "r" (x3), \
> + [dst] "r" (dst) \
> + : "memory"); \
> + old.val[0] = x0; \
> + old.val[1] = x1; \
> + return old; \
> +}
> +
> +__ATOMIC128_CAS_OP(__rte_cas_relaxed, "casp")
> +__ATOMIC128_CAS_OP(__rte_cas_acquire, "caspa")
> +__ATOMIC128_CAS_OP(__rte_cas_release, "caspl")
> +__ATOMIC128_CAS_OP(__rte_cas_acq_rel, "caspal")
If LSE is available, we expose __rte_cas_XX (explicitly) *non*
inlined functions, while without LSE, we expose inlined __rte_ldr_XX
and __rte_stx_XX functions.
So we have a first disparity with non-inlined vs inlined functions
depending on a #ifdef.
Then, we have a second disparity with two sets of "apis" depending on
this #ifdef.
And we expose those sets with a rte_ prefix, meaning people will try
to use them, but those are not part of a public api.
Can't we do without them? (see below [2] for a proposal with ldx/stx;
cas should be the same)
> +#else
> +#define __ATOMIC128_LDX_OP(ldx_op_name, op_string) \
> +static inline rte_int128_t \
> +ldx_op_name(const rte_int128_t *src) \
> +{ \
> + rte_int128_t ret; \
> + asm volatile( \
> + op_string " %0, %1, %2" \
> + : "=&r" (ret.val[0]), \
> + "=&r" (ret.val[1]) \
> + : "Q" (src->val[0]) \
> + : "memory"); \
> + return ret; \
> +}
> +
> +__ATOMIC128_LDX_OP(__rte_ldx_relaxed, "ldxp")
> +__ATOMIC128_LDX_OP(__rte_ldx_acquire, "ldaxp")
> +
> +#define __ATOMIC128_STX_OP(stx_op_name, op_string) \
> +static inline uint32_t \
> +stx_op_name(rte_int128_t *dst, const rte_int128_t src) \
> +{ \
> + uint32_t ret; \
> + asm volatile( \
> + op_string " %w0, %1, %2, %3" \
> + : "=&r" (ret) \
> + : "r" (src.val[0]), \
> + "r" (src.val[1]), \
> + "Q" (dst->val[0]) \
> + : "memory"); \
> + /* Return 0 on success, 1 on failure */ \
> + return ret; \
> +}
> +
> +__ATOMIC128_STX_OP(__rte_stx_relaxed, "stxp")
> +__ATOMIC128_STX_OP(__rte_stx_release, "stlxp")
> +#endif
> +
> +static inline int __rte_experimental
The __rte_experimental tag comes first.
> +rte_atomic128_cmp_exchange(rte_int128_t *dst,
> + rte_int128_t *exp,
> + const rte_int128_t *src,
> + unsigned int weak,
> + int success,
> + int failure)
> +{
> + /* Always do strong CAS */
> + RTE_SET_USED(weak);
> + /* Ignore memory ordering for failure, memory order for
> + * success must be stronger or equal
> + */
> + RTE_SET_USED(failure);
> + /* Find invalid memory order */
> + RTE_ASSERT(success == __ATOMIC_RELAXED
> + || success == __ATOMIC_ACQUIRE
> + || success == __ATOMIC_RELEASE
> + || success == __ATOMIC_ACQ_REL
> + || success == __ATOMIC_SEQ_CST);
> +
> +#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
> + rte_int128_t expected = *exp;
> + rte_int128_t desired = *src;
> + rte_int128_t old;
> +
> + if (success == __ATOMIC_RELAXED)
> + old = __rte_cas_relaxed(dst, expected, desired);
> + else if (success == __ATOMIC_ACQUIRE)
> + old = __rte_cas_acquire(dst, expected, desired);
> + else if (success == __ATOMIC_RELEASE)
> + old = __rte_cas_release(dst, expected, desired);
> + else
> + old = __rte_cas_acq_rel(dst, expected, desired);
> +#else
1: the four first macros (on the memory ordering constraints) can be
moved here then undef'd once unused.
Or you can just do without them.
> + int ldx_mo = __MO_LOAD(success);
> + int stx_mo = __MO_STORE(success);
> + uint32_t ret = 1;
> + register rte_int128_t expected = *exp;
> + register rte_int128_t desired = *src;
> + register rte_int128_t old;
> +
> + /* ldx128 can not guarantee atomic,
> + * Must write back src or old to verify atomicity of ldx128;
> + */
> + do {
> + if (ldx_mo == __ATOMIC_RELAXED)
> + old = __rte_ldx_relaxed(dst);
> + else
> + old = __rte_ldx_acquire(dst);
2: how about using a simple macro that gets passed the op string?
Something like (untested):
#define __READ_128(op_string, src, dst) \
asm volatile( \
op_string " %0, %1, %2" \
: "=&r" (dst.val[0]), \
"=&r" (dst.val[1]) \
: "Q" (src->val[0]) \
: "memory")
Then used like this:
if (ldx_mo == __ATOMIC_RELAXED)
__READ_128("ldxp", dst, old);
else
__READ_128("ldaxp", dst, old);
#undef __READ_128
> +
> + if (likely(old.int128 == expected.int128)) {
> + if (stx_mo == __ATOMIC_RELAXED)
> + ret = __rte_stx_relaxed(dst, desired);
> + else
> + ret = __rte_stx_release(dst, desired);
> + } else {
> + /* In the failure case (since 'weak' is ignored and only
> + * weak == 0 is implemented), expected should contain
> + * the atomically read value of dst. This means, 'old'
> + * needs to be stored back to ensure it was read
> + * atomically.
> + */
> + if (stx_mo == __ATOMIC_RELAXED)
> + ret = __rte_stx_relaxed(dst, old);
> + else
> + ret = __rte_stx_release(dst, old);
And:
#define __STORE_128(op_string, dst, val, ret) \
asm volatile( \
op_string " %w0, %1, %2, %3" \
: "=&r" (ret) \
: "r" (val.val[0]), \
"r" (val.val[1]), \
"Q" (dst->val[0]) \
: "memory")
Used like this:
if (likely(old.int128 == expected.int128)) {
if (stx_mo == __ATOMIC_RELAXED)
__STORE_128("stxp", dst, desired, ret);
else
__STORE_128("stlxp", dst, desired, ret);
} else {
/* In the failure case (since 'weak' is ignored and only
* weak == 0 is implemented), expected should contain
* the atomically read value of dst. This means, 'old'
* needs to be stored back to ensure it was read
* atomically.
*/
if (stx_mo == __ATOMIC_RELAXED)
__STORE_128("stxp", dst, old, ret);
else
__STORE_128("stlxp", dst, old, ret);
}
#undef __STORE_128
> + }
> + } while (unlikely(ret));
> +#endif
> +
> + /* Unconditionally updating expected removes
> + * an 'if' statement.
> + * expected should already be in register if
> + * not in the cache.
> + */
> + *exp = old;
> +
> + return (old.int128 == expected.int128);
> +}
> +
> #ifdef __cplusplus
> }
> #endif
> diff --git a/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h b/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
> index 1335d92..cfe7067 100644
> --- a/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
> +++ b/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
> @@ -183,18 +183,6 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
>
> /*------------------------ 128 bit atomic operations -------------------------*/
>
> -/**
> - * 128-bit integer structure.
> - */
> -RTE_STD_C11
> -typedef struct {
> - RTE_STD_C11
> - union {
> - uint64_t val[2];
> - __extension__ __int128 int128;
> - };
> -} __rte_aligned(16) rte_int128_t;
> -
> __rte_experimental
> static inline int
> rte_atomic128_cmp_exchange(rte_int128_t *dst,
> diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
> index 24ff7dc..e6ab15a 100644
> --- a/lib/librte_eal/common/include/generic/rte_atomic.h
> +++ b/lib/librte_eal/common/include/generic/rte_atomic.h
> @@ -1081,6 +1081,20 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
>
> /*------------------------ 128 bit atomic operations -------------------------*/
>
> +/**
> + * 128-bit integer structure.
> + */
> +RTE_STD_C11
> +typedef struct {
> + RTE_STD_C11
> + union {
> + uint64_t val[2];
> +#ifdef RTE_ARCH_64
> + __extension__ __int128 int128;
> +#endif
You hid this field for x86.
What is the reason?
> + };
> +} __rte_aligned(16) rte_int128_t;
> +
> #ifdef __DOXYGEN__
>
> /**
> @@ -1093,7 +1107,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
> * *exp = *dst
> * @endcode
> *
> - * @note This function is currently only available for the x86-64 platform.
> + * @note This function is currently available for the x86-64 and aarch64
> + * platforms.
> *
> * @note The success and failure arguments must be one of the __ATOMIC_* values
> * defined in the C++11 standard. For details on their behavior, refer to the
> --
> 2.7.4
>
--
David Marchand
^ permalink raw reply [relevance 0%]
* [dpdk-dev] [PATCH v4 17/17] sched: modify internal structs and functions for 64 bit values
@ 2019-10-14 12:09 2% ` Jasvinder Singh
1 sibling, 0 replies; 200+ results
From: Jasvinder Singh @ 2019-10-14 12:09 UTC (permalink / raw)
To: dev; +Cc: cristian.dumitrescu, Lukasz Krakowiak
Modify internal structures and functions to support 64-bit
values for rate and stats parameters.
The release notes are updated and the deprecation notice is removed.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
---
doc/guides/rel_notes/deprecation.rst | 6 -
doc/guides/rel_notes/release_19_11.rst | 7 +-
lib/librte_sched/rte_approx.c | 57 ++++---
lib/librte_sched/rte_approx.h | 3 +-
lib/librte_sched/rte_sched.c | 211 +++++++++++++------------
lib/librte_sched/rte_sched_common.h | 12 +-
6 files changed, 162 insertions(+), 134 deletions(-)
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index 237813b64..91916d4ac 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -125,12 +125,6 @@ Deprecation Notices
to one it means it represents IV, when is set to zero it means J0 is used
directly, in this case 16 bytes of J0 need to be passed.
-* sched: To allow more traffic classes, flexible mapping of pipe queues to
- traffic classes, and subport level configuration of pipes and queues
- changes will be made to macros, data structures and API functions defined
- in "rte_sched.h". These changes are aligned to improvements suggested in the
- RFC https://mails.dpdk.org/archives/dev/2018-November/120035.html.
-
* metrics: The function ``rte_metrics_init`` will have a non-void return
in order to notify errors instead of calling ``rte_exit``.
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 23ceb8f67..87812b32c 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -172,6 +172,11 @@ API Changes
* ethdev: changed ``rte_eth_dev_owner_delete`` return value from ``void`` to
``int`` to provide a way to report various error conditions.
+* sched: The pipe nodes configuration parameters such as number of pipes,
+ pipe queue sizes, pipe profiles, etc., are moved from port level structure
+ to subport level. This allows different subports of the same port to
+ have different configuration for the pipe nodes.
+
ABI Changes
-----------
@@ -259,7 +264,7 @@ The libraries prepended with a plus sign were incremented in this version.
librte_rcu.so.1
librte_reorder.so.1
librte_ring.so.2
- librte_sched.so.3
+ + librte_sched.so.4
librte_security.so.2
librte_stack.so.1
librte_table.so.3
diff --git a/lib/librte_sched/rte_approx.c b/lib/librte_sched/rte_approx.c
index 30620b83d..4883d3969 100644
--- a/lib/librte_sched/rte_approx.c
+++ b/lib/librte_sched/rte_approx.c
@@ -18,22 +18,23 @@
*/
/* fraction comparison: compare (a/b) and (c/d) */
-static inline uint32_t
-less(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
+static inline sched_counter_t
+less(sched_counter_t a, sched_counter_t b, sched_counter_t c, sched_counter_t d)
{
return a*d < b*c;
}
-static inline uint32_t
-less_or_equal(uint32_t a, uint32_t b, uint32_t c, uint32_t d)
+static inline sched_counter_t
+less_or_equal(sched_counter_t a, sched_counter_t b, sched_counter_t c,
+ sched_counter_t d)
{
return a*d <= b*c;
}
/* check whether a/b is a valid approximation */
-static inline uint32_t
-matches(uint32_t a, uint32_t b,
- uint32_t alpha_num, uint32_t d_num, uint32_t denum)
+static inline sched_counter_t
+matches(sched_counter_t a, sched_counter_t b,
+ sched_counter_t alpha_num, sched_counter_t d_num, sched_counter_t denum)
{
if (less_or_equal(a, b, alpha_num - d_num, denum))
return 0;
@@ -45,33 +46,39 @@ matches(uint32_t a, uint32_t b,
}
static inline void
-find_exact_solution_left(uint32_t p_a, uint32_t q_a, uint32_t p_b, uint32_t q_b,
- uint32_t alpha_num, uint32_t d_num, uint32_t denum, uint32_t *p, uint32_t *q)
+find_exact_solution_left(sched_counter_t p_a, sched_counter_t q_a,
+ sched_counter_t p_b, sched_counter_t q_b, sched_counter_t alpha_num,
+ sched_counter_t d_num, sched_counter_t denum, sched_counter_t *p,
+ sched_counter_t *q)
{
- uint32_t k_num = denum * p_b - (alpha_num + d_num) * q_b;
- uint32_t k_denum = (alpha_num + d_num) * q_a - denum * p_a;
- uint32_t k = (k_num / k_denum) + 1;
+ sched_counter_t k_num = denum * p_b - (alpha_num + d_num) * q_b;
+ sched_counter_t k_denum = (alpha_num + d_num) * q_a - denum * p_a;
+ sched_counter_t k = (k_num / k_denum) + 1;
*p = p_b + k * p_a;
*q = q_b + k * q_a;
}
static inline void
-find_exact_solution_right(uint32_t p_a, uint32_t q_a, uint32_t p_b, uint32_t q_b,
- uint32_t alpha_num, uint32_t d_num, uint32_t denum, uint32_t *p, uint32_t *q)
+find_exact_solution_right(sched_counter_t p_a, sched_counter_t q_a,
+ sched_counter_t p_b, sched_counter_t q_b, sched_counter_t alpha_num,
+ sched_counter_t d_num, sched_counter_t denum, sched_counter_t *p,
+ sched_counter_t *q)
{
- uint32_t k_num = - denum * p_b + (alpha_num - d_num) * q_b;
- uint32_t k_denum = - (alpha_num - d_num) * q_a + denum * p_a;
- uint32_t k = (k_num / k_denum) + 1;
+ sched_counter_t k_num = -denum * p_b + (alpha_num - d_num) * q_b;
+ sched_counter_t k_denum = -(alpha_num - d_num) * q_a + denum * p_a;
+ sched_counter_t k = (k_num / k_denum) + 1;
*p = p_b + k * p_a;
*q = q_b + k * q_a;
}
static int
-find_best_rational_approximation(uint32_t alpha_num, uint32_t d_num, uint32_t denum, uint32_t *p, uint32_t *q)
+find_best_rational_approximation(sched_counter_t alpha_num,
+ sched_counter_t d_num, sched_counter_t denum, sched_counter_t *p,
+ sched_counter_t *q)
{
- uint32_t p_a, q_a, p_b, q_b;
+ sched_counter_t p_a, q_a, p_b, q_b;
/* check assumptions on the inputs */
if (!((0 < d_num) && (d_num < alpha_num) && (alpha_num < denum) && (d_num + alpha_num < denum))) {
@@ -85,8 +92,8 @@ find_best_rational_approximation(uint32_t alpha_num, uint32_t d_num, uint32_t de
q_b = 1;
while (1) {
- uint32_t new_p_a, new_q_a, new_p_b, new_q_b;
- uint32_t x_num, x_denum, x;
+ sched_counter_t new_p_a, new_q_a, new_p_b, new_q_b;
+ sched_counter_t x_num, x_denum, x;
int aa, bb;
/* compute the number of steps to the left */
@@ -139,9 +146,9 @@ find_best_rational_approximation(uint32_t alpha_num, uint32_t d_num, uint32_t de
}
}
-int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q)
+int rte_approx(double alpha, double d, sched_counter_t *p, sched_counter_t *q)
{
- uint32_t alpha_num, d_num, denum;
+ sched_counter_t alpha_num, d_num, denum;
/* Check input arguments */
if (!((0.0 < d) && (d < alpha) && (alpha < 1.0))) {
@@ -159,8 +166,8 @@ int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q)
d *= 10;
denum *= 10;
}
- alpha_num = (uint32_t) alpha;
- d_num = (uint32_t) d;
+ alpha_num = (sched_counter_t) alpha;
+ d_num = (sched_counter_t) d;
/* Perform approximation */
return find_best_rational_approximation(alpha_num, d_num, denum, p, q);
diff --git a/lib/librte_sched/rte_approx.h b/lib/librte_sched/rte_approx.h
index 0244d98f1..e591e122d 100644
--- a/lib/librte_sched/rte_approx.h
+++ b/lib/librte_sched/rte_approx.h
@@ -20,6 +20,7 @@ extern "C" {
***/
#include <stdint.h>
+#include "rte_sched_common.h"
/**
* Find best rational approximation
@@ -37,7 +38,7 @@ extern "C" {
* @return
* 0 upon success, error code otherwise
*/
-int rte_approx(double alpha, double d, uint32_t *p, uint32_t *q);
+int rte_approx(double alpha, double d, sched_counter_t *p, sched_counter_t *q);
#ifdef __cplusplus
}
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 710ecf65a..11d1febe2 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -49,13 +49,13 @@
struct rte_sched_pipe_profile {
/* Token bucket (TB) */
- uint32_t tb_period;
- uint32_t tb_credits_per_period;
- uint32_t tb_size;
+ sched_counter_t tb_period;
+ sched_counter_t tb_credits_per_period;
+ sched_counter_t tb_size;
/* Pipe traffic classes */
- uint32_t tc_period;
- uint32_t tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+ sched_counter_t tc_period;
+ sched_counter_t tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
uint8_t tc_ov_weight;
/* Pipe best-effort traffic class queues */
@@ -65,20 +65,20 @@ struct rte_sched_pipe_profile {
struct rte_sched_pipe {
/* Token bucket (TB) */
uint64_t tb_time; /* time of last update */
- uint32_t tb_credits;
+ sched_counter_t tb_credits;
/* Pipe profile and flags */
uint32_t profile;
/* Traffic classes (TCs) */
uint64_t tc_time; /* time of next update */
- uint32_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+ sched_counter_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
/* Weighted Round Robin (WRR) */
uint8_t wrr_tokens[RTE_SCHED_BE_QUEUES_PER_PIPE];
/* TC oversubscription */
- uint32_t tc_ov_credits;
+ sched_counter_t tc_ov_credits;
uint8_t tc_ov_period_id;
} __rte_cache_aligned;
@@ -141,28 +141,28 @@ struct rte_sched_grinder {
struct rte_sched_subport {
/* Token bucket (TB) */
uint64_t tb_time; /* time of last update */
- uint32_t tb_period;
- uint32_t tb_credits_per_period;
- uint32_t tb_size;
- uint32_t tb_credits;
+ sched_counter_t tb_period;
+ sched_counter_t tb_credits_per_period;
+ sched_counter_t tb_size;
+ sched_counter_t tb_credits;
/* Traffic classes (TCs) */
uint64_t tc_time; /* time of next update */
- uint32_t tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
- uint32_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
- uint32_t tc_period;
+ sched_counter_t tc_credits_per_period[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+ sched_counter_t tc_credits[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+ sched_counter_t tc_period;
/* TC oversubscription */
- uint32_t tc_ov_wm;
- uint32_t tc_ov_wm_min;
- uint32_t tc_ov_wm_max;
+ sched_counter_t tc_ov_wm;
+ sched_counter_t tc_ov_wm_min;
+ sched_counter_t tc_ov_wm_max;
uint8_t tc_ov_period_id;
uint8_t tc_ov;
uint32_t tc_ov_n;
double tc_ov_rate;
/* Statistics */
- struct rte_sched_subport_stats stats;
+ struct rte_sched_subport_stats stats __rte_cache_aligned;
/* Subport pipes */
uint32_t n_pipes_per_subport_enabled;
@@ -170,7 +170,7 @@ struct rte_sched_subport {
uint32_t n_max_pipe_profiles;
/* Pipe best-effort TC rate */
- uint32_t pipe_tc_be_rate_max;
+ sched_counter_t pipe_tc_be_rate_max;
/* Pipe queues size */
uint16_t qsize[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
@@ -212,7 +212,7 @@ struct rte_sched_port {
uint16_t pipe_queue[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
uint8_t pipe_tc[RTE_SCHED_QUEUES_PER_PIPE];
uint8_t tc_queue[RTE_SCHED_QUEUES_PER_PIPE];
- uint32_t rate;
+ sched_counter_t rate;
uint32_t mtu;
uint32_t frame_overhead;
int socket;
@@ -517,33 +517,35 @@ rte_sched_port_log_pipe_profile(struct rte_sched_subport *subport, uint32_t i)
struct rte_sched_pipe_profile *p = subport->pipe_profiles + i;
RTE_LOG(DEBUG, SCHED, "Low level config for pipe profile %u:\n"
- " Token bucket: period = %u, credits per period = %u, size = %u\n"
- " Traffic classes: period = %u,\n"
- " credits per period = [%u, %u, %u, %u, %u, %u, %u, %u, %u, %u, %u, %u, %u]\n"
+ " Token bucket: period = %"PRIu64", credits per period = %"PRIu64", size = %"PRIu64"\n"
+ " Traffic classes: period = %"PRIu64",\n"
+ " credits per period = [%"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64
+ ", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64
+ ", %"PRIu64", %"PRIu64", %"PRIu64"]\n"
" Best-effort traffic class oversubscription: weight = %hhu\n"
" WRR cost: [%hhu, %hhu, %hhu, %hhu]\n",
i,
/* Token bucket */
- p->tb_period,
- p->tb_credits_per_period,
- p->tb_size,
+ (uint64_t)p->tb_period,
+ (uint64_t)p->tb_credits_per_period,
+ (uint64_t)p->tb_size,
/* Traffic classes */
- p->tc_period,
- p->tc_credits_per_period[0],
- p->tc_credits_per_period[1],
- p->tc_credits_per_period[2],
- p->tc_credits_per_period[3],
- p->tc_credits_per_period[4],
- p->tc_credits_per_period[5],
- p->tc_credits_per_period[6],
- p->tc_credits_per_period[7],
- p->tc_credits_per_period[8],
- p->tc_credits_per_period[9],
- p->tc_credits_per_period[10],
- p->tc_credits_per_period[11],
- p->tc_credits_per_period[12],
+ (uint64_t)p->tc_period,
+ (uint64_t)p->tc_credits_per_period[0],
+ (uint64_t)p->tc_credits_per_period[1],
+ (uint64_t)p->tc_credits_per_period[2],
+ (uint64_t)p->tc_credits_per_period[3],
+ (uint64_t)p->tc_credits_per_period[4],
+ (uint64_t)p->tc_credits_per_period[5],
+ (uint64_t)p->tc_credits_per_period[6],
+ (uint64_t)p->tc_credits_per_period[7],
+ (uint64_t)p->tc_credits_per_period[8],
+ (uint64_t)p->tc_credits_per_period[9],
+ (uint64_t)p->tc_credits_per_period[10],
+ (uint64_t)p->tc_credits_per_period[11],
+ (uint64_t)p->tc_credits_per_period[12],
/* Best-effort traffic class oversubscription */
p->tc_ov_weight,
@@ -553,7 +555,7 @@ rte_sched_port_log_pipe_profile(struct rte_sched_subport *subport, uint32_t i)
}
static inline uint64_t
-rte_sched_time_ms_to_bytes(uint32_t time_ms, uint32_t rate)
+rte_sched_time_ms_to_bytes(sched_counter_t time_ms, sched_counter_t rate)
{
uint64_t time = time_ms;
@@ -566,7 +568,7 @@ static void
rte_sched_pipe_profile_convert(struct rte_sched_subport *subport,
struct rte_sched_pipe_params *src,
struct rte_sched_pipe_profile *dst,
- uint32_t rate)
+ sched_counter_t rate)
{
uint32_t wrr_cost[RTE_SCHED_BE_QUEUES_PER_PIPE];
uint32_t lcd1, lcd2, lcd;
@@ -581,8 +583,8 @@ rte_sched_pipe_profile_convert(struct rte_sched_subport *subport,
/ (double) rate;
double d = RTE_SCHED_TB_RATE_CONFIG_ERR;
- rte_approx(tb_rate, d,
- &dst->tb_credits_per_period, &dst->tb_period);
+ rte_approx(tb_rate, d, &dst->tb_credits_per_period,
+ &dst->tb_period);
}
dst->tb_size = src->tb_size;
@@ -594,8 +596,8 @@ rte_sched_pipe_profile_convert(struct rte_sched_subport *subport,
for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
if (subport->qsize[i])
dst->tc_credits_per_period[i]
- = rte_sched_time_ms_to_bytes(src->tc_period,
- src->tc_rate[i]);
+ = (sched_counter_t) rte_sched_time_ms_to_bytes(
+ src->tc_period, src->tc_rate[i]);
dst->tc_ov_weight = src->tc_ov_weight;
@@ -637,7 +639,8 @@ rte_sched_subport_config_pipe_profile_table(struct rte_sched_subport *subport,
subport->pipe_tc_be_rate_max = 0;
for (i = 0; i < subport->n_pipe_profiles; i++) {
struct rte_sched_pipe_params *src = params->pipe_profiles + i;
- uint32_t pipe_tc_be_rate = src->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE];
+ sched_counter_t pipe_tc_be_rate =
+ src->tc_rate[RTE_SCHED_TRAFFIC_CLASS_BE];
if (subport->pipe_tc_be_rate_max < pipe_tc_be_rate)
subport->pipe_tc_be_rate_max = pipe_tc_be_rate;
@@ -647,7 +650,7 @@ rte_sched_subport_config_pipe_profile_table(struct rte_sched_subport *subport,
static int
rte_sched_subport_check_params(struct rte_sched_subport_params *params,
uint32_t n_max_pipes_per_subport,
- uint32_t rate)
+ sched_counter_t rate)
{
uint32_t i;
@@ -684,7 +687,7 @@ rte_sched_subport_check_params(struct rte_sched_subport_params *params,
}
for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
- uint32_t tc_rate = params->tc_rate[i];
+ sched_counter_t tc_rate = params->tc_rate[i];
uint16_t qsize = params->qsize[i];
if ((qsize == 0 && tc_rate != 0) ||
@@ -910,36 +913,40 @@ rte_sched_port_log_subport_config(struct rte_sched_port *port, uint32_t i)
struct rte_sched_subport *s = port->subports[i];
RTE_LOG(DEBUG, SCHED, "Low level config for subport %u:\n"
- " Token bucket: period = %u, credits per period = %u, size = %u\n"
- " Traffic classes: period = %u\n"
- " credits per period = [%u, %u, %u, %u, %u, %u, %u, %u, %u, %u, %u, %u, %u]\n"
- " Best effort traffic class oversubscription: wm min = %u, wm max = %u\n",
+ " Token bucket: period = %"PRIu64", credits per period = %"PRIu64
+ ", size = %"PRIu64"\n"
+ " Traffic classes: period = %"PRIu64"\n"
+ " credits per period = [%"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64
+ ", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64", %"PRIu64
+ ", %"PRIu64", %"PRIu64", %"PRIu64"]\n"
+ " Best effort traffic class oversubscription: wm min = %"PRIu64
+ ", wm max = %"PRIu64"\n",
i,
/* Token bucket */
- s->tb_period,
- s->tb_credits_per_period,
- s->tb_size,
+ (uint64_t)s->tb_period,
+ (uint64_t)s->tb_credits_per_period,
+ (uint64_t)s->tb_size,
/* Traffic classes */
- s->tc_period,
- s->tc_credits_per_period[0],
- s->tc_credits_per_period[1],
- s->tc_credits_per_period[2],
- s->tc_credits_per_period[3],
- s->tc_credits_per_period[4],
- s->tc_credits_per_period[5],
- s->tc_credits_per_period[6],
- s->tc_credits_per_period[7],
- s->tc_credits_per_period[8],
- s->tc_credits_per_period[9],
- s->tc_credits_per_period[10],
- s->tc_credits_per_period[11],
- s->tc_credits_per_period[12],
+ (uint64_t)s->tc_period,
+ (uint64_t)s->tc_credits_per_period[0],
+ (uint64_t)s->tc_credits_per_period[1],
+ (uint64_t)s->tc_credits_per_period[2],
+ (uint64_t)s->tc_credits_per_period[3],
+ (uint64_t)s->tc_credits_per_period[4],
+ (uint64_t)s->tc_credits_per_period[5],
+ (uint64_t)s->tc_credits_per_period[6],
+ (uint64_t)s->tc_credits_per_period[7],
+ (uint64_t)s->tc_credits_per_period[8],
+ (uint64_t)s->tc_credits_per_period[9],
+ (uint64_t)s->tc_credits_per_period[10],
+ (uint64_t)s->tc_credits_per_period[11],
+ (uint64_t)s->tc_credits_per_period[12],
/* Best effort traffic class oversubscription */
- s->tc_ov_wm_min,
- s->tc_ov_wm_max);
+ (uint64_t)s->tc_ov_wm_min,
+ (uint64_t)s->tc_ov_wm_max);
}
static void
@@ -1023,7 +1030,8 @@ rte_sched_subport_config(struct rte_sched_port *port,
double tb_rate = ((double) params->tb_rate) / ((double) port->rate);
double d = RTE_SCHED_TB_RATE_CONFIG_ERR;
- rte_approx(tb_rate, d, &s->tb_credits_per_period, &s->tb_period);
+ rte_approx(tb_rate, d, &s->tb_credits_per_period,
+ &s->tb_period);
}
s->tb_size = params->tb_size;
@@ -1035,8 +1043,8 @@ rte_sched_subport_config(struct rte_sched_port *port,
for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++) {
if (params->qsize[i])
s->tc_credits_per_period[i]
- = rte_sched_time_ms_to_bytes(params->tc_period,
- params->tc_rate[i]);
+ = (sched_counter_t) rte_sched_time_ms_to_bytes(
+ params->tc_period, params->tc_rate[i]);
}
s->tc_time = port->time + s->tc_period;
for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
@@ -1970,13 +1978,15 @@ grinder_credits_update(struct rte_sched_port *port,
/* Subport TB */
n_periods = (port->time - subport->tb_time) / subport->tb_period;
subport->tb_credits += n_periods * subport->tb_credits_per_period;
- subport->tb_credits = rte_sched_min_val_2_u32(subport->tb_credits, subport->tb_size);
+ subport->tb_credits = rte_sched_min_val_2(subport->tb_credits,
+ subport->tb_size);
subport->tb_time += n_periods * subport->tb_period;
/* Pipe TB */
n_periods = (port->time - pipe->tb_time) / params->tb_period;
pipe->tb_credits += n_periods * params->tb_credits_per_period;
- pipe->tb_credits = rte_sched_min_val_2_u32(pipe->tb_credits, params->tb_size);
+ pipe->tb_credits = rte_sched_min_val_2(pipe->tb_credits,
+ params->tb_size);
pipe->tb_time += n_periods * params->tb_period;
/* Subport TCs */
@@ -1998,13 +2008,13 @@ grinder_credits_update(struct rte_sched_port *port,
#else
-static inline uint32_t
+static inline sched_counter_t
grinder_tc_ov_credits_update(struct rte_sched_port *port,
struct rte_sched_subport *subport)
{
- uint32_t tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
- uint32_t tc_consumption = 0, tc_ov_consumption_max;
- uint32_t tc_ov_wm = subport->tc_ov_wm;
+ sched_counter_t tc_ov_consumption[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+ sched_counter_t tc_consumption = 0, tc_ov_consumption_max;
+ sched_counter_t tc_ov_wm = subport->tc_ov_wm;
uint32_t i;
if (subport->tc_ov == 0)
@@ -2053,13 +2063,15 @@ grinder_credits_update(struct rte_sched_port *port,
/* Subport TB */
n_periods = (port->time - subport->tb_time) / subport->tb_period;
subport->tb_credits += n_periods * subport->tb_credits_per_period;
- subport->tb_credits = rte_sched_min_val_2_u32(subport->tb_credits, subport->tb_size);
+ subport->tb_credits = rte_sched_min_val_2(subport->tb_credits,
+ subport->tb_size);
subport->tb_time += n_periods * subport->tb_period;
/* Pipe TB */
n_periods = (port->time - pipe->tb_time) / params->tb_period;
pipe->tb_credits += n_periods * params->tb_credits_per_period;
- pipe->tb_credits = rte_sched_min_val_2_u32(pipe->tb_credits, params->tb_size);
+ pipe->tb_credits = rte_sched_min_val_2(pipe->tb_credits,
+ params->tb_size);
pipe->tb_time += n_periods * params->tb_period;
/* Subport TCs */
@@ -2101,11 +2113,11 @@ grinder_credits_check(struct rte_sched_port *port,
struct rte_sched_pipe *pipe = grinder->pipe;
struct rte_mbuf *pkt = grinder->pkt;
uint32_t tc_index = grinder->tc_index;
- uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
- uint32_t subport_tb_credits = subport->tb_credits;
- uint32_t subport_tc_credits = subport->tc_credits[tc_index];
- uint32_t pipe_tb_credits = pipe->tb_credits;
- uint32_t pipe_tc_credits = pipe->tc_credits[tc_index];
+ sched_counter_t pkt_len = pkt->pkt_len + port->frame_overhead;
+ sched_counter_t subport_tb_credits = subport->tb_credits;
+ sched_counter_t subport_tc_credits = subport->tc_credits[tc_index];
+ sched_counter_t pipe_tb_credits = pipe->tb_credits;
+ sched_counter_t pipe_tc_credits = pipe->tc_credits[tc_index];
int enough_credits;
/* Check queue credits */
@@ -2136,21 +2148,22 @@ grinder_credits_check(struct rte_sched_port *port,
struct rte_sched_pipe *pipe = grinder->pipe;
struct rte_mbuf *pkt = grinder->pkt;
uint32_t tc_index = grinder->tc_index;
- uint32_t pkt_len = pkt->pkt_len + port->frame_overhead;
- uint32_t subport_tb_credits = subport->tb_credits;
- uint32_t subport_tc_credits = subport->tc_credits[tc_index];
- uint32_t pipe_tb_credits = pipe->tb_credits;
- uint32_t pipe_tc_credits = pipe->tc_credits[tc_index];
- uint32_t pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
- uint32_t pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE] = {0};
- uint32_t pipe_tc_ov_credits, i;
+ sched_counter_t pkt_len = pkt->pkt_len + port->frame_overhead;
+ sched_counter_t subport_tb_credits = subport->tb_credits;
+ sched_counter_t subport_tc_credits = subport->tc_credits[tc_index];
+ sched_counter_t pipe_tb_credits = pipe->tb_credits;
+ sched_counter_t pipe_tc_credits = pipe->tc_credits[tc_index];
+ sched_counter_t pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE];
+ sched_counter_t pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE] = {0};
+ sched_counter_t pipe_tc_ov_credits;
+ uint32_t i;
int enough_credits;
for (i = 0; i < RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE; i++)
- pipe_tc_ov_mask1[i] = UINT32_MAX;
+ pipe_tc_ov_mask1[i] = ~0;
pipe_tc_ov_mask1[RTE_SCHED_TRAFFIC_CLASS_BE] = pipe->tc_ov_credits;
- pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASS_BE] = UINT32_MAX;
+ pipe_tc_ov_mask2[RTE_SCHED_TRAFFIC_CLASS_BE] = ~0;
pipe_tc_ov_credits = pipe_tc_ov_mask1[tc_index];
/* Check pipe and subport credits */
diff --git a/lib/librte_sched/rte_sched_common.h b/lib/librte_sched/rte_sched_common.h
index 8c191a9b8..06520a686 100644
--- a/lib/librte_sched/rte_sched_common.h
+++ b/lib/librte_sched/rte_sched_common.h
@@ -14,8 +14,16 @@ extern "C" {
#define __rte_aligned_16 __attribute__((__aligned__(16)))
-static inline uint32_t
-rte_sched_min_val_2_u32(uint32_t x, uint32_t y)
+//#define COUNTER_SIZE_64
+
+#ifdef COUNTER_SIZE_64
+typedef uint64_t sched_counter_t;
+#else
+typedef uint32_t sched_counter_t;
+#endif
+
+static inline sched_counter_t
+rte_sched_min_val_2(sched_counter_t x, sched_counter_t y)
{
return (x < y)? x : y;
}
--
2.21.0
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
2019-10-13 23:07 0% ` Zhang, Roy Fan
@ 2019-10-14 11:10 0% ` Ananyev, Konstantin
0 siblings, 0 replies; 200+ results
From: Ananyev, Konstantin @ 2019-10-14 11:10 UTC (permalink / raw)
To: Zhang, Roy Fan, Akhil Goyal, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Doherty, Declan
Cc: 'Anoob Joseph', Jerin Jacob, Hemant Agrawal
> Hi Akhil,
>
> Thanks for the review and comments!
> I know you are extremely busy, so here is my point in brief:
> I think placing the CPU synchronous crypto in rte_security makes sense, as
>
> 1. rte_security contains inline crypto and lookaside crypto action type already, adding cpu_crypto action type is reasonable.
> 2. rte_security contains security features that may not be supported by all devices, such as crypto, IPsec, and PDCP. cpu_crypto follows this
> category (again, crypto).
> 3. placing CPU synchronous crypto API in rte_security is natural - as inline mode works synchronously, too. However cryptodev doesn't.
> 4. placing CPU synchronous crypto API in rte_security helps boost SW crypto performance; I have already provided a simple perf test
> inside the unit test in the patchset for the user to try out - just comparing its output against DPDK crypto perf app output.
> 5. placing the CPU synchronous crypto API in cryptodev will never serve HW lookaside crypto PMDs, as making them work synchronously
> incurs a huge performance penalty. Moreover, the cryptodev framework's existing design provides APIs that work across all crypto PMDs
> (rte_cryptodev_enqueue_burst / dequeue_burst, for example), and this does not fit cryptodev's principle.
> 6. placing CPU synchronous crypto API in cryptodev confuses the user, as:
> - the session created for async mode may not work in sync mode
> - both enqueue/dequeue and cpu_crypto_process do the same crypto processing, but one PMD may support only one API (set),
> another may support the other, and a third PMD may support both. We would have to provide yet another API to let the user query
> which PMD supports which.
> - two completely different code paths for async/sync mode.
> 7. You said at the end of the email that placing the CPU synchronous crypto API into rte_security is not acceptable as it does not do any
> rte_security stuff - isn't crypto such stuff? You may call this a quibble, but in my view, in this patchset both PMDs' implementations do offload the work
> to the CPU's dedicated circuitry designed to accelerate crypto processing.
>
> To me, cryptodev is the one place the CPU synchronous crypto API should not go into; rte_security is where it belongs.
I also don't understand why rte_security is not an option here.
We do have inline-crypto right now, so why can't we have cpu-crypto with a new process() API here?
Actually, I would like to hear more opinions from the community here -
what other interested parties think is the best way for introducing cpu-crypto specific API?
Konstantin
>
> Regards,
> Fan
>
> > -----Original Message-----
> > From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> > Sent: Friday, October 11, 2019 2:24 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; 'dev@dpdk.org'
> > <dev@dpdk.org>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>;
> > 'Thomas Monjalon' <thomas@monjalon.net>; Zhang, Roy Fan
> > <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>
> > Cc: 'Anoob Joseph' <anoobj@marvell.com>
> > Subject: RE: [RFC PATCH 1/9] security: introduce CPU Crypto action type and
> > API
> >
> > Hi Konstantin,
> >
> > >
> > > Hi Akhil,
> > >
> > ..[snip]
> >
> > > > > > > > OK let us assume that you have a separate structure. But I
> > > > > > > > have a few
> > > > > queries:
> > > > > > > > 1. how can multiple drivers use a same session
> > > > > > >
> > > > > > > As a short answer: they can't.
> > > > > > > It is pretty much the same approach as with rte_security -
> > > > > > > each device
> > > needs
> > > > > to
> > > > > > > create/init its own session.
> > > > > > > So upper layer would need to maintain its own array (or so) for such
> > case.
> > > > > > > Though the question is why would you like to have same session
> > > > > > > over
> > > > > multiple
> > > > > > > SW backed devices?
> > > > > > > As it would be anyway just a synchronous function call that
> > > > > > > will be
> > > executed
> > > > > on
> > > > > > > the same cpu.
> > > > > >
> > > > > > I may have single FAT tunnel which may be distributed over
> > > > > > multiple Cores, and each core is affined to a different SW device.
> > > > >
> > > > > If it is pure SW, then we don't need multiple devices for such scenario.
> > > > > Device in that case is pure abstraction that we can skip.
> > > >
> > > > Yes agreed, but that liberty is given to the application whether it
> > > > need multiple devices with single queue or a single device with multiple
> > queues.
> > > > I think that independence should not be broken in this new API.
> > > > >
> > > > > > So a single session may be accessed by multiple devices.
> > > > > >
> > > > > > One more example would be depending on packet sizes, I may
> > > > > > switch
> > > between
> > > > > > HW/SW PMDs with the same session.
> > > > >
> > > > > Sure, but then we'll have multiple sessions.
> > > >
> > > > No, the session will be same and it will have multiple private data
> > > > for each of
> > > the PMD.
> > > >
> > > > > BTW, we have same thing now - these private session pointers are
> > > > > just
> > > stored
> > > > > inside the same rte_crypto_sym_session.
> > > > > And if user wants to support this model, he would also need to
> > > > > store <dev_id, queue_id> pair for each HW device anyway.
> > > >
> > > > Yes agreed, but how is that thing happening in your new struct, you
> > > > cannot
> > > support that.
> > >
> > > User can store all these info in his own struct.
> > > That's exactly what we have right now.
> > > Let say ipsec-secgw has to store for each IPsec SA:
> > > pointer to crypto-session and/or pointer to security session plus (for
> > > lookaside-devices) cdev_id_qp that allows it to extract dev_id +
> > > queue_id information.
> > > As I understand that works for now, as each ipsec_sa uses only one
> > > dev+queue. Though if someone would like to use multiple devices/queues
> > > for the same SA - he would need to have an array of these <dev+queue>
> > pairs.
> > > So even right now rte_cryptodev_sym_session is not self-consistent and
> > > requires extra information to be maintained by user.
> >
> > Why are you increasing the complexity for the user application?
> > The new APIs and struct should be such that the application needs to make minimal changes
> > to the stack, so that the stack is portable across multiple vendors.
> > You should try to hide as much complexity as possible in the driver or lib to give the user
> > simple APIs.
> >
> > Having the same session for multiple devices was added by Intel only for some
> > use cases.
> > And we had split that session create API into 2. Now if those are not useful
> > shall we move back to the single API. I think @Doherty, Declan and @De Lara
> > Guarch, Pablo can comment on this.
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > > 2. Can somebody use the scheduler pmd for scheduling the
> > > > > > > > different
> > > type
> > > > > of
> > > > > > > payloads for the same session?
> > > > > > >
> > > > > > > In theory yes.
> > > > > > > Though for that scheduler pmd should have inside it's
> > > > > > > rte_crypto_cpu_sym_session an array of pointers to the
> > > > > > > underlying devices sessions.
> > > > > > >
> > > > > > > >
> > > > > > > > With your proposal the APIs would be very specific to your
> > > > > > > > use case
> > > only.
> > > > > > >
> > > > > > > Yes in some way.
> > > > > > > I consider that API specific for SW backed crypto PMDs.
> > > > > > > I can hardly see how any 'real HW' PMDs (lksd-none,
> > > > > > > lksd-proto) will
> > > benefit
> > > > > > > from it.
> > > > > > > Current crypto-op API is very much HW oriented.
> > > > > > > Which is ok, that's for it was intended for, but I think we
> > > > > > > also need one
> > > that
> > > > > > > would be designed
> > > > > > > for SW backed implementation in mind.
> > > > > >
> > > > > > We may re-use your API for HW PMDs as well which do not have
> > > requirement
> > > > > of
> > > > > > Crypto-op/mbuf etc.
> > > > > > The return type of your new process API may have a status which
> > > > > > say
> > > > > 'processed'
> > > > > > Or can be say 'enqueued'. So if it is 'enqueued', we may have a
> > > > > > new API for
> > > > > raw
> > > > > > Bufs dequeue as well.
> > > > > >
> > > > > > This requirement can be for any hardware PMDs like QAT as well.
> > > > >
> > > > > I don't think it is a good idea to extend this API for async (lookaside)
> > devices.
> > > > > You'll need to:
> > > > > - provide dev_id and queue_id for each process(enqueue) and
> > > > > dequeuer operation.
> > > > > - provide IOVA for all buffers passing to that function (data
> > > > > buffers, digest,
> > > IV,
> > > > > aad).
> > > > > - On dequeue provide some way to associate dequeued data and digest
> > > > > buffers with
> > > > > crypto-session that was used (and probably with mbuf).
> > > > > So most likely we'll end up with just another version of our
> > > > > current crypto-op structure.
> > > > > If you'd like to get rid of the mbuf dependency within the current
> > > > > crypto-op API, that's understandable, but I don't think we should
> > > > > have same API for both sync (CPU) and async
> > > > > (lookaside) cases.
> > > > > It doesn't seem feasible at all and voids whole purpose of that patch.
> > > >
> > > > At this moment we are not much concerned about the dequeue API and about the
> > > > HW PMD support. It is just that the new API should be generic enough to be used in
> > > > some future scenarios as well. I am just highlighting the possible use cases which
> > > > can be there in the future.
> > >
> > > Sorry, but I strongly disagree with such an approach.
> > > We should stop adding/modifying API 'just in case' and because 'it
> > > might be useful for some future HW'.
> > > Inside DPDK we already do have too many dev level APIs without any
> > > implementations.
> > > That's quite bad practice and very disorienting for end-users.
> > > I think to justify API additions/changes we need at least one proper
> > > implementation for it, or at least some strong evidence that people
> > > are really committed to support it in nearest future.
> > > BTW, that is what the TB agreed on, nearly a year ago.
> > >
> > > This new API (if we'll go ahead with it of course) would stay
> > > experimental for some time anyway to make sure we don't miss anything
> > > needed (I think for about a year timeframe).
> > > So if you guys *really* want to extend it to support _async_ devices too
> > > - I am open for modifications/additions here.
> > > Though personally I think such addition would over-complicate things
> > > and we'll end up with another reincarnation of current crypto-op.
> > > We actually discussed it internally, and decided to drop that idea because
> > of that.
> > > Again, my opinion - for lookaside devices it might be better to try to
> > > optimize current crypto-op path (remove mbuf requirement, probably add
> > > ability to group by session on enqueue/dequeue, etc.).
> >
> > I agree that the new API is experimental and can be modified later. So no
> > issues in that, but we can keep some things in mind while defining APIs.
> > These were some comments from my side; if they are impacting the current
> > scenario, you can drop them. We will take care of them later.
> >
> > >
> > > >
> > > > What is the issue that you face in making a dev-op for this new API.
> > > > Do you see
> > > any
> > > > performance impact with that?
> > >
> > > There are two main things:
> > > 1. user would need to maintain and provide for each process() call
> > > dev_id+queue_id.
> > > That's means extra (and totally unnecessary for SW) overhead.
> >
> > You are using a crypto device for performing the processing, you must use
> > dev_id to identify which SW device it is. This is how the DPDK Framework
> > works.
> > .
> >
> > > 2. yes I would expect some perf overhead too - it would be extra call or
> > branch.
> > > Again as it would be data-dependency - most likely cpu wouldn't be
> > > able to pipeline it efficiently:
> > >
> > > rte_crypto_sym_process(uint8_t dev_id, uint16_t qp_id,
> > >                        rte_crypto_sym_session *ses, ...)
> > > {
> > >     struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> > >     return (*dev->process)(sess->data[dev->driver_id], ...);
> > > }
> > >
> > > driver_specific_process(driver_specific_sym_session *sess)
> > > {
> > >     return sess->process(sess, ...);
> > > }
> > >
> > > I didn't make any exact measurements but sure it would be slower than
> > just:
> > > session_udata->process(session->udata->sess, ...); Again it would be
> > > much more noticeable on low end cpus.
> > > Let say here:
> > > http://mails.dpdk.org/archives/dev/2019-September/144350.html
> > > Jerin claims 1.5-3% drop for introducing extra call via hiding eth_dev
> > > contents - I suppose we would have something similar here.
> > > I do realize that in majority of cases crypto is more expensive than
> > > RX/TX, but still.
> > >
> > > If it would be a really unavoidable tradeoff (support already existing
> > > API, or so) I wouldn't mind, but I don't see any real need for it right now.
> >
> > Calling session_udata->process(session->udata->sess, ...); from the
> > application and Application need to maintain for each PMD the process() API
> > in its memory will make the application not portable to other vendors.
> >
> > What we are doing here is defining another way to create sessions for the
> > same stuff that is already done. This make applications non-portable and
> > confusing for the application writer.
> >
> > I would say you should do some profiling first. As you also mentioned crypto
> > workload is more Cycle consuming, it will not impact this case.
> >
> >
> > >
> > > >
> > > > >
> > > > > > That is why a dev-ops would be a better option.
> > > > > >
> > > > > > >
> > > > > > > > When you would add more functionality to this sync
> > > > > > > > API/struct, it will
> > > end
> > > > > up
> > > > > > > being the same API/struct.
> > > > > > > >
> > > > > > > > Let us see how close/ far we are from the existing APIs
> > > > > > > > when the
> > > actual
> > > > > > > implementation is done.
> > > > > > > >
> > > > > > > > > > I am not sure if that would be needed.
> > > > > > > > > > It would be internal to the driver that if synchronous
> > > > > > > > > > processing is
> > > > > > > > > supported(from feature flag) and
> > > > > > > > > > Have relevant fields in xform(the newly added ones which
> > > > > > > > > > are
> > > packed
> > > > > as
> > > > > > > per
> > > > > > > > > your suggestions) set,
> > > > > > > > > > It will create that type of session.
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > > + * Main points:
> > > > > > > > > > > + * - Current crypto-dev API is reasonably mature and
> > > > > > > > > > > + it is
> > > desirable
> > > > > > > > > > > + * to keep it unchanged (API/ABI stability). From other
> > side, this
> > > > > > > > > > > + * new sync API is new one and probably would require
> > extra
> > > > > changes.
> > > > > > > > > > > + * Having it as a new one allows to mark it as experimental,
> > > without
> > > > > > > > > > > + * affecting existing one.
> > > > > > > > > > > + * - Fully opaque cpu_sym_session structure gives more
> > flexibility
> > > > > > > > > > > + * to the PMD writers and again allows to avoid ABI
> > breakages
> > > in
> > > > > future.
> > > > > > > > > > > + * - process() function per set of xforms
> > > > > > > > > > > + * allows to expose different process() functions for
> > different
> > > > > > > > > > > + * xform combinations. PMD writer can decide, does he
> > wants
> > > to
> > > > > > > > > > > + * push all supported algorithms into one process()
> > function,
> > > > > > > > > > > + * or spread it across several ones.
> > > > > > > > > > > + * I.E. More flexibility for PMD writer.
> > > > > > > > > >
> > > > > > > > > > Which process function should be chosen is internal to
> > > > > > > > > > PMD, how
> > > > > would
> > > > > > > that
> > > > > > > > > info
> > > > > > > > > > be visible to the application or the library. These will
> > > > > > > > > > get stored in
> > > the
> > > > > > > session
> > > > > > > > > private
> > > > > > > > > > data. It would be upto the PMD writer, to store the per
> > > > > > > > > > session
> > > process
> > > > > > > > > function in
> > > > > > > > > > the session private data.
> > > > > > > > > >
> > > > > > > > > > Process function would be a dev ops just like enc/deq
> > > > > > > > > > operations
> > > and it
> > > > > > > should
> > > > > > > > > call
> > > > > > > > > > The respective process API stored in the session private data.
> > > > > > > > >
> > > > > > > > > That model (via devops) is possible, but has several
> > > > > > > > > drawbacks from
> > > my
> > > > > > > > > perspective:
> > > > > > > > >
> > > > > > > > > 1. It means we'll need to pass dev_id as a parameter to
> > > > > > > > > process()
> > > function.
> > > > > > > > > Though in fact dev_id is not a relevant information for us
> > > > > > > > > here (all we need is pointer to the session and pointer to
> > > > > > > > > the function to call) and I tried to avoid using it in data-path
> > functions for that API.
> > > > > > > >
> > > > > > > > You have a single vdev, but someone may have multiple vdevs
> > > > > > > > for each
> > > > > thread,
> > > > > > > or may
> > > > > > > > Have same dev with multiple queues for each core.
> > > > > > >
> > > > > > > That's fine. As I said above it is a SW backed implementation.
> > > > > > > Each session has to be a separate entity that contains all
> > > > > > > necessary
> > > > > information
> > > > > > > (keys, alg/mode info, etc.) to process input buffers.
> > > > > > > Plus we need the actual function pointer to call.
> > > > > > > I just don't see what for we need a dev_id in that situation.
> > > > > >
> > > > > > To iterate the session private data in the session.
> > > > > >
> > > > > > > Again, here we don't need care about queues and their pinning to
> > cores.
> > > > > > > If let say someone would like to process buffers from the same
> > > > > > > IPsec SA
> > > on 2
> > > > > > > different cores in parallel, he can just create 2 sessions for
> > > > > > > the same
> > > xform,
> > > > > > > give one to thread #1 and second to thread #2.
> > > > > > > After that both threads are free to call process(this_thread_ses, ...)
> > at will.
> > > > > >
> > > > > > Say you have a 16core device to handle 100G of traffic on a single
> > tunnel.
> > > > > > Will we make 16 sessions with same parameters?
> > > > >
> > > > > Absolutely same question we can ask for current crypto-op API.
> > > > > You have lookaside crypto-dev with 16 HW queues, each queue is
> > > > > serviced by different CPU.
> > > > > For the same SA, do you need a separate session per queue, or is
> > > > > it ok to
> > > reuse
> > > > > current one?
> > > > > AFAIK, right now this is a grey area not clearly defined.
> > > > > For crypto-devs I am aware - user can reuse the same session (as
> > > > > PMD uses it read-only).
> > > > > But again, right now I think it is not clearly defined and is
> > > > > implementation specific.
> > > >
> > > > User can use the same session, that is what I am also insisting, but
> > > > it may have
> > > separate
> > > > Session private data. Cryptodev session create API provide that
> > > > functionality
> > > and we can
> > > > Leverage that.
> > >
> > > rte_cryptodev_sym_session. sess_data[] is indexed by driver_id, which
> > > means we can't use the same rte_cryptodev_sym_session to hold sessions
> > > for both sync and async mode for the same device. Of course we can
> > > add a hard requirement that any driver that wants to support process()
> > > has to create sessions that can handle both process and
> > > enqueue/dequeue, but then again what for to create such overhead?
> > >
> > > BTW, to be honest, I don't consider current rte_cryptodev_sym_session
> > > construct for multiple device_ids:
> > > __extension__ struct {
> > > void *data;
> > > uint16_t refcnt;
> > > } sess_data[0];
> > > /**< Driver specific session material, variable size */
> > >
> > Yes I also feel the same. I was also not in favor of this when it was introduced.
> > Please go ahead and remove this. I have no issues with that.
> >
> > > as an advantage.
> > > It looks too error prone for me:
> > > 1. Simultaneous session initialization/de-initialization for devices
> > > with the same driver_id is not possible.
> > > 2. It assumes that all device driver will be loaded before we start to
> > > create session pools.
> > >
> > > Right now it seems ok, as no-one requires such functionality, but I
> > > don't know how it will be in future.
> > > For me rte_security session model, where for each security context
> > > user have to create new session looks much more robust.
> > Agreed
> >
> > >
> > > >
> > > > BTW, I can see a v2 to this RFC which is still based on security library.
> > >
> > > Yes, v2 was concentrated on fixing found issues, some code
> > > restructuring, i.e. - changes that would be needed anyway whatever API
> > approach we'll choose.
> > >
> > > > When do you plan
> > > > To submit the patches for crypto based APIs. We have RC1 merge
> > > > deadline for
> > > this
> > > > patchset on 21st Oct.
> > >
> > > We'd like to start working on it ASAP, but it seems we still have a
> > > major disagreement about how this crypto-dev API should look like.
> > > Which makes me think - should we return to our original proposal via
> > > rte_security?
> > > It still looks to me like clean and straightforward way to enable this
> > > new API, and probably wouldn't cause that much controversy.
> > > What do you think?
> >
> > I cannot spend more time discussing on this until RC1 date. I have some other
> > stuff pending.
> > You can send the patches early next week with the approach that I
> > mentioned or else we can discuss this post RC1(which would mean deferring
> > to 20.02).
> >
> > But moving back to security is not acceptable to me. The code should be put
> > where it is intended and not where it is easy to put. You are not doing any
> > rte_security stuff.
> >
> >
> > Regards,
> > Akhil
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API
@ 2019-10-13 23:07 0% ` Zhang, Roy Fan
2019-10-14 11:10 0% ` Ananyev, Konstantin
2019-10-16 22:07 3% ` Ananyev, Konstantin
1 sibling, 1 reply; 200+ results
From: Zhang, Roy Fan @ 2019-10-13 23:07 UTC (permalink / raw)
To: Akhil Goyal, Ananyev, Konstantin, 'dev@dpdk.org',
De Lara Guarch, Pablo, 'Thomas Monjalon',
Doherty, Declan
Cc: 'Anoob Joseph'
Hi Akhil,
Thanks for the review and comments!
I know you are extremely busy, so here are my points in brief:
I think placing the CPU synchronous crypto in rte_security makes sense, as
1. rte_security already contains the inline crypto and lookaside crypto action types, so adding a cpu_crypto action type is reasonable.
2. rte_security contains security features that may not be supported by all devices, such as crypto, IPsec, and PDCP. cpu_crypto follows this category - again, crypto.
3. placing the CPU synchronous crypto API in rte_security is natural - inline mode also works synchronously, whereas cryptodev does not.
4. placing the CPU synchronous crypto API in rte_security helps boost SW crypto performance. I have already provided a simple perf test inside the unit tests in the patchset for the user to try out - just compare its output against the DPDK crypto perf app output.
5. placing the CPU synchronous crypto API in cryptodev will never serve HW lookaside crypto PMDs, as making them work synchronously incurs a huge performance penalty. Moreover, the cryptodev framework's existing design provides APIs that work across all crypto PMDs (rte_cryptodev_enqueue_burst / dequeue_burst, for example), and a sync-only API does not fit that principle.
6. placing the CPU synchronous crypto API in cryptodev confuses the user, as:
- the session created for async mode may not work in sync mode
- both enqueue/dequeue and cpu_crypto_process do the same crypto processing, but one PMD may support only one API (set), another PMD may support the other, and a third may support both. We would have to provide yet another API to let the user query which PMD supports which.
- there would be two completely different code paths for async/sync mode.
7. You said at the end of the email that placing the CPU synchronous crypto API into rte_security is not acceptable because it does not do any rte_security stuff - but is crypto not rte_security stuff? You may call this a quibble, but in my view both PMD implementations in the patchset do offload the work to the CPU's special circuitry designed to accelerate crypto processing.
To me, cryptodev is the one place the CPU synchronous crypto API should not go into; rte_security is.
Regards,
Fan
> -----Original Message-----
> From: Akhil Goyal [mailto:akhil.goyal@nxp.com]
> Sent: Friday, October 11, 2019 2:24 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; 'dev@dpdk.org'
> <dev@dpdk.org>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>;
> 'Thomas Monjalon' <thomas@monjalon.net>; Zhang, Roy Fan
> <roy.fan.zhang@intel.com>; Doherty, Declan <declan.doherty@intel.com>
> Cc: 'Anoob Joseph' <anoobj@marvell.com>
> Subject: RE: [RFC PATCH 1/9] security: introduce CPU Crypto action type and
> API
>
> Hi Konstantin,
>
> >
> > Hi Akhil,
> >
> ..[snip]
>
> > > > > > > OK let us assume that you have a separate structure. But I
> > > > > > > have a few
> > > > queries:
> > > > > > > 1. how can multiple drivers use a same session
> > > > > >
> > > > > > As a short answer: they can't.
> > > > > > It is pretty much the same approach as with rte_security -
> > > > > > each device
> > needs
> > > > to
> > > > > > create/init its own session.
> > > > > > So upper layer would need to maintain its own array (or so) for such
> case.
> > > > > > Though the question is why would you like to have same session
> > > > > > over
> > > > multiple
> > > > > > SW backed devices?
> > > > > > As it would be anyway just a synchronous function call that
> > > > > > will be
> > executed
> > > > on
> > > > > > the same cpu.
> > > > >
> > > > > I may have single FAT tunnel which may be distributed over
> > > > > multiple Cores, and each core is affined to a different SW device.
> > > >
> > > > If it is pure SW, then we don't need multiple devices for such scenario.
> > > > Device in that case is pure abstraction that we can skip.
> > >
> > > Yes agreed, but that liberty is given to the application whether it
> > > need multiple devices with single queue or a single device with multiple
> queues.
> > > I think that independence should not be broken in this new API.
> > > >
> > > > > So a single session may be accessed by multiple devices.
> > > > >
> > > > > One more example would be depending on packet sizes, I may
> > > > > switch
> > between
> > > > > HW/SW PMDs with the same session.
> > > >
> > > > Sure, but then we'll have multiple sessions.
> > >
> > > No, the session will be same and it will have multiple private data
> > > for each of
> > the PMD.
> > >
> > > > BTW, we have same thing now - these private session pointers are
> > > > just
> > stored
> > > > inside the same rte_crypto_sym_session.
> > > > And if user wants to support this model, he would also need to
> > > > store <dev_id, queue_id> pair for each HW device anyway.
> > >
> > > Yes agreed, but how is that thing happening in your new struct, you
> > > cannot
> > support that.
> >
> > User can store all these info in his own struct.
> > That's exactly what we have right now.
> > Let say ipsec-secgw has to store for each IPsec SA:
> > pointer to crypto-session and/or pointer to security session plus (for
> > lookaside-devices) cdev_id_qp that allows it to extract dev_id +
> > queue_id information.
> > As I understand that works for now, as each ipsec_sa uses only one
> > dev+queue. Though if someone would like to use multiple devices/queues
> > for the same SA - he would need to have an array of these <dev+queue>
> pairs.
> > So even right now rte_cryptodev_sym_session is not self-consistent and
> > requires extra information to be maintained by user.
>
> Why are you increasing the complexity for the user application.
> The new APIs and struct should be such that it need to do minimum changes
> in the stack so that stack is portable on multiple vendors.
> You should try to hide as much complexity in the driver or lib to give the user
> simple APIs.
>
> Having a same session for multiple devices was added by Intel only for some
> use cases.
> And we had split that session create API into 2. Now if those are not useful
> shall we move back to the single API. I think @Doherty, Declan and @De Lara
> Guarch, Pablo can comment on this.
>
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > > 2. Can somebody use the scheduler pmd for scheduling the
> > > > > > > different
> > type
> > > > of
> > > > > > payloads for the same session?
> > > > > >
> > > > > > In theory yes.
> > > > > > Though for that scheduler pmd should have inside it's
> > > > > > rte_crypto_cpu_sym_session an array of pointers to the
> > > > > > underlying devices sessions.
> > > > > >
> > > > > > >
> > > > > > > With your proposal the APIs would be very specific to your
> > > > > > > use case
> > only.
> > > > > >
> > > > > > Yes in some way.
> > > > > > I consider that API specific for SW backed crypto PMDs.
> > > > > > I can hardly see how any 'real HW' PMDs (lksd-none,
> > > > > > lksd-proto) will
> > benefit
> > > > > > from it.
> > > > > > Current crypto-op API is very much HW oriented.
> > > > > > Which is ok, that's for it was intended for, but I think we
> > > > > > also need one
> > that
> > > > > > would be designed
> > > > > > for SW backed implementation in mind.
> > > > >
> > > > > We may re-use your API for HW PMDs as well which do not have
> > requirement
> > > > of
> > > > > Crypto-op/mbuf etc.
> > > > > The return type of your new process API may have a status which
> > > > > say
> > > > 'processed'
> > > > > Or can be say 'enqueued'. So if it is 'enqueued', we may have a
> > > > > new API for
> > > > raw
> > > > > Bufs dequeue as well.
> > > > >
> > > > > This requirement can be for any hardware PMDs like QAT as well.
> > > >
> > > > I don't think it is a good idea to extend this API for async (lookaside)
> devices.
> > > > You'll need to:
> > > > - provide dev_id and queue_id for each process(enqueue) and
> > > > dequeue operation.
> > > > - provide IOVA for all buffers passing to that function (data
> > > > buffers, digest,
> > IV,
> > > > aad).
> > > > - On dequeue provide some way to associate dequed data and digest
> > > > buffers with
> > > > crypto-session that was used (and probably with mbuf).
> > > > So most likely we'll end up with another just version of our
> > > > current crypto-op structure.
> > > > If you'd like to get rid of mbufs dependency within current
> > > > crypto-op API that understandable, but I don't think we should
> > > > have same API for both sync (CPU) and async
> > > > (lookaside) cases.
> > > > It doesn't seem feasible at all and voids whole purpose of that patch.
> > >
> > > At this moment we are not much concerned about the dequeue API and
> > > about
> > the
> > > HW PMD support. It is just that the new API should be generic enough
> > > to be
> > used in
> > > some future scenarios as well. I am just highlighting the possible
> > > usecases
> > which can
> > > be there in future.
> >
> > Sorry, but I strongly disagree with such approach.
> > We should stop adding/modifying API 'just in case' and because 'it
> > might be useful for some future HW'.
> > Inside DPDK we already do have too many dev level APIs without any
> > implementations.
> > That's quite bad practice and very dis-orienting for end-users.
> > I think to justify API additions/changes we need at least one proper
> > implementation for it, or at least some strong evidence that people
> > are really committed to support it in nearest future.
> > BTW, that's what the TB agreed on, nearly a year ago.
> >
> > This new API (if we'll go ahead with it of course) would stay
> > experimental for some time anyway to make sure we don't miss anything
> > needed (I think for about a year time- frame).
> > So if you guys *really* want to extend it support _async_ devices too
> > - I am open for modifications/additions here.
> > Though personally I think such addition would over-complicate things
> > and we'll end up with another reincarnation of current crypto-op.
> > We actually discussed it internally, and decided to drop that idea because
> of that.
> > Again, my opinion - for lookaside devices it might be better to try to
> > optimize current crypto-op path (remove mbuf requirement, probably add
> > ability to group by session on enqueue/dequeue, etc.).
>
> I agree that the new API is experimental and can be modified later. So no
> issues in that, but we can keep some things in mind while defining APIs.
> These were some comments from my side, if those are impacting the current
> scenario, you can drop those. We will take care of those later.
>
> >
> > >
> > > What is the issue that you face in making a dev-op for this new API.
> > > Do you see
> > any
> > > performance impact with that?
> >
> > There are two main things:
> > 1. user would need to maintain and provide for each process() call
> > dev_id+queue_id.
> > That's means extra (and totally unnecessary for SW) overhead.
>
> You are using a crypto device for performing the processing, you must use
> dev_id to identify which SW device it is. This is how the DPDK Framework
> works.
> .
>
> > 2. yes I would expect some perf overhead too - it would be extra call or
> branch.
> > Again as it would be data-dependency - most likely cpu wouldn't be
> > able to pipeline it efficiently:
> >
> > rte_crypto_sym_process(uint8_t dev_id, uint16_t qp_id,
> >                        rte_crypto_sym_session *ses, ...)
> > {
> >     struct rte_cryptodev *dev = &rte_cryptodevs[dev_id];
> >     return (*dev->process)(sess->data[dev->driver_id], ...);
> > }
> >
> > driver_specific_process(driver_specific_sym_session *sess)
> > {
> >     return sess->process(sess, ...);
> > }
> >
> > I didn't make any exact measurements but sure it would be slower than
> just:
> > session_udata->process(session->udata->sess, ...); Again it would be
> > much more noticeable on low end cpus.
> > Let say here:
> > http://mails.dpdk.org/archives/dev/2019-September/144350.html
> > Jerin claims 1.5-3% drop for introducing extra call via hiding eth_dev
> > contents - I suppose we would have something similar here.
> > I do realize that in majority of cases crypto is more expensive then
> > RX/TX, but still.
> >
> > If it would be a really unavoidable tradeoff (support already existing
> > API, or so) I wouldn't mind, but I don't see any real need for it right now.
>
> Calling session_udata->process(session->udata->sess, ...); from the
> application and Application need to maintain for each PMD the process() API
> in its memory will make the application not portable to other vendors.
>
> What we are doing here is defining another way to create sessions for the
> same stuff that is already done. This make applications non-portable and
> confusing for the application writer.
>
> I would say you should do some profiling first. As you also mentioned crypto
> workload is more Cycle consuming, it will not impact this case.
>
>
> >
> > >
> > > >
> > > > > That is why a dev-ops would be a better option.
> > > > >
> > > > > >
> > > > > > > When you would add more functionality to this sync
> > > > > > > API/struct, it will
> > end
> > > > up
> > > > > > being the same API/struct.
> > > > > > >
> > > > > > > Let us see how close/ far we are from the existing APIs
> > > > > > > when the
> > actual
> > > > > > implementation is done.
> > > > > > >
> > > > > > > > > I am not sure if that would be needed.
> > > > > > > > > It would be internal to the driver that if synchronous
> > > > > > > > > processing is
> > > > > > > > supported(from feature flag) and
> > > > > > > > > Have relevant fields in xform(the newly added ones which
> > > > > > > > > are
> > packed
> > > > as
> > > > > > per
> > > > > > > > your suggestions) set,
> > > > > > > > > It will create that type of session.
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > > + * Main points:
> > > > > > > > > > + * - Current crypto-dev API is reasonably mature and
> > > > > > > > > > + it is
> > desirable
> > > > > > > > > > + * to keep it unchanged (API/ABI stability). From other
> side, this
> > > > > > > > > > + * new sync API is new one and probably would require
> extra
> > > > changes.
> > > > > > > > > > + * Having it as a new one allows to mark it as experimental,
> > without
> > > > > > > > > > + * affecting existing one.
> > > > > > > > > > + * - Fully opaque cpu_sym_session structure gives more
> flexibility
> > > > > > > > > > + * to the PMD writers and again allows to avoid ABI
> breakages
> > in
> > > > future.
> > > > > > > > > > + * - process() function per set of xforms
> > > > > > > > > > + * allows to expose different process() functions for
> different
> > > > > > > > > > + * xform combinations. PMD writer can decide, does he
> wants
> > to
> > > > > > > > > > + * push all supported algorithms into one process()
> function,
> > > > > > > > > > + * or spread it across several ones.
> > > > > > > > > > + * I.E. More flexibility for PMD writer.
> > > > > > > > >
> > > > > > > > > Which process function should be chosen is internal to
> > > > > > > > > PMD, how
> > > > would
> > > > > > that
> > > > > > > > info
> > > > > > > > > be visible to the application or the library. These will
> > > > > > > > > get stored in
> > the
> > > > > > session
> > > > > > > > private
> > > > > > > > > data. It would be upto the PMD writer, to store the per
> > > > > > > > > session
> > process
> > > > > > > > function in
> > > > > > > > > the session private data.
> > > > > > > > >
> > > > > > > > > Process function would be a dev ops just like enc/deq
> > > > > > > > > operations
> > and it
> > > > > > should
> > > > > > > > call
> > > > > > > > > The respective process API stored in the session private data.
> > > > > > > >
> > > > > > > > That model (via devops) is possible, but has several
> > > > > > > > drawbacks from
> > my
> > > > > > > > perspective:
> > > > > > > >
> > > > > > > > 1. It means we'll need to pass dev_id as a parameter to
> > > > > > > > process()
> > function.
> > > > > > > > Though in fact dev_id is not a relevant information for us
> > > > > > > > here (all we need is pointer to the session and pointer to
> > > > > > > > the function to call) and I tried to avoid using it in data-path
> functions for that API.
> > > > > > >
> > > > > > > You have a single vdev, but someone may have multiple vdevs
> > > > > > > for each
> > > > thread,
> > > > > > or may
> > > > > > > Have same dev with multiple queues for each core.
> > > > > >
> > > > > > That's fine. As I said above it is a SW backed implementation.
> > > > > > Each session has to be a separate entity that contains all
> > > > > > necessary
> > > > information
> > > > > > (keys, alg/mode info, etc.) to process input buffers.
> > > > > > Plus we need the actual function pointer to call.
> > > > > > I just don't see what for we need a dev_id in that situation.
> > > > >
> > > > > To iterate the session private data in the session.
> > > > >
> > > > > > Again, here we don't need care about queues and their pinning to
> cores.
> > > > > > If let say someone would like to process buffers from the same
> > > > > > IPsec SA
> > on 2
> > > > > > different cores in parallel, he can just create 2 sessions for
> > > > > > the same
> > xform,
> > > > > > give one to thread #1 and second to thread #2.
> > > > > > After that both threads are free to call process(this_thread_ses, ...)
> at will.
> > > > >
> > > > > Say you have a 16core device to handle 100G of traffic on a single
> tunnel.
> > > > > Will we make 16 sessions with same parameters?
> > > >
> > > > Absolutely same question we can ask for current crypto-op API.
> > > > You have lookaside crypto-dev with 16 HW queues, each queue is
> > > > serviced by different CPU.
> > > > For the same SA, do you need a separate session per queue, or is
> > > > it ok to
> > reuse
> > > > current one?
> > > > AFAIK, right now this is a grey area not clearly defined.
> > > > For crypto-devs I am aware - user can reuse the same session (as
> > > > PMD uses it read-only).
> > > > But again, right now I think it is not clearly defined and is
> > > > implementation specific.
> > >
> > > User can use the same session, that is what I am also insisting, but
> > > it may have
> > separate
> > > Session private data. Cryptodev session create API provide that
> > > functionality
> > and we can
> > > Leverage that.
> >
> > rte_cryptodev_sym_session. sess_data[] is indexed by driver_id, which
> > means we can't use the same rte_cryptodev_sym_session to hold sessions
> > for both sync and async mode for the same device. Of course we can
> > add a hard requirement that any driver that wants to support process()
> > has to create sessions that can handle both process and
> > enqueue/dequeue, but then again what for to create such overhead?
> >
> > BTW, to be honest, I don't consider current rte_cryptodev_sym_session
> > construct for multiple device_ids:
> > __extension__ struct {
> > void *data;
> > uint16_t refcnt;
> > } sess_data[0];
> > /**< Driver specific session material, variable size */
> >
> Yes I also feel the same. I was also not in favor of this when it was introduced.
> Please go ahead and remove this. I have no issues with that.
>
> > as an advantage.
> > It looks too error prone for me:
> > 1. Simultaneous session initialization/de-initialization for devices
> > with the same driver_id is not possible.
> > 2. It assumes that all device driver will be loaded before we start to
> > create session pools.
> >
> > Right now it seems ok, as no-one requires such functionality, but I
> > don't know how it will be in future.
> > For me rte_security session model, where for each security context
> > user have to create new session looks much more robust.
> Agreed
>
> >
> > >
> > > BTW, I can see a v2 to this RFC which is still based on security library.
> >
> > Yes, v2 was concentrated on fixing found issues, some code
> > restructuring, i.e. - changes that would be needed anyway whatever API
> approach we'll choose.
> >
> > > When do you plan
> > > To submit the patches for crypto based APIs. We have RC1 merge
> > > deadline for
> > this
> > > patchset on 21st Oct.
> >
> > We'd like to start working on it ASAP, but it seems we still have a
> > major disagreement about how this crypto-dev API should look like.
> > Which makes me think - should we return to our original proposal via
> > rte_security?
> > It still looks to me like clean and straightforward way to enable this
> > new API, and probably wouldn't cause that much controversy.
> > What do you think?
>
> I cannot spend more time discussing on this until RC1 date. I have some other
> stuff pending.
> You can send the patches early next week with the approach that I
> mentioned or else we can discuss this post RC1(which would mean deferring
> to 20.02).
>
> But moving back to security is not acceptable to me. The code should be put
> where it is intended and not where it is easy to put. You are not doing any
> rte_security stuff.
>
>
> Regards,
> Akhil
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH v3 1/3] lib/lpm: integrate RCU QSBR
@ 2019-10-13 4:36 3% ` Honnappa Nagarahalli
2019-10-15 11:15 0% ` Ananyev, Konstantin
0 siblings, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2019-10-13 4:36 UTC (permalink / raw)
To: Ananyev, Konstantin, Richardson, Bruce, Medvedkin, Vladimir,
olivier.matz
Cc: dev, stephen, paulmck, Gavin Hu (Arm Technology China),
Dharmik Thakkar, Ruifeng Wang (Arm Technology China),
nd, Ruifeng Wang (Arm Technology China),
Honnappa Nagarahalli, nd
<snip>
> Hi guys,
I have tried to consolidate design-related questions here. If I have missed anything, please add.
>
> >
> > From: Ruifeng Wang <ruifeng.wang@arm.com>
> >
> > Currently, the tbl8 group is freed even though the readers might be
> > using the tbl8 group entries. The freed tbl8 group can be reallocated
> > quickly. This results in incorrect lookup results.
> >
> > RCU QSBR process is integrated for safe tbl8 group reclaim.
> > Refer to RCU documentation to understand various aspects of
> > integrating RCU library into other libraries.
> >
> > Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
> > ---
> > lib/librte_lpm/Makefile | 3 +-
> > lib/librte_lpm/meson.build | 2 +
> > lib/librte_lpm/rte_lpm.c | 102 +++++++++++++++++++++++++----
> > lib/librte_lpm/rte_lpm.h | 21 ++++++
> > lib/librte_lpm/rte_lpm_version.map | 6 ++
> > 5 files changed, 122 insertions(+), 12 deletions(-)
> >
> > diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile index
> > a7946a1c5..ca9e16312 100644
> > --- a/lib/librte_lpm/Makefile
> > +++ b/lib/librte_lpm/Makefile
> > @@ -6,9 +6,10 @@ include $(RTE_SDK)/mk/rte.vars.mk
> > # library name
> > LIB = librte_lpm.a
> >
> > +CFLAGS += -DALLOW_EXPERIMENTAL_API
> > CFLAGS += -O3
> > CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)
> > -LDLIBS += -lrte_eal -lrte_hash
> > +LDLIBS += -lrte_eal -lrte_hash -lrte_rcu
> >
> > EXPORT_MAP := rte_lpm_version.map
> >
> > diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build
> > index a5176d8ae..19a35107f 100644
> > --- a/lib/librte_lpm/meson.build
> > +++ b/lib/librte_lpm/meson.build
> > @@ -2,9 +2,11 @@
> > # Copyright(c) 2017 Intel Corporation
> >
> > version = 2
> > +allow_experimental_apis = true
> > sources = files('rte_lpm.c', 'rte_lpm6.c')
> > headers = files('rte_lpm.h', 'rte_lpm6.h')
> > # since header files have different names, we can install all vector headers
> > # without worrying about which architecture we actually need
> > headers += files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')
> > deps += ['hash']
> > +deps += ['rcu']
> > diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c index
> > 3a929a1b1..ca58d4b35 100644
> > --- a/lib/librte_lpm/rte_lpm.c
> > +++ b/lib/librte_lpm/rte_lpm.c
> > @@ -1,5 +1,6 @@
> > /* SPDX-License-Identifier: BSD-3-Clause
> > * Copyright(c) 2010-2014 Intel Corporation
> > + * Copyright(c) 2019 Arm Limited
> > */
> >
> > #include <string.h>
> > @@ -381,6 +382,8 @@ rte_lpm_free_v1604(struct rte_lpm *lpm)
> >
> > rte_mcfg_tailq_write_unlock();
> >
> > + if (lpm->dq)
> > + rte_rcu_qsbr_dq_delete(lpm->dq);
> > rte_free(lpm->tbl8);
> > rte_free(lpm->rules_tbl);
> > rte_free(lpm);
> > @@ -390,6 +393,59 @@ BIND_DEFAULT_SYMBOL(rte_lpm_free, _v1604,
> 16.04);
> > MAP_STATIC_SYMBOL(void rte_lpm_free(struct rte_lpm *lpm),
> > rte_lpm_free_v1604);
> >
> > +struct __rte_lpm_rcu_dq_entry {
> > + uint32_t tbl8_group_index;
> > + uint32_t pad;
> > +};
> > +
> > +static void
> > +__lpm_rcu_qsbr_free_resource(void *p, void *data) {
> > + struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> > + struct __rte_lpm_rcu_dq_entry *e =
> > + (struct __rte_lpm_rcu_dq_entry *)data;
> > + struct rte_lpm_tbl_entry *tbl8 = (struct rte_lpm_tbl_entry *)p;
> > +
> > + /* Set tbl8 group invalid */
> > + __atomic_store(&tbl8[e->tbl8_group_index], &zero_tbl8_entry,
> > + __ATOMIC_RELAXED);
> > +}
> > +
> > +/* Associate QSBR variable with an LPM object.
> > + */
> > +int
> > +rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_rcu_qsbr *v) {
> > + char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];
> > + struct rte_rcu_qsbr_dq_parameters params;
> > +
> > + if ((lpm == NULL) || (v == NULL)) {
> > + rte_errno = EINVAL;
> > + return 1;
> > + }
> > +
> > + if (lpm->dq) {
> > + rte_errno = EEXIST;
> > + return 1;
> > + }
> > +
> > + /* Init QSBR defer queue. */
> > + snprintf(rcu_dq_name, sizeof(rcu_dq_name), "LPM_RCU_%s", lpm-
> >name);
> > + params.name = rcu_dq_name;
> > + params.size = lpm->number_tbl8s;
> > + params.esize = sizeof(struct __rte_lpm_rcu_dq_entry);
> > + params.f = __lpm_rcu_qsbr_free_resource;
> > + params.p = lpm->tbl8;
> > + params.v = v;
> > + lpm->dq = rte_rcu_qsbr_dq_create(&params);
> > + if (lpm->dq == NULL) {
> > + RTE_LOG(ERR, LPM, "LPM QS defer queue creation failed\n");
> > + return 1;
> > + }
>
> Few thoughts about that function:
A few things to keep in mind: the goal of the design is to make it easy for applications to adopt lock-free algorithms. The reclamation process in the writer is a major portion of the code one has to write to use lock-free algorithms. With the current design, the writer does not have to change or add any code other than calling 'rte_lpm_rcu_qsbr_add'.
> It names rcu_qsbr_add() but in fact it allocates defer queue for give rcu var.
> So first thought - is it always necessary?
This is part of the design. If the application does not want to use this integrated logic, it does not have to call this API; it can use the RCU defer APIs to implement its own logic. But if we ask whether this integrated logic addresses most use cases of the LPM library, I think the answer is yes.
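For readers following the thread, the writer-side pattern that the integrated logic hides can be sketched with a minimal, self-contained mock. This is illustrative only: the real implementation uses rte_ring and the rte_rcu_qsbr APIs; all names below are hypothetical, and the reader/grace-period interaction is simulated with plain counters.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

#define DQ_SIZE 8

/* One defer-queue entry: the resource plus the grace-period token
 * recorded when the resource was logically deleted. */
struct dq_entry {
	uint64_t token;
	uint32_t resource;
};

struct defer_queue {
	struct dq_entry e[DQ_SIZE];
	size_t head, tail;     /* single-writer ring indices */
	uint64_t wr_token;     /* writer's grace-period counter */
	uint64_t reader_token; /* latest token all readers have passed */
};

/* Writer: logically delete a resource and defer its reclamation. */
static int dq_enqueue(struct defer_queue *dq, uint32_t resource)
{
	if (dq->tail - dq->head == DQ_SIZE)
		return -1;                    /* defer queue full */
	uint64_t token = ++dq->wr_token;      /* "start" a grace period */
	dq->e[dq->tail % DQ_SIZE] = (struct dq_entry){ token, resource };
	dq->tail++;
	return 0;
}

/* Reader side (simulated): all readers have passed 'token'. */
static void dq_readers_quiescent(struct defer_queue *dq, uint64_t token)
{
	dq->reader_token = token;
}

/* Writer: reclaim entries whose grace period is over; returns count. */
static unsigned int dq_reclaim(struct defer_queue *dq)
{
	unsigned int n = 0;
	while (dq->head != dq->tail &&
	       dq->e[dq->head % DQ_SIZE].token <= dq->reader_token) {
		/* the real code would invoke the free callback f(p, e) here */
		dq->head++;
		n++;
	}
	return n;
}
```

In the real defer queue, the token comes from rte_rcu_qsbr_start() and the head check uses rte_rcu_qsbr_check(); the mock replaces both with a monotonic counter to show the ordering constraint only.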
> For some use-cases I suppose user might be ok to wait for quiescent state
> change
> inside tbl8_free()?
Yes, that is a possibility (e.g. no frequent route changes). But I think that is fairly trivial for the application to implement. The LPM library would, however, have to separate the 'delete' and 'free' operations. Similar operations are provided in the rte_hash library. IMO, we should follow a consistent approach.
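The 'delete'/'free' split mentioned above (rte_hash exposes it as rte_hash_del_key(), which returns a position, plus rte_hash_free_key_with_position()) can be mocked in a few lines. This is an illustrative sketch of the two-phase pattern, not the rte_hash implementation:

```c
#include <assert.h>

#define NKEYS 4

struct table {
	int key[NKEYS];
	int valid[NKEYS];  /* visible to lock-free readers */
	int freed[NKEYS];  /* slot handed back to the allocator */
};

/* Phase 1: unlink the entry so readers stop finding it.
 * Returns the position so the caller can free it later. */
static int tbl_delete(struct table *t, int key)
{
	for (int i = 0; i < NKEYS; i++)
		if (t->valid[i] && t->key[i] == key) {
			t->valid[i] = 0;
			return i;
		}
	return -1;
}

/* Phase 2: reclaim the slot. Safe only once the grace period for
 * the delete has elapsed (readers can no longer hold the entry). */
static void tbl_free_position(struct table *t, int pos)
{
	t->freed[pos] = 1;
}
```

The point of the split is that the application decides when phase 2 runs: immediately after waiting for quiescence, or later from a housekeeping context.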
> Another thing you do allocate defer queue, but it is internal, so user can't call
> reclaim() manually, which looks strange.
> Why not to return defer_queue pointer to the user, so he can call reclaim()
> himself at appropriate time?
The intention of the design is to take the complexity away from the user of the LPM library. IMO, the current design will address most use cases of the LPM library. If we expose the 2 parameters (when to trigger reclamation and how much to reclaim) in the 'rte_lpm_rcu_qsbr_add' API, it should provide enough flexibility to the application.
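The two constants being discussed can be illustrated numerically. This sketch reproduces the threshold arithmetic quoted from the patch (trigger reclamation when the queue is 1/8 full; reclaim at most 1/16 of the queue per pass); the macro names below are stand-ins for the RTE_RCU_QSBR_AUTO_RECLAIM_LIMIT and RTE_RCU_QSBR_MAX_RECLAIM_LIMIT shift constants:

```c
#include <assert.h>
#include <stdint.h>

/* Shift values: 2^-3 = 1/8 of the queue triggers reclamation,
 * 2^-4 = 1/16 of the queue is the per-pass reclaim budget. */
#define AUTO_RECLAIM_LIMIT 3
#define MAX_RECLAIM_LIMIT  4

/* Should this enqueue trigger a reclamation pass? */
static int reclaim_triggered(uint32_t cur_entries, uint32_t dq_size)
{
	return cur_entries > (dq_size >> AUTO_RECLAIM_LIMIT);
}

/* Upper bound on entries reclaimed in one reclaim() call;
 * falls back to the full size for very small queues. */
static uint32_t reclaim_budget(uint32_t dq_size)
{
	uint32_t max_cnt = dq_size >> MAX_RECLAIM_LIMIT;
	return (max_cnt == 0) ? dq_size : max_cnt;
}
```

Making the two shift values create-time parameters (with 0 meaning "use default") would keep this behaviour while letting advanced users tune it.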
> Third thing - you always allocate defer queue with size equal to number of
> tbl8.
> Though I understand it could be up to 16M tbl8 groups inside the LPM.
> Do we really need defer queue that long?
No, we do not need it to be this long. It is this long today to avoid returning a no-space error on the defer queue.
> Especially considering that current rcu_defer_queue will start reclamation
> when 1/8 of defer_quueue becomes full and wouldn't reclaim more then
> 1/16 of it.
> Probably better to let user to decide himself how long defer_queue he needs
> for that LPM?
It makes sense to expose it to the user if the writer-writer concurrency is lock-free (no memory allocation allowed to expand the defer queue size when the queue is full). However, LPM is not lock-free on the writer side. If we think the writer could be lock-free in the future, it has to be exposed to the user.
>
> Konstantin
Pulling questions/comments from other threads:
Can we leave reclamation to some other house-keeping thread to do (sort of garbage collector). Or such mode is not supported/planned?
[Honnappa] If the reclamation cost is small, the current method has advantages over a separate reclamation thread. I did not plan to provide such an option. But maybe it makes sense to keep the options open (especially from an ABI perspective). Maybe we should add a flags field which will allow us to implement different methods in the future?
>
>
> > +
> > + return 0;
> > +}
> > +
> > /*
> > * Adds a rule to the rule table.
> > *
> > @@ -679,14 +735,15 @@ tbl8_alloc_v20(struct rte_lpm_tbl_entry_v20
> > *tbl8) }
> >
> > static int32_t
> > -tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > number_tbl8s)
> > +__tbl8_alloc_v1604(struct rte_lpm *lpm)
> > {
> > uint32_t group_idx; /* tbl8 group index. */
> > struct rte_lpm_tbl_entry *tbl8_entry;
> >
> > /* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */
> > - for (group_idx = 0; group_idx < number_tbl8s; group_idx++) {
> > - tbl8_entry = &tbl8[group_idx *
> RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > + for (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {
> > + tbl8_entry = &lpm->tbl8[group_idx *
> > +
> RTE_LPM_TBL8_GROUP_NUM_ENTRIES];
> > /* If a free tbl8 group is found clean it and set as VALID. */
> > if (!tbl8_entry->valid_group) {
> > struct rte_lpm_tbl_entry new_tbl8_entry = { @@ -
> 712,6 +769,21 @@
> > tbl8_alloc_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)
> > return -ENOSPC;
> > }
> >
> > +static int32_t
> > +tbl8_alloc_v1604(struct rte_lpm *lpm) {
> > + int32_t group_idx; /* tbl8 group index. */
> > +
> > + group_idx = __tbl8_alloc_v1604(lpm);
> > + if ((group_idx < 0) && (lpm->dq != NULL)) {
> > + /* If there are no tbl8 groups try to reclaim some. */
> > + if (rte_rcu_qsbr_dq_reclaim(lpm->dq) == 0)
> > + group_idx = __tbl8_alloc_v1604(lpm);
> > + }
> > +
> > + return group_idx;
> > +}
> > +
> > static void
> > tbl8_free_v20(struct rte_lpm_tbl_entry_v20 *tbl8, uint32_t
> > tbl8_group_start) { @@ -728,13 +800,21 @@ tbl8_free_v20(struct
> > rte_lpm_tbl_entry_v20 *tbl8, uint32_t tbl8_group_start) }
> >
> > static void
> > -tbl8_free_v1604(struct rte_lpm_tbl_entry *tbl8, uint32_t
> > tbl8_group_start)
> > +tbl8_free_v1604(struct rte_lpm *lpm, uint32_t tbl8_group_start)
> > {
> > - /* Set tbl8 group invalid*/
> > struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
> > + struct __rte_lpm_rcu_dq_entry e;
> >
> > - __atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,
> > - __ATOMIC_RELAXED);
> > + if (lpm->dq != NULL) {
> > + e.tbl8_group_index = tbl8_group_start;
> > + e.pad = 0;
> > + /* Push into QSBR defer queue. */
> > + rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&e);
> > + } else {
> > + /* Set tbl8 group invalid*/
> > + __atomic_store(&lpm->tbl8[tbl8_group_start],
> &zero_tbl8_entry,
> > + __ATOMIC_RELAXED);
> > + }
> > }
> >
> > static __rte_noinline int32_t
> > @@ -1037,7 +1117,7 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> > uint32_t ip_masked, uint8_t depth,
> >
> > if (!lpm->tbl24[tbl24_index].valid) {
> > /* Search for a free tbl8 group. */
> > - tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm-
> >number_tbl8s);
> > + tbl8_group_index = tbl8_alloc_v1604(lpm);
> >
> > /* Check tbl8 allocation was successful. */
> > if (tbl8_group_index < 0) {
> > @@ -1083,7 +1163,7 @@ add_depth_big_v1604(struct rte_lpm *lpm,
> uint32_t ip_masked, uint8_t depth,
> > } /* If valid entry but not extended calculate the index into Table8. */
> > else if (lpm->tbl24[tbl24_index].valid_group == 0) {
> > /* Search for free tbl8 group. */
> > - tbl8_group_index = tbl8_alloc_v1604(lpm->tbl8, lpm-
> >number_tbl8s);
> > + tbl8_group_index = tbl8_alloc_v1604(lpm);
> >
> > if (tbl8_group_index < 0) {
> > return tbl8_group_index;
> > @@ -1818,7 +1898,7 @@ delete_depth_big_v1604(struct rte_lpm *lpm,
> uint32_t ip_masked,
> > */
> > lpm->tbl24[tbl24_index].valid = 0;
> > __atomic_thread_fence(__ATOMIC_RELEASE);
> > - tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
> > + tbl8_free_v1604(lpm, tbl8_group_start);
> > } else if (tbl8_recycle_index > -1) {
> > /* Update tbl24 entry. */
> > struct rte_lpm_tbl_entry new_tbl24_entry = { @@ -1834,7
> +1914,7 @@
> > delete_depth_big_v1604(struct rte_lpm *lpm, uint32_t ip_masked,
> > __atomic_store(&lpm->tbl24[tbl24_index],
> &new_tbl24_entry,
> > __ATOMIC_RELAXED);
> > __atomic_thread_fence(__ATOMIC_RELEASE);
> > - tbl8_free_v1604(lpm->tbl8, tbl8_group_start);
> > + tbl8_free_v1604(lpm, tbl8_group_start);
> > }
> > #undef group_idx
> > return 0;
> > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h index
> > 906ec4483..49c12a68d 100644
> > --- a/lib/librte_lpm/rte_lpm.h
> > +++ b/lib/librte_lpm/rte_lpm.h
> > @@ -1,5 +1,6 @@
> > /* SPDX-License-Identifier: BSD-3-Clause
> > * Copyright(c) 2010-2014 Intel Corporation
> > + * Copyright(c) 2019 Arm Limited
> > */
> >
> > #ifndef _RTE_LPM_H_
> > @@ -21,6 +22,7 @@
> > #include <rte_common.h>
> > #include <rte_vect.h>
> > #include <rte_compat.h>
> > +#include <rte_rcu_qsbr.h>
> >
> > #ifdef __cplusplus
> > extern "C" {
> > @@ -186,6 +188,7 @@ struct rte_lpm {
> > __rte_cache_aligned; /**< LPM tbl24 table. */
> > struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */
> > struct rte_lpm_rule *rules_tbl; /**< LPM rules. */
> > + struct rte_rcu_qsbr_dq *dq; /**< RCU QSBR defer queue.*/
> > };
> >
> > /**
> > @@ -248,6 +251,24 @@ rte_lpm_free_v20(struct rte_lpm_v20 *lpm);
> void
> > rte_lpm_free_v1604(struct rte_lpm *lpm);
> >
> > +/**
> > + * Associate RCU QSBR variable with an LPM object.
> > + *
> > + * @param lpm
> > + * the lpm object to add RCU QSBR
> > + * @param v
> > + * RCU QSBR variable
> > + * @return
> > + * On success - 0
> > + * On error - 1 with error code set in rte_errno.
> > + * Possible rte_errno codes are:
> > + * - EINVAL - invalid pointer
> > + * - EEXIST - already added QSBR
> > + * - ENOMEM - memory allocation failure
> > + */
> > +__rte_experimental
> > +int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_rcu_qsbr
> > +*v);
> > +
> > /**
> > * Add a rule to the LPM table.
> > *
> > diff --git a/lib/librte_lpm/rte_lpm_version.map
> > b/lib/librte_lpm/rte_lpm_version.map
> > index 90beac853..b353aabd2 100644
> > --- a/lib/librte_lpm/rte_lpm_version.map
> > +++ b/lib/librte_lpm/rte_lpm_version.map
> > @@ -44,3 +44,9 @@ DPDK_17.05 {
> > rte_lpm6_lookup_bulk_func;
> >
> > } DPDK_16.04;
> > +
> > +EXPERIMENTAL {
> > + global:
> > +
> > + rte_lpm_rcu_qsbr_add;
> > +};
> > --
> > 2.17.1
^ permalink raw reply [relevance 3%]
* Re: [dpdk-dev] [PATCH v3 2/3] lib/rcu: add resource reclamation APIs
@ 2019-10-13 4:35 0% ` Honnappa Nagarahalli
0 siblings, 0 replies; 200+ results
From: Honnappa Nagarahalli @ 2019-10-13 4:35 UTC (permalink / raw)
To: Ananyev, Konstantin, stephen, paulmck
Cc: Wang, Yipeng1, Medvedkin, Vladimir,
Ruifeng Wang (Arm Technology China),
Dharmik Thakkar, dev, Honnappa Nagarahalli, nd, nd
<snip>
> > > > > > Add resource reclamation APIs to make it simple for
> > > > > > applications and libraries to integrate rte_rcu library.
> > > > > >
> > > > > > Signed-off-by: Honnappa Nagarahalli
> > > > > > <honnappa.nagarahalli@arm.com>
> > > > > > Reviewed-by: Ola Liljedhal <ola.liljedhal@arm.com>
> > > > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > > > ---
> > > > > > app/test/test_rcu_qsbr.c | 291
> ++++++++++++++++++++++++++++-
> > > > > > lib/librte_rcu/meson.build | 2 +
> > > > > > lib/librte_rcu/rte_rcu_qsbr.c | 185 ++++++++++++++++++
> > > > > > lib/librte_rcu/rte_rcu_qsbr.h | 169 +++++++++++++++++
> > > > > > lib/librte_rcu/rte_rcu_qsbr_pvt.h | 46 +++++
> > > > > > lib/librte_rcu/rte_rcu_version.map | 4 +
> > > > > > lib/meson.build | 6 +-
> > > > > > 7 files changed, 700 insertions(+), 3 deletions(-) create
> > > > > > mode
> > > > > > 100644 lib/librte_rcu/rte_rcu_qsbr_pvt.h
> > > > > >
> > > > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.c
> > > > > > b/lib/librte_rcu/rte_rcu_qsbr.c index ce7f93dd3..76814f50b
> > > > > > 100644
> > > > > > --- a/lib/librte_rcu/rte_rcu_qsbr.c
> > > > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.c
> > > > > > @@ -21,6 +21,7 @@
> > > > > > #include <rte_errno.h>
> > > > > >
> > > > > > #include "rte_rcu_qsbr.h"
> > > > > > +#include "rte_rcu_qsbr_pvt.h"
> > > > > >
> > > > > > /* Get the memory size of QSBR variable */ size_t @@ -267,6
> > > > > > +268,190 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> > > > > > return 0;
> > > > > > }
> > > > > >
> > > > > > +/* Create a queue used to store the data structure elements
> > > > > > +that can
> > > > > > + * be freed later. This queue is referred to as 'defer queue'.
> > > > > > + */
> > > > > > +struct rte_rcu_qsbr_dq *
> > > > > > +rte_rcu_qsbr_dq_create(const struct
> > > > > > +rte_rcu_qsbr_dq_parameters
> > > > > > +*params) {
> > > > > > + struct rte_rcu_qsbr_dq *dq;
> > > > > > + uint32_t qs_fifo_size;
> > > > > > +
> > > > > > + if (params == NULL || params->f == NULL ||
> > > > > > + params->v == NULL || params->name == NULL ||
> > > > > > + params->size == 0 || params->esize == 0 ||
> > > > > > + (params->esize % 8 != 0)) {
> > > > > > + rte_log(RTE_LOG_ERR, rte_rcu_log_type,
> > > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return NULL;
> > > > > > + }
> > > > > > +
> > > > > > + dq = rte_zmalloc(NULL,
> > > > > > + (sizeof(struct rte_rcu_qsbr_dq) + params->esize),
> > > > > > + RTE_CACHE_LINE_SIZE);
> > > > > > + if (dq == NULL) {
> > > > > > + rte_errno = ENOMEM;
> > > > > > +
> > > > > > + return NULL;
> > > > > > + }
> > > > > > +
> > > > > > + /* round up qs_fifo_size to next power of two that is not less
> than
> > > > > > + * max_size.
> > > > > > + */
> > > > > > + qs_fifo_size = rte_align32pow2((((params->esize/8) + 1)
> > > > > > + * params->size) + 1);
> > > > > > + dq->r = rte_ring_create(params->name, qs_fifo_size,
> > > > > > + SOCKET_ID_ANY, 0);
> > > > >
> > > > > If it is going to be not MT safe, then why not to create the
> > > > > ring with (RING_F_SP_ENQ | RING_F_SC_DEQ) flags set?
> > > > Agree.
> > > >
> > > > > Though I think it could be changed to allow MT safe multiple
> > > > > enqeue/single dequeue, see below.
> > > > The MT safe issue is due to reclaim code. The reclaim code has the
> > > > following
> > > sequence:
> > > >
> > > > rte_ring_peek
> > > > rte_rcu_qsbr_check
> > > > rte_ring_dequeue
> > > >
> > > > This entire sequence needs to be atomic as the entry cannot be
> > > > dequeued
> > > without knowing that the grace period for that entry is over.
> > >
> > > I understand that, though I believe at least it should be possible
> > > to support multiple-enqueue/single dequeuer and reclaim mode.
> > > With serialized dequeue() even multiple dequeue should be possible.
> > Agreed. Please see the response on the other thread.
> >
> > >
> > > > Note that due to optimizations in rte_rcu_qsbr_check API, this
> > > > sequence should not be large in most cases. I do not have ideas on
> > > > how to
> > > make this sequence lock-free.
> > > >
> > > > If the writer is on the control plane, most use cases will use
> > > > mutex locks for synchronization if they are multi-threaded. That
> > > > lock should be
> > > enough to provide the thread safety for these APIs.
> > >
> > > In that is case, why do we need ring at all?
> > > For sure people can create their own queue quite easily with mutex and
> TAILQ.
> > > If performance is not an issue, they can even add pthread_cond to
> > > it, and have an ability for the consumer to sleep/wakeup on empty/full
> queue.
> > >
> > > >
> > > > If the writer is multi-threaded and lock-free, then one should use
> > > > per thread
> > > defer queue.
> > >
> > > If that's the only working model, then the question is why do we
> > > need that API at all?
> > > Just simple array with counter or linked-list should do for majority of
> cases.
> > Please see the other thread.
> >
> > >
> > > >
> > > > >
> > > > > > + if (dq->r == NULL) {
> > > > > > + rte_log(RTE_LOG_ERR, rte_rcu_log_type,
> > > > > > + "%s(): defer queue create failed\n",
> __func__);
> > > > > > + rte_free(dq);
> > > > > > + return NULL;
> > > > > > + }
> > > > > > +
> > > > > > + dq->v = params->v;
> > > > > > + dq->size = params->size;
> > > > > > + dq->esize = params->esize;
> > > > > > + dq->f = params->f;
> > > > > > + dq->p = params->p;
> > > > > > +
> > > > > > + return dq;
> > > > > > +}
> > > > > > +
> > > > > > +/* Enqueue one resource to the defer queue to free after the
> > > > > > +grace
> > > > > > + * period is over.
> > > > > > + */
> > > > > > +int rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e)
> {
> > > > > > + uint64_t token;
> > > > > > + uint64_t *tmp;
> > > > > > + uint32_t i;
> > > > > > + uint32_t cur_size, free_size;
> > > > > > +
> > > > > > + if (dq == NULL || e == NULL) {
> > > > > > + rte_log(RTE_LOG_ERR, rte_rcu_log_type,
> > > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > >
> > > > > Why just not to return -EINVAL straightway?
> > > > > I think there is no much point to set rte_errno in that function
> > > > > at all, just return value should do.
> > > > I am trying to keep these consistent with the existing APIs. They
> > > > return 0 or 1
> > > and set the rte_errno.
> > >
> > > A lot of public DPDK API functions do use return value to return
> > > status code (0, or some positive numbers of success, negative errno
> > > values on failure), I am not inventing anything new here.
> > Agree, you are not proposing a new thing here. May be I was not clear.
> > I really do not have an opinion on how this should be done. But, I do have
> an opinion on consistency. These new APIs follow what has been done in the
> existing RCU APIs. I think we have 2 options here.
> > 1) Either we change existing RCU APIs to get rid of rte_errno (is it
> > an ABI change?) or
> > 2) The new APIs follow what has been done in the existing RCU APIs.
> > I want to make sure we are consistent at least within RCU APIs.
>
> But as I can see right now rcu API sets rte_errno only for control-path
> functions (get_memsize, init, register, unregister, dump).
> All fast-path (inline) function don't set/use it.
> So from perspective that is consistent behavior, no?
Agree. I am treating this as a control plane function mainly (hence it is a non-inline function as well).
>
> >
> > >
> > > >
> > > > >
> > > > > > + }
> > > > > > +
> > > > > > + /* Start the grace period */
> > > > > > + token = rte_rcu_qsbr_start(dq->v);
> > > > > > +
> > > > > > + /* Reclaim resources if the queue is 1/8th full. This helps
> > > > > > + * the queue from growing too large and allows time for
> reader
> > > > > > + * threads to report their quiescent state.
> > > > > > + */
> > > > > > + cur_size = rte_ring_count(dq->r) / (dq->esize/8 + 1);
> > > > >
> > > > > Probably would be a bit easier if you just store in dq->esize
> > > > > (elt size + token
> > > > > size) / 8.
> > > > Agree
> > > >
> > > > >
> > > > > > + if (cur_size > (dq->size >>
> > > > > > +RTE_RCU_QSBR_AUTO_RECLAIM_LIMIT)) {
> > > > >
> > > > > Why to make this threshold value hard-coded?
> > > > > Why either not to put it into create parameter, or just return a
> > > > > special return value, to indicate that threshold is reached?
> > > > My thinking was to keep the programming interface easy to use. The
> > > > more the parameters, the more painful it is for the user. IMO, the
> > > > constants chosen should be good enough for most cases. More
> > > > advanced
> > > users could modify the constants. However, we could make these as
> > > part of the parameters, but make them optional for the user. For ex:
> > > if they set them to 0, default values can be used.
> > > >
> > > > > Or even return number of filled/free entroes on success, so
> > > > > caller can decide to reclaim or not based on that information on his
> own?
> > > > This means more code on the user side.
> > >
> > > I personally think it it really wouldn't be that big problem to the
> > > user to pass extra parameter to the function.
> > I will convert the 2 constants into optional parameters (user can set
> > them to 0 to make the algorithm use default values)
> >
> > > Again what if user doesn't want to reclaim() in enqueue() thread at all?
> > 'enqueue' has to do reclamation if the defer queue is full. I do not think this
> is trivial.
> >
> > In the current design, reclamation in enqueue is also done on regular
> > basis (automatic triggering of reclamation when the queue reaches
> > certain limit) to keep the queue from growing too large. This is
> > required when we implement a dynamically adjusting defer queue. The
> current algorithm keeps the cost of reclamation spread across multiple calls
> and puts an upper bound on cycles for delete API by reclaiming a fixed
> number of entries.
> >
> > This algorithm is proven to work in the LPM integration performance
> > tests at a very low performance over head (~1%). So, I do not know why a
> user would not want to use this.
>
> Yeh, I looked at LPM implementation and one thing I found strange -
> defer_queue is hidden inside LPM struct and all reclamations are done
> internally.
> Yes for sure it allows to defer and group actual reclaim(), which hopefully will
> lead to better performance.
> But why not to allow user to call reclaim() for it directly too?
> In that way user might avoid/(minimize) doing reclaim() in LPM write() at all.
> And let say do it somewhere later in the same thread (when no other tasks to
> do), or even leave it to some other house-keeping thread to do (sort of
> garbage collector).
> Or such mode is not supported/planned?
The goal of integrating the RCU defer APIs with libraries is to take away the complexity the writer faces in adopting lock-free algorithms. I am looking to address the most common use cases. There will be use cases that are not very common; I think those should be addressed by the application using the base RCU APIs. Let us discuss this more in the other thread, where you have similar questions.
>
> > The 2 additional parameters should give the user more flexibility.
>
> Ok, let's keep it as config params.
> After another though - I think you right, it should be good enough.
>
> >
> > However, if the user wants his own algorithm, he can create one with the
> base APIs provided.
> >
> > >
> > > > I think adding these to parameters seems like a better option.
> > > >
> > > > >
> > > > > > + rte_log(RTE_LOG_INFO, rte_rcu_log_type,
> > > > > > + "%s(): Triggering reclamation\n", __func__);
> > > > > > + rte_rcu_qsbr_dq_reclaim(dq);
> > > > > > + }
> > > > > > +
> > > > > > + /* Check if there is space for atleast for 1 resource */
> > > > > > + free_size = rte_ring_free_count(dq->r) / (dq->esize/8 + 1);
> > > > > > + if (!free_size) {
> > > > > > + rte_log(RTE_LOG_ERR, rte_rcu_log_type,
> > > > > > + "%s(): Defer queue is full\n", __func__);
> > > > > > + rte_errno = ENOSPC;
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + /* Enqueue the resource */
> > > > > > + rte_ring_sp_enqueue(dq->r, (void *)(uintptr_t)token);
> > > > > > +
> > > > > > + /* The resource to enqueue needs to be a multiple of 64b
> > > > > > + * due to the limitation of the rte_ring implementation.
> > > > > > + */
> > > > > > + for (i = 0, tmp = (uint64_t *)e; i < dq->esize/8; i++, tmp++)
> > > > > > + rte_ring_sp_enqueue(dq->r, (void *)(uintptr_t)*tmp);
> > > > >
> > > > >
> > > > > That whole construction above looks a bit clumsy and error prone...
> > > > > I suppose just:
> > > > >
> > > > > const uint32_t nb_elt = dq->elt_size/8 + 1; uint32_t free, n; ...
> > > > > n = rte_ring_enqueue_bulk(dq->r, e, nb_elt, &free); if (n == 0)
> > > > Yes, bulk enqueue can be used. But note that once the flexible
> > > > element size
> > > ring patch is done, this code will use that.
> > >
> > > Well, when it will be in the mainline, and it would provide a better
> > > way, for sure this code can be updated to use new API (if it is provide
> some improvements).
> > > But as I understand, right now it is not there, while bulk
> enqueue/dequeue are.
> > Apologies, I was not clear. I agree we can go with bulk APIs for now.
> >
> > >
> > > >
> > > > > return -ENOSPC;
> > > > > return free;
> > > > >
> > > > > That way I think you can have MT-safe version of that function.
> > > > Please see the description of MT safe issue above.
> > > >
> > > > >
> > > > > > +
> > > > > > + return 0;
> > > > > > +}
> > > > > > +
> > > > > > +/* Reclaim resources from the defer queue. */ int
> > > > > > +rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq) {
> > > > > > + uint32_t max_cnt;
> > > > > > + uint32_t cnt;
> > > > > > + void *token;
> > > > > > + uint64_t *tmp;
> > > > > > + uint32_t i;
> > > > > > +
> > > > > > + if (dq == NULL) {
> > > > > > + rte_log(RTE_LOG_ERR, rte_rcu_log_type,
> > > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > >
> > > > > Same story as above - I think rte_errno is excessive in this function.
> > > > > Just return value should be enough.
> > > > >
> > > > >
> > > > > > + }
> > > > > > +
> > > > > > + /* Anything to reclaim? */
> > > > > > + if (rte_ring_count(dq->r) == 0)
> > > > > > + return 0;
> > > > >
> > > > > Not sure you need that, see below.
> > > > >
> > > > > > +
> > > > > > + /* Reclaim at the max 1/16th the total number of entries. */
> > > > > > + max_cnt = dq->size >> RTE_RCU_QSBR_MAX_RECLAIM_LIMIT;
> > > > > > + max_cnt = (max_cnt == 0) ? dq->size : max_cnt;
> > > > >
> > > > > Again why not to make max_cnt a configurable at create() parameter?
> > > > I think making this as an optional parameter for creating defer
> > > > queue is a
> > > better option.
> > > >
> > > > > Or even a parameter for that function?
> > > > >
> > > > > > + cnt = 0;
> > > > > > +
> > > > > > + /* Check reader threads quiescent state and reclaim
> resources */
> > > > > > + while ((cnt < max_cnt) && (rte_ring_peek(dq->r, &token) ==
> 0) &&
> > > > > > + (rte_rcu_qsbr_check(dq->v,
> (uint64_t)((uintptr_t)token), false)
> > > > > > + == 1)) {
> > > > >
> > > > >
> > > > > > + (void)rte_ring_sc_dequeue(dq->r, &token);
> > > > > > + /* The resource to dequeue needs to be a multiple of
> 64b
> > > > > > + * due to the limitation of the rte_ring
> implementation.
> > > > > > + */
> > > > > > + for (i = 0, tmp = (uint64_t *)dq->e; i < dq->esize/8;
> > > > > > + i++, tmp++)
> > > > > > + (void)rte_ring_sc_dequeue(dq->r,
> > > > > > + (void *)(uintptr_t)tmp);
> > > > >
> > > > > Again, no need for such constructs with multiple dequeuer I believe.
> > > > > Just:
> > > > >
> > > > > const uint32_t nb_elt = dq->elt_size/8 + 1; uint32_t n;
> > > > > uintptr_t elt[nb_elt]; ...
> > > > > n = rte_ring_dequeue_bulk(dq->r, elt, nb_elt, NULL); if (n != 0)
> > > > > {dq->f(dq->p, elt);}
> > > > Agree on bulk API use.
> > > >
> > > > >
> > > > > Seems enough.
> > > > > Again in that case you can have enqueue/reclaim running in
> > > > > different threads simultaneously, plus you don't need dq->e at all.
> > > > Will check on dq->e
> > > >
> > > > >
> > > > > > + dq->f(dq->p, dq->e);
> > > > > > +
> > > > > > + cnt++;
> > > > > > + }
> > > > > > +
> > > > > > + rte_log(RTE_LOG_INFO, rte_rcu_log_type,
> > > > > > + "%s(): Reclaimed %u resources\n", __func__, cnt);
> > > > > > +
> > > > > > + if (cnt == 0) {
> > > > > > + /* No resources were reclaimed */
> > > > > > + rte_errno = EAGAIN;
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + return 0;
> > > > >
> > > > > I'd suggest to return cnt on success.
> > > > I am trying to keep the APIs simple. I do not see much use for 'cnt'
> > > > as return value to the user. It exposes more details which I think
> > > > are internal
> > > to the library.
> > >
> > > Not sure what is the hassle to return the number of completed reclamations?
> > > If the user doesn't need that information, he simply wouldn't use it.
> > > But maybe it would be useful - he can decide whether he should try
> > > another reclaim() attempt immediately or whether it is ok to do something else.
> > There is no hassle to return that information.
> >
> > As per the current design, user calls 'reclaim' when it is out of
> > resources while adding an entry to the data structure. At that point
> > the user wants to know if at least 1 resource was reclaimed because the
> user has to allocate 1 resource. He does not have a use for the number of
> resources reclaimed.
>
> Ok, but why can't the user decide to do reclaim in advance, let's say when he
> foresees that he would need a lot of allocations in the near future?
> Or when there is some idle time? Or some combination of these things?
> And he would like to free some extra resources in that case to minimize
> the number of reclaims in a future peak interval?
If the user has free time he can call the reclaim API. By making the parameters configurable, he should be able to control how much he can reclaim.
If the user wants to make sure that he has enough free resources for the future. He should be able to do it by knowing how many free resources are available in his data structure currently.
But, I do not see it as a problem to return the number of resources reclaimed. I will add that.
>
> >
> > If this API returns 0, then the user can decide to repeat the call or
> > return failure. But that decision depends on the length of the grace period
> which is under user's control.
> >
> > >
> > > >
> > > > >
> > > > > > +}
> > > > > > +
> > > > > > +/* Delete a defer queue. */
> > > > > > +int
> > > > > > +rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq) {
> > > > > > + if (dq == NULL) {
> > > > > > + rte_log(RTE_LOG_ERR, rte_rcu_log_type,
> > > > > > + "%s(): Invalid input parameter\n", __func__);
> > > > > > + rte_errno = EINVAL;
> > > > > > +
> > > > > > + return 1;
> > > > > > + }
> > > > > > +
> > > > > > + /* Reclaim all the resources */
> > > > > > + if (rte_rcu_qsbr_dq_reclaim(dq) != 0)
> > > > > > + /* Error number is already set by the reclaim API */
> > > > > > + return 1;
> > > > >
> > > > > How do you know that you have reclaimed everything?
> > > > Good point, will come back with a different solution.
> > > >
> > > > >
> > > > > > +
> > > > > > + rte_ring_free(dq->r);
> > > > > > + rte_free(dq);
> > > > > > +
> > > > > > + return 0;
> > > > > > +}
> > > > > > +
> > > > > > int rte_rcu_log_type;
> > > > > >
> > > > > > RTE_INIT(rte_rcu_register)
> > > > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > > b/lib/librte_rcu/rte_rcu_qsbr.h index c80f15c00..185d4b50a
> > > > > > 100644
> > > > > > --- a/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > > +++ b/lib/librte_rcu/rte_rcu_qsbr.h
> > > > > > @@ -34,6 +34,7 @@ extern "C" { #include <rte_lcore.h>
> > > > > > #include <rte_debug.h> #include <rte_atomic.h>
> > > > > > +#include <rte_ring.h>
> > > > > >
> > > > > > extern int rte_rcu_log_type;
> > > > > >
> > > > > > @@ -109,6 +110,67 @@ struct rte_rcu_qsbr {
> > > > > > */
> > > > > > } __rte_cache_aligned;
> > > > > >
> > > > > > +/**
> > > > > > + * Call back function called to free the resources.
> > > > > > + *
> > > > > > + * @param p
> > > > > > + * Pointer provided while creating the defer queue
> > > > > > + * @param e
> > > > > > + * Pointer to the resource data stored on the defer queue
> > > > > > + *
> > > > > > + * @return
> > > > > > + * None
> > > > > > + */
> > > > > > +typedef void (*rte_rcu_qsbr_free_resource)(void *p, void *e);
> > > > >
> > > > > Stylish thing - usually in DPDK we have typedf newtype_t ...
> > > > > Though I am not sure you need a new typedef at all - just a
> > > > > function pointer inside the struct seems enough.
> > > > Other libraries (for ex: rte_hash) use this approach. I think it
> > > > is better to keep
> > > it out of the structure to allow for better commenting.
> > >
> > > I am saying majority of DPDK code use _t suffix for typedef:
> > > typedef void (*rte_rcu_qsbr_free_resource_t)(void *p, void *e);
> > Apologies, got it, will change.
> >
> > >
> > > >
> > > > >
> > > > > > +
> > > > > > +#define RTE_RCU_QSBR_DQ_NAMESIZE RTE_RING_NAMESIZE
> > > > > > +
> > > > > > +/**
> > > > > > + * Trigger automatic reclamation after 1/8th the defer queue is full.
> > > > > > + */
> > > > > > +#define RTE_RCU_QSBR_AUTO_RECLAIM_LIMIT 3
> > > > > > +
> > > > > > +/**
> > > > > > + * Reclaim at the max 1/16th the total number of resources.
> > > > > > + */
> > > > > > +#define RTE_RCU_QSBR_MAX_RECLAIM_LIMIT 4
> > > > >
> > > > >
> > > > > As I said above, I don't think these thresholds need to be hardcoded.
> > > > > In any case, there seems not much point to put them in the
> > > > > public header
> > > file.
> > > > >
> > > > > > +
> > > > > > +/**
> > > > > > + * Parameters used when creating the defer queue.
> > > > > > + */
> > > > > > +struct rte_rcu_qsbr_dq_parameters {
> > > > > > + const char *name;
> > > > > > + /**< Name of the queue. */
> > > > > > + uint32_t size;
> > > > > > + /**< Number of entries in queue. Typically, this will be
> > > > > > + * the same as the maximum number of entries supported in
> the
> > > > > > + * lock free data structure.
> > > > > > + * Data structures with unbounded number of entries is not
> > > > > > + * supported currently.
> > > > > > + */
> > > > > > + uint32_t esize;
> > > > > > + /**< Size (in bytes) of each element in the defer queue.
> > > > > > + * This has to be multiple of 8B as the rte_ring APIs
> > > > > > + * support 8B element sizes only.
> > > > > > + */
> > > > > > + rte_rcu_qsbr_free_resource f;
> > > > > > + /**< Function to call to free the resource. */
> > > > > > + void *p;
> > > > >
> > > > > Style nit again - I like short names myself, but that seems a
> > > > > bit extreme... :) Might be at least:
> > > > > void (*reclaim)(void *, void *);
> > > > May be 'free_fn'?
> > > >
> > > > > void * reclaim_data;
> > > > > ?
> > > > This is the pointer to the data structure to free the resource
> > > > into. For ex: In
> > > LPM data structure, it will be pointer to LPM. 'reclaim_data'
> > > > does not convey the meaning correctly.
> > >
> > > Ok, please feel free to come up with your own names.
> > > I just wanted to say that 'f' and 'p' are a bit an extreme for public API.
> > ok, this is the hardest thing to do 😊
> >
> > >
> > > >
> > > > >
> > > > > > + /**< Pointer passed to the free function. Typically, this is the
> > > > > > + * pointer to the data structure to which the resource to
> free
> > > > > > + * belongs. This can be NULL.
> > > > > > + */
> > > > > > + struct rte_rcu_qsbr *v;
> > > > >
> > > > > Does it need to be inside that struct?
> > > > > Might be better:
> > > > > rte_rcu_qsbr_dq_create(struct rte_rcu_qsbr *v, const struct
> > > > > rte_rcu_qsbr_dq_parameters *params);
> > > > The API takes a parameter structure as input anyway, why add
> > > > another argument to the function? The QSBR variable is also another
> parameter.
> > > >
> > > > >
> > > > > Another alternative: make both reclaim() and enqueue() to take v
> > > > > as a parameter.
> > > > But both of them need access to some of the parameters provided in
> > > > rte_rcu_qsbr_dq_create API. We would end up passing 2 arguments to
> > > > the
> > > functions.
> > >
> > > Pure stylish thing.
> > > From my perspective it just provides better visibility what is going in the
> code:
> > > For QSBR var 'v' create a new deferred queue.
> > > But no strong opinion here.
> > >
> > > >
> > > > >
> > > > > > + /**< RCU QSBR variable to use for this defer queue */ };
> > > > > > +
> > > > > > +/* RTE defer queue structure.
> > > > > > + * This structure holds the defer queue. The defer queue is
> > > > > > +used to
> > > > > > + * hold the deleted entries from the data structure that are
> > > > > > +not
> > > > > > + * yet freed.
> > > > > > + */
> > > > > > +struct rte_rcu_qsbr_dq;
> > > > > > +
> > > > > > /**
> > > > > > * @warning
> > > > > > * @b EXPERIMENTAL: this API may change without prior notice
> > > > > > @@
> > > > > > -648,6 +710,113 @@ __rte_experimental int
> > > > > > rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v);
> > > > > >
> > > > > > +/**
> > > > > > + * @warning
> > > > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > > > + *
> > > > > > + * Create a queue used to store the data structure elements
> > > > > > +that can
> > > > > > + * be freed later. This queue is referred to as 'defer queue'.
> > > > > > + *
> > > > > > + * @param params
> > > > > > + * Parameters to create a defer queue.
> > > > > > + * @return
> > > > > > + * On success - Valid pointer to defer queue
> > > > > > + * On error - NULL
> > > > > > + * Possible rte_errno codes are:
> > > > > > + * - EINVAL - NULL parameters are passed
> > > > > > + * - ENOMEM - Not enough memory
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +struct rte_rcu_qsbr_dq *
> > > > > > +rte_rcu_qsbr_dq_create(const struct
> > > > > > +rte_rcu_qsbr_dq_parameters *params);
> > > > > > +
> > > > > > +/**
> > > > > > + * @warning
> > > > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > > > + *
> > > > > > + * Enqueue one resource to the defer queue and start the grace
> period.
> > > > > > + * The resource will be freed later after at least one grace
> > > > > > +period
> > > > > > + * is over.
> > > > > > + *
> > > > > > + * If the defer queue is full, it will attempt to reclaim resources.
> > > > > > + * It will also reclaim resources at regular intervals to
> > > > > > +avoid
> > > > > > + * the defer queue from growing too big.
> > > > > > + *
> > > > > > + * This API is not multi-thread safe. It is expected that the
> > > > > > +caller
> > > > > > + * provides multi-thread safety by locking a mutex or some other
> means.
> > > > > > + *
> > > > > > + * A lock free multi-thread writer algorithm could achieve
> > > > > > +multi-thread
> > > > > > + * safety by creating and using one defer queue per thread.
> > > > > > + *
> > > > > > + * @param dq
> > > > > > + * Defer queue to allocate an entry from.
> > > > > > + * @param e
> > > > > > + * Pointer to resource data to copy to the defer queue. The size of
> > > > > > + * the data to copy is equal to the element size provided when the
> > > > > > + * defer queue was created.
> > > > > > + * @return
> > > > > > + * On success - 0
> > > > > > + * On error - 1 with rte_errno set to
> > > > > > + * - EINVAL - NULL parameters are passed
> > > > > > + * - ENOSPC - Defer queue is full. This condition can not happen
> > > > > > + * if the defer queue size is equal (or larger) than the
> > > > > > + * number of elements in the data structure.
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +int
> > > > > > +rte_rcu_qsbr_dq_enqueue(struct rte_rcu_qsbr_dq *dq, void *e);
> > > > > > +
> > > > > > +/**
> > > > > > + * @warning
> > > > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > > > + *
> > > > > > + * Reclaim resources from the defer queue.
> > > > > > + *
> > > > > > + * This API is not multi-thread safe. It is expected that the
> > > > > > +caller
> > > > > > + * provides multi-thread safety by locking a mutex or some other
> means.
> > > > > > + *
> > > > > > + * A lock free multi-thread writer algorithm could achieve
> > > > > > +multi-thread
> > > > > > + * safety by creating and using one defer queue per thread.
> > > > > > + *
> > > > > > + * @param dq
> > > > > > + * Defer queue to reclaim an entry from.
> > > > > > + * @return
> > > > > > + * On successful reclamation of at least 1 resource - 0
> > > > > > + * On error - 1 with rte_errno set to
> > > > > > + * - EINVAL - NULL parameters are passed
> > > > > > + * - EAGAIN - None of the resources have completed at least 1
> grace
> > > > > period,
> > > > > > + * try again.
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +int
> > > > > > +rte_rcu_qsbr_dq_reclaim(struct rte_rcu_qsbr_dq *dq);
> > > > > > +
> > > > > > +/**
> > > > > > + * @warning
> > > > > > + * @b EXPERIMENTAL: this API may change without prior notice
> > > > > > + *
> > > > > > + * Delete a defer queue.
> > > > > > + *
> > > > > > + * It tries to reclaim all the resources on the defer queue.
> > > > > > + * If any of the resources have not completed the grace
> > > > > > +period
> > > > > > + * the reclamation stops and returns immediately. The rest of
> > > > > > + * the resources are not reclaimed and the defer queue is not
> > > > > > + * freed.
> > > > > > + *
> > > > > > + * @param dq
> > > > > > + * Defer queue to delete.
> > > > > > + * @return
> > > > > > + * On success - 0
> > > > > > + * On error - 1
> > > > > > + * Possible rte_errno codes are:
> > > > > > + * - EINVAL - NULL parameters are passed
> > > > > > + * - EAGAIN - Some of the resources have not completed at least 1
> > > grace
> > > > > > + * period, try again.
> > > > > > + */
> > > > > > +__rte_experimental
> > > > > > +int
> > > > > > +rte_rcu_qsbr_dq_delete(struct rte_rcu_qsbr_dq *dq);
> > > > > > +
> > > > > > #ifdef __cplusplus
> > > > > > }
> > > > > > #endif
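For illustration, the create/enqueue/reclaim/delete flow proposed in the header above can be sketched with a much-simplified, hypothetical stand-in: this `dq` holds at most one copied element and "reclaims" it immediately, with no `rte_ring` and no grace-period tracking, so only the call sequence and error cases correspond to the proposed API.

```c
#include <stdlib.h>
#include <string.h>

struct dq {
	void (*free_fn)(void *p, void *e);
	void *p;      /* opaque pointer handed back to free_fn */
	size_t esize; /* element size in bytes */
	int occupied;
	char e[64];   /* copied element storage */
};

static int nfreed; /* counts callback invocations, for demonstration */

static void count_free(void *p, void *e)
{
	(void)p;
	(void)e;
	nfreed++;
}

static struct dq *
dq_create(size_t esize, void (*free_fn)(void *, void *), void *p)
{
	struct dq *d = calloc(1, sizeof(*d));

	if (d == NULL || esize > sizeof(d->e)) {
		free(d);
		return NULL;
	}
	d->esize = esize;
	d->free_fn = free_fn;
	d->p = p;
	return d;
}

/* Enqueue copies the element, as the proposed API does. */
static int
dq_enqueue(struct dq *d, void *e)
{
	if (d->occupied)
		return 1; /* the ENOSPC case in the real API */
	memcpy(d->e, e, d->esize);
	d->occupied = 1;
	return 0;
}

static int
dq_reclaim(struct dq *d)
{
	if (!d->occupied)
		return 1; /* the EAGAIN case in the real API */
	d->free_fn(d->p, d->e);
	d->occupied = 0;
	return 0;
}

/* Delete only succeeds once everything has been reclaimed. */
static int
dq_delete(struct dq *d)
{
	if (d->occupied && dq_reclaim(d) != 0)
		return 1;
	free(d);
	return 0;
}
```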
> > > > > > diff --git a/lib/librte_rcu/rte_rcu_qsbr_pvt.h
> > > > > > b/lib/librte_rcu/rte_rcu_qsbr_pvt.h
> > > > > > new file mode 100644
> > > > > > index 000000000..2122bc36a
> > > > > > --- /dev/null
> > > > > > +++ b/lib/librte_rcu/rte_rcu_qsbr_pvt.h
> > > > >
> > > > > Again style suggestion: as it is not public header - don't use
> > > > > rte_ prefix for naming.
> > > > > From my perspective - easier to relalize for reader what is
> > > > > public header, what is not.
> > > > Looks like the guidelines are not defined very well. I see one
> > > > private file with rte_ prefix. I see Stephen not using rte_
> > > > prefix. I do not have any
> > > preference. But, a consistent approach is required.
> > >
> > > That's just a suggestion.
> > > For me (and I hope for others) it would be a bit easier.
> > > When looking at the code for the first time I had to look at
> > > meson.build to check whether it is a public header or not.
> > > If the file doesn't have 'rte_' prefix, I assume that it is an
> > > internal one straightway.
> > > But , as you said, there is no exact guidelines here, so up to you to decide.
> > I think it makes sense to remove 'rte_' prefix. I will also change the file
> name to have '_private' suffix.
> > There are some inconsistencies in the existing code, will send a patch to
> correct them to follow this approach.
> >
> > >
> > > >
> > > > >
> > > > > > @@ -0,0 +1,46 @@
> > > > > > +/* SPDX-License-Identifier: BSD-3-Clause
> > > > > > + * Copyright (c) 2019 Arm Limited */
> > > > > > +
> > > > > > +#ifndef _RTE_RCU_QSBR_PVT_H_
> > > > > > +#define _RTE_RCU_QSBR_PVT_H_
> > > > > > +
> > > > > > +/**
> > > > > > + * This file is private to the RCU library. It should not be
> > > > > > +included
> > > > > > + * by the user of this library.
> > > > > > + */
> > > > > > +
> > > > > > +#ifdef __cplusplus
> > > > > > +extern "C" {
> > > > > > +#endif
> > > > > > +
> > > > > > +#include "rte_rcu_qsbr.h"
> > > > > > +
> > > > > > +/* RTE defer queue structure.
> > > > > > + * This structure holds the defer queue. The defer queue is
> > > > > > +used to
> > > > > > + * hold the deleted entries from the data structure that are
> > > > > > +not
> > > > > > + * yet freed.
> > > > > > + */
> > > > > > +struct rte_rcu_qsbr_dq {
> > > > > > + struct rte_rcu_qsbr *v; /**< RCU QSBR variable used by this
> queue.*/
> > > > > > + struct rte_ring *r; /**< RCU QSBR defer queue. */
> > > > > > + uint32_t size;
> > > > > > + /**< Number of elements in the defer queue */
> > > > > > + uint32_t esize;
> > > > > > + /**< Size (in bytes) of data stored on the defer queue */
> > > > > > + rte_rcu_qsbr_free_resource f;
> > > > > > + /**< Function to call to free the resource. */
> > > > > > + void *p;
> > > > > > + /**< Pointer passed to the free function. Typically, this is the
> > > > > > + * pointer to the data structure to which the resource to
> free
> > > > > > + * belongs.
> > > > > > + */
> > > > > > + char e[0];
> > > > > > + /**< Temporary storage to copy the defer queue element. */
> > > > >
> > > > > Do you really need 'e' at all?
> > > > > Can't it be just temporary stack variable?
> > > > Ok, will check.
> > > >
> > > > >
> > > > > > +};
> > > > > > +
> > > > > > +#ifdef __cplusplus
> > > > > > +}
> > > > > > +#endif
> > > > > > +
> > > > > > +#endif /* _RTE_RCU_QSBR_PVT_H_ */
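The two options discussed for the 'char e[0]' member can be contrasted in a small sketch. The names here are simplified stand-ins, not the DPDK definitions: variant 1 keeps per-queue scratch space via a flexible array member as in the patch; variant 2 drops it and uses a stack temporary in the reclaim path, as the review suggests.

```c
#include <stdlib.h>
#include <string.h>

/* Variant 1: per-queue scratch space via a flexible array member. */
struct dq_scratch {
	unsigned int esize;
	char e[]; /* C99 spelling of the zero-length array */
};

static struct dq_scratch *
dq_scratch_create(unsigned int esize)
{
	/* One allocation covers the header and the esize-byte scratch area. */
	struct dq_scratch *d = malloc(sizeof(*d) + esize);

	if (d != NULL)
		d->esize = esize;
	return d;
}

/* Variant 2: no scratch in the struct at all. The reclaim path copies the
 * dequeued element into a stack temporary, which also leaves nothing
 * shared for concurrent reclaimers to race on. */
static void
reclaim_with_stack_tmp(unsigned int esize, char *out)
{
	char tmp[esize]; /* VLA sized by the element size */

	memset(tmp, 0xab, esize); /* stands in for the dequeued element */
	memcpy(out, tmp, esize);
}
```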
> > > > > > diff --git a/lib/librte_rcu/rte_rcu_version.map
> > > > > > b/lib/librte_rcu/rte_rcu_version.map
> > > > > > index f8b9ef2ab..dfac88a37 100644
> > > > > > --- a/lib/librte_rcu/rte_rcu_version.map
> > > > > > +++ b/lib/librte_rcu/rte_rcu_version.map
> > > > > > @@ -8,6 +8,10 @@ EXPERIMENTAL {
> > > > > > rte_rcu_qsbr_synchronize;
> > > > > > rte_rcu_qsbr_thread_register;
> > > > > > rte_rcu_qsbr_thread_unregister;
> > > > > > + rte_rcu_qsbr_dq_create;
> > > > > > + rte_rcu_qsbr_dq_enqueue;
> > > > > > + rte_rcu_qsbr_dq_reclaim;
> > > > > > + rte_rcu_qsbr_dq_delete;
> > > > > >
> > > > > > local: *;
> > > > > > };
> > > > > > diff --git a/lib/meson.build b/lib/meson.build index
> > > > > > e5ff83893..0e1be8407 100644
> > > > > > --- a/lib/meson.build
> > > > > > +++ b/lib/meson.build
> > > > > > @@ -11,7 +11,9 @@
> > > > > > libraries = [
> > > > > > 'kvargs', # eal depends on kvargs
> > > > > > 'eal', # everything depends on eal
> > > > > > - 'ring', 'mempool', 'mbuf', 'net', 'meter', 'ethdev', 'pci', # core
> > > > > > + 'ring',
> > > > > > + 'rcu', # rcu depends on ring
> > > > > > + 'mempool', 'mbuf', 'net', 'meter', 'ethdev', 'pci', # core
> > > > > > 'cmdline',
> > > > > > 'metrics', # bitrate/latency stats depends on this
> > > > > > 'hash', # efd depends on this
> > > > > > @@ -22,7 +24,7 @@ libraries = [
> > > > > > 'gro', 'gso', 'ip_frag', 'jobstats',
> > > > > > 'kni', 'latencystats', 'lpm', 'member',
> > > > > > 'power', 'pdump', 'rawdev',
> > > > > > - 'rcu', 'reorder', 'sched', 'security', 'stack', 'vhost',
> > > > > > + 'reorder', 'sched', 'security', 'stack', 'vhost',
> > > > > > # ipsec lib depends on net, crypto and security
> > > > > > 'ipsec',
> > > > > > # add pkt framework libs which use other libs from above
> > > > > > --
> > > > > > 2.17.1
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] crypto/armv8: enable meson build
2019-10-11 20:14 2% ` Honnappa Nagarahalli
@ 2019-10-11 20:33 0% ` Jerin Jacob
0 siblings, 0 replies; 200+ results
From: Jerin Jacob @ 2019-10-11 20:33 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Thomas Monjalon, Jerin Jacob, Dharmik Thakkar,
Akhil.goyal@nxp.com, hemant.agrawal, anoobj, pathreya,
Richardson, Bruce, dpdk-dev, nd, prasun.kapoor
On Sat, 12 Oct, 2019, 1:44 AM Honnappa Nagarahalli, <
Honnappa.Nagarahalli@arm.com> wrote:
> On Sat, 12 Oct, 2019, 12:44 AM Honnappa Nagarahalli, <
> Honnappa.Nagarahalli@arm.com> wrote:
>
> <snip>
>
>
>
> On Thu, 10 Oct, 2019, 10:17 AM Honnappa Nagarahalli, <
> Honnappa.Nagarahalli@arm.com> wrote:
>
> <snip>
>
>
>
> On Mon, 7 Oct, 2019, 3:49 PM Jerin Jacob, <jerinjacobk@gmail.com> wrote:
>
>
>
> On Sun, 6 Oct, 2019, 11:36 PM Thomas Monjalon, <thomas@monjalon.net>
> wrote:
>
> 05/10/2019 17:28, Jerin Jacob:
> > On Fri, Oct 4, 2019 at 4:27 AM Dharmik Thakkar <dharmik.thakkar@arm.com>
> wrote:
> > >
> > > Add new meson.build file for crypto/armv8
> > >
> > > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > > ---
> > > drivers/crypto/armv8/meson.build | 25 +++++++++++++++++++++++++
> > > drivers/crypto/meson.build | 6 +++---
> > > meson_options.txt | 2 ++
> > > 3 files changed, 30 insertions(+), 3 deletions(-)
> > > create mode 100644 drivers/crypto/armv8/meson.build
> >
> > >
> > > option('allow_invalid_socket_id', type: 'boolean', value: false,
> > > description: 'allow out-of-range NUMA socket id\'s for
> platforms that don\'t report the value correctly')
> > > +option('armv8_crypto_dir', type: 'string', value: '',
> > > + description: 'path to the armv8_crypto library installation
> directory')
>
> You should not need such option if you provide a pkg-config file
> in your library.
>
>
> > It is not specific to this patch but it is connected to this patch.
> >
> > Three years back when Cavium contributed to this driver the situation
> > was different where only Cavium was contributing to DPDK and now we
> > have multiple vendors from
> > ARMv8 platform and ARM itself is contributing it.
> >
> > When it is submitted, I was not in favor of the external library. But
> > various reasons it happened to be the external library where 90% meat
> > in this library and shim PMD
> > the driver moved to DPDK.
> >
> > Now, I look back, It does not make sense to the external library.
> Reasons are
> > - It won't allow another ARMv8 player to contribute to this library as
> > Marvell owns this repo and there is no upstreaming path to this
> > library.
>
> This is a real issue and you are able to fix it.
>
>
>
> Not sure how I can fix it and why I need to fix it. I just don't want to
> start a parallel collaborating infrastructure for DPDK armv8.
>
>
>
>
>
> > - That made this library to not have 'any' change for the last three
> > year and everyone have there owned copy of this driver. In fact the
> > library was not compiling for last 2.5 years.
> > - AES-NI case it makes sense to have an external library as it is a
> > single vendor and it is not specific to DPDK. But in this, It is
> > another way around
>
> I don't see how it is different, except it is badly maintained.
>
>
>
> It is different because only one company contributing to it. In this case,
> multiple companies needs to contribute.
>
>
>
> The library is badly maintained upstream as there is no incentive to
> upstream to an external library. I believe each vendor has its own copy of
> that. At least some teams in Marvell internally have a copy of it.
>
> What is their incentive to upstream? They ask me the same thing.
>
>
>
>
>
> > - If it an external library, we might as well add the PMD code as well
> > there and that only 10% of the real stuff.
> > We are not able to improve anything in this library due to this
> situation.
> >
> > Does anyone care about this PMD? If not, we might as well remove this
> > DPDK and every vendor can manage the external library and external
> > PMD(Situation won't change much)
>
> External PMD is bad.
>
>
>
> It is SHIM layer. I would say external library also bad if it is specific
> to DPDK.
>
>
>
> I think this library should not be specific to DPDK,
>
>
>
> Sadly it is VERY specific to DPDK for doing authentication and encryption
> in one shot to improve performance. OpenSSL already has Armv8
> instruction support for doing it as two passes, just that the performance is
> not good. For a use case such as IPsec it makes sense to do authentication
> and encryption in one shot for performance improvement.
>
> *[Honnappa] *I think there is a need for such a library not just for
> DPDK. It would be good if it could do UDP checksum validation for the inner
> packet as well.
>
>
>
> so it would make sense as an external library
>
>
>
> If it an external library, it does NOT make much sense for Marvell to
> maintain it(No incentive and it is pain due lack of collaboration)
>
>
>
> Either someone needs to step up and maintain it if we do NOT choose to make
> it external, else we can remove the PMD from dpdk (makes life easy for
> everyone). I don't want to maintain something that is not upstreamable nor
> collaboration friendly, aka lower quality.
>
>
>
> .
>
>
>
>
> > Thoughts from ARM, other ARMv8 vendors or community?
>
>
>
> I have expressed my concerns. If there is no constructive feedback to fix
> the concern. I will plan for submitting a patch to remove the shim crypto
> Armv8 PMD from dpdk by next week.
>
> *[Honnappa] *I do not think there is a need to remove the PMD. As you
> have mentioned, many might have developed their own libraries and may be
> dependent on DPDK Armv8 PMD.
>
>
>
> Problem with that approach is that, No convergence/collaboration on this
> PMD aka no improvement and less quality.
>
> *[Honnappa] *Would not removing this fall under ABI/API compatibility?
> Essentially, DPDK defines how an external Armv8 Crypto library can work
> with DPDK. Is it possible to remove it considering that there might be
> users dependent on this?
>
> I agree with you on the improvements (features?), but not sure on quality.
> For the features that are supported, the quality should be good.
>
>
>
> The library was broken for the last 2.5 years. Is that high quality? And no
> improvement for the last 3 years and not a single contribution other than
> Marvell in the external library.
>
> *[Honnappa] *We need to separate the discussion about PMD and the
> external library. IMO, PMD cannot be removed as some might be using the
> interfaces with their own crypto library.
>
Multiple libraries for the same job. That's the same thing I would like to
avoid. And the PMD does not exist without the external library. If someone has
their own crypto library then please update the documentation so that others
can use it. That is not the open-source way of doing things. Else I need to
assume no one is using the PMD.
>
> From Arm side, there have been efforts to fix the situation. Some have not
> gone far and some have shown promise, but fell flat. I can say that this is
> still a priority but I am not sure when we will have something.
>
>
>
> If ARM is ready to take over the maintenance on PMD and external library
> then I am fine with any decision.
>
> Let us know. Personally, I don't like to maintain something that is not
> upstream friendly.
>
> *[Honnappa] *What is the maintenance burden on the PMD? Can you elaborate?
>
>
>
> Marvell's open-source policy is a bit different from Cavium's. We cannot
> contribute to a GitHub repository without approval. The existing external
> library does not belong to the Marvell GitHub domain. I need to make a case
> to add a new GitHub repo under Marvell open to all Armv8 partners. I
> don't have justification for that to legal. We have approvals to contribute
> to dpdk.org
>
>
>
> On the external library, I do not think this is the right forum to make a
> decision. There are channels provided to all our partners to discuss these
> kind of topics and I think those should be made use of.
>
> It is cavium created library for dpdk. Why we need to discuss in some
> other channel. I believe this is the correct forum for dpdk discussions.
>
> *[Honnappa] *May be I was not clear, please see my comment below.
>
>
>
> For example, Dharmik got comment to update the external library to
> support autoconfig for meson. What is the path for Dharmik to do that?
>
> *[Honnappa] *Is this mainly from testing purposes?
>
>
>
Not for testing. See Thomas comment
Don't you think you need to have access to the complete code base to
> contribute. That's the reason why I am saying remove the external library
> and have it in DPDK so that everyone can contribute and improve.
>
> *[Honnappa] *I don’t have any issues
>
But DPDK does have support for that.
>
> If you think, otherwise please take over the maintenance keeping initial
> author credit. If you need time to take decision that makes sense. You can
> share the ETA. Otherwise, this discussion going in circles.
>
> *[Honnappa] *This cannot be decided in this forum.
>
>
>
Then please start the discussion in the forum you think is appropriate.
>
> My suggestion, we should go ahead with adding the meson build for this PMD.
>
>
^ permalink raw reply [relevance 0%]
* Re: [dpdk-dev] [PATCH] crypto/armv8: enable meson build
2019-10-11 20:02 0% ` Jerin Jacob
@ 2019-10-11 20:14 2% ` Honnappa Nagarahalli
2019-10-11 20:33 0% ` Jerin Jacob
0 siblings, 1 reply; 200+ results
From: Honnappa Nagarahalli @ 2019-10-11 20:14 UTC (permalink / raw)
To: Jerin Jacob
Cc: thomas, jerinj, Dharmik Thakkar, Akhil.goyal@nxp.com,
hemant.agrawal, anoobj, pathreya, Richardson, Bruce, dpdk-dev,
Honnappa Nagarahalli, nd, prasun.kapoor, nd
On Sat, 12 Oct, 2019, 12:44 AM Honnappa Nagarahalli, <Honnappa.Nagarahalli@arm.com> wrote:
<snip>
On Thu, 10 Oct, 2019, 10:17 AM Honnappa Nagarahalli, <Honnappa.Nagarahalli@arm.com> wrote:
<snip>
On Mon, 7 Oct, 2019, 3:49 PM Jerin Jacob, <jerinjacobk@gmail.com> wrote:
On Sun, 6 Oct, 2019, 11:36 PM Thomas Monjalon, <thomas@monjalon.net> wrote:
05/10/2019 17:28, Jerin Jacob:
> On Fri, Oct 4, 2019 at 4:27 AM Dharmik Thakkar <dharmik.thakkar@arm.com> wrote:
> >
> > Add new meson.build file for crypto/armv8
> >
> > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > ---
> > drivers/crypto/armv8/meson.build | 25 +++++++++++++++++++++++++
> > drivers/crypto/meson.build | 6 +++---
> > meson_options.txt | 2 ++
> > 3 files changed, 30 insertions(+), 3 deletions(-)
> > create mode 100644 drivers/crypto/armv8/meson.build
>
> >
> > option('allow_invalid_socket_id', type: 'boolean', value: false,
> > description: 'allow out-of-range NUMA socket id\'s for platforms that don\'t report the value correctly')
> > +option('armv8_crypto_dir', type: 'string', value: '',
> > + description: 'path to the armv8_crypto library installation directory')
You should not need such option if you provide a pkg-config file
in your library.
> It is not specific to this patch but it is connected to this patch.
>
> Three years back when Cavium contributed to this driver the situation
> was different where only Cavium was contributing to DPDK and now we
> have multiple vendors from
> ARMv8 platform and ARM itself is contributing it.
>
> When it is submitted, I was not in favor of the external library. But
> various reasons it happened to be the external library where 90% meat
> in this library and shim PMD
> the driver moved to DPDK.
>
> Now, I look back, It does not make sense to the external library. Reasons are
> - It won't allow another ARMv8 player to contribute to this library as
> Marvell owns this repo and there is no upstreaming path to this
> library.
This is a real issue and you are able to fix it.
Not sure how I can fix it and why I need to fix it. I just don't want to start a parallel collaborating infrastructure for DPDK armv8.
> - That made this library to not have 'any' change for the last three
> year and everyone have there owned copy of this driver. In fact the
> library was not compiling for last 2.5 years.
> - AES-NI case it makes sense to have an external library as it is a
> single vendor and it is not specific to DPDK. But in this, It is
> another way around
I don't see how it is different, except it is badly maintained.
It is different because only one company contributing to it. In this case, multiple companies needs to contribute.
The library is badly maintained upstream as there is no incentive to upstream to an external library. I believe each vendor has its own copy of that. At least some teams in Marvell internally have a copy of it.
What is their incentive to upstream? They ask me the same thing.
> - If it an external library, we might as well add the PMD code as well
> there and that only 10% of the real stuff.
> We are not able to improve anything in this library due to this situation.
>
> Does anyone care about this PMD? If not, we might as well remove this
> DPDK and every vendor can manage the external library and external
> PMD(Situation won't change much)
External PMD is bad.
It is SHIM layer. I would say external library also bad if it is specific to DPDK.
I think this library should not be specific to DPDK,
Sadly it is VERY specific to DPDK for doing authentication and encryption in one shot to improve performance. OpenSSL already has Armv8 instruction support for doing it as two passes, just that the performance is not good. For a use case such as IPsec it makes sense to do authentication and encryption in one shot for performance improvement.
[Honnappa] I think there is a need for such a library not just for DPDK. It would be good if it could do UDP checksum validation for the inner packet as well.
so it would make sense as an external library
If it is an external library, it does NOT make much sense for Marvell to maintain it (no incentive, and it is a pain due to the lack of collaboration).
Either someone needs to step up and maintain it if we choose NOT to make it external, or we can remove the PMD from DPDK (makes life easy for everyone). I don't want to maintain something that is neither upstreamable nor collaboration friendly, i.e. of lower quality.
.
> Thoughts from ARM, other ARMv8 vendors or community?
I have expressed my concerns. If there is no constructive feedback to address them, I will plan to submit a patch to remove the shim crypto Armv8 PMD from DPDK by next week.
[Honnappa] I do not think there is a need to remove the PMD. As you have mentioned, many might have developed their own libraries and may be dependent on DPDK Armv8 PMD.
The problem with that approach is that there is no convergence/collaboration on this PMD, i.e. no improvement and lower quality.
[Honnappa] Would not removing this fall under ABI/API compatibility? Essentially, DPDK defines how an external Armv8 Crypto library can work with DPDK. Is it possible to remove it considering that there might be users dependent on this?
I agree with you on the improvements (features?), but not sure on quality. For the features that are supported, the quality should be good.
The library was broken for the last 2.5 years. Is that high quality? There has been no improvement for the last three years and not a single contribution other than Marvell's in the external library.
[Honnappa] We need to separate the discussion about PMD and the external library. IMO, PMD cannot be removed as some might be using the interfaces with their own crypto library.
From Arm side, there have been efforts to fix the situation. Some have not gone far and some have shown promise, but fell flat. I can say that this is still a priority but I am not sure when we will have something.
If ARM is ready to take over the maintenance of the PMD and the external library, then I am fine with any decision.
Let us know. Personally, I don't like to maintain something that is not upstream friendly.
[Honnappa] What is the maintenance burden on the PMD? Can you elaborate?
Marvell's open-source policy is a bit different from Cavium's. We cannot contribute to a GitHub repository without approval. The existing external library does not belong to the Marvell GitHub domain. I would need to make a case to add a new GitHub repo under Marvell to which all armv8 partners could contribute, and I don't have a justification for that to legal. We have approval to contribute to dpdk.org
On the external library, I do not think this is the right forum to make a decision. There are channels provided to all our partners to discuss these kind of topics and I think those should be made use of.
It is a Cavium-created library for DPDK. Why do we need to discuss it in some other channel? I believe this is the correct forum for DPDK discussions.
[Honnappa] May be I was not clear, please see my comment below.
For example, Dharmik got a comment to update the external library to support autoconfig for meson. What is the path for Dharmik to do that?
[Honnappa] Is this mainly from testing purposes?
Don't you think you need access to the complete code base to contribute? That's the reason why I am saying: remove the external library and have it in DPDK so that everyone can contribute and improve it.
[Honnappa] I don’t have any issues
If you think otherwise, please take over the maintenance, keeping the initial author credit. If you need time to make a decision, that makes sense; you can share the ETA. Otherwise, this discussion is going in circles.
[Honnappa] This cannot be decided in this forum.
My suggestion, we should go ahead with adding the meson build for this PMD.
^ permalink raw reply [relevance 2%]
* Re: [dpdk-dev] [PATCH] crypto/armv8: enable meson build
@ 2019-10-11 20:02 0% ` Jerin Jacob
2019-10-11 20:14 2% ` Honnappa Nagarahalli
0 siblings, 1 reply; 200+ results
From: Jerin Jacob @ 2019-10-11 20:02 UTC (permalink / raw)
To: Honnappa Nagarahalli
Cc: Thomas Monjalon, Jerin Jacob, Dharmik Thakkar, Akhil Goyal,
Hemant Agrawal, anoobj, pathreya, Richardson, Bruce, dpdk-dev,
nd, prasun.kapoor
On Sat, 12 Oct, 2019, 12:44 AM Honnappa Nagarahalli, <
Honnappa.Nagarahalli@arm.com> wrote:
> <snip>
>
>
>
> On Thu, 10 Oct, 2019, 10:17 AM Honnappa Nagarahalli, <
> Honnappa.Nagarahalli@arm.com> wrote:
>
> <snip>
>
>
>
> On Mon, 7 Oct, 2019, 3:49 PM Jerin Jacob, <jerinjacobk@gmail.com> wrote:
>
>
>
> On Sun, 6 Oct, 2019, 11:36 PM Thomas Monjalon, <thomas@monjalon.net>
> wrote:
>
> 05/10/2019 17:28, Jerin Jacob:
> > On Fri, Oct 4, 2019 at 4:27 AM Dharmik Thakkar <dharmik.thakkar@arm.com>
> wrote:
> > >
> > > Add new meson.build file for crypto/armv8
> > >
> > > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > > ---
> > > drivers/crypto/armv8/meson.build | 25 +++++++++++++++++++++++++
> > > drivers/crypto/meson.build | 6 +++---
> > > meson_options.txt | 2 ++
> > > 3 files changed, 30 insertions(+), 3 deletions(-)
> > > create mode 100644 drivers/crypto/armv8/meson.build
> >
> > >
> > > option('allow_invalid_socket_id', type: 'boolean', value: false,
> > > description: 'allow out-of-range NUMA socket id\'s for
> platforms that don\'t report the value correctly')
> > > +option('armv8_crypto_dir', type: 'string', value: '',
> > > + description: 'path to the armv8_crypto library installation
> directory')
>
> You should not need such option if you provide a pkg-config file
> in your library.
>
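Thomas's pkg-config suggestion above could look like the sketch below. The file name, install paths, and library name (`libarmv8crypto`) are illustrative assumptions, not the actual library's metadata.

```
# libarmv8crypto.pc — hypothetical pkg-config file shipped by the library
prefix=/usr/local
libdir=${prefix}/lib
includedir=${prefix}/include

Name: libarmv8crypto
Description: ARMv8 combined authentication/encryption routines
Version: 1.0.0
Libs: -L${libdir} -larmv8crypto
Cflags: -I${includedir}
```

With such a file installed, the driver's meson.build could locate the library with `dep = dependency('libarmv8crypto', required: false)` and skip the PMD when it is absent, removing the need for a manual `armv8_crypto_dir` path option.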
>
> > It is not specific to this patch but it is connected to this patch.
> >
> > Three years back when Cavium contributed to this driver the situation
> > was different where only Cavium was contributing to DPDK and now we
> > have multiple vendors from
> > ARMv8 platform and ARM itself is contributing it.
> >
> > When it is submitted, I was not in favor of the external library. But
> > various reasons it happened to be the external library where 90% meat
> > in this library and shim PMD
> > the driver moved to DPDK.
> >
> > Now, I look back, It does not make sense to the external library.
> Reasons are
> > - It won't allow another ARMv8 player to contribute to this library as
> > Marvell owns this repo and there is no upstreaming path to this
> > library.
>
> This is a real issue and you are able to fix it.
>
>
>
> Note sure how I can fix it and why I need to fix it. I just dont want to
> start a parallel collaborating infrastructure for DPDK armv8.
>
>
>
>
>
> > - That made this library to not have 'any' change for the last three
> > year and everyone have there owned copy of this driver. In fact the
> > library was not compiling for last 2.5 years.
> > - AES-NI case it makes sense to have an external library as it is a
> > single vendor and it is not specific to DPDK. But in this, It is
> > another way around
>
> I don't see how it is different, except it is badly maintained.
>
>
>
> It is different because only one company contributing to it. In this case,
> multiple companies needs to contribute.
>
>
>
> The library badly maintained in upstream as there is no incentives to
> upstream to external library. I believe each vendor has it own copy of
> that. At least Some teams in Marvell internally has copy of it.
>
> What is their incentive to upstream? They ask me the same thing.
>
>
>
>
>
> > - If it an external library, we might as well add the PMD code as well
> > there and that only 10% of the real stuff.
> > We are not able to improve anything in this library due to this
> situation.
> >
> > Does anyone care about this PMD? If not, we might as well remove this
> > DPDK and every vendor can manage the external library and external
> > PMD(Situation won't change much)
>
> External PMD is bad.
>
>
>
> It is SHIM layer. I would say external library also bad if it is specific
> to DPDK.
>
>
>
> I think this library should not be specific to DPDK,
>
>
>
> Sadly it is VERY specific to DPDK for doing authentication and encryption
> in one shot to improve the performance. Openssl has already has armv8
> instructions support for doing it as two pass just that performance is not
> good. For use cae such as IPsec it make sense do authentication and
> encryption in one shot for performance improvement.
>
> *[Honnappa] *I think there is a need for such a library not just for
> DPDK. It would be good if it could do UDP checksum validation for the inner
> packet as well.
>
>
>
> so it would make sense as an external library
>
>
>
> If it an external library, it does NOT make much sense for Marvell to
> maintain it(No incentive and it is pain due lack of collaboration)
>
>
>
> Either someone need to step up and maintain it if we NOT choose to make it
> as external else we can remove the PMD from dpdk(Makes life easy for
> everyone). I don't want to maintain something not upsteamble nor
> collaboration friendly aka less quality.
>
>
>
> .
>
>
>
>
> > Thoughts from ARM, other ARMv8 vendors or community?
>
>
>
> I have expressed my concerns. If there is no constructive feedback to fix
> the concern. I will plan for submitting a patch to remove the shim crypto
> Armv8 PMD from dpdk by next week.
>
> *[Honnappa] *I do not think there is a need to remove the PMD. As you
> have mentioned, many might have developed their own libraries and may be
> dependent on DPDK Armv8 PMD.
>
>
>
> Problem with that approach is that, No convergence/collaboration on this
> PMD aka no improvement and less quality.
>
> *[Honnappa] *Would not removing this fall under ABI/API compatibility?
> Essentially, DPDK defines how an external Armv8 Crypto library can work
> with DPDK. Is it possible to remove it considering that there might be
> users dependent on this?
>
> I agree with you on the improvements (features?), but not sure on quality.
> For the features that are supported, the quality should be good.
>
The library was broken for the last 2.5 years. Is that high quality? There has
been no improvement for the last three years and not a single contribution
other than Marvell's in the external library.
>
> From Arm side, there have been efforts to fix the situation. Some have not
> gone far and some have shown promise, but fell flat. I can say that this is
> still a priority but I am not sure when we will have something.
>
>
>
> If ARM is ready to take over the maintenance on PMD and external library
> then I am fine with any decision.
>
> Let us know. Personally, I don't like to maintain something not upsteamble
> friendly.
>
> *[Honnappa] *What is the maintenance burden on the PMD? Can you elaborate?
>
Marvell's open-source policy is a bit different from Cavium's. We cannot
contribute to a GitHub repository without approval. The existing external
library does not belong to the Marvell GitHub domain. I would need to make a
case to add a new GitHub repo under Marvell to which all armv8 partners could
contribute, and I don't have a justification for that to legal. We have
approval to contribute to dpdk.org
On the external library, I do not think this is the right forum to make a
> decision. There are channels provided to all our partners to discuss these
> kind of topics and I think those should be made use of.
>
It is a Cavium-created library for DPDK. Why do we need to discuss it in some
other channel? I believe this is the correct forum for DPDK discussions.
For example, Dharmik got a comment to update the external library to support
autoconfig for meson. What is the path for Dharmik to do that?
Don't you think you need access to the complete code base to contribute?
That's the reason why I am saying: remove the external library and have it in
DPDK so that everyone can contribute and improve it.
If you think otherwise, please take over the maintenance, keeping the initial
author credit. If you need time to make a decision, that makes sense; you can
share the ETA. Otherwise, this discussion is going in circles.
>
> My suggestion, we should go ahead with adding the meson build for this PMD.
>
>
^ permalink raw reply [relevance 0%]
-- links below jump to the message on this page --
2019-07-04 23:21 [dpdk-dev] [PATCH] ethdev: extend flow metadata Yongseok Koh
2019-10-10 16:02 ` [dpdk-dev] [PATCH v2] " Viacheslav Ovsiienko
2019-10-18 9:22 ` Olivier Matz
2019-10-19 19:47 ` Slava Ovsiienko
2019-10-21 16:37 ` Olivier Matz
2019-10-24 6:49 3% ` Slava Ovsiienko
2019-10-24 9:22 0% ` Olivier Matz
2019-10-24 12:30 0% ` Slava Ovsiienko
2019-07-10 9:29 [dpdk-dev] [RFC] mbuf: support dynamic fields and flags Olivier Matz
2019-09-18 16:54 ` [dpdk-dev] [PATCH] " Olivier Matz
2019-10-01 10:49 ` Ananyev, Konstantin
2019-10-17 7:54 0% ` Olivier Matz
2019-10-17 11:58 0% ` Ananyev, Konstantin
2019-10-17 12:58 0% ` Olivier Matz
2019-10-17 14:42 3% ` [dpdk-dev] [PATCH v2] " Olivier Matz
2019-10-18 2:47 0% ` Wang, Haiyue
2019-10-18 7:53 0% ` Olivier Matz
2019-10-18 8:28 0% ` Wang, Haiyue
2019-10-18 9:47 0% ` Olivier Matz
2019-10-18 11:24 0% ` Wang, Haiyue
2019-10-22 22:51 0% ` Ananyev, Konstantin
2019-10-23 3:16 0% ` Wang, Haiyue
2019-10-23 10:21 0% ` Olivier Matz
2019-10-23 15:00 0% ` Stephen Hemminger
2019-10-23 15:12 0% ` Wang, Haiyue
2019-10-23 10:19 0% ` Olivier Matz
2019-10-23 12:00 0% ` Shahaf Shuler
2019-10-23 13:33 0% ` Olivier Matz
2019-10-24 4:54 0% ` Shahaf Shuler
2019-10-24 7:07 0% ` Olivier Matz
2019-10-24 7:38 0% ` Slava Ovsiienko
2019-10-24 7:56 0% ` Olivier Matz
2019-10-24 8:13 3% ` [dpdk-dev] [PATCH v3] " Olivier Matz
2019-10-24 16:40 0% ` Thomas Monjalon
2019-10-26 12:39 3% ` [dpdk-dev] [PATCH v4] " Olivier Matz
2019-10-26 17:04 0% ` Thomas Monjalon
2019-07-23 7:05 [dpdk-dev] [PATCH v8 1/3] eal/arm64: add 128-bit atomic compare exchange jerinj
2019-08-14 8:27 ` [dpdk-dev] [PATCH v9 " Phil Yang
2019-10-14 15:43 0% ` David Marchand
2019-10-15 11:32 ` Phil Yang (Arm Technology China)
2019-10-15 12:16 ` David Marchand
2019-10-16 9:04 4% ` Phil Yang (Arm Technology China)
2019-10-17 12:45 0% ` David Marchand
2019-10-15 11:38 2% ` [dpdk-dev] [PATCH v10 " Phil Yang
2019-10-18 11:21 4% ` [dpdk-dev] [PATCH v11 " Phil Yang
2019-08-22 8:42 [dpdk-dev] [PATCH v3] timer: use rte_mp_msg to get freq from primary process Jim Harris
2019-10-07 15:28 ` [dpdk-dev] [PATCH v6 RESEND] eal: add tsc_hz to rte_mem_config Jim Harris
2019-10-08 8:38 ` Bruce Richardson
2019-10-21 8:23 0% ` David Marchand
2019-08-29 7:47 [dpdk-dev] [RFC] ethdev: add new fields for max LRO session size Matan Azrad
2019-09-16 15:37 ` Ferruh Yigit
2019-09-24 12:03 ` Matan Azrad
2019-10-02 13:58 ` Thomas Monjalon
2019-10-18 16:35 0% ` Ferruh Yigit
2019-10-18 18:05 0% ` Ananyev, Konstantin
2019-10-22 12:56 0% ` Andrew Rybchenko
2019-09-03 15:40 [dpdk-dev] [RFC PATCH 0/9] security: add software synchronous crypto process Fan Zhang
2019-09-03 15:40 ` [dpdk-dev] [RFC PATCH 1/9] security: introduce CPU Crypto action type and API Fan Zhang
2019-09-04 10:32 ` Akhil Goyal
2019-09-04 13:06 ` Zhang, Roy Fan
2019-09-06 9:01 ` Akhil Goyal
2019-09-06 13:27 ` Ananyev, Konstantin
2019-09-10 10:44 ` Akhil Goyal
2019-09-11 12:29 ` Ananyev, Konstantin
2019-09-12 14:12 ` Akhil Goyal
2019-09-16 14:53 ` Ananyev, Konstantin
2019-09-17 6:02 ` Akhil Goyal
2019-09-18 7:44 ` Ananyev, Konstantin
2019-09-25 18:24 ` Ananyev, Konstantin
2019-09-27 9:26 ` Akhil Goyal
2019-09-30 12:22 ` Ananyev, Konstantin
2019-09-30 13:43 ` Akhil Goyal
2019-10-01 14:49 ` Ananyev, Konstantin
2019-10-03 13:24 ` Akhil Goyal
2019-10-07 12:53 ` Ananyev, Konstantin
2019-10-09 7:20 ` Akhil Goyal
2019-10-09 13:43 ` Ananyev, Konstantin
2019-10-11 13:23 ` Akhil Goyal
2019-10-13 23:07 0% ` Zhang, Roy Fan
2019-10-14 11:10 0% ` Ananyev, Konstantin
2019-10-16 22:07 3% ` Ananyev, Konstantin
2019-10-17 12:49 0% ` Ananyev, Konstantin
2019-10-18 13:17 4% ` Akhil Goyal
2019-10-21 13:47 4% ` Ananyev, Konstantin
2019-10-22 13:31 5% ` Akhil Goyal
2019-10-22 17:44 0% ` Ananyev, Konstantin
2019-10-22 22:21 0% ` Ananyev, Konstantin
2019-10-23 10:05 0% ` Akhil Goyal
2019-09-06 9:45 [dpdk-dev] [PATCH v2 0/6] RCU integration with LPM library Ruifeng Wang
2019-10-01 6:29 ` [dpdk-dev] [PATCH v3 0/3] Add RCU reclamation APIs Honnappa Nagarahalli
2019-10-01 6:29 ` [dpdk-dev] [PATCH v3 2/3] lib/rcu: add resource " Honnappa Nagarahalli
2019-10-02 17:39 ` Ananyev, Konstantin
2019-10-03 6:29 ` Honnappa Nagarahalli
2019-10-03 12:26 ` Ananyev, Konstantin
2019-10-04 6:07 ` Honnappa Nagarahalli
2019-10-07 10:46 ` Ananyev, Konstantin
2019-10-13 4:35 0% ` Honnappa Nagarahalli
2019-10-01 18:28 ` [dpdk-dev] [PATCH v3 0/3] RCU integration with LPM library Honnappa Nagarahalli
2019-10-01 18:28 ` [dpdk-dev] [PATCH v3 1/3] lib/lpm: integrate RCU QSBR Honnappa Nagarahalli
2019-10-07 9:21 ` Ananyev, Konstantin
2019-10-13 4:36 3% ` Honnappa Nagarahalli
2019-10-15 11:15 0% ` Ananyev, Konstantin
2019-10-18 3:32 0% ` Honnappa Nagarahalli
2019-09-17 9:09 [dpdk-dev] [PATCH v2 1/3] net/ifcvf: add multiqueue configuration Andy Pei
2019-09-17 9:09 ` [dpdk-dev] [PATCH v2 2/3] vhost: call vDPA callback at the end of vring enable handler Andy Pei
2019-09-23 8:12 ` Tiwei Bie
2019-10-18 16:54 0% ` Ferruh Yigit
2019-09-25 16:10 [dpdk-dev] [PATCH v7] eal: make lcore_config private Stephen Hemminger
2019-10-02 19:40 ` [dpdk-dev] [PATCH v8] " Stephen Hemminger
2019-10-22 9:05 3% ` David Marchand
2019-10-22 16:30 0% ` Stephen Hemminger
2019-10-22 16:49 0% ` David Marchand
2019-09-26 8:52 [dpdk-dev] [PATCH v3 00/15] sched: subport level configuration of pipe nodes Jasvinder Singh
2019-10-14 12:09 ` [dpdk-dev] [PATCH v4 00/17] " Jasvinder Singh
2019-10-14 12:09 2% ` [dpdk-dev] [PATCH v4 17/17] sched: modify internal structs and functions for 64 bit values Jasvinder Singh
2019-10-14 17:23 ` [dpdk-dev] [PATCH v5 00/15] sched: subport level configuration of pipe nodes Jasvinder Singh
2019-10-14 17:23 4% ` [dpdk-dev] [PATCH v5 15/15] sched: remove redundant code Jasvinder Singh
2019-10-24 18:46 ` [dpdk-dev] [PATCH v6 00/15] sched: subport level configuration of pipe nodes Jasvinder Singh
2019-10-24 18:46 4% ` [dpdk-dev] [PATCH v6 15/15] sched: remove redundant code Jasvinder Singh
2019-10-25 10:51 ` [dpdk-dev] [PATCH v7 00/15] sched: subport level configuration of pipe nodes Jasvinder Singh
2019-10-25 10:51 4% ` [dpdk-dev] [PATCH v7 15/15] sched: remove redundant code Jasvinder Singh
2019-09-27 16:54 [dpdk-dev] [PATCH v6 0/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
2019-09-27 16:54 ` [dpdk-dev] [PATCH v6 1/4] doc: separate versioning.rst into version and policy Ray Kinsella
2019-10-21 9:53 0% ` Thomas Monjalon
2019-10-25 11:36 0% ` Ray Kinsella
2019-09-27 16:54 ` [dpdk-dev] [PATCH v6 2/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
2019-10-15 15:11 5% ` David Marchand
2019-10-25 11:43 5% ` Ray Kinsella
2019-10-24 0:43 11% ` Thomas Monjalon
2019-10-25 9:10 5% ` Ray Kinsella
2019-10-25 12:45 10% ` Ray Kinsella
2019-10-21 9:50 5% ` [dpdk-dev] [PATCH v6 0/4] " Thomas Monjalon
2019-10-21 10:10 10% ` Ray Kinsella
2019-10-21 14:38 5% ` Thomas Monjalon
2019-10-22 8:12 5% ` Ray Kinsella
2019-09-30 9:21 [dpdk-dev] [PATCH 1/8] config: change ABI versioning for global Marcin Baran
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 00/10] Implement the new ABI policy and add helper scripts Anatoly Burakov
2019-10-16 17:03 8% ` [dpdk-dev] [PATCH v3 0/9] " Anatoly Burakov
2019-10-17 8:50 4% ` Bruce Richardson
2019-10-17 14:31 8% ` [dpdk-dev] [PATCH v4 00/10] " Anatoly Burakov
2019-10-24 9:46 8% ` [dpdk-dev] [PATCH v5 " Anatoly Burakov
2019-10-24 9:46 7% ` [dpdk-dev] [PATCH v5 01/10] config: change ABI versioning to global Anatoly Burakov
2019-10-24 9:46 14% ` [dpdk-dev] [PATCH v5 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
2019-10-24 9:46 23% ` [dpdk-dev] [PATCH v5 03/10] buildtools: add ABI update shell script Anatoly Burakov
2019-10-24 9:46 4% ` [dpdk-dev] [PATCH v5 04/10] timer: remove deprecated code Anatoly Burakov
2019-10-24 9:46 2% ` [dpdk-dev] [PATCH v5 05/10] lpm: " Anatoly Burakov
2019-10-24 9:46 4% ` [dpdk-dev] [PATCH v5 06/10] distributor: " Anatoly Burakov
2019-10-24 9:46 6% ` [dpdk-dev] [PATCH v5 07/10] distributor: rename v2.0 ABI to _single suffix Anatoly Burakov
2019-10-24 9:46 3% ` [dpdk-dev] [PATCH v5 08/10] drivers/octeontx: add missing public symbol Anatoly Burakov
2019-10-24 9:46 2% ` [dpdk-dev] [PATCH v5 09/10] build: change ABI version to 20.0 Anatoly Burakov
2019-10-24 9:46 23% ` [dpdk-dev] [PATCH v5 10/10] buildtools: add ABI versioning check script Anatoly Burakov
2019-10-17 14:31 7% ` [dpdk-dev] [PATCH v4 01/10] config: change ABI versioning to global Anatoly Burakov
2019-10-17 14:31 14% ` [dpdk-dev] [PATCH v4 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
2019-10-17 14:31 23% ` [dpdk-dev] [PATCH v4 03/10] buildtools: add ABI update shell script Anatoly Burakov
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 04/10] timer: remove deprecated code Anatoly Burakov
2019-10-17 21:04 0% ` Carrillo, Erik G
2019-10-21 13:24 3% ` Kevin Traynor
2019-10-24 9:07 4% ` Burakov, Anatoly
2019-10-17 14:31 2% ` [dpdk-dev] [PATCH v4 05/10] lpm: " Anatoly Burakov
2019-10-17 14:31 4% ` [dpdk-dev] [PATCH v4 06/10] distributor: " Anatoly Burakov
2019-10-17 15:59 0% ` Hunt, David
2019-10-17 14:31 6% ` [dpdk-dev] [PATCH v4 07/10] distributor: rename v2.0 ABI to _single suffix Anatoly Burakov
2019-10-17 16:00 4% ` Hunt, David
2019-10-17 14:31 3% ` [dpdk-dev] [PATCH v4 08/10] drivers/octeontx: add missing public symbol Anatoly Burakov
2019-10-17 14:31 2% ` [dpdk-dev] [PATCH v4 09/10] build: change ABI version to 20.0 Anatoly Burakov
2019-10-17 14:32 23% ` [dpdk-dev] [PATCH v4 10/10] buildtools: add ABI versioning check script Anatoly Burakov
2019-10-16 17:03 7% ` [dpdk-dev] [PATCH v3 1/9] config: change ABI versioning to global Anatoly Burakov
2019-10-17 8:44 9% ` Bruce Richardson
2019-10-17 10:25 4% ` Burakov, Anatoly
2019-10-17 14:09 8% ` Luca Boccassi
2019-10-17 14:12 4% ` Bruce Richardson
2019-10-18 10:07 7% ` Kevin Traynor
2019-10-16 17:03 14% ` [dpdk-dev] [PATCH v3 2/9] buildtools: add script for updating symbols abi version Anatoly Burakov
2019-10-16 17:03 23% ` [dpdk-dev] [PATCH v3 3/9] buildtools: add ABI update shell script Anatoly Burakov
2019-10-16 17:03 4% ` [dpdk-dev] [PATCH v3 4/9] timer: remove deprecated code Anatoly Burakov
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 5/9] lpm: " Anatoly Burakov
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 6/9] distributor: " Anatoly Burakov
2019-10-17 10:53 0% ` Hunt, David
2019-10-16 17:03 3% ` [dpdk-dev] [PATCH v3 7/9] drivers/octeontx: add missing public symbol Anatoly Burakov
2019-10-16 17:03 2% ` [dpdk-dev] [PATCH v3 8/9] build: change ABI version to 20.0 Anatoly Burakov
2019-10-16 17:03 23% ` [dpdk-dev] [PATCH v3 9/9] buildtools: add ABI versioning check script Anatoly Burakov
2019-10-16 12:43 8% ` [dpdk-dev] [PATCH v2 01/10] config: change ABI versioning for global Anatoly Burakov
2019-10-16 13:22 4% ` Bruce Richardson
2019-10-16 12:43 14% ` [dpdk-dev] [PATCH v2 02/10] buildtools: add script for updating symbols abi version Anatoly Burakov
2019-10-16 13:25 4% ` Bruce Richardson
2019-10-16 12:43 22% ` [dpdk-dev] [PATCH v2 03/10] buildtools: add ABI update shell script Anatoly Burakov
2019-10-16 13:33 4% ` Bruce Richardson
2019-10-16 12:43 4% ` [dpdk-dev] [PATCH v2 04/10] timer: remove deprecated code Anatoly Burakov
2019-10-16 12:43 2% ` [dpdk-dev] [PATCH v2 05/10] lpm: " Anatoly Burakov
2019-10-16 12:43 4% ` [dpdk-dev] [PATCH v2 06/10] distributor: " Anatoly Burakov
2019-10-16 12:43 3% ` [dpdk-dev] [PATCH v2 08/10] drivers/octeontx: add missing public symbol Anatoly Burakov
2019-10-16 12:43 2% ` [dpdk-dev] [PATCH v2 09/10] build: change ABI version to 20.0 Anatoly Burakov
2019-10-16 12:43 23% ` [dpdk-dev] [PATCH v2 10/10] buildtools: add ABI versioning check script Anatoly Burakov
2019-10-03 22:57 [dpdk-dev] [PATCH] crypto/armv8: enable meson build Dharmik Thakkar
2019-10-05 15:28 ` Jerin Jacob
2019-10-06 18:06 ` Thomas Monjalon
2019-10-07 10:19 ` Jerin Jacob
2019-10-08 7:18 ` Jerin Jacob
2019-10-10 4:46 ` Honnappa Nagarahalli
2019-10-10 5:24 ` Jerin Jacob
2019-10-11 19:13 ` Honnappa Nagarahalli
2019-10-11 20:02 0% ` Jerin Jacob
2019-10-11 20:14 2% ` Honnappa Nagarahalli
2019-10-11 20:33 0% ` Jerin Jacob
2019-10-09 13:38 [dpdk-dev] [PATCH v4 00/14] vhost packed ring performance optimization Marvin Liu
2019-10-15 14:30 3% ` [dpdk-dev] [PATCH v5 00/13] " Marvin Liu
2019-10-15 16:07 3% ` [dpdk-dev] [PATCH v6 " Marvin Liu
2019-10-17 7:31 0% ` Maxime Coquelin
2019-10-17 7:32 0% ` Liu, Yong
2019-10-21 15:40 3% ` [dpdk-dev] [PATCH v7 " Marvin Liu
2019-10-21 22:08 3% ` [dpdk-dev] [PATCH v8 " Marvin Liu
2019-10-24 6:49 0% ` Maxime Coquelin
2019-10-24 7:18 0% ` Liu, Yong
2019-10-24 8:24 0% ` Maxime Coquelin
2019-10-24 8:29 0% ` Liu, Yong
2019-10-24 16:08 3% ` [dpdk-dev] [PATCH v9 " Marvin Liu
2019-10-24 10:18 0% ` Maxime Coquelin
2019-10-14 17:24 [dpdk-dev] [PATCH 1/2] sched: add support for 64 bit values Jasvinder Singh
2019-10-14 17:24 ` [dpdk-dev] [PATCH 2/2] sched: modify internal structs and functions " Jasvinder Singh
2019-10-15 15:47 4% ` Dumitrescu, Cristian
2019-10-15 16:01 0% ` Singh, Jasvinder
2019-10-17 9:59 3% [dpdk-dev] DPDK Release Status Meeting 17/10/2019 Ferruh Yigit
2019-10-22 9:32 8% [dpdk-dev] [PATCH 0/8] EAL and PCI ABI changes for 19.11 David Marchand
2019-10-22 9:32 12% ` [dpdk-dev] [PATCH 1/8] eal: make lcore config private David Marchand
2019-10-22 9:32 5% ` [dpdk-dev] [PATCH 2/8] eal: remove deprecated CPU flags check function David Marchand
2019-10-22 9:32 5% ` [dpdk-dev] [PATCH 3/8] eal: remove deprecated malloc virt2phys function David Marchand
2019-10-22 9:32 4% ` [dpdk-dev] [PATCH 6/8] pci: remove deprecated functions David Marchand
2019-10-22 9:32 3% ` [dpdk-dev] [PATCH 8/8] log: hide internal log structure David Marchand
2019-10-22 16:35 0% ` Stephen Hemminger
2019-10-23 13:02 0% ` David Marchand
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 David Marchand
2019-10-23 18:54 12% ` [dpdk-dev] [PATCH v2 01/12] eal: make lcore config private David Marchand
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 02/12] eal: remove deprecated CPU flags check function David Marchand
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 03/12] eal: remove deprecated malloc virt2phys function David Marchand
2019-10-23 18:54 4% ` [dpdk-dev] [PATCH v2 06/12] pci: remove deprecated functions David Marchand
2019-10-23 18:54 8% ` [dpdk-dev] [PATCH v2 08/12] log: hide internal log structure David Marchand
2019-10-24 16:30 0% ` Thomas Monjalon
2019-10-25 9:19 0% ` Kevin Traynor
2019-10-23 18:54 3% ` [dpdk-dev] [PATCH v2 10/12] eal: deinline lcore APIs David Marchand
2019-10-23 18:54 5% ` [dpdk-dev] [PATCH v2 12/12] eal: make the global configuration private David Marchand
2019-10-23 21:10 7% ` [dpdk-dev] [PATCH v2 00/12] EAL and PCI ABI changes for 19.11 Stephen Hemminger
2019-10-24 7:32 4% ` David Marchand
2019-10-24 15:37 4% ` Stephen Hemminger
2019-10-24 16:01 4% ` David Marchand
2019-10-24 16:37 4% ` Thomas Monjalon
2019-10-25 13:55 8% ` [dpdk-dev] [PATCH v3 " David Marchand
2019-10-25 13:56 12% ` [dpdk-dev] [PATCH v3 01/12] eal: make lcore config private David Marchand
2019-10-25 15:18 0% ` Burakov, Anatoly
2019-10-25 13:56 5% ` [dpdk-dev] [PATCH v3 02/12] eal: remove deprecated CPU flags check function David Marchand
2019-10-25 13:56 5% ` [dpdk-dev] [PATCH v3 03/12] eal: remove deprecated malloc virt2phys function David Marchand
2019-10-25 13:56 4% ` [dpdk-dev] [PATCH v3 06/12] pci: remove deprecated functions David Marchand
2019-10-25 13:56 3% ` [dpdk-dev] [PATCH v3 09/12] eal: deinline lcore APIs David Marchand
2019-10-25 13:56 4% ` [dpdk-dev] [PATCH v3 11/12] eal: make the global configuration private David Marchand
2019-10-25 13:56 9% ` [dpdk-dev] [PATCH v3 12/12] doc: announce global logs struct removal from ABI David Marchand
2019-10-25 15:30 4% ` Burakov, Anatoly
2019-10-25 15:33 4% ` Thomas Monjalon
2019-10-26 18:14 4% ` Kevin Traynor
2019-10-23 1:07 9% [dpdk-dev] [RFC 0/6] Add ABI compatibility checks to the meson build Kevin Laatz
2019-10-23 1:07 3% ` [dpdk-dev] [RFC 1/6] build: enable debug info by default in meson builds Kevin Laatz
2019-10-23 1:07 22% ` [dpdk-dev] [RFC 3/6] devtools: add abi dump generation script Kevin Laatz
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 4/6] build: add meson option for abi related checks Kevin Laatz
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 5/6] build: add lib abi checks to meson Kevin Laatz
2019-10-23 1:07 14% ` [dpdk-dev] [RFC 6/6] build: add drivers " Kevin Laatz
2019-10-23 8:51 [dpdk-dev] [PATCH 0/3] net definitions fixes David Marchand
2019-10-23 12:12 ` Ferruh Yigit
2019-10-23 12:57 3% ` David Marchand
2019-10-23 13:00 0% ` David Marchand
2019-10-23 13:19 0% ` Ferruh Yigit
2019-10-24 11:44 4% [dpdk-dev] DPDK Release Status Meeting 24/10/2019 Ferruh Yigit
2019-10-24 18:09 1% [dpdk-dev] [PATCH] cmdline: prefix cmdline numeric enum Stephen Hemminger
2019-10-25 4:45 [dpdk-dev] Please stop using iopl() in DPDK Andy Lutomirski
2019-10-25 7:22 3% ` David Marchand
2019-10-25 6:20 [dpdk-dev] [PATCH 1/2] security: add anti replay window size Hemant Agrawal
2019-10-25 10:00 4% ` Ananyev, Konstantin
2019-10-25 15:56 0% ` Hemant Agrawal
2019-10-25 16:28 10% [dpdk-dev] [PATCH v7 0/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
2019-10-25 16:28 13% ` [dpdk-dev] [PATCH v7 1/4] doc: separate versioning.rst into version and policy Ray Kinsella
2019-10-25 16:28 23% ` [dpdk-dev] [PATCH v7 2/4] doc: changes to abi policy introducing major abi versions Ray Kinsella
2019-10-25 16:28 30% ` [dpdk-dev] [PATCH v7 3/4] doc: updates to versioning guide for " Ray Kinsella
2019-10-25 16:28 13% ` [dpdk-dev] [PATCH v7 4/4] doc: add maintainer for abi policy Ray Kinsella
2019-10-25 17:53 2% [dpdk-dev] [RFC v2 0/7] RFC: Support MACSEC offload in the RTE_SECURITY infrastructure Pavel Belous